CN108280858B - Linear global camera motion parameter estimation method in multi-view reconstruction

Info

Publication number
CN108280858B
Authority
CN
China
Prior art keywords
matrix
camera
absolute
representing
matching
Prior art date
Legal status
Active
Application number
CN201810085740.8A
Other languages
Chinese (zh)
Other versions
CN108280858A (en)
Inventor
秦红星
胡闯
Current Assignee
Guangzhou Jinlaojin Information Technology Co., Ltd.
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN201810085740.8A
Publication of CN108280858A
Application granted
Publication of CN108280858B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features

Abstract

The invention relates to a linear global camera motion parameter estimation method in multi-view reconstruction, belonging to the technical field of multi-view geometry and three-dimensional reconstruction and comprising the following steps. S1: input a plurality of images and match the images of the fixed scene pairwise. S2: optimize all cameras in the same coordinate system, adopting multi-view geometry theory and a global optimization method. S3: calculate the absolute rotations of all cameras. S4: calculate the absolute translation vectors of the cameras according to the epipolar geometric constraints of multi-view geometry, using the absolute rotations and the point matches. The method can handle reconstruction of large scenes, is only slightly affected by noise, is simple to compute, requires no additional information, and greatly improves the accuracy of the motion parameters.

Description

Linear global camera motion parameter estimation method in multi-view reconstruction
Technical Field
The invention belongs to the technical field of multi-view geometry and three-dimensional reconstruction, and relates to a linear global camera motion parameter estimation method in multi-view reconstruction.
Background
In computer vision, three-dimensional reconstruction is a hot research topic. Reconstructed three-dimensional models have a wide range of application scenarios, and three-dimensional reconstruction technology plays an important role in everyday life. Image-based three-dimensional reconstruction in particular is widely applied in three-dimensional game development, reconstruction of ancient cultural relics, three-dimensional digital city design, industrial prototype design, and related fields, so research on image-based three-dimensional reconstruction is one of the important directions in computer vision and graphics research. After years of research and development of computer technology, some problems, such as camera calibration and acquisition of the intrinsic parameters, are now solved well. In contrast, no entirely satisfactory method has been found for solving the extrinsic camera parameters, that is, for motion parameter estimation, especially for large-scale scenes and under the influence of noise. At present, motion parameter estimation is a focus of research in three-dimensional reconstruction.
As research has progressed, many motion parameter estimation methods have been proposed. These include linear solution methods; iterative solution methods, which repeatedly iterate to approach the true value; local optimization methods; global optimization methods, which place all cameras in the same coordinate system and solve for them jointly; independent solution methods; structure-based solution methods; and methods that estimate the motion while recovering the scene points in space. The problem is now basically treated as an optimization problem.
Although each of these motion parameter estimation methods has advantages, each also has drawbacks: the computation may be too complex to balance computational cost against accuracy, the method may be difficult to implement, it may be computationally simple but insufficiently accurate, or it may be sensitive to noise. Moreover, none of them effectively handles large-scale reconstruction involving thousands of images.
Disclosure of Invention
In view of the above, the present invention aims to provide a linear global camera motion parameter estimation method for multi-view reconstruction. To overcome the defects of existing motion parameter estimation methods, such as sensitivity to noise and low precision, the proposed linear global parameter estimation method is both noise-resistant and accurate: it computes the pairwise essential matrices based on multi-view geometry, thereby overcoming the influence of errors in the initial pairwise essential matrices, and it is linear and therefore computationally efficient.
In order to achieve the purpose, the invention provides the following technical scheme:
a linear global camera motion parameter estimation method in multi-view reconstruction comprises the following steps:
s1: matching a plurality of images of a fixed scene pairwise;
s2: optimizing all cameras in the same coordinate system by adopting a multi-view geometry-based theory and a global optimization method;
s3: calculating absolute rotation of all cameras;
s4: and calculating the absolute translation vector of the camera according to the geometric constraint relation of polar lines in the multi-view geometry by using the absolute rotation and point matching of the camera.
Further, step S1 specifically includes:
s11: matching a plurality of images of a fixed scene pairwise;
s12: and judging the matching points obtained by pairwise matching, if the interior points are judged to be insufficient or not meet the requirements, defining the matching point pairs as missing, and not reserving the matching point pairs or skipping the images corresponding to the matching point pairs.
Further, step S2 optimizes all cameras in the same coordinate system using multi-view geometry theory and a global optimization method, and calculates the absolute rotation matrices of the cameras via the objective function

$$\min_{R_1,\dots,R_n} \sum_{(i,j)} \left\| R_{ij} - R_i^T R_j \right\|_F^2$$

wherein R_i is the absolute rotation matrix, representing the orientation of camera i in the global coordinate system; R_ij is the relative rotation matrix, representing the rotation of the i-th camera relative to the j-th camera; R_j represents the orientation of camera j in the global coordinate system; R_n represents the orientation of camera n in the global coordinate system; n represents the number of images; and ‖·‖_F represents the Frobenius norm of a matrix.
Further, the objective function is solved by the following steps:
S21: collecting all relative rotations to construct a 3n × 3n symmetric matrix G;
S22: counting the number of valid relative rotation matrices in each row block of the matrix G to construct a 3n × 3n matrix D.
Further, the symmetric matrix G is

$$G=\begin{bmatrix} I & R_{12} & \cdots & R_{1n} \\ R_{21} & I & \cdots & R_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ R_{n1} & R_{n2} & \cdots & I \end{bmatrix}$$

and the matrix D is the block-diagonal matrix

$$D=\begin{bmatrix} d_1 I & & \\ & \ddots & \\ & & d_n I \end{bmatrix}$$

wherein I is the identity matrix; R_ij is the relative rotation matrix of the cameras, representing the rotation of the i-th camera relative to the j-th camera; and d_k, 1 ≤ k ≤ n, represents the number of valid relative rotations obtained in the k-th row block of the matrix G.
Further, step S4 specifically includes:
s41: calculating a translation vector of the camera according to the absolute rotation of the camera and the matching point;
s42: and according to the epipolar geometry constraint relation in the multi-view geometry, combining the matching point pairs to obtain an absolute translation vector.
Further, in step S41, the translation vectors of the cameras are obtained from the essential matrix, which is

$$E_{ij} = R_i^T (T_i - T_j) R_j$$

wherein T_i = [t_i]_× with

$$[t_i]_\times = \begin{bmatrix} 0 & -t_{i,3} & t_{i,2} \\ t_{i,3} & 0 & -t_{i,1} \\ -t_{i,2} & t_{i,1} & 0 \end{bmatrix}$$

T_i denotes the skew-symmetric cross-product matrix of t_i, and T_j denotes the skew-symmetric cross-product matrix of t_j; t_i is the absolute translation vector of the i-th camera, representing its position in global coordinates, and t_j represents the absolute translation vector of the j-th camera.
In step S42, the absolute translation vectors obtained by combining the matching point pairs satisfy

$$(p_i^m)^T R_i^T (T_i - T_j) R_j \, p_j^m = 0$$

wherein p_i^m is the m-th feature point on the i-th image and p_j^m is the m-th feature point on the j-th image.
The invention has the beneficial effects that: the linear global camera motion parameter estimation method in multi-view reconstruction provided by the invention adopts a global optimization idea on the basis of multi-view geometry; all cameras are placed in the same coordinate system and considered together, two 3n × 3n matrices are computed, and the motion parameters of the cameras in the global coordinate system are finally obtained linearly.
Drawings
To make the object, technical scheme, and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of the pinhole imaging model;
FIG. 3 is a schematic view of epipolar geometry;
FIG. 4 illustrates the estimation from images to camera motion.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention provides a linear global camera motion parameter estimation method in multi-view reconstruction, based on feature point matching and multi-view geometry theory. First, a series of images of a fixed scene are matched pairwise to obtain matching point pairs, the pairwise essential matrices are computed from the matching point pairs, and the essential matrices are decomposed to obtain the pairwise relative rotation matrices. Then, global optimization over the pairwise relative rotations is performed according to an objective function to obtain the absolute rotation matrices. Finally, a new expression of the essential matrix is derived from multi-view geometry theory, a linear equation system is constructed by combining the image feature point pairs, and the absolute translation vectors are solved. As shown in FIG. 1, the method specifically comprises the following steps:
step 1: inputting a series of images, and matching the images by a feature matching algorithm.
A series of ordered or unordered images of a fixed scene are matched pairwise with SURF feature descriptors to obtain matching points. The obtained matching points are then evaluated: once the inliers are insufficient or fail to meet the requirements, the pair is defined as missing, and the matching points are not retained or the two images are skipped.
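For illustration only, the following is a minimal sketch of this pairwise matching step using OpenCV and NumPy. The patent specifies SURF descriptors; SURF sits in OpenCV's non-free contrib module (cv2.xfeatures2d.SURF_create), so the sketch falls back to SIFT, which is interchangeable here. The ratio test, the RANSAC inlier check on the fundamental matrix, and the MIN_INLIERS threshold are illustrative assumptions, not values taken from the patent.

```python
# Hedged sketch of step 1: pairwise feature matching with an inlier check.
# SIFT stands in for SURF, and MIN_INLIERS is an assumed threshold.
import cv2
import numpy as np

MIN_INLIERS = 30  # assumed threshold for "sufficient" inliers

def match_pair(img_i, img_j):
    """Match two grayscale images; return inlier point pairs, or None (missing)."""
    detector = cv2.SIFT_create()  # patent uses SURF (cv2.xfeatures2d.SURF_create)
    kp_i, des_i = detector.detectAndCompute(img_i, None)
    kp_j, des_j = detector.detectAndCompute(img_j, None)
    if des_i is None or des_j is None:
        return None
    # Lowe's ratio test, a common screening step for SIFT/SURF matches.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_i, des_j, k=2)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < MIN_INLIERS:
        return None  # define this image pair as missing
    pts_i = np.float32([kp_i[m.queryIdx].pt for m in good])
    pts_j = np.float32([kp_j[m.trainIdx].pt for m in good])
    # Keep only RANSAC inliers of the pairwise epipolar geometry.
    _, mask = cv2.findFundamentalMat(pts_i, pts_j, cv2.FM_RANSAC, 1.0, 0.999)
    if mask is None or int(mask.sum()) < MIN_INLIERS:
        return None  # insufficient inliers: skip these two images
    keep = mask.ravel().astype(bool)
    return pts_i[keep], pts_j[keep]
```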
Step 2: calculating the essential matrices from the matching points based on multi-view theory, and decomposing each essential matrix by singular value decomposition to obtain the pairwise relative rotation matrices.
Step 201: calculating the essential matrix. The essential matrix is an important concept in multi-view geometry theory: it involves the camera extrinsic parameters and relates the physical coordinates of a spatial point P observed by the left camera to the position of the same point observed by the right camera, as shown in FIG. 2.
Step 202: decomposing the essential matrix E_ij to obtain a pairwise relative rotation. The essential matrix formula is as follows:

$$E_{ij} = [t_{ij}]_\times R_{ij}$$

wherein t_ij is the relative translation vector, representing the position of the i-th camera relative to the j-th camera; [t_ij]_× denotes the skew-symmetric cross-product matrix corresponding to t_ij; and R_ij is the relative rotation matrix, representing the rotation of the i-th camera relative to the j-th camera.

Using the principle of singular value decomposition, the essential matrix E_ij can be decomposed into a rotation matrix R_ij and a translation vector t_ij. More than one rotation is obtained, however, so screening is required. The screening principle is that the spatial point should lie in front of both cameras; as shown in FIG. 2, the spatial point P should be in front of both the left and right cameras.
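A NumPy sketch of this decomposition follows. It enumerates the four candidate (R, t) pairs produced by the SVD (the standard Hartley-Zisserman construction) and leaves the screening abstract: passes_cheirality is a hypothetical helper, assumed to triangulate one match and check that its depth is positive in both cameras.

```python
# Hedged sketch of step 202: SVD decomposition of an essential matrix into
# four candidate (R, t) pairs, plus an abstract cheirality screen.
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) decompositions of E."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (determinant +1) for the orthogonal factors.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction, defined only up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def screen_candidates(candidates, pt_i, pt_j, passes_cheirality):
    """Keep the candidate whose triangulated point lies in front of both
    cameras; passes_cheirality(R, t, pt_i, pt_j) is an assumed helper."""
    for R, t in candidates:
        if passes_cheirality(R, t, pt_i, pt_j):
            return R, t
    return None
```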
In addition, according to the epipolar geometry principle in multi-view geometry, a spatial point P and its projection points on image i and image j, denoted p_i and p_j respectively, satisfy the epipolar geometric constraint shown in FIG. 3. The specific relation is:

$$p_i^T E_{ij} \, p_j = 0$$

In this embodiment, t_i ∈ ℝ³ is defined to represent the position (focal point) of the camera in the global coordinate system, and R_i ∈ SO(3) to represent the orientation of the camera in the global coordinate system. For a pair of images i and j, the following relations are defined:

$$R_{ij} = R_i^T R_j, \qquad t_{ij} = R_i^T (t_i - t_j)$$

At the same time, it is easy to verify that

$$E_{ij} = [t_{ij}]_\times R_{ij} = R_i^T (T_i - T_j) R_j$$

with T_i = [t_i]_×.
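This identity can be checked numerically; the snippet below does so for random poses, using the fact that [Rv]_× = R [v]_× R^T for any rotation R. It is purely illustrative.

```python
# Hedged sketch: numerically verify E_ij = [t_ij]x R_ij = R_i^T (T_i - T_j) R_j
# for random camera poses.
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def random_rotation(rng):
    """Random proper rotation via QR decomposition of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
R_i, R_j = random_rotation(rng), random_rotation(rng)
t_i, t_j = rng.standard_normal(3), rng.standard_normal(3)

R_ij = R_i.T @ R_j
t_ij = R_i.T @ (t_i - t_j)
E_rel = skew(t_ij) @ R_ij                      # [t_ij]x R_ij
E_abs = R_i.T @ (skew(t_i) - skew(t_j)) @ R_j  # R_i^T (T_i - T_j) R_j
assert np.allclose(E_rel, E_abs)
```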
when all the relative rotation matrices are obtained, step 3 may be entered.
Step 3: establishing an objective function using the relative rotations, and solving it by eigenvalue decomposition to obtain the absolute camera rotation matrices.

After the processing of step 2, the relative rotations between all pairs of images are available; the absolute rotations are then solved from these relative rotations.

First, an objective function is constructed from the relation R_ij = R_i^T R_j, as follows:

$$\min_{R_1,\dots,R_n} \sum_{(i,j)} \left\| R_{ij} - R_i^T R_j \right\|_F^2$$
to solve the objective function, a 3n × 3n symmetric matrix G is constructed, containing all pairs of relative rotation matrices, and the formula is as follows:
Figure BDA0001562364270000054
a3 × 3n matrix R is also defined, such that the matrix R contains all absolute pairs of rotations, which are defined as follows:
R=[R1 R2 ... Rn]
then, counting the number of effective relative rotation matrixes in each row block in the G matrix to construct a D matrix, wherein the construction formula is as follows:
Figure BDA0001562364270000055
where I is the identity matrix and dkRepresenting the effective relative rotation in k-line blocks in matrix G
Figure BDA0001562364270000056
K is more than or equal to 1 and less than or equal to n.
It can thus be verified that G R^T = D R^T, so the three eigenvectors of the matrix D⁻¹G with eigenvalue 1 form the columns of R^T. To extract the absolute rotation matrices, a 3n × 3 matrix M containing these eigenvectors is defined as

$$M = [M_1; M_2; \ldots; M_n]$$

wherein each M_i is an estimate of the absolute rotation of the i-th camera.

Finally, singular value decomposition is applied to each M_i to obtain each absolute rotation matrix R_i.
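A compact NumPy sketch of step 3 under these definitions is given below. Since noise perturbs the three target eigenvalues away from exactly 1, the sketch takes the eigenvectors of D⁻¹G with the three largest eigenvalues, then projects each 3 × 3 block of M onto SO(3) with an SVD; the recovered rotations are determined only up to one global rotation common to all cameras. The input format (a dict of pairwise rotations) is an assumption made for illustration.

```python
# Hedged sketch of step 3: spectral solution of the absolute rotations from
# pairwise relative rotations, following the G/D construction in the text.
import numpy as np

def absolute_rotations(rel_rot, n):
    """rel_rot: dict mapping (i, j), i < j, to the 3x3 relative rotation R_ij;
    n: number of cameras. Returns a list of n absolute rotation estimates."""
    G = np.zeros((3 * n, 3 * n))
    d = np.ones(n)  # valid blocks per row of G, counting the diagonal identity
    for k in range(n):
        G[3*k:3*k+3, 3*k:3*k+3] = np.eye(3)
    for (i, j), R_ij in rel_rot.items():
        G[3*i:3*i+3, 3*j:3*j+3] = R_ij
        G[3*j:3*j+3, 3*i:3*i+3] = R_ij.T  # symmetry: R_ji = R_ij^T
        d[i] += 1
        d[j] += 1
    D_inv = np.diag(np.repeat(1.0 / d, 3))
    # Take eigenvectors for the three largest eigenvalues (near 1 under noise).
    w, V = np.linalg.eig(D_inv @ G)
    M = np.real(V[:, np.argsort(-np.real(w))[:3]])  # the 3n x 3 matrix M
    rotations = []
    for k in range(n):
        # Project each 3x3 block M_k onto SO(3) via SVD.
        U, _, Vt = np.linalg.svd(M[3*k:3*k+3, :])
        R = U @ Vt
        if np.linalg.det(R) < 0:
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        rotations.append(R)
    return rotations
```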
Step 4: according to multi-view geometry theory, constructing a linear equation system using the absolute rotation matrices and the matching point pairs, and solving it to obtain the absolute camera translation vectors.

With the absolute rotation R_i of each camera obtained in step 3, the method turns to solving for the absolute translation vectors. First, an expression of the essential matrix containing only rotations and translations is found:

$$E_{ij} = R_i^T (T_i - T_j) R_j$$

wherein T_i = [t_i]_× with

$$[t_i]_\times = \begin{bmatrix} 0 & -t_{i,3} & t_{i,2} \\ t_{i,3} & 0 & -t_{i,1} \\ -t_{i,2} & t_{i,1} & 0 \end{bmatrix}$$
Further, according to the epipolar geometric constraint in multi-view geometry, the equations obtained from the matching point pairs are combined:

$$(p_i^m)^T R_i^T (T_i - T_j) R_j \, p_j^m = 0$$

wherein p_i^m is the m-th feature point on the i-th image and p_j^m is the m-th feature point on the j-th image.
Finally, the linear equation system is solved by least squares: the absolute translation vector t_i of each camera is obtained from the eigenvectors of the 4 smallest eigenvalues of a 3n × 3n matrix.
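A NumPy sketch of this final step, under the conventions above, follows. Each matching pair (p_i^m, p_j^m) in normalized homogeneous camera coordinates contributes one row that is linear in the stacked vector t = (t_1, ..., t_n), via the triple-product identity a^T [t]_× b = (b × a)^T t; the eigenvectors of A^T A for its 4 smallest eigenvalues then span the solution space (three degrees of freedom for the global translation plus one for scale). Selecting the physical solution within that span, for example by fixing the gauge and enforcing positive depths, is left abstract here.

```python
# Hedged sketch of step 4: linear recovery of the absolute translations from
# the epipolar constraints (p_i^m)^T R_i^T (T_i - T_j) R_j p_j^m = 0.
import numpy as np

def translation_basis(matches, rotations):
    """matches: dict (i, j) -> (pts_i, pts_j), arrays of matched points in
    normalized homogeneous camera coordinates (shape N x 3);
    rotations: list of the n absolute rotations R_i from step 3.
    Returns the 3n x 4 basis spanning the stacked translation solutions."""
    n = len(rotations)
    rows = []
    for (i, j), (pts_i, pts_j) in matches.items():
        for p_i, p_j in zip(pts_i, pts_j):
            a = rotations[i] @ p_i   # R_i p_i
            b = rotations[j] @ p_j   # R_j p_j
            c = np.cross(b, a)       # coefficient row: a^T [t]x b = (b x a)^T t
            row = np.zeros(3 * n)
            row[3*i:3*i+3] = c       # coefficient of t_i
            row[3*j:3*j+3] = -c      # coefficient of t_j
            rows.append(row)
    A = np.vstack(rows)
    # Eigenvectors of the 3n x 3n matrix A^T A for the 4 smallest eigenvalues
    # span the solutions (global-translation and scale ambiguities).
    _, V = np.linalg.eigh(A.T @ A)   # eigenvalues in ascending order
    return V[:, :4]                  # each column reshapes to n stacked t_i
```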
Thus the motion parameters of each camera, namely the absolute rotation matrix R_i and the absolute translation vector t_i, are estimated; the estimation from images to camera motion is illustrated in FIG. 4.
Finally, it is noted that the above-mentioned preferred embodiments illustrate rather than limit the invention, and that, although the invention has been described in detail with reference to the above-mentioned preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims.

Claims (1)

1. A linear global camera motion parameter estimation method in multi-view reconstruction, characterized by comprising the following steps:
s1: inputting a plurality of images, and matching the plurality of images of the fixed scene pairwise;
s2: optimizing all cameras in the same coordinate system by adopting a multi-view geometry-based theory and a global optimization method;
s3: calculating absolute rotation of all cameras;
s4: calculating an absolute translation vector of the camera according to a polar line geometric constraint relation in multi-view geometry by using the absolute rotation and matching point pairs of the camera;
step S1 specifically includes:
s11: matching a plurality of images of a fixed scene pairwise;
s12: judging matching point pairs obtained through pairwise matching, if the matching point pairs are judged to be insufficient or not meet the requirements, defining the matching point pairs as missing, and not reserving the matching point pairs or skipping images corresponding to the matching point pairs;
step S2 optimizes all cameras in the same coordinate system using multi-view geometry theory and a global optimization method, and calculates the absolute rotation matrices of the cameras via the objective function

$$\min_{R_1,\dots,R_n} \sum_{(i,j)} \left\| R_{ij} - R_i^T R_j \right\|_F^2$$

wherein R_i is the absolute rotation matrix, representing the orientation of camera i in the global coordinate system; R_ij is the relative rotation matrix, representing the rotation of the i-th camera relative to the j-th camera; R_j represents the orientation of camera j in the global coordinate system; R_n represents the orientation of camera n in the global coordinate system; n represents the number of images; and ‖·‖_F represents the Frobenius norm of a matrix;
the objective function is calculated as follows:
s21: all relative rotations are collected to construct a 3n multiplied by 3n symmetric matrix G;
s22: counting the number of effective relative rotation matrixes of each row block in the matrix G, and constructing a matrix D of 3n multiplied by 3 n;
the symmetric matrix G is

$$G=\begin{bmatrix} I & R_{12} & \cdots & R_{1n} \\ R_{21} & I & \cdots & R_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ R_{n1} & R_{n2} & \cdots & I \end{bmatrix}$$

and the matrix D is the block-diagonal matrix

$$D=\begin{bmatrix} d_1 I & & \\ & \ddots & \\ & & d_n I \end{bmatrix}$$

wherein I is the identity matrix; R_ij is the relative rotation matrix of the cameras, representing the rotation of the i-th camera relative to the j-th camera; and d_k represents the number of valid relative rotations obtained in the k-th row block of the matrix G, with 1 ≤ k ≤ n;
step S4 specifically includes:
s41: calculating a translation vector of the camera according to the absolute rotation of the camera and the matching point pair;
s42: according to the epipolar geometry constraint relation in the multi-view geometry, combining the matching point pairs to obtain an absolute translation vector;
in step S41, the translation vectors of the cameras are obtained from the essential matrix, which is

$$E_{ij} = R_i^T (T_i - T_j) R_j$$

wherein T_i = [t_i]_× with

$$[t_i]_\times = \begin{bmatrix} 0 & -t_{i,3} & t_{i,2} \\ t_{i,3} & 0 & -t_{i,1} \\ -t_{i,2} & t_{i,1} & 0 \end{bmatrix}$$

T_i denotes the skew-symmetric cross-product matrix of t_i, and T_j denotes the skew-symmetric cross-product matrix of t_j; t_i is the absolute translation vector of the i-th camera, representing its position in global coordinates, and t_j represents the absolute translation vector of the j-th camera;
in step S42, the absolute translation vectors obtained by combining the matching point pairs satisfy

$$(p_i^m)^T R_i^T (T_i - T_j) R_j \, p_j^m = 0$$

wherein p_i^m is the m-th feature point on the i-th image and p_j^m is the m-th feature point on the j-th image.
CN201810085740.8A 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction Active CN108280858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810085740.8A CN108280858B (en) 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810085740.8A CN108280858B (en) 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction

Publications (2)

Publication Number Publication Date
CN108280858A CN108280858A (en) 2018-07-13
CN108280858B (en) 2022-02-01

Family

ID=62805627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810085740.8A Active CN108280858B (en) 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction

Country Status (1)

Country Link
CN (1) CN108280858B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166171B (en) * 2018-08-09 2022-05-13 西北工业大学 Motion recovery structure three-dimensional reconstruction method based on global and incremental estimation
CN109741403B (en) * 2018-12-29 2023-04-07 重庆邮电大学 Camera translation calibration method based on global linearity
CN110782524B (en) * 2019-10-25 2023-05-23 重庆邮电大学 Indoor three-dimensional reconstruction method based on panoramic image
CN111161355B (en) * 2019-12-11 2023-05-09 上海交通大学 Multi-view camera pose and scene pure pose resolving method and system
CN111724466B (en) * 2020-05-26 2023-09-26 同济大学 3D reconstruction optimization method and device based on rotation matrix
CN111986247B (en) * 2020-08-28 2023-10-27 中国海洋大学 Hierarchical camera rotation estimation method
CN114170296B (en) * 2021-11-10 2022-10-18 埃洛克航空科技(北京)有限公司 Rotary average estimation method and device based on multi-mode comprehensive decision
CN114972536B (en) * 2022-05-26 2023-05-09 中国人民解放军战略支援部队信息工程大学 Positioning and calibrating method for aviation area array swing scanning type camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104019799A (en) * 2014-05-23 2014-09-03 北京信息科技大学 Relative orientation method by using optimization of local parameter to calculate basis matrix
CN104200523A (en) * 2014-09-11 2014-12-10 中国科学院自动化研究所 Large-scale scene three-dimensional reconstruction method for fusion of additional information
US9619892B2 (en) * 2013-05-13 2017-04-11 Electronics And Telecommunications Research Institute Apparatus and method for extracting movement path of mutual geometric relationship fixed camera group
CN106981083A (en) * 2017-03-22 2017-07-25 大连理工大学 The substep scaling method of Binocular Stereo Vision System camera parameters

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076754A1 (en) * 2015-09-11 2017-03-16 Evergig Music S.A.S.U. Systems and methods for matching two or more digital multimedia files

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619892B2 (en) * 2013-05-13 2017-04-11 Electronics And Telecommunications Research Institute Apparatus and method for extracting movement path of mutual geometric relationship fixed camera group
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104019799A (en) * 2014-05-23 2014-09-03 北京信息科技大学 Relative orientation method by using optimization of local parameter to calculate basis matrix
CN104200523A (en) * 2014-09-11 2014-12-10 中国科学院自动化研究所 Large-scale scene three-dimensional reconstruction method for fusion of additional information
CN106981083A (en) * 2017-03-22 2017-07-25 大连理工大学 The substep scaling method of Binocular Stereo Vision System camera parameters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Robust Rotation and Translation Estimation in Multiview Reconstruction";Daniel Martinec等;《2007 IEEE Conference on Computer Vision and Pattern Recognition》;20070716;全文 *

Also Published As

Publication number Publication date
CN108280858A (en) 2018-07-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240402

Address after: No. 07, Zone E, Shenxinda Zhongchuang Space, No. 2, 3rd Floor, Unit 1, Building 1, No. 252, Zhudu Avenue, Jiang'an Town, Yibin City, Sichuan Province, 644200

Patentee after: Yibin Zhibohui Technology Co.,Ltd.

Country or region after: China

Address before: 400065 Chongqing Nan'an District huangjuezhen pass Chongwen Road No. 2

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240410

Address after: 812, No. 71 Jinyu West Street, Dalong Street, Panyu District, Guangzhou City, Guangdong Province, 511400

Patentee after: Guangzhou Jinlaojin Information Technology Co.,Ltd.

Country or region after: China

Address before: No. 07, Zone E, Shenxinda Zhongchuang Space, No. 2, 3rd Floor, Unit 1, Building 1, No. 252, Zhudu Avenue, Jiang'an Town, Yibin City, Sichuan Province, 644200

Patentee before: Yibin Zhibohui Technology Co.,Ltd.

Country or region before: China