CN116645420A - Multi-camera space-time synchronization method based on dynamic capturing equipment


Info

Publication number
CN116645420A
Authority
CN
China
Prior art keywords
camera
data
acquisition
rgb
subsystems
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310375395.2A
Other languages
Chinese (zh)
Inventor
钱骁
尹子鳗
余杭
林赟
陈风云
蔡通
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xinqing Tech Co ltd
Original Assignee
Beijing Xinqing Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xinqing Tech Co ltd
Priority to CN202310375395.2A
Publication of CN116645420A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/04 Synchronising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a multi-camera space-time synchronization method based on motion capture equipment, comprising the following steps: S1, constructing the acquisition scene; S2, performing time synchronization; S3, performing spatial synchronization; S4, testing the space-time synchronization. Compared with the prior art, the method has the following advantages: 1. By analyzing the strengths and weaknesses of mainstream human pose estimation data acquisition systems, the invention designs and builds a low-cost, highly reliable data acquisition system consisting of a marker-based human motion capture subsystem, an RGB-D multi-view camera acquisition subsystem, and a synchronization and response host. 2. For this data acquisition system, the invention proposes and implements a space-time synchronization method that achieves very high reliability.

Description

Multi-camera space-time synchronization method based on dynamic capturing equipment
Technical Field
The invention relates to the technical field of human pose estimation, and in particular to a multi-camera space-time synchronization method based on motion capture equipment.
Background
With the growing interest of the computer vision community in human pose estimation, related work has advanced rapidly, and the concrete methods for performing pose estimation have matured. The maturity of data training and model-building methods makes larger-scale, more accurate, and more complex pose estimation datasets a necessity. At the same time, to build large-scale datasets that are more complete and better suited to two-dimensional and three-dimensional human pose estimation algorithms, data acquisition technology has developed quickly and produced a number of large open-source datasets, such as the COCO dataset for two-dimensional human pose estimation and the Human3.6M dataset for three-dimensional human pose estimation.
However, owing to limitations of acquisition technology and other factors, these large public datasets still cannot keep pace with rapidly developing human pose estimation algorithms, so collecting data with the technology currently available has also become a necessity.
The current mainstream data acquisition approaches are dense reconstruction, multi-view fitting, and inertial sensor systems, and all three have limitations. A classic dense reconstruction example is the acquisition of the CMU Panoptic dataset, in which hundreds of RGB cameras build dense fields of human surface and low-level semantic information and depth cameras are added to obtain the three-dimensional ground truth of the human skeleton. Although the skeleton ground truth obtained by dense reconstruction is accurate, the scheme relies on a very large number of visual acquisition devices, so it is expensive, engineering-heavy, limited in acquisition space, and comparatively failure-prone. Multi-view fitting is currently the most mainstream acquisition scheme: several cameras synchronously acquire RGB images, a three-dimensional human pose is estimated independently from each monocular view, and optimization methods such as spatial smoothing are used to obtain more accurate key point positions. Multi-view fitting is inexpensive, and because few cameras are used the system failure rate is low; however, since the underlying information of the multi-view estimate comes from monocular estimation, the reconstructed three-dimensional key point positions have low accuracy and poor reliability. Inertial sensors exploit the law of inertia and mainly detect and measure acceleration and rotational motion; in the motion capture field, these properties make them one of the marker-based capture schemes. During acquisition, suitable sensors are attached to the body parts of the subject to be measured, and the data they transmit back are recorded. The inertial sensor scheme is not restricted by the scene, neither by being indoors nor by the size of the venue, and its accuracy is relatively high, but post-processing of the data is complex, the produced ground truth is prone to low reliability, and the sensors are expensive: if one sensor is placed on each joint, the cost and complexity of the system grow rapidly with the number of collection points.
In summary, the existing data acquisition technologies for human pose estimation datasets have the following shortcomings: 1. The acquisition scene is limited. Because motion capture equipment is used, the designed human pose data acquisition system is still restricted to indoor acquisition, and its ability to generalize across scenes is low. 2. The procedure for achieving space-time synchronization is not simple enough and still depends on support from existing theory.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome these difficulties and provide a multi-camera space-time synchronization method based on motion capture equipment that addresses: 1. Time synchronization. Because the complete data acquisition system consists of an RGB-D multi-view camera acquisition subsystem, a human motion capture subsystem, and a response host, the information of the subsystems is not synchronized, their acquisition rates differ, and the data rates are difficult to match when operation is unstable. 2. Spatial synchronization. The working space of the motion capture subsystem is not the same as that of the multi-view RGB-D camera subsystem, so each subsystem has its own coordinate system, and the two reference coordinate systems must be physically unified.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a multi-camera space-time synchronization method based on dynamic capturing equipment specifically comprises the following steps:
s1, constructing an acquisition scene
Starting from the shortcomings of the three mainstream schemes for acquiring three-dimensional human key point information at the present stage, the system is designed to be low-cost, simple to build, and reliable in the information it acquires while avoiding those shortcomings. After the design is completed, the acquisition site is laid out and the acquisition scene is built. For the initial build, a 5 m x 5 m site is selected according to the constraints of the actual venue, and the motion capture devices and the RGB-D cameras are arranged at equal intervals around the site so as to capture full-angle three-dimensional human key point data. To acquire this information more completely, a number of experiments are carried out on the exact placement and orientation of the motion capture devices according to the size of the actual site, in order to find the positions that yield the most accurate and reliable data. To acquire RGB and depth image data more accurately, each RGB-D camera is placed, with reference to typical camera placement during shooting, so that its optical center is level with the center of the human body.
S2, performing time synchronization
In the built system, the internal information of the motion capture subsystem (composed of several motion capture devices) is synchronized, and the cameras within the RGB-D multi-camera subsystem are synchronized with one another, but the two subsystems are not synchronized with each other; the problem is how to match their data when the acquisition rates differ and the data rates are unstable. To synchronize the information of the two subsystems as far as possible, the absolute timestamps of the data acquired by each subsystem, referenced to the motherboard clock of the response host, are first obtained; when acquisition begins, a start command is sent to both subsystems simultaneously from two threads, the time at which each subsystem actually starts is recorded, and the difference between the two absolute times is taken as the relative delay between the subsystems and recorded. A redundancy matching method is then used to pair the data. At the same time, to address the unstable data transmission rate, a data acquisition expansion card is installed on the motherboard, which provides stable data input to the system while keeping the cost down.
S3, performing space synchronization
Since the multi-view RGB-D camera subsystem and the motion capture subsystem each have their own coordinate system for measuring the position of the captured subject, the image data collected by the RGB-D cameras must be fused with the data collected by the motion capture devices, and the two coordinate systems must be converted into a single common coordinate system; this is the spatial synchronization of the two subsystems. Camera calibration is a long-standing and important problem in computer vision, involving imaging geometry, lens distortion, the homography matrix, nonlinear optimization, and so on. Self-calibration and calibration-board calibration are the commonly used approaches; this system adopts calibration-board calibration because its feature points are easy to detect and it is stable. When the acquisition scene is built in the previous step, the origin of the motion capture subsystem is placed at the center of the floor area; following the camera calibration principle, a checkerboard calibration board is chosen for the multi-camera calibration, and it is used to make the world coordinate systems of the two subsystems coincide in a physical sense, which completes the spatial synchronization.
S4, testing space-time synchronization
After the data acquisition scene has been built, the acquisition equipment has been set up, and the time and spatial synchronization of the motion capture subsystem and the RGB-D multi-camera subsystem have been completed, the whole system is put to the test. The subject wears a dedicated motion capture suit, or optical reflective markers are attached directly to the human key points to be captured, so that the motion capture devices can collect key point data. After the equipment has been debugged, the subject wearing the markers performs continuous, large-amplitude movements inside the acquisition scene, which gives the subsequent verification higher credibility. A tester observes the three-dimensional human pose captured by the motion capture system in real time on the host, and at several different intervals captures the frames returned by the motion capture system and the RGB-D system to check whether the data returned by the two subsystems correspond to the same instant, thereby verifying the correctness and reliability of the space-time synchronization established in the previous steps.
Compared with the prior art, the invention has the following advantages:
1. By analyzing the strengths and weaknesses of the mainstream human pose estimation data acquisition systems, the invention designs and builds a low-cost, highly reliable data acquisition system consisting of a marker-based human motion capture subsystem, an RGB-D multi-view camera acquisition subsystem, and a synchronization and response host.
2. For this data acquisition system, the invention proposes and implements a space-time synchronization method that achieves very high reliability.
Drawings
Fig. 1 is a schematic view of radial distortion in the present invention (from left to right: no distortion, barrel distortion, and pincushion distortion).
Fig. 2 is a schematic view of tangential distortion in the present invention (the left part illustrates the tangential distortion of a lens).
Fig. 3 is a diagram of the relationship between the camera coordinate system and the world coordinate system of the present invention.
Detailed Description
The invention will be described in further detail with reference to the following embodiments and the accompanying drawings.
A multi-camera space-time synchronization method based on motion capture equipment, specifically comprising the following steps:
s1, constructing an acquisition scene
Starting from the shortcomings of the three mainstream schemes for acquiring three-dimensional human key point information at the present stage, the system is designed to be low-cost, simple to build, and reliable in the information it acquires while avoiding those shortcomings. After the design is completed, the acquisition site is laid out and the acquisition scene is built. For the initial build, a 5 m x 5 m site is selected according to the constraints of the actual venue, and the motion capture devices and the RGB-D cameras are arranged at equal intervals around the site so as to capture full-angle three-dimensional human key point data. To acquire this information more completely, a number of experiments are carried out on the exact placement and orientation of the motion capture devices according to the size of the actual site, in order to find the positions that yield the most accurate and reliable data. To acquire RGB and depth image data more accurately, each RGB-D camera is placed, with reference to typical camera placement during shooting, so that its optical center is level with the center of the human body.
S2, performing time synchronization
In the built system, the internal information of the motion capture subsystem (composed of several motion capture devices) is synchronized, and the cameras within the RGB-D multi-camera subsystem are synchronized with one another, but the two subsystems are not synchronized with each other; the problem is how to match their data when the acquisition rates differ and the data rates are unstable. To synchronize the information of the two subsystems as far as possible, the absolute timestamps of the data acquired by each subsystem, referenced to the motherboard clock of the response host, are first obtained; when acquisition begins, a start command is sent to both subsystems simultaneously from two threads, the time at which each subsystem actually starts is recorded, and the difference between the two absolute times is taken as the relative delay between the subsystems and recorded. A redundancy matching method is then used to pair the data. At the same time, to address the unstable data transmission rate, a data acquisition expansion card is installed on the motherboard, which provides stable data input to the system while keeping the cost down.
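By way of illustration only, the following Python sketch shows one plausible implementation of the timestamp handling described above: both subsystems are started from two threads, their start times against the host clock are recorded to obtain the relative delay, and frames are then paired by nearest timestamp, which is one reading of the redundancy matching step. The function names, frame rates, and the 20 ms tolerance are illustrative assumptions, not part of the disclosed system.

    import bisect
    import threading
    import time

    def start_subsystem(name, start_times):
        # Stand-in for sending the real start command to a subsystem.
        start_times[name] = time.monotonic()  # absolute time on the host clock

    def measure_relative_delay():
        start_times = {}
        threads = [
            threading.Thread(target=start_subsystem, args=("mocap", start_times)),
            threading.Thread(target=start_subsystem, args=("rgbd", start_times)),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Positive value means the RGB-D subsystem started later than the mocap one.
        return start_times["rgbd"] - start_times["mocap"]

    def match_frames(rgbd_stamps, mocap_stamps, delay, tolerance=0.02):
        # Pair every RGB-D timestamp with the nearest mocap timestamp.
        # Timestamps are seconds relative to each subsystem's own start;
        # `delay` shifts the RGB-D stream onto the mocap time axis.
        mocap_sorted = sorted(mocap_stamps)
        pairs = []
        for tr in rgbd_stamps:
            t = tr + delay
            i = bisect.bisect_left(mocap_sorted, t)
            candidates = [j for j in (i - 1, i) if 0 <= j < len(mocap_sorted)]
            if not candidates:
                continue
            best = min(candidates, key=lambda j: abs(mocap_sorted[j] - t))
            if abs(mocap_sorted[best] - t) <= tolerance:
                pairs.append((t, mocap_sorted[best]))
        return pairs

    if __name__ == "__main__":
        delay = measure_relative_delay()
        mocap = [i / 120.0 for i in range(240)]  # e.g. a 120 Hz mocap stream
        rgbd = [i / 30.0 for i in range(60)]     # e.g. a 30 Hz RGB-D stream
        print(len(match_frames(rgbd, mocap, delay)), "matched frame pairs")

Because the motion capture stream runs faster than the RGB-D stream, it is treated here as the redundant stream from which the closest sample is kept for every RGB-D frame.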
S3, performing space synchronization
Since the multi-view RGB-D camera subsystem and the motion capture subsystem each have their own coordinate system for measuring the position of the captured subject, the image data collected by the RGB-D cameras must be fused with the data collected by the motion capture devices, and the two coordinate systems must be converted into a single common coordinate system; this is the spatial synchronization of the two subsystems. Camera calibration is a long-standing and important problem in computer vision, involving imaging geometry, lens distortion, the homography matrix, nonlinear optimization, and so on. Self-calibration and calibration-board calibration are the commonly used approaches; this system adopts calibration-board calibration because its feature points are easy to detect and it is stable. When the acquisition scene is built in the previous step, the origin of the motion capture subsystem is placed at the center of the floor area; following the camera calibration principle, a checkerboard calibration board is chosen for the multi-camera calibration, and it is used to make the world coordinate systems of the two subsystems coincide in a physical sense, which completes the spatial synchronization.
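By way of illustration only, the following OpenCV sketch shows one way the calibration-board step described above could be carried out, under the assumption that a checkerboard is laid flat at the origin of the motion capture subsystem so that the board frame serves as the shared world frame, and that each camera's intrinsic matrix and distortion coefficients are already known (for example from a prior calibration run). The board geometry (9 x 6 inner corners, 30 mm squares) is only an example.

    import cv2
    import numpy as np

    def extrinsics_from_checkerboard(image, camera_matrix, dist_coeffs,
                                     pattern_size=(9, 6), square_size=0.03):
        # Return (R, t) mapping world (board / mocap) coordinates to this camera.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            raise RuntimeError("checkerboard not detected")
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

        # 3D corner positions in the board (= world) frame, Z = 0 on the floor plane.
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0],
                               0:pattern_size[1]].T.reshape(-1, 2) * square_size

        ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("solvePnP failed")
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec  # X_cam = R @ X_world + t

Running this once per RGB-D camera expresses every camera pose in the motion capture (world) frame, which is the physical coincidence of the two world coordinate systems described above.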
S4, testing space-time synchronization
After the data acquisition scene has been built, the acquisition equipment has been set up, and the time and spatial synchronization of the motion capture subsystem and the RGB-D multi-camera subsystem have been completed, the whole system is put to the test. The subject wears a dedicated motion capture suit, or optical reflective markers are attached directly to the human key points to be captured, so that the motion capture devices can collect key point data. After the equipment has been debugged, the subject wearing the markers performs continuous, large-amplitude movements inside the acquisition scene, which gives the subsequent verification higher credibility. A tester observes the three-dimensional human pose captured by the motion capture system in real time on the host, and at several different intervals captures the frames returned by the motion capture system and the RGB-D system to check whether the data returned by the two subsystems correspond to the same instant, thereby verifying the correctness and reliability of the space-time synchronization established in the previous steps.
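By way of illustration only, the following sketch mirrors the timestamp side of the spot check described above: at a few randomly chosen points, the frame pair that the two subsystems returned for the same nominal instant is taken and the aligned timestamps are compared against a threshold. The data structure and the 20 ms threshold are illustrative assumptions; in the actual test the tester also compares the captured pose with the RGB-D frames visually.

    import random

    def spot_check(matched_pairs, n_checks=10, threshold=0.02):
        # matched_pairs: list of (rgbd_time, mocap_time) on a common time axis.
        samples = random.sample(matched_pairs, min(n_checks, len(matched_pairs)))
        worst = max(abs(tr - tm) for tr, tm in samples)
        print(f"worst residual over {len(samples)} checks: {worst * 1000:.1f} ms")
        return worst <= threshold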
Considering that the mainstream three-dimensional key point acquisition schemes at the present stage all have drawbacks, an alternative scheme is chosen for acquiring three-dimensional human key point information. To avoid those drawbacks and find an effective alternative for each of them, and considering the current rapid development of motion capture equipment, motion capture devices are selected for acquiring three-dimensional human key point data. To obtain accurate three-dimensional key point data, several motion capture devices are combined into a motion capture system, and to obtain omnidirectional, multi-angle image data of the subject in the acquisition scene, several RGB-D cameras are arranged around it. This motion-capture-based acquisition scheme is inexpensive, the space of the acquisition scene is less restricted, and in principle the side length of the actual site can be chosen as needed. When acquiring three-dimensional human key point information, a complete acquisition site is first built according to the design, and the motion capture devices and RGB-D cameras are arranged around the site to complete the whole data acquisition system. Time synchronization among the motion capture devices is handled automatically, so the remaining question is information synchronization among the RGB-D cameras. Each RGB-D camera acquires RGB and depth images from a particular angle; to synchronize the images captured by the different RGB-D cameras in time, so that the RGB and depth images of all viewing angles at a given instant can be obtained during later processing, the 3.5 mm sync-in and sync-out ports provided on the RGB-D cameras are used to link the devices together, after which they can be coordinated through software. With the devices inside the motion capture system and inside the RGB-D multi-camera system synchronized, the remaining problems are synchronizing information between the two subsystems and matching their data when the acquisition rates differ and the data rates are unstable. To synchronize the data acquired by the two subsystems, the absolute time difference is used to control when each subsystem starts acquiring; at the same time, to guarantee stable transmission of the acquired data while keeping the cost as low as possible, a data acquisition expansion card assists the data transfer. Stable data input can then be tested on this hardware configuration.
Any theoretical physical model is an approximation of a real object under particular assumptions, so errors arise in practical applications, and the imaging model of a camera is no exception. In practice, camera imaging errors mainly come from two sources: first, errors introduced in sensor manufacture, for example imaging cells that are not square or a sensor that is mounted askew; second, errors introduced in lens manufacture and mounting: lenses generally exhibit nonlinear radial distortion, and a lens that is not mounted parallel to the camera sensor also produces tangential distortion.
Light passing through the edge of the lens is prone to radial distortion: the farther a ray is from the lens center, the larger the distortion. As shown in Fig. 1, radial distortion spreads outward from a center and grows toward the edges. The distortion is clearly a nonlinear function of the distance and, as described in many references, can be approximated by a polynomial.
Here x, y are the normalized image coordinates, i.e., the coordinate origin has been moved to the principal point and the pixel coordinates have been divided by the focal length. With k1, k2, k3 the radial distortion coefficients and r^2 = x^2 + y^2, the radial distortion can be written as x_distorted = x (1 + k1 r^2 + k2 r^4 + k3 r^6) and y_distorted = y (1 + k1 r^2 + k2 r^4 + k3 r^6). When the camera sensor and the lens are not parallel, the angle between them shifts the position at which light arriving through the lens hits the image sensor, producing tangential distortion, as shown in Fig. 2.
Again with x, y the normalized image coordinates, p1, p2 the tangential distortion coefficients, and r^2 = x^2 + y^2, the tangential distortion can be written as x_distorted = x + 2 p1 x y + p2 (r^2 + 2 x^2) and y_distorted = y + p1 (r^2 + 2 y^2) + 2 p2 x y.
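By way of illustration only, the combined distortion model written out above can be applied to normalized image coordinates as follows; k1, k2, k3 and p1, p2 are the radial and tangential coefficients, respectively.

    def distort(x, y, k1, k2, k3, p1, p2):
        # Apply radial and tangential (Brown-Conrady) distortion to the
        # normalized image coordinates (x, y).
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return x_d, y_d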
The relations above are expressed in the camera coordinate system; when the position of an object in the imaging plane is needed with respect to the world coordinate system, a coordinate transformation must be performed, as shown in Fig. 3.
The relationship between a point P in the world coordinate system and its image coordinates can be expressed with the standard pinhole projection: s [u, v, 1]^T = K [R | t] [X_w, Y_w, Z_w, 1]^T, where K is the intrinsic matrix, R and t are the rotation and translation from the world frame to the camera frame, and s is a scale factor.
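By way of illustration only, this projection can be computed as follows; the world point, K, R, and t are assumed to be given (for example from the extrinsic step sketched earlier).

    import numpy as np

    def project(point_world, K, R, t):
        # Map a 3D world point to pixel coordinates (u, v) through [R | t] and K.
        p_cam = R @ np.asarray(point_world, dtype=float) + t  # world -> camera frame
        u, v, w = K @ p_cam                                   # homogeneous pixel coordinates
        return u / w, v / w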
Camera calibration estimates the intrinsic and extrinsic parameters, approximating the actual physical imaging relationship through a theoretical mathematical model and optimization. Because the spatial coordinate systems of the motion capture subsystem and the RGB-D multi-camera subsystem are not unified, spatial synchronization between the two subsystems is also required: the RGB-D multi-camera subsystem is calibrated with the calibration-board method so that the coordinate systems of the two subsystems are unified.
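By way of illustration only, the intrinsic part of this calibration could be obtained with OpenCV from several checkerboard views as sketched below; the corner detection is the same as in the earlier listing, and the board geometry is again only an example. The recovered camera matrix and distortion coefficients then feed the extrinsic (spatial synchronization) step.

    import cv2
    import numpy as np

    def calibrate_intrinsics(corner_lists, image_size,
                             pattern_size=(9, 6), square_size=0.03):
        # corner_lists: detected corner arrays, one per view; image_size: (w, h).
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0],
                               0:pattern_size[1]].T.reshape(-1, 2) * square_size
        obj_points = [objp] * len(corner_lists)
        rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
            obj_points, corner_lists, image_size, None, None)
        return rms, camera_matrix, dist_coeffs  # rms reprojection error in pixels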
The technical scheme of the invention is obtained by analyzing the strengths and weaknesses of the mainstream human pose data acquisition schemes and remedying the shortcomings of each acquisition system one by one, and it is designed on the basis of existing theoretical support. Other alternatives studied so far cannot avoid the problems of the mainstream human pose data acquisition systems and cannot produce results that are both highly reliable and theoretically supported, so they cannot achieve the effect of this scheme.
The invention and its embodiments have been described above without limitation. If a person of ordinary skill in the art, informed by this disclosure, devises embodiments similar to this technical solution without creative design and without departing from the gist of the invention, those embodiments fall within the protection scope of the present invention.

Claims (1)

1. A multi-camera space-time synchronization method based on motion capture equipment, characterized in that its specific implementation comprises the following steps:
s1, constructing an acquisition scene
Starting from the shortcomings of the three mainstream schemes for acquiring three-dimensional human key point information at the present stage, the system is designed to be low-cost, simple to build, and reliable in the information it acquires while avoiding those shortcomings. After the design is completed, the acquisition site is laid out and the acquisition scene is built. For the initial build, a 5 m x 5 m site is selected according to the constraints of the actual venue, and the motion capture devices and the RGB-D cameras are arranged at equal intervals around the site so as to capture full-angle three-dimensional human key point data. To acquire this information more completely, a number of experiments are carried out on the exact placement and orientation of the motion capture devices according to the size of the actual site, in order to find the positions that yield the most accurate and reliable data. To acquire RGB and depth image data more accurately, each RGB-D camera is placed, with reference to typical camera placement during shooting, so that its optical center is level with the center of the human body.
S2, performing time synchronization
In the built system, the internal information of the motion capture subsystem (composed of several motion capture devices) is synchronized, and the cameras within the RGB-D multi-camera subsystem are synchronized with one another, but the two subsystems are not synchronized with each other; the problem is how to match their data when the acquisition rates differ and the data rates are unstable. To synchronize the information of the two subsystems as far as possible, the absolute timestamps of the data acquired by each subsystem, referenced to the motherboard clock of the response host, are first obtained; when acquisition begins, a start command is sent to both subsystems simultaneously from two threads, the time at which each subsystem actually starts is recorded, and the difference between the two absolute times is taken as the relative delay between the subsystems and recorded. A redundancy matching method is then used to pair the data. At the same time, to address the unstable data transmission rate, a data acquisition expansion card is installed on the motherboard, which provides stable data input to the system while keeping the cost down.
S3, performing space synchronization
Since the multi-view RGB-D camera subsystem and the motion capture subsystem each have their own coordinate system for measuring the position of the captured subject, the image data collected by the RGB-D cameras must be fused with the data collected by the motion capture devices, and the two coordinate systems must be converted into a single common coordinate system; this is the spatial synchronization of the two subsystems. Camera calibration is a long-standing and important problem in computer vision, involving imaging geometry, lens distortion, the homography matrix, nonlinear optimization, and so on. Self-calibration and calibration-board calibration are the commonly used approaches; this system adopts calibration-board calibration because its feature points are easy to detect and it is stable. When the acquisition scene is built in the previous step, the origin of the motion capture subsystem is placed at the center of the floor area; following the camera calibration principle, a checkerboard calibration board is chosen for the multi-camera calibration, and it is used to make the world coordinate systems of the two subsystems coincide in a physical sense, which completes the spatial synchronization.
S4, testing space-time synchronization
After the data acquisition scene has been built, the acquisition equipment has been set up, and the time and spatial synchronization of the motion capture subsystem and the RGB-D multi-camera subsystem have been completed, the whole system is put to the test. The subject wears a dedicated motion capture suit, or optical reflective markers are attached directly to the human key points to be captured, so that the motion capture devices can collect key point data. After the equipment has been debugged, the subject wearing the markers performs continuous, large-amplitude movements inside the acquisition scene, which gives the subsequent verification higher credibility. A tester observes the three-dimensional human pose captured by the motion capture system in real time on the host, and at several different intervals captures the frames returned by the motion capture system and the RGB-D system to check whether the data returned by the two subsystems correspond to the same instant, thereby verifying the correctness and reliability of the space-time synchronization established in the previous steps.

Priority Applications (1)

Application number: CN202310375395.2A; Publication: CN116645420A (en); Priority date: 2023-04-11; Filing date: 2023-04-11; Title: Multi-camera space-time synchronization method based on dynamic capturing equipment

Applications Claiming Priority (1)

Application number: CN202310375395.2A; Publication: CN116645420A (en); Priority date: 2023-04-11; Filing date: 2023-04-11; Title: Multi-camera space-time synchronization method based on dynamic capturing equipment

Publications (1)

Publication number: CN116645420A; Publication date: 2023-08-25

Family

ID=87621903

Family Applications (1)

Application number: CN202310375395.2A; Status: Pending; Publication: CN116645420A (en); Priority date: 2023-04-11; Filing date: 2023-04-11; Title: Multi-camera space-time synchronization method based on dynamic capturing equipment

Country Status (1)

Country: CN; Publication: CN116645420A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination