CN113379822B - Method for acquiring 3D information of target object based on pose information of acquisition equipment


Info

Publication number
CN113379822B
Authority
CN
China
Prior art keywords
image acquisition
image
acquisition device
axis
target object
Prior art date
Legal status
Active
Application number
CN202110618956.8A
Other languages
Chinese (zh)
Other versions
CN113379822A (en)
Inventor
左忠斌
左达宇
Current Assignee
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Aishi Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianmu Aishi Beijing Technology Co Ltd filed Critical Tianmu Aishi Beijing Technology Co Ltd
Priority to CN202110618956.8A
Publication of CN113379822A
Application granted
Publication of CN113379822B
Legal status: Active


Classifications

    • G06T 7/60 - Image analysis; Analysis of geometric attributes
    • G01B 11/002 - Measuring arrangements characterised by the use of optical techniques, for measuring two or more coordinates
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques, for measuring contours or curvatures
    • G06T 15/04 - 3D [Three Dimensional] image rendering; Texture mapping
    • G06T 17/20 - Three dimensional [3D] modelling; Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 5/70
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • G06T 2200/08 - Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/20024 - Filtering details

Abstract

The embodiment of the invention provides a method for acquiring 3D information of a target object based on pose information of the acquisition equipment, comprising the following steps: (1) acquiring a plurality of images of the target object with the acquisition equipment; (2) the calibration device obtains the 6 pose parameters of the acquisition equipment at the moment each image is acquired, namely Xs, Ys, Zs, the deflection angle φ, the tilt angle ω and the rotation angle κ, where Xs, Ys and Zs are the coordinates of the image acquisition centre on the X, Y and Z axes of the calibration-space coordinate system, φ is the angle between the projection of the camera z axis onto the XZ coordinate plane and the Z axis, ω is the angle between the camera z axis and the XZ coordinate plane, and κ is the angle between the projection of the camera y axis onto the xy coordinate plane and the y axis; (3) the processor obtains a large number of homonymous (same-name) pixel point pairs between the images and, from the 6 pose parameters and the camera parameters of the acquisition equipment, computes the three-dimensional coordinates corresponding to the homonymous points, yielding a three-dimensional model point cloud with true three-dimensional coordinates. Absolute-size calibration of the target object is thus achieved by acquiring the position and attitude of the camera.

Description

Method for acquiring 3D information of target object based on pose information of acquisition equipment
Technical Field
The invention relates to the technical field of morphology measurement, in particular to the technical field of 3D morphology measurement.
Background
Currently, when 3D acquisition and measurement are performed visually, the camera is usually rotated relative to the target object, or a plurality of cameras are arranged around the target object and acquire images simultaneously. For example, the Digital Emily project of the University of Southern California uses a spherical rig on which hundreds of cameras are fixed at different positions and angles to achieve 3D acquisition and modelling of the human body. In either case, however, the camera needs to be relatively close to the object, at least within an arrangeable range, so that cameras capturing images of the object from different positions can be set up.
However, in some applications it is not possible to acquire images around the object. For example, when a monitoring probe surveys a monitored area, the area is large, the distance is long and the objects to be captured are not fixed, so it is difficult to place cameras around a target object or to rotate a camera around it. How to perform 3D acquisition and modelling of a target object in this situation is a problem to be solved.
A further problem is that, even if 3D modelling of such distant objects is completed, their exact dimensions are still unknown, so a 3D model with absolute dimensions cannot be obtained. For example, when modelling a building at a distance, the prior art usually places a marker on or beside the building in order to obtain its absolute dimensions, and the size of the 3D model of the building is then derived from the size of the marker. However, it is not always possible to place a calibration object near the target; in that case, even if a 3D model is obtained, its absolute size, and therefore the true size of the object, cannot be known. For example, to model a house on the far bank of a river, a marker would have to be placed on the house, which is difficult if the river cannot be crossed. Besides long-distance cases there are also close-range ones: in three-dimensional modelling of an antique vase, for instance, a calibration object cannot be placed next to it for some reason, and for protection no target point or marker may be attached to the vase, so obtaining the absolute size of the vase model becomes a serious problem. Moreover, some objects cannot be scanned into a 3D model without a calibration object at all, and even illuminating the object with a light beam to form a calibration spot may be undesirable. How to measure the size of the target in such cases is a difficult problem.
In addition, 3D acquisition and modelling devices sometimes need to be placed on mobile platforms, for example on an autonomous car or mounted on a robot, to provide them with 3D vision. The objects they encounter are uncertain, and it is impossible to place calibration objects everywhere the vehicle or robot travels. How to obtain the 3D dimensions of the surrounding objects in this case is also a problem.
It has also been proposed in the prior art to define the camera position with empirical formulas that include the rotation angle, the target size and the object distance, so as to balance synthesis speed and effect. In practical applications, however, it was found that unless an accurate angle-measuring device is available, the user is insensitive to angles and the angle is difficult to determine accurately; the size of the target is also difficult to determine accurately, for example in the scenario of building a 3D model of the house across the river. Measurement errors then cause errors in the camera positions, which in turn affect acquisition and synthesis speed and results; accuracy and speed need further improvement.
Therefore, the following technical problems urgently need to be solved: (1) obtaining the 3D size of an object without a calibration object on or around the object, in particular for 3D dimensional measurement in changing environments; (2) taking both synthesis speed and synthesis precision into account; (3) acquiring a three-dimensional model of a remote object.
Disclosure of Invention
In view of the above, the present invention has been made to provide a method of overcoming the above problems or at least partially solving the above problems.
The embodiment of the invention provides a method for acquiring 3D information of a target object based on pose information of the acquisition equipment, comprising the following steps:
(1) Collecting a plurality of images of the target object by using a collecting device;
(2) The calibration device obtains the 6 pose parameters of the acquisition equipment at the moment each image is acquired, namely Xs, Ys, Zs, the deflection angle φ, the tilt angle ω and the rotation angle κ; where Xs, Ys and Zs are the coordinates of the image acquisition centre on the X, Y and Z axes of the calibration-space coordinate system, φ is the angle between the projection of the camera z axis onto the XZ coordinate plane and the Z axis, ω is the angle between the camera z axis and the XZ coordinate plane, and κ is the angle between the projection of the camera y axis onto the xy coordinate plane and the y axis;
(3) The processor acquires a large number of pixel point pairs with the same name among the images, calculates and acquires three-dimensional coordinates corresponding to the pixel points with the same name according to the 6 poses of the acquisition equipment and the camera parameters of the acquisition equipment, and acquires a three-dimensional model point cloud with the three-dimensional coordinates.
In an alternative embodiment: the position information includes XYZ coordinates, and the posture information includes a yaw angle, a pitch angle, and a roll angle.
In an alternative embodiment: the processor also calculates the three-dimensional coordinates of the homonymous image points using the following parameters of the acquisition equipment: the principal point coordinates (x₀, y₀), the focal length f, the radial distortion coefficients k₁ and k₂, the tangential distortion coefficients p₁ and p₂, the non-square scaling factor α of the image sensing element, and/or the non-orthogonality distortion factor β of the image sensing element.
In an alternative embodiment: the positions of the image acquisition device when it rotates to acquire a group of images satisfy the following condition:
where L is the linear distance between the optical centres of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length of the rectangular photosensitive element of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and μ is an empirical coefficient.
In an alternative embodiment: μ <0.482, μ <0.357, or μ <0.198.
In an alternative embodiment: when the acquisition equipment is 3D image acquisition equipment, two adjacent acquisition positions of the 3D image acquisition equipment accord with the following conditions:
where L is the linear distance between the optical centres of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
In an alternative embodiment: δ < 0.603; or δ < 0.410; or δ < 0.356; or δ < 0.311; or δ < 0.284; or δ < 0.261; or δ < 0.241; or δ < 0.107.
In an alternative embodiment: the three-dimensional coordinates corresponding to the same-name image points are obtained by carrying out space front intersection calculation on the matched same-name image points.
In an alternative embodiment: the absolute size of the target is obtained.
The invention also provides a calibration method which is applied to the method.
Inventive aspects and technical effects
1. The absolute-size calibration of the target object is achieved by acquiring the position and attitude of the camera, combined with solving the homonymous image points, so that no calibration object needs to be placed in advance and no calibration point needs to be projected.
2. By optimising the positions at which the camera collects the pictures, synthesis speed and synthesis precision can be improved simultaneously. When optimising the camera acquisition positions, neither the angle nor the size of the target needs to be measured, so applicability is stronger.
3. Images of the target object are acquired with the camera optical axis at a certain angle to the turntable rather than parallel to it, realising 3D synthesis and modelling without rotating around the target object and improving adaptability to different scenes.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of the calibration device applied to a 3D intelligent vision apparatus according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the calibration device applied to a 3D image acquisition apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the calibration device applied to an on-board 3D image acquisition apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the calibration device applied to a vehicle-mounted 3D image acquisition apparatus according to an embodiment of the present invention;
the device comprises an image acquisition device 1, a rotating device 2, a cylindrical shell 3, a rotating device 4, a calibrating device 5 and a target object 6.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
3D acquisition calibration flow
When the object to be acquired changes continuously, or the object distance is large, or no marker point can be placed on the object, the following method can be used:
a coordinate system XYZ in terms of the position and attitude of the acquisition device and a coordinate system XYZ in terms of the calibration space are provided.
A position-and-attitude sensor is mounted on the acquisition device and measures its 6 pose parameters in real time: Xs, Ys, Zs, the deflection angle φ, the tilt angle ω and the rotation angle κ. Xs, Ys and Zs are the coordinates of the image acquisition centre on the X, Y and Z axes of the calibration-space coordinate system; φ is the angle between the projection of the camera z axis onto the XZ coordinate plane and the Z axis; ω is the angle between the camera z axis and the XZ coordinate plane; κ is the angle between the projection of the camera y axis onto the xy coordinate plane and the y axis.
1. A plurality of images of the object are acquired with the acquisition device; the specific acquisition process and requirements are described in detail below. During acquisition, the 6 pose parameters at each acquisition moment are recorded with the pose sensor, i.e. the 6 pose parameters (exterior-orientation parameters) of every image are recorded.
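The sketch below (not part of the patent text) shows one way the six exterior-orientation parameters could be logged alongside each exposure; the camera and pose_sensor interfaces and the field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ImagePose:
    """Six exterior-orientation parameters recorded for one captured image.
    Xs, Ys, Zs: coordinates of the image acquisition centre in the
    calibration-space coordinate system; phi, omega, kappa: deflection,
    tilt and rotation angles (radians)."""
    image_path: str
    Xs: float
    Ys: float
    Zs: float
    phi: float
    omega: float
    kappa: float

def record_capture(camera, pose_sensor, path):
    """Hypothetical helper: trigger one exposure and log the pose read at the
    same instant. camera and pose_sensor are assumed interfaces (e.g. a
    GPS/BeiDou module for position and an IMU/gyroscope for attitude)."""
    camera.capture(path)
    Xs, Ys, Zs = pose_sensor.position()
    phi, omega, kappa = pose_sensor.attitude()
    return ImagePose(path, Xs, Ys, Zs, phi, omega, kappa)
```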
2. Feature points are extracted from all the acquired images and matched, yielding a large number of homonymous pixel point pairs between the images. A SURF operator is used for feature point extraction and matching. The SURF feature matching method mainly comprises three processes: feature point detection, feature point description and feature point matching. It detects feature points with a Hessian matrix, replaces second-order Gaussian filtering with box filters, accelerates convolution with an integral image to improve computation speed, and reduces the dimension of the local image feature descriptor to speed up matching.
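As an illustration of this step, the following sketch uses OpenCV for SURF extraction and descriptor matching; it assumes a build that includes the contrib xfeatures2d module (SURF is not present in every distribution), and the ratio-test threshold is an illustrative choice rather than a value from the patent.

```python
import cv2

def surf_match(img_path_1, img_path_2, hessian_threshold=400):
    """Detect SURF keypoints in two images and return homonymous pixel pairs."""
    img1 = cv2.imread(img_path_1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path_2, cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Match descriptors by Euclidean distance, keep unambiguous matches only
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in raw if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    # Same-name (homonymous) pixel point pairs: ((x1, y1), (x2, y2))
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```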
3. With the interior and exterior parameters of all the photographs known, a space forward intersection can be computed for the matched homonymous image points to obtain the three-dimensional coordinates corresponding to them, i.e. a point cloud with accurate three-dimensional coordinates, from which the three-dimensional size of the target is obtained.
4. The space forward intersection of homonymous image points proceeds as follows: let the homonymous image points of two images be (x1, y1) and (x2, y2), and let the exterior-orientation elements of the images be
The focal length of the sensor is f, the traditional photogrammetry generally adopts the following point projection coefficient method to perform space front intersection, and the object space coordinates (X, Y, Z) of the point are obtained:
wherein:
When the object-space point of homonymous image points on several images is solved, the object point is imaged on all of those images, and the point-projection-coefficient method based on the intersection of only two image points is no longer applicable. The basic idea of multi-ray forward intersection is as follows: on the basis of the collinearity condition equations, the object point coordinates are treated as unknown parameters and the image point coordinates as observations, and the ground coordinates are computed by an adjustment method.
Starting from the collinearity condition equation, an image point is expressed as:
linearizing a collineation conditional equation by taking (X, Y, Z) as an unknown parameter to obtain an error equation:
For each image point, two error equations may be obtained, and if there are n matching images, 2n error equations may be obtained. The error equation is expressed in matrix form as:
V = A·X - L
Then, given an iteration convergence threshold, X is solved by least squares:
X = (Aᵀ·A)⁻¹·(Aᵀ·L)
Finally, the ground point coordinates (X, Y, Z) are expressed as:
(X, Y, Z)ᵀ = (X₀, Y₀, Z₀)ᵀ + (ΔX, ΔY, ΔZ)ᵀ
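A compact sketch of the multi-ray forward intersection described above. Instead of iterating the linearised collinearity equations, it uses the equivalent closed-form least-squares intersection of the observation rays; the φ-ω-κ rotation convention used for building the rotation matrix is the common photogrammetric one and is an assumption of this sketch.

```python
import numpy as np

def rotation_from_pok(phi, omega, kappa):
    """Rotation matrix from (phi, omega, kappa), phi-omega-kappa convention
    (assumed here; the patent does not spell the convention out)."""
    Rp = np.array([[np.cos(phi), 0.0, -np.sin(phi)],
                   [0.0,         1.0,  0.0],
                   [np.sin(phi), 0.0,  np.cos(phi)]])
    Ro = np.array([[1.0, 0.0,            0.0],
                   [0.0, np.cos(omega), -np.sin(omega)],
                   [0.0, np.sin(omega),  np.cos(omega)]])
    Rk = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                   [np.sin(kappa),  np.cos(kappa), 0.0],
                   [0.0,            0.0,           1.0]])
    return Rp @ Ro @ Rk

def forward_intersection(observations, f):
    """Least-squares intersection of the rays defined by homonymous image points.
    observations: list of (x, y, Xs, Ys, Zs, phi, omega, kappa), one per image,
    with x, y measured from the principal point and f in the same units.
    Returns the ground point (X, Y, Z)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for x, y, Xs, Ys, Zs, phi, omega, kappa in observations:
        centre = np.array([Xs, Ys, Zs])
        direction = rotation_from_pok(phi, omega, kappa) @ np.array([x, y, -f])
        direction /= np.linalg.norm(direction)
        P = np.eye(3) - np.outer(direction, direction)  # projector normal to the ray
        A += P
        b += P @ centre
    return np.linalg.solve(A, b)
```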
In step 3 above, the interior parameters of the camera mainly include the principal point coordinates x₀ and y₀, the focal length f, the radial distortion coefficients k₁ and k₂, the tangential distortion coefficients p₁ and p₂, the CCD non-square scaling factor α and the CCD non-orthogonality distortion coefficient β. These parameters are all obtained at the camera calibration field.
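The patent lists these interior parameters but not the correction equations themselves. As a sketch, a measured image point could be corrected with the classical radial/tangential/affinity model shown below; the exact form of the correction (and its sign convention) is an assumption, not taken from the patent.

```python
def correct_image_point(x, y, x0, y0, k1, k2, p1, p2, alpha=0.0, beta=0.0):
    """Remove lens distortion and sensor affinity from a measured image point
    (classical Brown-style correction; the exact formula is assumed here)."""
    xb, yb = x - x0, y - y0
    r2 = xb * xb + yb * yb
    radial = k1 * r2 + k2 * r2 * r2
    dx = xb * radial + p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb + alpha * xb + beta * yb
    dy = yb * radial + p2 * (r2 + 2 * yb * yb) + 2 * p1 * xb * yb
    return x - dx, y - dy
```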
Calibration device structure
The calibration device can be composed of a position sensor and an attitude sensor (a module for detecting the position and the attitude can be combined into a position and attitude sensor, namely a positioning and orientation system for detecting the position and the attitude). For example, common position sensors include GPS positioning modules, beidou modules, etc.; common attitude sensors include IMU inertial sensors, gyroscopes, and the like.
When the calibration device 5 is applied to the above 3D intelligent vision apparatus, please refer to fig. 1, it may be located on the cylindrical housing or in the housing, and the relative positions of the calibration device and the image acquisition device of the intelligent vision apparatus are fixed and calibrated in advance.
When the calibration device 5 is applied to a typical 3D image acquisition apparatus (e.g. a camera with a track), please refer to fig. 2: the calibration device is located near the camera, for example on the camera housing or mounted on it via a fixing plate. The relative positions of the calibration device and the image acquisition device are fixed and calibrated in advance.
Using the 3D intelligent vision apparatus
The apparatus comprises an image acquisition device 1, a rotating device 2 and a cylindrical housing 3. As shown in fig. 1, the image acquisition device 1 is mounted on the rotating device 2, and the rotating device 2 is accommodated in the cylindrical housing 3 and can rotate freely inside it.
The image acquisition device 1 is used for acquiring a group of images of the target object through relative motion between its acquisition area and the target object; the acquisition-area moving device is used for driving the acquisition area of the image acquisition device into relative motion with the target. The acquisition area is the effective field-of-view range of the image acquisition device.
The image acquisition device 1 may be a camera and the rotating device 2 may be a turntable. The camera is arranged on the turntable 2 with its optical axis at a certain angle to the turntable, and the turntable surface is approximately parallel to the object to be acquired. The turntable drives the camera to rotate, so that the camera acquires images of the target object from different positions.
Further, the camera is mounted on the turntable by an angle adjusting device, which can be rotated to adjust the angle γ between the optical axis of the image acquisition device 1 and the turntable surface, with an adjustment range of -90° < γ < 90°. When a closer object is photographed, the optical axis of the image acquisition device 1 can be shifted towards the central axis of the turntable, i.e. γ is adjusted towards -90°. When a cavity is photographed, the optical axis of the image acquisition device 1 can be offset away from the central axis of the turntable, i.e. γ is adjusted towards 90°. The adjustment can be done manually, or a distance measuring device can be provided for the 3D intelligent vision apparatus to measure the distance to the target object and adjust the γ angle automatically according to that distance.
The turntable can be connected with the motor through a transmission device, rotate under the drive of the motor and drive the image acquisition device 1 to rotate. The transmission may be a conventional mechanical structure such as a gear system or a belt.
In order to increase the acquisition efficiency, a plurality of image acquisition devices 1 may be provided on the turntable. The plurality of image acquisition devices 1 are distributed along the circumference of the turntable in sequence. For example, an image acquisition device 1 can be respectively arranged at two ends of any diameter of the turntable. The image acquisition device 1 can be arranged at intervals of 60-degree circumference, and 6 image acquisition devices 1 are uniformly arranged on the whole disc. The plurality of image capturing devices may be the same type of camera or different types of cameras. For example, a visible light camera and an infrared camera are arranged on the turntable, so that images with different wave bands can be acquired.
The image capturing device 1 is used for capturing an image of a target object, and may be a fixed-focus camera or a zoom camera. In particular, the camera may be a visible light camera or an infrared camera. Of course, it should be understood that any device having an image capturing function may be used, and the device is not limited to the present invention, and may be, for example, a CCD, a CMOS, a camera, a video camera, an industrial camera, a monitor, a video camera, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, a smart glasses, a smart watch, a smart bracelet, and all devices having an image capturing function.
The rotating device 2 can be in various forms such as a rotating arm, a rotating beam, a rotating bracket and the like besides a rotating disc, so long as the rotating device can be driven to rotate. In either case, the optical axis of the image capturing device 1 has a certain angle γ with the rotation plane.
In general, the light sources are distributed around the lens of the image acquisition device 1 in a dispersed manner, for example, the light sources are annular LED lamps around the lens and are located on the turntable; or may be provided in the cross section of the cylindrical housing. In particular, a light-softening device, for example a light-softening housing, can be arranged in the light path of the light source. Or the LED area light source is directly adopted, so that the light is softer, and the light is more uniform. More preferably, an OLED light source may be used, which is smaller, softer to light, and flexible to attach to a curved surface. The light source may be positioned at other locations that provide uniform illumination of the target. The light source can also be an intelligent light source, namely, the light source parameters can be automatically adjusted according to the conditions of the target object and the ambient light.
When 3D acquisition is performed, the optical axis direction of the image acquisition device at different acquisition positions is unchanged relative to the target object, and is generally approximately perpendicular to the surface of the target object, and at this time, the positions of two adjacent image acquisition devices 1, or the two adjacent acquisition positions of the image acquisition devices 1, satisfy the following conditions:
μ<0.482
wherein L is the linear distance between the optical centers of the two adjacent acquisition position image acquisition devices 1; f is the focal length of the image acquisition device 1; d is the rectangular length of a photosensitive element (CCD) of the image acquisition device; m is the distance from the photosensitive element of the image acquisition device 1 to the surface of the target along the optical axis; μ is an empirical coefficient.
D, taking a rectangular length when the two positions are along the length direction of the photosensitive element of the image acquisition device 1; when the above two positions are along the width direction of the photosensitive element of the image pickup device 1, d takes a rectangular width.
In either of the above two positions of the image pickup device 1, the distance from the photosensitive element to the surface of the object along the optical axis is taken as M.
As described above, L should be the straight line distance between the optical centers of the two image capturing devices 1, but since the optical center position of the image capturing device 1 is not easily determined in some cases, the center of the photosensitive element of the image capturing device 1, the geometric center of the image capturing device 1, the center of the axis of connection of the image capturing device with the cradle head (or platform, stand), the center of the lens proximal end or distal end surface may be used instead in some cases, and the error caused by this is found to be within an acceptable range through experiments, so that the above range is also within the scope of the present invention.
By using the device provided by the invention, experiments are carried out, and the following experimental results are obtained.
From the above experimental results and extensive experimental experience, it can be concluded that μ should satisfy μ < 0.482; at this value a partial 3D model can already be synthesised, and although some parts cannot be synthesised automatically, this is acceptable when requirements are not high, and the parts that cannot be synthesised can be compensated manually or by an alternative algorithm. In particular, when μ < 0.357 the balance between synthesis effect and synthesis time is optimal; for a better synthesis effect μ < 0.198 can be chosen, in which case the synthesis time increases but the synthesis quality is better. When μ is 0.5078, synthesis fails. It should be noted that the above ranges are merely preferred embodiments and do not limit the scope of protection.
The above data are obtained by experiments performed to verify the condition of the formula, and are not limiting on the invention. Even without this data, the objectivity of the formula is not affected. The person skilled in the art can adjust the parameters of the equipment and the details of the steps according to the requirement to perform experiments, and other data are obtained according with the formula.
Adjacent acquisition positions are two adjacent positions on the motion track at which acquisition actions occur while the image acquisition device moves relative to the target object. This is easy to understand when the image acquisition device itself moves. When it is the target object that moves and thereby produces the relative motion, the motion of the target object is converted, by the relativity of motion, into an equivalent motion of the image acquisition device; in that case, the two adjacent positions at which acquisition occurs along the converted motion track are measured.
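The formula to which this condition refers is rendered as an image in the original publication and does not survive in this text. A plausible reading, consistent with the parameters listed (L, f, d, M) and with the later remark that only the focal length, the sensor size and the object distance are needed, is that μ compares the baseline L with the sensor footprint projected onto the target, d·M/f. The sketch below uses that assumed form; only the threshold values come from the text.

```python
def spacing_ok(L, f, d, M, mu_max=0.482):
    """Check two adjacent acquisition positions against the empirical condition.

    ASSUMPTION: the (missing) formula is read here as mu = L * f / (d * M),
    i.e. the camera baseline divided by the sensor footprint projected onto
    the target at distance M. Only the thresholds (0.482 / 0.357 / 0.198)
    are taken from the patent text.
    """
    mu = L * f / (d * M)
    return mu < mu_max, mu

# Example: 35 mm lens, 23.5 mm sensor length, target 10 m away, 1.5 m baseline.
ok, mu = spacing_ok(L=1.5, f=0.035, d=0.0235, M=10.0)
print(f"mu = {mu:.3f}, within threshold: {ok}")
```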
Using 3D image acquisition device
(1) The acquisition area moving device is a rotary structure
As shown in fig. 2, the object 6 is fixed at a certain position, and the rotation device 4 drives the image acquisition device 1 to rotate around the object 6. The rotation device 4 can drive the image acquisition device 1 to rotate around the target object 6 through the rotation arm. Of course, the rotation is not necessarily a complete circular motion, and can be only rotated by a certain angle according to the acquisition requirement. The rotation is not necessarily circular, and the motion track of the image acquisition device 1 can be other curve tracks, so long as the camera is ensured to shoot an object from different angles.
The rotating device can also drive the image acquisition device itself to rotate so that, through this rotation, it acquires images of the target object from different angles.
The rotating device can be in various forms such as a cantilever, a turntable, a track and the like, and can be handheld, vehicle-mounted or airborne, as shown in fig. 3, so that the image acquisition device 1 can generate motion.
In addition to the above manner, in some cases, the camera may be fixed, and the stage carrying the object rotates, so that the direction of the object facing the image capturing device changes at any time, and the image capturing device is enabled to capture images of the object from different angles. However, in this case, the calculation can still be performed as converted into a motion of the image acquisition device, so that the motion corresponds to a corresponding empirical formula (which will be described in detail below). For example, in a scenario where the stage is rotated, it may be assumed that the stage is stationary and the image capture device is rotated. The distance of the shooting position when the image acquisition device rotates is set by utilizing an empirical formula, so that the rotating speed of the image acquisition device is deduced, the rotating speed of the objective table is reversely deduced, the rotating speed control is convenient, and the 3D acquisition is realized. Of course, such a scenario is not common, more common or the image acquisition device is rotated.
The image acquisition device is used for acquiring an image of a target object, and can be a fixed-focus camera or a zoom camera. In particular, the camera may be a visible light camera or an infrared camera. Of course, it should be understood that any device having an image capturing function may be used, and the device is not limited to the present invention, and may be, for example, a CCD, a CMOS, a camera, a video camera, an industrial camera, a monitor, a video camera, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, a smart glasses, a smart watch, a smart bracelet, and all devices having an image capturing function.
The device also comprises a processor, also called a processing unit, which is used for synthesizing a 3D model of the target object according to a 3D synthesis algorithm and obtaining 3D information of the target object according to a plurality of images acquired by the image acquisition device.
(2) The acquisition area moving device is of a translational structure
Besides the rotating structure, the image acquisition device can also move relative to the target object along a linear track. For example, the image acquisition device may be located on a linear track, or on a vehicle or unmanned aerial vehicle travelling in a straight line, and passes the object along that track while capturing images, as shown in fig. 4; during this process the image acquisition device does not rotate. The linear track can also be replaced by a linear cantilever. More preferably, while the image acquisition device moves along the straight track as a whole, it performs a certain rotation so that the optical axis of the image acquisition device 1 is directed toward the target object 6.
(3) The acquisition area moving device is of a random movement structure
Sometimes the movement of the acquisition area is irregular, for example when the image acquisition device is hand-held, or when it is vehicle-mounted or airborne and the travel route is irregular. In such cases it is difficult to move along a strict track, and the motion track of the image acquisition device is hard to predict accurately. How to guarantee that the captured images can be synthesised into a 3D model accurately and stably is therefore a major problem in this situation, one that has not been addressed so far. A more common approach is to take many photographs and use redundancy in their number to solve the problem, but the synthesis results are then not stable. Although there are ways to improve the synthesis effect by limiting the rotation angle of the camera, in practice users are not sensitive to angles; even if a preferred angle is given, it is difficult for a user to respect it in hand-held shooting. Therefore, the invention proposes improving the synthesis effect and shortening the synthesis time by limiting the distance the camera moves between two successive photographs.
In the case of irregular motion, a sensor can be provided in the mobile terminal or in the image acquisition device to measure the straight-line distance moved by the image acquisition device between two shots; when the movement distance does not satisfy the above experience condition on L (specifically, the condition given below), an alarm is issued to the user. The alarm includes sounding a warning or lighting an alarm lamp. The distance moved and the maximum movable distance L can also be displayed on the screen of the mobile phone, or prompted by voice in real time, while the user moves the image acquisition device. Sensors for realising this function include rangefinders, gyroscopes, accelerometers, positioning sensors, and/or combinations thereof.
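A sketch of the warning behaviour just described: the displacement since the last exposure, measured by an onboard sensor, is compared with the maximum allowed baseline. The condition is again interpreted under the assumed form δ = L·f/(d·T) discussed above, and the alert call is a placeholder rather than a real device API.

```python
def max_baseline(f, d, T, delta_max=0.603):
    """Maximum allowed straight-line movement between two exposures.
    ASSUMPTION: the condition is read as delta = L * f / (d * T) < delta_max;
    only the threshold value comes from the patent text."""
    return delta_max * d * T / f

def check_move(displacement, f, d, T):
    """Warn the operator when the distance moved since the last shot exceeds
    the allowed baseline (placeholder alert: a print statement)."""
    limit = max_baseline(f, d, T)
    if displacement > limit:
        print(f"ALARM: moved {displacement:.2f} m, limit is {limit:.2f} m")
        return False
    return True
```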
(4) Multi-camera mode
It can be understood that, besides the camera and the target object relatively move so that the camera can shoot images of different angles of the target object, a plurality of cameras can be arranged at different positions around the target object, so that the aim of shooting images of different angles of the target object at the same time can be achieved.
When the acquisition area moves relative to the target object, particularly the image acquisition device rotates around the target object, the optical axis direction of the image acquisition device at different acquisition positions changes relative to the target object during 3D acquisition, and at the moment, the positions of two adjacent image acquisition devices or the two adjacent acquisition positions of the image acquisition device meet the following conditions:
δ<0.603
wherein L is the linear distance between the optical centers of the two adjacent acquisition position image acquisition devices; f is the focal length of the image acquisition device; d is the rectangular length or width of a photosensitive element (CCD) of the image acquisition device; t is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; delta is the adjustment coefficient.
D, taking a rectangular length when the two positions are along the length direction of the photosensitive element of the image acquisition device; when the two positions are along the width direction of the photosensitive element of the image acquisition device, d takes a rectangular width.
At either of the two positions of the image acquisition device, the distance from the photosensitive element to the target surface along the optical axis is taken as T. Alternatively, when L is the straight-line distance between the optical centres of the image acquisition device at two positions Aₙ and Aₙ₊₁, the distances from the photosensitive element to the target surface along the optical axis at the adjacent positions Aₙ₋₁ and Aₙ₊₂ and at Aₙ and Aₙ₊₁ are denoted Tₙ₋₁, Tₙ, Tₙ₊₁ and Tₙ₊₂ respectively, and T = (Tₙ₋₁ + Tₙ + Tₙ₊₁ + Tₙ₊₂)/4. Of course, the average need not be computed only over the 4 adjacent positions; more positions can be used.
By using the device provided by the invention, experiments are carried out, and the following experimental results are obtained.
The camera lens was replaced and the experiment was repeated, the following experimental results were obtained.
The camera lens was replaced and the experiment was repeated, the following experimental results were obtained.
As described above, L should be the straight line distance between the optical centers of the two image capturing devices, but since the optical center position of the image capturing device is not easily determined in some cases, the center of the photosensitive element of the image capturing device, the geometric center of the image capturing device, the center of the axis of connection of the image capturing device with the cradle head (or platform, bracket), the center of the proximal end or distal end surface of the lens may be used instead in some cases, and the error caused by this is found to be within an acceptable range through experiments, so the above range is also within the scope of the present invention.
In general, in the prior art, parameters such as an object size and a field angle are used as a mode for estimating a camera position, and a positional relationship between two cameras is also expressed by an angle. The angle is inconvenient in practical use because the angle is not well measured in practical use. And, the object size may change as the measurement object changes. The inconvenient measurement and repeated measurement bring about errors in measurement, thereby causing errors in camera position estimation. According to the scheme, according to a large amount of experimental data, the empirical condition which needs to be met by the position of the camera is provided, so that not only is the angle which is difficult to accurately measure measured avoided, but also the size and the dimension of an object do not need to be directly measured. In the experience condition, d and f are fixed parameters of the camera, and when the camera and the lens are purchased, the manufacturer can give corresponding parameters without measurement. T is only a straight line distance, and can be conveniently measured by using a traditional measuring method, such as a ruler and a laser range finder. Therefore, the empirical formula of the invention makes the preparation process convenient and quick, and improves the arrangement accuracy of the camera positions, so that the cameras can be arranged in the optimized positions, thereby simultaneously taking into account the 3D synthesis accuracy and speed.
From the above experimental results and extensive experimental experience, it can be concluded that δ should satisfy δ < 0.603; at this value a partial 3D model can be synthesised, and although some parts cannot be synthesised automatically, this is acceptable when requirements are not high, and the parts that cannot be synthesised can be compensated manually or by an alternative algorithm. In particular, when δ < 0.410 the balance between synthesis effect and synthesis time is optimal; δ < 0.356 can be chosen for a better synthesis effect, in which case the synthesis time increases but the synthesis quality is better. Of course, to further improve the synthesis effect, δ < 0.311 may be selected. When δ is 0.681, synthesis fails. It should be noted that the above ranges are merely preferred embodiments and do not limit the scope of protection.
And as can be seen from the above experiments, for determining the photographing position of the camera, only the camera parameters (focal length f, CCD size) and the distance T between the camera CCD and the object surface need to be obtained according to the above formula, which makes it easy to design and debug the device. Since the camera parameters (focal length f, CCD size) are already determined at the time of purchase of the camera and are indicated in the product description, they are readily available. The camera position can be calculated easily from the above formula without the need for cumbersome angle of view measurements and object size measurements. Particularly, in some occasions, a camera lens needs to be replaced, and then the method can obtain the camera position by directly replacing the conventional parameter f of the lens and calculating; similarly, when different objects are collected, the measurement of the object size is also complicated due to the different sizes of the objects. By using the method of the invention, the camera position can be more conveniently determined without measuring the object size. The camera position determined by the invention can be used for combining time and combining effect. Thus, the above empirical condition is one of the inventive aspects of the present invention.
The above data are obtained by experiments performed to verify the condition of the formula, and are not limiting on the invention. Even without this data, the objectivity of the formula is not affected. The person skilled in the art can adjust the parameters of the equipment and the details of the steps according to the requirement to perform experiments, and other data are obtained according with the formula.
The rotational motion of the invention means that, during acquisition, the acquisition plane at the previous position and the acquisition plane at the following position intersect instead of being parallel, or that the optical axis of the image acquisition device at the previous position intersects, instead of being parallel to, its optical axis at the following position. That is, any motion of the acquisition area of the image acquisition device around, or partly around, the target object can be regarded as a relative rotation of the two. Although the embodiments of the present invention exemplify mostly orbital rotational motion, it is understood that the limitations of the present invention apply as long as the non-parallel motion between the acquisition area of the image acquisition device and the target object is a rotation. The scope of the invention is not limited to the orbital rotation in the embodiments.
Adjacent acquisition positions are two adjacent positions on the motion track at which acquisition actions occur while the image acquisition device moves relative to the target object. This is easy to understand when the image acquisition device itself moves. When it is the target object that moves and thereby produces the relative motion, the motion of the target object is converted, by the relativity of motion, into an equivalent motion of the image acquisition device; in that case, the two adjacent positions at which acquisition occurs along the converted motion track are measured.
3D synthesis modeling device and method
The processor is also called a processing unit and is used for synthesizing a 3D model of the target object according to a plurality of images acquired by the image acquisition device and a 3D synthesis algorithm to obtain 3D information of the target object. The image acquisition device 1 sends the acquired images to a processing unit, and the processing unit obtains 3D information of the target object according to the images in the group of images. Of course, the processing unit may be directly disposed in the housing in which the image capturing device 1 is located, or may be connected to the image capturing device through a data line or through a wireless manner. For example, a separate computer, server, cluster server, or the like may be used as the processing unit, and the image data acquired by the image acquisition apparatus 1 may be transmitted thereto to perform 3D synthesis. Meanwhile, the data of the image acquisition device 1 can be transmitted to the cloud platform, and the 3D synthesis can be performed by utilizing the powerful computing capacity of the cloud platform.
The processing unit performs the following method:
1. Image enhancement is performed on all input photographs. The following Wallis filter is used to enhance the contrast of the original photographs while suppressing noise.
where g(x, y) is the grey value of the original image at (x, y); f(x, y) is the grey value at (x, y) after enhancement by the Wallis filter; m_g is the local grey-level mean of the original image; s_g is the local grey-level standard deviation of the original image; m_f is the target value of the local grey-level mean of the transformed image; s_f is the target value of the local grey-level standard deviation of the transformed image; c ∈ (0, 1) is the expansion constant of the image variance; and b ∈ (0, 1) is the image brightness coefficient constant.
The filter can greatly enhance image texture modes with different scales in the image, so that the number and the precision of feature points can be improved when the point features of the image are extracted, and the reliability and the precision of a matching result are improved when the photo features are matched.
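The Wallis formula itself appears as an image in the original publication and is not reproduced in this text; the sketch below implements the standard form of the Wallis filter, which is consistent with the parameter definitions given above, with illustrative default values for the target mean, target standard deviation, window size, c and b.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(g, window=31, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Standard Wallis filter: map the local mean and standard deviation of the
    input image toward the target values m_f and s_f. c, b in (0, 1) are the
    variance-expansion and brightness constants. The standard form of the
    filter is assumed; the patent's own formula does not survive in this text."""
    g = g.astype(np.float64)
    m_g = uniform_filter(g, size=window)                                         # local mean
    s_g = np.sqrt(np.maximum(uniform_filter(g * g, size=window) - m_g**2, 0.0))  # local std
    gain = c * s_f / (c * s_g + (1.0 - c) * s_f)
    f = (g - m_g) * gain + b * m_f + (1.0 - b) * m_g
    return np.clip(f, 0, 255).astype(np.uint8)
```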
2. Feature points are extracted from all input images and matched to obtain sparse feature points. A SURF operator is used for feature point extraction and matching. The SURF feature matching method mainly comprises three processes: feature point detection, feature point description and feature point matching. It detects feature points with a Hessian matrix, replaces second-order Gaussian filtering with box filters, accelerates convolution with an integral image to improve computation speed, and reduces the dimension of the local image feature descriptor to speed up matching. The main steps are: (1) constructing the Hessian matrix and generating all interest points for feature extraction; the Hessian matrix is built to produce stable edge points (abrupt-change points) of the image; (2) scale-space feature point localisation: each pixel processed by the Hessian matrix is compared with its 26 neighbours in the two-dimensional image space and the scale space, key points are located preliminarily, weakly responding and wrongly located key points are filtered out, and the final stable feature points are selected; (3) the dominant orientation of each feature point is determined from the Haar wavelet responses in its circular neighbourhood: the sum of the horizontal and vertical Haar wavelet responses of all points within a 60° sector is computed, the sector is then rotated in steps of 0.2 rad and the response is computed again, and finally the direction of the sector with the largest response is taken as the dominant orientation of the feature point; (4) a 64-dimensional feature point descriptor is generated: a 4x4 block of rectangular sub-regions is taken around the feature point, with the block oriented along the dominant orientation. For each sub-region the Haar wavelet responses of 25 pixels in the horizontal and vertical directions (both relative to the dominant orientation) are accumulated into 4 values: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses and the sum of the absolute vertical responses. These 4 values per sub-block give a 4x4x4 = 64-dimensional vector as the SURF descriptor; (5) feature points are matched: the degree of matching is determined by the Euclidean distance between two descriptors, and the shorter the Euclidean distance, the better the match.
3. The coordinates of the matched feature points are input, and bundle adjustment is used to compute the sparse three-dimensional point cloud of the target and the position and attitude data of the photographing camera, obtaining the model coordinate values of the sparse target-model point cloud and of the camera positions; with the sparse feature points as initial values, dense matching of the multi-view photographs is performed to obtain dense point cloud data. The process comprises four main steps: stereopair selection, depth map computation, depth map optimisation and depth map fusion. For each image in the input data set, a reference image is selected to form a stereopair used for computing the depth map, so that a rough depth map is obtained for every image; these may contain noise and errors, so the neighbourhood depth maps are used for a consistency check to optimise the depth map of each image. Finally, depth map fusion produces the three-dimensional point cloud of the whole scene.
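The patent does not name a specific dense-matching implementation. As an illustrative stand-in for the per-stereopair depth-map step, the following sketch computes a disparity map for one rectified pair with OpenCV's semi-global block matcher; all parameter values are arbitrary starting points, not values from the patent.

```python
import cv2
import numpy as np

def disparity_map(left_path, right_path):
    """Disparity (inverse-depth proxy) map for one rectified stereopair,
    used here only to illustrate the depth-map computation step."""
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5, uniquenessRatio=10,
                                 speckleWindowSize=100, speckleRange=2)
    # OpenCV returns fixed-point disparities scaled by 16
    return sgbm.compute(left, right).astype(np.float32) / 16.0
```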
4. The surface of the target object is reconstructed from the dense point cloud. The steps comprise defining an octree, setting the function space, creating the vector field, solving the Poisson equation and extracting the isosurface. The integral relation between the sampling points and the indicator function is obtained from the gradient relation, the vector field of the point cloud is obtained from that integral relation, and the approximation of the gradient field of the indicator function is computed to form the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with a marching-cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
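A sketch of this surface-reconstruction step using Open3D's Poisson reconstruction as a stand-in for the octree / vector-field / Poisson-equation pipeline described above; the depth parameter is an illustrative choice.

```python
import open3d as o3d

def poisson_mesh(points_xyz, depth=9):
    """Poisson surface reconstruction from a dense point cloud (N x 3 array).
    Open3D is used here as an illustrative implementation, not as the
    patent's own algorithm."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    pcd.estimate_normals()  # normals are required by Poisson reconstruction
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```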
5. Fully automatic texture mapping of the object model. After the surface model is constructed, texture mapping is performed. The main process comprises: (1) texture data acquisition: obtaining texture data through the surface triangular mesh of the reconstructed target; (2) visibility analysis of the reconstructed model's triangular faces: computing, from the calibration information of the images, the visible image set and the optimal reference image of each triangular face; (3) triangular face clustering to generate texture patches: clustering the triangular faces into several reference-image texture patches according to their visible image sets, their optimal reference images and the neighbourhood topology of the faces; (4) automatic sorting of the texture patches to generate the texture image: sorting the generated texture patches by size, generating the texture image with the smallest enclosing area, and obtaining the texture-mapping coordinates of each triangular face.
Application example
For example, a 3D acquisition device mounted on an autonomous vehicle allows the vehicle to obtain not only 3D models of the surrounding buildings but also their real sizes, so that the environment around the autonomous vehicle can be recognized more accurately.
Installing the 3D acquisition device on a robot gives the robot 3D vision, which is equivalent to fitting it with more accurate eyes. The robot can perceive the conditions and exact dimensions of its surroundings in real time, so that it can judge the environment accurately and make correct decisions.
In addition, the acquisition device may be used on aircraft, drones, ships and various mobile devices to obtain the required 3D model and size.
Of course, although the above applications all concern mobile platforms, the apparatus and method may in fact also be used for stationary acquisition. For example, 3D acquisition equipment mounted on a street lamp at an intersection can acquire 3D models of pedestrians and vehicles on the road at any time and obtain their sizes, so that vehicles can be identified and judged accurately. Even the accurate three-dimensional contour of a pedestrian can be obtained, so that the pedestrian's identity can be determined more reliably than with two-dimensional recognition. This is very advantageous in security monitoring.
Although the above embodiments describe the image acquisition device acquiring images, this should not be construed as meaning that the method applies only to groups of individual still pictures; that is merely an explanatory presentation adopted for ease of understanding. The image acquisition device may also acquire video data, and 3D synthesis may use the video directly or images extracted from it. However, the shooting positions of the frames or extracted images of the video used in the synthesis must still satisfy the above empirical formula.
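A minimal sketch of extracting candidate frames from a video for 3D synthesis is given below; the fixed frame step is only an assumption, and in practice the frames actually used must still be chosen so that adjacent shooting positions satisfy the empirical condition above.

```python
import cv2

def extract_frames(video_path: str, step: int = 10):
    """Return every step-th frame of the video as candidate images for 3D synthesis."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```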
The target object and the object each denote an object whose three-dimensional information is to be acquired; it may be a single solid object or a composition of several objects, for example a building or a bridge. The three-dimensional information of the target object includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions and all other parameters carrying the three-dimensional features of the target object. In the present invention, three-dimensional means having information in the three XYZ directions, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also essentially different from definitions that are called three-dimensional, panoramic, holographic or stereoscopic but in fact contain only two-dimensional information and, in particular, no depth information.
The acquisition region in the present invention refers to the range that can be photographed by an image acquisition device (e.g., a camera). The image acquisition device in the invention may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart band, or any other device with an image acquisition function.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus in accordance with embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
By now it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described herein in detail, many other variations or modifications of the invention consistent with the principles of the invention may be directly ascertained or inferred from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (11)

1. The method for acquiring the 3D information of the target object based on the pose information of the acquisition equipment is characterized by comprising the following steps of:
(1) Collecting a plurality of images of the target object by using a collecting device;
(2) The calibration device obtains the 6 pose parameters of the acquisition device at the time each image is acquired, namely Xs, Ys, Zs, the deflection angle φ, the inclination angle ω and the rotation angle κ; wherein Xs, Ys and Zs are the XYZ axis coordinates of the image acquisition center in the calibration space coordinate system; φ is the included angle between the projection of the z axis on the XZ coordinate plane and the Z axis; ω is the included angle between the z axis and the XZ coordinate plane; κ is the included angle between the projection of the Y axis on the xy coordinate plane and the y axis;
(3) The processor obtains a large number of homonymous pixel point pairs (corresponding points) among the images and, from the 6 pose parameters of the acquisition device and the camera parameters of the acquisition device, calculates the three-dimensional coordinates corresponding to the homonymous points, thereby obtaining a three-dimensional model point cloud with three-dimensional coordinates;
The positions of the image acquisition device when it rotates to acquire a group of images conform to the following condition:
wherein L is the linear distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length of the rectangle of the photosensitive element of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device to the surface of the target object along the optical axis; μ is an empirical coefficient;
the calibration device is composed of a position sensor and an attitude sensor, or the modules for detecting position and attitude are combined into a single sensor, i.e. a positioning and orientation system capable of detecting both position and attitude.
2. The method of claim 1, wherein: the processor also calculates the three-dimensional coordinates of the homonymous image points in combination with the following parameters of the acquisition device: the principal point coordinates (x0, y0), the focal length f of the image acquisition device, the radial distortion coefficients k1 and k2, the tangential distortion coefficients p1 and p2, the non-square scale factor α of the photosensitive element of the image acquisition device and/or the non-orthogonality distortion factor β of the photosensitive element of the image acquisition device.
3. The method of claim 1, wherein: μ <0.482, or μ <0.357, or μ <0.198.
4. The method of claim 1, wherein: the absolute size of the target is obtained.
5. The calibration method is characterized by comprising: using the method according to any one of claims 1-4.
6. The method for acquiring the 3D information of the target object based on the pose information of the acquisition equipment is characterized by comprising the following steps of:
(1) Collecting a plurality of images of the target object by using a collecting device;
(2) The calibration device obtains the 6 pose parameters of the acquisition device at the time each image is acquired, namely Xs, Ys, Zs, the deflection angle φ, the inclination angle ω and the rotation angle κ; wherein Xs, Ys and Zs are the XYZ axis coordinates of the image acquisition center in the calibration space coordinate system; φ is the included angle between the projection of the z axis on the XZ coordinate plane and the Z axis; ω is the included angle between the z axis and the XZ coordinate plane; κ is the included angle between the projection of the Y axis on the xy coordinate plane and the y axis;
(3) The processor obtains a large number of homonymous pixel point pairs (corresponding points) among the images and, from the 6 pose parameters of the acquisition device and the camera parameters of the acquisition device, calculates the three-dimensional coordinates corresponding to the homonymous points, thereby obtaining a three-dimensional model point cloud with three-dimensional coordinates;
when the acquisition equipment is a 3D image acquisition device, two adjacent acquisition positions of the 3D image acquisition device conform to the following condition:
wherein L is the linear distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangle of the photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target object along the optical axis; δ is an adjustment coefficient;
the calibration device is composed of a position sensor and an attitude sensor, or the modules for detecting position and attitude are combined into a single sensor, i.e. a positioning and orientation system capable of detecting both position and attitude.
7. The method of claim 6, wherein: the processor also calculates the three-dimensional coordinates of the homonymous image points in combination with the following parameters of the acquisition device: the principal point coordinates (x0, y0), the focal length f of the image acquisition device, the radial distortion coefficients k1 and k2, the tangential distortion coefficients p1 and p2, the non-square scale factor α of the photosensitive element of the image acquisition device and/or the non-orthogonality distortion factor β of the photosensitive element of the image acquisition device.
8. The method of claim 6, wherein: δ <0.603, or δ <0.410, or δ <0.356, or δ <0.311, or δ <0.284, or δ <0.261, or δ <0.241, or δ <0.107.
9. The method of claim 6, wherein: the three-dimensional coordinates corresponding to the same-name image points are obtained by carrying out space front intersection calculation on the matched same-name image points.
10. The method of claim 6, wherein: the absolute size of the target is obtained.
11. The calibration method is characterized by comprising: using the method according to any one of claims 6-10.
CN202110618956.8A 2020-03-16 2020-03-16 Method for acquiring 3D information of target object based on pose information of acquisition equipment Active CN113379822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110618956.8A CN113379822B (en) 2020-03-16 2020-03-16 Method for acquiring 3D information of target object based on pose information of acquisition equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110618956.8A CN113379822B (en) 2020-03-16 2020-03-16 Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN202010182913.5A CN111462213B (en) 2020-03-16 2020-03-16 Equipment and method for acquiring 3D coordinates and dimensions of object in motion process

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010182913.5A Division CN111462213B (en) 2020-03-16 2020-03-16 Equipment and method for acquiring 3D coordinates and dimensions of object in motion process

Publications (2)

Publication Number Publication Date
CN113379822A CN113379822A (en) 2021-09-10
CN113379822B true CN113379822B (en) 2024-03-22

Family

ID=71683182

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110618956.8A Active CN113379822B (en) 2020-03-16 2020-03-16 Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN202010182913.5A Active CN111462213B (en) 2020-03-16 2020-03-16 Equipment and method for acquiring 3D coordinates and dimensions of object in motion process

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010182913.5A Active CN111462213B (en) 2020-03-16 2020-03-16 Equipment and method for acquiring 3D coordinates and dimensions of object in motion process

Country Status (2)

Country Link
CN (2) CN113379822B (en)
WO (1) WO2021185218A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379822B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN111462304B (en) * 2020-03-16 2021-06-15 天目爱视(北京)科技有限公司 3D acquisition and size measurement method for space field
CN111445529B (en) * 2020-03-16 2021-03-23 天目爱视(北京)科技有限公司 Calibration equipment and method based on multi-laser ranging
CN112257537B (en) * 2020-10-15 2022-02-15 天目爱视(北京)科技有限公司 Intelligent multi-point three-dimensional information acquisition equipment
CN112254675B (en) * 2020-10-15 2023-04-11 天目爱视(北京)科技有限公司 Space occupancy rate acquisition and judgment equipment and method containing moving object
CN112257535B (en) * 2020-10-15 2022-04-08 天目爱视(北京)科技有限公司 Three-dimensional matching equipment and method for avoiding object
CN112257536B (en) * 2020-10-15 2022-05-20 天目爱视(北京)科技有限公司 Space and object three-dimensional information acquisition and matching equipment and method
CN112435080A (en) * 2020-12-18 2021-03-02 天目爱视(北京)科技有限公司 Virtual garment manufacturing equipment based on human body three-dimensional information
CN112634287A (en) * 2020-12-25 2021-04-09 电子科技大学 Heart magnetic resonance image segmentation method based on interlayer offset correction
CN114397090B (en) * 2021-11-15 2023-05-02 中国科学院西安光学精密机械研究所 Method for rapidly measuring optical axis parallelism of continuous zoom camera
CN113838197A (en) * 2021-11-29 2021-12-24 南京天辰礼达电子科技有限公司 Region reconstruction method and system
CN114234808B (en) * 2021-12-17 2022-10-28 湖南大学 Size measuring method and device for deformation area of rotary magnetic pulse crimping part
CN114410886B (en) * 2021-12-30 2023-03-24 太原重工股份有限公司 Converter tilting mechanism state monitoring method and system
CN116704045B (en) * 2023-06-20 2024-01-26 北京控制工程研究所 Multi-camera system calibration method for monitoring starry sky background simulation system
CN117011365B (en) * 2023-10-07 2024-03-15 宁德时代新能源科技股份有限公司 Dimension measuring method, dimension measuring device, computer equipment and storage medium
CN117146714A (en) * 2023-11-01 2023-12-01 深圳市玻尔智造科技有限公司 Automatic measuring system for width of slitting machine

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101509763A (en) * 2009-03-20 2009-08-19 天津工业大学 Single order high precision large-sized object three-dimensional digitized measurement system and measurement method thereof
CN103279987B (en) * 2013-06-18 2016-05-18 厦门理工学院 Object quick three-dimensional modeling method based on Kinect
CN104537707B (en) * 2014-12-08 2018-05-04 中国人民解放军信息工程大学 Image space type stereoscopic vision moves real-time measurement system online
CN106251399B (en) * 2016-08-30 2019-04-16 广州市绯影信息科技有限公司 A kind of outdoor scene three-dimensional rebuilding method and implementing device based on lsd-slam
CN109211132A (en) * 2017-07-07 2019-01-15 北京林业大学 A kind of photogrammetric method for obtaining tall and big object deformation information of unmanned plane high-precision
US10417829B2 (en) * 2017-11-27 2019-09-17 Electronics And Telecommunications Research Institute Method and apparatus for providing realistic 2D/3D AR experience service based on video image
CN108317953A (en) * 2018-01-19 2018-07-24 东北电力大学 A kind of binocular vision target surface 3D detection methods and system based on unmanned plane
US20190287304A1 (en) * 2018-03-13 2019-09-19 The Boeing Company Safety Enhancement System for a Mobile Display System
CN109242898B (en) * 2018-08-30 2022-03-22 华强方特(深圳)电影有限公司 Three-dimensional modeling method and system based on image sequence
CN110049304A (en) * 2019-03-22 2019-07-23 嘉兴超维信息技术有限公司 A kind of method and device thereof of the instantaneous three-dimensional imaging of sparse camera array
CN110288699A (en) * 2019-06-26 2019-09-27 电子科技大学 A kind of three-dimensional rebuilding method based on structure light
CN111445529B (en) * 2020-03-16 2021-03-23 天目爱视(北京)科技有限公司 Calibration equipment and method based on multi-laser ranging
CN111462304B (en) * 2020-03-16 2021-06-15 天目爱视(北京)科技有限公司 3D acquisition and size measurement method for space field
CN113379822B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Method for acquiring 3D information of target object based on pose information of acquisition equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105474033A (en) * 2013-12-29 2016-04-06 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN105825518A (en) * 2016-03-31 2016-08-03 西安电子科技大学 Sequence image rapid three-dimensional reconstruction method based on mobile platform shooting
CN107767440A (en) * 2017-09-06 2018-03-06 北京建筑大学 Historical relic sequential images subtle three-dimensional method for reconstructing based on triangulation network interpolation and constraint
CN110675450A (en) * 2019-09-06 2020-01-10 武汉九州位讯科技有限公司 Method and system for generating orthoimage in real time based on SLAM technology
CN110738737A (en) * 2019-10-15 2020-01-31 北京市商汤科技开发有限公司 AR scene image processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional reconstruction of complex objects based on a non-metric camera; Zheng Shunyi; Geomatics and Information Science of Wuhan University; Vol. 33, No. 5; pp. 446-449 *

Also Published As

Publication number Publication date
WO2021185218A1 (en) 2021-09-23
CN113379822A (en) 2021-09-10
CN111462213B (en) 2021-07-13
CN111462213A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN113379822B (en) Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN111462304B (en) 3D acquisition and size measurement method for space field
CN113532329B (en) Calibration method with projected light spot as calibration point
CN113327291B (en) Calibration method for 3D modeling of remote target object based on continuous shooting
CN111060023B (en) High-precision 3D information acquisition equipment and method
CN111292364B (en) Method for rapidly matching images in three-dimensional model construction process
CN111238374B (en) Three-dimensional model construction and measurement method based on coordinate measurement
CN111445529B (en) Calibration equipment and method based on multi-laser ranging
CN111292239B (en) Three-dimensional model splicing equipment and method
CN111076674B (en) Closely target object 3D collection equipment
WO2021185215A1 (en) Multi-camera co-calibration method in 3d modeling
CN111060008B (en) 3D intelligent vision equipment
WO2022078418A1 (en) Intelligent three-dimensional information acquisition appratus capable of stably rotating
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
CN113538552B (en) 3D information synthetic image matching method based on image sorting
CN112254679B (en) Multi-position combined type 3D acquisition system and method
CN112254677B (en) Multi-position combined 3D acquisition system and method based on handheld device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant