WO2022078433A1 - Multi-location combined 3d image acquisition system and method - Google Patents

Multi-location combined 3d image acquisition system and method Download PDF

Info

Publication number
WO2022078433A1
WO2022078433A1 (PCT/CN2021/123762)
Authority
WO
WIPO (PCT)
Prior art keywords
acquisition
collection
target
acquisition device
type
Prior art date
Application number
PCT/CN2021/123762
Other languages
French (fr)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202011106003.5A external-priority patent/CN112254679B/en
Priority claimed from CN202011105994.5A external-priority patent/CN112254677B/en
Priority claimed from CN202011105292.7A external-priority patent/CN112254671B/en
Application filed by 左忠斌 filed Critical 左忠斌
Publication of WO2022078433A1 publication Critical patent/WO2022078433A1/en

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tesselation

Definitions

  • The invention relates to the technical field of topography measurement, in particular to the technical field of 3D topography measurement.
  • When performing 3D measurement, 3D information needs to be collected first.
  • Commonly used methods include machine vision as well as structured light, laser ranging, and lidar.
  • Structured light, laser ranging, and lidar all require an active light source to be emitted onto the target, which in some cases affects the target, and the light source is costly. Moreover, the structure of the light source is relatively delicate and easily damaged.
  • The machine vision approach collects pictures of the object from different angles and matches and stitches these pictures to form a 3D model; it is low-cost and easy to use.
  • When collecting pictures from different angles, multiple cameras can be set at different angles around the object to be measured, or pictures can be collected from different angles by rotating one or more cameras.
  • In either case, the acquisition positions of the camera need to be arranged around the target (referred to as the surround type), and this requires a large space in which to set up acquisition positions for the image acquisition device.
  • the present invention provides a multi-position combined 3D acquisition system and method that overcomes the above problems or at least partially solves the above problems.
  • Embodiments of the present invention provide a multi-point combined 3D acquisition system and method, including a 3D acquisition device;
  • the 3D acquisition device performs acquisition of the acquisition target at multiple points, and the acquisition range at each point at least overlaps with the acquisition ranges at the other points;
  • the 3D acquisition device includes an image acquisition device and a rotation device, wherein the acquisition direction of the image acquisition device is a direction facing away from the rotation center.
  • the multi-point acquisition is that a plurality of 3D acquisition devices are respectively set to acquire at a plurality of points.
  • the plurality of 3D acquisition devices include a first type of 3D acquisition device and a second type of 3D acquisition device.
  • the sum of the collection ranges of the first type of 3D collection devices can cover the target, and the sum of the collection ranges of the second type of 3D collection devices can cover a specific area of the target.
  • the multiple 3D acquisition devices include a first type of 3D acquisition device and a second type of 3D acquisition device, and the sum of the acquisition ranges of the first type of 3D acquisition devices is greater than the sum of the acquisition ranges of the second type of 3D acquisition devices.
  • the first type of 3D acquisition device and the second type of 3D acquisition device are used to jointly scan and acquire.
  • the specific area is a user-specified area.
  • the specific area is an area where the previous synthesis failed.
  • the specific area is an area where the contour relief (concavity and convexity) varies greatly.
  • the 3D acquisition device includes a handheld 3D acquisition device.
  • a single 3D acquisition device is sequentially set to acquire at multiple points to complete the acquisition of multiple 3D acquisition locations.
  • the collection range of the 3D collection device at one collection position overlaps with the collection range at another collection position, and the overlapped area is at least partially located on the target.
  • the multiple 3D collection positions include a first type of 3D collection position and a second type of 3D collection position; at the first type of 3D collection positions, the sum of the collection ranges can cover the target, and at the second type of 3D collection positions, the sum of the collection ranges can cover a specific area of the target.
  • the above-mentioned specific area is a user-specified area.
  • the above-mentioned specific area is the area where the previous synthesis failed.
  • the above-mentioned specific area is an area where the contour unevenness changes greatly.
  • the included angle α of the optical axes of the image acquisition device at two adjacent acquisition positions satisfies an empirical condition (given as a formula image in the original publication) relating the following quantities:
  • R is the distance from the rotation center to the surface of the target object;
  • T is the sum of the object distance and the image distance during acquisition;
  • d is the length or width of the photosensitive element of the image acquisition device;
  • F is the lens focal length of the image acquisition device;
  • u is an empirical coefficient.
  • Another aspect of the embodiments of the present invention further provides a 3D synthesis identification device and method, including the system described in any one of the preceding claims.
  • Another aspect of the embodiments of the present invention further provides an object manufacturing and display device and method, including the system described in any of the preceding claims.
  • For the first time, a self-rotating intelligent visual 3D acquisition device is used to collect 3D information of the internal space of a target, which is suitable for both more open spaces and smaller spaces.
  • FIG. 1 shows a schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention
  • FIG. 2 shows a schematic structural diagram of a handheld 3D information collection device provided by an embodiment of the present invention
  • FIG. 3 shows another schematic structural diagram of a handheld 3D information collection device provided by an embodiment of the present invention
  • FIG. 4 shows a schematic diagram of a multi-position combined 3D acquisition system provided by an embodiment of the present invention
  • FIG. 5 shows a schematic diagram of a multi-position combined handheld 3D acquisition system provided by an embodiment of the present invention
  • FIG. 6 shows a schematic diagram of the collection of a specific area by a multi-position combined 3D collection system provided by an embodiment of the present invention
  • FIG. 7 shows a schematic diagram of a multiple combined 3D acquisition system provided by an embodiment of the present invention.
  • FIG. 8 shows another schematic diagram of a multiple combined 3D acquisition system provided by an embodiment of the present invention.
  • To solve the above technical problems, an embodiment of the present invention provides a multi-position combined 3D acquisition system, including 3D information acquisition equipment which, as shown in FIG. 1, includes an image acquisition device 1, a rotation device 2, and a carrying device 3.
  • the image acquisition device 1 is connected with the rotating shaft of the rotating device 2 , and the rotating device 2 drives it to rotate.
  • the acquisition direction of the image acquisition device 1 is the direction away from the rotation center. That is, the acquisition direction is directed outward relative to the center of rotation.
  • the optical axis of the image acquisition device 1 may be parallel to the rotation plane, or may form a certain angle with the rotation plane, for example within the range of -90° to 90° relative to the rotation plane.
  • the acquisition area of the image acquisition device does not intersect the rotation axis or its extension line (i.e., the rotation center line): the optical collection ports (e.g., lenses) of the image acquisition device all face away from the rotation axis.
  • this method is therefore quite different from the general self-rotation method; in particular, targets whose surfaces are not perpendicular to the horizontal plane can also be collected.
  • the rotating shaft of the rotating device can also be connected to the image capturing device through a deceleration device, for example, through a gear set or the like.
  • When the image capturing device rotates 360° in the horizontal plane, it captures images of the target at specific positions (the specific shooting positions will be described in detail later). The shooting can be performed in synchronization with the rotation, or the rotation can stop at each shooting position and resume after the shot is taken, and so on.
  • The above-mentioned rotating device may be an electric motor, a stepping motor, a servo motor, a micro motor, or the like.
  • the rotating device (for example, various types of motors) can rotate at a specified speed under the control of the controller, and can rotate at a specified angle, so as to realize the optimization of the collection position.
  • the specific collection position will be described in detail below.
  • the rotating device in the existing equipment can also be used, and the image capturing device can be installed thereon.
  • the carrying device 3 is used to carry the weight of the entire equipment, and the rotating device 2 is connected with the carrying device 3 .
  • the carrying device may be a tripod, a base with a supporting device, or the like.
  • the rotating device is preferably located in the central part of the carrying device to ensure balance; however, in some special applications it can also be located at any position on the carrying device. Furthermore, the carrying device is not essential.
  • the rotating device can be installed directly in the target application, e.g. on the roof of a vehicle.
  • The embodiment also provides a handheld 3D information acquisition device, referred to as a 3D acquisition device; please refer to the handheld device shown in FIG. 2 and FIG. 3.
  • the image acquisition device 1 is connected to the rotation device 2, so as to stably rotate and scan under the drive of the rotation device 2, and realize 3D acquisition of surrounding objects (the specific acquisition process will be described in detail below).
  • the rotating device 2 is mounted on the carrying device 3, and the carrying device 3 is used to carry the entire equipment.
  • the carrier 3 can be a handle, so that the entire device can be used for hand-held acquisition.
  • the carrying device 3 can also be a base-type carrying device for installation on other apparatus, so that the entire intelligent 3D acquisition device can be mounted on other apparatus for combined use. For example, an intelligent 3D acquisition device can be installed on a vehicle and perform 3D acquisition as the vehicle travels.
  • the carrying device 3 is used to carry the weight of the entire equipment, and the rotating device 2 is connected with the carrying device 3 .
  • the carrying device may be a handle, a tripod, a base with a supporting device, or the like.
  • the rotating device is located in the center part of the carrier to ensure balance. However, in some special occasions, it can also be located at any position of the carrying device. Furthermore, the carrying device is not necessary.
  • the rotating device can also be installed directly in the application, eg on the roof of a vehicle.
  • The inner space of the carrying device is used to accommodate a battery, which supplies power to the 3D rotation acquisition and stabilization device.
  • Buttons are arranged on the casing of the carrying device to control the 3D rotation acquisition and stabilization device, including turning the stabilization function on/off and turning the 3D rotation capture function on/off.
  • The image acquisition device 1 is connected with the rotating shaft of the rotating device 2, and the rotating device 2 drives it to rotate.
  • the rotating shaft of the rotating device can also be connected with the image capturing device through a transmission device, for example, through a gear set or the like.
  • the rotating device 2 can be arranged inside the handle, and part or all of the transmission device is also arranged inside the handle, which can further reduce the volume of the device.
  • When the image capturing device rotates 360° in the horizontal plane, it captures images of the target at specific positions (the specific shooting positions will be described in detail later). The shooting can be performed in synchronization with the rotation, or the rotation can stop at each shooting position and resume after the shot is taken, and so on.
  • The above-mentioned rotating device may be an electric motor, a stepping motor, a servo motor, a micro motor, or the like.
  • the rotating device (for example, various types of motors) can rotate at a specified speed under the control of the controller, and can rotate at a specified angle, so as to realize the optimization of the collection position.
  • the specific collection position will be described in detail below.
  • the rotating device in the existing equipment can also be used, and the image capturing device can be installed thereon.
  • a distance measuring device is also included.
  • the distance measuring device is fixedly connected with the image acquisition device, and the pointing direction of the distance measuring device is the same as the optical axis direction of the image acquisition device.
  • the distance measuring device can also be fixedly connected to the rotating device, as long as it can rotate synchronously with the image capturing device.
  • an installation platform may be provided, the image acquisition device and the distance measuring device are both located on the platform, the platform is installed on the rotating shaft of the rotating device, and is driven and rotated by the rotating device.
  • the distance measuring device can be a laser distance meter, an ultrasonic distance meter, an electromagnetic wave distance meter, or the like, or a traditional mechanical distance-measuring tool.
  • Alternatively, when the 3D acquisition device is located at a specific, calibrated location, its distance from the target has already been calibrated and no additional measurement is required.
  • The device can also include a light source, and the light source can be arranged on the periphery of the image acquisition device, on the rotating device, or on the installation platform.
  • the light source can also be set independently, for example, an independent light source is used to illuminate the target. Even when lighting conditions are good, no light source is used.
  • the light source can be an LED light source or an intelligent light source, that is, the parameters of the light source are automatically adjusted according to the conditions of the target object and the ambient light.
  • Preferably, the light sources are distributed around the lens of the image capture device, for example as ring-shaped LED lights around the lens, because in some applications it is necessary to control the intensity of the light source.
  • A light-diffusing device, such as a diffuser housing, can be arranged in the light path of the light source.
  • Alternatively, an LED surface light source can be used directly; not only is the light softer, it is also more uniform.
  • an OLED light source can be used, which has a smaller volume, softer light, and has flexible properties, which can be attached to a curved surface.
  • Marking points can be set at the position of the target, and the coordinates of these marking points are known. By collecting the marking points and combining their coordinates, the absolute size of the 3D composite model is obtained. The marking points can be pre-set points or laser light spots.
  • The method for determining the coordinates of these points may include: (1) using laser ranging: a calibration device emits laser light toward the target to form a plurality of calibration spots, and the spot coordinates are obtained through the known positional relationships of the laser ranging units in the calibration device. The calibration device emits laser light toward the target so that the beams emitted by its laser ranging units fall on the target and form light spots.
  • The laser beams emitted by the laser ranging units are parallel to each other, and the positional relationships between the units are known, so the two-dimensional coordinates, on the emission plane, of the multiple light spots formed on the target can be obtained.
  • The distance between each laser ranging unit and its corresponding light spot can be measured, i.e., depth information for the multiple light spots formed on the target can be obtained; in other words, the depth coordinate perpendicular to the emission plane can be obtained.
  • From these, the three-dimensional coordinates of each spot can be obtained.
  • (2) Using a combination of distance measurement and angle measurement: measure the distances to multiple markers and the angles between them, and calculate their respective coordinates.
  • (3) Using other coordinate measurement tools, such as RTK, a global coordinate positioning system, a star-sensing positioning system, or position and pose sensors.
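As a non-authoritative illustration of option (1) above, the sketch below computes the 3D coordinates of the calibration spots from the known 2D layout of the laser ranging units on the emission plane and the depths they report; with parallel beams, each reported distance is the depth perpendicular to the emission plane. The function and variable names are assumptions made for illustration and are not prescribed by the patent.

```python
# Minimal sketch (assumed geometry): parallel laser beams leave known (x, y)
# positions on the emission plane; each ranging unit reports the distance to
# the spot it creates on the target. With parallel beams, that distance is the
# depth z perpendicular to the emission plane, so each spot is (x, y, z).
from typing import List, Tuple

def spot_coordinates(
    unit_positions_xy: List[Tuple[float, float]],  # known unit layout (metres)
    measured_depths: List[float],                   # one range reading per unit
) -> List[Tuple[float, float, float]]:
    """Return 3D coordinates of the light spots in the emitter's frame."""
    if len(unit_positions_xy) != len(measured_depths):
        raise ValueError("one depth reading is required per ranging unit")
    return [(x, y, z) for (x, y), z in zip(unit_positions_xy, measured_depths)]

# Example: four ranging units arranged in a 0.2 m square, each reporting a depth.
spots = spot_coordinates(
    [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (0.2, 0.2)],
    [1.52, 1.50, 1.55, 1.49],
)
print(spots)  # [(0.0, 0.0, 1.52), ...] - known-coordinate marker points
```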
  • the acquisition system includes a plurality of the above-mentioned 3D information acquisition devices a, b, c..., which are located in different spatial positions.
  • the collection range of the collection device a includes the area A
  • the collection range of the collection device b includes the area B
  • the collection range of the collection device c includes the area C... and so on.
  • Their collection areas at least satisfy the condition that the pairwise intersections are non-empty; in particular, each non-empty intersection should lie on the target. That is, the collection range of each collection device overlaps with the collection ranges of at least two other collection devices, and in particular the part of each device's collection range that falls on the target overlaps with the corresponding parts of at least two other devices' collection ranges.
  • For a specific area of the target, collection devices at multiple locations scan and collect that area, so that information about the area is obtained from different angles.
  • For example, the intersection of area A and area B includes the specific area; or the common intersection of areas A, B and C includes the specific area; or the intersection of areas A and B and the intersection of areas C and D each include the specific area, and so on. That is to say, the specific area is scanned repeatedly (it may also be called a repeated scanning area): the specific area is scanned and collected by multiple collection devices.
  • The above situations include: the intersection of the collection areas of two or more collection devices contains the specific area; or the intersection of the collection areas of two or more collection devices, together with the intersection of the collection areas of another two or more collection devices, contains the specific area.
  • The above-mentioned specific areas can be obtained by analyzing the previous 3D synthesis results, for example areas where the previous 3D synthesis failed or had a high failure rate; they can also be delineated in advance based on the operator's experience, for example areas where the surface relief fluctuates greatly or where the degree of relief is large, that is, areas where the variation of the surface concavity and convexity, or the degree of concavity and convexity, exceeds a preset threshold.
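To make the overlap and repeated-scan conditions above concrete, here is a minimal sketch assuming the acquisition ranges on the target can be approximated as 2D polygons (for example, footprints on a floor plan). The polygon approximation and the use of the shapely library are illustrative choices, not something the patent specifies.

```python
# Sketch: verify the overlap / repeated-scan conditions on 2D footprints.
from shapely.geometry import Polygon

def overlaps_on_target(ranges, target):
    """Each acquisition range must overlap at least one other range on the target."""
    on_target = [r.intersection(target) for r in ranges]
    return all(
        any(a.intersection(b).area > 0 for j, b in enumerate(on_target) if j != i)
        for i, a in enumerate(on_target)
    )

def is_repeated_scan_area(ranges, specific_area):
    """The specific area must lie in the intersection of at least two ranges."""
    return any(
        specific_area.within(ranges[i].intersection(ranges[j]))
        for i in range(len(ranges))
        for j in range(i + 1, len(ranges))
    )

target = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
ranges = [
    Polygon([(-1, -1), (6, -1), (6, 11), (-1, 11)]),   # device a
    Polygon([(4, -1), (11, -1), (11, 11), (4, 11)]),   # device b
]
specific = Polygon([(4.5, 4), (5.5, 4), (5.5, 6), (4.5, 6)])  # deep-relief region
print(overlaps_on_target(ranges, target), is_repeated_scan_area(ranges, specific))
```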
  • Alternatively, multiple handheld 3D information collection devices are not required; only one handheld 3D information collection device is used.
  • The user holds the device and performs rotational scans at several different positions to obtain pictures of the target object. It should be ensured that the combined scanning ranges of the handheld 3D information acquisition device over all of these positions can cover the entire target area.
  • The determination can be based on pre-existing data, on visual inspection results, or on the distribution of areas that failed to synthesize in the previous acquisition.
  • one or more 3D information acquisition devices of the second type are arranged for each specific area, so that their acquisition range can cover the specific area.
  • After the first and second types of 3D information acquisition devices have all been arranged, each 3D information acquisition device is controlled to rotate and scan the target, with the rotation satisfying the optimization conditions for the image acquisition device of the 3D information acquisition device; that is, the controller can control the rotation of the image acquisition device of each 3D information acquisition device according to the above conditions.
  • Alternatively, one 3D acquisition device, or a limited number of 3D acquisition devices, can be used to perform acquisition at the above positions sequentially and in a time-shared manner. That is to say, acquisition is not performed at all positions at the same time; instead, images are acquired at different locations at different times and then gathered together for 3D synthesis.
  • the different locations described here are the same as the locations described above for the different acquisition devices.
  • In the handheld case, the collection positions are the positions at which the user, holding the device, performs collection in sequence. That is, the user holding the device moves in turn to the collection positions of the above-mentioned first and second types of 3D information collection equipment; each time the device is at a collection position (a first-type or second-type collection position), the handheld collection device is controlled to rotate and collect. Finally, the images collected each time are transmitted to the processor for 3D modeling and synthesis.
  • Likewise, one 3D acquisition device, or a limited number of 3D acquisition devices, can be used to perform acquisition at the above positions sequentially and in a time-shared manner. That is to say, acquisition is not performed at all positions at the same time; instead, images are acquired at different locations at different times and then gathered together for 3D synthesis.
  • the different locations described here are the same as the locations described above for the different acquisition devices.
  • the collection system includes one or more of the above-mentioned 3D information collection devices, and the collection devices are located at positions a, b, c... in sequence during the collection process.
  • the collection range when the collection device is at position a includes area A
  • the collection range when the collection device is at position b includes area B
  • the collection range when the collection device is at position c includes area C... and so on.
  • The collection areas at these positions at least satisfy the condition that the pairwise intersections are non-empty; in particular, each non-empty intersection should lie on the target. That is, the collection range at each collection position overlaps with the collection ranges at at least two other collection positions, and in particular the part of each collection range that falls on the target overlaps with the corresponding parts of at least two other collection ranges on the target.
  • For a specific area of the target, the device scans and collects that area from multiple positions, so that information about the area is obtained from different angles.
  • For example, the intersection of area A and area B includes the specific area; or the common intersection of areas A, B and C includes the specific area; or the intersection of areas A and B and the intersection of areas C and D each include the specific area, and so on. That is to say, the specific area is scanned repeatedly (it may also be called a repeated scanning area): the specific area is scanned and collected from multiple collection positions.
  • The above situations include: the intersection of the collection areas at two or more collection positions contains the specific area; or the intersection of the collection areas at two or more collection positions, together with the intersection of the collection areas at another two or more collection positions, contains the specific area.
  • The above-mentioned specific areas can be obtained by analyzing the previous 3D synthesis results, for example areas where the previous 3D synthesis failed or had a high failure rate; they can also be delineated in advance based on the operator's experience, for example areas where the surface relief fluctuates greatly or where the degree of relief is large.
  • According to the above distances and the collection ranges of the 3D information collection devices, different collection positions a, b, c... are selected so that the sum of the collection areas A, B, C... can cover the target.
  • In general, it is required not only that the sum of the collection ranges covers the target, but also that the collection ranges at adjacent positions overlap while their sum still covers the target; for example, the overlapping range accounts for more than 10% of the acquisition range.
  • The selected 3D information collection equipment is placed at the collection positions a, b, c... in sequence, so that its collection areas at positions a, b, c... can cover the target.
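As a rough, one-dimensional illustration of this coverage-with-overlap requirement (adjacent ranges overlapping by, say, more than 10% while their sum still covers the target), the sketch below computes how many acquisition positions are needed for a given target extent and per-position coverage. The 1-D simplification and the variable names are assumptions for illustration only.

```python
import math

def positions_needed(target_extent: float, coverage_per_position: float,
                     overlap_fraction: float = 0.10) -> int:
    """1-D sketch: number of acquisition positions so that adjacent coverages
    overlap by `overlap_fraction` and their union still spans the target."""
    if coverage_per_position >= target_extent:
        return 1
    step = coverage_per_position * (1.0 - overlap_fraction)  # advance per position
    return 1 + math.ceil((target_extent - coverage_per_position) / step)

# Example: a 30 m corridor wall, 8 m covered per rotation position, 10% overlap.
print(positions_needed(30.0, 8.0, 0.10))  # -> 5
```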
  • the controller can control the rotation of the image acquisition device of the 3D information acquisition device each time according to the above conditions.
  • The determination can be based on pre-existing data, on visual inspection results, or on the distribution of areas that failed to synthesize in the previous acquisition.
  • According to the number and positions of the specific areas of the target and the number of second-type 3D information collection positions required for each specific area, the number of second-type 3D information collections is determined, and a position is arranged for each collection by the 3D information collection device.
  • one or more second-type 3D information collection locations are inserted between the above-mentioned first-type 3D information collection locations, so as to form repeated collection of areas where the collection range of the first-type 3D information collection locations is weak.
  • the area is repeatedly collected to form a repeated scanning area.
  • the second type of 3D information collection position can also be set at other positions (for example, closer to or farther from the target) to ensure that enough different angle pictures can be obtained in the repeated scanning area.
  • the controller can control the rotation of the image acquisition device of the 3D information acquisition device each time according to the above conditions.
  • the above acquisition can be completed by using a handheld device, so that the user can walk to different positions with the handheld device to perform multiple acquisitions, thereby constructing a 3D model of a larger space.
  • For example, for a corridor it is also possible to set up multiple devices, but the process is more complicated; whereas if the user holds the device and moves to different positions along the corridor, the different areas of the corridor can be collected separately, and the 3D model of the whole corridor can finally be synthesized.
  • the method of optimizing the camera acquisition position can also be adopted.
  • the prior art for such a device does not mention how to better optimize the camera position.
  • Although some optimization methods exist, they were obtained under different empirical conditions in different experiments.
  • Some existing position optimization methods need to obtain the size of the target object, which is feasible in surround-type 3D acquisition, where it can be measured in advance.
  • The present invention conducted a large number of experiments and summarizes the following empirical condition that the camera acquisition interval preferably satisfies during acquisition.
  • The included angle α of the optical axes of the image acquisition device at two adjacent positions satisfies the condition given by the formula (reproduced as an image in the original publication), in which:
  • R is the distance from the center of rotation to the surface of the target;
  • T is the sum of the object distance and the image distance during acquisition, i.e., the distance between the photosensitive element of the image acquisition device and the target object along the optical axis;
  • d is the length or width of the photosensitive element (e.g., CCD) of the image acquisition device;
  • F is the focal length of the lens of the image acquisition device;
  • u is an empirical coefficient.
  • A distance measuring device (such as a laser distance meter) is configured on the acquisition device, with its optical axis adjusted to be parallel to the optical axis of the image acquisition device, so that it can measure the distance from the acquisition device to the surface of the target. Using the measured distance and the known positional relationships between the distance measuring device and the other components of the acquisition device, R and T can be obtained.
  • The distance from the photosensitive element to the surface of the target object along the optical axis is taken as T.
  • Averaging over multiple measurements, or other methods, can also be used; the principle is that the value of T should not deviate from the image-to-object distance at the time of acquisition.
  • the distance from the center of rotation to the surface of the target object along the optical axis is taken as R.
  • multiple averaging methods or other methods can also be used, the principle of which is that the value of R should not deviate from the radius of rotation at the time of acquisition.
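A minimal sketch of how R and T might be derived from a single rangefinder reading is given below, assuming the rangefinder's beam is parallel to the camera's optical axis and that the axial offsets from the rangefinder to the photosensitive element and to the rotation center are known from the mechanical design. The offset names and sign conventions are assumptions for illustration; the patent only requires that the positional relationships be known.

```python
def estimate_T_and_R(measured_range: float,
                     rangefinder_to_sensor: float,
                     rangefinder_to_rotation_center: float) -> tuple:
    """Sketch: derive T and R (metres) from one range reading taken along the
    optical axis.

    measured_range                 -- rangefinder to target surface
    rangefinder_to_sensor          -- axial offset, rangefinder to photosensitive
                                      element (positive if the sensor sits behind
                                      the rangefinder, farther from the target)
    rangefinder_to_rotation_center -- axial offset, rangefinder to rotation center
                                      (positive if the center sits behind it)
    """
    T = measured_range + rangefinder_to_sensor            # sensor to target surface
    R = measured_range + rangefinder_to_rotation_center   # rotation center to surface
    return T, R

# Example: reading 2.40 m; sensor 0.05 m behind the rangefinder; rotation
# center 0.12 m behind it.
T, R = estimate_T_and_R(2.40, 0.05, 0.12)
print(T, R)  # 2.45 2.52
```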
  • In the prior art, the size of the object is used to estimate the camera position, which is inconvenient because the size changes with the measured object: for example, after collecting the 3D information of a large object, the size must be re-measured and the calculation redone before collecting a small object. Such inconvenient measurements and repeated re-measurements introduce measurement errors, which lead to incorrect camera position estimates.
  • the empirical conditions that the camera position needs to meet are given, and there is no need to directly measure the size of the object.
  • d and F are fixed parameters of the camera. When purchasing a camera and lens, the manufacturer will give the corresponding parameters without measurement.
  • R and T are each just a straight-line distance, which can easily be measured by traditional measurement methods such as a ruler or a laser rangefinder.
  • In particular, this applies to the case where the acquisition direction of the image acquisition device (e.g., camera) faces away from the rotation center, i.e., where the orientation of the lens is substantially opposite to the rotation center.
  • u should be less than 0.498.
  • u ⁇ 0.411 is preferred, especially u ⁇ 0.359.
  • the multiple images acquired by the image acquisition device are sent to the processing unit, and the following algorithm is used to construct a 3D model.
  • the processing unit may be located in the acquisition device, or may be located remotely, such as a cloud platform, a server, a host computer, and the like.
  • the specific algorithm mainly includes the following steps:
  • Step 1: Perform image enhancement processing on all input photos.
  • the following filters are used to enhance the contrast of the original photo and suppress noise at the same time.
  • g(x, y) is the gray value of the original image at (x, y);
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter;
  • m_g is the local gray mean of the original image;
  • s_g is the local gray standard deviation of the original image;
  • m_f is the local gray target mean of the transformed image;
  • s_f is the local gray standard deviation target value of the transformed image;
  • c ∈ (0, 1) is the expansion constant of the image variance;
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • This filter can greatly enhance image texture patterns at different scales, so it can increase the number and accuracy of the feature points extracted from the images and improve the reliability and accuracy of the matching results in photo feature matching.
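The Wallis filter formula is not reproduced in the text extracted here, so the sketch below uses the commonly cited form of the Wallis transform, f = (g - m_g)·c·s_f / (c·s_g + (1 - c)·s_f) + b·m_f + (1 - b)·m_g, with local statistics computed over a sliding window. Treat it as an assumption-laden illustration rather than the patent's exact filter; the window size and parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(gray: np.ndarray, win: int = 31,
                  m_f: float = 127.0, s_f: float = 60.0,
                  c: float = 0.8, b: float = 0.9) -> np.ndarray:
    """Sketch of a Wallis filter (commonly cited form; parameters are examples).

    gray : 2-D array of gray values.
    m_f, s_f : target local mean / standard deviation of the output.
    c, b : variance expansion constant and brightness coefficient, both in (0, 1).
    """
    g = gray.astype(np.float64)
    m_g = uniform_filter(g, size=win)                      # local mean
    s_g = np.sqrt(np.maximum(uniform_filter(g * g, size=win) - m_g ** 2, 0.0))
    gain = c * s_f / (c * s_g + (1.0 - c) * s_f)           # contrast stretch
    f = (g - m_g) * gain + b * m_f + (1.0 - b) * m_g       # brightness blend
    return np.clip(f, 0, 255).astype(np.uint8)

# enhanced = wallis_filter(photo_gray)  # photo_gray: 2-D uint8 grayscale array
```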
  • Step 2: Extract feature points from all input photos, and perform feature point matching to obtain sparse feature points.
  • The SURF operator is used to extract and match the feature points of the photos.
  • The SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters in place of second-order Gaussian filtering and uses integral images to accelerate the convolutions in order to improve computation speed, and reduces the dimension of the local image feature descriptor in order to speed up matching.
  • The main steps are: (1) constructing a Hessian matrix to generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; (2) constructing the scale space and locating the feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space, and the key points are initially located; (3) determining the main direction of each feature point from the Haar wavelet responses in its circular neighborhood: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed, the sector is rotated in steps of 0.2 radians and the responses are recomputed, and the direction of the sector with the largest value is taken as the main direction of the feature point; (4) generating a 64-dimensional feature description vector: a 4×4 block of rectangular sub-regions is taken around the feature point, oriented along the feature point's main direction; each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions (relative to the main direction), namely the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute values of the horizontal responses, and the sum of the absolute values of the vertical responses, giving four values per sub-region.
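A hedged OpenCV sketch of the SURF extraction and matching step is shown below. SURF lives in the opencv-contrib package (cv2.xfeatures2d) and may be unavailable in builds compiled without the non-free modules; the Hessian threshold and the ratio-test value are illustrative, not values taken from the patent.

```python
import cv2

def surf_match(img1_path: str, img2_path: str, ratio: float = 0.7):
    """Detect SURF keypoints in two photos and return the good matches."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good

# kp1, kp2, matches = surf_match("view_a.jpg", "view_b.jpg")
```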
  • Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target and the position and attitude data of the cameras, obtaining the sparse target model 3D point cloud and the position-and-pose model coordinates.
  • Taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • Stereo pair selection: for each image in the input dataset, a reference image is selected to form a stereo pair for computing a depth map, so that a rough depth map is obtained for every image; since these depth maps may contain noise and errors, a consistency check against the neighboring depth maps is used to optimize the depth map of each image.
  • Finally, depth map fusion is performed to obtain the 3D point cloud of the entire scene.
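The patent does not name a particular dense-matching algorithm, so the following sketch illustrates the depth-map step with OpenCV's semi-global block matching on an already rectified stereo pair; the matcher parameters, focal length, and baseline are placeholder values. Disparity is converted to depth with the usual pinhole relation depth = focal_px * baseline / disparity.

```python
import cv2
import numpy as np

def depth_from_rectified_pair(left: np.ndarray, right: np.ndarray,
                              focal_px: float, baseline_m: float) -> np.ndarray:
    """Rough depth map (metres) from a rectified grayscale stereo pair."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,      # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,
        P2=32 * 5 * 5,
        uniquenessRatio=10,
    )
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point
    depth = np.full(disparity.shape, np.nan, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# depth = depth_from_rectified_pair(left_img, right_img, focal_px=1200.0, baseline_m=0.15)
```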
  • Step 4: Use the dense point cloud to reconstruct the target surface, including the processes of defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship, the vector field of the point cloud is obtained according to the integral relationship, and the approximation of the gradient field of the indicator function is computed to form the Poisson equation.
  • The approximate solution is obtained by matrix iteration, the isosurface is extracted by the marching cubes algorithm, and the model of the measured object is thereby reconstructed from the measured point cloud.
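One possible implementation of Step 4 is Open3D's Poisson surface reconstruction, sketched below; the octree depth, the normal-estimation radius, and the density threshold are illustrative choices, and the patent does not mandate this library.

```python
import numpy as np
import open3d as o3d

def poisson_mesh_from_points(points_xyz: np.ndarray, depth: int = 9):
    """Dense point cloud (N x 3 array) -> triangle mesh via Poisson reconstruction."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    # Poisson reconstruction needs oriented normals.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    # Trim low-density vertices (poorly supported parts of the implicit surface).
    keep = np.asarray(densities) > np.quantile(np.asarray(densities), 0.02)
    mesh.remove_vertices_by_mask(~keep)
    return mesh

# mesh = poisson_mesh_from_points(dense_points)  # dense_points from Step 3
# o3d.io.write_triangle_mesh("target_surface.ply", mesh)
```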
  • Step 5: Fully automatic texture mapping of the target model. After the surface model is constructed, texture mapping is performed.
  • The main process includes: (1) texture data acquisition, reconstructing the triangle mesh of the target surface from the images; (2) visibility analysis of the triangles of the reconstructed model, using the calibration information of the images to compute the visible image set and the optimal reference image of each triangular face; (3) clustering of the triangular faces to generate texture patches, in which the triangular faces are clustered into several reference-image texture patches; (4) automatic sorting of the texture patches to generate the texture image: the generated texture patches are sorted according to their size relationships, the texture image with the smallest enclosing area is generated, and the texture mapping coordinates of each triangular face are obtained.
  • For example, in order to construct a 3D model of the interior of an exhibition hall, a 3D acquisition device can be placed on the floor of the building, multiple images of the building interior can be acquired by rotation, the acquisition device can then be moved to multiple indoor positions for further rotational acquisitions, and the 3D model can be synthesized according to the synthesis algorithm, thereby building a 3D model of the building that is convenient for subsequent decoration and display.
  • The above-mentioned target, target object, and object all refer to an object whose three-dimensional information is to be acquired; it can be a single solid object or it can be composed of multiple objects.
  • The three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all parameters carrying three-dimensional features of the target.
  • The so-called three-dimensional in the present invention refers to having information in the three directions XYZ, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also fundamentally different from definitions that are called three-dimensional, panoramic, holographic or stereoscopic but actually include only two-dimensional information and, in particular, no depth information.
  • the acquisition area mentioned in the present invention refers to the range that the image acquisition device 1 (eg, camera) can capture.
  • The image acquisition device 1 in the present invention can be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook computer, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
  • modules in the device in the embodiment can be adaptively changed and arranged in one or more devices different from the embodiment.
  • The modules or units or components in the embodiments may be combined into one module or unit or component, and they may furthermore be divided into multiple sub-modules or sub-units or sub-assemblies. Unless at least some of such features and/or processes or units are mutually exclusive, all of the features disclosed in this specification (including the accompanying claims, abstract and drawings) and all of the processes or units of any method or device so disclosed may be combined in any combination.
  • Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • In practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the device according to the embodiments of the present invention.
  • the present invention can also be implemented as apparatus or apparatus programs (eg, computer programs and computer program products) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An embodiment of the present invention provides a multi-location combined 3D image acquisition system and method. The system comprises a 3D image acquisition apparatus. The 3D image acquisition apparatus performs acquisition at multiple locations with respect to an acquisition target. The acquisition range of each acquisition location at least respectively overlaps with the acquisition ranges of the other acquisition locations. The 3D image acquisition apparatus comprises an image acquisition device and a rotation device. The image acquisition device performs acquisition in a direction facing away from the center of rotation. The invention is the first to propose a configuration in which a single autorotating 3D image acquisition apparatus is arranged at multiple locations so as to form a complete multi-location combined 3D image acquisition system. The invention enables image acquisition of a complex surface of an inner space of a target or of a target over a large field of view.

Description

A multi-point combined 3D acquisition system and method

Technical Field

The invention relates to the technical field of topography measurement, and in particular to the technical field of 3D topography measurement.

Background Art

When performing 3D measurement, 3D information needs to be collected first. Commonly used methods include machine vision as well as structured light, laser ranging, and lidar.

Structured light, laser ranging, and lidar all require an active light source to be emitted onto the target, which in some cases affects the target, and the light source is costly. Moreover, the structure of the light source is relatively delicate and easily damaged.

The machine vision approach collects pictures of the object from different angles and matches and stitches these pictures to form a 3D model; it is low-cost and easy to use. When collecting pictures from different angles, multiple cameras can be set at different angles around the object to be measured, or pictures can be collected from different angles by rotating one or more cameras. In either case, however, the acquisition positions of the camera need to be arranged around the target (referred to as the surround type), and this requires a large space in which to set up acquisition positions for the image acquisition device.

Moreover, in addition to 3D construction of a single target, there is usually also a need to construct 3D models of the internal space of a target and 3D models of a large surrounding field of view, which is difficult for traditional surround-type 3D acquisition equipment to achieve. In particular, when the target surface within an internal space or a large field of view is relatively complex (the surface is uneven, with deep concavities and convexities), acquisition from a single position can hardly cover every part of the surface pits or protrusions, so that it is difficult to obtain a complete 3D model in the final synthesis; the synthesis may even fail, or the synthesis time may be prolonged.

In the prior art, it has also been proposed to use an empirical formula involving the rotation angle, the target size, and the object distance to define the camera position, so as to balance synthesis speed and effect. In practice, however, this has been found to be feasible only in surround-type 3D acquisition, where the target size can be measured in advance. In an open space it is difficult to measure the target in advance, for example when 3D information of streets, traffic intersections, building clusters, tunnels, traffic flows, and the like (without being limited to these) needs to be acquired, which makes this method ineffective. Even for fixed, smaller targets such as furniture or human body parts, whose dimensions can be measured in advance, the method is still severely limited: the target size is difficult to determine accurately, especially in applications where the target is replaced frequently; each measurement brings a large amount of extra work; and specialized equipment is required to measure irregular targets accurately. Measurement errors lead to errors in the camera position settings, which in turn affect the acquisition and synthesis speed and effect; the accuracy and speed still need further improvement.

Although the prior art includes methods for optimizing surround-type acquisition devices, when the acquisition direction of the camera of the 3D acquisition and synthesis device faces away from its rotation axis, the prior art offers no better optimization method.

Therefore, a device that can accurately, efficiently and conveniently collect complex 3D information of the surroundings or of an internal space is urgently needed.
Summary of the Invention

In view of the above problems, the present invention provides a multi-position combined 3D acquisition system and method that overcomes the above problems or at least partially solves them.

An embodiment of the present invention provides a multi-point combined 3D acquisition system and method, including a 3D acquisition device.

The 3D acquisition device performs acquisition of the acquisition target at multiple points, and the acquisition range at each point at least overlaps with the acquisition ranges at the other points.

The 3D acquisition device includes an image acquisition device and a rotation device, wherein the acquisition direction of the image acquisition device is a direction facing away from the rotation center.

Optionally, the multi-point acquisition is carried out by a plurality of 3D acquisition devices respectively set up to acquire at a plurality of points.

Optionally, the plurality of 3D acquisition devices include a first type of 3D acquisition device and a second type of 3D acquisition device.

Optionally, the sum of the acquisition ranges of the first type of 3D acquisition devices can cover the target, and the sum of the acquisition ranges of the second type of 3D acquisition devices can cover a specific area of the target.

Optionally, the plurality of 3D acquisition devices include a first type of 3D acquisition device and a second type of 3D acquisition device, and the sum of the acquisition ranges of the first type of 3D acquisition devices is greater than the sum of the acquisition ranges of the second type of 3D acquisition devices.

Optionally, for a specific area of the target, the first type of 3D acquisition device and the second type of 3D acquisition device jointly scan and acquire.

Optionally, the specific area is a user-specified area.

Optionally, the specific area is an area where the previous synthesis failed.

Optionally, the specific area is an area where the contour relief varies greatly.

Optionally, the 3D acquisition device includes a handheld 3D acquisition device.

Optionally, the multi-point acquisition is carried out by a single 3D acquisition device set up to acquire at multiple points in sequence, completing the acquisition at multiple 3D acquisition positions.

Optionally, during sequential acquisition, the acquisition range of the 3D acquisition device at one acquisition position overlaps with its acquisition range at another acquisition position, and the overlapping area is at least partially located on the target.

Optionally, the multiple 3D acquisition positions include a first type of 3D acquisition position and a second type of 3D acquisition position; at the first type of 3D acquisition positions the sum of the acquisition ranges can cover the target, and at the second type of 3D acquisition positions the sum of the acquisition ranges can cover a specific area of the target.

Optionally, the above-mentioned specific area is a user-specified area.

Optionally, the above-mentioned specific area is the area where the previous synthesis failed.

Optionally, the above-mentioned specific area is an area where the contour relief varies greatly.
In an optional embodiment, the included angle α of the optical axes of the image acquisition device at two adjacent acquisition positions satisfies the following condition:

[Formula: see image PCTCN2021123762-appb-000001 of the original publication]

where R is the distance from the rotation center to the surface of the target object, T is the sum of the object distance and the image distance during acquisition, d is the length or width of the photosensitive element of the image acquisition device, F is the lens focal length of the image acquisition device, and u is an empirical coefficient.

In optional embodiments, u<0.498, or u<0.41, or u<0.359, or u<0.281, or u<0.169, or u<0.041, or u<0.028.
Another aspect of the embodiments of the present invention further provides a 3D synthesis and recognition device and method, including the system described in any one of the preceding claims.

Another aspect of the embodiments of the present invention further provides an object manufacturing and display device and method, including the system described in any one of the preceding claims.

Invention Points and Technical Effects

1. For the first time, it is proposed to use a self-rotating intelligent visual 3D acquisition device to collect 3D information of the internal space of a target, which is suitable for both more open spaces and smaller spaces.

2. For the first time, it is proposed to optimize the camera acquisition positions by measuring the distance from the rotation center to the target and the distance from the image sensing element to the target, so as to balance the speed and effect of 3D construction.

3. For the first time, it is proposed to place a single self-rotating 3D acquisition device at multiple positions so as to jointly form a complete multi-position combined 3D acquisition system, enabling acquisition of internal spaces with complex surfaces or of large-scale targets.

4. For the first time, it is proposed to perform repeated multi-position scanning of areas with large concave-convex variation to guarantee the synthesis rate; that is, through the arrangement of two types of acquisition devices, targeted scanning and acquisition is performed for specific areas, so as to achieve accurate and efficient acquisition of complex objects.
Description of the drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are intended only to illustrate the preferred embodiments and are not to be considered limiting of the invention. Throughout the drawings, the same reference numerals denote the same components. In the drawings:
FIG. 1 is a schematic structural diagram of a 3D information acquisition device provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a handheld 3D information acquisition device provided by an embodiment of the present invention;
FIG. 3 is another schematic structural diagram of a handheld 3D information acquisition device provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a multi-position combined 3D acquisition system provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a multi-position combined handheld 3D acquisition system provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the acquisition of a specific region by a multi-position combined 3D acquisition system provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of a multi-pass combined 3D acquisition system provided by an embodiment of the present invention;
FIG. 8 is another schematic diagram of a multi-pass combined 3D acquisition system provided by an embodiment of the present invention.
The correspondence between the reference numerals in the drawings and the components is as follows:
1 image acquisition device;
2 rotation device;
3 carrying device.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that its scope can be fully conveyed to those skilled in the art.
Structure of the 3D information acquisition device
To solve the above technical problems, an embodiment of the present invention provides a multi-position combined 3D acquisition system including a 3D information acquisition device which, as shown in FIG. 1, comprises an image acquisition device 1, a rotation device 2 and a carrying device 3.
The image acquisition device 1 is connected to the rotating shaft of the rotation device 2 and is driven to rotate by the rotation device 2. The acquisition direction of the image acquisition device 1 faces away from the rotation center, i.e., it points outward relative to the rotation center. The optical axis of the image acquisition device 1 may be parallel to the rotation plane or may form a certain angle with it, for example anywhere within the range of -90° to 90° relative to the rotation plane. Usually the rotation axis or its extension line (the rotation center line) passes through the image acquisition device, i.e., the image acquisition device still rotates about its own axis. This is essentially different from the conventional acquisition mode in which an image acquisition device revolves around a target object (the orbiting mode). The optical acquisition port (e.g., the lens) of the image acquisition device faces away from the rotation axis, which means the acquisition area of the image acquisition device has no intersection with the rotation center line. At the same time, since the optical axis of the image acquisition device forms an angle with the horizontal plane, this mode also differs considerably from ordinary self-rotation; in particular, it can acquire target objects whose surfaces are not perpendicular to the horizontal plane.
Of course, the rotating shaft of the rotation device may also be connected to the image acquisition device through a reduction device, for example through a gear set. When the image acquisition device performs a 360° rotation in the horizontal plane, it captures images of the corresponding target object at specific positions (the specific shooting positions will be described in detail later). Shooting may be performed in synchronization with the rotation, or the rotation may stop at each shooting position for the image to be captured and then continue, and so on. The above rotation device may be a motor, an electric motor, a stepping motor, a servo motor, a micro motor, or the like. The rotation device (for example, any of these motors) can rotate at a prescribed speed, and by a prescribed angle, under the control of a controller, so that the acquisition positions can be optimized; the specific acquisition positions are described in detail below. Of course, the rotation device of existing equipment may also be used, with the image acquisition device mounted on it.
The carrying device 3 is used to carry the weight of the entire apparatus, and the rotation device 2 is connected to the carrying device 3. The carrying device may be a tripod, a base with a supporting device, or the like. Usually the rotation device is located at the center of the carrying device to ensure balance, but in some special occasions it may be located at any position of the carrying device. Moreover, the carrying device is not essential; the rotation device may be installed directly in the application equipment, for example on the roof of a vehicle.
Structure of the handheld 3D information acquisition device
An embodiment further provides a handheld 3D information acquisition device, referred to as a 3D acquisition device for short. Referring to FIG. 2, it comprises an image acquisition device 1, a rotation device 2 and a carrying device 3.
The image acquisition device 1 is connected to the rotation device 2 so that, driven by the rotation device 2, it performs a stable rotating scan and achieves 3D acquisition of surrounding target objects (the specific acquisition process is described in detail below). The rotation device 2 is mounted on the carrying device 3, which carries the entire apparatus. The carrying device 3 may be a handle, so that the whole device can be used for handheld acquisition. The carrying device 3 may also be a base-type carrying device for installation on other equipment, so that the entire intelligent 3D acquisition device can be mounted on and used together with that equipment; for example, the intelligent 3D acquisition device may be mounted on a vehicle and perform 3D acquisition as the vehicle travels.
The carrying device 3 is used to carry the weight of the entire apparatus, and the rotation device 2 is connected to the carrying device 3. The carrying device may be a handle, a tripod, a base with a supporting device, or the like. Usually the rotation device is located at the center of the carrying device to ensure balance, but in some special occasions it may be located at any position of the carrying device. Moreover, the carrying device is not essential; the rotation device may also be installed directly in the application equipment, for example on the roof of a vehicle. The interior space of the carrying device accommodates a battery that supplies power to the 3D rotating-acquisition stabilization device. For convenience of use, buttons are provided on the housing of the carrying device to control the 3D rotating-acquisition stabilization device, including switching the stabilization function on/off and switching the 3D rotating acquisition function on/off.
As shown in FIG. 3, the image acquisition device 1 is connected to the rotating shaft of the rotation device 2 and is driven to rotate by the rotation device. Of course, the rotating shaft of the rotation device may also be connected to the image acquisition device through a transmission device, for example through a gear set. In that case the rotation device 2 may be arranged inside the handle, with part or all of the transmission device also arranged inside the handle, which further reduces the size of the device.
When the image acquisition device performs a 360° rotation in the horizontal plane, it captures images of the corresponding target object at specific positions (the specific shooting positions will be described in detail later). Shooting may be performed in synchronization with the rotation, or the rotation may stop at each shooting position for the image to be captured and then continue, and so on. The rotation device may be a motor, an electric motor, a stepping motor, a servo motor, a micro motor, or the like. The rotation device (for example, any of these motors) can rotate at a prescribed speed, and by a prescribed angle, under the control of a controller, so that the acquisition positions can be optimized; the specific acquisition positions are described in detail below. Of course, the rotation device of existing equipment may also be used, with the image acquisition device mounted on it.
A distance measuring device is also included; it is fixedly connected to the image acquisition device, and its pointing direction is the same as the direction of the optical axis of the image acquisition device. Of course, the distance measuring device may instead be fixedly connected to the rotation device, as long as it can rotate synchronously with the image acquisition device. Preferably, a mounting platform may be provided, with both the image acquisition device and the distance measuring device located on the platform; the platform is installed on the rotating shaft of the rotation device and is driven to rotate by it. The distance measuring device may be a laser rangefinder, an ultrasonic rangefinder, an electromagnetic-wave rangefinder or the like, or a traditional mechanical gauge may be used. Of course, in some applications the 3D acquisition device is located at a specific position whose distance to the target object has already been calibrated, so no additional measurement is needed.
A light source may also be included; it may be arranged around the image acquisition device, on the rotation device, or on the mounting platform. The light source may also be provided separately, for example an independent light source used to illuminate the target object, and it may even be omitted when the lighting conditions are good. The light source may be an LED light source or an intelligent light source, i.e., one that automatically adjusts its parameters according to the target object and the ambient light. Usually the light sources are distributed around the lens of the image acquisition device, for example as a ring of LED lamps around the lens. Since in some applications the intensity of the light source needs to be controlled, a light-softening device, for example a soft-light housing, may be arranged in the light path of the light source. Alternatively, an LED surface light source may be used directly, which gives light that is not only softer but also more uniform. More preferably, an OLED light source may be used, which is smaller, gives softer light, and is flexible so that it can be attached to a curved surface.
To facilitate measurement of the actual size of the target object, a plurality of marker points with known coordinates may be placed at the position of the target object. By acquiring the marker points together with their coordinates, the absolute scale of the synthesized 3D model is obtained. The marker points may be points set in advance or laser light spots. The coordinates of these points may be determined as follows. ① Laser ranging: a calibration device emits laser beams toward the target object to form a plurality of calibration spot points, and the coordinates of the calibration points are obtained from the known positional relationship of the laser ranging units in the calibration device. The calibration device emits laser beams toward the target object so that the beams emitted by its laser ranging units fall on the target object and form light spots. Since the laser beams emitted by the laser ranging units are parallel to each other and the positional relationships between the units are known, the two-dimensional coordinates, in the emission plane, of the spots formed on the target object can be obtained. Measuring with the laser beams emitted by the laser ranging units gives the distance between each laser ranging unit and its corresponding spot, i.e., the depth information of the spots formed on the target object, which is the depth coordinate perpendicular to the emission plane. The three-dimensional coordinates of each spot can thus be obtained. ② Combined ranging and angle measurement: the distances to a plurality of marker points and the angles between them are measured, and the respective coordinates are calculated from these. ③ Other coordinate measuring tools: for example RTK, a global coordinate positioning system, a star-sensitive positioning system, or position and pose sensors.
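As an illustration of method ① above, the following is a minimal sketch, not part of the original disclosure, of how the three-dimensional coordinates of the calibration spots could be assembled from the known two-dimensional positions of the parallel laser ranging units in the emission plane and their measured distances, and of how the reconstructed marker positions could then be used to bring a synthesized model to absolute scale. All function names, variable names and numbers are hypothetical.

```python
import numpy as np

def spot_coordinates(unit_xy, measured_depths):
    """Combine the known 2D layout of the parallel laser ranging units in the
    emission plane (unit_xy, N x 2) with the distances they measure
    (measured_depths, length N) into 3D spot coordinates on the target."""
    unit_xy = np.asarray(unit_xy, dtype=float)
    depth = np.asarray(measured_depths, dtype=float).reshape(-1, 1)
    # The beams are parallel and perpendicular to the emission plane, so a spot
    # keeps the in-plane coordinates of its ranging unit, and the measured
    # distance becomes its depth coordinate.
    return np.hstack([unit_xy, depth])

def absolute_scale(model_markers, spot_coords):
    """Scale factor mapping the scale-free reconstructed marker positions onto
    the measured spot coordinates, estimated from average pairwise distances."""
    m = np.asarray(model_markers, dtype=float)
    s = np.asarray(spot_coords, dtype=float)
    dm = np.linalg.norm(m[:, None] - m[None, :], axis=-1)
    ds = np.linalg.norm(s[:, None] - s[None, :], axis=-1)
    off_diag = ~np.eye(len(m), dtype=bool)
    return ds[off_diag].mean() / dm[off_diag].mean()

# Example: three ranging units in an L-shaped layout (mm) and their readings.
spots = spot_coordinates([[0, 0], [50, 0], [0, 50]], [1210.0, 1195.0, 1202.0])
print(absolute_scale([[0.0, 0.0, 1.00], [0.042, 0.0, 0.99], [0.0, 0.041, 0.995]], spots))
```

Averaging over pairwise distances is one simple way to estimate a single scale factor; any consistent set of marker correspondences would serve the same purpose.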
Multi-position combined 3D acquisition system
As shown in FIGS. 4-5, the acquisition system comprises a plurality of the above 3D information acquisition devices a, b, c, ..., located at different spatial positions. The acquisition range of acquisition device a includes region A, the acquisition range of acquisition device b includes region B, the acquisition range of acquisition device c includes region C, and so on. Their acquisition regions at least satisfy the condition that pairwise intersections of acquisition regions are not empty; in particular, the non-empty intersection should be located on the target object. That is, each acquisition device overlaps at least with the acquisition ranges of two other acquisition devices, and in particular the acquisition range of each device on the target object overlaps at least with the acquisition ranges of two other devices on the target object.
Whether in an interior space or in a wide field of view, target objects may have regions with relatively complex surfaces, referred to as specific regions. These regions may contain inwardly recessed deep holes or pits, or relatively tall outwardly protruding bumps, or both, resulting in a high degree of surface unevenness. This presents a challenge to an acquisition device that acquires from one direction: because of the recesses and protrusions, no matter where the device is placed, its rotating scan can only acquire that specific region of the target object from a single direction, so a large amount of information about that region is lost.
Therefore, acquisition devices at multiple positions may all scan and acquire the specific region, so that information about the region is obtained from different angles. For example, the intersection of regions A and B includes the specific region; the common intersection of regions A, B and C includes the specific region; or the intersection of regions A and B and the intersection of regions C and D both include the specific region, and so on. In other words, the specific region is scanned repeatedly and may therefore be called a repeated scanning region, i.e., it is scanned and acquired by multiple acquisition devices. The above cases include: the intersection of the acquisition regions of two or more acquisition devices includes the specific region; or the intersection of the acquisition regions of two or more acquisition devices and the intersection of the acquisition regions of another two or more acquisition devices each include the specific region.
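The overlap requirements described above can be checked before acquisition. The sketch below is an illustrative assumption rather than part of the original disclosure: each device's coverage on the target object is modeled as a set of surface patch identifiers, the requirement that every device overlap with at least two others is verified, and the specific region is confirmed to be a repeated scanning region, i.e., every patch of it is seen by two or more devices.

```python
from itertools import combinations

def check_overlaps(coverage, specific_region):
    """coverage: dict mapping each device (or acquisition position) to the set of
    surface patch ids it sees on the target object.
    specific_region: set of patch ids that must form a repeated scanning region."""
    devices = list(coverage)
    # each device should overlap, on the target, with at least two other devices
    devices_ok = all(
        sum(bool(coverage[a] & coverage[b]) for b in devices if b != a) >= 2
        for a in devices)
    # every patch of the specific region must be seen by at least two devices,
    # i.e. lie in the intersection of at least one pair of coverages
    seen_twice = {p for a, b in combinations(devices, 2)
                  for p in coverage[a] & coverage[b]}
    region_ok = specific_region <= seen_twice
    return devices_ok, region_ok

coverage = {"a": {1, 2, 3, 4}, "b": {3, 4, 5, 6}, "c": {5, 6, 7, 1}}
print(check_overlaps(coverage, specific_region={3, 4, 5}))  # (True, True)
```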
The above specific region may be obtained by analyzing the previous 3D synthesis, for example a region where the previous 3D synthesis failed or had a high failure rate; it may also be delineated in advance according to the operator's experience, for example a region with large concave-convex variation or a high degree of unevenness, i.e., a region with concave-convex variation or a region whose degree of concave-convex variation exceeds a preset threshold.
In another embodiment, multiple handheld 3D information acquisition devices are not required; only one handheld 3D information acquisition device is used. The user holds the device and performs several rotating scans at different positions to obtain images of the target object. In this case it should be ensured that the scanning ranges of the handheld 3D information acquisition device at all the positions, taken together, cover the entire target region.
3D information acquisition process
1. Select the number of 3D information acquisition devices of the first type according to the size and position of the target object, and arrange a position for each device.
(1) According to the acquisition requirements for the target object, set the positions where the 3D information acquisition devices can be placed, and determine the distance between the 3D information acquisition devices and the target object.
(2) According to the size of the target object, the above distance, and the acquisition ranges A, B, C, ... of the 3D information acquisition devices a, b, c, ..., select the number of 3D information acquisition devices so that the sum of their acquisition ranges can cover the target object. Usually, however, it is required not only that the sum of the acquisition ranges of the devices covers the size of the target object, but also that, with the acquisition ranges of adjacent devices overlapping, the sum of their acquisition ranges still covers the target object; for example, the overlapping range accounts for more than 10% of the acquisition range (a sketch of this device-count estimate is given after item (3) below).
(3) Arrange the selected 3D information acquisition devices a, b, c, ... relatively evenly at the above distance from the target object, so that the acquisition regions of the devices a, b, c, ... can cover the target object.
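The device-count estimate referred to in item (2) can be sketched as follows, assuming purely for illustration that the target extent and the per-device coverage are expressed as lengths at the chosen working distance and that adjacent coverages overlap by at least 10% of the acquisition range; the function and parameter names are hypothetical.

```python
import math

def devices_needed(target_extent, coverage_per_device, overlap_ratio=0.10):
    """Number of first-type acquisition devices whose coverages, overlapping by
    overlap_ratio of the acquisition range between neighbours, still span the target."""
    # each additional device contributes only the non-overlapping part of its range
    effective = coverage_per_device * (1.0 - overlap_ratio)
    if coverage_per_device >= target_extent:
        return 1
    return 1 + math.ceil((target_extent - coverage_per_device) / effective)

# e.g. a 12 m wall section, 2.5 m of wall covered per position, 10% overlap
print(devices_needed(12.0, 2.5))  # 6 devices
```

The same calculation applies to the acquisition positions of a single device used sequentially, as described later.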
2. Set the number of 3D information acquisition devices of the second type according to the size, number and position of the specific regions of the target object, and arrange a position for each device.
(1) Determine the number and positions of the specific regions of the target object. The determination may be based on prior data, on visual inspection, or on the distribution of regions that were not synthesized in the previous acquisition.
(2) According to the size of each specific region of the target object, arrange one or more second-type 3D information acquisition devices for each specific region so that their acquisition ranges can cover that region.
(3) According to the number and positions of the specific regions of the target object and the number of second-type 3D information acquisition devices required for each specific region, determine the number of second-type devices and arrange a position for each of them. As shown in FIG. 6, one or more second-type 3D information acquisition devices are usually inserted between the above first-type devices, so that regions where the acquisition coverage of the first-type devices is weak are acquired repeatedly, i.e., the specific regions are acquired repeatedly and form repeated scanning regions. The second-type devices may also be placed at other positions (for example closer to or farther from the target object), as long as images of the repeated scanning region are obtained from sufficiently different angles.
3. After the first-type and second-type 3D information acquisition devices have all been arranged, each 3D information acquisition device is controlled to rotate and scan the target object, the rotation satisfying the optimization condition for the image acquisition device of the 3D information acquisition device. That is, a controller may control the rotation of the image acquisition device of each 3D information acquisition device according to the above condition.
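As an illustration of step 3, the sketch below, which is an assumption rather than part of the original disclosure, derives the stop angles of one full rotation from the adjacent-position angle α chosen according to the empirical condition given later, and drives a rotate-stop-capture cycle; rotate_to and capture stand for hypothetical controller and camera interfaces.

```python
import math

def rotation_schedule(alpha_deg):
    """Evenly spaced optical-axis directions for one 360-degree rotating scan,
    with adjacent directions no more than alpha_deg apart."""
    n_positions = math.ceil(360.0 / alpha_deg)
    step = 360.0 / n_positions
    return [i * step for i in range(n_positions)]

def run_scan(alpha_deg, rotate_to, capture):
    """Drive a rotate-stop-capture cycle; rotate_to and capture are callbacks to
    the motor controller and the camera (hypothetical interfaces)."""
    images = []
    for angle in rotation_schedule(alpha_deg):
        rotate_to(angle)          # controller turns the rotation device to 'angle'
        images.append(capture())  # camera fires once the device has stopped
    return images

# Example with stub callbacks: alpha of 24 degrees from the empirical condition
frames = run_scan(24.0, rotate_to=lambda a: None, capture=lambda: "image")
print(len(frames))  # 15 acquisition positions per revolution
```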
4. The images obtained by the scanning of the multiple 3D information acquisition devices are sent to a processor, which uses them to synthesize and model the 3D model of the target object. Likewise, the images may be sent via a communication device to a remote platform, a cloud platform, a server, a host computer and/or a mobile terminal, where a 3D model synthesis method is used to perform 3D synthesis of the target object.
In another embodiment, in addition to the combined acquisition with multiple 3D acquisition devices described above, it can be understood that one 3D acquisition device, or a limited number of 3D acquisition devices, may be used to acquire at the above positions in turn, in a time-shared manner. That is, acquisition is not performed simultaneously but at different positions at different times, and the images acquired at the different times are collected for 3D synthesis. The different positions referred to here are the same as the positions arranged above for the different acquisition devices.
In another embodiment, it is not necessary to set up multiple acquisition devices; instead, the multiple acquisition devices in the above process are replaced by a single device, and the acquisition positions are the positions at which the user, holding the device, acquires in turn. That is, the user holds the device and successively occupies the acquisition positions of the above first-type and second-type 3D information acquisition devices. Each time the device is at an acquisition position (a first-type or second-type acquisition position), the handheld acquisition device is controlled to rotate and acquire. Finally, the images acquired each time are transmitted to the processor for 3D modeling and synthesis.
Multi-pass combined 3D acquisition system
As shown in FIGS. 7-8, the acquisition system comprises one or more of the above 3D information acquisition devices, and during the acquisition process the acquisition device is located at positions a, b, c, ... in turn. The acquisition range of the device at position a includes region A, the acquisition range at position b includes region B, the acquisition range at position c includes region C, and so on. The acquisition regions at least satisfy the condition that pairwise intersections of acquisition regions are not empty; in particular, the non-empty intersection should be located on the target object. That is, the acquisition range at each position overlaps at least with the acquisition ranges at two other positions, and in particular the acquisition range on the target object at each position overlaps at least with the acquisition ranges on the target object at two other positions.
Whether in an interior space or in a wide field of view, target objects may have regions with relatively complex surfaces, referred to as specific regions. These regions may contain inwardly recessed deep holes or pits, or relatively tall outwardly protruding bumps, or both, resulting in a high degree of surface unevenness. This presents a challenge to an acquisition device that acquires from one direction: because of the recesses and protrusions, no matter where the device is placed, its rotating scan can only acquire that specific region of the target object from a single direction, so a large amount of information about that region is lost.
Therefore, the specific region may be scanned and acquired from multiple acquisition positions, so that information about the region is obtained from different angles. For example, the intersection of regions A and B includes the specific region; the common intersection of regions A, B and C includes the specific region; or the intersection of regions A and B and the intersection of regions C and D both include the specific region, and so on. In other words, the specific region is scanned repeatedly and may therefore be called a repeated scanning region, i.e., it is scanned and acquired from multiple acquisition positions. The above cases include: the intersection of the acquisition regions of two or more acquisitions includes the specific region; or the intersection of the acquisition regions of two or more acquisitions and the intersection of the acquisition regions of another two or more acquisitions each include the specific region.
The above specific region may be obtained by analyzing the previous 3D synthesis, for example a region where the previous 3D synthesis failed or had a high failure rate; it may also be delineated in advance according to the operator's experience, for example a region with large concave-convex variation or a high degree of unevenness.
3D information acquisition process
1. Select the first-type 3D information acquisition positions and their number according to the size and position of the target object.
(1) According to the acquisition requirements for the target object, set the positions where the 3D information acquisition device can be placed, and determine the distance between the 3D information acquisition device and the target object.
(2) According to the size of the target object, the above distance, and the acquisition range of the 3D information acquisition device, select different acquisition positions a, b, c, ... for the device, so that the sum of the acquisition ranges A, B, C, ... of the device at these positions can cover the target object. Usually, however, it is required not only that the sum of the acquisition ranges covers the size of the target object, but also that, with the acquisition ranges at adjacent positions overlapping, the sum of the acquisition ranges still covers the target object; for example, the overlapping range accounts for more than 10% of the acquisition range.
(3) Place the selected 3D information acquisition device at the acquisition positions a, b, c, ... in turn, so that the acquisition regions of the device at positions a, b, c, ... can cover the target object.
(4) According to the above principles, place the 3D information acquisition device at each first-type acquisition position in turn and rotate it to scan the target object for acquisition. The rotation satisfies the optimization condition for the image acquisition device of the 3D information acquisition device; that is, a controller may control the rotation of the image acquisition device of the 3D information acquisition device at each acquisition according to the above condition.
2. Set the second-type 3D information acquisition positions and their number according to the size, number and position of the specific regions of the target object. As shown in FIG. 4:
(1) Determine the number and positions of the specific regions of the target object. The determination may be based on prior data, on visual inspection, or on the distribution of regions that were not synthesized in the previous acquisition.
(2) According to the size of each specific region of the target object, arrange one or more second-type 3D information acquisition positions for each specific region and place the acquisition device at them in turn for further rotating acquisition, so that their acquisition ranges can cover that specific region.
(3) According to the number and positions of the specific regions of the target object and the number of second-type 3D information acquisition positions required for each specific region, determine the number of second-type 3D information acquisitions and arrange a position for the device for each acquisition. Usually, one or more second-type 3D information acquisition positions are inserted between the above first-type acquisition positions, so that regions where the acquisition coverage of the first-type positions is weak are acquired repeatedly, i.e., the specific regions are acquired repeatedly and form repeated scanning regions. The second-type acquisition positions may also be set at other positions (for example closer to or farther from the target object), as long as images of the repeated scanning region are obtained from sufficiently different angles.
(4) According to the above principles, place the 3D information acquisition device at each second-type acquisition position in turn and rotate it to scan the target object for acquisition. The rotation satisfies the optimization condition for the image acquisition device of the 3D information acquisition device; that is, a controller may control the rotation of the image acquisition device of the 3D information acquisition device at each acquisition according to the above condition.
3. The images obtained by the scanning acquisitions are sent to a processor, which uses them to synthesize and model the 3D model of the target object. Likewise, the images may be sent via a communication device to a remote platform, a cloud platform, a server, a host computer and/or a mobile terminal, where a 3D model synthesis method is used to perform 3D synthesis of the target object.
The above acquisition may be completed with a handheld device, so that the user can walk with the device to different positions and acquire several times, thereby constructing a 3D model of a larger space. For a long corridor, for example, multiple devices could also be set up, but that process is more complicated; if instead the user holds the device and moves to different positions along the corridor, different areas of the corridor can be acquired separately and a 3D model of the entire corridor finally synthesized. Of course, while walking, the user may stop at a few extra positions in areas where the corridor structure is complex, to ensure that these areas are acquired repeatedly, i.e., multiple second-type acquisition positions are formed.
Optimization of the camera positions
To ensure that the device balances both the effect and the efficiency of 3D synthesis, the camera acquisition positions can be optimized in addition to the conventional approach of optimizing the synthesis algorithm. In particular, when the acquisition direction of the camera of a 3D acquisition and synthesis device faces away from its rotation axis, the prior art does not mention how to better optimize the camera positions for such a device. Even the optimization methods that do exist are merely different empirical conditions obtained in different experiments. In particular, some existing position optimization methods require the size of the target object, which is feasible in orbiting 3D acquisition, where the size can be measured in advance, but is difficult to obtain in advance in an open space. A method is therefore needed for optimizing the camera positions when the acquisition direction of the camera of the 3D acquisition and synthesis device faces away from its rotation axis. This is precisely the problem that the present invention solves and the technical contribution that it makes.
To this end, a large number of experiments were carried out for the present invention, from which the following empirical condition that the camera acquisition interval preferably satisfies during acquisition was summarized.
When performing 3D acquisition, the included angle α between the optical axes of the image acquisition device at two adjacent positions satisfies the following condition:
Figure PCTCN2021123762-appb-000002
where,
R is the distance from the rotation center to the surface of the target object,
T is the sum of the object distance and the image distance at the time of acquisition, that is, the distance between the photosensitive unit of the image acquisition device and the target object,
d is the length or width of the photosensitive element (CCD) of the image acquisition device; when the two positions are along the length direction of the photosensitive element, d takes the length of the rectangle, and when the two positions are along its width direction, d takes the width of the rectangle,
F is the focal length of the lens of the image acquisition device,
u is an empirical coefficient.
Usually a distance measuring device, for example a laser rangefinder, is provided on the acquisition device. With its optical axis adjusted parallel to the optical axis of the image acquisition device, it can measure the distance from the acquisition device to the surface of the target object; from the measured distance and the known positional relationship between the distance measuring device and the other components of the acquisition device, R and T can be obtained.
With the image acquisition device at either of the two positions, the distance from the photosensitive element to the surface of the target object along the optical axis is taken as T. Besides this method, averaging over multiple measurements or other methods may be used; the principle is that the value of T should not deviate from the sum of the image distance and the object distance at the time of acquisition.
Similarly, with the image acquisition device at either of the two positions, the distance from the rotation center to the surface of the target object along the optical axis is taken as R. Besides this method, averaging over multiple measurements or other methods may be used; the principle is that the value of R should not deviate from the rotation radius at the time of acquisition.
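For illustration only, the sketch below shows how R and T could be derived from a single laser distance reading once the fixed offsets between the rangefinder, the photosensitive element and the rotation axis have been calibrated; the offsets, their signs and the example numbers are assumptions about one possible mechanical layout, not values from the disclosure.

```python
def derive_R_T(laser_distance,
               rangefinder_to_sensor=0.0,
               rangefinder_to_axis=0.0):
    """laser_distance: measured distance from the rangefinder to the target surface
    along the (parallel) optical axis, in mm.
    rangefinder_to_sensor: signed offset from the rangefinder to the photosensitive
    element along the optical axis (positive if the sensor sits behind the rangefinder).
    rangefinder_to_axis: signed offset from the rangefinder to the rotation axis
    along the optical axis (positive if the axis sits behind the rangefinder)."""
    T = laser_distance + rangefinder_to_sensor  # sensor-to-target distance
    R = laser_distance + rangefinder_to_axis    # rotation-centre-to-target distance
    return R, T

# Example: target measured at 1200 mm; sensor 15 mm and rotation axis 40 mm
# behind the rangefinder (illustrative numbers only)
R, T = derive_R_T(1200.0, rangefinder_to_sensor=15.0, rangefinder_to_axis=40.0)
print(R, T)
```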
Usually, the prior art uses the object size as the basis for estimating camera positions. Since the object size changes with the object being measured, the size must be re-measured and the positions re-calculated whenever, for example, a small object is acquired after the 3D information of a large object has been acquired. Such inconvenient measurements and repeated re-measurements introduce measurement errors, which lead to errors in the estimated camera positions. The present scheme, based on a large amount of experimental data, gives an empirical condition that the camera positions need to satisfy, without the object size having to be measured directly. In the empirical condition, d and F are fixed camera parameters: when the camera and lens are purchased, the manufacturer provides these values and no measurement is needed. R and T are merely straight-line distances that can be measured conveniently with traditional methods such as a ruler or a laser rangefinder. At the same time, since in the device of the present invention the acquisition direction of the image acquisition device (e.g., the camera) faces away from its rotation axis, that is, the lens faces roughly away from the rotation center, it is easier to control the included angle α between the optical axes at the two positions of the image acquisition device: only the rotation angle of the rotary drive motor needs to be controlled. It is therefore more reasonable to use α to define the optimal positions. The empirical formula of the present invention thus makes the preparation process convenient and fast, improves the accuracy of the camera position arrangement, and allows the camera to be placed at optimized positions, thereby taking both 3D synthesis accuracy and speed into account.
According to a large number of experiments, to ensure the speed and the effect of synthesis, u should be less than 0.498; for a better synthesis effect, preferably u<0.411, and especially u<0.359; in some applications u<0.281, or u<0.169, or u<0.041, or u<0.028.
Experiments were carried out with the device of the present invention, and some of the experimental data are shown below, in mm. (The following data are only limited examples.)
Figure PCTCN2021123762-appb-000003
The above data were obtained only from experiments carried out to verify the condition of the formula and do not limit the invention. Even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust the device parameters and the details of the steps as needed to carry out experiments, and other data obtained will likewise satisfy the condition of the formula.
3D model synthesis method
The multiple images acquired by the image acquisition device are sent to a processing unit, and a 3D model is constructed using the following algorithm. The processing unit may be located in the acquisition device or remotely, for example on a cloud platform, a server, a host computer, etc.
The specific algorithm mainly includes the following steps:
Step 1: Perform image enhancement on all input photographs. The following filter is used to enhance the contrast of the original photographs while suppressing noise.
Figure PCTCN2021123762-appb-000004
where g(x,y) is the gray value of the original image at (x,y), f(x,y) is the gray value at that point after enhancement by the Wallis filter, m_g is the local gray mean of the original image, s_g is the local gray standard deviation of the original image, m_f is the target value of the local gray mean of the transformed image, s_f is the target value of the local gray standard deviation of the transformed image, c∈(0,1) is the expansion constant of the image variance, and b∈(0,1) is the image brightness coefficient constant.
This filter can greatly enhance image texture patterns of different scales in the image, so the number and accuracy of feature points are improved when point features are extracted from the image, and the reliability and accuracy of the matching results are improved in photo feature matching.
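The enhancement of Step 1 can be sketched as follows. Because the exact expression of the filter appears only as an image in the publication, the sketch uses the standard Wallis transform with the same symbols (m_g, s_g, m_f, s_f, c, b); the precise arrangement of the constants should be treated as an assumption to be checked against the published formula, and the default parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(g, m_f=127.0, s_f=60.0, c=0.8, b=0.9, win=15):
    """Standard Wallis-style local contrast enhancement (assumed form).
    g: grayscale image as a float array; m_f, s_f: target local mean / std;
    c, b in (0,1): variance expansion and brightness constants; win: window size."""
    g = g.astype(np.float64)
    m_g = uniform_filter(g, size=win)                                            # local mean
    s_g = np.sqrt(np.maximum(uniform_filter(g * g, size=win) - m_g ** 2, 0.0))   # local std
    gain = c * s_f / (c * s_g + (1.0 - c) * s_f)   # contrast stretch toward s_f
    offset = b * m_f + (1.0 - b) * m_g             # brightness shift toward m_f
    return (g - m_g) * gain + offset

# f = wallis_filter(cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE))
```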
Step 2: Extract feature points from all input photographs and match them to obtain sparse feature points. The SURF operator is used to extract and match the feature points of the photographs. The SURF feature matching method mainly comprises three processes: feature point detection, feature point description and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters instead of second-order Gaussian filtering, uses integral images to accelerate the convolutions and thereby increase computation speed, and reduces the dimensionality of the local image feature descriptor to speed up matching. The main steps are: ① construct the Hessian matrix and generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (abrupt-change points) of the image; ② construct scale-space feature point localization: each pixel processed by the Hessian matrix is compared with its 26 neighbors in the two-dimensional image space and in scale space to preliminarily locate the key points, then key points with weak energy and wrongly located key points are filtered out, leaving the final stable feature points; ③ determine the dominant orientation of each feature point using the Haar wavelet responses in its circular neighborhood: within the circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed, the sector is rotated in steps of 0.2 radian and the Haar wavelet responses in the region are computed again, and the direction of the sector with the largest value is finally taken as the dominant orientation of the feature point; ④ generate a 64-dimensional feature point description vector: a block of 4*4 rectangular sub-regions is taken around the feature point, with the direction of the block along the dominant orientation of the feature point; in each sub-region the Haar wavelet responses of 25 pixels are computed in the horizontal and vertical directions (both relative to the dominant orientation), namely the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses and the sum of the absolute vertical responses; these 4 values form the feature vector of each sub-block, so in total a 4*4*4=64-dimensional vector is used as the SURF feature descriptor; ⑤ feature point matching: the degree of matching is determined by computing the Euclidean distance between two feature points; the shorter the Euclidean distance, the better the match between the two feature points.
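A compact sketch of the SURF detection and Euclidean-distance matching described in Step 2 is given below. It relies on the SURF implementation in the opencv-contrib package (cv2.xfeatures2d), which must be present in the installed OpenCV build, and the ratio-test threshold is an illustrative choice rather than a value from the disclosure.

```python
import cv2

def surf_match(path1, path2, hessian_threshold=400, ratio=0.7):
    """Detect SURF keypoints in two photographs and keep matches whose
    Euclidean descriptor distance passes a ratio test."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1, None)   # 64-dimensional descriptors
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)             # Euclidean distance on float descriptors
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    # corresponding point coordinates for the subsequent bundle adjustment
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2

# pts1, pts2 = surf_match("pos_a_001.jpg", "pos_a_002.jpg")
```

The matched coordinate lists pts1 and pts2 are the kind of input consumed by the bundle adjustment of Step 3.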
Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target object and the position and pose data of the cameras, thus obtaining the sparse 3D point cloud of the target object model and the model coordinate values of the camera positions; using the sparse feature points as initial values, perform dense matching of the multi-view photographs to obtain dense point cloud data. The process has four main steps: stereo pair selection, depth map computation, depth map optimization and depth map fusion. For each image in the input data set, a reference image is selected to form a stereo pair, which is used to compute a depth map. Rough depth maps of all images are thus obtained; these depth maps may contain noise and errors, so a consistency check against the depth maps of neighboring views is used to optimize the depth map of each image. Finally, depth map fusion is performed to obtain a 3D point cloud of the whole scene.
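The neighborhood consistency check mentioned in Step 3 can be sketched as follows: each depth in the reference view is back-projected, transformed into a neighboring view using the camera poses recovered by bundle adjustment, and compared with the depth stored there. The interfaces, the shared intrinsic matrix and the tolerance are illustrative assumptions.

```python
import numpy as np

def consistency_mask(depth_ref, depth_src, K, T_ref_to_src, rel_tol=0.01):
    """Boolean mask of reference-view pixels whose depth agrees with a
    neighbouring depth map. K: 3x3 intrinsics shared by both views;
    T_ref_to_src: 4x4 transform from the reference to the source camera frame."""
    h, w = depth_ref.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # back-project reference pixels into the reference camera frame
    pts_ref = np.linalg.inv(K) @ pix * depth_ref.reshape(1, -1)
    # move the points into the source camera frame and project them
    pts_src = T_ref_to_src[:3, :3] @ pts_ref + T_ref_to_src[:3, 3:4]
    proj = K @ pts_src
    us = np.clip(np.round(proj[0] / proj[2]).astype(int), 0, w - 1)
    vs = np.clip(np.round(proj[1] / proj[2]).astype(int), 0, h - 1)
    # a pixel is consistent when the transformed depth matches the source depth map
    d_src = depth_src[vs, us]
    err = np.abs(pts_src[2] - d_src) / np.maximum(d_src, 1e-6)
    return (err < rel_tol).reshape(h, w)
```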
Step 4: Reconstruct the surface of the target object from the dense point cloud. This comprises defining the octree, setting the function space, creating the vector field, solving the Poisson equation and extracting the isosurface. The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship, the vector field of the point cloud is obtained from the integral relationship, the approximation of the gradient field of the indicator function is computed, and the Poisson equation is formed. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
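For readers who want to experiment with the octree/Poisson stage of Step 4 without re-implementing it, an off-the-shelf Poisson reconstruction such as the one in the Open3D library can serve as a stand-in for the procedure described above; the file names and parameters below are illustrative.

```python
import open3d as o3d

# dense point cloud produced by the multi-view matching stage (illustrative file name)
pcd = o3d.io.read_point_cloud("dense_points.ply")
# Poisson reconstruction needs oriented normals on the point cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)
# solve the Poisson equation on an octree of the given depth and extract the surface
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("target_mesh.ply", mesh)
```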
Step 5: Fully automatic texture mapping of the target object model. After the surface model has been constructed, texture mapping is performed. The main process includes: ① texture data acquisition: the surface triangle mesh of the target is reconstructed from the images; ② visibility analysis of the triangles of the reconstructed model: the visible image set and the optimal reference image of each triangle are computed using the calibration information of the images; ③ triangle clustering to generate texture patches: according to the visible image set of the triangles, the optimal reference image and the neighborhood topology of the triangles, the triangles are clustered into a number of reference-image texture patches; ④ automatic sorting of the texture patches to generate the texture image: the generated texture patches are sorted by size, the texture image with the smallest enclosing area is generated, and the texture mapping coordinates of each triangle are obtained.
It should be noted that the above algorithm is the algorithm used in the present invention; it cooperates with the image acquisition conditions, and its use takes both synthesis time and quality into account. It can be understood, however, that conventional 3D synthesis algorithms in the prior art can also be used in conjunction with the solution of the present invention.
Application Example
To construct a 3D model of the interior of an exhibition hall, the 3D acquisition device can be placed on the floor of the room and rotated to capture multiple images of the building; the device is then moved to several other indoor positions and rotated again at each of them to capture further images. A 3D model is then synthesized from all of these images according to the synthesis algorithm, so that a 3D model of the interior is obtained for subsequent decoration and display.
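A minimal sketch of such a multi-position rotational capture plan is shown below; the floor positions and the number of images per revolution are illustrative assumptions and not values prescribed by the invention.

    def plan_indoor_capture(stations, images_per_revolution=36):
        # stations: list of (x, y) floor positions for the acquisition device
        # Yields (station_index, position, yaw_angle_deg) for every shot, i.e.
        # one full rotation of the image acquisition device at each position.
        step = 360.0 / images_per_revolution
        for i, pos in enumerate(stations):
            for k in range(images_per_revolution):
                yield i, pos, k * step

    # Example: four positions inside the hall, 36 images per revolution.
    for shot in plan_indoor_capture([(0, 0), (5, 0), (5, 4), (0, 4)]):
        print(shot)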
The terms target object, target, and object above all refer to an object whose three-dimensional information is to be acquired. It may be a single physical object or a composition of several objects. The three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and any other parameter carrying the three-dimensional features of the target. In the present invention, three-dimensional means having information in the three directions X, Y and Z, and in particular having depth information, which is fundamentally different from having only two-dimensional planar information. It is also fundamentally different from definitions that are described as three-dimensional, panoramic, or holographic but in fact include only two-dimensional information and, in particular, no depth information.
The acquisition area referred to in the present invention is the range that the image acquisition device 1 (for example, a camera) can capture. The image acquisition device 1 in the present invention may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook computer, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
In the description provided herein, numerous specific details are set forth. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure an understanding of this description.
Similarly, it is to be understood that, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will understand that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from those of the embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not in others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the apparatus according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program or a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not denote any order; these words may be interpreted as names.
By now, those skilled in the art will recognize that, although various exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from the disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and deemed to cover all such other variations or modifications.

Claims (22)

  1. A multi-point combined 3D acquisition system and method, characterized in that it comprises a 3D acquisition device,
    wherein the 3D acquisition device performs multi-point acquisition of the acquisition target, and the acquisition range of each acquisition point at least overlaps with the acquisition ranges of the other acquisition points;
    and the 3D acquisition device comprises an image acquisition device and a rotation device, wherein the acquisition direction of the image acquisition device is a direction away from the rotation center.
  2. The system according to claim 1, characterized in that the multi-point acquisition is acquisition by a plurality of 3D acquisition devices respectively arranged at a plurality of points.
  3. The system according to claim 2, characterized in that the plurality of 3D acquisition devices comprise a first type of 3D acquisition device and a second type of 3D acquisition device.
  4. The system according to claim 3, characterized in that the sum of the acquisition ranges of the first type of 3D acquisition devices can cover the target, and the sum of the acquisition ranges of the second type of 3D acquisition devices can cover a specific area of the target.
  5. The system according to claim 3, characterized in that the plurality of 3D acquisition devices comprise a first type of 3D acquisition device and a second type of 3D acquisition device, and the sum of the acquisition ranges of the first type of 3D acquisition devices is greater than the sum of the acquisition ranges of the second type of 3D acquisition devices.
  6. The system according to claim 3, characterized in that a specific area of the target is scanned and acquired jointly by the first type of 3D acquisition device and the second type of 3D acquisition device.
  7. The system according to claim 6, characterized in that the specific area is a user-designated area.
  8. The system according to claim 6, characterized in that the specific area is an area where the previous synthesis failed.
  9. The system according to claim 6, characterized in that the specific area is an area where the contour has large concave-convex variation.
  10. The system according to claim 1, characterized in that the 3D acquisition device comprises a handheld 3D acquisition device.
  11. The system according to claim 1, characterized in that the multi-point acquisition is acquisition by a single 3D acquisition device arranged in turn at a plurality of points, thereby completing acquisition at a plurality of 3D acquisition positions.
  12. The system according to claim 11, characterized in that, during the sequential acquisition by the 3D acquisition device, the acquisition range of the 3D acquisition device at one acquisition position overlaps with its acquisition range at another acquisition position, and the overlapping area is at least partially located on the target.
  13. The system according to claim 11, characterized in that the plurality of 3D acquisition positions comprise a first type of 3D acquisition position and a second type of 3D acquisition position; at the first type of 3D acquisition positions the sum of the acquisition ranges can cover the target, and at the second type of 3D acquisition positions the sum of the acquisition ranges can cover a specific area of the target.
  14. The system according to claim 13, characterized in that the specific area is a user-designated area.
  15. The system according to claim 13, characterized in that the specific area is an area where the previous synthesis failed.
  16. The system according to claim 13, characterized in that the specific area is an area where the contour has large concave-convex variation.
  17. The system according to claim 1, characterized in that the included angle α between the optical axes of the image acquisition device at two adjacent acquisition positions satisfies the following condition:
    [Formula image: PCTCN2021123762-appb-100001]
    wherein R is the distance from the rotation center to the surface of the target, T is the sum of the object distance and the image distance at the time of acquisition, d is the length or width of the photosensitive element of the image acquisition device, F is the focal length of the lens of the image acquisition device, and u is an empirical coefficient.
  18. The system according to claim 9, characterized in that u < 0.498, or u < 0.41, or u < 0.359, or u < 0.281, or u < 0.169, or u < 0.041, or u < 0.028.
  19. A 3D synthesis or identification apparatus, comprising the system according to any one of claims 1-18.
  20. A 3D synthesis or identification method, comprising the system according to any one of claims 1-18.
  21. An object manufacturing or display apparatus, comprising the system according to any one of claims 1-18.
  22. An object manufacturing or display method, comprising the system according to any one of claims 1-18.
PCT/CN2021/123762 2020-10-15 2021-10-14 Multi-location combined 3d image acquisition system and method WO2022078433A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202011106003.5A CN112254679B (en) 2020-10-15 2020-10-15 Multi-position combined type 3D acquisition system and method
CN202011105994.5A CN112254677B (en) 2020-10-15 2020-10-15 Multi-position combined 3D acquisition system and method based on handheld device
CN202011105292.7 2020-10-15
CN202011105292.7A CN112254671B (en) 2020-10-15 2020-10-15 Multi-time combined 3D acquisition system and method
CN202011106003.5 2020-10-15
CN202011105994.5 2020-10-15

Publications (1)

Publication Number Publication Date
WO2022078433A1 true WO2022078433A1 (en) 2022-04-21

Family

ID=81207698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123762 WO2022078433A1 (en) 2020-10-15 2021-10-14 Multi-location combined 3d image acquisition system and method

Country Status (1)

Country Link
WO (1) WO2022078433A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495932A (en) * 2023-12-25 2024-02-02 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154289A (en) * 2007-07-26 2008-04-02 上海交通大学 Method for tracing three-dimensional human body movement based on multi-camera
CN111292239A (en) * 2020-01-21 2020-06-16 天目爱视(北京)科技有限公司 Three-dimensional model splicing equipment and method
EP3671277A1 (en) * 2018-12-21 2020-06-24 Infineon Technologies AG 3d imaging apparatus and method
CN112254677A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-position combined 3D acquisition system and method based on handheld device
CN112254679A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-position combined 3D acquisition system and method
CN112254671A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-time combined 3D acquisition system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154289A (en) * 2007-07-26 2008-04-02 上海交通大学 Method for tracing three-dimensional human body movement based on multi-camera
EP3671277A1 (en) * 2018-12-21 2020-06-24 Infineon Technologies AG 3d imaging apparatus and method
CN111292239A (en) * 2020-01-21 2020-06-16 天目爱视(北京)科技有限公司 Three-dimensional model splicing equipment and method
CN112254677A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-position combined 3D acquisition system and method based on handheld device
CN112254679A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-position combined 3D acquisition system and method
CN112254671A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-time combined 3D acquisition system and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495932A (en) * 2023-12-25 2024-02-02 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system
CN117495932B (en) * 2023-12-25 2024-04-16 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system

Similar Documents

Publication Publication Date Title
WO2022078442A1 (en) Method for 3d information acquisition based on fusion of optical scanning and smart vision
WO2022111105A1 (en) Intelligent visual 3d information acquisition apparatus with free posture
WO2022078439A1 (en) Apparatus and method for acquisition and matching of 3d information of space and object
WO2022078418A1 (en) Intelligent three-dimensional information acquisition appratus capable of stably rotating
CN112361962B (en) Intelligent visual 3D information acquisition equipment of many every single move angles
WO2022078440A1 (en) Device and method for acquiring and determining space occupancy comprising moving object
CN112257537B (en) Intelligent multi-point three-dimensional information acquisition equipment
CN112254680B (en) Multi freedom&#39;s intelligent vision 3D information acquisition equipment
WO2022111104A1 (en) Smart visual apparatus for 3d information acquisition from multiple roll angles
CN112254638B (en) Intelligent visual 3D information acquisition equipment that every single move was adjusted
CN112082486B (en) Handheld intelligent 3D information acquisition equipment
CN112253913B (en) Intelligent visual 3D information acquisition equipment deviating from rotation center
CN112254676B (en) Portable intelligent 3D information acquisition equipment
WO2022078433A1 (en) Multi-location combined 3d image acquisition system and method
WO2022078438A1 (en) Indoor 3d information acquisition device
WO2022078419A1 (en) Intelligent visual 3d information acquisition device having multiple offset angles
CN112254677B (en) Multi-position combined 3D acquisition system and method based on handheld device
CN112254671B (en) Multi-time combined 3D acquisition system and method
CN112254673B (en) Self-rotation type intelligent vision 3D information acquisition equipment
WO2022078444A1 (en) Program control method for 3d information acquisition
WO2022078437A1 (en) Three-dimensional processing apparatus and method between moving objects
CN112254679B (en) Multi-position combined type 3D acquisition system and method
CN112257535A (en) Three-dimensional matching equipment and method for avoiding object
WO2022078417A1 (en) Rotatory intelligent visual 3d information collection device
CN112254674B (en) Close-range intelligent visual 3D information acquisition equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879476

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21879476

Country of ref document: EP

Kind code of ref document: A1