WO2022078433A1 - System and method for multi-location combined 3D image acquisition - Google Patents

System and method for multi-location combined 3D image acquisition

Info

Publication number
WO2022078433A1
WO2022078433A1 (application PCT/CN2021/123762)
Authority
WO
WIPO (PCT)
Prior art keywords
acquisition
collection
target
acquisition device
type
Prior art date
Application number
PCT/CN2021/123762
Other languages
English (en)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date
Filing date
Publication date
Priority claimed from CN202011105994.5A (CN112254677B)
Priority claimed from CN202011105292.7A (CN112254671B)
Priority claimed from CN202011106003.5A (CN112254679B)
Application filed by 左忠斌 filed Critical 左忠斌
Publication of WO2022078433A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for mapping or imaging
    • G01S17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tessellation

Definitions

  • The invention relates to the technical field of topography measurement, in particular to 3D topography measurement.
  • Before such measurement, 3D information about the target needs to be collected first.
  • Commonly used methods include the use of machine vision and structured light, laser ranging, and lidar.
  • Structured light, laser ranging, and lidar all require an active light source to be emitted toward the target, which can affect the target in some cases, and such light sources are costly. Moreover, their structure is delicate and easily damaged.
  • The machine-vision method collects pictures of an object from different angles and matches and stitches these pictures to form a 3D model; it is low-cost and easy to use.
  • Multiple cameras can be set at different angles around the object to be measured, or pictures can be collected from different angles by rotating one or more cameras.
  • In either case, the acquisition positions of the camera need to be set around the target (referred to as the surround type), and this requires a large space in which to place the image acquisition device.
  • the present invention provides a multi-position combined 3D acquisition system and method that overcomes the above problems or at least partially solves the above problems.
  • Embodiments of the present invention provide a multi-position combined 3D acquisition system and method, including a 3D acquisition device, wherein:
  • the 3D acquisition device performs multi-position acquisition of the acquisition target, and the acquisition range at each acquisition position at least overlaps with the acquisition ranges at the other acquisition positions;
  • the 3D acquisition device includes an image acquisition device and a rotation device, the acquisition direction of the image acquisition device being a direction away from the rotation center.
  • Optionally, the multi-position acquisition means that a plurality of 3D acquisition devices are respectively arranged to acquire at a plurality of positions.
  • the plurality of 3D acquisition devices include a first type of 3D acquisition device and a second type of 3D acquisition device.
  • the sum of the collection ranges of the first type of 3D collection devices can cover the target, and the sum of the collection ranges of the second type of 3D collection devices can cover a specific area of the target.
  • the multiple 3D acquisition devices include a first type of 3D acquisition device and a second type of 3D acquisition device, and the sum of the acquisition ranges of the first type of 3D acquisition devices is greater than the sum of the acquisition ranges of the second type of 3D acquisition devices.
  • the first type of 3D acquisition device and the second type of 3D acquisition device are used to jointly scan and acquire.
  • the specific area is a user-specified area.
  • the specific area is an area where the previous synthesis failed.
  • the specific area is an area with large variation in contour concavity and convexity.
  • the 3D acquisition device includes a handheld 3D acquisition device.
  • a single 3D acquisition device is sequentially set to acquire at multiple points to complete the acquisition of multiple 3D acquisition locations.
  • the collection range of the 3D collection device at one collection position overlaps with the collection range at another collection position, and the overlapped area is at least partially located on the target.
  • the multiple 3D collection positions include a first type of 3D collection position and a second type of 3D collection position; the sum of the collection ranges at the first type of 3D collection positions can cover the target, and the sum of the collection ranges at the second type of 3D collection positions can cover a specific area of the target.
  • the above-mentioned specific area is a user-specified area.
  • the above-mentioned specific area is the area where the previous synthesis failed.
  • the above-mentioned specific area is an area where the contour concavity and convexity change greatly.
  • the included angle δ of the optical axes of the image acquisition devices at two adjacent acquisition positions satisfies an empirical condition expressed in terms of the following quantities:
  • R is the distance from the rotation center to the surface of the target object
  • T is the sum of the object distance and the image distance during acquisition
  • d is the length or width of the photosensitive element of the image acquisition device
  • F is the lens focal length of the image acquisition device
  • u is the experience coefficient.
  • Another aspect of the embodiments of the present invention further provides a 3D synthesis identification device and method, including the system described in any one of the preceding claims.
  • Another aspect of the embodiments of the present invention further provides an object manufacturing and display device and method, including the system described in any of the preceding claims.
  • The self-rotating intelligent visual 3D acquisition device collects 3D information of the internal space of the target and is suitable for both larger and smaller spaces.
  • FIG. 1 shows a schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention
  • FIG. 2 shows a schematic structural diagram of a handheld 3D information collection device provided by an embodiment of the present invention
  • FIG. 3 shows another schematic structural diagram of a handheld 3D information collection device provided by an embodiment of the present invention
  • FIG. 4 shows a schematic diagram of a multi-position combined 3D acquisition system provided by an embodiment of the present invention
  • FIG. 5 shows a schematic diagram of a multi-position combined handheld 3D acquisition system provided by an embodiment of the present invention
  • FIG. 6 shows a schematic diagram of the collection of a specific area by a multi-position combined 3D collection system provided by an embodiment of the present invention
  • FIG. 7 shows a schematic diagram of a multiple combined 3D acquisition system provided by an embodiment of the present invention.
  • FIG. 8 shows another schematic diagram of a multiple combined 3D acquisition system provided by an embodiment of the present invention.
  • As shown in FIG. 1, an embodiment of the present invention provides a multi-position combined 3D acquisition system including 3D information acquisition equipment.
  • the image acquisition device 1 is connected with the rotating shaft of the rotating device 2 , and the rotating device 2 drives it to rotate.
  • the acquisition direction of the image acquisition device 1 is the direction away from the rotation center. That is, the acquisition direction is directed outward relative to the center of rotation.
  • the optical axis of the image acquisition device 1 may be parallel to the rotation plane, or may form a certain angle with it, for example within the range of -90° to 90° relative to the rotation plane.
  • The optical collection ports (e.g., lenses) of the image collection device all face away from the rotation axis or its extension line (i.e., the rotation center line); that is to say, the collection area of the image collection device has no intersection with the rotation center line.
  • This method also differs considerably from the general self-rotation method; in particular, it can collect target objects whose surfaces are not perpendicular to the horizontal plane.
  • the rotating shaft of the rotating device can also be connected to the image capturing device through a deceleration device, for example, through a gear set or the like.
  • When the image capturing device rotates 360° in the horizontal plane, it captures an image of the corresponding target at specific positions (the specific shooting positions are described in detail later). The shooting can be performed in synchronization with the rotation, or the device can stop at each shooting position, shoot, and then continue rotating, and so on.
  • The above-mentioned rotating device may be a motor, such as a stepping motor, a servo motor, a micro motor, or the like.
  • the rotating device (for example, various types of motors) can rotate at a specified speed under the control of the controller, and can rotate at a specified angle, so as to realize the optimization of the collection position.
  • the specific collection position will be described in detail below.
  • An existing rotating device can also be used, with the image capturing device mounted on it.
  • the carrying device 3 is used to carry the weight of the entire equipment, and the rotating device 2 is connected with the carrying device 3 .
  • the carrying device may be a tripod, a base with a supporting device, or the like.
  • the rotating device is located in the center part of the carrier to ensure balance. However, in some special occasions, it can also be located at any position of the carrying device. Furthermore, the carrying device is not necessary.
  • The rotating device can also be installed directly in the application environment, e.g., on the roof of a vehicle.
  • This embodiment also provides a handheld 3D information acquisition device, referred to as a 3D acquisition device; please refer to FIG. 2.
  • the image acquisition device 1 is connected to the rotation device 2, so as to stably rotate and scan under the drive of the rotation device 2, and realize 3D acquisition of surrounding objects (the specific acquisition process will be described in detail below).
  • the rotating device 2 is mounted on the carrying device 3, and the carrying device 3 is used to carry the entire equipment.
  • the carrier 3 can be a handle, so that the entire device can be used for hand-held acquisition.
  • The carrying device 3 can also be a base-type carrying device for installation on other apparatus, so that the entire intelligent 3D acquisition device can be mounted on other equipment for combined use; for example, mounted on a vehicle to perform 3D acquisition as the vehicle travels.
  • As before, the carrying device may also be a handle, a tripod, or a base with a supporting device; it carries the weight of the entire equipment, and the rotating device 2 is connected to it.
  • The rotating device is preferably located at the center of the carrying device to ensure balance, although in some special cases it can be located at any position on the carrying device. The carrying device itself is not essential: the rotating device can be installed directly in the application environment, e.g., on the roof of a vehicle.
  • the inner space of the carrying device is used to accommodate the battery, which is used to supply power to the 3D rotation acquisition stabilization device.
  • Buttons are arranged on the casing of the carrying device to control the 3D rotation acquisition stabilization device, including turning the stabilization function on/off and turning the 3D rotation capture function on/off.
  • the image acquisition device 1 is connected with the rotating shaft of the rotating device 2, and the rotating device drives it to rotate.
  • the rotating shaft of the rotating device can also be connected with the image capturing device through a transmission device, for example, through a gear set or the like.
  • the rotating device 2 can be arranged inside the handle, and part or all of the transmission device is also arranged inside the handle, which can further reduce the volume of the device.
  • When the image capturing device rotates 360° in the horizontal plane, it captures an image of the corresponding target at specific positions (described in detail later); shooting can be synchronized with the rotation, or the device can stop rotating at each shooting position and resume after shooting.
  • The above-mentioned rotating device may be a motor, such as a stepping motor, a servo motor, a micro motor, or the like.
  • the rotating device (for example, various types of motors) can rotate at a specified speed under the control of the controller, and can rotate at a specified angle, so as to realize the optimization of the collection position.
  • the specific collection position will be described in detail below.
  • An existing rotating device can also be used, with the image capturing device mounted on it.
  • a distance measuring device is also included.
  • the distance measuring device is fixedly connected with the image acquisition device, and the pointing direction of the distance measuring device is the same as the optical axis direction of the image acquisition device.
  • the distance measuring device can also be fixedly connected to the rotating device, as long as it can rotate synchronously with the image capturing device.
  • an installation platform may be provided, the image acquisition device and the distance measuring device are both located on the platform, the platform is installed on the rotating shaft of the rotating device, and is driven and rotated by the rotating device.
  • the distance measuring device can use a variety of methods such as a laser distance meter, an ultrasonic distance meter, an electromagnetic wave distance meter, etc., or a traditional mechanical measuring tool distance measuring device.
  • Alternatively, when the 3D acquisition device is located at a specific position whose distance from the target has already been calibrated, no additional measurement is required.
  • The device can also include a light source, which can be arranged on the periphery of the image acquisition device, on the rotating device, or on the installation platform.
  • the light source can also be set independently, for example, an independent light source is used to illuminate the target. Even when lighting conditions are good, no light source is used.
  • the light source can be an LED light source or an intelligent light source, that is, the parameters of the light source are automatically adjusted according to the conditions of the target object and the ambient light.
  • The light sources are distributed around the lens of the image capture device; for example, they may be ring-shaped LED lights around the lens. In some applications it is also necessary to control the intensity of the light source.
  • A diffusing device, such as a diffuser housing, can be arranged in the light path of the light source. Alternatively, an LED surface light source can be used directly: its light is not only softer but also more uniform.
  • an OLED light source can be used, which has a smaller volume, softer light, and has flexible properties, which can be attached to a curved surface.
  • Marking points can be set at the position of the target, and the coordinates of these marking points are known. By collecting the marking points and combining their coordinates, the absolute size of the 3D composite model is obtained. The marking points can be pre-set points or laser light spots.
  • The method for determining the coordinates of these points may include: (1) Using laser ranging: a calibration device emits laser light toward the target to form a plurality of calibration point spots, and the calibration point coordinates are obtained through the known positional relationships of the laser ranging units within the calibration device. That is, the calibration device emits laser light toward the target so that the beams emitted by its laser ranging units fall on the target and form light spots.
  • the laser beams emitted by the laser ranging units are parallel to each other, and the positional relationship between the units is known. Then the two-dimensional coordinates on the emission plane of the multiple light spots formed on the target can be obtained.
  • the distance between each laser ranging unit and the corresponding light spot can be obtained, that is, depth information equivalent to multiple light spots formed on the target can be obtained. That is, the depth coordinates perpendicular to the emission plane can be obtained.
  • the three-dimensional coordinates of each spot can be obtained.
  • (2) Using a combination of distance and angle measurement: measure the distances to multiple markers and the angles between them, and calculate their respective coordinates.
  • (3) Using other coordinate measurement tools, such as RTK, global positioning systems, star-tracking positioning systems, position and pose sensors, etc.
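  • As an illustration of approach (1) above, the following minimal sketch (our illustration with an invented unit layout and invented readings, not part of the patent) combines the known 2D positions of the parallel laser ranging units on the emission plane with their measured depths to obtain the 3D coordinates of the spots, and then uses two spots to fix the absolute scale of a reconstructed model.

```python
import numpy as np

# Hypothetical layout: 2D coordinates (mm) of four parallel laser ranging
# units on the emission plane. The beams are parallel, so each spot keeps
# the unit's (x, y) and gains the measured distance as its depth z.
unit_xy = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
depths = np.array([812.4, 815.1, 809.8, 813.6])    # mm, rangefinder readings

spots_3d = np.column_stack([unit_xy, depths])      # (x, y, z) per spot

# Absolute scale: ratio of the known physical distance between two spots to
# the distance between the same two spots in the (unitless) synthesized model.
model_pts = np.array([[0.000, 0.000, 1.000],       # same two spots, model units
                      [0.062, 0.001, 1.003]])
scale = np.linalg.norm(spots_3d[1] - spots_3d[0]) / np.linalg.norm(
    model_pts[1] - model_pts[0])
print(spots_3d)
print("mm per model unit:", scale)
```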
  • the acquisition system includes a plurality of the above-mentioned 3D information acquisition devices a, b, c..., which are located in different spatial positions.
  • the collection range of the collection device a includes the area A
  • the collection range of the collection device b includes the area B
  • the collection range of the collection device c includes the area C... and so on.
  • Their collection areas at least satisfy that the pairwise intersections of the collection areas are not empty, and in particular each non-empty intersection should be located on the target. That is, the acquisition range of each acquisition device overlaps the acquisition ranges of at least two other acquisition devices, and in particular their acquisition ranges on the target overlap.
  • Collecting devices at multiple locations scan and collect the specific area, so that information about the area is obtained from different angles.
  • For example, the intersection of area A and area B includes the specific area; the common intersection of areas A, B, and C includes the specific area; or the intersection of areas A and B and the intersection of areas C and D each include the specific area, and so on. That is to say, the specific area is scanned repeatedly (it may also be called a repeated scanning area): it is scanned and collected by multiple collection devices.
  • The above situations include: the intersection of the collection areas of two or more collection devices contains the specific area; or the intersection of the collection areas of two or more collection devices and the intersection of the collection areas of another two or more collection devices each contain the specific area.
  • The above-mentioned specific areas can be determined by analyzing previous 3D synthesis results, such as areas where synthesis previously failed or had a high failure rate; they can also be delineated in advance according to the operator's experience, such as areas whose surface contours fluctuate strongly, i.e., areas where the variation in concavity and convexity exceeds a preset threshold.
  • In this case, multiple handheld 3D information collection devices are not required; only one handheld 3D information collection device is used.
  • The user holds the device and performs rotation scans at several different positions to obtain pictures of the target object. It should be ensured that the scanning ranges of the handheld 3D information acquisition device at the various locations together cover the entire target area.
  • The determination can be based on pre-existing data, on visual inspection results, or on the distribution of areas that failed to synthesize in a previous acquisition.
  • one or more 3D information acquisition devices of the second type are arranged for each specific area, so that their acquisition range can cover the specific area.
  • After the first and second types of 3D information acquisition devices are all arranged, each 3D information acquisition device is controlled to rotate and scan the target object, with the rotation satisfying the optimization conditions for its image acquisition device; that is, the controller can control the rotation of each device's image acquisition device according to the above conditions.
  • Alternatively, one 3D acquisition device, or a limited number of them, can perform acquisition at the above set positions sequentially, in a time-sharing manner. That is to say, acquisition is not performed simultaneously at all positions; instead it is performed at different locations at different times, and the images acquired at different times are collected together for 3D synthesis.
  • the different locations described here are the same as the locations described above for the different acquisition devices.
  • the collection positions of each collection device are the positions where the user holds the device for collection in sequence. That is, the user holds the device and is located at the collection positions of the above-mentioned first and second types of 3D information collection equipment in sequence. Every time it is in the collection position (the first type of collection position or the second type of collection position), the handheld collection device is controlled to rotate for collection. Finally, the images collected each time are transmitted to the processor for 3D modeling and synthesis.
  • the collection system includes one or more of the above-mentioned 3D information collection devices, and the collection devices are located at positions a, b, c... in sequence during the collection process.
  • the collection range when the collection device is at position a includes area A
  • the collection range when the collection device is at position b includes area B
  • the collection range when the collection device is at position c includes area C... and so on.
  • The collection areas at these positions at least satisfy that the pairwise intersections of the collection areas are not empty, and in particular each non-empty intersection should be located on the target. That is, the collection range at each position overlaps the collection ranges at at least two other positions, and in particular these ranges overlap on the target.
  • The device scans and collects the specific area from multiple positions, so that information about the area is obtained from different angles.
  • For example, the intersection of area A and area B includes the specific area; the common intersection of areas A, B, and C includes the specific area; or the intersection of areas A and B and the intersection of areas C and D each include the specific area, and so on. That is to say, the specific area is scanned repeatedly (a repeated scanning area): it is scanned and collected from multiple collection positions.
  • These situations include: the intersection of the collection areas at two or more positions contains the specific area; or the intersection of the collection areas at two or more positions and the intersection of the collection areas at another two or more positions each contain the specific area.
  • The specific areas can be determined by analyzing previous 3D synthesis results, such as areas where synthesis previously failed or had a high failure rate; they can also be delineated in advance according to the operator's experience, such as areas with large contour fluctuations or a high degree of concavity and convexity.
  • According to the above distance and the collection ranges of the 3D information collection devices, different collection positions a, b, c... are selected so that the sum of the collection ranges of areas A, B, C... can cover the target.
  • In general, it is required not only that the sum of the collection ranges of the 3D information collection devices covers the target, but also that the collection ranges at adjacent positions overlap while their sum still covers the target; for example, the overlapping range accounts for more than 10% of each acquisition range.
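  • To illustrate the coverage-plus-overlap requirement just described, the toy check below is our simplification, not part of the patent: each acquisition range is flattened to a 1D interval on the target, and we verify that the union of the intervals covers the target while adjacent intervals overlap by more than 10% of an individual range.

```python
def covers_with_overlap(ranges, target, min_overlap_frac=0.10):
    """ranges: list of (start, end) acquisition intervals; target: (lo, hi)."""
    ranges = sorted(ranges)
    lo, hi = target
    covered_to = lo
    for i, (a, b) in enumerate(ranges):
        if a > covered_to:
            return False                     # gap: union does not cover target
        if i > 0:
            pa, pb = ranges[i - 1]
            overlap = min(pb, b) - max(pa, a)
            if overlap < min_overlap_frac * min(pb - pa, b - a):
                return False                 # adjacent overlap below threshold
        covered_to = max(covered_to, b)
    return covered_to >= hi

# Three positions whose ranges tile a 12 m target with ~0.5 m overlaps.
print(covers_with_overlap([(0, 4), (3.5, 8), (7.5, 12)], (0, 12)))  # True
```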
  • The selected 3D information collection equipment is placed at collection positions a, b, c... in order, ensuring that its collection areas at these positions cover the target.
  • the controller can control the rotation of the image acquisition device of the 3D information acquisition device each time according to the above conditions.
  • The determination can be based on pre-existing data, on visual inspection results, or on the distribution of areas that failed to synthesize in a previous acquisition.
  • The number and positions of the specific areas of the target, together with the number of second-type 3D information collection positions required by each specific area, determine the number of second-type 3D information collections, and a position is arranged for each of them.
  • One or more second-type 3D information collection positions are inserted between the above-mentioned first-type 3D information collection positions, so that areas weakly covered by the first-type positions are collected repeatedly, forming a repeated scanning area.
  • the second type of 3D information collection position can also be set at other positions (for example, closer to or farther from the target) to ensure that enough different angle pictures can be obtained in the repeated scanning area.
  • the controller can control the rotation of the image acquisition device of the 3D information acquisition device each time according to the above conditions.
  • The above acquisition can be completed with a handheld device: the user walks to different positions with the device and performs multiple acquisitions, thereby constructing a 3D model of a larger space.
  • For example, for a corridor it is also possible to set up multiple devices, but the process is more complicated; if instead the user holds the device and moves to different positions along the corridor, different areas of the corridor can be collected separately, so that the 3D model of the whole corridor is finally synthesized.
  • the method of optimizing the camera acquisition position can also be adopted.
  • The prior art for such devices does not mention how to better optimize the camera position.
  • Although some optimization methods exist, they were obtained under different empirical conditions in different experiments.
  • Some existing position optimization methods require the size of the target object, which is feasible in surround-type 3D acquisition, where it can be measured in advance.
  • The present invention conducted a large number of experiments and summarized the following empirical condition that the camera acquisition interval preferably satisfies during acquisition.
  • The included angle δ of the optical axis of the image acquisition device at two adjacent positions satisfies an empirical condition expressed in terms of the following quantities:
  • R is the distance from the center of rotation to the surface of the target
  • T is the sum of the object distance and the image distance during acquisition, that is, the distance between the photosensitive unit of the image acquisition device and the target object.
  • d is the length or width of the photosensitive element (CCD) of the image acquisition device.
  • F is the focal length of the lens of the image acquisition device.
  • u is the empirical coefficient.
  • In order to measure these quantities, a distance measuring device, such as a laser distance meter, is configured on the acquisition device. Its optical axis is adjusted to be parallel to the optical axis of the image acquisition device, so that it measures the distance from the acquisition device to the surface of the target object; using the measured distance and the known positional relationships between the distance measuring device and the other components of the acquisition device, R and T can be obtained.
  • the distance from the photosensitive element to the surface of the target object along the optical axis is taken as T.
  • Averaging over multiple measurements, or other methods, can also be used; the principle is that the value of T should not deviate from the sum of the object distance and the image distance during acquisition.
  • the distance from the center of rotation to the surface of the target object along the optical axis is taken as R.
  • multiple averaging methods or other methods can also be used, the principle of which is that the value of R should not deviate from the radius of rotation at the time of acquisition.
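  • A minimal sketch of deriving R and T from a single rangefinder reading, as described above; the axial offsets between the rangefinder, the photosensitive element, and the rotation center are invented for illustration and would come from the actual device geometry.

```python
def r_and_t(range_reading_mm,
            rangefinder_to_sensor_mm=25.0,    # assumed: sensor this far behind the rangefinder
            center_to_rangefinder_mm=40.0):   # assumed: rotation center this far behind it
    """All offsets are measured along the (shared) optical axis."""
    T = range_reading_mm + rangefinder_to_sensor_mm   # sensor-to-target distance
    R = range_reading_mm + center_to_rangefinder_mm   # center-to-target distance
    return R, T

R, T = r_and_t(812.4)
print(R, T)   # 852.4 837.4
```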
  • In the prior art, the size of the object is used to estimate the camera position. Because the size changes with each measured object, this is inconvenient: after collecting 3D information of a large object, the size must be re-measured and the calculation redone before collecting a small object. Such inconvenient measurements and repeated re-measurements introduce errors, resulting in incorrect camera position estimates.
  • the empirical conditions that the camera position needs to meet are given, and there is no need to directly measure the size of the object.
  • d and F are fixed parameters of the camera. When purchasing a camera and lens, the manufacturer will give the corresponding parameters without measurement.
  • R and T are each only a straight-line distance, which can be easily measured by traditional measurement methods, such as a straightedge or a laser rangefinder.
  • Under this condition, the acquisition direction of the image acquisition device (e.g., camera) faces away from the rotation center; that is, the orientation of the lens is substantially opposite to the rotation center.
  • u should be less than 0.498; u < 0.411 is preferred, especially u < 0.359.
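  • Once an admissible included angle δ has been chosen under the above condition, the number of capture positions in one full revolution follows directly; a trivial helper (ours, not from the patent text):

```python
import math

def stops_per_revolution(delta_deg):
    # Number of shooting positions needed so adjacent optical axes differ
    # by at most delta_deg over a full 360° rotation.
    return math.ceil(360.0 / delta_deg)

print(stops_per_revolution(15.0))   # 24 captures per revolution
```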
  • the multiple images acquired by the image acquisition device are sent to the processing unit, and the following algorithm is used to construct a 3D model.
  • the processing unit may be located in the acquisition device, or may be located remotely, such as a cloud platform, a server, a host computer, and the like.
  • the specific algorithm mainly includes the following steps:
  • Step 1: Perform image enhancement processing on all input photos. The following Wallis filter is used to enhance the contrast of the original photos while suppressing noise:
  • f(x, y) = [g(x, y) - m_g] * (c * s_f) / (c * s_g + (1 - c) * s_f) + b * m_f + (1 - b) * m_g, where:
  • g(x, y) is the gray value of the original image at (x, y);
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter;
  • m_g is the local gray mean of the original image;
  • s_g is the local gray standard deviation of the original image;
  • m_f is the local gray target mean of the transformed image;
  • s_f is the local gray target standard deviation of the transformed image;
  • c ∈ (0, 1) is the expansion constant of the image variance;
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • This filter can greatly enhance image texture patterns at different scales, so it increases the number and accuracy of the feature points extracted and improves the reliability and accuracy of the matching results in photo feature matching.
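  • A compact implementation sketch of this enhancement step; the window size and the target mean and standard deviation are illustrative choices, not values prescribed by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(img, win=31, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Wallis filter in the form given above (8-bit grayscale input)."""
    g = img.astype(np.float64)
    m_g = uniform_filter(g, win)                            # local gray mean
    s_g = np.sqrt(np.maximum(uniform_filter(g * g, win) - m_g ** 2, 0.0))
    r1 = (c * s_f) / (c * s_g + (1.0 - c) * s_f)            # local gain
    r0 = b * m_f + (1.0 - b) * m_g                          # local offset
    return np.clip((g - m_g) * r1 + r0, 0, 255).astype(np.uint8)
```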
  • Step 2 Extract feature points from all the input photos, and perform feature point matching to obtain sparse feature points.
  • the SURF operator is used to extract and match the feature points of the photo.
  • The SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters in place of second-order Gaussian filtering together with integral images to accelerate convolution, and reduces the dimension of the local image feature descriptor to speed up matching.
  • The main steps are: (1) Construct a Hessian matrix and generate all points of interest for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image. (2) Construct scale-space feature point positioning: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space, and the key points are initially located. (3) The main direction of each feature point is determined using the Haar wavelet responses in a circular neighborhood of the feature point: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed, the sector is then rotated in intervals of 0.2 radians and the responses summed again, and the direction of the sector with the largest value is taken as the main direction of the feature point. (4) A 64-dimensional feature descriptor is generated: a 4x4 block of rectangular sub-regions is taken around the feature point, oriented along its main direction. Each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions (relative to the main direction), yielding four values per sub-region: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses.
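  • The extraction and matching step can be sketched with OpenCV as below; SURF lives in the opencv-contrib package (cv2.xfeatures2d) and may require a build with non-free modules enabled. The file names and the ratio-test matcher are our illustrative choices, not prescribed by the patent.

```python
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # Hessian-based detector
kp1, des1 = surf.detectAndCompute(img1, None)             # 64-dim descriptors
kp2, des2 = surf.detectAndCompute(img2, None)

# Lowe ratio test on L2 distances between descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
print(len(kp1), len(kp2), len(good))
```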
  • Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target and the position and attitude data of the cameras, obtaining the sparse target model 3D point cloud and the camera position model coordinates.
  • Taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • In stereo pair selection, for each image in the input dataset we select a reference image to form a stereo pair for computing a depth map. We thus obtain rough depth maps for all images; these depth maps may contain noise and errors, so the neighboring depth maps are used in a consistency check to optimize the depth map of each image.
  • Finally, depth map fusion is performed to obtain a 3D point cloud of the entire scene.
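  • The raw material of this fusion step is the back-projection of each depth map into a camera-frame point cloud; a minimal sketch (intrinsics assumed known from calibration, and the cross-view fusion itself omitted):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (HxW, meters) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop invalid zero-depth pixels
```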
  • Step 4: Reconstruct the target surface from the dense point cloud. This includes defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • From the gradient relationship, the integral relationship between the sampling points and the indicator function is obtained; the vector field of the point cloud is computed according to this integral relationship, and an approximation of the gradient field of the indicator function is calculated to form the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and the model of the measured object is thereby reconstructed from the measured point cloud.
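  • In practice this Poisson step can be delegated to an existing implementation; the sketch below uses Open3D as a stand-in for the octree/function-space/Poisson pipeline described above (the file names and the octree depth are illustrative).

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense_points.ply")   # hypothetical dense cloud
pcd.estimate_normals()                              # Poisson needs oriented normals

# The octree depth controls the resolution of the reconstructed surface.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("surface.ply", mesh)
```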
  • Step 5: Fully automatic texture mapping of the target model. After the surface model is constructed, texture mapping is performed. The main process includes: (1) texture data acquisition: the surface triangle mesh of the target is reconstructed from the images; (2) visibility analysis of the triangles of the reconstructed model: the calibration information of the images is used to compute the set of images in which each triangular face is visible, as well as its optimal reference image; (3) triangle clustering to generate texture patches: the triangular faces are clustered into several reference-image texture patches; (4) automatic sorting of the texture patches to generate the texture image: the generated texture patches are sorted by size, the texture image with the smallest enclosing area is generated, and the texture mapping coordinates of each triangular face are obtained.
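  • A toy version of step (2), the optimal-reference-image choice, is sketched below: each triangle is assigned the camera that views it most head-on (largest negative dot product between the viewing direction and the surface normal). A full implementation would also need occlusion testing, which is omitted here.

```python
import numpy as np

def best_reference_view(tri_centers, tri_normals, cam_centers):
    """tri_centers, tri_normals: (T, 3); cam_centers: (C, 3) -> (T,) indices."""
    diff = tri_centers[:, None, :] - cam_centers[None, :, :]   # camera -> triangle
    view_dirs = diff / np.linalg.norm(diff, axis=-1, keepdims=True)
    # Head-on view: the viewing direction opposes the outward surface normal.
    score = -np.einsum('tck,tk->tc', view_dirs, tri_normals)
    return np.argmax(score, axis=1)
```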
  • For example, to construct a 3D model of the interior of an exhibition hall, a 3D acquisition device can be placed on the floor, multiple images of the building acquired by rotation, the device then moved to several other indoor positions for further rotational acquisitions, and the 3D model synthesized according to the synthesis algorithm, thereby building a 3D model of the hall that is convenient for subsequent decoration and display.
  • The terms target object, target, and object used above all denote an object whose three-dimensional information is to be acquired; it can be a single solid object or be composed of multiple objects.
  • The three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all parameters carrying three-dimensional features of the target.
  • The term three-dimensional in the present invention refers to having three directions of information, XYZ, and especially depth information, which is essentially different from having only two-dimensional plane information. It is also fundamentally different from representations that are called three-dimensional, panoramic, or holographic but actually include only two-dimensional information and, in particular, no depth information.
  • the acquisition area mentioned in the present invention refers to the range that the image acquisition device 1 (eg, camera) can capture.
  • The image acquisition device 1 in the present invention can be a CCD, CMOS sensor, camera, video camera, industrial camera, monitor, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with an image acquisition function.
  • modules in the device in the embodiment can be adaptively changed and arranged in one or more devices different from the embodiment.
  • The modules or units or components in the embodiments may be combined into one module or unit or component, and may furthermore be divided into multiple sub-modules or sub-units or sub-assemblies. All features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or equipment so disclosed, may be combined in any combination, except where at least some of such features and/or processes or units are mutually exclusive.
  • Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • In practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components of the device according to embodiments of the present invention.
  • the present invention can also be implemented as apparatus or apparatus programs (eg, computer programs and computer program products) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An embodiment of the present invention provides a multi-location combined 3D image acquisition system and method. The system comprises a 3D image acquisition apparatus. The 3D image acquisition apparatus performs acquisition at multiple locations with respect to an acquisition target, and the acquisition range of each acquisition location at least overlaps with the acquisition ranges of the other acquisition locations. The 3D image acquisition apparatus comprises an image acquisition device and a rotation device, and the image acquisition device acquires in a direction facing away from the rotation center. The invention is the first to propose arranging a single self-rotating 3D image acquisition apparatus at multiple locations so as to form a complete multi-location combined 3D image acquisition system, enabling image acquisition of the complex surfaces of the internal space of a target, or of a target over a large field of view.
PCT/CN2021/123762 2020-10-15 2021-10-14 System and method for multi-location combined 3D image acquisition WO2022078433A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202011105994.5A CN112254677B (zh) 2020-10-15 2020-10-15 Multi-position combined 3D acquisition system and method based on a handheld device
CN202011105292.7 2020-10-15
CN202011105994.5 2020-10-15
CN202011105292.7A CN112254671B (zh) 2020-10-15 2020-10-15 Multiple combined 3D acquisition system and method
CN202011106003.5 2020-10-15
CN202011106003.5A CN112254679B (zh) 2020-10-15 2020-10-15 Multi-position combined 3D acquisition system and method

Publications (1)

Publication Number Publication Date
WO2022078433A1 (fr)

Family

ID=81207698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123762 WO2022078433A1 (fr) 2020-10-15 2021-10-14 System and method for multi-location combined 3D image acquisition

Country Status (1)

Country Link
WO (1) WO2022078433A1 (fr)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154289A * 2007-07-26 2008-04-02 上海交通大学 Method for three-dimensional human motion tracking based on multiple cameras
EP3671277A1 * 2018-12-21 2020-06-24 Infineon Technologies AG 3D imaging apparatus and method
CN111292239A * 2020-01-21 2020-06-16 天目爱视(北京)科技有限公司 Three-dimensional model stitching device and method
CN112254677A * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-position combined 3D acquisition system and method based on a handheld device
CN112254679A * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-position combined 3D acquisition system and method
CN112254671A * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multiple combined 3D acquisition system and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495932A * 2023-12-25 2024-02-02 国网山东省电力公司滨州供电公司 Heterogeneous point cloud registration method and system for electric power equipment
CN117495932B * 2023-12-25 2024-04-16 国网山东省电力公司滨州供电公司 Heterogeneous point cloud registration method and system for electric power equipment

Similar Documents

Publication Publication Date Title
WO2022078442A1 3D information acquisition method based on the fusion of optical scanning and intelligent vision
WO2022111105A1 Intelligent visual 3D information acquisition apparatus with free posture
WO2022078418A1 Intelligent three-dimensional information acquisition apparatus capable of stable rotation
WO2022078440A1 Apparatus and method for acquiring and determining space occupancy including a moving object
CN112361962B Intelligent visual 3D information acquisition device with multiple pitch angles
CN112254680B Multi-degree-of-freedom intelligent visual 3D information acquisition device
CN112257537B Intelligent multi-point three-dimensional information acquisition device
WO2022078439A1 Apparatus and method for acquiring and matching 3D information of space and objects
CN112254638B Pitch-adjustable intelligent visual 3D information acquisition device
CN112082486B Handheld intelligent 3D information acquisition device
CN112253913B Intelligent visual 3D information acquisition device offset from the center of rotation
CN112254676B Portable intelligent 3D information acquisition device
WO2022078433A1 System and method for multi-location combined 3D image acquisition
WO2022078438A1 Indoor 3D information acquisition device
WO2022111104A1 Intelligent visual apparatus for acquiring 3D information from multiple roll angles
WO2022078419A1 Intelligent visual 3D information acquisition device with multiple offset angles
CN112254677B Multi-position combined 3D acquisition system and method based on a handheld device
CN112254671B Multiple combined 3D acquisition system and method
CN112254673B Self-rotating intelligent visual 3D information acquisition device
WO2022078444A1 3D information acquisition program control method
WO2022078437A1 Apparatus and method for three-dimensional processing between moving objects
CN112254679B Multi-position combined 3D acquisition system and method
CN112257535B Device and method for object-avoiding three-dimensional matching
WO2022078417A1 Rotating intelligent visual 3D information collection device
CN112254674B Close-range intelligent visual 3D information acquisition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21879476; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21879476; Country of ref document: EP; Kind code of ref document: A1)