WO2022111105A1 - Intelligent visual 3D information acquisition apparatus with free posture

Intelligent visual 3D information acquisition apparatus with free posture

Info

Publication number
WO2022111105A1
WO2022111105A1 (PCT/CN2021/123710)
Authority
WO
WIPO (PCT)
Prior art keywords
image acquisition
posture
image
rotation
distance
Prior art date
Application number
PCT/CN2021/123710
Other languages
English (en)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 左忠斌 filed Critical 左忠斌
Publication of WO2022111105A1

Classifications

    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/296 Synchronisation thereof; Control thereof
    • F16M11/00 Stands or trestles as supports for apparatus or articles placed thereon; Stands for scientific apparatus such as gravitational force meters
    • F16M11/02 Heads
    • F16M11/04 Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand
    • F16M11/043 Allowing translations
    • F16M11/045 Allowing translations adapted to left-right translation movement
    • F16M11/046 Allowing translations adapted to upward-downward translation movement
    • F16M11/048 Allowing translations adapted to forward-backward translation movement
    • F16M11/06 Means allowing adjustment of the apparatus relatively to the stand allowing pivoting
    • F16M11/12 Means allowing pivoting in more than one direction
    • F16M11/18 Heads with mechanism for moving the apparatus relatively to the stand
    • G01B5/02 Measuring arrangements characterised by the use of mechanical techniques, for measuring length, width or thickness
    • G01B7/02 Measuring arrangements characterised by the use of electric or magnetic techniques, for measuring length, width or thickness
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Optical measuring arrangements for measuring two or more coordinates
    • G01B11/02 Optical measuring arrangements for measuring length, width or thickness
    • G01B11/24 Optical measuring arrangements for measuring contours or curvatures
    • G01B15/00 Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. microwaves, X-rays, gamma rays or electrons

Definitions

  • The invention relates to the technical field of topography measurement, in particular 3D topography measurement.
  • To build a 3D model, 3D information must first be collected.
  • Commonly used methods include machine vision, structured light, laser ranging, and lidar.
  • Structured light, laser ranging, and lidar all require an active light source emitted toward the target, which can affect the target in some cases; such light sources are costly, and their relatively precise structures are easily damaged.
  • The machine-vision method collects pictures of an object from different angles and matches and stitches them into a 3D model; it is low-cost and easy to use.
  • Multiple cameras can be set at different angles around the object under test, or pictures can be collected from different angles by rotating one or more cameras.
  • Collection targets include both the outer surface of an object and interior spaces, yet the prior art never addresses how to combine the two in a unified solution; there is currently no acquisition method or device applicable both to an object's outer surface and to an interior space.
  • Moreover, the shape and structure of target objects and target spaces vary widely during collection, as do the conditions in which the device can be placed, so acquisition equipment often struggles to adapt.
  • In view of the above, the present invention provides an intelligent visual 3D information collection device that overcomes, or at least partially solves, the above problems.
  • the embodiment of the present invention provides a visual 3D information acquisition device, including an image acquisition device, a rotation device, a support device and an attitude setting device;
  • the rotating device drives the supporting device to rotate
  • the support device is provided with an attitude setting device and an image acquisition device;
  • the attitude setting device is used to set the attitude of the image acquisition device
  • the included angle α between the optical axes at two adjacent acquisition positions satisfies an empirical condition expressed in terms of the following quantities:
  • R is the distance from the rotation center to the surface of the target object
  • T is the sum of the object distance and the image distance during acquisition
  • d is the length or width of the photosensitive element of the image acquisition device
  • F is the lens focal length of the image acquisition device
  • u is an empirical coefficient.
  • u < 0.498; preferably u < 0.411; particularly preferably u < 0.366, u < 0.260, u < 0.213, u < 0.194, u < 0.091, or u < 0.048.
  • The posture of the image acquisition device includes an azimuth-angle posture and/or a pitch-angle posture and/or a roll-angle posture relative to the rotation plane.
  • the posture setting device is an adjustable posture setting device or a fixed posture setting device.
  • the support device includes an X-axis translation adjustment unit and/or a Y-axis translation adjustment unit and/or a Z-axis translation adjustment unit.
  • The posture setting device can set each of the azimuth angle, pitch angle, and roll angle within the range of 0-360°.
  • the posture setting device is a three-dimensionally extending fixing device.
  • Embodiments of the present invention further provide a 3D synthesis/recognition apparatus and method, including any of the above-mentioned devices.
  • Embodiments of the present invention further provide an object manufacturing/display apparatus and method, including any of the above-mentioned devices.
  • FIG. 1 shows a schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention
  • FIG. 2 shows a schematic structural diagram of an attitude setting device of a 3D information collection device provided by an embodiment of the present invention
  • FIG. 3 shows another schematic structural diagram of the attitude setting device of the 3D information collection device provided by the embodiment of the present invention
  • FIG. 4 shows a third schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention.
  • FIG. 5 shows a fourth schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention.
  • An embodiment of the present invention provides an intelligent visual 3D information acquisition device (see FIG. 1) comprising an image acquisition device 1, a rotation device 2, a support device 4, a posture setting device 5, and a carrying device 3.
  • The support device 4 is preferably telescopic, i.e., its length is adjustable, so that a support device of appropriate length can be selected according to the size of the target or target area to meet the collection requirements.
  • The posture setting device 5 can adjust and set the direction of the optical acquisition port (optical axis p) of the image acquisition device 1, so that the image acquisition device can realize any posture in space.
  • the attitude setting device 5 may include an azimuth angle setting unit 51 , a pitch angle setting unit 52 , and a roll angle setting unit 53 .
  • the azimuth angle setting unit 51 is used to adjust the azimuth angle of the image acquisition device on the XY plane
  • the pitch angle setting unit 52 is used to adjust the pitch angle of the image acquisition device on the YZ plane
  • the roll angle setting unit 53 is used to adjust the roll angle of the image acquisition device in the XZ plane.
  • The XY plane is the plane in which the rotating device drives the supporting device and the image acquisition device to rotate, and the lens direction of the image acquisition device is the Y direction. The long-side direction of the CCD or CMOS chip of the image acquisition device is the X direction (looking toward the lens, the right side is the positive X direction), and the direction perpendicular to the XY plane, pointing upward, is the positive Z direction.
  • The azimuth angle setting unit 51, the pitch angle setting unit 52, and the roll angle setting unit 53 can be stacked in order from top to bottom to realize adjustment of the three angular orientations, as shown in Figure 3. The order of the three is not fixed, however, and their connection order can be changed; the three units can even be integrated into a single three-axis posture adjustment module.
  • All three units, or the integrated module, can be provided in the posture setting device 5. In some cases only two-axis adjustment is required, in which case any two of the units can be combined (or integrated) as needed; if only single-axis adjustment is required, a single unit suffices. A sketch of composing the three rotations follows.
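As an illustration of how the three setting units compose (a minimal sketch, not taken from the patent; the axis conventions follow the XY/YZ/XZ definitions above, and the stacking order shown is only one of the possible orders):

```python
import numpy as np

def optical_axis(azimuth_deg, pitch_deg, roll_deg):
    """Direction of the optical axis after applying the three posture angles.

    Conventions assumed from the text: the rotation (XY) plane is horizontal,
    the lens initially points along +Y, azimuth rotates about Z (XY plane),
    pitch about X (YZ plane), roll about Y (XZ plane).
    """
    az, pt, rl = np.radians([azimuth_deg, pitch_deg, roll_deg])

    Rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0,         0.0,        1.0]])   # azimuth unit 51
    Rx = np.array([[1.0, 0.0,         0.0],
                   [0.0, np.cos(pt), -np.sin(pt)],
                   [0.0, np.sin(pt),  np.cos(pt)]])   # pitch unit 52
    Ry = np.array([[ np.cos(rl), 0.0, np.sin(rl)],
                   [ 0.0,        1.0, 0.0],
                   [-np.sin(rl), 0.0, np.cos(rl)]])   # roll unit 53

    R = Rz @ Rx @ Ry                          # one possible stacking order
    return R @ np.array([0.0, 1.0, 0.0])      # initial axis along +Y

print(optical_axis(30, 10, 0))
```

Note that roll alone does not change the pointing direction of the optical axis (it rotates the sensor about that axis), which is why the roll unit matters mainly for the orientation of the image frame.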
  • A rod 6 can be added between the posture setting device and the image acquisition device, as shown in FIG. 4, so that even if the image acquisition device, the support device, and the rotation device are not in the same plane and the distance between them is large, the acquisition range is not blocked.
  • the rod 6 may be a straight rod extending in a certain direction, or may be a curved rod extending in any three-dimensional space.
  • the rod 6 can also be a length-adjustable rod.
  • the support device 4 may also include units that translate in three directions of XYZ: as shown in FIG. 5 , an X translation unit 41 , a Y translation unit 42 , and a Z translation unit 43 .
  • they can be a combination of multiple units or an integrated module; they can be combined in any order, in pairs, or individually.
  • In this way the image acquisition device can be translated in any of the three directions to a position offset from the rotation center; in some cases this prevents the carrying device or rotating device from entering the field of view and interfering with the acquisition.
  • the support device 4 can be in the form of a slide rail, or can be a fixed bracket.
  • the above-mentioned posture setting device is an adjustable posture setting device, that is, it is locked after being adjusted to the set posture.
  • the adjustable attitude setting device can be manually adjusted or automatically adjusted electrically.
  • it can also be a fixed posture setting device, that is, after the image acquisition device is installed on it, the optical axis direction naturally satisfies the set posture, and the fixed posture is not adjustable.
  • For example, a pitch bracket, a roll bracket, and a rotation bracket can be used to fix the pitch, roll, and azimuth angles, respectively.
  • the posture setting device may be a fixing device (eg, a bracket) extending in a three-dimensional direction.
  • the rotating shaft of the rotating device can also be connected to the image capturing device through a deceleration device, for example, through a gear set or the like.
  • As the image capturing device rotates 360° in the horizontal plane, it captures images of the target at specific positions (the specific shooting positions are described in detail later). The shooting can be performed in synchronization with the rotation, or the device can stop at each shooting position, capture, and then continue rotating, and so on.
  • The above-mentioned rotating device may be a motor, a stepping motor, a servo motor, a micro motor, or the like.
  • the rotating device (for example, various types of motors) can rotate at a specified speed under the control of the controller, and can rotate at a specified angle, so as to realize the optimization of the collection position.
  • the specific collection position will be described in detail below.
  • the rotating device in the existing equipment can also be used, and the image capturing device can be installed thereon.
  • The carrying device bears the weight of the entire equipment, and the rotating device 2 is connected with the carrying device 3.
  • the carrying device may be a tripod, a base with a supporting device, or the like.
  • the rotating device is located in the center part of the carrier to ensure balance. But in some special occasions, it can also be located in any position of the carrier. Furthermore, the carrying device is not necessary.
  • The rotating device can also be installed directly in the application environment, e.g., on the roof of a vehicle.
  • the carrying device can also be a hand-held part, so that the 3D acquisition device can be used by hand.
  • Although the carrying device 3 is shown at the lowermost part of the device in the figure, it is understood that the entire device can also be used completely upside down. The device need not be used vertically at all: it can be rotated 90° for horizontal use, or used at any inclination angle, as actual needs dictate.
  • the 3D information acquisition device may further include a ranging device, the ranging device is fixedly connected with the image acquisition device, and the pointing direction of the ranging device is the same as the direction of the optical axis of the image acquisition device.
  • the distance measuring device can also be fixedly connected to the rotating device, as long as it can rotate synchronously with the image capturing device.
  • An installation platform may be provided; the image acquisition device and the distance measuring device are both located on the platform, which is installed on the rotating shaft of the rotating device and driven to rotate by it.
  • The distance measuring device can be a laser rangefinder, an ultrasonic rangefinder, an electromagnetic-wave rangefinder, or the like, or even a traditional mechanical measuring tool.
  • Alternatively, the 3D acquisition device may be located at a specific position whose distance from the target has already been calibrated, in which case no additional measurement is required.
  • the 3D information acquisition device may further include a light source, and the light source may be disposed around the image acquisition device, on the rotating device, and on the installation platform.
  • the light source can also be set independently, for example, an independent light source is used to illuminate the target. Even when lighting conditions are good, no light source is used.
  • the light source can be an LED light source or an intelligent light source, that is, the parameters of the light source are automatically adjusted according to the conditions of the target object and the ambient light.
  • Preferably, the light sources are distributed around the lens of the image capture device 1, for example as ring-shaped LED lights around the lens, since in some applications the intensity of the light source must be controllable.
  • To soften the illumination, a diffuser device (such as a diffuser housing) can be arranged in the light path of the light source.
  • Alternatively, an LED surface light source can be used directly: its light is both softer and more uniform.
  • an OLED light source can be used, which has a smaller volume, softer light, and has flexible properties, which can be attached to a curved surface.
  • To give the 3D composite model absolute scale, marking points with known coordinates can be set at the position of the target; by collecting these marker points and combining their coordinates, the absolute size of the 3D composite model is obtained. The marking points can be pre-set points or laser light spots.
  • The methods for determining the coordinates of these points may include: ① using laser ranging: a calibration device emits laser light toward the target to form a plurality of calibration-point spots, and the calibration-point coordinates are obtained through the known positional relationships of the laser ranging units in the calibration device. The beams emitted by the laser ranging units in the calibration device fall on the target and form the light spots.
  • Since the laser beams emitted by the laser ranging units are parallel to each other and the positional relationships between the units are known, the two-dimensional coordinates, on the emission plane, of the multiple light spots formed on the target can be obtained.
  • The distance between each laser ranging unit and its corresponding light spot can also be measured, which gives depth information for each spot, i.e., the depth coordinate perpendicular to the emission plane.
  • Thus the three-dimensional coordinates of each spot are obtained.
  • ② Using combined distance and angle measurement: measure the distances to the multiple markers and the angles between them, and calculate their respective coordinates.
  • ③ Using other coordinate measurement tools, such as RTK, global positioning systems, star-tracker positioning systems, and position-and-pose sensors. A sketch of method ① follows.
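A minimal sketch of method ① (an illustration under the stated assumptions, not the patent's implementation): with the ranging units' mutual positions on the emission plane known and the beams parallel and perpendicular to that plane, each spot's 3D coordinates are simply the unit's in-plane coordinates plus the measured distance as depth. All values below are hypothetical.

```python
import numpy as np

# In-plane (x, y) coordinates of each laser ranging unit on the emission
# plane, in metres; known from the calibration device's construction.
unit_xy = np.array([[0.00, 0.00],
                    [0.10, 0.00],
                    [0.00, 0.10],
                    [0.10, 0.10]])

# Distance measured by each unit to its spot on the target (hypothetical values).
depths = np.array([2.31, 2.35, 2.28, 2.33])

# Beams are parallel and perpendicular to the emission plane, so each
# measured distance is directly the z (depth) coordinate of its spot.
spots_3d = np.column_stack([unit_xy, depths])
print(spots_3d)   # known 3D marker coordinates, used to scale the model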
  • the rotating device drives the image acquisition device to rotate at a certain speed, and the image acquisition device performs image acquisition at a set position during the rotation process. At this time, the rotation may not be stopped, that is, the image acquisition and the rotation are performed synchronously; or the rotation may be stopped at the position to be acquired, image acquisition is performed, and the rotation continues to the next position to be acquired after the acquisition is completed.
  • The rotating device can be driven by a pre-programmed control unit, or it can communicate with a host computer through a communication interface and be controlled by the host computer. In particular, it can also be connected, wired or wirelessly, to a mobile terminal (e.g., a mobile phone) that controls its rotation. That is, the rotation parameters of the rotating device can be set, and its starting and stopping controlled, through a remote platform, cloud platform, server, host computer, or mobile terminal, as in the sketch below.
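A hedged sketch of the stop-and-go acquisition mode (the `motor` and `camera` objects and their method names are placeholders for whatever motor controller and camera SDK the device actually uses, not a real library API):

```python
import time

def stop_and_go_scan(motor, camera, n_positions=36, settle_s=0.2):
    """Rotate to each acquisition position, pause, capture, continue.

    motor, camera: hypothetical driver objects (e.g. a stepper controller
    and a camera SDK wrapper); their method names are placeholders.
    """
    step_deg = 360.0 / n_positions
    for i in range(n_positions):
        motor.rotate_by(step_deg)    # placeholder: advance one angular step
        time.sleep(settle_s)         # let vibrations damp out before shooting
        frame = camera.capture()     # placeholder: grab one image
        yield i * step_deg, frame    # angle label + image for 3D synthesis
```

The synchronous mode mentioned above differs only in that the motor never stops and the capture is triggered on the fly as each set angle is reached.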
  • The image acquisition device collects multiple images of the target and sends them through the communication device to the remote platform, cloud platform, server, host computer, and/or mobile terminal, where the 3D model synthesis method is used to perform 3D synthesis of the target.
  • Before or during acquisition, the distance measuring device can measure the distance parameters appearing in the relevant formula conditions, namely the distance from the rotation center to the target and the distance from the sensing element to the target.
  • the collection position is calculated according to the corresponding conditional formula, and the user is prompted to set the rotation parameters, or the rotation parameters are automatically set.
  • the rotating device can drive the distance measuring device to rotate, so as to measure the above two distances at different positions.
  • The two distances measured at the multiple measurement points are each averaged, and the averages are used in the formula as the unified distance values for this acquisition.
  • The average may be obtained by simple summation averaging, weighted averaging, another averaging method, or by discarding abnormal values and averaging the rest, as in the sketch below.
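A minimal sketch of the discard-and-re-average strategy (the threshold rule is an illustrative choice, not specified by the patent):

```python
import numpy as np

def robust_mean(samples, k=2.0):
    """Average distance readings after discarding abnormal values.

    Readings farther than k standard deviations from the median are
    treated as outliers and dropped before re-averaging.
    """
    x = np.asarray(samples, dtype=float)
    keep = np.abs(x - np.median(x)) <= k * x.std()
    return x[keep].mean()

# e.g. R measured at several rotation positions, one spurious reading:
print(robust_mean([1.52, 1.49, 1.51, 2.90, 1.50]))   # -> 1.505
```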
  • In addition, the following method of optimizing the camera acquisition position can be adopted.
  • The prior art for such devices does not address how to better optimize the camera position.
  • Such optimization methods as do exist were obtained as empirical conditions under differing experiments.
  • Moreover, some existing position-optimization methods require the size of the target object to be known; in surround-style 3D acquisition this can be measured in advance, but the measurement is inconvenient, as discussed below.
  • Through a large number of experiments, the present invention summarizes the following empirical condition that the camera acquisition interval preferably satisfies during acquisition.
  • The included angle α between the optical axes of the image acquisition device at two adjacent positions satisfies an empirical condition expressed in terms of:
  • R is the distance from the center of rotation to the surface of the target
  • T is the sum of the object distance and the image distance during acquisition, that is, the distance between the photosensitive unit of the image acquisition device and the target object.
  • d is the length or width of the photosensitive element (CCD or CMOS) of the image acquisition device;
  • F is the focal length of the lens of the image acquisition device;
  • u is an empirical coefficient.
  • Typically, a distance measuring device (such as a laser rangefinder) is configured on the acquisition device, with its optical axis adjusted parallel to the optical axis of the image acquisition device. It then measures the distance from the acquisition device to the surface of the target object, and from the measured distance and the known positional relationships between the distance measuring device and the components of the acquisition device, R and T can be obtained.
  • The distance from the photosensitive element to the surface of the target object along the optical axis is taken as T. Besides this method, multi-point averaging or other methods can be used; the principle is that the value of T should not deviate from the image-object distance at the time of acquisition.
  • Likewise, the distance from the center of rotation to the surface of the target object along the optical axis is taken as R, and multi-point averaging or other methods can also be used; the principle is that the value of R should not deviate from the rotation radius at the time of acquisition.
  • In the prior art, the size of the object is used to estimate the camera position, but the size changes with each measured object: after collecting 3D information of a large object, collecting a small object requires re-measuring the size and recalculating. Such inconvenient and repeated measurements introduce errors, resulting in incorrect camera position estimates.
  • The present invention instead gives empirical conditions that the camera position must satisfy, so there is no need to measure the size of the object directly.
  • d and F are fixed parameters of the camera; the manufacturer provides them when the camera and lens are purchased, so no measurement is required.
  • R and T are simple straight-line distances that can be measured easily with traditional tools such as a ruler or a laser rangefinder.
  • During acquisition, the acquisition direction of the image acquisition device (e.g., the camera lens orientation) generally faces away from the rotation center.
  • u should be less than 0.498.
  • u < 0.411 is preferred, especially u < 0.366.
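For illustration only: the patent states the condition on α as an inequality in R, T, d, F, and u, which is represented above only through its parameter definitions. The sketch below uses a generic overlap-style stand-in, taking a fraction u of the angular field of view computed from d and F and scaling it by R/T, purely to show how the quantities interact; it is not the patent's formula.

```python
import math

def angular_step_deg(R, T, d, F, u=0.366):
    """Illustrative angular step between adjacent acquisition positions.

    NOT the patent's inequality: a generic overlap-style stand-in that
    takes a fraction u of the angular field of view (from sensor size d
    and focal length F), scaled by the ratio R/T.
    """
    fov = 2.0 * math.atan(d / (2.0 * F))   # angular field of view from d and F
    return math.degrees(u * fov * (R / T))

# Example: R = 1.5 m, T = 2.0 m, sensor width d = 23.5 mm, F = 35 mm lens:
alpha = angular_step_deg(1.5, 2.0, 0.0235, 0.035)
print(f"step = {alpha:.1f} deg -> about {int(360 // alpha)} positions per turn")
```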
  • the multiple images acquired by the image acquisition device are sent to the processing unit, and the following algorithm is used to construct a 3D model.
  • the processing unit may be located in the acquisition device, or may be located remotely, such as a cloud platform, a server, a host computer, and the like.
  • the specific algorithm mainly includes the following steps:
  • Step 1: Perform image enhancement on all input photos.
  • A Wallis filter is used to enhance the contrast of the original photo while suppressing noise. In its standard form, consistent with the parameter definitions below, the transform is:

    f(x, y) = [g(x, y) - m_g] · (c · s_f) / (c · s_g + (1 - c) · s_f) + b · m_f + (1 - b) · m_g

  • g(x, y) is the gray value of the original image at (x, y);
  • f(x, y) is the gray value of the image after enhancement by the Wallis filter;
  • m_g is the local gray mean of the original image;
  • s_g is the local gray standard deviation of the original image;
  • m_f is the target value for the local gray mean of the transformed image;
  • s_f is the target value for the local gray standard deviation of the transformed image;
  • c ∈ (0, 1) is the expansion constant for the image variance;
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • This filter greatly enhances image texture patterns at different scales, so it increases the number and accuracy of the feature points extracted from the images and improves the reliability and accuracy of the matching results in photo feature matching. A small implementation sketch follows.
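A compact sketch of the Wallis transform defined above (the sliding-window size and the target statistics are illustrative choices, not values from the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(g, win=31, mf=127.0, sf=60.0, c=0.8, b=0.9):
    """Wallis filtering with the parameters defined above.

    g      : grayscale image as a float array
    win    : window for the local statistics (illustrative choice)
    mf, sf : target local mean / standard deviation
    c, b   : variance-expansion and brightness constants in (0, 1)
    """
    g = g.astype(np.float64)
    mg = uniform_filter(g, win)                                           # m_g
    sg = np.sqrt(np.maximum(uniform_filter(g * g, win) - mg * mg, 0.0))   # s_g
    r1 = c * sf / (c * sg + (1.0 - c) * sf)    # multiplicative (contrast) term
    r0 = b * mf + (1.0 - b) * mg               # additive (brightness) term
    return np.clip((g - mg) * r1 + r0, 0, 255)
```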
  • Step 2: Extract feature points from all input photos and perform feature point matching to obtain sparse feature points.
  • the SURF operator is used to extract and match the feature points of the photo.
  • The SURF feature matching method involves three stages: feature point detection, feature point description, and feature point matching. It detects feature points with a Hessian matrix, replaces second-order Gaussian filtering with box filters, accelerates convolution with integral images to improve calculation speed, and reduces the dimensionality of the local image feature descriptor to speed up matching.
  • The main steps are as follows. ① Construct the Hessian matrix and generate all interest points for feature extraction; the purpose of the Hessian matrix is to find the stable edge points (mutation points) of the image.
  • ② Construct the scale space and locate the feature points: each pixel processed by the Hessian matrix is compared with the 26 neighboring points in its two-dimensional image space and scale-space neighborhood, and the key points are initially located.
  • ③ Determine the main direction of each feature point from the Haar wavelet responses in its circular neighborhood: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed, the sector is then rotated in steps of 0.2 radians and the responses recomputed, and the direction of the sector with the largest response is taken as the main direction of the feature point.
  • ④ Generate the descriptor: a block of 4×4 rectangular sub-regions is taken around the feature point, oriented along the feature point's main direction. In each sub-region the Haar wavelet responses of 25 pixels are computed in the horizontal and vertical directions (both relative to the main direction), giving four values per sub-region: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses, and the sum of the absolute vertical responses.
  • The matching degree is determined by computing the Euclidean distance between the descriptors of two feature points: the shorter the Euclidean distance, the better the match. A sketch of this step follows.
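A minimal OpenCV sketch of the detect-describe-match step. SURF requires an opencv-contrib build with nonfree modules enabled (cv2.xfeatures2d.SURF_create); SIFT is shown here as the commonly available analogue, with nearest-neighbour Euclidean matching as described:

```python
import cv2

def match_features(img1_path, img2_path, ratio=0.7):
    """Detect keypoints, compute descriptors, match by Euclidean distance."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    det = cv2.SIFT_create()          # SURF: cv2.xfeatures2d.SURF_create()
    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)       # Euclidean distance
    knn = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test: keep matches clearly better than the runner-up.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp1, kp2, good
```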
  • Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target and the position-and-attitude data of the cameras, obtaining the sparse target-model point cloud and the position model coordinates.
  • Taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • Stereo pair selection: for each image in the input dataset, a reference image is selected to form a stereo pair for computing a depth map. Rough depth maps are thus obtained for all images; since these may contain noise and errors, the neighborhood depth maps are used in a consistency check to optimize the depth map of each image.
  • Finally, depth map fusion is performed to obtain the 3D point cloud of the entire scene. A simplified sketch of the consistency check follows.
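A simplified version of the neighbourhood depth-map consistency check (an illustration under pinhole-camera assumptions; the intrinsics K and the world-from-camera poses are inputs the pipeline already has after Step 3, and all reference depths are assumed valid and positive):

```python
import numpy as np

def consistency_filter(depth_ref, depth_nbr, K, pose_ref, pose_nbr, tol=0.01):
    """Zero out reference-view depths that a neighbouring view contradicts.

    depth_* : (H, W) depth maps; K : (3, 3) intrinsics;
    pose_*  : (4, 4) world-from-camera transforms.
    """
    H, W = depth_ref.shape
    v, u = np.mgrid[0:H, 0:W]
    z = depth_ref.ravel()
    pix = np.stack([u.ravel() * z, v.ravel() * z, z])      # pixel coords * depth
    cam = np.linalg.inv(K) @ pix                           # back-project to ref camera
    world = pose_ref @ np.vstack([cam, np.ones_like(z)])   # ref camera -> world
    nbr = np.linalg.inv(pose_nbr) @ world                  # world -> neighbour camera
    zn = np.where(np.abs(nbr[2]) < 1e-9, 1e-9, nbr[2])     # guard divisions
    proj = K @ nbr[:3]
    un = (proj[0] / zn).round().astype(int)
    vn = (proj[1] / zn).round().astype(int)

    ok = (un >= 0) & (un < W) & (vn >= 0) & (vn < H) & (nbr[2] > 0)
    agree = np.zeros_like(ok)
    idx = np.where(ok)[0]
    # Depth the neighbour actually stored at the reprojected pixel:
    agree[idx] = np.abs(depth_nbr[vn[idx], un[idx]] - nbr[2, idx]) < tol * nbr[2, idx]

    out = z.copy()
    out[~agree] = 0.0                                      # mark inconsistent depths invalid
    return out.reshape(H, W)
```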
  • Step 4: Reconstruct the target surface from the dense point cloud using Poisson surface reconstruction, which involves defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship; the vector field of the point cloud is computed according to this integral relationship; the approximation of the gradient field of the indicator function forms the Poisson equation; an approximate solution is obtained by matrix iteration; the isosurface is extracted with the marching cubes algorithm; and the model of the measured object is thereby reconstructed from the measured point cloud. An Open3D sketch of this step follows.
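As a concrete illustration of Step 4, a sketch using Open3D's Poisson reconstruction (not the patent's own code; the file names are placeholders):

```python
import open3d as o3d

# Dense point cloud produced by Step 3 (placeholder file name).
pcd = o3d.io.read_point_cloud("dense_points.ply")
pcd.estimate_normals()        # Poisson reconstruction needs oriented normals

# The octree depth controls the resolution of the function space.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("surface.ply", mesh)
```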
  • Step 5: Perform fully automatic texture mapping of the target model; after the surface model is constructed, texture mapping proceeds as follows.
  • ① Acquire texture data by reconstructing the target's surface triangle mesh from the images.
  • ② Analyze the visibility of the reconstructed model's triangles, using the calibration information of the images to compute the set of visible images and the optimal reference image for each triangular face.
  • ③ Cluster the triangular faces into several reference-image texture patches.
  • ④ Automatically sort the texture patches to generate the texture image: the generated patches are sorted by their size relationships, a texture image with the smallest enclosing area is generated, and the texture-mapping coordinates of each triangular face are obtained.
  • The terms target object, target, and object above all denote the object whose three-dimensional information is to be acquired; it can be a single solid object or a combination of multiple objects, for example a building or a part.
  • The 3D information of the target includes 3D images, 3D point clouds, 3D meshes, local 3D features, 3D dimensions, and all other parameters of the target carrying 3D features.
  • The term three-dimensional in the present invention means having information in the three directions XYZ, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also fundamentally different from definitions called three-dimensional, panoramic, or holographic that actually include only two-dimensional information and, in particular, no depth information.
  • the acquisition area mentioned in the present invention refers to the range that can be photographed by an image acquisition device (eg, a camera).
  • The image acquisition device in the present invention can be a CCD, CMOS, camera, video camera, industrial camera, monitor, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with image acquisition capability.
  • Modules in the devices of the embodiments can be adaptively changed and arranged in one or more devices different from those of the embodiments.
  • The modules, units, or components of the embodiments may be combined into one module, unit, or component, or divided into multiple sub-modules, sub-units, or sub-assemblies. All features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination, except where at least some of such features and/or processes or units are mutually exclusive.
  • Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • In practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components of the device according to embodiments of the present invention.
  • The present invention can also be implemented as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention provide a visual 3D information acquisition apparatus comprising an image acquisition device, a rotation device, a support device, and a posture setting device, wherein the rotation device drives the support device in rotation; the posture setting device and the image acquisition device are arranged on the support device; and the posture setting device is used to set a deviation angle of the optical axis of the image acquisition device, in the rotation plane, with respect to the acquisition direction, such that there is a set included angle between the optical axis and the line connecting the rotation center and the image acquisition device. The acquisition position of the camera is optimized by measuring the distance from the rotation center to the target object and the distance from the image sensing element to the target object, taking into account both the speed and the result of 3D construction.
PCT/CN2021/123710 2020-11-27 2021-10-14 Intelligent visual 3D information acquisition apparatus with free posture WO2022111105A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011361472.1A CN112492292B (zh) 2020-11-27 2020-11-27 Free-posture intelligent visual 3D information acquisition device (一种自由姿态的智能视觉3d信息采集设备)
CN202011361472.1 2020-11-27

Publications (1)

Publication Number Publication Date
WO2022111105A1 (fr) 2022-06-02

Family

ID=74936435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123710 WO2022111105A1 (fr) 2021-10-14 Intelligent visual 3D information acquisition apparatus with free posture

Country Status (2)

Country Link
CN (1) CN112492292B (fr)
WO (1) WO2022111105A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117233167A (zh) * 2023-08-23 2023-12-15 北京易点淘网络技术有限公司 一种检测装置及方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112492292B (zh) * 2020-11-27 2023-04-11 天目爱视(北京)科技有限公司 一种自由姿态的智能视觉3d信息采集设备
CN113155047B (zh) * 2021-04-02 2022-04-15 中车青岛四方机车车辆股份有限公司 长距离孔距测量装置、方法、存储介质、设备及轨道车辆
CN117395509B (zh) * 2023-12-11 2024-03-22 华南理工大学 一种用于三维重建的图像及其位姿信息的自动采集装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109297413A (zh) * 2018-11-30 2019-02-01 中国科学院沈阳自动化研究所 一种大型筒体结构视觉测量方法
CN110986769A (zh) * 2019-12-12 2020-04-10 天目爱视(北京)科技有限公司 一种超高超长物体三维采集装置
CN111060024A (zh) * 2018-09-05 2020-04-24 天目爱视(北京)科技有限公司 旋转中心轴与图像采集装置相交的3d测量及获取装置
CN111415388A (zh) * 2020-03-17 2020-07-14 Oppo广东移动通信有限公司 一种视觉定位方法及终端
CN111627070A (zh) * 2020-04-30 2020-09-04 贝壳技术有限公司 一种对旋转轴进行标定的方法、装置和存储介质
CN112492292A (zh) * 2020-11-27 2021-03-12 天目爱视(北京)科技有限公司 一种自由姿态的智能视觉3d信息采集设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104748746B (zh) * 2013-12-29 2017-11-03 刘进 智能机姿态测定及虚拟现实漫游方法
CN108489482B (zh) * 2018-02-13 2019-02-26 视辰信息科技(上海)有限公司 视觉惯性里程计的实现方法及系统
JP7052564B2 (ja) * 2018-05-29 2022-04-12 オムロン株式会社 視覚センサシステム、制御方法およびプログラム
CN111292364B (zh) * 2020-01-21 2021-02-02 天目爱视(北京)科技有限公司 一种三维模型构建过程中图像快速匹配的方法
CN111429523B (zh) * 2020-03-16 2021-06-15 天目爱视(北京)科技有限公司 一种在3d建模中远距离标定方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060024A (zh) * 2018-09-05 2020-04-24 天目爱视(北京)科技有限公司 旋转中心轴与图像采集装置相交的3d测量及获取装置
CN109297413A (zh) * 2018-11-30 2019-02-01 中国科学院沈阳自动化研究所 一种大型筒体结构视觉测量方法
CN110986769A (zh) * 2019-12-12 2020-04-10 天目爱视(北京)科技有限公司 一种超高超长物体三维采集装置
CN111415388A (zh) * 2020-03-17 2020-07-14 Oppo广东移动通信有限公司 一种视觉定位方法及终端
CN111627070A (zh) * 2020-04-30 2020-09-04 贝壳技术有限公司 一种对旋转轴进行标定的方法、装置和存储介质
CN112492292A (zh) * 2020-11-27 2021-03-12 天目爱视(北京)科技有限公司 一种自由姿态的智能视觉3d信息采集设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117233167A (zh) * 2023-08-23 2023-12-15 北京易点淘网络技术有限公司 一种检测装置及方法
CN117233167B (zh) * 2023-08-23 2024-03-19 北京易点淘网络技术有限公司 一种检测装置及方法

Also Published As

Publication number Publication date
CN112492292B (zh) 2023-04-11
CN112492292A (zh) 2021-03-12

Similar Documents

Publication Publication Date Title
WO2022111105A1 (fr) Appareil intelligent d'acquisition d'informations visuelles 3d avec posture libre
WO2022078418A1 (fr) Appareil intelligent d'acquisition d'informations tridimensionnelles pouvant tourner de manière stable
CN112361962B (zh) 一种多俯仰角度的智能视觉3d信息采集设备
WO2022078442A1 (fr) Procédé d'acquisition d'informations 3-d basé sur la fusion du balayage optique et de la vision intelligente
CN112257537B (zh) 一种智能多点三维信息采集设备
WO2022078440A1 (fr) Dispositif et procédé d'acquisition et de détermination d'occupation d'espace comprenant un objet mobile
CN112254680B (zh) 一种多自由度的智能视觉3d信息采集设备
WO2022078439A1 (fr) Appareil et procédé d'acquisition et de mise en correspondance d'informations 3d d'espace et d'objet
CN112254638B (zh) 一种可俯仰调节的智能视觉3d信息采集设备
CN112254676B (zh) 一种便携式智能3d信息采集设备
CN112253913B (zh) 一种与旋转中心偏离的智能视觉3d信息采集设备
CN112082486B (zh) 一种手持式智能3d信息采集设备
WO2022111104A1 (fr) Appareil visuel intelligent pour l'acquisition d'informations 3d à partir de multiples angles de roulis
WO2022078419A1 (fr) Dispositif intelligent d'acquisition d'informations 3d visuelles, comprenant de multiples angles de décalage
WO2022078444A1 (fr) Procédé de commande de programme d'acquisition d'informations 3d
WO2022078438A1 (fr) Dispositif d'acquisition d'informations 3d d'intérieur
WO2022078433A1 (fr) Système et procédé d'acquisition d'images 3d combinées à de multiples emplacements
WO2022078437A1 (fr) Appareil et procédé de traitement tridimensionnel entre des objets en mouvement
CN112254673B (zh) 一种自转式智能视觉3d信息采集设备
CN112254677B (zh) 一种基于手持设备的多位置组合式3d采集系统及方法
CN112254671B (zh) 一种多次组合式3d采集系统及方法
WO2022078421A1 (fr) Dispositif intelligent de collecte d'informations 3d visuelles à angle de pas multiples
CN112254679A (zh) 一种多位置组合式3d采集系统及方法
WO2022078417A1 (fr) Dispositif intelligent rotatif de collecte d'informations 3d visuelles
CN112254674B (zh) 一种近距离智能视觉3d信息采集设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896607

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896607

Country of ref document: EP

Kind code of ref document: A1