WO2022111104A1 - Intelligent visual 3D information acquisition device with multiple roll angles - Google Patents

Intelligent visual 3D information acquisition device with multiple roll angles (一种多翻滚角度的智能视觉3D信息采集设备)

Info

Publication number
WO2022111104A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image acquisition
angle
angle setting
rotation
Prior art date
Application number
PCT/CN2021/123707
Other languages
English (en)
French (fr)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 左忠斌
Publication of WO2022111104A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16MFRAMES, CASINGS OR BEDS OF ENGINES, MACHINES OR APPARATUS, NOT SPECIFIC TO ENGINES, MACHINES OR APPARATUS PROVIDED FOR ELSEWHERE; STANDS; SUPPORTS
    • F16M11/00Stands or trestles as supports for apparatus or articles placed thereon ; Stands for scientific apparatus such as gravitational force meters
    • F16M11/02Heads
    • F16M11/04Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand
    • F16M11/043Allowing translations
    • F16M11/046Allowing translations adapted to upward-downward translation movement
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16MFRAMES, CASINGS OR BEDS OF ENGINES, MACHINES OR APPARATUS, NOT SPECIFIC TO ENGINES, MACHINES OR APPARATUS PROVIDED FOR ELSEWHERE; STANDS; SUPPORTS
    • F16M11/00Stands or trestles as supports for apparatus or articles placed thereon ; Stands for scientific apparatus such as gravitational force meters
    • F16M11/02Heads
    • F16M11/04Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand
    • F16M11/06Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting
    • F16M11/12Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting in more than one direction
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16MFRAMES, CASINGS OR BEDS OF ENGINES, MACHINES OR APPARATUS, NOT SPECIFIC TO ENGINES, MACHINES OR APPARATUS PROVIDED FOR ELSEWHERE; STANDS; SUPPORTS
    • F16M11/00Stands or trestles as supports for apparatus or articles placed thereon ; Stands for scientific apparatus such as gravitational force meters
    • F16M11/02Heads
    • F16M11/18Heads with mechanism for moving the apparatus relatively to the stand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details

Definitions

  • the invention relates to the technical field of topography measurement, in particular to the technical field of 3D topography measurement.
  • 3D information needs to be collected first.
  • Commonly used methods include the use of machine vision and structured light, laser ranging, and lidar.
  • Structured light, laser ranging, and lidar all require an active light source to be emitted toward the target, which can affect the target in some cases, and the light source is costly. Moreover, the light source structure is relatively precise and easily damaged.
  • the machine vision method is to collect pictures of objects from different angles, and match and stitch these pictures to form a 3D model, which is low-cost and easy to use.
  • multiple cameras can be set at different angles of the object to be tested, or pictures can be collected from different angles by rotating a single or multiple cameras.
  • currently, acquisition targets include both the outer surface of an object and the interior of an object, and the prior art has never addressed how to combine the two in a unified solution. That is to say, there is currently no acquisition method or device that can be applied to both the outer surface of an object and its interior space.
  • the present invention provides a visual 3D information acquisition device that overcomes the above problems or at least partially solves the above problems.
  • Embodiments of the present invention provide a visual 3D information acquisition device, including an image acquisition device, a rotation device, a support device, and an angle setting device;
  • the rotating device drives the supporting device to rotate
  • An angle setting device and an image acquisition device are arranged on the support device;
  • the angle setting device is used to set the roll angle between the image acquisition device and the rotating plane
  • the included angle α of the optical axes at two adjacent acquisition positions satisfies the following condition (the inequality is given as formula image PCTCN2021123707-appb-000001 in the original filing):
  • R is the distance from the rotation center to the surface of the target object
  • T is the sum of the object distance and the image distance during acquisition
  • d is the length or width of the photosensitive element of the image acquisition device
  • F is the lens focal length of the image acquisition device
  • u is an empirical coefficient.
  • u < 0.498; preferably u < 0.411, particularly preferably u < 0.359; in some applications u < 0.250, or u < 0.216, or u < 0.197, or u < 0.055, or u < 0.028.
  • the set included angle may be in the range of 0° to 180°, 90° to -90°, or 0° to -180°.
  • the image acquisition device translates relative to the rotational plane.
  • the set angle may be 20°, 40°, 60°, 80°, 90°, 100°, 120°, 160°, or 180°.
  • the angle setting device is an adjustable angle setting device.
  • the angle setting device is a fixed angle setting device.
  • the support device includes a translation unit such that the image capture device is offset relative to the center of rotation of the device.
  • the support device enables the image capture device to be located at any point in space and offset from the center of rotation of the device.
  • Embodiments of the present invention further provide a 3D synthesis/recognition device and method, including any of the above-mentioned devices and methods.
  • Embodiments of the present invention also provide an object manufacturing/display apparatus and method, including any of the above-mentioned apparatuses and methods.
  • FIG. 1 shows a schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention
  • FIG. 2 shows a schematic diagram of setting an angle setting device of a 3D information collection device provided by an embodiment of the present invention to 20°;
  • FIG. 3 shows a schematic diagram of setting an angle setting device of a 3D information collection device provided by an embodiment of the present invention to 80°;
  • FIG. 4 shows another schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention.
  • FIG. 5 shows another schematic structural diagram of a 3D information collection device provided by an embodiment of the present invention.
  • an embodiment of the present invention provides an intelligent visual 3D information acquisition device, please refer to FIG. 1, including an image acquisition device 1, a rotation device 2, a support device 4, an angle setting device 5, and a carrying device 3.
  • the support device 4 can preferably be a telescopic structure, that is, the length of the support device 4 can be adjusted, so that support devices of different lengths can be selected according to the size of the target or target area to meet the collection requirements.
  • the shape of the support device 4 is variable and can extend in three directions of XYZ.
  • the XY plane is the plane in which the rotating device drives the supporting device and the image acquisition device to rotate, and the direction of the lens of the image acquisition device is the Y direction; the long-side direction of the CCD or CMOS chip of the image acquisition device is the X direction, with the right side, when looking toward the lens, being the positive X direction; the direction perpendicular to the XY plane, pointing upward, is the positive Z direction.
  • the angle setting device 5 can adjust and set the direction of the optical capture port (optical axis p) of the image capture device 1 .
  • the plane in which the rotating device drives the support device and the image acquisition device to rotate is called the XY plane.
  • the direction of the lens of the image acquisition device is the Y direction
  • the long side direction of the CCD or CMOS chip of the image acquisition device is the X direction.
  • the right side is the positive X.
  • the angle setting device can set the roll angle of the image capturing device, that is, the image capturing device can be rotated in the XZ direction through the angle setting device.
  • the vertically upward Z direction is taken as the 0° direction, as shown in FIG. 1
  • the counterclockwise roll angle of the image acquisition device when viewed from the Y direction is defined as the roll angle increasing direction.
  • the angle setting device 5 can rotate the optical axis p of the optical capture port of the image capture device 1 by a certain angle relative to the rotation plane, for example to 20°, 40°, 60°, 80°, 90°, 100°, 120°, or 160°. That is to say, there is an included angle between the optical axis of the image acquisition device along the acquisition direction of the optical acquisition port and the rotation plane; this is the set angle.
  • a rod 6 can be added between the angle setting device and the image acquisition device, as shown in FIG. 4, so that the image acquisition device is not in the same plane as the support device and the rotation device and is spaced farther from them, and the acquisition range is therefore not blocked.
  • the rod 6 can be a straight rod extending in a certain direction, or a curved rod extending in any three-dimensional space.
  • the rod 6 can also be a length-adjustable rod.
  • it can also be realized by changing the shape and structure of the support device.
  • the supporting means may be arranged in an L-like shape, ie comprising a vertical part connected to the rotating means and a lateral part connected to the angle setting means (or a lateral part connected to the rotating means and a vertical part connected to the angle setting means ).
  • the support device can also have other complex shapes (for example, extending along all of the X, Y, and Z axes), so that the image acquisition device can be located at any position in space and deviate from the rotation axis of the device.
  • the support device can also be a displacement device that can freely adjust the position in both the XYZ axes, so that the image acquisition device can translate in the XYZ directions thereon.
  • the above-mentioned vertical pole can be a telescopic device, that is, the extended length of the vertical pole can be adjusted according to actual needs, so as to meet the needs of different target sizes or collection spaces.
  • although the carrying device 3 is shown at the lowermost part of the device in the figure, carrying the entire device, it is understood that the entire device can also be used completely upside down.
  • the entire device does not have to be used vertically, and the entire device can be rotated by 90° so that the device can be used horizontally, or it can be used at any inclination angle, which can be selected according to actual needs.
  • the above-mentioned angle setting device is an adjustable rotating device, which is locked after being rotated to the set angle.
  • the adjustable rotating device can be manually adjusted or automatically adjusted electrically.
  • it can also be a fixed-angle device, that is, after the image acquisition device is installed on it, the optical axis direction naturally satisfies the set angle and is fixed, not adjustable. In this way, in some known fixed-use occasions, the set angle does not need to be adjusted every time, avoiding the inaccuracy caused by repeated adjustment.
  • the rotating shaft of the rotating device can also be connected to the image capturing device through a deceleration device, for example, through a gear set or the like.
  • when the image capturing device rotates 360° in the horizontal plane, it captures images of the target at specific positions (the specific shooting positions are described in detail later). The shooting can be performed in synchronization with the rotation, or the device can stop at each shooting position, shoot, and then continue rotating, and so on.
  • the above-mentioned rotating device may be an electric motor, a stepping motor, a servo motor, a micro motor, or the like.
  • the rotating device (for example, various types of motors) can rotate at a specified speed under the control of the controller, and can rotate at a specified angle, so as to realize the optimization of the collection position.
  • the specific collection position will be described in detail below.
  • the rotating device in the existing equipment can also be used, and the image capturing device can be installed thereon.
  • the support device can also realize the translation relative to the rotation center in the XY plane, so as to realize more flexible acquisition.
  • the bearing device is used to carry the weight of the entire equipment, and the rotating device 2 is connected with the bearing device 3 .
  • the carrying device may be a tripod, a base with a supporting device, or the like.
  • the rotating device is located in the center part of the carrier to ensure balance. But in some special occasions, it can also be located in any position of the carrier. Furthermore, the carrying device is not necessary.
  • the rotating device can be installed directly in the application device, e.g., on the roof of a vehicle.
  • the carrying device can also be a hand-held part, so that the 3D acquisition device can be used by hand.
  • the 3D information acquisition device may further include a ranging device, the ranging device is fixedly connected with the image acquisition device, and the pointing direction of the ranging device is the same as the direction of the optical axis of the image acquisition device.
  • the distance measuring device can also be fixedly connected to the rotating device, as long as it can rotate synchronously with the image capturing device.
  • an installation platform may be provided, the image acquisition device and the distance measuring device are both located on the platform, the platform is installed on the rotating shaft of the rotating device, and is driven and rotated by the rotating device.
  • the distance measuring device can use a variety of methods such as a laser distance meter, an ultrasonic distance meter, an electromagnetic wave distance meter, etc., or a traditional mechanical measuring tool distance measuring device.
  • the 3D acquisition device is located at a specific location, and its distance from the target has been calibrated, and no additional measurement is required.
  • the 3D information acquisition device may further include a light source, and the light source may be disposed around the image acquisition device, on the rotating device, and on the installation platform.
  • the light source can also be set independently, for example, an independent light source is used to illuminate the target. Even when lighting conditions are good, no light source is used.
  • the light source can be an LED light source or an intelligent light source, that is, the parameters of the light source are automatically adjusted according to the conditions of the target object and the ambient light.
  • the light sources are typically distributed around the lens of the image capture device 1, for example as ring-shaped LED lights around the lens. Since in some applications the intensity of the light source needs to be controlled, a diffuser device, such as a diffuser housing, can be arranged on the light path of the light source.
  • alternatively, an LED surface light source can be used directly; its light is not only softer but also more uniform.
  • an OLED light source can be used, which has a smaller volume, softer light, and has flexible properties, which can be attached to a curved surface.
  • to facilitate measuring the actual size of the target, multiple marking points with known coordinates can be set at the target position. By capturing these marker points and combining their coordinates, the absolute size of the synthesized 3D model is obtained. These marking points can be pre-set points or laser light spots.
  • the method for determining the coordinates of these points may include: ① using laser ranging: a calibration device emits laser light toward the target to form a plurality of calibration point spots, and the calibration point coordinates are obtained through the known positional relationship of the laser ranging units in the calibration device. The calibration device emits laser light toward the target so that the beam emitted by each laser ranging unit falls on the target and forms a light spot.
  • since the laser beams emitted by the laser ranging units are parallel to each other and the positional relationship between the units is known, the two-dimensional coordinates, on the emission plane, of the multiple light spots formed on the target can be obtained.
  • by measuring with the laser ranging units, the distance between each unit and its corresponding light spot can be obtained, which is equivalent to obtaining the depth of each light spot formed on the target, i.e. the depth coordinate perpendicular to the emission plane.
  • thus, the three-dimensional coordinates of each spot can be obtained.
  • ② combining distance and angle measurement: measure the distances to the multiple marking points and the angles between them, and compute their respective coordinates; ③ use other coordinate measurement tools, such as RTK, a global coordinate positioning system, a star-sensor positioning system, position and pose sensors, etc.
  • the rotating device drives the image acquisition device to rotate at a certain speed, and the image acquisition device performs image acquisition at a set position during the rotation process. At this time, the rotation may not be stopped, that is, the image acquisition and the rotation are performed synchronously; or the rotation may be stopped at the position to be acquired, image acquisition is performed, and the rotation continues to the next position to be acquired after the acquisition is completed.
  • the rotating device can be driven by a pre-programmed control unit program. It can also communicate with the upper computer through the communication interface, and control the rotation through the upper computer. In particular, it can also be wired or wirelessly connected to the mobile terminal, and the rotation of the rotating device can be controlled by the mobile terminal (eg, a mobile phone). That is, the rotation parameters of the rotating device can be set through the remote platform, cloud platform, server, host computer, and mobile terminal to control the start and stop of its rotation.
  • the image acquisition device collects multiple images of the target, and sends the images to the remote platform, cloud platform, server, host computer and/or mobile terminal through the communication device, and uses the 3D model synthesis method to perform 3D synthesis of the target.
  • the distance measuring device can be used to measure the corresponding distance parameters in the relevant formula conditions, that is, the distance from the rotation center to the target, and the distance from the sensing element to the target, before or at the same time as the acquisition.
  • the collection position is calculated according to the corresponding conditional formula, and the user is prompted to set the rotation parameters, or the rotation parameters are automatically set.
  • the rotating device can drive the distance measuring device to rotate, so as to measure the above two distances at different positions.
  • the two distances measured at multiple measurement points are averaged respectively, and are brought into the formula as the unified distance value collected this time.
  • the average value may be obtained by a summation average method, a weighted average method, or another average value method, or a method of discarding abnormal values and averaging again.
  • the method of optimizing the camera acquisition position can also be adopted.
  • the prior art for such a device does not mention how to better optimize the camera position.
  • even where some optimization methods exist, they were obtained as different empirical conditions under different experiments.
  • some existing position optimization methods need to obtain the size of the target object, which is feasible in surround 3D acquisition, where the size can be measured in advance, but is difficult in open spaces.
  • the present invention conducts a large number of experiments, and summarizes the following empirical conditions that the interval of camera acquisition is preferably satisfied during acquisition.
  • the included angle α of the optical axis of the image acquisition device at two adjacent positions satisfies the following condition (given as formula image PCTCN2021123707-appb-000002 in the original filing):
  • R is the distance from the center of rotation to the surface of the target
  • T is the sum of the object distance and the image distance during acquisition, that is, the distance between the photosensitive unit of the image acquisition device and the target object.
  • d is the length or width of the photosensitive element (e.g., a CCD) of the image acquisition device.
  • F is the focal length of the lens of the image acquisition device.
  • u is the empirical coefficient.
  • usually, a distance measuring device, such as a laser distance meter, is configured on the acquisition device. Its optical axis is adjusted to be parallel to the optical axis of the image acquisition device, so that it can measure the distance from the acquisition device to the surface of the target object; R and T can then be obtained from the measured distance and the known positional relationship between the distance measuring device and the other components of the acquisition device.
  • the distance from the photosensitive element to the surface of the target object along the optical axis is taken as T.
  • multiple averaging methods or other methods can also be used; the principle is that the value of T should not deviate from the sum of the image distance and the object distance at the time of acquisition.
  • the distance from the center of rotation to the surface of the target object along the optical axis is taken as R.
  • multiple averaging methods or other methods can also be used, the principle of which is that the value of R should not deviate from the radius of rotation at the time of acquisition.
  • in the prior art, the size of the object is used to estimate the camera position, but the object size changes with each measured object. For example, after collecting 3D information of a large object, collecting a small object requires re-measuring the size and re-calculating. Such inconvenient measurements and repeated re-measurements introduce measurement errors, resulting in incorrect camera position estimates.
  • in contrast, the present scheme gives the empirical conditions that the camera positions need to meet, and there is no need to directly measure the size of the object.
  • d and F are fixed parameters of the camera. When purchasing a camera and lens, the manufacturer will give the corresponding parameters without measurement.
  • R and T are only a straight line distance, which can be easily measured by traditional measurement methods, such as straightedge and laser rangefinder.
  • moreover, in the device of the present invention, the acquisition direction of the image acquisition device (e.g., camera) faces away from its rotation axis; that is, the lens is oriented generally away from the rotation center.
  • according to a large number of experiments, u should be less than 0.498 to guarantee the speed and effect of synthesis; for a better synthesis effect, preferably u < 0.411, especially preferably u < 0.359; in some applications u < 0.250, or u < 0.216, or u < 0.197, or u < 0.055, or u < 0.028.
  • the multiple images acquired by the image acquisition device are sent to the processing unit, and the following algorithm is used to construct a 3D model.
  • the processing unit may be located in the acquisition device, or may be located remotely, such as a cloud platform, a server, a host computer, and the like.
  • the specific algorithm mainly includes the following steps:
  • Step 1: Perform image enhancement processing on all input photos.
  • the following filter is used to enhance the contrast of the original photos and suppress noise at the same time (the filter formula is given as an image in the original filing).
  • g(x, y) is the gray value of the original image at (x, y)
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter
  • m_g is the local gray mean of the original image
  • s_g is the local gray standard deviation of the original image
  • m_f is the target value of the local gray mean of the transformed image
  • s_f is the target value of the local gray standard deviation of the transformed image
  • c ∈ (0, 1) is the expansion constant of the image variance
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • the filter can greatly enhance the image texture patterns of different scales in the image, so it can improve the number and accuracy of feature points when extracting image point features, and improve the reliability and accuracy of matching results in photo feature matching.
  • Step 2 Extract feature points from all the input photos, and perform feature point matching to obtain sparse feature points.
  • the SURF operator is used to extract and match the feature points of the photo.
  • the SURF feature matching method mainly includes three processes, feature point detection, feature point description and feature point matching. This method uses Hessian matrix to detect feature points, uses Box Filters to replace second-order Gaussian filtering, uses integral image to accelerate convolution to improve calculation speed, and reduces the dimension of local image feature descriptors, to speed up matching.
  • the main steps include: ① construct the Hessian matrix and generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; ② construct the scale space and locate the feature points:
  • each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in 2D image space and scale space, and the key points are initially located; key points with relatively weak energy and wrongly located key points are then filtered out, leaving the final stable feature points.
  • ③ the main direction of each feature point is determined using the Haar wavelet responses in its circular neighborhood: within the circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet responses of all points inside a 60-degree sector are computed, the sector is then rotated in steps of 0.2 radians and the Haar wavelet responses in the region are summed again, and
  • the direction of the sector with the largest value is taken as the main direction of the feature point; ④ a 64-dimensional descriptor is generated: a 4×4 block of rectangular sub-regions is taken around the feature point, with the orientation of the rectangle aligned to the main direction of the feature point.
  • each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction.
  • the Haar wavelet features are four values: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses, and the sum of the absolute vertical responses; these four values per sub-region give a 4×4×4 = 64-dimensional vector as the SURF descriptor.
  • ⑤ feature point matching: the matching degree is determined by computing the Euclidean distance between two feature descriptors; the shorter the Euclidean distance, the better the match between the two feature points.
  • Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target object and the position and attitude data of the cameras, obtaining the model coordinate values of the sparse target-object 3D point cloud and the camera positions.
  • taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • stereo pair selection For each image in the input dataset, we select a reference image to form a stereo pair for computing the depth map. So we can get a rough depth map for all images, these depth maps may contain noise and errors, and we use its neighborhood depth map to perform a consistency check to optimize the depth map for each image.
  • depth map fusion is performed to obtain a 3D point cloud of the entire scene.
  • Step 4 Use dense point cloud to reconstruct the target surface. It includes several processes of defining octrees, setting function spaces, creating vector fields, solving Poisson equations, and extracting isosurfaces.
  • the integral relationship between the sampling point and the indicator function is obtained from the gradient relationship
  • the vector field of the point cloud is obtained according to the integral relationship
  • the approximation of the gradient field of the indicator function is calculated to form the Poisson equation.
  • the approximate solution is obtained by matrix iteration
  • the isosurface is extracted by the marching cubes algorithm
  • the model of the measured object is reconstructed from the measured point cloud.
  • Step 5 Fully automatic texture mapping of the target model. After the surface model is constructed, texture mapping is performed.
  • the main process includes: ① texture data acquisition: reconstruct the surface triangle mesh of the target from the images; ② visibility analysis of the triangles of the reconstructed model: use the calibration information of the images to compute the set of visible images and the optimal reference image for each triangular face; ③ triangle clustering to generate texture patches: according to each triangle's visible image set, optimal reference image, and neighborhood topology, cluster the triangles into several reference-image texture patches; ④ automatic sorting of the texture patches to generate the texture image: sort the generated texture patches according to their size relationship, generate the texture image with the smallest enclosing area, and obtain the texture mapping coordinates of each triangular face.
  • to build an overall 3D model of an engine, 3D modeling of both its outer surface and the interior of its inner cavity is required.
  • for the outer surface, the angle is set to 180°, and acquisition is performed by rotation to obtain multiple images.
  • for the inner cavity surface, the angle is set to 10°, and a device with a micro-camera mounted on the rod is inserted into the cavity and rotated to take pictures, obtaining multiple images.
  • the 3D model of the outer surface and the inner cavity of the engine is synthesized from the two sets of images, enabling quality inspection of the engine's inner cavity.
  • the above-mentioned target object, target, and object all denote the object whose three-dimensional information is to be acquired. It can be a single solid object or a composition of multiple objects, for example a building, a part, or the like.
  • the 3D information of the target includes a 3D image, a 3D point cloud, a 3D mesh, a local 3D feature, a 3D size and all parameters with the 3D feature of the target.
  • the so-called three-dimensional in the present invention refers to having direction information in all of X, Y, and Z, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also fundamentally different from definitions that are called three-dimensional, panoramic, holographic, or stereoscopic but actually include only two-dimensional information and in particular no depth information.
  • the acquisition area mentioned in the present invention refers to the range that can be photographed by an image acquisition device (eg, a camera).
  • the image acquisition device in the present invention can be a CCD, CMOS sensor, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with an image acquisition capability.
  • the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment.
  • the modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore they may be divided into multiple sub-modules or sub-units or sub-assemblies. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination.
  • Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all functions of some or all of the components in the device according to the present invention according to the embodiments of the present invention.
  • DSP digital signal processor
  • the present invention can also be implemented as apparatus or apparatus programs (eg, computer programs and computer program products) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention provide a visual 3D information acquisition device, comprising an image acquisition device, a rotation device, a support device, and an angle setting device; the rotation device drives the support device to rotate; the angle setting device and the image acquisition device are arranged on the support device; and the angle setting device is used to set a roll angle between the image acquisition device and the rotation plane. For the first time, a method and device applicable to 3D information acquisition of both the outer surface and the interior space of an object are proposed. The camera acquisition positions are optimized by measuring the distance from the rotation center to the target and the distance from the image sensing element to the target, thereby balancing the speed and effect of 3D construction.

Description

Intelligent visual 3D information acquisition device with multiple roll angles

Technical Field

The present invention relates to the technical field of topography measurement, and in particular to the technical field of 3D topography measurement.

Background Art

When performing 3D measurement, 3D information needs to be collected first. Commonly used methods include machine vision on the one hand and structured light, laser ranging, and lidar on the other.

Structured light, laser ranging, and lidar all require an active light source to be emitted toward the target, which can affect the target in some cases, and the light source is costly. Moreover, the light source structure is relatively precise and easily damaged.

The machine vision approach collects pictures of an object from different angles and matches and stitches these pictures into a 3D model; it is low-cost and easy to use. When collecting pictures from different angles, multiple cameras can be set at different angles around the object under test, or a single camera or multiple cameras can be rotated to collect pictures from different angles. At present, acquisition targets include both the outer surface and the interior of objects, but the prior art has never addressed how to combine the two in a unified solution. That is to say, there is currently no acquisition method or device applicable to both the outer surface of an object and its interior space.

In addition, the prior art has also proposed constraining the camera positions with an empirical formula involving the rotation angle, the target size, and the object distance, so as to balance synthesis speed and effect. In practice, however, this was found to be feasible only for surround-style 3D acquisition, where the target size can be measured in advance. In open spaces it is difficult to measure the target beforehand, for example when 3D information of streets, intersections, building clusters, tunnels, traffic flows, and the like (without being limited thereto) needs to be acquired, which makes this approach hard to apply. Even for fixed, smaller targets such as furniture or parts of the human body, whose size can be measured in advance, the method is still severely limited: the target size is hard to determine accurately, targets in some applications are replaced frequently, every measurement adds a large amount of extra work, and specialized equipment is needed to measure irregular targets accurately. The resulting measurement errors cause errors in setting the camera positions, which in turn affect acquisition and synthesis speed and effect; accuracy and speed still need further improvement.

Therefore, there is an urgent need for an acquisition method and device that can obtain 3D information of an object accurately, efficiently, and conveniently, and that is applicable to both the outer surface and the interior of the object.
Summary of the Invention

In view of the above problems, the present invention is proposed to provide a visual 3D information acquisition device that overcomes the above problems or at least partially solves them.

An embodiment of the present invention provides a visual 3D information acquisition device, including an image acquisition device, a rotation device, a support device, and an angle setting device;

wherein the rotation device drives the support device to rotate;

the angle setting device and the image acquisition device are arranged on the support device;

the angle setting device is used to set a roll angle between the image acquisition device and the rotation plane;

the included angle α of the optical axes of the image acquisition device at two adjacent acquisition positions during rotation satisfies the following condition:

[inequality shown as formula image PCTCN2021123707-appb-000001 in the original filing]

where R is the distance from the rotation center to the surface of the target, T is the sum of the object distance and the image distance at the time of acquisition, d is the length or width of the photosensitive element of the image acquisition device, F is the focal length of the lens of the image acquisition device, and u is an empirical coefficient.

In an optional embodiment, u < 0.498; preferably u < 0.411, particularly preferably u < 0.359; and in some cases u < 0.250, or u < 0.216, or u < 0.197, or u < 0.055, or u < 0.028.

In an optional embodiment, the set included angle may be in the range of 0° to 180°, 90° to -90°, or 0° to -180°.

In an optional embodiment, the image acquisition device can translate relative to the rotation plane.

In an optional embodiment, the set angle is 20°, 40°, 60°, 80°, 90°, 100°, 120°, 160°, or 180°.

In an optional embodiment, the angle setting device is an adjustable angle setting device.

In an optional embodiment, the angle setting device is a fixed angle setting device.

In an optional embodiment, the support device includes a translation unit, so that the image acquisition device is offset from the rotation center of the device.

In an optional embodiment, the support device allows the image acquisition device to be located at any point in space while being offset from the rotation center of the device.

An embodiment of the present invention further provides a 3D synthesis/recognition apparatus and method, including any of the devices and methods described above.

An embodiment of the present invention also provides an object manufacturing/display apparatus and method, including any of the devices and methods described above.

Inventive Points and Technical Effects

1. For the first time, a method and device applicable to 3D information acquisition of both the outer surface and the interior space of an object are proposed.

2. The camera acquisition positions are optimized by measuring the distance from the rotation center to the target and the distance from the image sensing element to the target, thereby balancing the speed and effect of 3D construction.
Brief Description of the Drawings

Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the invention. Throughout the drawings, the same reference signs denote the same components. In the drawings:

FIG. 1 shows a schematic structural diagram of a 3D information acquisition device provided by an embodiment of the present invention;

FIG. 2 shows a schematic diagram of the angle setting device of the 3D information acquisition device provided by an embodiment of the present invention set to 20°;

FIG. 3 shows a schematic diagram of the angle setting device of the 3D information acquisition device provided by an embodiment of the present invention set to 80°;

FIG. 4 shows another schematic structural diagram of the 3D information acquisition device provided by an embodiment of the present invention;

FIG. 5 shows yet another schematic structural diagram of the 3D information acquisition device provided by an embodiment of the present invention;

The reference numerals in the drawings correspond to the components as follows:

1 image acquisition device;

2 rotation device;

3 carrying device;

4 support device;

5 angle setting device;

6 rod.
Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Structure of the 3D Information Acquisition Device

To solve the above technical problem, an embodiment of the present invention provides an intelligent visual 3D information acquisition device; please refer to FIG. 1, which includes an image acquisition device 1, a rotation device 2, a support device 4, an angle setting device 5, and a carrying device 3.

One end of the support device 4 is connected to the rotation device 2, and the other end carries the image acquisition device 1 mounted via the angle setting device 5, so that, driven by the rotation device 2, the support device 4 drives the image acquisition device 1 to rotate about the rotation center. The support device 4 is preferably a telescopic structure, i.e. its length is adjustable, so that support devices of different lengths can be selected according to the size of the target or target area to meet acquisition requirements. At the same time, the shape of the support device 4 is variable and it can extend in the X, Y, and Z directions.

The XY plane is the plane in which the rotation device drives the support device and the image acquisition device to rotate; the direction in which the lens of the image acquisition device points is the Y direction; the long-side direction of the CCD or CMOS chip of the image acquisition device is the X direction, with the right side, when looking toward the lens, being the positive X direction; and perpendicular to the XY plane, pointing upward, is the positive Z direction. The angle setting device 5 can adjust and set the direction of the optical acquisition port (optical axis p) of the image acquisition device 1. The angle setting device sets the roll angle of the image acquisition device, i.e. through the angle setting device the image acquisition device can be rotated in the XZ direction. The vertically upward Z direction is taken as the 0° direction, as shown in FIG. 1, and the counterclockwise roll of the image acquisition device, viewed from the Y direction, is defined as the direction of increasing roll angle.
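For illustration only (this sketch is not part of the original disclosure), the set roll angle can be turned into an optical-axis direction vector in the X-Y-Z frame defined above; the sign and handedness conventions in the snippet are assumptions of this note, not values taken from the drawings.

```python
import numpy as np

def optical_axis(roll_deg: float) -> np.ndarray:
    """Unit vector of the optical axis for a given set roll angle.

    Assumes the frame described above (X = sensor long side, Y = lens direction,
    Z = up), 0 deg meaning the axis points along +Z, and the angle increasing
    counterclockwise when viewed from +Y (sign convention assumed).
    """
    t = np.deg2rad(roll_deg)
    # rotate the +Z unit vector about the Y axis by the roll angle
    return np.array([np.sin(t), 0.0, np.cos(t)])

print(np.round(optical_axis(0), 3))    # [0. 0. 1.]  -> pointing straight up
print(np.round(optical_axis(90), 3))   # [1. 0. 0.]  -> lying in the rotation plane
print(np.round(optical_axis(180), 3))  # [0. 0. -1.] -> pointing straight down
```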
Referring to FIG. 2 and FIG. 3, the angle setting device 5 can rotate the direction of the optical axis p of the optical acquisition port of the image acquisition device 1 by a certain angle relative to the rotation plane, for example to 20°, 40°, 60°, 80°, 90°, 100°, 120°, or 160°. In other words, there is an included angle between the optical axis of the image acquisition device, along the acquisition direction of the optical acquisition port, and the rotation plane; this is the set angle.

Of course, the set angle can be increased further, for example to 180°, 270°, and so on. However, once the image acquisition device is rotated beyond 180°, this is in effect equivalent to setting a rotation of less than 180° in the opposite direction, i.e. a reverse-rotation setting.

A rod 6 can be added between the angle setting device and the image acquisition device, as shown in FIG. 4, so that the image acquisition device is not in the same plane as the support device and the rotation device and is spaced farther from them, and the acquisition range is therefore not blocked. It will be understood that the rod 6 can be a straight rod extending in a given direction, or a curved rod extending arbitrarily in three-dimensional space. In particular, the rod 6 can also be a length-adjustable rod. In addition, referring to FIG. 5, the same effect can be achieved by changing the shape and structure of the support device. For example, the support device can be arranged in an L-like shape, i.e. comprising a vertical part connected to the rotation device and a lateral part connected to the angle setting device (or a lateral part connected to the rotation device and a vertical part connected to the angle setting device). The support device can of course also have other complex shapes (for example, extending along all three axes), so that the image acquisition device can be located at any position in space and deviate from the rotation axis of the device. Likewise, the support device can be a displacement device whose position can be freely adjusted along the X, Y, and Z axes, so that the image acquisition device mounted on it can translate in the X, Y, and Z directions.

It will be understood that the above-mentioned vertical pole can be a telescopic device, i.e. the extended length of the pole can be adjusted according to actual needs to suit different target sizes or acquisition spaces. Moreover, although in the figure the carrying device 3 is at the lowermost part of the device and carries the whole device, it is understood that the whole device can also be used completely upside down. Of course, the device does not have to be used vertically: the whole device can be rotated by 90° and used horizontally, or used at any inclination angle, as actual needs dictate.
The above-mentioned angle setting device is an adjustable rotating device that is locked after being rotated to the set angle. The adjustable rotating device can be adjusted manually or automatically by electric drive. It can also be a fixed-angle device, i.e. once the image acquisition device is installed on it, the optical axis direction naturally satisfies the set angle and is fixed, not adjustable. In this way, in some known fixed-use occasions the set angle does not need to be adjusted every time, avoiding the inaccuracy caused by repeated adjustment.

Of course, the rotating shaft of the rotation device can also be connected to the image acquisition device through a deceleration device, for example through a gear set or the like. When the image acquisition device rotates 360° in the horizontal plane, it captures images of the target at specific positions (the specific shooting positions are described in detail below). The shooting can be performed in synchronization with the rotation, or the rotation can stop at each shooting position, the image be taken, and the rotation then continue, and so on. The above-mentioned rotation device can be an electric motor, a stepping motor, a servo motor, a micro motor, or the like. The rotation device (for example, any of these motors) can rotate at a specified speed under the control of a controller and can rotate by a specified angle, so as to optimize the acquisition positions; the specific acquisition positions are described in detail below. A rotation device from existing equipment can of course also be used, with the image acquisition device mounted on it.

In addition, the support device can also translate relative to the rotation center within the XY plane, allowing more flexible acquisition.
The carrying device is used to carry the weight of the entire equipment, and the rotation device 2 is connected to the carrying device 3. The carrying device can be a tripod, a base with a support device, or the like. Usually the rotation device is located at the central part of the carrying device to ensure balance, but in some special occasions it can be located at any position on the carrying device. Moreover, the carrying device is not essential: the rotation device can be installed directly in the application device, for example on the roof of a vehicle. The carrying device can also be a hand-held part, so that the 3D acquisition device can be used hand-held.

The 3D information acquisition device may further include a distance measuring device; the distance measuring device is fixedly connected with the image acquisition device, and the pointing direction of the distance measuring device is the same as the direction of the optical axis of the image acquisition device. The distance measuring device can of course also be fixedly connected to the rotation device, as long as it can rotate synchronously with the image acquisition device. Preferably, a mounting platform can be provided, with the image acquisition device and the distance measuring device both located on the platform; the platform is installed on the rotating shaft of the rotation device and is driven to rotate by it. The distance measuring device can be a laser rangefinder, an ultrasonic rangefinder, an electromagnetic wave rangefinder, or another type, or a traditional mechanical measuring tool. Of course, in some applications the 3D acquisition device is located at a specific position whose distance to the target has already been calibrated, so that no additional measurement is required.

The 3D information acquisition device may further include a light source, which can be arranged around the image acquisition device, on the rotation device, or on the mounting platform. The light source can also be set up independently, for example an independent light source illuminating the target; it may even be omitted when lighting conditions are good. The light source can be an LED light source or an intelligent light source, i.e. one whose parameters are automatically adjusted according to the conditions of the target and the ambient light. Usually, the light sources are distributed around the lens of the image acquisition device 1, for example as a ring of LED lights around the lens. Since in some applications the intensity of the light source needs to be controlled, a diffuser device, such as a diffuser housing, can in particular be arranged on the light path of the light source. Alternatively, an LED surface light source can be used directly; its light is not only softer but also more uniform. Even better, an OLED light source can be used, which is smaller, gives softer light, and is flexible, so that it can be attached to a curved surface.
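As an illustrative sketch only: the "intelligent light source" described above adjusts its parameters from the target and the ambient light. One minimal form such a feedback loop could take is shown below; read_preview_frame() and set_led_level() are hypothetical placeholders for the camera and LED-driver interfaces, and the target level and gain are arbitrary example values, not figures from the disclosure.

```python
import numpy as np

TARGET_MEAN = 120.0   # desired mean gray level of the preview image (assumption)
GAIN = 0.005          # proportional gain of the brightness loop (assumption)

def adjust_light(frame: np.ndarray, level: float) -> float:
    """One step of a proportional brightness loop.

    `frame` is a grayscale preview image as a uint8 array, `level` is the
    current LED drive level in [0, 1].  Returns the new drive level.
    """
    mean_gray = float(frame.mean())
    level += GAIN * (TARGET_MEAN - mean_gray)
    return float(np.clip(level, 0.0, 1.0))

# Hypothetical usage with placeholder I/O functions (not part of the disclosure):
# level = 0.5
# while capturing:
#     frame = read_preview_frame()        # placeholder: grab a preview image
#     level = adjust_light(frame, level)
#     set_led_level(level)                # placeholder: drive the LED ring
```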
To facilitate measuring the actual size of the target, multiple marking points with known coordinates can be set at the target position. By capturing these marker points and combining their coordinates, the absolute size of the synthesized 3D model is obtained. The marking points can be pre-set points or laser light spots. The methods for determining the coordinates of these points may include: ① using laser ranging: a calibration device emits laser light toward the target to form a plurality of calibration point spots, and the calibration point coordinates are obtained through the known positional relationship of the laser ranging units within the calibration device. The calibration device emits laser light toward the target so that the beam emitted by each laser ranging unit falls on the target and forms a light spot. Since the laser beams emitted by the laser ranging units are parallel to each other and the positional relationship between the units is known, the two-dimensional coordinates, on the emission plane, of the multiple light spots formed on the target are obtained. Measuring with the laser ranging units gives the distance between each unit and its corresponding light spot, which is equivalent to obtaining the depth of each light spot formed on the target, i.e. the depth coordinate perpendicular to the emission plane. The three-dimensional coordinates of each spot can thus be obtained. ② Combining distance and angle measurement: measure the distances to the multiple marking points and the angles between them, and compute their respective coordinates. ③ Using other coordinate measurement tools: for example RTK, a global coordinate positioning system, a star-sensor positioning system, position and pose sensors, etc.
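The following is a small numerical sketch (not part of the original disclosure) of method ① above: with parallel beams whose emitter positions on the emission plane are known, each spot's 3D coordinate is simply the emitter's in-plane position plus the measured range as depth, and two marker points with known separation can then fix the absolute scale of a reconstruction. All numbers are illustrative.

```python
import numpy as np

def spot_coordinates(emitter_xy: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """3D coordinates of the laser spots in the calibration-device frame.

    emitter_xy : (N, 2) known positions of the parallel ranging units on the
                 emission plane (mm).
    ranges     : (N,)   distances measured by each unit to its spot (mm).
    Since the beams are parallel and perpendicular to the emission plane,
    each spot is simply (x, y, range).
    """
    return np.column_stack([emitter_xy, ranges])

def absolute_scale(model_pts: np.ndarray, real_pts: np.ndarray) -> float:
    """Scale factor mapping a scale-free reconstruction to real units,
    estimated from the distance between the first two marker points."""
    d_model = np.linalg.norm(model_pts[1] - model_pts[0])
    d_real = np.linalg.norm(real_pts[1] - real_pts[0])
    return d_real / d_model

# Illustrative numbers only:
xy = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
r = np.array([812.4, 815.1, 810.7])
print(spot_coordinates(xy, r))
```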
3D Information Acquisition Flow

The rotation device drives the image acquisition device to rotate at a certain speed, and the image acquisition device captures images at the set positions during the rotation. The rotation need not be stopped, i.e. image acquisition and rotation can proceed synchronously; alternatively, the rotation can stop at each position to be acquired, the image be captured, and the rotation then continue to the next acquisition position. The rotation device can be driven by a program in a pre-configured control unit. It can also communicate with a host computer through a communication interface, with the rotation controlled by the host computer. In particular, it can be connected to a mobile terminal by wire or wirelessly, and the rotation of the rotation device can be controlled by the mobile terminal (e.g. a mobile phone). That is, the rotation parameters of the rotation device can be set, and the start and stop of its rotation controlled, through a remote platform, cloud platform, server, host computer, or mobile terminal.

The image acquisition device captures multiple images of the target and sends them through a communication device to a remote platform, cloud platform, server, host computer and/or mobile terminal, where a 3D model synthesis method is used to perform 3D synthesis of the target.

In particular, before or during acquisition, the distance measuring device can be used to measure the distance parameters appearing in the relevant formula condition, namely the distance from the rotation center to the target and the distance from the sensing element to the target. The acquisition positions are then calculated according to the conditional formula, and the user is prompted to set the rotation parameters, or the rotation parameters are set automatically.

When ranging is performed before acquisition, the rotation device can drive the distance measuring device to rotate so that the two distances above are measured at different positions. The two distances measured at the multiple measurement points are then each averaged and substituted into the formula as the unified distance values for this acquisition. The average can be obtained as a plain (summed) mean, a weighted mean, or by another averaging method, or by discarding abnormal values and averaging the rest.
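For illustration (not part of the original disclosure), the averaging options mentioned above — a plain mean, a weighted mean, or discarding abnormal values before averaging — might look like this; the outlier threshold is an arbitrary choice.

```python
import numpy as np

def robust_mean(values, weights=None, k=2.0):
    """Average a set of distance measurements.

    If `weights` is given, a weighted mean is returned; otherwise samples more
    than `k` standard deviations from the median are discarded before taking a
    plain mean (one simple way of dropping abnormal values, as an illustration).
    """
    v = np.asarray(values, dtype=float)
    if weights is not None:
        w = np.asarray(weights, dtype=float)
        return float(np.sum(v * w) / np.sum(w))
    med, std = np.median(v), np.std(v)
    keep = v[np.abs(v - med) <= k * std] if std > 0 else v
    return float(np.mean(keep))

r_samples = [352.1, 353.0, 351.8, 398.5]   # illustrative R measurements, mm
t_samples = [507.2, 506.8, 507.9, 508.1]   # illustrative T measurements, mm
R, T = robust_mean(r_samples), robust_mean(t_samples)
```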
When ranging is performed during acquisition, the two distance values above are measured while the rotation device turns to the first position and performs image acquisition there; they are substituted into the conditional formula to calculate the interval angle, and the next acquisition position is determined from that angle.
Optimization of the Camera Positions

To ensure that the device balances both the effect and the efficiency of 3D synthesis, in addition to the conventional approach of optimizing the synthesis algorithm, the camera acquisition positions can also be optimized. In particular, when the acquisition direction of the camera of a 3D acquisition and synthesis device faces away from its rotation axis, the prior art does not mention how to better optimize the camera positions for such a device. Even where some optimization methods exist, they were obtained as different empirical conditions under different experiments. In particular, some existing position optimization methods require the size of the target, which is feasible in surround-style 3D acquisition, where it can be measured in advance, but is difficult to measure beforehand in open spaces. A method is therefore needed for optimizing the camera positions in the case where the acquisition direction of the camera faces away from its rotation axis. This is exactly the problem the present invention solves and the technical contribution it makes.

To this end, the present invention has conducted a large number of experiments and has summarized the following empirical condition that the camera acquisition interval preferably satisfies during acquisition.

When performing 3D acquisition, the included angle α of the optical axes of the image acquisition device at two adjacent positions satisfies the following condition:

[inequality shown as formula image PCTCN2021123707-appb-000002 in the original filing]

where:

R is the distance from the rotation center to the surface of the target;

T is the sum of the object distance and the image distance at the time of acquisition, i.e. the distance from the photosensitive unit of the image acquisition device to the target;

d is the length or width of the photosensitive element (CCD) of the image acquisition device: when the two positions lie along the length direction of the photosensitive element, d takes the length of the rectangle; when they lie along the width direction, d takes the width of the rectangle;

F is the focal length of the lens of the image acquisition device;

u is an empirical coefficient.

Usually a distance measuring device, such as a laser rangefinder, is configured on the acquisition device. With its optical axis adjusted to be parallel to the optical axis of the image acquisition device, it can measure the distance from the acquisition device to the surface of the target; R and T can then be obtained from the measured distance and the known positional relationship between the distance measuring device and the other components of the acquisition device.

With the image acquisition device at either of the two positions, the distance from the photosensitive element to the surface of the target along the optical axis is taken as T. Besides this method, multiple-measurement averaging or other methods can be used; the principle is that the value of T should not deviate from the sum of the image distance and the object distance at the time of acquisition.

Similarly, with the image acquisition device at either of the two positions, the distance from the rotation center to the surface of the target along the optical axis is taken as R. Besides this method, multiple-measurement averaging or other methods can be used; the principle is that the value of R should not deviate from the rotation radius at the time of acquisition.

Usually, the prior art uses the object size to estimate the camera positions. But the object size changes with each measured object; for example, after acquiring 3D information of a large object, acquiring a small object requires re-measuring the size and re-calculating. Such inconvenient measurements and repeated re-measurements introduce measurement errors, resulting in incorrect camera position estimates. In contrast, the present scheme, based on a large amount of experimental data, gives the empirical condition that the camera positions need to satisfy, without directly measuring the object size. In the empirical condition, d and F are fixed camera parameters given by the manufacturer when the camera and lens are purchased, so no measurement is needed; and R and T are each just a straight-line distance, which can be conveniently measured with traditional methods such as a ruler or a laser rangefinder. At the same time, because in the device of the present invention the acquisition direction of the image acquisition device (e.g. camera) faces away from its rotation axis, i.e. the lens is oriented generally away from the rotation center, it is easier to control the angle α between the optical axes at the two camera positions: only the rotation angle of the drive motor needs to be controlled. It is therefore more reasonable to define the optimal positions in terms of α. As a result, the empirical formula of the present invention makes the preparation process convenient and fast, and it also improves the accuracy of the camera position arrangement, so that the cameras can be set at optimized positions while both 3D synthesis accuracy and speed are taken into account.
According to a large number of experiments, u should be less than 0.498 to guarantee the speed and effect of synthesis; for a better synthesis effect, preferably u < 0.411, especially preferably u < 0.359; and in some applications u < 0.250, or u < 0.216, or u < 0.197, or u < 0.055, or u < 0.028.
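Note for illustration: the inequality bounding α is published only as a formula image (PCTCN2021123707-appb-000002), so it is not reproduced here. The sketch below keeps that condition as a stub to be filled in from the original formula, derives R and T from a single rangefinder reading under stated assumptions about the rig geometry (the two offsets are rig measurements, not values from the patent), and then converts the bound into evenly spaced stop angles for a full turn.

```python
import math

def alpha_bound(R, T, d, F, u):
    """Upper bound (degrees) on the optical-axis angle between adjacent
    acquisition positions.  The actual inequality appears only as an image in
    the published application, so fill it in from the original formula;
    this stub deliberately raises until that is done."""
    raise NotImplementedError("insert the patent's condition on alpha here")

def derive_r_t(laser_range_mm, sensor_behind_lens_mm, axis_to_sensor_mm):
    """Rough R and T from one rangefinder reading.

    Assumes the rangefinder axis is parallel to the camera axis and the
    photosensitive element lies between the rotation axis and the target along
    that axis (the lens faces away from the rotation center)."""
    T = laser_range_mm + sensor_behind_lens_mm
    R = T + axis_to_sensor_mm
    return R, T

def plan_full_turn(R, T, d, F, u=0.359):
    """Evenly spaced stop angles covering 360 degrees while respecting the bound."""
    a_max = alpha_bound(R, T, d, F, u)
    n = max(1, math.ceil(360.0 / a_max))
    return [i * 360.0 / n for i in range(n)]
```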
Experiments were carried out with the device of the present invention; part of the experimental data is shown below, in mm. (The following data are only limited examples.)

When the camera set angle is 0°:

[experimental data table, shown as image PCTCN2021123707-appb-000003 in the original filing]

When the camera set angle is not 0°:

[experimental data tables, shown as images PCTCN2021123707-appb-000004, -000005, and -000006 in the original filing]

The above data were obtained only from experiments carried out to verify the formula condition and do not limit the invention. Even without these data, the objectivity of the formula is not affected. Those skilled in the art can adjust the device parameters and step details as needed to carry out experiments; other data obtained will also satisfy the formula condition.
3D Model Synthesis Method

The multiple images captured by the image acquisition device are sent to a processing unit, and the following algorithm is used to construct the 3D model. The processing unit can be located in the acquisition device or remotely, for example in a cloud platform, a server, a host computer, and the like.

The specific algorithm mainly includes the following steps:

Step 1: Perform image enhancement on all input photos. The following filter is used to enhance the contrast of the original photos while suppressing noise:

[Wallis filter formula, shown as image PCTCN2021123707-appb-000007 in the original filing]

where g(x, y) is the gray value of the original image at (x, y), f(x, y) is the gray value at that point after enhancement by the Wallis filter, m_g is the local gray mean of the original image, s_g is the local gray standard deviation of the original image, m_f is the target value of the local gray mean of the transformed image, s_f is the target value of the local gray standard deviation of the transformed image, c ∈ (0, 1) is the expansion constant of the image variance, and b ∈ (0, 1) is the image brightness coefficient constant.

This filter greatly enhances image texture patterns of different scales, so it increases the number and accuracy of feature points when extracting point features from the images and improves the reliability and accuracy of the matching results in photo feature matching.
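As an illustration only: the Wallis formula itself appears only as an image in the filing, so the snippet below uses a commonly cited form of the Wallis filter consistent with the parameter list above (m_g, s_g, m_f, s_f, c, b); the exact expression and the default parameter values should be treated as assumptions of this note rather than the patent's own formula.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(img, win=31, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Wallis-style local contrast enhancement (commonly cited form).

    img : grayscale image as a 2-D uint8/float array.
    win : side length of the local window used for the local mean/std.
    m_f, s_f : target local mean and standard deviation.
    c, b : variance expansion constant and brightness coefficient in (0, 1).
    """
    g = img.astype(np.float64)
    m_g = uniform_filter(g, size=win)                                       # local mean
    s_g = np.sqrt(np.maximum(uniform_filter(g * g, size=win) - m_g ** 2, 1e-6))  # local std
    f = (g - m_g) * (c * s_f) / (c * s_g + (1.0 - c) * s_f) + b * m_f + (1.0 - b) * m_g
    return np.clip(f, 0, 255).astype(np.uint8)
```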
Step 2: Extract feature points from all input photos and match them to obtain sparse feature points. The SURF operator is used for feature point extraction and matching. The SURF feature matching method mainly comprises three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters instead of second-order Gaussian filtering, uses the integral image to accelerate convolution and increase computation speed, and reduces the dimensionality of the local image feature descriptor to speed up matching. The main steps are: ① construct the Hessian matrix and generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; ② construct the scale space and locate the feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in 2D image space and scale space to initially locate the key points; key points with relatively weak energy and wrongly located key points are then filtered out, leaving the final stable feature points; ③ determine the main direction of each feature point using the Haar wavelet responses in its circular neighborhood: within the circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet responses of all points inside a 60-degree sector are computed; the sector is then rotated in steps of 0.2 radians and the Haar wavelet responses in the region are summed again; finally, the direction of the sector with the largest value is taken as the main direction of the feature point; ④ generate a 64-dimensional descriptor: a 4×4 block of rectangular sub-regions is taken around the feature point, with the orientation of the rectangle aligned to the main direction of the feature point. Each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction. The Haar wavelet features are four values: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses. Taking these four values as the feature vector of each sub-block gives a 4×4×4 = 64-dimensional vector as the SURF descriptor; ⑤ feature point matching: the matching degree is determined by computing the Euclidean distance between two feature descriptors; the shorter the Euclidean distance, the better the match between the two feature points.
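For illustration, step 2 can be reproduced with OpenCV's SURF implementation, which lives in the contrib xfeatures2d module and is only available in builds with the non-free algorithms enabled; the Lowe ratio test at the end is a common extra filtering step, not something stated in the description.

```python
import cv2

def surf_match(path1, path2, hessian_threshold=400, ratio=0.7):
    """Detect SURF keypoints in two photos and match them by Euclidean distance."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

    # Requires opencv-contrib-python built with the non-free modules enabled.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)          # L2 = Euclidean distance
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # Lowe ratio test
    return kp1, kp2, good
```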
Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the target and the position and attitude data of the cameras, thereby obtaining the model coordinate values of the sparse target-object 3D point cloud and the camera positions. Taking the sparse feature points as initial values, dense multi-view matching is performed to obtain dense point cloud data. The process has four main steps: stereo pair selection, depth map computation, depth map refinement, and depth map fusion. For each image in the input dataset, a reference image is selected to form a stereo pair used to compute a depth map. A rough depth map is thus obtained for every image; since these depth maps may contain noise and errors, the neighboring depth maps are used for a consistency check to refine the depth map of each image. Finally, depth map fusion is performed to obtain the 3D point cloud of the entire scene.
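Step 3 describes a standard incremental structure-from-motion (bundle adjustment) stage followed by multi-view stereo (stereo pair selection, depth maps, consistency checking, fusion). As a substitute illustration — not the patent's own implementation — an equivalent pipeline can be scripted around the COLMAP command-line tools; sub-command names and flags should be checked against the installed version, and patch_match_stereo requires a CUDA build.

```python
import os
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def sfm_mvs(image_dir, work_dir):
    """Sparse reconstruction (bundle adjustment) followed by dense MVS with COLMAP."""
    db = f"{work_dir}/database.db"
    os.makedirs(f"{work_dir}/sparse", exist_ok=True)
    run(["colmap", "feature_extractor", "--database_path", db, "--image_path", image_dir])
    run(["colmap", "exhaustive_matcher", "--database_path", db])
    run(["colmap", "mapper", "--database_path", db, "--image_path", image_dir,
         "--output_path", f"{work_dir}/sparse"])
    run(["colmap", "image_undistorter", "--image_path", image_dir,
         "--input_path", f"{work_dir}/sparse/0", "--output_path", f"{work_dir}/dense"])
    run(["colmap", "patch_match_stereo", "--workspace_path", f"{work_dir}/dense"])
    run(["colmap", "stereo_fusion", "--workspace_path", f"{work_dir}/dense",
         "--output_path", f"{work_dir}/dense/fused.ply"])
```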
Step 4: Reconstruct the target surface from the dense point cloud. This includes the processes of defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface. The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship; the vector field of the point cloud is obtained according to the integral relationship; and the approximation of the gradient field of the indicator function is computed to form the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
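For illustration, the Poisson reconstruction of step 4 can be performed with Open3D's screened-Poisson implementation as a stand-in for the patent's own implementation; file names and parameter values are placeholders.

```python
import open3d as o3d

def reconstruct_surface(cloud_path="dense/fused.ply", mesh_path="surface.ply", depth=9):
    """Poisson surface reconstruction of a dense point cloud with Open3D."""
    pcd = o3d.io.read_point_cloud(cloud_path)
    # Poisson reconstruction needs oriented normals to build the vector field.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    o3d.io.write_triangle_mesh(mesh_path, mesh)
    return mesh
```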
Step 5: Fully automatic texture mapping of the target model. After the surface model is constructed, texture mapping is performed. The main process includes: ① texture data acquisition: reconstruct the surface triangle mesh of the target from the images; ② visibility analysis of the triangles of the reconstructed model: use the calibration information of the images to compute the set of visible images and the optimal reference image for each triangular face; ③ triangle clustering to generate texture patches: according to each triangle's visible image set, optimal reference image, and neighborhood topology, cluster the triangles into several reference-image texture patches; ④ automatic sorting of the texture patches to generate the texture image: sort the generated texture patches according to their size relationship, generate the texture image with the smallest enclosing area, and obtain the texture mapping coordinates of each triangular face.
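The last part of step 5 sorts the texture patches by size and packs them into a texture image with a small enclosing area. A minimal greedy shelf-packing heuristic is sketched below for illustration; it is not the patent's exact packing scheme.

```python
def pack_patches(sizes, atlas_width=4096):
    """Greedy shelf packing of texture patches.

    sizes : list of (width, height) patch sizes in pixels.
    Returns (placements, atlas_height) where placements[i] = (x, y) of patch i.
    Patches are sorted tallest-first and laid out in horizontal shelves.
    """
    order = sorted(range(len(sizes)), key=lambda i: sizes[i][1], reverse=True)
    placements = [None] * len(sizes)
    x = y = shelf_h = 0
    for i in order:
        w, h = sizes[i]
        if x + w > atlas_width:          # start a new shelf
            y += shelf_h
            x = shelf_h = 0
        placements[i] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

# Illustrative patch sizes:
print(pack_patches([(600, 400), (300, 700), (1200, 250), (800, 800)]))
```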
It should be noted that the algorithm above is the algorithm used by the present invention; it works in concert with the image acquisition conditions, and using it balances synthesis time and quality. It will be understood, however, that conventional 3D synthesis algorithms from the prior art can equally be used in combination with the scheme of the present invention.
Application Example

To build an overall 3D model of an engine, 3D modeling of both its outer surface and the interior of its inner cavity is required. For the outer-surface modeling, the angle is set to 180° and acquisition is performed by rotation to obtain multiple images. For the inner-cavity surface modeling, the angle is set to 10°, and a device whose rod carries a micro-camera is inserted into the cavity and rotated to take pictures, obtaining multiple images. The 3D model of the engine's outer surface and inner cavity is synthesized from the two sets of images, enabling quality inspection of the engine's inner cavity.

The above-mentioned target object, target, and object all denote the object whose three-dimensional information is to be acquired. It can be a single solid object or a composition of multiple objects, for example a building, a part, and so on. The 3D information of the target includes a 3D image, a 3D point cloud, a 3D mesh, local 3D features, 3D dimensions, and all parameters carrying 3D features of the target. Three-dimensional in the present invention means having direction information in all of X, Y, and Z, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also fundamentally different from definitions called three-dimensional, panoramic, holographic, or stereoscopic that in fact include only two-dimensional information and in particular no depth information.

The acquisition area mentioned in the present invention refers to the range that can be photographed by the image acquisition device (e.g. a camera). The image acquisition device in the present invention can be a CCD, CMOS sensor, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with an image acquisition capability.
In the specification provided here, numerous specific details are described. It will be understood, however, that embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.

Similarly, it should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention the various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.

Those skilled in the art will appreciate that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components in the embodiments can be combined into one module or unit or component, and they can furthermore be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.

Furthermore, those skilled in the art will understand that, although some embodiments described here include certain features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.

The various component embodiments of the present invention can be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) can be used in practice to implement some or all of the functions of some or all of the components of the apparatus according to embodiments of the present invention. The present invention can also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for carrying out part or all of the method described here. Such a program implementing the present invention can be stored on a computer-readable medium or can take the form of one or more signals. Such signals can be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names.

By now, those skilled in the art will recognize that, although a number of exemplary embodiments of the present invention have been shown and described in detail here, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from the disclosure of the present invention without departing from its spirit and scope. Therefore, the scope of the present invention should be understood and recognized as covering all such other variations or modifications.

Claims (13)

  1. A visual 3D information acquisition device, characterized by comprising an image acquisition device, a rotation device, a support device, and an angle setting device;
    wherein the rotation device drives the support device to rotate;
    the angle setting device and the image acquisition device are arranged on the support device;
    the angle setting device is used to set a roll angle between the image acquisition device and the rotation plane;
    the included angle α of the optical axes of the image acquisition device at two adjacent acquisition positions during rotation satisfies the following condition:
    [inequality shown as formula image PCTCN2021123707-appb-100001 in the original filing]
    where R is the distance from the rotation center to the surface of the target, T is the sum of the object distance and the image distance at the time of acquisition, d is the length or width of the photosensitive element of the image acquisition device, F is the focal length of the lens of the image acquisition device, and u is an empirical coefficient.
  2. The device according to claim 1, characterized in that u < 0.498, or u < 0.411, or u < 0.359, or u < 0.250, or u < 0.216, or u < 0.197, or u < 0.055, or u < 0.028.
  3. The device according to claim 1, characterized in that the set included angle may be in the range of 0° to 180°, 90° to -90°, or 0° to -180°.
  4. The device according to claim 1, characterized in that the image acquisition device can translate relative to the rotation plane.
  5. The device according to claim 1, characterized in that the set angle is 20°, 40°, 60°, 80°, 90°, 100°, 120°, 160°, or 180°.
  6. The device according to claim 1, characterized in that the angle setting device is an adjustable angle setting device.
  7. The device according to claim 1, characterized in that the angle setting device is a fixed angle setting device.
  8. The device according to claim 1, characterized in that the support device includes a translation unit, so that the image acquisition device is offset from the rotation center of the device.
  9. The device according to claim 1, characterized in that the support device allows the image acquisition device to be located at any point in space while being offset from the rotation center of the device.
  10. A 3D synthesis or recognition apparatus, comprising the device according to any one of claims 1-9.
  11. A 3D synthesis or recognition method, comprising use of the device according to any one of claims 1-9.
  12. An object manufacturing or display apparatus, comprising the device according to any one of claims 1-9.
  13. An object manufacturing or display method, comprising use of the device according to any one of claims 1-9.
PCT/CN2021/123707 2020-11-25 2021-10-14 Intelligent visual 3D information acquisition device with multiple roll angles WO2022111104A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011341023.0 2020-11-25
CN202011341023.0A CN112484663B (zh) 2020-11-25 Intelligent visual 3D information acquisition device with multiple roll angles

Publications (1)

Publication Number Publication Date
WO2022111104A1 true WO2022111104A1 (zh) 2022-06-02

Family

ID=74934487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123707 WO2022111104A1 (zh) 2020-11-25 2021-10-14 Intelligent visual 3D information acquisition device with multiple roll angles

Country Status (2)

Country Link
CN (1) CN112484663B (zh)
WO (1) WO2022111104A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112484663B (zh) * 2020-11-25 2022-05-03 天目爱视(北京)科技有限公司 Intelligent visual 3D information acquisition device with multiple roll angles

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1314000A1 (en) * 2000-08-25 2003-05-28 3Shape APS Method and apparatus for three-dimensional optical scanning of interior surfaces
CN106989690A (zh) * 2017-02-20 2017-07-28 上海大学 Portable non-contact digital endoscopic measurement device for the inner-cavity topography of objects
CN208579103U (zh) * 2018-07-16 2019-03-05 天目爱视(北京)科技有限公司 3D data acquisition apparatus
CN111076674A (zh) * 2019-12-12 2020-04-28 天目爱视(北京)科技有限公司 Close-range target object 3D acquisition device
CN111292364A (zh) * 2020-01-21 2020-06-16 天目爱视(北京)科技有限公司 Method for fast image matching during construction of a three-dimensional model
CN211373522U (zh) * 2019-12-12 2020-08-28 天目爱视(北京)科技有限公司 Close-range 3D information acquisition device and equipment for 3D synthesis, microscopy, and accessory fabrication
CN112484663A (zh) * 2020-11-25 2021-03-12 天目爱视(北京)科技有限公司 Intelligent visual 3D information acquisition device with multiple roll angles

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6107212B2 (ja) * 2013-02-20 2017-04-05 日本精工株式会社 Method and apparatus for measuring the shape of an article
CN204963811U (zh) * 2015-09-18 2016-01-13 中国科学院紫金山天文台 Low-temperature multi-attitude photographic measurement device
WO2019118969A1 (en) * 2017-12-17 2019-06-20 Ap Robotics, Llc Multi-dimensional measurement system for precise calculation of position and orientation of a dynamic object
CN109458958B (zh) * 2018-12-21 2020-10-09 中国航空工业集团公司北京航空精密机械研究所 Method for calibrating the center position of the turntable in a four-axis visual measurement device


Also Published As

Publication number Publication date
CN112484663A (zh) 2021-03-12
CN112484663B (zh) 2022-05-03

Similar Documents

Publication Publication Date Title
WO2022111105A1 (zh) Intelligent visual 3D information acquisition device with free attitude
WO2022078418A1 (zh) Rotation-stabilized intelligent three-dimensional information acquisition device
CN112361962B (zh) Intelligent visual 3D information acquisition device with multiple pitch angles
WO2022078442A1 (zh) 3D information acquisition method based on fusion of light scanning and intelligent vision
CN112257537B (zh) Intelligent multi-point three-dimensional information acquisition device
WO2022078440A1 (zh) Device and method for acquiring and judging space occupancy including moving objects
CN112254680B (zh) Multi-degree-of-freedom intelligent visual 3D information acquisition device
WO2022078439A1 (zh) Device and method for acquiring and matching three-dimensional information of space and objects
CN112254638B (zh) Pitch-adjustable intelligent visual 3D information acquisition device
CN112253913B (zh) Intelligent visual 3D information acquisition device offset from the rotation center
CN112082486B (zh) Handheld intelligent 3D information acquisition device
CN112254676B (zh) Portable intelligent 3D information acquisition device
WO2022111104A1 (zh) Intelligent visual 3D information acquisition device with multiple roll angles
WO2022078419A1 (zh) Intelligent visual 3D information acquisition device with multiple offset angles
WO2022078438A1 (zh) Indoor 3D information acquisition device
WO2022078444A1 (zh) Program control method for 3D information acquisition
WO2022078433A1 (zh) Multi-point combined 3D acquisition system and method
CN112254673B (zh) Self-rotating intelligent visual 3D information acquisition device
CN112254677B (zh) Multi-position combined 3D acquisition system and method based on a handheld device
CN112254671B (zh) Multi-pass combined 3D acquisition system and method
WO2022078421A1 (zh) Intelligent visual 3D information acquisition device with multiple pitch angles
CN112254679A (zh) Multi-position combined 3D acquisition system and method
WO2022078417A1 (zh) Self-rotating intelligent visual 3D information acquisition device
CN112254672B (zh) Height-adjustable intelligent 3D information acquisition device
CN112254674B (zh) Close-range intelligent visual 3D information acquisition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896606

Country of ref document: EP

Kind code of ref document: A1