WO2021185220A1 - A 3D model construction and measurement method based on coordinate measurement - Google Patents

A 3D model construction and measurement method based on coordinate measurement

Info

Publication number
WO2021185220A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image acquisition
coordinates
point
calibration
Prior art date
Application number
PCT/CN2021/080882
Other languages
English (en)
French (fr)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date
Filing date
Publication date
Application filed by 左忠斌
Publication of WO2021185220A1


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the invention relates to the technical field of shape measurement, in particular to the technical field of 3D shape measurement.
  • the camera is usually rotated relative to the target, or multiple cameras are set around the target to perform acquisition at the same time.
  • the Digital Emily project of the University of Southern California uses a spherical bracket to fix hundreds of cameras at different positions and angles on the bracket to realize 3D collection and modeling of the human body.
  • the distance between the camera and the target should be short, or at least within a range in which cameras can be arranged, so that the camera can collect images of the target at different positions.
  • a further problem is that even if 3D modeling is completed for these long-distance targets, how to obtain their accurate size so that the 3D model has an absolute size is still an unsolved problem.
  • a variety of calibration objects can be designed for the target object, and the calibration object can be placed around the target object, so as to obtain the coordinates or absolute size of the target object according to the known coordinates of the calibration object.
  • a calibration object that is too small causes a larger calibration error, while one that is too large is not easy to carry; no solution to this problem has yet been proposed.
  • the size and coordinates of the three-dimensional model are usually obtained by setting the calibration points first and then collecting. If the collection point is far from the target, the user needs to walk back and forth between the two, which takes time and effort. Moreover, calibration and acquisition must be coordinated with each other, so work efficiency is low when a large number of measurements are performed.
  • the present invention is proposed to provide a 3D modeling device and method with coordinate information that overcomes the above problems or at least partially solves the above problems.
  • the embodiment of the present invention provides a 3D modeling device and method with coordinate information
  • the acquisition device sends multiple target images to the server, and the server sends the image identifying the specific point to the terminal.
  • the operator searches for a corresponding point on the target according to a specific point prompted by the terminal, and measures its coordinates.
  • the coordinate measurement uses RTK, GPS, or 5G.
  • the position when the image acquisition device rotates to acquire a group of images meets the following conditions:
  • the acquisition device is a 3D intelligent image acquisition device
  • two adjacent acquisition positions of the 3D intelligent image acquisition device meet the following conditions:
  • the method further includes extracting feature points of the collected images and performing feature point matching to obtain sparse feature points; the coordinates of the matched feature points are input, and the sparse 3D point cloud of object A and object B, together with the position and posture data of the image acquisition device, is solved in the model coordinate system.
  • template matching is performed to obtain the row and column numbers xi and yi of all pixels in the input photos that contain the marker points.
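The marker-point template matching described above can be sketched as follows. This minimal version scores candidate positions with a sum of absolute differences on small gray-scale arrays; a production system would more likely use normalized cross-correlation, and the image values here are hypothetical.

```python
def match_template(image, template):
    """Return (row, col) of the top-left corner where the template best
    matches the image, using sum of absolute differences (lower = better)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(
                abs(image[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Hypothetical 5x5 gray image with a bright 2x2 marker at row 2, col 1.
img = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
print(match_template(img, [[9, 9], [9, 9]]))  # (2, 1)
```

The returned (row, col) pair corresponds to the pixel row and column numbers xi, yi of a marker point in one input photo.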
  • it also includes: inputting the pixel row and column numbers (xi, yi) of the marker points together with the position and posture data of the shooting camera, to obtain the coordinates (Xi, Yi, Zi) of the marker points in the model coordinate system; according to the absolute coordinates of the marker points (XT, YT, ZT) and the model coordinates (Xi, Yi, Zi), using the spatial similarity transformation formula to solve for the 7 parameters relating the model coordinate system to the absolute coordinate system; and then, using the 7 solved parameters, converting the coordinates of the three-dimensional point cloud of object A and object B and the position and posture data of the camera into the absolute coordinate system, that is, obtaining the true size of the target.
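The spatial similarity (7-parameter, Helmert-type) transformation used above can be sketched as follows. The parameter values and point coordinates are hypothetical; a real implementation would solve the 7 parameters (scale, 3 rotations, 3 translations) by least squares from at least three matched point pairs.

```python
import math

def rotation_matrix(rx, ry, rz):
    """Rotation matrix from rotations about the X, Y, Z axes (radians)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # R = Rz @ Ry @ Rx, written out explicitly to stay dependency-free.
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def helmert_transform(point, scale, rot, t):
    """Map a model-coordinate point into the absolute coordinate system:
    X_abs = t + scale * R @ X_model."""
    x, y, z = point
    return tuple(
        t[i] + scale * (rot[i][0] * x + rot[i][1] * y + rot[i][2] * z)
        for i in range(3)
    )

# Hypothetical parameters: unit scale, no rotation, pure translation.
R = rotation_matrix(0.0, 0.0, 0.0)
p_abs = helmert_transform((1.0, 2.0, 3.0), 1.0, R, (10.0, 20.0, 30.0))
print(p_abs)  # (11.0, 22.0, 33.0)
```

Once the 7 parameters are solved, applying `helmert_transform` to every point of the model point cloud (and to the camera positions) yields absolute coordinates and hence absolute size.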
  • the absolute size of the target is obtained.
  • the absolute size calibration of the target object is achieved by laser ranging and angle measurement of multiple points.
  • FIG. 1 is a schematic diagram of using a 3D intelligent vision device to collect an image of a building according to an embodiment of the present invention
  • Figure 2 is a schematic diagram of using RTK to measure building calibration points according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of using a 3D image acquisition device to collect images of a large workpiece according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of measuring calibration points of large workpieces by using RTK according to an embodiment of the present invention
  • the calibration object A can be placed around B; in many cases, however, the calibration object A cannot be placed near the target B. In that case, the following can be done:
  • RTK is used to measure the mark points on the target (the above-mentioned feature points), obtaining the three-dimensional coordinates corresponding to mark points A, B, C, D, and E: Pa(Xa, Ya, Za), Pb(Xb, Yb, Zb), Pc(Xc, Yc, Zc), Pd(Xd, Yd, Zd), Pe(Xe, Ye, Ze).
  • the coordinates of the three-dimensional point cloud of the shooting target area and the target object, and the position and posture data of the shooting camera, can be converted into the absolute coordinate system, that is, the real size of the target object can be obtained.
  • the collection device collects multiple images of the target and uploads the images to the server or cloud platform.
  • the server automatically or manually marks out multiple specific points on the image as calibration points.
  • the so-called specific point is a point that can be uniquely determined on an image by characteristics such as color, texture, and shape, and that is easy for the human eye to distinguish.
  • the server sends the image marked with the calibration point to the terminal device.
  • the operator uses the coordinate measuring device to measure the coordinates of the points corresponding to the calibration points marked on the image on the target object according to the image prompts received by the terminal equipment.
  • the operator uses terminal equipment to send the calibration point coordinates to the server or cloud platform.
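The collect-mark-measure loop in the steps above can be sketched as a minimal data flow. All class and field names here are hypothetical illustrations, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class CalibrationPoint:
    point_id: int
    pixel_rc: tuple          # (row, col) marked by the server on the image
    world_xyz: tuple = None  # filled in by the operator's RTK/GPS measurement

@dataclass
class CalibrationJob:
    images: list
    points: list = field(default_factory=list)

    def mark(self, point_id, pixel_rc):
        """Server side: mark a distinctive point on an uploaded image."""
        self.points.append(CalibrationPoint(point_id, pixel_rc))

    def measure(self, point_id, world_xyz):
        """Operator side: report the measured coordinates of a marked point."""
        for p in self.points:
            if p.point_id == point_id:
                p.world_xyz = world_xyz
                return
        raise KeyError(point_id)

    def ready(self):
        """True once every marked point has a measured coordinate."""
        return all(p.world_xyz is not None for p in self.points)

job = CalibrationJob(images=["img_001.jpg"])
job.mark(1, (120, 340))                       # server marks a specific point
job.measure(1, (4051234.2, 512345.8, 36.5))   # operator measures it on site
print(job.ready())  # True
```

Once `ready()` is true, the server has both the pixel positions and the absolute coordinates of all calibration points and can proceed to solve the coordinate transformation.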
  • the calibration device 5 includes a coordinate measuring device; a variety of common devices such as RTK and GPS receivers, or even devices that contain a GPS module, can be used.
  • mobile phones can also be positioned through GPS, cellular networks, and so on.
  • 5G can make coordinate positioning methods more diversified.
  • as shown in FIG. 1-2, the device includes an image acquisition device 1, a rotating device 2, and a cylindrical housing 3.
  • the image capturing device 1 is installed on the rotating device 2.
  • the rotating device is housed in a cylindrical housing 3 and can rotate freely in the cylindrical housing.
  • the image acquisition device 1 is used to acquire a set of images of the target through the relative movement of the acquisition area of the image acquisition device 1 and the target; the acquisition area moving device is used to drive the acquisition area of the image acquisition device to move relative to the target.
  • the acquisition area is the effective field of view range of the image acquisition device.
  • the image acquisition device 1 may be a camera, and the rotating device 2 may be a turntable.
  • the camera is set on the turntable 2, the optical axis of the camera is at a certain angle to the turntable surface, and the turntable surface is approximately parallel to the object to be collected.
  • the turntable drives the camera to rotate, so that the camera collects images of the target at different positions.
  • the camera is installed on the turntable through the angle adjustment device 4; as shown in Fig. 2, the angle adjustment device 4 can be rotated to adjust the angle between the optical axis of the image acquisition device 1 and the turntable surface, with an adjustment range of -90° to 90°.
  • the optical axis of the image acquisition device 1 can be offset toward the central axis of the turntable, that is, the included angle can be adjusted in the -90° direction.
  • the optical axis of the image acquisition device 1 can be offset away from the central axis of the turntable, that is, the included angle can be adjusted in the 90° direction.
  • the above adjustment can be done manually, or the 3D intelligent vision device can be provided with a distance measuring device to measure the distance from the target and automatically adjust the included angle according to that distance.
  • the turntable can be connected with a motor through a transmission device, and rotate under the drive of the motor, and drive the image acquisition device 1 to rotate.
  • the transmission device can be a conventional mechanical structure such as a gear system or a transmission belt.
  • multiple image collection devices 1 can be provided on the turntable.
  • a plurality of image acquisition devices 1 are sequentially distributed along the circumference of the turntable.
  • for example, one image acquisition device 1 can be provided at each end of a diameter of the turntable; it is also possible to arrange one image acquisition device 1 every 60° of circumferential angle, so that 6 image acquisition devices 1 are evenly arranged on the entire disk.
  • the above-mentioned multiple image acquisition devices may be the same type of cameras or different types of cameras. For example, a visible light camera and an infrared camera are set on the turntable, so that images of different bands can be collected.
  • the image acquisition device 1 is used to acquire an image of a target object, and it can be a fixed-focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera. Of course, it is understandable that any device with an image acquisition function can be used and does not constitute a limitation of the present invention. For example, it can be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image capture function.
  • the rotating device 2 can also take various forms such as a rotating arm, a rotating beam, or a rotating bracket, as long as it can drive the image acquisition device to rotate. Whichever form is used, the optical axis of the image acquisition device 1 forms a certain included angle with the rotating surface.
  • the light source is distributed around the lens of the image acquisition device 1 in a dispersed manner.
  • the light source is a ring LED lamp on the periphery of the lens, which is located on the turntable; it can also be arranged on the cross section of the cylindrical housing. Since in some applications, the collected object is a human body, it is necessary to control the intensity of the light source to avoid causing discomfort to the human body.
  • a soft light device, such as a soft light housing, can be arranged on the light path of the light source; alternatively, an LED surface light source can be used directly, which gives light that is not only softer but also more uniform.
  • an OLED light source can be used, which is smaller in size, has softer light, and has flexible characteristics that can be attached to curved surfaces.
  • the light source can also be set in other positions that can provide uniform illumination for the target.
  • the light source can also be a smart light source, that is, the light source parameters are automatically adjusted according to the target object and ambient light conditions.
  • the optical axis direction of the image acquisition device does not change relative to the target object at different acquisition positions, and is usually roughly perpendicular to the surface of the target object.
  • the positions of two adjacent image acquisition devices 1, or two adjacent acquisition positions of a single image acquisition device 1, meet the following conditions:
  • when the above two positions are along the length direction of the photosensitive element of the image capture device 1, d takes the length of the rectangle; when the above two positions are along the width direction of the photosensitive element of the image capture device 1, d takes the width of the rectangle.
  • the distance from the photosensitive element to the surface of the target along the optical axis is taken as M.
  • L should be the linear distance between the optical centers of the two image capture devices 1; however, because the position of the optical center is not easy to determine in some cases, the center of the photosensitive element of the image capture device 1, the geometric center of the image capture device 1, the center of the axis connecting the image capture device and the pan/tilt (or platform, bracket), or the center of the proximal or distal surface of the lens can be used instead in some cases. The error this substitution introduces is within an acceptable range, so the above range is also within the protection scope of the present invention.
  • the adjacent acquisition positions in the present invention refer to two adjacent positions, on the movement track along which the image acquisition device moves relative to the target, at which acquisition actions occur. This is easy to understand when the image acquisition device itself moves. When it is the target that moves and causes the relative motion, however, the motion of the target should, by the relativity of motion, be converted into an equivalent situation in which the target is stationary and the image acquisition device moves; the two adjacent acquisition positions at which acquisition actions occur are then measured on the converted movement track.
  • the moving device of the collection area is a rotating structure
  • the target 6 is fixed at a certain position, and the rotating device drives the image acquisition device 1 to rotate around the target 6.
  • the rotating device can drive the image acquisition device 1 to rotate around the target 6 through the rotating arm.
  • this kind of rotation is not necessarily a complete circular motion, and it can only be rotated by a certain angle according to the collection needs.
  • this rotation does not necessarily have to be a circular motion, and the motion trajectory of the image acquisition device 1 may be other curved trajectories, as long as it is ensured that the camera shoots the object from different angles.
  • the rotating device can also drive the image acquisition device to rotate, so that the image acquisition device 1 can collect images of the target object from different angles through the rotation.
  • the rotating device can be in various forms such as a cantilever, a turntable, or a track, or it can be hand-held, vehicle-mounted or air-borne, so that the image acquisition device 1 can move.
  • the camera can also be fixed, and the stage carrying the target object can be rotated, so that the direction of the target object facing the image capture device is constantly changing, so that the image capture device 1 can capture images of the target object from different angles.
  • the calculation can still be performed according to the situation converted into movement of the image acquisition device, so that the movement conforms to the corresponding empirical formula (details are described below). For example, in a scenario where the stage rotates, it can be assumed that the stage is stationary and the image capture device 1 rotates instead.
  • the rotation speed of the image acquisition device is derived from the empirical formula, and the rotation speed of the stage is deduced from it, so as to facilitate rotation speed control and realize 3D acquisition.
  • this kind of scene is not commonly used, and it is more commonly used to rotate the image capture device.
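The relativity argument above can be sketched numerically: if the stage rotates at angular speed ω while the camera is fixed, it is equivalent to a "virtual" camera orbiting the stage axis at ω in the opposite sense. The radius, angular speed, and time step below are hypothetical values for illustration.

```python
import math

def equivalent_camera_positions(radius, omega_deg_per_s, dt, steps):
    """Positions of the virtual camera orbiting a fixed stage, equivalent to
    a fixed camera viewing a stage that rotates at omega_deg_per_s."""
    positions = []
    for k in range(steps):
        # The virtual camera rotates in the opposite sense to the stage.
        a = math.radians(-omega_deg_per_s * dt * k)
        positions.append((radius * math.cos(a), radius * math.sin(a)))
    return positions

pts = equivalent_camera_positions(radius=2.0, omega_deg_per_s=30.0, dt=1.0, steps=4)
for x, y in pts:
    print(f"({x:.3f}, {y:.3f})")
```

The spacing between successive virtual positions can then be checked against the same adjacent-position conditions that apply when the camera itself moves.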
  • the acquisition area moving device is an optical scanning device, so that when the image acquisition device does not move or rotate, the acquisition area of the image acquisition device moves relative to the target.
  • the collection area moving device also includes a light deflection unit, which is mechanically driven to rotate, or is electrically driven to cause light path deflection, or is arranged in multiple groups in space, so as to obtain images of the target object from different angles.
  • the light deflection unit can typically be a mirror, which rotates so that images of the target object in different directions are collected.
  • the rotation of the optical axis in this case can be regarded as the rotation of the virtual position of the image acquisition device.
  • the image acquisition device is used to acquire an image of a target object, and it can be a fixed focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera. Of course, it is understandable that any device with image acquisition function can be used and does not constitute a limitation of the present invention. For example, it can be CCD, CMOS, camera, video camera, industrial camera, monitor, camera, mobile phone, tablet, notebook, Mobile terminals, wearable devices, smart glasses, smart watches, smart bracelets, and all devices with image capture functions.
  • the device also includes a processor, also called a processing unit, for synthesizing a 3D model of the target object according to a 3D synthesis algorithm according to the multiple images collected by the image acquisition device to obtain 3D information of the target object.
  • the moving device of the collection area is a translational structure
  • the image acquisition device can move relative to the target in a linear trajectory.
  • the image acquisition device is located on a straight track, or on a car or unmanned aerial vehicle traveling in a straight line, and passes by the target along the linear trajectory, taking pictures in sequence; the image acquisition device does not rotate during this process.
  • the linear track can also be replaced by a linear cantilever.
  • the acquisition area moving device is an irregular movement structure
  • the movement of the collection area is irregular, for example when the image collection device is hand-held, or when it is vehicle-mounted or airborne and the travel route is irregular. In such cases it is difficult to move along a strict trajectory, and the movement track of the image collection device is hard to predict accurately. How to ensure that the captured images can be accurately and stably synthesized into a 3D model in this situation is therefore a big problem, and no one has yet addressed it.
  • a more common method is to take more photos and use the redundancy of the number of photos to solve the problem. But the result of this synthesis is not stable.
  • the present invention proposes a method for improving the synthesis effect and shortening the synthesis time by limiting the movement distance of the camera for two shots.
  • a sensor can be installed in the mobile terminal or the image acquisition device, and the linear distance that the image acquisition device moves during two shots can be measured by the sensor.
  • specifically, the linear distance L moved between two shots meets the following conditions:
  • an alarm is issued to the user.
  • the alarm includes sound or light alarm to the user.
  • the distance already moved and the maximum movable distance L can also be displayed on the mobile phone screen, or announced to the user by voice in real time, while the user moves the image acquisition device.
  • Sensors that implement this function include: rangefinders, gyroscopes, accelerometers, positioning sensors, and/or combinations thereof.
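The distance check and alarm described above can be sketched as follows. The patent only specifies that the distance moved between two shots is compared against a maximum L and an alarm is raised; the function name, message format, and numeric values here are hypothetical.

```python
def check_move(distance_moved, max_l):
    """Compare the linear distance moved between two shots (as measured by a
    rangefinder, gyroscope, or accelerometer) against the maximum allowed
    distance L, and report the status to the user."""
    if distance_moved > max_l:
        return ("ALARM", f"moved {distance_moved:.2f} m, exceeds limit {max_l:.2f} m")
    return ("OK", f"moved {distance_moved:.2f} m of {max_l:.2f} m allowed")

status, msg = check_move(0.35, 0.50)
print(status, msg)                 # OK ...
print(check_move(0.80, 0.50)[0])   # ALARM
```

The "ALARM" status would be surfaced as the sound or light alarm, or the on-screen and voice prompts, described above.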
  • multiple cameras can also be set at different positions around the target, so that images of different angles of the target can be captured at the same time.
  • the optical axis direction of the image acquisition device changes relative to the target at different acquisition positions.
  • the positions of two adjacent image acquisition devices, or two adjacent acquisition positions of a single image acquisition device, meet the following conditions:
  • when the above two positions are along the length direction of the photosensitive element of the image capture device, d takes the length of the rectangle; when the above two positions are along the width direction of the photosensitive element, d takes the width of the rectangle.
  • the distance from the photosensitive element to the surface of the target along the optical axis is taken as T.
  • L is the linear distance between the optical centers of the image pickup devices at the two adjacent positions An and An+1, where An and An+1 are two adjacent acquisition positions of the image pickup device.
  • the calculation is not limited to 4 adjacent positions; more positions can be used for average calculation.
  • L should be the linear distance between the optical centers of the two image capture devices; however, because the position of the optical center is not easy to determine in some cases, the center of the photosensitive element, the geometric center of the image capture device, the center of the axis connecting the image capture device and the pan/tilt (or platform, bracket), or the center of the proximal or distal lens surface can be used instead in some cases. The error this substitution introduces is within an acceptable range, and therefore the above-mentioned range is also within the protection scope of the present invention.
  • parameters such as object size and field of view are conventionally used to estimate the camera position, and the positional relationship between two cameras is also expressed by angle. Since angles are not easy to measure, this is inconvenient in actual use. Moreover, the size of the object changes from one measured object to another; the inconvenient measurements and repeated re-measurements introduce measurement errors, which in turn cause errors in the camera position estimate. Based on a large amount of experimental data, this solution gives empirical conditions that the camera positions need to satisfy, which avoids measuring angles that are difficult to measure accurately and removes the need to directly measure object size.
  • d and f are the fixed parameters of the camera.
  • when purchasing the camera and lens, the manufacturer provides the corresponding parameters, so no measurement is needed.
  • T is only a straight line distance, which can be easily measured by traditional measuring methods, such as rulers and laser rangefinders. Therefore, the empirical formula of the present invention makes the preparation process convenient and quick, and at the same time improves the accuracy of the arrangement of the camera positions, so that the camera can be set in an optimized position, thereby taking into account the 3D synthesis accuracy and speed at the same time.
  • with the method of the present invention, when the lens is replaced, the camera position can be recalculated simply by substituting the new parameter f; similarly, when collecting different objects, the measurement of object size is cumbersome because the size changes from object to object.
  • with the method of the present invention, there is no need to measure the size of the object, and the camera position can be determined more conveniently.
  • the camera position determined by the present invention can take into account the synthesis time and the synthesis effect. Therefore, the above empirical condition is one of the invention points of the present invention.
  • the rotational movement of the present invention means that, during acquisition, the acquisition plane at the previous position and the acquisition plane at the next position cross rather than being parallel, or the optical axis of the image acquisition device at the previous position crosses, rather than parallels, the optical axis at the next position. In other words, a movement of the acquisition area of the image acquisition device around, or partly around, the target can be regarded as relative rotation between the two.
  • although the examples of the present invention mostly enumerate rotational motions with tracks, it can be understood that as long as non-parallel motion occurs between the acquisition area of the image acquisition device and the target, it falls into the category of rotation, and the limiting conditions of the present invention can be used.
  • the protection scope of the present invention is not limited to the orbital rotation in the embodiment.
  • the adjacent acquisition positions in the present invention refer to two adjacent positions, on the movement track along which the image acquisition device moves relative to the target, at which acquisition actions occur. This is easy to understand when the image acquisition device itself moves. When it is the target that moves and causes the relative motion, however, the motion of the target should, by the relativity of motion, be converted into an equivalent situation in which the target is stationary and the image acquisition device moves; the two adjacent acquisition positions at which acquisition actions occur are then measured on the converted movement track.
  • the processor also called a processing unit, is used to synthesize a 3D model of the target object according to a 3D synthesis algorithm according to a plurality of images collected by the image acquisition device to obtain 3D information of the target object.
  • the image acquisition device sends the collected multiple images to the processing unit, and the processing unit obtains the 3D information of the target object according to the multiple images in the above-mentioned set of images.
  • the processing unit can be directly arranged in the housing where the image acquisition device is located, or it can be connected to the image acquisition device through a data cable or wirelessly.
  • an independent computer, server, cluster server, etc. can be used as the processing unit, and the image data collected by the image acquisition device is transmitted to it for 3D synthesis.
  • the data of the image acquisition device can also be transmitted to the cloud platform, and the powerful computing power of the cloud platform can be used for 3D synthesis.
  • g(x, y) is the gray value of the original image at (x, y)
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter
  • m_g is the local gray-scale mean of the original image
  • s_g is the local gray-scale standard deviation of the original image
  • m_f is the target value for the local gray-scale mean of the transformed image
  • s_f is the target value for the local gray-scale standard deviation of the transformed image
  • c ∈ (0,1) is the expansion constant of the image variance
  • b ∈ (0,1) is the image brightness coefficient constant.
  • the filter can greatly enhance image texture patterns at different scales, so the number and accuracy of feature points extracted from the image are improved, and the reliability and accuracy of the matching result in photo feature matching are improved.
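A common formulation of the Wallis transform, using the parameters defined above, is sketched below. The exact formula is not reproduced in this text, so this particular form (the contrast gain r1 and brightness offset r0 weighting the target statistics by c and b) is an assumption based on the standard Wallis filter, not a quotation of the patent.

```python
def wallis(g, m_g, s_g, m_f, s_f, c, b):
    """Standard Wallis transform of one pixel's gray value g.
    m_g, s_g: local mean / std-dev of the original image;
    m_f, s_f: target local mean / std-dev; c, b are constants in (0, 1).
    NOTE: this formula is an assumption; the patent's exact form may differ."""
    r1 = (c * s_f) / (c * s_g + (1.0 - c) * s_f)  # multiplicative (contrast) term
    r0 = b * m_f + (1.0 - b) * m_g                # additive (brightness) term
    return (g - m_g) * r1 + r0

# A pixel at the local mean is mapped purely by the brightness term r0.
print(wallis(100.0, 100.0, 20.0, 127.0, 50.0, 0.8, 0.5))  # 113.5
```

In practice m_g and s_g are computed over a sliding window around each pixel, and the transform is applied pixel by pixel to stretch local contrast toward the target statistics.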
  • the SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching.
  • the method uses a Hessian matrix to detect feature points, uses box filters in place of second-order Gaussian filters, and uses an integral image to accelerate the convolutions, increasing the calculation speed; it also reduces the dimensionality of the local image feature descriptor to speed up matching.
  • the main steps include: (1) constructing the Hessian matrix to generate all points of interest for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; (2) constructing the scale space and locating feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space to initially locate the key points; key points with weak energy and incorrectly positioned key points are then filtered out, leaving the final stable feature points; (3) determining the main direction of each feature point using the Haar wavelet responses in a circular neighborhood of the feature point: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are counted, the sector is rotated at intervals of 0.2 radians and the responses counted again, and the direction of the sector with the largest value is taken as the main direction of the feature point; (4) generating a 64-dimensional feature point description vector: a 4*4 block of rectangular sub-regions is taken around the feature point, with the direction of the block aligned to the main direction of the feature point; each sub-region counts the horizontal and vertical Haar wavelet features of 25 pixels, where horizontal and vertical are relative to the main direction.
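The integral-image acceleration mentioned above can be sketched as follows: after one pass over the image, the sum of any rectangular (box-filter) region is obtained in constant time. This is a generic sketch of the technique, not the patent's own code.

```python
def integral_image(img):
    """Summed-area table with an extra zero row/column for easy indexing."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0..r1][c0..c1] (inclusive) in O(1) using the table."""
    return ii[r1 + 1][c1 + 1] - ii[r0][c1 + 1] - ii[r1 + 1][c0] + ii[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 2, 2))  # 45 (sum of all pixels)
print(box_sum(ii, 1, 1, 2, 2))  # 28 (5 + 6 + 8 + 9)
```

Because each box-filter response costs only four table lookups regardless of box size, the Hessian approximation can be evaluated at all scales without rescaling the image.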
  • The matched feature point coordinates are input, and the bundle adjustment method is used to solve for the sparse target 3D point cloud and the position and posture data of the camera, obtaining the sparse target model 3D point cloud and the model coordinates of the camera positions;
  • taking the sparse feature points as initial values, dense matching of multi-view photos is performed to obtain dense point cloud data.
  • the process has four main steps: stereo pair selection, depth map calculation, depth map optimization, and depth map fusion. For each image in the input data set, we select a reference image to form a stereo pair for calculating the depth map. Therefore, we can get rough depth maps of all images. These depth maps may contain noise and errors. We use its neighborhood depth map for consistency checking to optimize the depth map of each image. Finally, depth map fusion is performed to obtain a three-dimensional point cloud of the entire scene.
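The neighborhood consistency check in the third step above can be sketched as below: a pixel's depth estimate is kept only if enough neighboring depth maps agree with it within a tolerance. The tolerance and vote threshold are hypothetical values for illustration.

```python
def filter_depth(depth, neighbor_depths, tol=0.05, min_votes=2):
    """Keep depth[r][c] only if at least `min_votes` neighboring depth maps
    report a value within relative tolerance `tol`; otherwise set None."""
    h, w = len(depth), len(depth[0])
    out = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            d = depth[r][c]
            if d is None:
                continue
            votes = sum(
                1 for nd in neighbor_depths
                if nd[r][c] is not None and abs(nd[r][c] - d) <= tol * d
            )
            if votes >= min_votes:
                out[r][c] = d
    return out

# One-row example: two neighbors agree on the first pixel, only one on the second.
ref = [[10.0, 10.0]]
n1  = [[10.2, 50.0]]
n2  = [[9.9,  10.1]]
print(filter_depth(ref, [n1, n2]))  # [[10.0, None]]
```

Pixels that fail the check are treated as noise and excluded before the depth maps are fused into the final point cloud.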
  • the main process includes: 1The texture data is obtained through the image reconstruction target's surface triangle grid; 2The visibility analysis of the reconstructed model triangle. Use the image calibration information to calculate the visible image set of each triangle and the optimal reference image; 3The triangle surface clustering generates texture patches. According to the visible image set of the triangle surface, the optimal reference image and the neighborhood topology relationship of the triangle surface, the triangle surface cluster is generated into a number of reference image texture patches; 4The texture patches are automatically sorted to generate texture images. Sort the generated texture patches according to their size relationship, generate the texture image with the smallest enclosing area, and obtain the texture mapping coordinates of each triangle.
  • when reconstructing three-dimensional models of street-side buildings, a collection vehicle carrying the acquisition device can drive around each building and collect multiple images of it. The whole process requires almost no stopping; the vehicle simply follows a set route and collects as it goes. This process can even be performed autonomously by robots or self-driving cars.
  • after the server analyzes the images sent by the collection vehicle, it transmits the images marked with specific points to the operator. The operator measures the coordinates of the corresponding specific points on the target building according to the image prompts, uses them as calibration points, and transmits the coordinate data back to the server. This completes the construction of the three-dimensional model and the determination of the model coordinates.
  • image acquisition equipment can be used to capture the parts to be tested. After the server analyzes the collected images, the images marked with specific points are transmitted to the operator; the operator measures the coordinates of the corresponding specific points on the target according to the image prompts, uses them as calibration points, and transmits the coordinate data back to the server. This completes the construction of the three-dimensional model and the determination of the model coordinates. In this way, the part can easily be analyzed in the computer, positions that do not match the design can be found, and inspection of the part is facilitated.
  • although the embodiments describe the image capture device capturing still images,
  • the image acquisition device can also collect video data, using the video data directly or extracting images from it for 3D synthesis.
  • the shooting positions of the corresponding video frames or extracted images used in the synthesis still satisfy the above empirical formula.
  • the above-mentioned target object, target, and object all denote objects whose three-dimensional information is to be acquired; each can be a single physical object or a combination of multiple objects.
  • the three-dimensional information of the target includes three-dimensional images, three-dimensional point clouds, three-dimensional meshes, local three-dimensional features, three-dimensional dimensions, and all parameters carrying three-dimensional features of the target.
  • the so-called three-dimensional in the present invention refers to information in the three directions XYZ, especially depth information, which is essentially different from information with only a two-dimensional plane. It is also essentially different from definitions that are called three-dimensional, panoramic, or holographic but actually include only two-dimensional information and, in particular, no depth information.
  • the acquisition area mentioned in the present invention refers to the range that can be photographed by an image acquisition device (for example, a camera).
  • the image acquisition device in the present invention can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image capture function.
  • modules or units or components in the embodiments can be combined into one module or unit or component, and can in addition be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all the features disclosed in this specification (including the accompanying claims, abstract and drawings) and all the processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A 3D modeling method with coordinate information: (1) collect multiple images of a target with an acquisition device; (2) determine multiple specific points from the multiple images; (3) find the corresponding points on the target according to the specific points in the images, as calibration points, and measure the coordinates of the calibration points on the target; (4) construct a three-dimensional model with coordinate information using the multiple images and the coordinates of the calibration points. A 3D modeling device with coordinate information is also provided. Absolute-size calibration of the target object is achieved by laser ranging and angle measurement of multiple points.

Description

A three-dimensional model construction and measurement method based on coordinate measurement — Technical Field
The present invention relates to the technical field of topography measurement, and in particular to the technical field of 3D topography measurement.
Background Art
At present, when 3D acquisition and measurement are performed visually, the camera is usually rotated relative to the target, or multiple cameras are arranged around the target to acquire images simultaneously. For example, the Digital Emily project of the University of Southern California uses a spherical bracket on which hundreds of cameras are fixed at different positions and angles to achieve 3D acquisition and modeling of the human body. In either case, however, the camera must be at a short distance from the target, at least within a range where cameras can be arranged, so that the cameras can capture images of the target from different positions.
In some applications, however, images cannot be collected by moving around the target. For example, when a surveillance camera covers a monitored area, the area is large, the distance is long, and the monitored objects are not fixed, so it is difficult to set up cameras around a target object or to rotate a camera around it. How to perform 3D acquisition and modeling of target objects in such situations is an urgent problem to be solved.
A further problem: even if 3D modeling of such distant targets is completed, how to obtain their accurate dimensions so that the 3D model has absolute scale also remains unsolved. In a laboratory or factory, various calibration objects can be designed for the target and placed around it, so that the coordinates or absolute dimensions of the target can finally be obtained from the known coordinates of the calibration objects. For large targets measured outdoors, however, a calibration object that is too small introduces large calibration errors, while one that is large enough is not easy to carry. How to solve this problem has not yet been addressed.
Moreover, obtaining the dimensions and coordinates of a three-dimensional model currently usually requires setting calibration points first and then acquiring images. If the acquisition point is far from the target, the user has to walk back and forth between the two, which is time-consuming and laborious. Calibration and acquisition must also be coordinated with each other, so efficiency is low in large-scale measurement.
In addition, the prior art has proposed limiting the camera position with an empirical formula involving the rotation angle, target size, and object distance, so as to balance synthesis speed and quality. In practice, however, it has been found that unless a precise angle-measuring device is available, the user is insensitive to angles and can hardly determine them accurately; the target size is difficult to determine accurately, for example in the above scenario of constructing a 3D model of a riverside house. Measurement errors lead to errors in setting the camera position, which affects the speed and quality of acquisition and synthesis; accuracy and speed still need further improvement.
Therefore, the following technical problems urgently need to be solved: ① conveniently acquiring 3D information of large targets and accurately obtaining absolute coordinates or dimensions; ② taking both synthesis speed and synthesis accuracy into account; ③ accurately and conveniently obtaining three-dimensional models of distant objects or objects on which calibration objects cannot easily be placed; ④ avoiding the inconvenience of having to calibrate before acquiring in order to obtain coordinates and dimensions.
Summary of the Invention
In view of the above problems, the present invention is proposed to provide a 3D modeling device and method with coordinate information that overcome the above problems or at least partially solve them.
An embodiment of the present invention provides a 3D modeling device and method with coordinate information:
(1) collecting multiple images of a target with an acquisition device;
(2) determining multiple specific points from the above multiple images;
(3) finding the corresponding points on the target according to the specific points in the images, as calibration points; measuring the coordinates of the calibration points on the target;
(4) constructing a three-dimensional model with coordinate information using the above multiple images and the coordinates of the calibration points.
In an optional embodiment, the acquisition device sends the multiple target images to a server, and the server sends the images with the specific points marked to a terminal.
In an optional embodiment, an operator finds the corresponding points on the target according to the specific points prompted by the terminal and measures their coordinates.
In an optional embodiment, the coordinate measurement uses RTK, GPS, 5G, or similar means.
In an optional embodiment, the positions at which the image acquisition device rotates to collect a group of images satisfy the following condition:
Figure PCTCN2021080882-appb-000001
where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and μ is an empirical coefficient.
In an optional embodiment, when the acquisition device is a 3D smart image acquisition device, two adjacent acquisition positions of the 3D smart image acquisition device satisfy the following condition:
Figure PCTCN2021080882-appb-000002
where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and δ is an adjustment coefficient.
In an optional embodiment, the method further includes extracting feature points from the collected images and matching them to obtain sparse feature points; the coordinates of the matched feature points are input, and the sparse three-dimensional point cloud and the position and posture data of the photographing image acquisition device are solved, obtaining the sparse model three-dimensional point cloud of object A and object B and the model coordinate values of the positions.
In an optional embodiment, the absolute coordinates X_T, Y_T, Z_T of the marker points on the calibration object and the prepared picture templates of the marker points are imported; the picture templates of the marker points are then template-matched against all input photos to obtain the pixel row and column numbers x_i, y_i of all marker points contained in the input photos.
In an optional embodiment, the method further includes: according to the position and posture data of the photographing camera, inputting the pixel row and column numbers x_i, y_i of the marker points, the coordinates (X_i, Y_i, Z_i) of the marker points in the model coordinate system can be solved; according to the absolute coordinates and model coordinates of the marker points, (X_T, Y_T, Z_T) and (X_i, Y_i, Z_i), the 7 spatial coordinate transformation parameters between the model coordinates and the absolute coordinates are solved using the spatial similarity transformation formula; using the 7 solved parameters, the coordinates of the three-dimensional point clouds of object A and object B and of the position and posture data of the photographing camera can be transformed into the absolute coordinate system, thereby obtaining the true size of the target object.
In an optional embodiment, the absolute size of the target is obtained.
Inventive Points and Technical Effects
1. Absolute-size calibration of the target object is achieved by laser ranging and angle measurement of multiple points.
2. By optimizing the positions at which the camera collects pictures, synthesis speed and synthesis accuracy can be improved at the same time. When optimizing the camera acquisition positions, neither angles nor target dimensions need to be measured, giving stronger applicability.
3. It is proposed for the first time to collect images of the target by rotating the camera with its optical axis at an angle to the turntable rather than parallel to it, achieving 3D synthesis and modeling without rotating around the target and improving adaptability to different scenes.
4. A three-dimensional model acquisition and construction method is proposed for the first time in which images are collected first and calibration is performed afterwards, so that acquisition and calibration can be split into two separately performed steps with higher efficiency; the two steps are mutually independent, eliminating the need to travel back and forth between two sites.
Brief Description of the Drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the utility model. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
Fig. 1 is a schematic diagram of collecting images of a building using a 3D smart vision device according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of measuring calibration points of a building using RTK according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of collecting images of a large workpiece using a 3D image acquisition device according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of measuring calibration points of a large workpiece using RTK according to an embodiment of the present invention;
in which: image acquisition device 1, rotation device 2, cylindrical housing 3, RTK calibration device 5, target 6.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
3D Acquisition and Calibration Process
When the target to be collected is B, a calibration object A can be placed around B; in many cases, however, the calibration object A cannot be placed near the target B. In that case one can:
1. Use the acquisition device to collect multiple images of the target; during acquisition, the acquisition area of the acquisition device and the target move relative to each other at each acquisition. There are therefore certain requirements on each acquisition position of the acquisition device and/or the target, which will be described in detail below.
2. Measure the coordinates of specific points on the target. Multiple feature points are determined from the collected images; these should be points that appear in multiple collected photos and can ultimately be synthesized into the three-dimensional model. According to these points in the pictures, their coordinates are actually measured on the target with a calibration device (for example a coordinate measuring device, RTK, etc.). It can be understood that these feature points are points that are easy to distinguish and unique in the images, especially points with distinct color or texture features — for example, the intersections of certain patterns on a vase, or the intersections of certain vertical and horizontal structures of a building. That is, the marker points (the above feature points) on the target are measured with RTK to obtain the three-dimensional coordinates corresponding to marker points A, B, C, D, E: Pa(Xa, Ya, Za), Pb(Xb, Yb, Zb), Pc(Xc, Yc, Zc), Pd(Xd, Yd, Zd), Pe(Xe, Ye, Ze).
3. Perform three-dimensional model construction and coordinate measurement in processing.
(1) Extract feature points from all photographed pictures and match them to obtain sparse feature points. Input the coordinates of the matched feature points and solve for the sparse three-dimensional point cloud and the position and posture data of the photographing camera, obtaining the sparse model three-dimensional point cloud of the photographed target area and the model coordinate values of the positions.
(2) On the input photos, manually measure the pixel row and column numbers x_i, y_i corresponding to points A, B, C, D, E on the photos, or use prepared picture templates of marker points A, B, C, D, E and template-match them against all input photos to obtain the pixel row and column numbers x_i, y_i of all marker points A, B, C, D, E contained in the input photos.
(3) According to the position and posture data of the photographing camera from step (1), input the pixel row and column numbers x_i, y_i of the marker points, and the coordinates (X_i, Y_i, Z_i) of the marker points in the model coordinate system can be solved; according to the absolute coordinates Pa, Pb, Pc, Pd, Pe of marker points A, B, C, D, E and the corresponding model point coordinates (X_i, Y_i, Z_i), solve the 7 spatial coordinate transformation parameters between the model coordinates and the absolute coordinates using the spatial similarity transformation formula; the 7 parameters are ε_x, ε_y, ε_z, λ, X_0, Y_0, Z_0. X, Y, Z are the model coordinates of the target, and X_T, Y_T, Z_T are the absolute (calibration) coordinates of the target.
Figure PCTCN2021080882-appb-000003
Using the 7 parameters solved in (3), the coordinates of the three-dimensional point cloud of the photographed target area and target object and of the position and posture data of the photographing camera can be transformed into the absolute coordinate system, thereby obtaining the true size of the target object.
It can be understood that the above is merely an example of measuring five marker points; in fact only three or more marker points are needed.
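The model-to-absolute mapping described above can be recovered from three or more correspondences. The patent solves a linearized 7-parameter (ε_x, ε_y, ε_z, λ, X_0, Y_0, Z_0) spatial similarity transform; the sketch below instead uses the closed-form Umeyama/Horn solution, which recovers the same scale, rotation, and translation when residuals are small. Function names are illustrative, not from the patent.

```python
import numpy as np

def similarity_transform(model_pts, abs_pts):
    """Estimate scale, rotation R, translation t with abs ≈ scale*R@model + t.

    Closed-form Umeyama/Horn solution from >= 3 non-collinear
    correspondences; the patent instead linearizes the 7-parameter
    transform, but both recover the same mapping for small residuals."""
    X = np.asarray(model_pts, dtype=float)   # N x 3 model coordinates
    Y = np.asarray(abs_pts, dtype=float)     # N x 3 absolute (RTK) coordinates
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)      # SVD of the cross-covariance
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:            # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (Xc ** 2).sum()
    t = my - scale * R @ mx
    return scale, R, t

def apply_transform(scale, R, t, pts):
    """Map model-coordinate points into the absolute coordinate system."""
    return scale * np.asarray(pts, dtype=float) @ R.T + t
```

With the parameters estimated from the marker points, `apply_transform` can then move the whole sparse point cloud and the camera poses into the absolute coordinate system.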
When the above method is used for large-batch target acquisition and size measurement, dedicated acquisition personnel can construct the three-dimensional models of the targets with handheld, vehicle-mounted, or airborne equipment, without considering other issues such as placing calibration objects, which can greatly increase acquisition speed. After acquisition is completed, dedicated personnel for measuring calibration-point coordinates can select appropriate specific points as calibration points according to the collected images, then measure the calibration-point coordinates with a coordinate measuring component such as RTK, and finally send the coordinate information to the server. In this way, the two groups work separately without affecting each other, and efficiency is higher.
In particular, the following steps are adopted:
1. After collecting multiple images of the target, the acquisition device uploads the images to a server or cloud platform.
2. The server automatically or manually marks multiple specific points on the images as calibration points. A specific point is a point that can be uniquely determined on the image by color, texture, shape, etc., and is easy for the human eye to distinguish.
3. The server sends the images with the calibration points marked to a terminal device.
4. According to the image prompts received by the terminal device, the operator uses a coordinate measuring device to measure, on the target, the coordinates of the points corresponding to the calibration points marked on the images.
5. The operator sends the calibration-point coordinates to the server or cloud platform via the terminal device.
Of course, although the work above is divided between two groups, it can be understood that image acquisition and coordinate measurement can be completed simultaneously and do not have to be performed in groups. For measurement accuracy, dedicated calibration points can also be set on the target in advance, for example cross marks on the target; such marks may be spray-painted or projected as laser spots. And although this method is preferable when measuring large numbers of targets, it can equally be used for conventional measurement of a single stationary target indoors or in a factory.
Structure of the Calibration Device
The calibration device 5 includes a coordinate measuring device; many common devices such as RTK or GPS receivers can be used, and even some devices containing GPS. For example, mobile phones can now be positioned via GPS, cellular networks, and so on. In particular, the application of 5G makes coordinate-positioning means more diversified. All of these are technical means the present invention can use, without limitation.
Using a 3D Smart Vision Device
As shown in Figs. 1-2, the device includes an image acquisition device 1, a rotation device 2, and a cylindrical housing 3. As shown in Fig. 1, the image acquisition device 1 is mounted on the rotation device 2, which is accommodated in the cylindrical housing 3 and can rotate freely inside it.
The image acquisition device 1 is used to collect a group of images of the target through the relative movement between its acquisition area and the target; the acquisition-area moving device is used to drive the acquisition area of the image acquisition device to move relative to the target. The acquisition area is the effective field of view of the image acquisition device.
The image acquisition device 1 can be a camera, and the rotation device 2 can be a turntable. The camera is set on the turntable with its optical axis at an angle to the turntable surface, and the turntable surface is approximately parallel to the target to be collected. The turntable drives the camera to rotate, so that the camera collects images of the target at different positions.
Further, the camera is mounted on the turntable via an angle adjustment device 4; as shown in Fig. 2, the angle adjustment device 4 can rotate to adjust the angle between the optical axis of the image acquisition device 1 and the turntable surface, with an adjustment range of -90° < γ < 90°. When photographing a closer target, the optical axis of the image acquisition device 1 can be tilted toward the central axis of the turntable, i.e., γ adjusted toward -90°. When photographing the inside of a cavity, the optical axis of the image acquisition device 1 can be tilted away from the central axis of the turntable, i.e., γ adjusted toward 90°. The above adjustment can be done manually, or the 3D smart vision device can be equipped with a distance measuring device that measures its distance to the target and automatically adjusts the angle γ according to that distance.
The turntable can be connected to a motor through a transmission device and rotates under the drive of the motor, driving the image acquisition device 1 to rotate. The transmission device can be a conventional mechanical structure such as a gear system or a transmission belt.
To improve acquisition efficiency, multiple image acquisition devices 1 can be set on the turntable, distributed in sequence along its circumference. For example, one image acquisition device 1 can be set at each end of any diameter of the turntable, or one every 60° of circumferential angle, with six distributed evenly over the whole disc. The multiple image acquisition devices can be cameras of the same type or of different types — for example, a visible-light camera and an infrared camera on the turntable, so that images of different wavebands can be collected.
The image acquisition device 1 is used to collect images of the target and can be a fixed-focus or zoom camera, in particular a visible-light or infrared camera. Of course, it can be understood that any device with an image acquisition function can be used and does not constitute a limitation of the present invention; for example, it can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image capture function.
Besides a turntable, the rotation device 2 can also take many forms such as a rotating arm, rotating beam, or rotating bracket, as long as it can drive the image acquisition device to rotate. Whichever form is used, the optical axis of the image acquisition device 1 makes a certain angle γ with the rotation plane.
Normally, the light sources are distributed around the lens of the image acquisition device 1 in a dispersed manner, for example as ring LED lamps around the lens on the turntable; they can also be set on the cross-section of the cylindrical housing. Since in some applications the collected object is a human body, the light source intensity needs to be controlled to avoid discomfort. In particular, a soft-light device, such as a soft-light housing, can be placed in the light path of the light source. An LED surface light source can also be used directly, giving softer and more uniform light. Better still, an OLED light source can be used: smaller, with softer light, and flexible, so it can be attached to curved surfaces. The light source can also be placed at other positions that provide uniform illumination of the target. The light source can also be a smart light source that automatically adjusts its parameters according to the target and the ambient light.
During 3D acquisition, the direction of the optical axis of the image acquisition device does not change relative to the target at different acquisition positions and is usually roughly perpendicular to the target surface; in this case, the positions of two adjacent image acquisition devices 1, or two adjacent acquisition positions of one image acquisition device 1, satisfy the following condition:
Figure PCTCN2021080882-appb-000004
μ<0.482
where L is the straight-line distance between the optical centers of the image acquisition device 1 at two adjacent acquisition positions; f is the focal length of the image acquisition device 1; d is the rectangular length of the photosensitive element (CCD) of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device 1 to the target surface along the optical axis; and μ is an empirical coefficient.
When the two positions are along the length direction of the photosensitive element of the image acquisition device 1, d takes the rectangular length; when the two positions are along the width direction of the photosensitive element, d takes the rectangular width.
When the image acquisition device 1 is at either of the two positions, the distance from the photosensitive element to the target surface along the optical axis is taken as M.
As stated above, L should be the straight-line distance between the optical centers of the two image acquisition devices 1; however, since the optical center position is not easy to determine in some cases, the center of the photosensitive element of the image acquisition device 1, the geometric center of the image acquisition device 1, the center of the axis connecting the image acquisition device to the pan-tilt head (or platform or bracket), or the center of the proximal or distal lens surface can be used instead in some cases. Experiments show that the resulting error is within an acceptable range, so the above alternatives are also within the protection scope of the present invention.
Experiments were carried out with the device of the present invention, and the following experimental results were obtained.
Figure PCTCN2021080882-appb-000005
From the above experimental results and extensive experimental experience, it can be concluded that the value of μ should satisfy μ < 0.482; at that point part of the 3D model can already be synthesized, and although some parts cannot be synthesized automatically, this is acceptable when requirements are not high, and the unsynthesizable parts can be compensated for manually or by changing the algorithm. In particular, when μ satisfies μ < 0.357, the balance between synthesis quality and synthesis time is optimal; for better synthesis quality, μ < 0.198 can be chosen, at which point synthesis time rises but quality is better. When μ is 0.5078, synthesis is no longer possible. It should be noted here that the above ranges are merely preferred embodiments and do not limit the protection scope.
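As a minimal illustration, the reported μ thresholds can be codified as a lookup. The behavior between the listed experimental values (e.g. between 0.482 and 0.5078) is not reported, so the sketch conservatively flags everything at or above 0.482:

```python
def mu_synthesis_outcome(mu):
    """Classify an empirical coefficient mu using the thresholds reported
    in the experiments above; boundaries are only the listed values."""
    if mu < 0.198:
        return "better synthesis quality, longer synthesis time"
    if mu < 0.357:
        return "optimal balance of quality and time"
    if mu < 0.482:
        return "partial synthesis, acceptable for low requirements"
    return "synthesis not expected to succeed"
```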
The above data were obtained only from experiments verifying the formula conditions and do not limit the invention. Even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust device parameters and step details as needed to carry out experiments, and other data obtained will also conform to the formula conditions.
The adjacent acquisition positions described in the present invention refer to two adjacent positions on the movement trajectory at which acquisition actions occur when the image acquisition device moves relative to the target. This is usually easy to understand for movement of the image acquisition device. When movement of the target causes the relative movement of the two, however, the relativity of motion should be used to convert the motion of the target into the target remaining stationary while the image acquisition device moves; the two adjacent positions at which acquisition occurs on the converted movement trajectory are then evaluated.
Using a 3D Image Acquisition Device
(1) The acquisition-area moving device is a rotating structure
As shown in Figs. 3-4, the target 6 is fixed at a certain position, and the rotation device drives the image acquisition device 1 to rotate around the target 6. The rotation device can drive the image acquisition device 1 around the target 6 via a rotating arm. Of course, this rotation is not necessarily a complete circular motion; it can be rotated only by a certain angle according to acquisition needs. Nor does the rotation have to be circular motion; the movement trajectory of the image acquisition device 1 can be another curved trajectory, as long as the camera photographs the object from different angles.
The rotation device can also drive the image acquisition device to rotate about its own axis, so that the image acquisition device 1 can collect images of the target from different angles.
The rotation device can take many forms such as a cantilever, turntable, or track, or can be handheld, vehicle-mounted, or airborne, as long as the image acquisition device 1 can be made to move.
Besides the above, in some cases the camera can be fixed and the stage carrying the target rotated, so that the direction of the target facing the image acquisition device changes constantly, enabling the image acquisition device 1 to collect images of the target from different angles. In calculation, however, the situation can still be converted into motion of the image acquisition device, so that the motion conforms to the corresponding empirical formula (elaborated in detail below). For example, in the scene where the stage rotates, one can assume that the stage is stationary and the image acquisition device 1 rotates. The distance between shooting positions when the image acquisition device 1 rotates is set using the empirical formula, from which its rotation speed is derived, and the stage rotation speed is in turn back-derived, to facilitate speed control and achieve 3D acquisition. Of course, this scene is not common; it is more common to rotate the image acquisition device.
In addition, in order for the image acquisition device to collect images of the target in different directions, the image acquisition device and the target can both be kept stationary and the optical axis of the image acquisition device rotated. For example: the acquisition-area moving device is an optical scanning device, so that the acquisition area of the image acquisition device moves relative to the target without the image acquisition device moving or rotating. The acquisition-area moving device also includes a light deflection unit, which is mechanically driven to rotate, or electrically driven to deflect the light path, or is itself arranged as multiple groups in space, so as to obtain images of the target from different angles. The light deflection unit can typically be a mirror, which rotates so that images of the target in different directions are collected. Mirrors surrounding the target can also be arranged directly in space, so that the light from the mirrors enters the image acquisition device in turn. Similar to the foregoing, the rotation of the optical axis in this case can be regarded as rotation of the virtual position of the image acquisition device; through this conversion it is assumed that the image acquisition device rotates, and the following empirical formula is used for calculation.
The image acquisition device is used to collect images of the target and can be a fixed-focus or zoom camera, in particular a visible-light or infrared camera. Of course, it can be understood that any device with an image acquisition function can be used and does not constitute a limitation of the present invention; for example, it can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image capture function.
The device also includes a processor, also called a processing unit, for synthesizing a 3D model of the target from the multiple images collected by the image acquisition device according to a 3D synthesis algorithm, obtaining 3D information of the target.
(2) The acquisition-area moving device is a translation structure
Besides the above rotating structure, the image acquisition device can move relative to the target in a linear trajectory. For example, the image acquisition device is located on a linear track, or on a car or drone traveling in a straight line, and passes the target in sequence along the linear track for shooting, without rotating in the process. The linear track can also be replaced by a linear cantilever. Better still, when the image acquisition device as a whole moves along the linear trajectory, it rotates somewhat so that its optical axis faces the target.
(3) The acquisition-area moving device is an irregular-motion structure
Sometimes the acquisition area moves irregularly, for example when the image acquisition device is handheld, or vehicle-mounted or airborne on an irregular route; it is then difficult to move along a strict track, and the movement trajectory of the image acquisition device is hard to predict accurately. How to ensure that the captured images can be synthesized into a 3D model accurately and stably in this case is a major problem that no one has yet addressed. A more common method is to take more photos and use redundancy in the number of photos to solve the problem, but the synthesis results are then not stable. Although there are currently some ways of improving synthesis quality by limiting the camera rotation angle, users are actually not sensitive to angles; even if a preferred angle is given, it is difficult for users to follow it in handheld shooting. Therefore, the present invention proposes a method of improving synthesis quality and shortening synthesis time by limiting the distance the camera moves between two shots.
In the case of irregular motion, a sensor can be installed in the mobile terminal or the image acquisition device; the straight-line distance moved by the image acquisition device between two shots is measured by the sensor, and when the movement distance does not satisfy the above empirical condition on L (the specific condition below), an alarm is issued to the user. The alarm includes sounding or lighting an alarm to the user. Of course, when the user moves the image acquisition device, the distance moved and the maximum movable distance L can also be displayed on the phone screen or prompted by voice in real time. Sensors achieving this function include: rangefinders, gyroscopes, accelerometers, positioning sensors, and/or combinations thereof.
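A minimal sketch of the handheld alarm logic described above, assuming the sensor can report shooting positions in a common frame (the function and parameter names are illustrative, not from the patent):

```python
import math

def check_shot_spacing(prev_pos, new_pos, l_max):
    """Return (distance, ok): ok is False when the straight-line distance
    between two consecutive shooting positions exceeds the empirical bound
    l_max on L, in which case the device should alert the user (sound,
    light, or on-screen prompt). Positions are (x, y, z) tuples from a
    positioning sensor; units are assumed consistent with l_max."""
    d = math.dist(prev_pos, new_pos)
    return d, d <= l_max
```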
(4) Multi-camera mode
It can be understood that, besides making the camera photograph the target from different angles through relative motion between camera and target, multiple cameras can also be set at different positions around the target, so that images of the target from different angles can be captured simultaneously.
When the acquisition area moves relative to the target, especially when the image acquisition device rotates around the target, the direction of its optical axis changes relative to the target at different acquisition positions during 3D acquisition; in this case, the positions of two adjacent image acquisition devices, or two adjacent acquisition positions of one image acquisition device, satisfy the following condition:
Figure PCTCN2021080882-appb-000006
δ<0.603
where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and δ is an adjustment coefficient.
When the two positions are along the length direction of the photosensitive element of the image acquisition device, d takes the rectangular length; when the two positions are along the width direction of the photosensitive element, d takes the rectangular width.
When the image acquisition device is at either of the two positions, the distance from the photosensitive element to the target surface along the optical axis is taken as T. Besides this method, in another case, L is the straight-line distance between the optical centers of two image acquisition devices A_n and A_{n+1}; the distances from the photosensitive elements of A_{n-1} and A_{n+2} (the devices adjacent to A_n and A_{n+1}) and of A_n and A_{n+1} themselves to the target surface along the optical axis are respectively T_{n-1}, T_n, T_{n+1}, T_{n+2}, and T = (T_{n-1} + T_n + T_{n+1} + T_{n+2})/4. Of course, the average need not be limited to the 4 adjacent positions; more positions can also be used.
Experiments were carried out with the device of the present invention, and the following experimental results were obtained.
Figure PCTCN2021080882-appb-000007
The camera lens was replaced and the experiment was repeated, obtaining the following experimental results.
Figure PCTCN2021080882-appb-000008
The camera lens was replaced and the experiment was repeated again, obtaining the following experimental results.
Figure PCTCN2021080882-appb-000009
Figure PCTCN2021080882-appb-000010
As stated above, L should be the straight-line distance between the optical centers of the two image acquisition devices; however, since the optical center position is not easy to determine in some cases, the center of the photosensitive element of the image acquisition device, the geometric center of the image acquisition device, the center of the axis connecting the image acquisition device to the pan-tilt head (or platform or bracket), or the center of the proximal or distal lens surface can be used instead in some cases. Experiments show that the resulting error is within an acceptable range, so the above alternatives are also within the protection scope of the present invention.
Normally, the prior art uses parameters such as object size and field-of-view angle to estimate camera positions, and the positional relationship between two cameras is also expressed in angles. Since angles are not easy to measure in actual use, this is rather inconvenient in practice. Moreover, the object size changes as the measured object changes. These inconvenient measurements and the repeated re-measurements all introduce errors, leading to erroneous camera-position estimation. The present scheme, based on a large amount of experimental data, gives the empirical condition that camera positions need to satisfy, which not only avoids measuring angles that are difficult to measure accurately, but also removes the need to measure object dimensions directly. In the empirical condition, d and f are fixed camera parameters given by the manufacturer when the camera and lens are purchased, requiring no measurement; T is only a straight-line distance that can be measured conveniently with traditional methods such as a ruler or a laser rangefinder. The empirical formula of the present invention therefore makes the preparation process convenient and fast, and also improves the accuracy of the camera-position arrangement, so that the cameras can be set at optimized positions, taking both 3D synthesis accuracy and speed into account.
From the above experimental results and extensive experimental experience, it can be concluded that the value of δ should satisfy δ < 0.603; at that point part of the 3D model can already be synthesized, and although some parts cannot be synthesized automatically, this is acceptable when requirements are not high, and the unsynthesizable parts can be compensated for manually or by changing the algorithm. In particular, when δ satisfies δ < 0.410, the balance between synthesis quality and synthesis time is optimal; for better synthesis quality, δ < 0.356 can be chosen, at which point synthesis time rises but quality is better. Of course, to further improve the synthesis quality, δ < 0.311 can be chosen. When δ is 0.681, synthesis is no longer possible. It should be noted here that the above ranges are merely preferred embodiments and do not limit the protection scope.
It can also be seen from the above experiments that, to determine the camera shooting positions, one only needs to obtain the camera parameters (focal length f, CCD size) and the distance T from the camera CCD to the object surface, and the positions can be obtained from the above formula, which makes device design and debugging easy. Since the camera parameters (focal length f, CCD size) are determined when the camera is purchased and are indicated in the product description, they are easy to obtain. The camera position can therefore easily be calculated from the above formula, without tedious field-of-view-angle measurement and object-size measurement. Especially on occasions where the camera lens needs to be replaced, the method of the present invention obtains the camera position by directly substituting the replacement lens's standard parameter f into the calculation; similarly, when collecting different objects, measurement of object size is rather tedious because the objects differ in size. With the method of the present invention, no object-size measurement is needed and the camera position can be determined more conveniently. The camera position determined using the present invention can take both synthesis time and synthesis quality into account. Therefore, the above empirical condition is one of the inventive points of the present invention.
The above data were obtained only from experiments verifying the formula conditions and do not limit the invention. Even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust device parameters and step details as needed to carry out experiments, and other data obtained will also conform to the formula conditions.
The rotational motion described in the present invention means that, during acquisition, the acquisition plane at the previous position and the acquisition plane at the subsequent position intersect rather than being parallel, or the optical axis of the image acquisition device at the previous position and that at the subsequent position intersect rather than being parallel. That is to say, whenever the acquisition area of the image acquisition device moves around or partly around the target, the two can be regarded as rotating relative to each other. Although the embodiments of the present invention mostly describe tracked rotational motion, it can be understood that as long as non-parallel motion occurs between the acquisition area of the image acquisition device and the target, it falls within the category of rotation and the limiting conditions of the present invention can be used. The protection scope of the present invention is not limited to the tracked rotation in the embodiments.
The adjacent acquisition positions described in the present invention refer to two adjacent positions on the movement trajectory at which acquisition actions occur when the image acquisition device moves relative to the target. This is usually easy to understand for movement of the image acquisition device. When movement of the target causes the relative movement of the two, however, the relativity of motion should be used to convert the motion of the target into the target remaining stationary while the image acquisition device moves; the two adjacent positions at which acquisition occurs on the converted movement trajectory are then evaluated.
3D Synthesis Modeling Device and Method
The processor, also called the processing unit, is used to synthesize a 3D model of the target from the multiple images collected by the image acquisition device according to a 3D synthesis algorithm, obtaining 3D information of the target. The image acquisition device sends the collected multiple images to the processing unit, which obtains the 3D information of the target from the multiple images in the above group of images. The processing unit can be set directly in the housing where the image acquisition device is located, or connected to the image acquisition device through a data line or wirelessly. For example, an independent computer, server, or cluster server can be used as the processing unit, to which the image data collected by the image acquisition device is transmitted for 3D synthesis. The data of the image acquisition device can also be transmitted to a cloud platform, using the cloud platform's powerful computing capability for 3D synthesis.
The following method is executed in the processing unit:
1. Perform image enhancement on all input photos. The following filter is used to enhance the contrast of the original photos while suppressing noise.
Figure PCTCN2021080882-appb-000011
where g(x, y) is the gray value of the original image at (x, y); f(x, y) is the gray value at that position after enhancement by the Wallis filter; m_g is the local gray mean of the original image; s_g is the local gray standard deviation of the original image; m_f is the local gray target value of the transformed image; and s_f is the target value of the local gray standard deviation of the transformed image. c ∈ (0, 1) is the expansion constant of the image variance, and b ∈ (0, 1) is the image brightness coefficient constant.
This filter can greatly enhance image texture patterns at different scales in the image, so it can increase the number and accuracy of feature points when extracting point features of the image, and it improves the reliability and accuracy of the matching results in photo feature matching.
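The Wallis formula itself appears above only as an image. The sketch below implements the standard form of the Wallis filter with the same symbols (local statistics m_g, s_g; targets m_f, s_f; constants c, b), which is assumed, not confirmed, to match the patent's elided expression; default parameter values are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def wallis_filter(g, m_f=127.0, s_f=40.0, c=0.8, b=0.9, win=15):
    """Standard-form Wallis filter: pulls the local mean m_g and local
    standard deviation s_g of the image toward the targets m_f and s_f.
    c in (0,1) is the variance expansion constant, b in (0,1) the
    brightness constant, matching the symbols in the text."""
    g = np.asarray(g, dtype=float)
    pad = win // 2
    gp = np.pad(g, pad, mode="reflect")
    w = sliding_window_view(gp, (win, win))   # one win x win window per pixel
    m_g = w.mean(axis=(-1, -2))
    s_g = w.std(axis=(-1, -2))
    # f(x,y) = (g - m_g) * c*s_f / (c*s_g + (1-c)*s_f) + b*m_f + (1-b)*m_g
    return (g - m_g) * (c * s_f) / (c * s_g + (1 - c) * s_f) + b * m_f + (1 - b) * m_g
```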
2. Extract feature points from all input images and match them to obtain sparse feature points. The SURF operator is used for feature point extraction and matching of the photos. The SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters to replace second-order Gaussian filtering, uses integral images to accelerate convolution and thus improve computation speed, and reduces the dimension of the local image feature descriptor to speed up matching. The main steps include: ① constructing the Hessian matrix to generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (abrupt-change points) of the image; ② scale-space feature point localization: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space to preliminarily locate the key points; key points with weak energy or incorrect localization are then filtered out, yielding the final stable feature points; ③ determining the main direction of each feature point using the Haar wavelet features in the circular neighborhood of the feature point: within the circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet features of all points in a 60-degree sector are computed, the sector is then rotated in 0.2-radian increments and the Haar wavelet feature values in the region are computed again, and finally the direction of the sector with the largest value is taken as the main direction of the feature point; ④ generating a 64-dimensional feature point description vector: a 4*4 rectangular block is taken around the feature point, oriented along the main direction of the feature point. For each sub-region, the horizontal and vertical Haar wavelet features of 25 pixels are computed, where horizontal and vertical are relative to the main direction. The Haar wavelet features comprise 4 quantities: the sum of horizontal values, the sum of vertical values, the sum of absolute horizontal values, and the sum of absolute vertical values; these 4 values form the feature vector of each sub-block, so there are 4*4*4 = 64 dimensions in total as the descriptor of the SURF feature; ⑤ feature point matching: the degree of matching is determined by computing the Euclidean distance between two feature points; the shorter the Euclidean distance, the better the match between the two feature points.
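Step ① of the SURF pipeline above relies on integral images so that the box filters approximating second-order Gaussian derivatives can be evaluated in constant time. A small sketch of that building block (not the full SURF pipeline):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row and column; any rectangle
    sum over the original image then costs four lookups, which is how
    SURF's box filters replace second-order Gaussian filtering."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(np.asarray(img, dtype=float), axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image ii."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```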
3. Input the coordinates of the matched feature points, and use bundle adjustment to solve the sparse three-dimensional point cloud of the target and the position and posture data of the photographing camera, obtaining the sparse target-model three-dimensional point cloud and the model coordinate values of the positions; using the sparse feature points as initial values, perform dense matching of the multi-view photos to obtain dense point cloud data. The process has four main steps: stereo pair selection, depth map computation, depth map optimization, and depth map fusion. For each image in the input data set, a reference image is selected to form a stereo pair used to compute the depth map, so rough depth maps of all images are obtained. These depth maps may contain noise and errors, so each image's depth map is optimized by consistency checking against its neighborhood depth maps. Finally, depth map fusion is performed to obtain a three-dimensional point cloud of the entire scene.
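The depth-map consistency check in step 3 can be illustrated with a deliberately simplified sketch: a real pipeline reprojects each neighboring depth map through the camera geometry before comparing, whereas here the maps are assumed already aligned, so only the voting logic is shown (names and the tolerance are illustrative):

```python
import numpy as np

def consistency_mask(depth, neighbor_depths, tol=0.01):
    """Keep a pixel's depth only if at least half of the (pre-aligned)
    neighboring depth maps agree with it to within a relative tolerance.
    A real pipeline reprojects each neighbor through the camera geometry
    first; that step is omitted in this sketch."""
    d = np.asarray(depth, dtype=float)
    votes = np.zeros(d.shape)
    for nd in neighbor_depths:
        votes += np.abs(np.asarray(nd, dtype=float) - d) <= tol * d
    return votes >= max(1, len(neighbor_depths) // 2)
```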
4. Reconstruct the target surface from the dense point cloud, including the processes of defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface. The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship, the vector field of the point cloud is obtained from that integral relationship, and the approximation of the gradient field of the indicator function is computed, forming the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
5. Fully automatic texture mapping of the target model. After the surface model is constructed, texture mapping is performed. The main process includes: ① obtaining texture data through the surface triangle mesh reconstructed from the images of the target; ② visibility analysis of the reconstructed model's triangles: the visible image set and the optimal reference image of each triangle are computed using the calibration information of the images; ③ clustering triangles to generate texture patches: according to each triangle's visible image set, optimal reference image, and the neighborhood topology of the triangles, the triangles are clustered into a number of reference-image texture patches; ④ automatic sorting of the texture patches to generate the texture image: the generated texture patches are sorted by size, the texture image with the smallest enclosing area is generated, and the texture mapping coordinates of each triangle are obtained.
Application Examples
When reconstructing three-dimensional models of street-side buildings, a collection vehicle carrying the acquisition device can drive around each building and collect multiple images of it. The whole process requires almost no stopping; the vehicle simply follows a set route and collects as it goes. This process can even be performed autonomously by robots or self-driving cars. After the server analyzes the images sent by the collection vehicle, it transmits the images marked with specific points to the operator; the operator measures, according to the image prompts, the coordinates of the specific points corresponding to the target building, uses them as calibration points, and transmits the coordinate data back to the server, thereby completing the construction of the three-dimensional model and the determination of the model coordinates.
When measuring and inspecting large parts, such as aircraft fuselages or large shield tunneling machines, an image acquisition device can be used to collect images around the part to be measured. After the server analyzes the collected images, it transmits the images marked with specific points to the operator; the operator measures, according to the image prompts, the coordinates of the corresponding specific points on the target, uses them as calibration points, and transmits the coordinate data back to the server, thereby completing the construction of the three-dimensional model and the determination of the model coordinates. In this way, the part can conveniently be analyzed in the computer, positions that do not match the design can be found, and inspection of the part is facilitated.
Although the above embodiments describe the image acquisition device collecting images, this should not be understood as applying only to groups of single pictures; that description is adopted only for ease of understanding. The image acquisition device can also collect video data, using the video data directly or extracting images from it for 3D synthesis. The shooting positions of the corresponding video frames or extracted images used in the synthesis, however, still satisfy the above empirical formula.
The above target object, target, and object all denote objects whose three-dimensional information is to be acquired; each can be a single physical object or a combination of multiple objects. The three-dimensional information of the target includes three-dimensional images, three-dimensional point clouds, three-dimensional meshes, local three-dimensional features, three-dimensional dimensions, and all parameters carrying three-dimensional features of the target. The so-called three-dimensional in the present utility model refers to information in the three directions XYZ, especially depth information, which is essentially different from information with only a two-dimensional plane. It is also essentially different from definitions that are called three-dimensional, panoramic, or holographic but actually include only two-dimensional information and, in particular, no depth information.
The acquisition area described in the present invention refers to the range that the image acquisition device (for example, a camera) can photograph. The image acquisition device in the present invention can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image capture function.
In the specification provided here, numerous specific details are described. However, it can be understood that the embodiments of the utility model may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention the various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single previously disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art can understand that the modules in the device in an embodiment can be adaptively changed and set in one or more devices different from that embodiment. The modules or units or components in the embodiments can be combined into one module or unit or component, and can in addition be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all the features disclosed in this specification (including the accompanying claims, abstract and drawings) and all the processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art can understand that although some embodiments herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the claims, any of the claimed embodiments can be used in any combination.
The various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them. Those skilled in the art should understand that in practice a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the device according to the embodiments of the present invention. The present invention can also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program realizing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal can be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.
By now, those skilled in the art should realize that although multiple exemplary embodiments of the present invention have been shown and described in detail herein, many other variations or modifications conforming to the principles of the present invention can still be determined or derived directly from the disclosure of the present invention without departing from its spirit and scope. Therefore, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (16)

  1. A 3D modeling method with coordinate information, characterized in that:
    (1) collecting multiple images of a target with an image acquisition device;
    (2) determining multiple specific points from the above multiple images;
    (3) finding the corresponding points on the target according to the specific points in the images, as calibration points; measuring the coordinates of the calibration points on the target;
    (4) constructing a three-dimensional model with coordinate information using the above multiple images and the coordinates of the calibration points;
    the positions at which the image acquisition device rotates to collect a group of images satisfy the following condition:
    Figure PCTCN2021080882-appb-100001
    where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and μ is an empirical coefficient.
  2. The method according to claim 1, characterized in that: the image acquisition device sends the multiple images of the target to a server, and the server sends the multiple images with the specific points marked to a terminal.
  3. The method according to claim 2, characterized in that: an operator finds the corresponding points on the target according to the specific points prompted by the terminal and measures their coordinates.
  4. The method according to claim 1, characterized in that: the coordinate measurement uses RTK, GPS or 5G.
  5. The method according to claim 1, characterized in that: the method further comprises extracting feature points from the collected multiple images and matching them to obtain sparse feature points; the coordinates of the matched feature points are input, and the sparse three-dimensional point cloud and the position and posture data of the image acquisition device are solved, obtaining the sparse model three-dimensional point cloud of the target and the model coordinate values of the positions.
  6. The method according to claim 5, characterized in that: the absolute coordinates X_T, Y_T, Z_T of the calibration points and the prepared picture templates of the calibration points are imported, and the picture templates of the calibration points are then template-matched against the input multiple images to obtain the pixel row and column numbers x_i, y_i of all calibration points contained in the input multiple images.
  7. The method according to claim 6, characterized in that: the method further comprises: according to the position and posture data of the image acquisition device, inputting the pixel row and column numbers x_i, y_i of the calibration points, the coordinates (X_i, Y_i, Z_i) of the calibration points in the model coordinate system can be solved; according to the absolute coordinates and model coordinates of the calibration points, (X_T, Y_T, Z_T) and (X_i, Y_i, Z_i), the 7 spatial coordinate transformation parameters between the model coordinates and the absolute coordinates are solved using the spatial similarity transformation formula; using the 7 solved parameters, the coordinates of the three-dimensional point cloud of the target and of the position and posture data of the image acquisition device can be transformed into the absolute coordinate system, thereby obtaining the true size of the target.
  8. The method according to claim 1, characterized in that: the absolute size of the target is obtained.
  9. A 3D modeling method with coordinate information, characterized in that:
    (1) collecting multiple images of a target with a 3D smart image acquisition device;
    (2) determining multiple specific points from the above multiple images;
    (3) finding the corresponding points on the target according to the specific points in the images, as calibration points; measuring the coordinates of the calibration points on the target;
    (4) constructing a three-dimensional model with coordinate information using the above multiple images and the coordinates of the calibration points;
    two adjacent acquisition positions of the 3D smart image acquisition device satisfy the following condition:
    Figure PCTCN2021080882-appb-100002
    where L is the straight-line distance between the optical centers of the 3D smart image acquisition device at two adjacent acquisition positions; f is the focal length of the 3D smart image acquisition device; d is the rectangular length or width of the photosensitive element of the 3D smart image acquisition device; T is the distance from the photosensitive element of the 3D smart image acquisition device to the target surface along the optical axis; and δ is an adjustment coefficient.
  10. The method according to claim 9, characterized in that: the 3D smart image acquisition device sends the multiple images of the target to a server, and the server sends the multiple images with the specific points marked to a terminal.
  11. The method according to claim 10, characterized in that: an operator finds the corresponding points on the target according to the specific points prompted by the terminal and measures their coordinates.
  12. The method according to claim 9, characterized in that: the coordinate measurement uses RTK, GPS or 5G.
  13. The method according to claim 9, characterized in that: the method further comprises extracting feature points from the collected multiple images and matching them to obtain sparse feature points; the coordinates of the matched feature points are input, and the sparse three-dimensional point cloud and the position and posture data of the 3D smart image acquisition device are solved, obtaining the sparse model three-dimensional point cloud of the target and the model coordinate values of the positions.
  14. The method according to claim 13, characterized in that: the absolute coordinates X_T, Y_T, Z_T of the calibration points and the prepared picture templates of the calibration points are imported, and the picture templates of the calibration points are then template-matched against the input multiple images to obtain the pixel row and column numbers x_i, y_i of all calibration points contained in the input multiple images.
  15. The method according to claim 14, characterized in that: the method further comprises: according to the position and posture data of the 3D smart image acquisition device, inputting the pixel row and column numbers x_i, y_i of the calibration points, the coordinates (X_i, Y_i, Z_i) of the calibration points in the model coordinate system can be solved; according to the absolute coordinates and model coordinates of the calibration points, (X_T, Y_T, Z_T) and (X_i, Y_i, Z_i), the 7 spatial coordinate transformation parameters between the model coordinates and the absolute coordinates are solved using the spatial similarity transformation formula; using the 7 solved parameters, the coordinates of the three-dimensional point cloud of the target and of the position and posture data of the 3D smart image acquisition device can be transformed into the absolute coordinate system, thereby obtaining the true size of the target.
  16. The method according to claim 9, characterized in that: the absolute size of the target is obtained.
PCT/CN2021/080882 2020-03-16 2021-03-15 Three-dimensional model construction and measurement method based on coordinate measurement WO2021185220A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010183328.7 2020-03-16
CN202010183328.7A CN111238374B (zh) 2020-03-16 Three-dimensional model construction and measurement method based on coordinate measurement

Publications (1)

Publication Number Publication Date
WO2021185220A1 true WO2021185220A1 (zh) 2021-09-23

Family

ID=70873499

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080882 WO2021185220A1 (zh) 2020-03-16 2021-03-15 Three-dimensional model construction and measurement method based on coordinate measurement

Country Status (2)

Country Link
CN (1) CN111238374B (zh)
WO (1) WO2021185220A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114338981A (zh) * 2021-12-22 2022-04-12 湖南中医药大学 一种实验用自动显距标面积、体积照相机
CN114511620A (zh) * 2021-12-28 2022-05-17 南通大学 一种基于Mask R-CNN的结构位移监测方法
CN114693763A (zh) * 2022-03-15 2022-07-01 武汉理工大学 航道船舶三维模型构建方法、系统、装置及存储介质
CN114882095A (zh) * 2022-05-06 2022-08-09 山东省科学院海洋仪器仪表研究所 一种基于轮廓匹配的物体高度在线测量方法
CN115661323A (zh) * 2022-10-28 2023-01-31 中国科学院烟台海岸带研究所 一种利用3d水下声呐系统实时建立三维虚拟图像的方法
CN118175423A (zh) * 2024-05-15 2024-06-11 山东云海国创云计算装备产业创新中心有限公司 一种焦距确定系统、方法、设备、介质及产品

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111238374B (zh) * 2020-03-16 2021-03-12 天目爱视(北京)科技有限公司 一种基于坐标测量的三维模型构建及测量方法
CN111696162B (zh) * 2020-06-11 2022-02-22 中国科学院地理科学与资源研究所 一种双目立体视觉精细地形测量系统及方法
CN111783597B (zh) * 2020-06-24 2022-12-13 中国第一汽车股份有限公司 行车轨迹线的标定方法、装置、计算机设备和存储介质

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002341473A (ja) * 2001-05-18 2002-11-27 Olympus Optical Co Ltd 立体画像撮影方法及び立体画像観察装置
CN1804541A (zh) * 2005-01-10 2006-07-19 北京航空航天大学 一种摄像机空间三维位置姿态测量方法
CN1890531A (zh) * 2003-12-03 2007-01-03 卢存伟 非接触三维测量方法及装置
CN101509763A (zh) * 2009-03-20 2009-08-19 天津工业大学 单目高精度大型物体三维数字化测量系统及其测量方法
CN102566246A (zh) * 2010-12-30 2012-07-11 华晶科技股份有限公司 立体影像拍摄方法
US20130038723A1 (en) * 2011-08-11 2013-02-14 Canon Kabushiki Kaisha Image acquisition apparatus and image processing apparatus
CN104428624A (zh) * 2012-06-29 2015-03-18 富士胶片株式会社 三维测定方法、装置及系统、以及图像处理装置
US20150271467A1 (en) * 2014-03-20 2015-09-24 Neal Weinstock Capture of three-dimensional images using a single-view camera
CN108769654A (zh) * 2018-06-26 2018-11-06 李晓勇 一种三维图像显示方法
CN108965690A (zh) * 2017-05-17 2018-12-07 欧姆龙株式会社 图像处理系统、图像处理装置及计算机可读存储介质
CN109211132A (zh) * 2017-07-07 2019-01-15 北京林业大学 一种无人机高精度摄影测量获取高大物体变形信息的方法
CN109754429A (zh) * 2018-12-14 2019-05-14 东南大学 一种基于图像的桥梁结构挠度测量方法
CN111238374A (zh) * 2020-03-16 2020-06-05 天目爱视(北京)科技有限公司 一种基于坐标测量的三维模型构建及测量方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6973233B2 (ja) * 2017-05-17 2021-11-24 オムロン株式会社 画像処理システム、画像処理装置および画像処理プログラム

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002341473A (ja) * 2001-05-18 2002-11-27 Olympus Optical Co Ltd 立体画像撮影方法及び立体画像観察装置
CN1890531A (zh) * 2003-12-03 2007-01-03 卢存伟 非接触三维测量方法及装置
CN1804541A (zh) * 2005-01-10 2006-07-19 北京航空航天大学 一种摄像机空间三维位置姿态测量方法
CN101509763A (zh) * 2009-03-20 2009-08-19 天津工业大学 单目高精度大型物体三维数字化测量系统及其测量方法
CN102566246A (zh) * 2010-12-30 2012-07-11 华晶科技股份有限公司 立体影像拍摄方法
US20130038723A1 (en) * 2011-08-11 2013-02-14 Canon Kabushiki Kaisha Image acquisition apparatus and image processing apparatus
CN104428624A (zh) * 2012-06-29 2015-03-18 富士胶片株式会社 三维测定方法、装置及系统、以及图像处理装置
US20150271467A1 (en) * 2014-03-20 2015-09-24 Neal Weinstock Capture of three-dimensional images using a single-view camera
CN108965690A (zh) * 2017-05-17 2018-12-07 欧姆龙株式会社 图像处理系统、图像处理装置及计算机可读存储介质
CN109211132A (zh) * 2017-07-07 2019-01-15 北京林业大学 一种无人机高精度摄影测量获取高大物体变形信息的方法
CN108769654A (zh) * 2018-06-26 2018-11-06 李晓勇 一种三维图像显示方法
CN109754429A (zh) * 2018-12-14 2019-05-14 东南大学 一种基于图像的桥梁结构挠度测量方法
CN111238374A (zh) * 2020-03-16 2020-06-05 天目爱视(北京)科技有限公司 一种基于坐标测量的三维模型构建及测量方法

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114338981A (zh) * 2021-12-22 2022-04-12 湖南中医药大学 一种实验用自动显距标面积、体积照相机
CN114338981B (zh) * 2021-12-22 2023-11-07 湖南中医药大学 一种实验用自动显距标面积、体积照相机
CN114511620A (zh) * 2021-12-28 2022-05-17 南通大学 一种基于Mask R-CNN的结构位移监测方法
CN114511620B (zh) * 2021-12-28 2024-06-04 南通大学 一种基于Mask R-CNN的结构位移监测方法
CN114693763A (zh) * 2022-03-15 2022-07-01 武汉理工大学 航道船舶三维模型构建方法、系统、装置及存储介质
CN114882095A (zh) * 2022-05-06 2022-08-09 山东省科学院海洋仪器仪表研究所 一种基于轮廓匹配的物体高度在线测量方法
CN114882095B (zh) * 2022-05-06 2022-12-20 山东省科学院海洋仪器仪表研究所 一种基于轮廓匹配的物体高度在线测量方法
CN115661323A (zh) * 2022-10-28 2023-01-31 中国科学院烟台海岸带研究所 一种利用3d水下声呐系统实时建立三维虚拟图像的方法
CN115661323B (zh) * 2022-10-28 2024-03-19 中国科学院烟台海岸带研究所 一种利用3d水下声呐系统实时建立三维虚拟图像的方法
CN118175423A (zh) * 2024-05-15 2024-06-11 山东云海国创云计算装备产业创新中心有限公司 一种焦距确定系统、方法、设备、介质及产品

Also Published As

Publication number Publication date
CN111238374B (zh) 2021-03-12
CN111238374A (zh) 2020-06-05

Similar Documents

Publication Publication Date Title
WO2021185220A1 (zh) 一种基于坐标测量的三维模型构建及测量方法
WO2021185218A1 (zh) 一种在运动过程中获取物体3d坐标及尺寸的方法
WO2021185217A1 (zh) 一种基于多激光测距和测角的标定方法
WO2021185214A1 (zh) 一种在3d建模中远距离标定方法
WO2021185216A1 (zh) 一种基于多激光测距的标定方法
WO2021185215A1 (zh) 一种在3d建模中多相机共同标定方法
WO2022078442A1 (zh) 一种基于光扫描和智能视觉融合的3d信息采集方法
CN112254680B (zh) 一种多自由度的智能视觉3d信息采集设备
CN111340959A (zh) 一种基于直方图匹配的三维模型无缝纹理贴图方法
CN112082486B (zh) 一种手持式智能3d信息采集设备
CN112253913B (zh) 一种与旋转中心偏离的智能视觉3d信息采集设备
CN112254638B (zh) 一种可俯仰调节的智能视觉3d信息采集设备
CN112254677B (zh) 一种基于手持设备的多位置组合式3d采集系统及方法
CN112254673B (zh) 一种自转式智能视觉3d信息采集设备
CN112254671B (zh) 一种多次组合式3d采集系统及方法
CN111325780B (zh) 一种基于图像筛选的3d模型快速构建方法
CN112254679A (zh) 一种多位置组合式3d采集系统及方法
CN112254674B (zh) 一种近距离智能视觉3d信息采集设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21771943

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21771943

Country of ref document: EP

Kind code of ref document: A1