CN111982071A - 3D scanning method and system based on TOF camera - Google Patents


Info

Publication number
CN111982071A
CN111982071A (application CN201910441073.7A)
Authority
CN
China
Prior art keywords
controller
dimensional coordinate
tof camera
coordinate system
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910441073.7A
Other languages
Chinese (zh)
Other versions
CN111982071B (en)
Inventor
吴振华
Current Assignee
TCL Research America Inc
Original Assignee
TCL Research America Inc
Priority date
Filing date
Publication date
Application filed by TCL Research America Inc filed Critical TCL Research America Inc
Priority to CN201910441073.7A priority Critical patent/CN111982071B/en
Publication of CN111982071A publication Critical patent/CN111982071A/en
Application granted granted Critical
Publication of CN111982071B publication Critical patent/CN111982071B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a 3D scanning method and system based on a TOF camera. The TOF camera obtains the three-dimensional coordinate values of all acquisition points on the surface of a target object in the TOF camera three-dimensional coordinate system, and transmits the first point cloud data generated from these coordinate values to a controller. The controller converts the first point cloud data into the controller object coordinate system corresponding to the controller to obtain second point cloud data, corrects the second point cloud data according to a preset error coefficient, and obtains a 3D model of the target object from the corrected second point cloud data. By acquiring data with a TOF camera and correcting the acquired data, the method and system improve the accuracy of the established 3D model, reduce the environmental requirements during data acquisition, and have strong practicality.

Description

3D scanning method and system based on TOF camera
Technical Field
The invention relates to the technical field of 3D scanning, in particular to a 3D scanning method and system based on a TOF camera.
Background
At present, 3D scanning of a target object is required in many fields, such as urban and building measurement, topographic mapping, industrial manufacturing, the service industry, and medicine. Given a 3D model of a target building, 3D scanning technology can not only display a panoramic guide image but also monitor building construction quality, making efficient inspection and panoramic display of buildings convenient.
3D scanning techniques have evolved from single-point scanning to line scanning to area scanning; a 3D scanner detects and analyzes the shape of an object or environment. The dominant technologies today are laser scanning and structured-light scanning. Laser scanning generally uses triangulation: a laser beam emitted by a laser hits the object under measurement and is reflected back, the beam is imaged on a CCD through a lens, and the measured distance is obtained from the geometric similarity of triangles; 3D scanning of the object is then performed based on the measured distances. However, the scannable distance is only about 0.4 to 6 meters, so the method suits short-range indoor scanning. It also depends on the completeness of the 3D data and key dimensions, which makes the computation heavy, and the scanning efficiency is affected by the laser light source, indoor ambient light, and similar factors; it is therefore unsuitable for 3D scanning of outdoor objects. Structured-light 3D scanning uses binocular measurement: light patterns are projected onto the measured object to add effective feature points. This imposes high requirements on imaging quality and depends strongly on uncertain factors such as saturation, how close the measured object is to the background, the structured-light intensity, and the direction of the projection light source; it is severely disturbed by the environment and likewise unsuitable for outdoor use. Neither existing scanning method therefore meets users' need for long-range 3D scanning of buildings.
Therefore, the prior art is subject to further improvement.
Disclosure of Invention
In view of the above deficiencies in the prior art, the present invention provides a 3D scanning method and system based on a TOF camera, which overcome the defect that prior-art 3D scanning methods cannot perform long-distance 3D scanning and therefore cannot obtain a 3D model of a distant object.
The invention provides a TOF camera-based 3D scanning method, applied to a 3D scanning system comprising a TOF camera and a controller that communicate with each other;
the method comprises the following steps:
the TOF camera determining first point cloud data and transmitting the first point cloud data to the controller; the first point cloud data is determined according to three-dimensional coordinate values of all acquisition points on the surface of the target object in a three-dimensional coordinate system of the TOF camera;
the controller receives the first point cloud data and determines second point cloud data according to the first point cloud data, wherein the second point cloud data is a three-dimensional coordinate value of the first point cloud data in a controller object coordinate system corresponding to the controller;
and the controller corrects the second point cloud data according to a preset error coefficient, and obtains a 3D model of the target object according to the corrected second point cloud data.
As a further improved technical solution, the step of determining the first point cloud data by the TOF camera includes:
acquiring position information of each acquisition point on the surface of the target object in the three-dimensional coordinate system of the TOF camera;
according to the position information of each acquisition point in the TOF camera three-dimensional coordinate system, calculating a three-dimensional coordinate value of each acquisition point in the TOF camera three-dimensional coordinate system;
and generating the first point cloud data according to the three-dimensional coordinate values of the acquisition points in the TOF camera three-dimensional coordinate system.
As a further improved technical solution, the step of determining the second point cloud data according to the first point cloud data includes:
determining three-dimensional coordinate values of all acquisition points in the coordinate system of the controller object according to the three-dimensional coordinate values of all acquisition points in the first point cloud data in the three-dimensional coordinate system of the TOF camera;
and determining the second point cloud data according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system.
As a further improved technical solution, the 3D scanning system further includes: an RGB camera disposed on the TOF camera, the RGB camera in communication with the controller;
The method further comprises the following steps:
the RGB camera collects the RGB color values corresponding to the acquisition points on the surface of the target object and transmits the collected RGB color values to the controller;
and the controller receives the RGB color values corresponding to the acquisition points and, according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system, fills the RGB color values of the acquisition points into the 3D model.
As a further improved technical solution, the error coefficient calculation step is:
the TOF camera acquires position information of at least one calibration label arranged on a calibration object or on the surface of a target object in a TOF camera three-dimensional coordinate system, obtains a three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system according to the position information, and transmits the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system to the controller;
the controller receives the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system, and determines the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system;
And the controller calculates the error coefficient according to the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system and the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system.
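As a sketch of this calculation: the patent does not fix an exact formula, so a per-axis mean difference between the actual and the converted label coordinates is one plausible reading, and the function name below is hypothetical.

```python
def error_coefficient(converted, actual):
    """Per-axis offset between the converted and the actual calibration-label
    coordinates in the controller object coordinate system, averaged over all
    labels. Adding this offset to a converted point corrects it."""
    n = len(converted)
    return tuple(
        sum(a[i] - c[i] for c, a in zip(converted, actual)) / n
        for i in range(3)
    )

# one label: converted value (1.0, 2.0, 3.0), actual value (1.1, 2.0, 2.9)
coef = error_coefficient([(1.0, 2.0, 3.0)], [(1.1, 2.0, 2.9)])
```

With several calibration labels, averaging the offsets reduces the influence of any single noisy measurement.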
As a further improved technical solution, the method for acquiring the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system includes the following steps:
the at least one calibration label sends the actual position information thereof to the controller;
and the controller receives the actual position information and obtains an actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the received actual position information.
As a further improved technical solution, another method for acquiring the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system includes the following steps:
the controller emits a laser beam toward the surface of each of the at least one calibration label;
under this irradiation, the surface of the at least one calibration label reflects the laser beam back to the controller;
the controller receives the laser beam reflected from the surface of the at least one calibration label;
the controller calculates the position information of the at least one calibration label in the controller object coordinate system from the emission time and emission angle of the laser beam and the reception time and reflection angle of the reflected beam;
and the controller obtains the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system from the calculated position information.
As a further improved technical solution, the step of obtaining the 3D model of the target object according to the corrected second point cloud data includes:
sequentially carrying out 3D modeling on the corrected second point cloud data according to an update-or-not principle;
the update-or-not principle is as follows: for the same acquisition point on the surface of the target object, the three-dimensional coordinate value in the controller object coordinate system that is acquired first is treated as valid data, and any value acquired later is treated as invalid data.
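This first-acquired-wins rule can be sketched as a single pass over the corrected points. How "the same acquisition point" is identified is not specified in the patent; keying on the coordinates rounded to millimetres is an assumption of this sketch.

```python
def apply_update_or_not(points, decimals=3):
    """Keep, for each acquisition point, only the first coordinate value
    seen; later values for the same point are discarded as invalid."""
    seen = {}
    for p in points:
        key = tuple(round(c, decimals) for c in p)
        if key not in seen:  # first acquisition wins
            seen[key] = p
    return list(seen.values())

# the repeated reading of the first point is dropped
pts = apply_update_or_not([(0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 2.0)])
```

Python dictionaries preserve insertion order, so the surviving points keep their acquisition order.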
As a further improved technical solution, the step of correcting the second point cloud data by the controller according to a preset error coefficient includes:
and adding the error coefficient to the three-dimensional coordinate value of each acquisition point in the second point cloud data in the controller object coordinate system to obtain corrected second point cloud data.
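A minimal sketch of this correction step, assuming the error coefficient is a per-axis offset added to every point (the function name is illustrative):

```python
def correct_second_point_cloud(points, error_coef):
    """Add the preset per-axis error coefficient to the three-dimensional
    coordinate value of every acquisition point in the second point cloud."""
    ex, ey, ez = error_coef
    return [(x + ex, y + ey, z + ez) for (x, y, z) in points]

corrected = correct_second_point_cloud([(1.0, 2.0, 3.0)], (0.1, 0.0, -0.1))
```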
The invention also provides a 3D scanning system based on the TOF camera, wherein the system comprises: a TOF camera and a controller in communication therewith;
the TOF camera includes: the system comprises a data acquisition module and a first communication module;
the data acquisition module is used for acquiring and determining first point cloud data; the first point cloud data is determined according to three-dimensional coordinate values of all acquisition points on the surface of the target object in a three-dimensional coordinate system of the TOF camera;
the first communication module is used for transmitting the first point cloud data to the controller;
the controller includes: the second communication module and the data processing module;
the second communication module is used for receiving the first point cloud data;
the data processing module is used for determining second point cloud data according to the first point cloud data, wherein the second point cloud data is a three-dimensional coordinate value of the first point cloud data in a controller object coordinate system corresponding to the controller;
And correcting the second point cloud data according to a preset error coefficient, and obtaining a 3D model of the target object according to the corrected second point cloud data.
As a further improved technical solution, the 3D scanning system further includes: an RGB camera disposed on the TOF camera, the RGB camera in communication with the controller; the data processing module further comprises: a data fusion unit;
the RGB camera is used for collecting RGB color values corresponding to all collection points on the surface of the target object and transmitting the collected RGB color values to the controller;
the second communication module is further configured to receive the RGB color values;
the data fusion unit is used for receiving the RGB color values respectively corresponding to the acquisition points, determining the three-dimensional coordinate values of the acquisition points in the coordinate system of the controller according to the three-dimensional coordinate values of the acquisition points in the three-dimensional coordinate system of the TOF camera, and filling the RGB color values respectively corresponding to the acquisition points into the 3D model according to the three-dimensional coordinate values of the acquisition points in the coordinate system of the controller.
As a further improved technical solution, the system further includes: a calibration object;
arranging at least one calibration label on the surface of the calibration object or the surface of the target object;
the data processing module is further configured to receive a three-dimensional coordinate value of at least one calibration label in a TOF camera three-dimensional coordinate system transmitted by the TOF camera, determine a three-dimensional coordinate value of the at least one calibration label in a controller object coordinate system according to the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system, and calculate the error coefficient according to the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system and an actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system.
Data acquired with a TOF camera offers area scanning, a small computational load, low sensitivity to ambient light, and high scanning precision. Using a TOF camera for 3D modelling of the target object therefore overcomes the prior-art defects of laser scanning and structured-light scanning, which allow only short-range scanning and are strongly affected by ambient light. To obtain more accurate point cloud data of the target object, the calibration label is used to determine the error coefficient of the three-dimensional coordinate conversion, and the converted point cloud data is corrected with this coefficient, yielding a more accurate 3D model of the target object. The disclosed method can be widely applied in fields such as building modelling and 3D film production.
Drawings
FIG. 1 is a flow chart of the steps of a TOF camera based 3D scanning method provided by the present invention;
FIG. 2 is a schematic layout of the devices with which the method of the present invention is applied;
FIG. 3 is a schematic diagram of the transformation between the three-dimensional coordinates of the TOF camera scanner and the coordinates of the object to be measured in the method of the present invention;
FIG. 4 is a schematic diagram illustrating the calculation of error coefficients in the method of the present invention;
FIG. 5 is a flow chart of steps implemented in a specific application of the method of the present invention;
fig. 6 is a schematic block diagram of the architecture of the system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it.
TOF stands for Time of Flight: a sensor emits modulated near-infrared light, which is reflected when it meets an object; by calculating the time difference or phase difference between emission and reflection, the sensor converts this into the distance of the photographed scene and generates depth information.
A TOF camera consists of a light source, optical components, a sensor, a control circuit, and an arithmetic unit, and measures distance from the changes between the emitted and the reflected light signal. The illumination unit therefore modulates the light at high frequency before emitting it, for example pulsed light from an LED or laser diode, with pulse rates up to 100 MHz. Like an ordinary camera, the front end of the TOF chip needs a lens to collect light; unlike a common optical lens, however, a band-pass filter is added in front of the lens so that only light of the same wavelength as the illumination source enters. As the core of the TOF camera, each pixel of the TOF chip records the phase of the light travelling from the camera to the target object and back. The sensor structure resembles a conventional image sensor but is more complex: it contains two or more shutters for sampling the reflected light at different times. For this reason TOF chip pixels are much larger than typical image-sensor pixels, usually around 100 μm. Both the illumination unit and the TOF sensor require high-speed signal control to achieve high depth measurement accuracy; for example, a 10 ps shift in the synchronization signal between the illumination light and the TOF sensor corresponds to a displacement of 1.5 mm. A current 3 GHz CPU has a clock period of 300 ps, corresponding to a depth resolution of 45 mm. The arithmetic unit mainly performs data correction and computation; distance information is obtained by calculating the relative phase shift between the incident and the reflected light.
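The timing figures above follow from the round-trip relation depth = c·Δt/2 (the factor 1/2 because the light travels out and back), which can be checked numerically:

```python
C = 3.0e8  # propagation speed of light, m/s

def depth_from_timing(dt):
    """Depth change corresponding to a timing shift dt in seconds;
    halved because the measured time covers the round trip."""
    return C * dt / 2.0

# 10 ps shift -> 1.5 mm; 300 ps clock period (3 GHz CPU) -> 45 mm
```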
Advantages of TOF: compared with a stereo camera or a triangulation system, a TOF camera is compact and well suited to applications that need a light, small camera. It computes depth information in real time at rates from dozens of frames per second up to 100 fps, whereas a binocular stereo camera requires a complex correlation algorithm and processes slowly. TOF depth computation is unaffected by the surface gray level and features of the object, so three-dimensional measurement is accurate, while a binocular stereo camera requires good feature variation on the target or depth cannot be computed at all. TOF depth accuracy does not degrade with distance, and the measuring range can exceed 100 meters, which is very significant for applications involving large-range motion.
Furthermore, the method and system provided by the invention correct the collected point cloud data using the error coefficient between the measured and the actual coordinate values of the calibration object, and model the target object from the corrected point cloud data, so a more accurate 3D modelling result is obtained.
The invention provides a TOF camera-based 3D scanning method, applied to a 3D scanning system comprising a TOF camera and a controller that communicate with each other. As shown in figure 1, the method comprises the following steps:
step S1, the TOF camera determining first point cloud data and transmitting the first point cloud data to the controller; and determining the first point cloud data according to the three-dimensional coordinate values of all acquisition points on the surface of the target object in a three-dimensional coordinate system of the TOF camera.
In the step, the TOF camera shoots the surface of the target object from different angles to obtain a depth image of the target object, and three-dimensional coordinate values of all acquisition points on the surface of the target object in a TOF camera three-dimensional coordinate system are calculated based on the position information between each acquisition point on the surface of the target object and the TOF camera contained in the depth image. The first point cloud data is a set of three-dimensional coordinate values of each acquisition point in the TOF camera three-dimensional coordinate system.
The acquisition points are the coordinate points, uniformly distributed over the surface of the target object, that are irradiated by the infrared rays emitted by the TOF camera. The target object is photographed at preset angles until complete three-dimensional coordinate data of its surface is obtained. When shooting around the target object, the motion trajectory of the TOF camera may be irregular, and the preset shooting angle can be adapted to the structure of the object's surface. For example, when the target object is a cylinder, its side can be photographed at every first preset angle and its upper and lower surfaces at every second preset angle. Specifically, taking the center of the cylinder as the center, the TOF camera can shoot 360 degrees around the side in a horizontal plane; with a first preset angle of 90 degrees, it shoots in turn at rotations of 0°, 90°, 180° and 270° in the horizontal direction. When photographing the upper or lower surface, the TOF camera again takes the center of the cylinder as the center and shoots around that surface at the second preset angle, in a vertical plane above the upper surface or below the lower surface. For example, with a second preset angle of 30 degrees, the upper surface is photographed at rotations of 0°, 30°, 90°, 120° and 180° in the vertical direction and the lower surface at 210°, 240°, 270°, 300° and 330°, one shot being taken every second.
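The side-surface schedule described above (one shot at every first preset angle through a full horizontal circle) can be sketched as:

```python
def side_shot_angles(first_preset_deg):
    """Horizontal shooting angles around the cylinder's side; a 90-degree
    first preset angle gives shots at 0, 90, 180 and 270 degrees."""
    return list(range(0, 360, first_preset_deg))
```

A smaller preset angle simply yields more, denser shooting positions around the same circle.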
The method for determining the first point cloud data by the TOF camera in this step specifically includes:
and S11, acquiring the position information of each acquisition point on the surface of the target object in the three-dimensional coordinate system of the TOF camera.
The location information includes: a distance value and an angle value between the target object and the TOF camera.
The acquisition method of the distance value comprises the following steps:
sending infrared rays to the surface of a target object by a TOF camera, and recording infrared ray emission time and infrared ray emission speed;
under the irradiation of the infrared rays, the surface of the target object reflects the infrared rays to the TOF camera, and the reflection time is recorded;
and the TOF camera calculates the distance value between the TOF camera and the target object from the time difference between the infrared emission time and the reflection time and from the infrared propagation speed. The specific calculation is half the product of the time difference and the propagation speed, since the measured time covers the round trip from camera to object and back. The distance value is the straight-line distance between each acquisition point on the surface of the target object and the center point of the TOF camera lens.
The angle value is the shooting angle of the TOF camera when it photographs the surface of the target object, i.e. the angles between the center line of the TOF camera lens and the three coordinate axes of the TOF camera coordinate system. Since the TOF camera is rotated around the target object at preset angular or time intervals while acquiring images of different parts of the surface, it forms different shooting angles during the rotation.
And S12, calculating the three-dimensional coordinate values of the acquisition points in the three-dimensional coordinate system of the TOF camera according to the position information of the acquisition points in the three-dimensional coordinate system of the TOF camera.
In this step, according to the distance value and angle value of each acquisition point, the line segment corresponding to the distance value is projected onto the three coordinate axes of the TOF camera three-dimensional coordinate system; from the angle between the line and each coordinate axis and the lengths of its projections onto the three axes, the three-dimensional coordinate value of each acquisition point in the TOF camera three-dimensional coordinate system is obtained.
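One hedged reading of this projection is via direction cosines: if α, β, γ are the angles between the line of sight of length d and the three camera axes, the projections are d·cos α, d·cos β and d·cos γ. The direction-cosine interpretation is an assumption of this sketch.

```python
import math

def point_from_range_and_angles(d, alpha, beta, gamma):
    """Project the measured distance d onto the three TOF-camera axes using
    the angles between the line of sight and each axis (direction cosines,
    which satisfy cos²α + cos²β + cos²γ = 1 for a consistent direction)."""
    return (d * math.cos(alpha), d * math.cos(beta), d * math.cos(gamma))

# a point 2 m straight ahead along the optical (Z) axis
x, y, z = point_from_range_and_angles(2.0, math.pi / 2, math.pi / 2, 0.0)
```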
And S13, generating the first point cloud data according to the three-dimensional coordinate values of the acquisition points in the TOF camera three-dimensional coordinate system.
And recombining the three-dimensional coordinate values in the TOF camera three-dimensional coordinate system of each acquisition point into three-dimensional data, and generating the first point cloud data from the recombined three-dimensional data.
For example: the three-dimensional coordinate values in the TOF camera three-dimensional coordinate system corresponding to the acquisition points A1, A2, A3, …, An (n is a natural number) on the surface of the target object are (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), …, (xn, yn, zn).
And recombining the three-dimensional coordinate values corresponding to all the acquisition points and the angle values corresponding to the acquisition points to generate the first point cloud data. Each three-dimensional data point in the first point cloud data corresponds to a three-dimensional coordinate value in a three-dimensional coordinate system of a TOF camera where a sampling point on the surface of the target object is located and an angle value corresponding to the three-dimensional coordinate value.
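The structure of the first point cloud described above, where each entry pairs a camera-frame coordinate with the angle value it was captured at, might be represented as follows; the field names are illustrative, not taken from the patent.

```python
def build_first_point_cloud(coords, angles):
    """Combine, for each acquisition point, its three-dimensional coordinate
    value in the TOF camera coordinate system with the shooting-angle value
    at which it was captured."""
    return [{"xyz": c, "angle": a} for c, a in zip(coords, angles)]

cloud = build_first_point_cloud([(1.0, 0.0, 2.0)], [(0.0, 0.0, 0.0)])
```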
With reference to the layout of the devices shown in fig. 2, the TOF camera 240 establishes a communication connection with the controller 10; the connection may be wired or wireless, and once it is established the TOF camera transmits the first point cloud data to the controller over it. To achieve accurate transmission of the data, the invention preferably uses wires 120 to carry information between the controller and the TOF camera.
Furthermore, the number of target objects in this step may be one or more; that is, the acquisition points on the surfaces of multiple target objects may be scanned simultaneously, yielding first point cloud data in the TOF camera three-dimensional coordinate system for the acquisition points on each surface. The target object may be a building whose dimensional accuracy is to be checked, or a set of physical models to be displayed as 3D models.
Step S2, the controller receives the first point cloud data, and determines second point cloud data according to the first point cloud data, where the second point cloud data is a three-dimensional coordinate value of the first point cloud data in a controller object coordinate system corresponding to the controller.
The controller receives the first point cloud data transmitted by the TOF camera through the communication connection with the TOF camera, and, from the three-dimensional coordinate values of all acquisition points contained in the first point cloud data, determines the corresponding three-dimensional coordinate values converted into the controller object coordinate system.
As shown in fig. 2, the controller 10 may have a data processing function, and preferably, the controller 10 may also have an information display function, for example, the controller 10 may further be equipped with a display 110 having an information display function, and the display 110 is used for displaying the 3D model of the modeled target object.
Specifically, the controller receives the first point cloud data sent by the TOF camera. Because the three-dimensional coordinate values of the acquisition points contained in the first point cloud data are expressed in the TOF camera three-dimensional coordinate system established by the TOF camera, a corresponding coordinate conversion between the two different coordinate systems must be performed on these values. Specifically, according to the three-dimensional coordinate values of the acquisition points in the TOF camera three-dimensional coordinate system, the three-dimensional coordinate values of the acquisition points in the controller object coordinate system are determined, and the second point cloud data is determined according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system.
Referring to fig. 3, a schematic diagram of the coordinate transformation between the TOF camera three-dimensional coordinate system and the object coordinate system established by the controller is shown. In the figure, o1-X1Y1Z1 is the TOF camera three-dimensional coordinate system, a three-dimensional rectangular coordinate system established with the focus center of the pinhole camera model as the origin and the optical axis of the camera as the Z-axis; o2-X2Y2Z2 is the object coordinate system established by the controller; and vector o1o2 is the coordinate vector of the TOF camera in the object coordinate system. First point cloud data in o1-X1Y1Z1 can be converted into the o2-X2Y2Z2 controller object coordinate system through the coordinate vector o1o2.
Specifically, the coordinate conversion may be implemented by a coordinate conversion method such as the three-parameter method or the seven-parameter method. The seven-parameter method is taken as an example below:
the two space rectangular coordinate systems are O1-X1Y1Z1 and O2-X2Y2Z2 respectively; their origins do not coincide, and the corresponding coordinate axes are not parallel to each other, so there are three translation parameters and three Euler angles (i.e., three rotation parameters) between them. Considering that the scales of the two coordinate systems may not be identical, a scale change parameter m is also required, giving seven parameters in total.
Seven-parameter formulas for converting between space rectangular coordinate systems include the Bursa (Bursa-Wolf) formula, the Molodensky formula, the Veis formula, and the like.
The Bursa seven-parameter formula is:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = \begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix} + (1+m)\begin{bmatrix} 1 & \varepsilon_Z & -\varepsilon_Y \\ -\varepsilon_Z & 1 & \varepsilon_X \\ \varepsilon_Y & -\varepsilon_X & 1 \end{bmatrix}\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

wherein $(\Delta X_0, \Delta Y_0, \Delta Z_0)$ are the components, on the three coordinate axes, of the origin of the TOF camera three-dimensional coordinate system with respect to the coordinate origin of the controller object coordinate system, commonly referred to as the three translation parameters; $(X_1, Y_1, Z_1)$ is a three-dimensional coordinate value acquired in the TOF camera three-dimensional coordinate system; $(X_2, Y_2, Z_2)$ is the corresponding three-dimensional coordinate value in the controller object coordinate system; $(\varepsilon_X, \varepsilon_Y, \varepsilon_Z)$ are the three rotation parameters; and m is the scale change parameter.
And converting the three-dimensional coordinate values of the plurality of acquisition points in the TOF camera three-dimensional coordinate system contained in the first point cloud data into the object coordinate system where the controller is located through the coordinate conversion, and obtaining the three-dimensional coordinate values of the plurality of converted acquisition points in the object coordinate system of the controller.
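The seven-parameter conversion described above can be sketched in code. The following is a minimal NumPy illustration and not part of the patent; the function name, the array layout, and the small-angle rotation convention are assumptions for this sketch:

```python
import numpy as np

def bursa_wolf(points, t, eps, m):
    """Convert points (N, 3) from the TOF camera coordinate system to the
    controller object coordinate system with a Bursa seven-parameter formula.
    t: translation (dX, dY, dZ); eps: small rotation angles in radians; m: scale."""
    ex, ey, ez = eps
    # Small-angle rotation matrix used by the Bursa formula
    R = np.array([[1.0,  ez, -ey],
                  [-ez, 1.0,  ex],
                  [ ey, -ex, 1.0]])
    return np.asarray(t) + (1.0 + m) * (np.asarray(points) @ R.T)

# With all seven parameters zero the transform is the identity
pts = np.array([[1.0, 2.0, 3.0]])
out = bursa_wolf(pts, t=(0.0, 0.0, 0.0), eps=(0.0, 0.0, 0.0), m=0.0)
```

In practice the seven parameters would be estimated from at least three common points known in both coordinate systems.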
And step S3, the controller corrects the second point cloud data converted into the controller object coordinate system according to a preset error coefficient, and obtains a 3D model of the target object according to the corrected second point cloud data.
Because there will be a certain measurement error when the TOF camera is used to acquire the position information of the acquisition points in the above step, the accuracy of the second point cloud data obtained by coordinate conversion is reduced. This measurement error may be: an error generated during operation (for example, an error due to hand shake during operation), a measurement error of the TOF camera system (an error in the coordinate system or in the processing software) or of the hardware of the TOF camera (for example, a lens error), or an error in the setting of the measurement parameters, and the like.
Therefore, in the method, an error coefficient generated by coordinate system conversion is firstly calculated, the error coefficient is recorded and stored, and in the subsequent 3D scanning process, the second point cloud data can be corrected according to the stored error coefficient, so that the corrected second point cloud data is obtained.
The error coefficient in this step is calculated as follows:
step S31, the TOF camera collects position information of at least one calibration label arranged on the surface of a calibration object or on the surface of a target object in a TOF camera three-dimensional coordinate system, three-dimensional coordinate values of the at least one calibration label in the TOF camera three-dimensional coordinate system are obtained according to the position information, and the three-dimensional coordinate values of the at least one calibration label in the TOF camera three-dimensional coordinate system are transmitted to the controller.
First, within the scanning area of the TOF camera, at least one calibration object may be provided, on the surface of which at least one calibration label for correcting measurement errors occurring during coordinate transformation is provided. On the other hand, the calibration label may also be disposed on the surface of the target object.
Specifically, in this step, the TOF camera first acquires the position information of the at least one calibration label, then calculates a three-dimensional coordinate value of the calibration label in the three-dimensional coordinate system of the TOF camera according to the acquired position information of the at least one calibration label, and sends the three-dimensional coordinate value to the controller through the communication module. In this step, the method for the TOF camera to obtain the position information of the calibration label and calculate the three-dimensional coordinate value of the calibration label in the three-dimensional coordinate system of the TOF camera according to the position information is the same as the method for the TOF camera to obtain the position information of each acquisition point in step S1 and calculate the three-dimensional coordinate value of the acquisition point in the three-dimensional coordinate system of the TOF camera according to the position information, and details are not repeated here.
Step S32, the controller receives the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system, and determines the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system.
Step S33, the controller calculates the error coefficient according to the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system and the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system.
The error coefficient is the difference between the three-dimensional coordinate value obtained by conversion and the actual three-dimensional coordinate value, specifically the difference between the coordinate values on each of the three coordinate axes. When there are multiple calibration labels, the average of the error coefficients corresponding to the calibration labels is used as the error coefficient for correcting the second point cloud data in step S3.
Specifically, in step S3, the error coefficient calculation is illustrated schematically in fig. 4. Let the data matrix [Xmn] be the three-dimensional coordinate values, in the controller object coordinate system, calculated for a certain calibration label on the target object to be scanned, and let the data matrix [Ymn] be the actual three-dimensional coordinate values of that calibration label in the controller object coordinate system. The difference between the two is then

[Zmn] = [Xmn] − [Ymn]
The error coefficient thus represents the calculation error that may be generated in converting coordinate data between the TOF camera three-dimensional coordinate system and the object coordinate system where the controller is located; this systematic error value is added to each coordinate point in the data matrix obtained after conversion, thereby correcting the whole second point cloud data.
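The averaging and correction steps above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function names are assumptions, and the sign is chosen here so that adding the coefficient moves converted points toward the actual positions:

```python
import numpy as np

def error_coefficient(converted_labels, actual_labels):
    """Per-axis mean difference between actual and converted calibration-label
    coordinates (both (K, 3) arrays); averaging covers the multi-label case."""
    return (np.asarray(actual_labels) - np.asarray(converted_labels)).mean(axis=0)

def correct_cloud(second_cloud, coeff):
    """Add the stored error coefficient to every point of the second point cloud."""
    return np.asarray(second_cloud) + coeff
```

Once computed from the calibration labels, the coefficient can be stored and reused for every subsequent scan, as the method describes.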
In an embodiment, in the above method, the actual three-dimensional coordinate value of the at least one calibration label in the controller coordinate system may be obtained by:
the first mode is to obtain the actual three-dimensional coordinate value of the calibration label by a mode that the calibration label automatically sends the position information of the calibration label to a controller, and the mode comprises the following steps:
step S331, the at least one calibration label sends actual position information thereof to the controller;
step S332, the controller receives the actual position information, and obtains an actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the received actual position information.
In this mode, the at least one calibration label communicates with the controller; the at least one calibration label first locates its own position to obtain its position information, and then sends its actual position information to the controller through the communication connection.
And the controller receives the actual position information sent by the at least one calibration label and obtains a three-dimensional coordinate value in a controller object coordinate system where the at least one calibration label is located according to the received actual position information.
The actual position information may be a three-dimensional coordinate value in the controller object coordinate system measured by the calibration label, a longitude and latitude value located by a GPS module, or a distance value and an angle value, relative to the controller, located by the calibration label. The actual position information need only satisfy the condition that the data it contains can be used to calculate the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system.
Further, the communication mode may be wired communication or wireless communication, as long as the information intercommunication between each calibration label and the controller can be satisfied.
The second way of obtaining the actual three-dimensional coordinate value of the at least one calibration label is by a laser reflection method, which includes the following steps:
step S341, the controller emits an emission laser beam to the surface of the at least one calibration label;
step S342, under irradiation by the emission laser beam, the surface of the at least one calibration label reflects the emission laser beam back to the controller;
step S343, the controller receives the emission laser beam reflected by the surface of the at least one calibration label;
step S344, the controller calculates the position information of the at least one calibration label in the controller object coordinate system according to the emission time and emission angle of the emitted laser beam and the reception time and reflection angle of the received laser beam;
step S345, the controller obtains the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the calculated position information of the at least one calibration label in the controller object coordinate system.
In the above steps, the distance value between the controller and each calibration label is first calculated from the propagation speed of the laser beam, its emission time and its reception time; the three-dimensional coordinate value in the object coordinate system where each calibration label is located is then calculated from this distance value and the emission angle of the laser beam.
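The laser-ranging calculation above can be sketched numerically. This is an illustrative sketch only; the function name and the choice of azimuth/elevation angles to represent the emission angle are assumptions:

```python
import math

C = 299_792_458.0  # propagation speed of the laser beam (speed of light), m/s

def label_position(t_send, t_recv, azimuth, elevation):
    """Round-trip time of the laser beam gives the distance to the label;
    the emission angles (azimuth, elevation, in radians) then convert that
    distance into Cartesian coordinates in the controller object coordinate system."""
    d = C * (t_recv - t_send) / 2.0  # beam travels out and back, so halve it
    x = d * math.cos(elevation) * math.cos(azimuth)
    y = d * math.cos(elevation) * math.sin(azimuth)
    z = d * math.sin(elevation)
    return (x, y, z)
```

A label one meter away along the controller's x-axis, for instance, would return the beam after a round trip of 2 m divided by the speed of light.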
Furthermore, because only the three-dimensional coordinate values, in the controller object coordinate system, of the acquisition points on the surface of the target object are used for 3D modeling, the resulting 3D model is black and white. To make the presented 3D model closer to the real target object, the RGB color values of the target object can be obtained with an RGB camera and used to fill the 3D model with color, so that the modeled target object model has both the shape and the color information of the target object.
The 3D scanning system further comprises: an RGB camera disposed on the TOF camera, the RGB camera in communication with the controller;
the method further comprises the following steps:
the RGB camera collects RGB color values respectively corresponding to each collection point on the surface of the target object and transmits the collected RGB color values respectively corresponding to each collection point to the controller;
and the controller receives the RGB color values respectively corresponding to the acquisition points, and fills the RGB color values respectively corresponding to the acquisition points into the 3D model according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system.
Specifically, the RGB camera is an ordinary camera that can acquire a color image of the target object. The RGB camera photographs the surface of the target object to obtain an image of that surface; for each acquisition point on the surface, the image associates the three-dimensional coordinate value in the TOF camera three-dimensional coordinate system with the corresponding RGB color value. After the RGB values corresponding to the acquisition points are transmitted to the controller, the controller fills the received RGB color values onto the three-dimensional coordinate points of the acquisition points at their three-dimensional coordinate values in the controller object coordinate system. For example, suppose the RGB color values corresponding to the acquisition points A1, A2, A3 … An (n is a natural number) on the surface of the target object are (255, 228, 196), (245, 245, 220), (240, 255, 255) … (240, 248, 255), that their three-dimensional coordinate values in the TOF camera three-dimensional coordinate system are (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) … (xn, yn, zn), and that the three-dimensional coordinate values of A1, A2, A3 … An converted into the controller object coordinate system are (x1', y1', z1'), (x2', y2', z2'), (x3', y3', z3') … (xn', yn', zn'). Then, according to the converted three-dimensional coordinate value corresponding to each acquisition point, the corresponding RGB color value is filled into the three-dimensional coordinate point having that same coordinate value; that is, the RGB color values (255, 228, 196), (245, 245, 220), (240, 255, 255) … (240, 248, 255) of the acquisition points A1, A2, A3 … An are filled at the three-dimensional coordinate points (x1', y1', z1'), (x2', y2', z2'), (x3', y3', z3') … (xn', yn', zn') respectively. For example, the RGB color value (255, 228, 196) of the acquisition point whose three-dimensional coordinate value in the TOF camera three-dimensional coordinate system is (x1, y1, z1) is filled at the three-dimensional coordinate point (x1', y1', z1') in the controller object coordinate system, and the RGB color value (245, 245, 220) of the acquisition point at (x2, y2, z2) is filled at (x2', y2', z2').
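The color-filling association described above amounts to mapping each converted coordinate to its RGB value. A minimal sketch, not part of the patent (the function name and the dictionary representation are assumptions for illustration):

```python
def fill_colors(converted_points, colors):
    """Associate each converted acquisition-point coordinate with its RGB
    color value, as the controller does when coloring the 3D model.
    converted_points: iterable of (x', y', z'); colors: iterable of (r, g, b)."""
    return {tuple(p): c for p, c in zip(converted_points, colors)}

# Two acquisition points with the example RGB values from the text
model = fill_colors([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
                    [(255, 228, 196), (245, 245, 220)])
```

A real implementation would attach the color to the mesh vertex or point-cloud record rather than a dictionary, but the coordinate-to-color pairing is the same.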
And the controller performs 3D modeling according to the corrected second point cloud data so as to obtain a 3D model of the target object after modeling is completed, and transmits the model to the display screen for displaying.
In order to maintain the stability of the image while the model is being built during 3D modeling, the method further comprises, while the point cloud data is being corrected with the error coefficient:
performing 3D modeling sequentially on the corrected second point cloud data according to an update-or-not-update principle;
the update-or-not-update principle is: for the same acquisition point on the surface of the target object, the three-dimensional coordinate value in the controller object coordinate system that is acquired first is treated as valid data, and a three-dimensional coordinate value in the controller object coordinate system acquired later is treated as invalid data.
During 3D modeling, the second point cloud data may repeatedly contain three-dimensional coordinate values corresponding to the same acquisition point. By treating only the first-acquired three-dimensional coordinate value of an acquisition point as valid data and any later-acquired value as invalid data, a later-acquired value never updates data already filled into the 3D model. The contour of the 3D model is thus gradually and completely filled with the first-acquired three-dimensional coordinate values, later-acquired values do not disturb previously filled data, and the stability of the 3D model during building is maintained.
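The update-or-not-update principle is a first-write-wins rule. A minimal sketch (illustrative only; identifying repeated acquisition points by their coordinate value is an assumption of this sketch):

```python
def dedup_first_seen(points):
    """Keep only the first occurrence of each acquisition-point coordinate;
    later duplicates are treated as invalid data and never refresh the model."""
    seen, kept = set(), []
    for p in points:
        key = tuple(p)
        if key not in seen:   # a later arrival never overwrites an earlier one
            seen.add(key)
            kept.append(p)
    return kept
```

Streaming the corrected second point cloud through such a filter lets the model fill in monotonically instead of flickering as repeated measurements arrive.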
The method of the present invention is further described below by way of examples of specific applications of the present invention.
As shown in fig. 5, in the specific application embodiment, taking the case that the calibration label is disposed on the target object, the method of the present invention has the following steps:
h1, attaching a plurality of calibration labels on the target object.
The calibration label is an electronic label, for example a tag consisting of a coupling element and a chip together with an antenna; a communication unit is provided in the electronic label, so that information transmission between the calibration label and the controller can be realized.
H2, connecting the controller and the TOF camera with a power supply, and starting the controller and the TOF camera.
H3, respectively calibrating the object coordinate system established by the controller and calibrating the TOF camera three-dimensional coordinate system established by the TOF camera.
H4, the controller acquires actual position information of the calibration labels, three-dimensional coordinate values of the calibration labels in a controller object coordinate system are acquired according to the actual position information, the TOF camera acquires position information of the TOF camera three-dimensional coordinate system where the calibration labels are located, three-dimensional coordinate values of the TOF camera three-dimensional coordinate system where the calibration labels are located are acquired according to the position information, and the three-dimensional coordinate values of the TOF camera three-dimensional coordinate system where the calibration labels are located are transmitted to the controller;
the controller receives three-dimensional coordinate values in a TOF camera three-dimensional coordinate system where the calibration labels are located and converts the three-dimensional coordinate values in the TOF camera three-dimensional coordinate system where the calibration labels are located into a controller object coordinate system to obtain the three-dimensional coordinate values of the calibration labels in the controller object coordinate system;
and the controller calculates to obtain an error coefficient according to the three-dimensional coordinate value of the plurality of calibration labels in the controller object coordinate system and the actual three-dimensional coordinate value of the plurality of calibration labels in the controller object coordinate system.
H5, collecting the position information of each collecting point on the surface of the target object by using a TOF camera, calculating a three-dimensional coordinate value in a TOF camera three-dimensional coordinate system of each collecting point according to a distance value and an angle value contained in the collected position information, collecting the RGB color value of each collecting point on the surface of the target object by the RGB camera, and respectively transmitting the generated first point cloud data and the collected RGB color value to the controller by the TOF camera.
H6, the controller receives first point cloud data and RGB color values, and according to the three-dimensional coordinate values of all the collection points in the first point cloud data in the TOF camera three-dimensional coordinate system, the three-dimensional coordinate values of all the collection points in the controller object coordinate system are determined; and determining second point cloud data according to the three-dimensional coordinate values of the acquisition points in the coordinate system of the controller respectively. Correcting the whole converted second point cloud data according to the error coefficient, modeling 3D data according to the principle that the data is updated or not updated, and displaying the modeling effect on a display screen of a controller;
and continuously repeating the steps H4-H6 until all three-dimensional coordinate values corresponding to all three-dimensional coordinate points contained in the second point cloud data are endowed to the three-dimensional image to obtain a reconstructed three-dimensional image of the target object, and meanwhile, carrying out color filling on the reconstructed three-dimensional image of the target object according to the received RGB color values to obtain a 3D model of the target object containing color information.
The method disclosed by the invention can be widely applied to fields such as product replication and modification, antique modeling, historic-building modeling, portrait sculpture, and 3D movie making. A TOF camera is used to obtain the three-dimensional point cloud data of the target object, and the error coefficient is used to correct the converted point cloud data, improving the accuracy of the obtained data.
The invention also provides a 3D scanning system based on a TOF camera, as shown in fig. 6, comprising: TOF camera 240 and controller 10 establishing a communication connection therewith;
the TOF camera 240 includes: a data acquisition module 2401 and a first communication module 2402;
the data acquisition module 2401 is configured to determine first point cloud data; the first point cloud data is determined according to three-dimensional coordinate values of all acquisition points on the surface of the target object in a three-dimensional coordinate system of the TOF camera;
the first communication module 2402 is configured to transmit the first point cloud data to the controller 10;
The controller 10 includes: a second communication module 101 and a data processing module 102;
the second communication module 101 is configured to receive the first point cloud data;
the data processing module 102 is configured to determine second point cloud data according to the first point cloud data, where the second point cloud data is a three-dimensional coordinate value of the first point cloud data in a controller object coordinate system corresponding to the controller;
and correcting the second point cloud data according to a preset error coefficient, and obtaining a 3D model of the target object according to the corrected second point cloud data.
As shown in fig. 2, an RGB camera 230 is further disposed on the TOF camera 240, the TOF camera 240 and the RGB camera 230 are both disposed on a TOF camera scanner 20, and the RGB camera 230 is in communication with the controller 10; the data processing module 102 further comprises: a data fusion unit;
the RGB camera 230 is configured to collect RGB color values corresponding to the collection points on the surface of the target object 30, and transmit the collected RGB color values to the controller 10;
the second communication module is further configured to receive the RGB color values;
and the data fusion unit is used for filling the RGB color values respectively corresponding to the acquisition points into the 3D model according to the three-dimensional coordinate values respectively corresponding to the acquisition points in the coordinate system of the controller object.
The system further comprises: a calibration object;
arranging at least one calibration label on the surface of the calibration object or the surface of the target object;
the data processing module 102 is further configured to receive a three-dimensional coordinate value of the at least one calibration tag in a TOF camera three-dimensional coordinate system transmitted by the TOF camera 240, determine a three-dimensional coordinate value of the at least one calibration tag in a controller object coordinate system according to the three-dimensional coordinate value of the at least one calibration tag in the TOF camera three-dimensional coordinate system, and calculate the error coefficient according to the three-dimensional coordinate value of the at least one calibration tag in the controller object coordinate system and an actual three-dimensional coordinate value of the at least one calibration tag in the controller object coordinate system.
Preferably, the controller 10 establishes a wireless communication connection with each calibration tag.
The invention provides a TOF camera-based 3D scanning method and a TOF camera-based 3D scanning system. A TOF camera collects the position information, in the TOF camera three-dimensional coordinate system, of each acquisition point on the surface of a target object, obtains the three-dimensional coordinate values of the acquisition points in the TOF camera three-dimensional coordinate system from that position information, and transmits to a controller the first point cloud data generated from those three-dimensional coordinate values. The controller receives the first point cloud data and determines second point cloud data from it, the second point cloud data being the three-dimensional coordinate values of the first point cloud data in the controller object coordinate system corresponding to the controller. The controller then corrects the second point cloud data according to a preset error coefficient, and obtains a 3D model of the target object from the corrected second point cloud data. Because the TOF camera offers area scanning, a small amount of computation, and little sensitivity to ambient light, the 3D model established from the collected data is more accurate, the requirements on environmental conditions during data collection are reduced, and the practicability is stronger.
It should be understood that equivalents and modifications of the technical solution and inventive concept thereof may occur to those skilled in the art, and all such modifications and alterations should fall within the scope of the appended claims.

Claims (12)

1. A TOF camera-based 3D scanning method is applied to a 3D scanning system comprising a TOF camera and a controller, wherein the TOF camera and the controller are in communication with each other;
the method comprises the following steps:
the TOF camera determining first point cloud data and transmitting the first point cloud data to the controller; the first point cloud data is determined according to three-dimensional coordinate values of all acquisition points on the surface of the target object in a three-dimensional coordinate system of the TOF camera;
the controller receives the first point cloud data and determines second point cloud data according to the first point cloud data, wherein the second point cloud data is a three-dimensional coordinate value of the first point cloud data in a controller object coordinate system corresponding to the controller;
and the controller corrects the second point cloud data according to a preset error coefficient and generates a 3D model of the target object according to the corrected second point cloud data.
2. The TOF camera based 3D scanning method according to claim 1, wherein the step of the TOF camera determining the first point cloud data comprises:
the TOF camera acquires the position information of each acquisition point on the surface of the target object in the TOF camera three-dimensional coordinate system;
the TOF camera calculates three-dimensional coordinate values of the acquisition points in the TOF camera three-dimensional coordinate system according to the position information of the acquisition points in the TOF camera three-dimensional coordinate system;
and the TOF camera generates the first point cloud data according to the three-dimensional coordinate values of the acquisition points in the TOF camera three-dimensional coordinate system.
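The back-projection step in claim 2 can be sketched as follows. This is an illustrative Python sketch only: the claim does not fix a camera model, so a pinhole model with assumed intrinsics (fx, fy, cx, cy) is used to turn a per-pixel depth map into camera-frame 3D points.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a TOF depth map (meters) into camera-frame 3D points.

    fx, fy, cx, cy are pinhole intrinsics -- illustrative values, not
    taken from the patent, which leaves the camera model unspecified.
    """
    h, w = depth.shape
    # u, v are pixel column/row indices for every depth sample
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # one (x, y, z) row per acquisition point
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# toy 2x2 depth map with illustrative intrinsics
depth = np.array([[1.0, 1.0], [2.0, 2.0]])
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

The resulting Nx3 array is one possible concrete form of the "first point cloud data" the claim describes.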
3. The TOF camera based 3D scanning method according to claim 1, wherein the step of determining second point cloud data from the first point cloud data comprises:
determining the three-dimensional coordinate values of the acquisition points in the controller object coordinate system according to the three-dimensional coordinate values of the acquisition points in the first point cloud data in the TOF camera three-dimensional coordinate system;
and determining the second point cloud data according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system.
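The frame change in claim 3 amounts to a rigid transform of every point. A minimal sketch, assuming the TOF camera's pose in the controller object coordinate system is known as a rotation R and translation t (the patent does not say how the pose is obtained, e.g. extrinsic calibration or scanner tracking):

```python
import numpy as np

def camera_to_controller(points_cam, R, t):
    """Transform Nx3 camera-frame points into the controller object frame.

    R (3x3 rotation) and t (3-vector translation) are the assumed-known
    pose of the TOF camera in the controller object coordinate system.
    """
    return points_cam @ R.T + t

# identity rotation, 1 m shift along x: a point 2 m in front of the camera
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
pts_ctrl = camera_to_controller(np.array([[0.0, 0.0, 2.0]]), R, t)
```

The transformed Nx3 array corresponds to the "second point cloud data" of the claim.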
4. The TOF camera based 3D scanning method according to claim 1, wherein the 3D scanning system further comprises: an RGB camera disposed on the TOF camera, the RGB camera in communication with the controller;
the method further comprises the following steps:
the RGB camera collects the RGB color values respectively corresponding to the acquisition points on the surface of the target object and transmits them to the controller;
and the controller receives the RGB color values respectively corresponding to the acquisition points, and fills the RGB color values into the 3D model according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system.
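The color-filling step of claim 4 can be sketched as attaching one RGB sample to each 3D point. This assumes the RGB samples arrive in the same order as the points; the patent relies on the RGB camera being mounted on the TOF camera but does not spell out the pixel correspondence.

```python
import numpy as np

def fill_colors(points_ctrl, rgb_values):
    """Attach each acquisition point's RGB value to its 3D coordinate in
    the controller object frame, producing (x, y, z, r, g, b) rows.

    Assumes index-aligned inputs (one RGB sample per point), which is
    an illustrative simplification of the claimed correspondence.
    """
    pts = np.asarray(points_ctrl, dtype=float)
    rgb = np.asarray(rgb_values, dtype=float)
    assert len(pts) == len(rgb), "one RGB sample per acquisition point"
    return np.hstack([pts, rgb])

# a single red point one meter from the origin
colored = fill_colors([[0.0, 0.0, 1.0]], [[255, 0, 0]])
```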
5. The TOF camera based 3D scanning method according to claim 1, wherein the error coefficient is calculated by the following steps:
the TOF camera acquires position information of at least one calibration label arranged on a calibration object or the surface of the target object in a TOF camera three-dimensional coordinate system;
obtaining a three-dimensional coordinate value of the at least one calibration label in a TOF camera three-dimensional coordinate system according to the position information of the at least one calibration label, and transmitting the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system to the controller;
The controller receives the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system, and determines the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system;
and the controller calculates the error coefficient according to the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system and the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system.
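Claim 5 compares measured and actual label coordinates in the controller object frame. The patent does not fix the functional form of the error coefficient; one simple reading, consistent with the additive correction of claim 9, is a constant per-axis offset equal to the mean residual over the calibration labels:

```python
import numpy as np

def error_coefficient(measured, actual):
    """Per-axis error coefficient from calibration labels.

    measured: Nx3 label coordinates in the controller object frame as
    derived from the TOF data; actual: Nx3 ground-truth coordinates.
    The constant-offset (mean residual) model is an assumption, not a
    form stated in the patent.
    """
    return np.mean(np.asarray(actual) - np.asarray(measured), axis=0)

# two labels whose measured x is consistently 0.1 too small
coeff = error_coefficient([[0, 0, 0], [1, 1, 1]],
                          [[0.1, 0, 0], [1.1, 1, 1]])
```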
6. The TOF camera-based 3D scanning method according to claim 5, wherein the method for obtaining actual three-dimensional coordinate values of the at least one calibration label in the controller object coordinate system comprises the steps of:
the at least one calibration label sends the actual position information thereof to the controller;
and the controller receives the actual position information and obtains an actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the received actual position information.
7. The TOF camera based 3D scanning method according to claim 5, wherein the method for obtaining the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system comprises the following steps:
the controller emits a laser beam toward the surface of each of the at least one calibration label;
the surface of the at least one calibration label reflects the laser beam back to the controller;
the controller receives the laser beam reflected from the surface of the at least one calibration label;
the controller calculates the position information of the at least one calibration label in the controller object coordinate system according to the emission time and emission angle of the laser beam and the reception time and reflection angle of the reflected laser beam;
and the controller obtains the actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system according to the calculated position information of the at least one calibration label in the controller object coordinate system.
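The laser-ranging step of claim 7 can be sketched from the round-trip time and beam direction. The range follows from r = c·(t_recv − t_emit)/2; the spherical-to-Cartesian convention (azimuth, elevation) shown here is an assumption, since the patent states only that emission time/angle and reception time/reflection angle are used.

```python
import math

def label_position(t_emit, t_recv, azimuth, elevation, c=299_792_458.0):
    """Position of a calibration label from laser round-trip time and angles.

    Range r = c * (t_recv - t_emit) / 2; direction is taken from the
    emission azimuth/elevation (assumed convention, in radians).
    """
    r = c * (t_recv - t_emit) / 2.0
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)

# a label 1 m away straight along the x-axis: round trip takes 2 m / c
pos = label_position(0.0, 2.0 / 299_792_458.0, azimuth=0.0, elevation=0.0)
```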
8. The TOF camera based 3D scanning method according to any one of claims 1 to 7, wherein said step of deriving a 3D model of the target object from the corrected second point cloud data comprises:
sequentially carrying out 3D modeling on the corrected second point cloud data according to a first-acquired-data-is-valid principle;
the first-acquired-data-is-valid principle is as follows: for the same acquisition point on the surface of the target object, the three-dimensional coordinate value in the controller object coordinate system that is acquired first is treated as valid data, and any three-dimensional coordinate value in the controller object coordinate system acquired later is treated as invalid data.
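Claim 8's first-acquired-is-valid rule is a simple first-wins merge over repeated scans. In this sketch points are keyed by an assumed per-point identifier; the patent does not say how repeat observations of one surface point are matched.

```python
def merge_first_valid(scans):
    """Merge repeated scans: for the same acquisition point, keep the
    coordinate acquired first and discard later ones.

    scans: iterable of {point_id: (x, y, z)} dicts in acquisition order;
    the point_id keying is an illustrative assumption.
    """
    merged = {}
    for scan in scans:
        for point_id, coord in scan.items():
            merged.setdefault(point_id, coord)  # first value wins
    return merged

# point 'a' is re-observed in the second scan; its later value is discarded
merged = merge_first_valid([{"a": (0.0, 0.0, 0.0)},
                            {"a": (9.0, 9.0, 9.0), "b": (1.0, 1.0, 1.0)}])
```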
9. The TOF camera based 3D scanning method according to claim 1, wherein the step of the controller correcting the second point cloud data according to a preset error coefficient comprises:
and adding the error coefficient to the three-dimensional coordinate value of each acquisition point in the second point cloud data in the controller object coordinate system to obtain corrected second point cloud data.
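Claim 9's correction is a single additive step. A minimal sketch, assuming the preset error coefficient is a constant per-axis 3-vector offset (matching the additive wording of the claim):

```python
import numpy as np

def correct(points, error_coeff):
    """Add the preset per-axis error coefficient to every point of the
    second point cloud. The constant 3-vector form is an assumption."""
    return np.asarray(points) + np.asarray(error_coeff)

# shift a single point by the x-axis error coefficient
corrected = correct([[1.0, 2.0, 3.0]], [0.1, 0.0, 0.0])
```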
10. A TOF camera-based 3D scanning system, comprising: a TOF camera and a controller in communication therewith;
the TOF camera includes: the system comprises a data acquisition module and a first communication module;
the data acquisition module is used for determining first point cloud data; the first point cloud data is determined according to three-dimensional coordinate values of all acquisition points on the surface of the target object in a three-dimensional coordinate system of the TOF camera;
the first communication module is used for transmitting the first point cloud data to the controller;
The controller includes: the second communication module and the data processing module;
the second communication module is used for receiving the first point cloud data;
the data processing module is used for determining second point cloud data according to the first point cloud data, wherein the second point cloud data is a three-dimensional coordinate value of the first point cloud data in a controller object coordinate system corresponding to the controller;
and correcting the second point cloud data according to a preset error coefficient, and obtaining a 3D model of the target object according to the corrected second point cloud data.
11. The TOF camera based 3D scanning system of claim 10 further comprising: an RGB camera disposed on the TOF camera, the RGB camera in communication with the controller; the data processing module further comprises: a data fusion unit;
the RGB camera is used for collecting RGB color values corresponding to all collecting points on the surface of the target object respectively and transmitting the collected RGB color values to the controller;
the second communication module is further configured to receive the RGB color values;
the data fusion unit is used for receiving the RGB color values respectively corresponding to the acquisition points, determining the three-dimensional coordinate values of the acquisition points in the controller object coordinate system according to their three-dimensional coordinate values in the TOF camera three-dimensional coordinate system, and filling the RGB color values respectively corresponding to the acquisition points into the 3D model according to the three-dimensional coordinate values of the acquisition points in the controller object coordinate system.
12. The TOF camera based 3D scanning system according to claim 10 or 11, further comprising: a calibration object;
arranging at least one calibration label on the surface of the calibration object or the surface of the target object;
the data processing module is further configured to receive a three-dimensional coordinate value of at least one calibration label in a TOF camera three-dimensional coordinate system transmitted by the TOF camera, determine a three-dimensional coordinate value of the at least one calibration label in a controller object coordinate system according to the three-dimensional coordinate value of the at least one calibration label in the TOF camera three-dimensional coordinate system, and calculate the error coefficient according to the three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system and an actual three-dimensional coordinate value of the at least one calibration label in the controller object coordinate system.
CN201910441073.7A 2019-05-24 2019-05-24 3D scanning method and system based on TOF camera Active CN111982071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910441073.7A CN111982071B (en) 2019-05-24 2019-05-24 3D scanning method and system based on TOF camera

Publications (2)

Publication Number Publication Date
CN111982071A true CN111982071A (en) 2020-11-24
CN111982071B CN111982071B (en) 2022-09-27

Family

ID=73436681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910441073.7A Active CN111982071B (en) 2019-05-24 2019-05-24 3D scanning method and system based on TOF camera

Country Status (1)

Country Link
CN (1) CN111982071B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284293A (en) * 2020-12-24 2021-01-29 中国人民解放军国防科技大学 Method for measuring space non-cooperative target fine three-dimensional morphology

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101074869A (en) * 2007-04-27 2007-11-21 东南大学 Method for measuring three-dimensional contour based on phase method
US20090122295A1 (en) * 2006-03-07 2009-05-14 Eaton Robert B Increasing measurement rate in time of flight measurement apparatuses
CN102519434A (en) * 2011-12-08 2012-06-27 北京控制工程研究所 Test verification method for measuring precision of stereoscopic vision three-dimensional recovery data
CN103234496A (en) * 2013-03-28 2013-08-07 中国计量学院 High-precision correction method for two-dimensional platform error of three-dimensional coordinate measuring machine
CN103453889A (en) * 2013-09-17 2013-12-18 深圳市创科自动化控制技术有限公司 Calibrating and aligning method of CCD (Charge-coupled Device) camera
KR101394425B1 (en) * 2012-11-23 2014-05-13 현대엠엔소프트 주식회사 Apparatus and method for map data maintenance
US20150134303A1 (en) * 2013-11-12 2015-05-14 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Three-dimensional scanning system and method with hole-filling function for point cloud using contact probe
CN105973145A (en) * 2016-05-19 2016-09-28 深圳市速腾聚创科技有限公司 Movable type three dimensional laser scanning system and movable type three dimensional laser scanning method
CN108337915A (en) * 2017-12-29 2018-07-27 深圳前海达闼云端智能科技有限公司 Three-dimensional builds drawing method, device, system, high in the clouds platform, electronic equipment and computer program product
CN108645339A (en) * 2018-05-14 2018-10-12 国能生物发电集团有限公司 A kind of acquisition of bio-power plant material buttress point cloud data and calculation method of physical volume
CN109085603A (en) * 2017-06-14 2018-12-25 浙江舜宇智能光学技术有限公司 Optical 3-dimensional imaging system and color three dimensional image imaging method
CN109682372A (en) * 2018-12-17 2019-04-26 重庆邮电大学 A kind of modified PDR method of combination fabric structure information and RFID calibration

Also Published As

Publication number Publication date
CN111982071B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
US7800736B2 (en) System and method for improving lidar data fidelity using pixel-aligned lidar/electro-optic data
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN104335005B (en) 3D is scanned and alignment system
CN108171733A (en) Scanner vis
CN107917701A (en) Measuring method and RGBD camera systems based on active binocular stereo vision
JP2003130621A (en) Method and system for measuring three-dimensional shape
CN110390719A (en) Based on flight time point cloud reconstructing apparatus
CN108332660B (en) Robot three-dimensional scanning system and scanning method
CN111238368A (en) Three-dimensional scanning method and device
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
Kitajima et al. Simultaneous projection and positioning of laser projector pixels
CN114858086A (en) Three-dimensional scanning system, method and device
KR100332995B1 (en) Non-contact type 3D scarmer 3 dimensional
CA2757313A1 (en) Stereoscopic measurement system and method
CN111982071B (en) 3D scanning method and system based on TOF camera
CN111654626B (en) High-resolution camera containing depth information
CN112257536B (en) Space and object three-dimensional information acquisition and matching equipment and method
CN104034729A (en) Five-dimensional imaging system for circuit board separation and imaging method thereof
CN112082486A (en) Handheld intelligent 3D information acquisition equipment
CN112257535B (en) Three-dimensional matching equipment and method for avoiding object
CN112672134B (en) Three-dimensional information acquisition control equipment and method based on mobile terminal
CN114693807A (en) Method and system for reconstructing mapping data of power transmission line image and point cloud
CN114155349A (en) Three-dimensional mapping method, three-dimensional mapping device and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

GR01 Patent grant