WO2015024361A1 - Three-dimensional reconstruction method and device, and mobile terminal - Google Patents

Three-dimensional reconstruction method and device, and mobile terminal

Info

Publication number
WO2015024361A1
WO2015024361A1 (PCT/CN2014/070135; CN2014070135W)
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
dimensional
camera
camera coordinate
coordinates
Prior art date
Application number
PCT/CN2014/070135
Other languages
French (fr)
Chinese (zh)
Inventor
刘兆祥
廉士国
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2015024361A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion

Definitions

  • the present invention relates to the field of three-dimensional information technology, and in particular, to a three-dimensional reconstruction method and apparatus, and a mobile terminal.
  • Three-dimensional scanning and reconstruction is a high technology integrating optical, mechanical, electronic, and computer techniques. It is mainly used to scan the external structure and color of an object to obtain the spatial coordinates of the object. Its importance lies in converting the stereoscopic information of an object into digital signals that a computer can process directly, which provides a convenient and quick means for digitizing physical objects. Three-dimensional scanning and reconstruction technology is widely used in many fields, for example in industry for reverse-engineering calculations, in medicine for surface-shape inspection, and in production for product quality control.
  • the following two devices are generally used to realize three-dimensional scanning and reconstruction of an object.
  • One is a hand-held three-dimensional scanning device consisting of a line laser projector, a camera, and an external auxiliary positioning device, which performs laser tracking with the external auxiliary positioning device or uses indoor wireless positioning to realize three-dimensional scanning and reconstruction.
  • The main disadvantages of this device are that it is bulky and therefore poorly portable, that it is easily limited by the available space, and that its three-dimensional reconstruction result does not include color information, so color information is largely missing.
  • The second is a mobile phone with a rear camera and a micro-projector that uses multiple projected structured-light patterns to achieve three-dimensional scanning and reconstruction.
  • The main disadvantages of this device are that it is relatively expensive and can only measure one surface; because the acquired image information is not correlated, omnidirectional three-dimensional reconstruction of the object is not achieved, and the color information of the object is not obtained.
  • The technical problem to be solved by the present invention is how to provide a three-dimensional reconstruction method and apparatus that can perform fast, omnidirectional three-dimensional scanning and reconstruction of an object.
  • In a first aspect, the present invention provides a three-dimensional reconstruction method, comprising: projecting a linear laser onto an object; continuously collecting image information of the object illuminated by the linear laser from at least two angles, and continuously collecting motion information of a camera; obtaining, according to the image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time; obtaining, according to the motion information, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time; and performing three-dimensional reconstruction on the object according to the three-dimensional coordinates and the positional relationship.
  • Before obtaining, according to the image information, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time, the method further includes: calibrating the internal parameters and the external parameters of the camera; obtaining, according to the image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time then includes: converting, according to the image information collected at each acquisition time and the internal and external parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.
  • The motion information includes acceleration, angular velocity, and heading, and obtaining, according to the motion information, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time includes: according to the acceleration, the angular velocity, and the heading, using a dead reckoning algorithm to calculate the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.
  • Performing three-dimensional reconstruction on the object according to the three-dimensional coordinates and the positional relationship comprises: converting, according to the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.
  • After the three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship, the method further includes: calculating the relative relationship between the camera coordinate systems at different acquisition times, and establishing, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.
  • Establishing, according to the relative relationship, the mapping relationship between the image information of the object at different acquisition times includes: mapping the three-dimensional coordinates, in the camera coordinate system, of a point on the scan line of the object at the (i-1)-th acquisition time into the camera coordinate system at the i-th acquisition time, calculating the image coordinates of the point at the i-th acquisition time, and obtaining the pixel value of the point according to the image coordinates of the point at the i-th acquisition time, where i is any integer greater than 1.
  • the method further includes: combining the result of the three-dimensional reconstruction with the result of establishing the mapping relationship.
  • In a second aspect, the present invention provides a three-dimensional reconstruction apparatus, comprising: a line laser projector for projecting a linear laser onto an object; a camera for continuously collecting, from different angles, image information of the object illuminated by the linear laser; a sensor for continuously acquiring motion information of the camera; and a processor connected to the camera, the line laser projector, and the sensor, the processor comprising: an image information processing module, configured to obtain, according to the image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time; a motion information processing module, configured to obtain, according to the motion information, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time; and a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction on the object according to the three-dimensional coordinates and the positional relationship.
  • The processor further includes a calibration module, configured to calibrate the internal parameters and the external parameters of the camera; the image information processing module is then specifically configured to calculate, according to the image information collected at each acquisition time and the internal and external parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.
  • the motion information includes an acceleration, an angular velocity, and a heading
  • The motion information processing module is specifically configured to calculate, by using a dead reckoning algorithm according to the acceleration, the angular velocity, and the heading, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.
  • The three-dimensional reconstruction module is specifically configured to convert, according to the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.
  • The processor further includes a color map restoration module, configured to calculate the relative relationship between the camera coordinate systems at different acquisition times, and establish, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.
  • The color map restoration module is specifically configured to map the three-dimensional coordinates of a point on the scan line of the object at the (i-1)-th acquisition time into the camera coordinate system at the i-th acquisition time, calculate the image coordinates of the point at the i-th acquisition time, and obtain the pixel value of the point according to the image coordinates of the point at the i-th acquisition time, where i is any integer greater than 1.
  • the processor further includes a fusion module, configured to fuse the result of the three-dimensional reconstruction with the result of establishing a mapping relationship.
  • the present invention provides a mobile terminal comprising the above-described three-dimensional reconstruction apparatus.
  • The three-dimensional reconstruction method and device and the mobile terminal of the embodiments of the present invention can continuously collect image information of an object from different angles and continuously collect motion information of the camera, and can realize fast and omnidirectional three-dimensional scanning and reconstruction of the object according to the collected information.
  • FIG. 1 is a flow chart of a three-dimensional reconstruction method provided by an embodiment of the present invention
  • FIG. 2 is a flow chart of a three-dimensional reconstruction method according to another embodiment of the present invention
  • FIG. 3 is a flow chart of a three-dimensional reconstruction method provided by still another embodiment of the present invention;
  • FIG. 4 is a schematic diagram showing a color map restoration method provided by still another embodiment of the present invention
  • FIG. 5 is a block diagram showing the structure of a three-dimensional reconstruction apparatus according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram showing the working principle of the line laser of the line laser projector 44;
  • FIG. 7 is a schematic diagram of the three-dimensional scanning principle of a mobile terminal according to another embodiment of the present invention;
  • FIG. 8 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention
  • FIG. 9 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention.
  • FIG. 10 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention.
  • When performing three-dimensional scanning, the line laser projector first projects a linear laser onto the object, and the projected linear laser forms a laser projection plane.
  • When the laser projection plane intersects the surface of the object, a bright scanning line, that is, a light strip, is formed on the surface of the object. Since the light strip contains all the surface points where the laser projection plane intersects the object, the three-dimensional coordinates of the corresponding surface points of the object can be obtained from the coordinates of the light strip.
  • the three-dimensional coordinates are mapped onto the laser projection plane, and a two-dimensional image of the light strip is obtained.
  • A point on the two-dimensional image of the light strip is denoted (u, v), and the three-dimensional coordinates of the corresponding surface point can be calculated from the point coordinates (u, v) of the two-dimensional image. The process of calculating the three-dimensional coordinates from (u, v) is shown in Equation 1.
  • the digital image captured by the camera can be stored as an array in the computer.
  • The value of each element (pixel) in the array is the brightness (gray level) of the corresponding image point. A Cartesian coordinate system u–v is defined on the image as the image coordinate system; the coordinates (u, v) of a pixel are the column number and the row number of that pixel in the array, so (u, v) are the image coordinates in units of pixels. Since the image coordinate system only indicates the column and row of a pixel in the digital image and does not express the physical position of the pixel in physical units, it is also necessary to establish an image coordinate system expressed in physical units (for example, centimeters).
  • The coordinates of the image coordinate system measured in physical units are represented by (x, y). The origin O is defined at the intersection of the camera's optical axis and the image plane and is called the principal point of the image; the principal point is generally located at the center of the image, and the x and y axes are parallel to the u and v axes, respectively. If the coordinates of O in the u–v coordinate system are (u0, v0), and the physical size of each pixel in the x-axis and y-axis directions is dx and dy, respectively, then the relationship between the coordinates of any pixel in the two coordinate systems is given by Equation 2: u = x/dx + u0, v = y/dy + v0.
  • The coordinates of the camera coordinate system are represented by (Xc, Yc, Zc). The origin of this coordinate system is the optical center of the camera. The Xc axis and the Yc axis are parallel to the x and y axes of the image coordinate system, respectively. The Zc axis is the optical axis of the camera and is perpendicular to the image plane; the intersection of the optical axis and the image plane is the origin of the image coordinate system. The Cartesian coordinate system consisting of the camera optical center and the Xc, Yc, and Zc axes is called the camera coordinate system. The distance from the optical center to the principal point along the optical axis is the camera focal length f.
  • The relationship between the camera coordinate system and the image coordinate system can be expressed by Equations 3 and 4: x = f·Xc/Zc (Equation 3) and y = f·Yc/Zc (Equation 4). Equations 3 and 4 can be written in homogeneous coordinates and matrix form as Equation 5: Zc·[x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]]·[Xc, Yc, Zc, 1]^T.
  • FIG. 1 is a flow chart showing a three-dimensional reconstruction method provided by an embodiment of the present invention. As shown in FIG. 1, the three-dimensional reconstruction method mainly includes:
  • Step S100: Project a linear laser onto the object;
  • Step S110: Continuously collect image information of the object illuminated by the linear laser from at least two angles, and continuously collect motion information of the camera;
  • Step S120: Obtain, according to the collected image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time;
  • Step S130: Obtain, according to the collected motion information, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time;
  • Step S140: Perform three-dimensional reconstruction on the object according to the three-dimensional coordinates and the positional relationship.
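The flow of steps S110–S140 can be summarized in the following sketch; it assumes that the per-frame light-strip extraction (step S120) and pose estimation (step S130) are supplied as callables, since those steps are only detailed in the later embodiments.

```python
import numpy as np

def scan_and_reconstruct(acquisitions, strip_to_camera_xyz, motion_to_pose):
    """Sketch of steps S110-S140 (step S100, projecting the laser, is hardware).

    acquisitions:        iterable of (image, motion_sample) pairs, one per
                         acquisition time (step S110).
    strip_to_camera_xyz: callable for step S120 - returns an (N, 3) array of
                         light-strip points in that frame's camera coordinates.
    motion_to_pose:      callable for step S130 - returns (R, t), the camera
                         pose relative to the global coordinate system.
    """
    cloud = []
    for image, motion in acquisitions:
        pts_cam = strip_to_camera_xyz(image)   # step S120
        R, t = motion_to_pose(motion)          # step S130
        cloud.append(pts_cam @ R.T + t)        # step S140: camera -> global
    return np.vstack(cloud)
```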
  • The camera may be built into a three-dimensional reconstruction device, which may be a mobile terminal such as a smartphone, on which a line laser projector can be installed; the line laser projector can be connected to an external interface of the mobile terminal, such as an audio interface.
  • The line laser projector controls a linear light source to emit a linear laser toward the object; a linear laser is a laser that forms a bright line on the object when it is projected onto the object through a laser projection plane. Therefore, when a linear laser is emitted toward an object, a laser projection plane is formed.
  • the laser projection plane intersects the object to form a bright scan line, that is, a light strip.
  • The camera built into the three-dimensional reconstruction device continuously acquires image information of the scan line reflected from the object over a predetermined period of time, for example 0.5 seconds, and within the same period the sensor built into the three-dimensional reconstruction device continuously acquires the motion information of the camera. The motion information may mainly include acceleration, angular velocity, heading, and the like, and the three-dimensional reconstruction device may use the motion information to determine the position and attitude of the camera in space.
  • The three-dimensional reconstruction device performs corresponding processing on the image information and the motion information collected at each acquisition time; specifically, according to the image information collected at each acquisition time, it obtains the three-dimensional coordinates of the object, at each acquisition time, in the camera coordinate system corresponding to that acquisition time.
  • For example, according to the image information collected at acquisition time T1, the three-dimensional coordinates of the object in the camera coordinate system at time T1 can be obtained; according to the image information collected at acquisition time T2, the three-dimensional coordinates of the object in the camera coordinate system at time T2 can be obtained; and so on, so that the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to that acquisition time can be obtained.
  • According to the motion information collected at each acquisition time, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at that acquisition time is obtained. For example, according to the motion information collected at acquisition time T1, the positional relationship of the camera coordinate system at time T1 relative to the global three-dimensional coordinate system can be obtained; according to the motion information collected at acquisition time T2, the positional relationship of the camera coordinate system at time T2 relative to the global three-dimensional coordinate system can be obtained; and so on for each acquisition time.
  • the camera coordinate system when starting three-dimensional scanning and reconstruction may be set as a global three-dimensional coordinate system.
  • According to the three-dimensional coordinates and the positional relationships, the global three-dimensional coordinates of the object are obtained, thereby achieving three-dimensional reconstruction of the object.
  • The three-dimensional reconstruction device can be moved around the object while continuously acquiring image information of the object, so that continuous and omnidirectional image information of the object is collected.
  • the acquisition of image information and the acquisition of motion information can be performed simultaneously.
  • The three-dimensional reconstruction device can continuously collect image information of the object from different angles and continuously collect motion information of the camera, and can realize fast and omnidirectional three-dimensional scanning and reconstruction of the object according to the collected information.
  • FIG. 2 is a flow chart showing a three-dimensional reconstruction method provided by another embodiment of the present invention.
  • The main difference between this embodiment and the previous embodiment is that, before obtaining, according to the collected image information, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time, the method further includes:
  • Step S200 Calibrate the internal parameters and the external parameters of the camera.
  • obtaining the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time at each acquisition time may specifically include:
  • Step S210 Calculate, according to the image information collected at each acquisition time and the internal parameters and the external parameters, the three-dimensional coordinates of the object under the camera coordinate system corresponding to each acquisition time.
  • the internal parameter reflects the intrinsic property of the camera itself
  • the external parameter represents the positional relationship between the module coordinate system and the camera coordinate system.
  • One coordinate plane of the module coordinate system coincides with the laser projection plane, and the remaining axis of the module coordinate system is orthogonal to the laser projection plane.
  • The external parameters can usually be represented by a rotation matrix R and a translation vector T, where R represents the rotation of the module coordinate system relative to the camera coordinate system. R is a 3×3 orthogonal unit matrix, written as Equation 6, R = [[r1, r2, r3], [r4, r5, r6], [r7, r8, r9]], where (r1, r4, r7), (r2, r5, r8), and (r3, r6, r9) are the unit vectors of the three coordinate axes of the module coordinate system expressed in the camera coordinate system.
  • T is a three-dimensional translation vector between the module coordinate system and the camera coordinate system, written as Equation 7, T = (tx, ty, tz)^T, where tx, ty, and tz are the coordinates of the origin of the module coordinate system in the camera coordinate system.
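As an illustration of how R (Equation 6) and T (Equation 7) are applied, the following sketch converts a point from the module coordinate system into the camera coordinate system; the numeric values of R and T are assumptions for the example, not calibration results from the patent.

```python
import numpy as np

# Assumed example extrinsics describing the module (laser-plane) coordinate
# system with respect to the camera coordinate system; illustrative only.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])   # 3x3 orthogonal rotation matrix (Equation 6)
T = np.array([0.03, 0.0, 0.0])    # translation of the module origin (Equation 7)

def module_to_camera(p_module, R, T):
    """Convert a point from module coordinates to camera coordinates."""
    return R @ p_module + T

print(module_to_camera(np.array([0.0, 0.1, 0.0]), R, T))
```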
  • In step S210, while the three-dimensional reconstruction device continuously scans the object, the collected image information is first corrected according to the internal parameters, and the center of the light strip is computed row by row to obtain an array of light-strip center coordinates, that is, a group of image coordinates of light-strip centers. According to the image coordinates of each light-strip center and the calibrated internal and external parameters, the coordinates mapped into the module coordinate system are calculated and then converted into the three-dimensional coordinates in the camera coordinate system corresponding to each acquisition time. For example, from the image coordinates of the group of light-strip centers collected at acquisition time T1, the coordinates obtained in the module coordinate system can be converted into the coordinates (xc, yc, zc) of the camera coordinate system at time T1, so the three-dimensional coordinates of the entire scan line of the object in the camera coordinate system can be calculated. By continuously scanning the object in this way, the three-dimensional coordinates of the entire object in the camera coordinate system at each acquisition time can be calculated.
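A minimal sketch of this step follows: the light-strip center is found row by row as an intensity-weighted centroid, and each center pixel's viewing ray is intersected with the laser plane expressed in the camera frame, which is an equivalent formulation of mapping through the module coordinate system. The brightness threshold and the representation of the laser plane as a normal vector and offset are assumptions of this sketch, not details given in the patent.

```python
import numpy as np

def strip_centers(gray):
    """Row-by-row intensity-weighted centroid of the laser strip.
    gray: 2D array (one corrected image); the strip is assumed to be the
    brightest feature in each row."""
    rows, cols = gray.shape
    u = np.arange(cols, dtype=float)
    centers = []
    for v in range(rows):
        w = gray[v].astype(float)
        if w.max() > 50:                        # assumed brightness threshold
            centers.append((w @ u / w.sum(), float(v)))
    return np.array(centers)                    # (N, 2) array of (u, v)

def strip_to_camera_xyz(centers, K, plane_n, plane_d):
    """Intersect each pixel's viewing ray with the laser plane.
    K: 3x3 intrinsic matrix; plane_n, plane_d: laser plane n.X + d = 0,
    expressed in the camera coordinate system (from the extrinsic calibration)."""
    K_inv = np.linalg.inv(K)
    uv1 = np.hstack([centers, np.ones((len(centers), 1))])
    rays = uv1 @ K_inv.T                        # viewing-ray directions
    depth = -plane_d / (rays @ plane_n)         # scale so each ray hits the plane
    return rays * depth[:, None]                # (N, 3) points in the camera frame
```

Under its assumptions, this function could serve as the strip_to_camera_xyz callable in the pipeline sketch shown after the flow chart above.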
  • the motion information includes an acceleration, an angular velocity, and a heading.
  • the positional relationship between the camera coordinate system and the global three-dimensional coordinate system at each acquisition time may be obtained according to the collected motion information.
  • Step S220 According to the acceleration, the angular velocity, and the heading, the positional relationship of the camera coordinate system with respect to the global three-dimensional coordinate system at each acquisition time is calculated by using a dead reckoning algorithm.
  • the motion information of the camera at each acquisition time can be obtained by the sensor.
  • the sensor can mainly include a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis electronic compass.
  • the three-axis accelerometer is mainly used to measure the three-axis acceleration of the camera.
  • the three-axis gyroscope is mainly used to measure the three-axis angular velocity of the camera.
  • the three-axis electronic compass is mainly used to measure the three-axis heading of the camera.
  • At each acquisition time, the triaxial acceleration sensor measures the three-axis acceleration of the camera, the three-axis gyroscope measures the three-axis angular velocity of the camera, and the three-axis electronic compass measures the three-axis heading of the camera; a dead reckoning algorithm can then derive the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at the k-th acquisition time. For example, from the measured three-axis acceleration, the camera velocity at the k-th acquisition time is propagated as
    v_x(k) = v_x(k-1) + Δt·a_x(k-1)
    v_y(k) = v_y(k-1) + Δt·a_y(k-1)
    and similarly for the z axis, where Δt is the time between two adjacent acquisition times. Equation 14 then gives the relationship between the coordinates of the object in the camera coordinate system at the (k-1)-th acquisition time and those at the k-th acquisition time.
  • the relative positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at the current measurement time can be obtained.
  • the positional relationship of the camera coordinate system corresponding to the global three-dimensional coordinate system corresponding to each acquisition time can also be calculated by using the dead reckoning algorithm. In this way, the relative positional relationship of the camera coordinate system at different acquisition times can be obtained, thereby obtaining a continuous sequence of camera coordinate systems.
  • the three-axis rotation angle can be calculated from the three-axis angular velocity, or the three-axis rotation angle can be derived from the three-axis heading.
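A simplified dead-reckoning sketch in the spirit of the velocity update above is given below; it assumes the accelerations are already expressed in the global frame with gravity removed and that the sampling period Δt is constant, which a real implementation would have to ensure using the gyroscope and electronic-compass readings.

```python
import numpy as np

def dead_reckon(accels, rotations, dt, v0=np.zeros(3), p0=np.zeros(3)):
    """Minimal dead-reckoning sketch.

    accels:    (K, 3) camera accelerations in the global frame, gravity removed
               (an assumption of this sketch).
    rotations: list of K 3x3 rotation matrices of the camera in the global
               frame, e.g. integrated from the gyroscope or electronic compass.
    dt:        time between acquisition times.
    Returns a list of (R_k, t_k) camera poses relative to the global frame.
    """
    poses, v, p = [], v0.astype(float), p0.astype(float)
    for a, R in zip(accels, rotations):
        poses.append((R, p.copy()))
        v = v + dt * np.asarray(a)   # v(k) = v(k-1) + dt * a(k-1)
        p = p + dt * v               # position propagated from the velocity
    return poses
```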
  • the foregoing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship includes:
  • Step S230 Convert the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into the three-dimensional coordinates in the global three-dimensional coordinate system according to the positional relationship of the camera coordinate system with respect to the global three-dimensional coordinate system at each acquisition time.
  • In step S210, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to that acquisition time are obtained, and in step S220 the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time is obtained; the three-dimensional coordinates of the object at each acquisition time can then be converted into the global three-dimensional coordinate system, thereby performing three-dimensional reconstruction of the object.
  • The three-dimensional reconstruction device can continuously collect image information of the object from different angles to obtain the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time, and continuously collect motion information of the camera to obtain the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time, thereby realizing fast and omnidirectional three-dimensional scanning and reconstruction of the object.
  • FIG. 3 is a flow chart showing a three-dimensional reconstruction method provided by still another embodiment of the present invention. Steps in FIG. 3 that are the same as in FIG. 1 and FIG. 2 have the same meaning. As shown in FIG. 3, the main difference between this embodiment and the previous embodiment is that, after the object is three-dimensionally reconstructed according to the three-dimensional coordinates and the positional relationship, the method further includes: Step S240: Calculate the relative relationship between the camera coordinate systems at different acquisition times, and establish, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.
  • A continuous sequence of camera coordinate systems can be obtained, that is, the relative positional relationship between the camera coordinate systems at any two acquisition times can be obtained. For example, for two acquisition times T0 and T1, the relative positional relationship between the camera coordinate systems at the two acquisition times can be represented by a rotation matrix R_G and a translation vector T_G, where R_G is a 3×3 rotation matrix and T_G is a three-dimensional translation vector.
  • In order to make each measured point show its true color, the three-dimensional coordinates of a measured point mapped into the camera coordinate system at acquisition time T1 are denoted (xc1, yc1, zc1); the mapping of the measured point between the camera coordinate systems at the two acquisition times is shown in Equation 15: (xc1, yc1, zc1)^T = R_G·(xc0, yc0, zc0)^T + T_G, where (xc0, yc0, zc0) are the coordinates of the point in the camera coordinate system at acquisition time T0. From this, the coordinates of the measured point in the camera coordinate system at time T1 are obtained; (xc1, yc1, zc1) can be substituted into Equation 3 and Equation 4 to calculate the physical image coordinates (x, y) at time T1, and finally, according to Equation 2, the pixel coordinates and the pixel value at those coordinates are obtained, thereby obtaining the true color information of the measured point, that is, realizing the color restoration of the measured point. Traversing the measured points on the scan line realizes the color restoration of the entire scan line.
  • In the same way, the coordinates of a measured point from acquisition time T1 can be obtained in the camera coordinate system at acquisition time T2, giving the image coordinates of the measured point at time T2; the pixel value at those image coordinates can then be obtained according to Equation 2, thereby obtaining the true color information of the measured point, that is, realizing the color restoration of the measured point, and traversing the points on the scan line realizes the color restoration of the entire scan line. By analogy, the above method can be used to achieve color restoration of the entire object.
  • The two acquisition times selected in step S240 may be adjacent acquisition times, thereby improving the color restoration accuracy of the measured point.
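A hedged sketch of this color restoration step: points on the scan line measured at one acquisition time are mapped into the camera frame at the next acquisition time (Equation 15), projected to pixel coordinates (Equations 2–4, combined here into a single intrinsic matrix K), and the pixel values are read from the image captured at that next time. The out-of-image handling is an assumption of this sketch.

```python
import numpy as np

def restore_color(pts_cam_prev, R_rel, T_rel, K, image_i):
    """Look up the true color of scan-line points measured at time i-1.

    pts_cam_prev: (N, 3) points in the camera frame at time i-1.
    R_rel, T_rel: relative pose mapping camera frame i-1 into frame i (Equation 15).
    K:            3x3 intrinsic matrix (Equations 2-4 combined).
    image_i:      H x W x 3 color image captured at time i.
    Returns the per-point pixel values (None if a point falls outside the image).
    """
    pts_i = pts_cam_prev @ R_rel.T + T_rel        # Equation 15
    uv = pts_i @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                   # project to pixel coordinates
    colors = []
    h, w = image_i.shape[:2]
    for u, v in uv:
        col, row = int(round(u)), int(round(v))
        colors.append(image_i[row, col] if 0 <= row < h and 0 <= col < w else None)
    return colors
```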
  • the method further includes:
  • Step S250: Fuse the result of the three-dimensional reconstruction with the result of establishing the mapping relationship. Specifically, the result of the three-dimensional reconstruction, that is, the position of each measured point on the object, is combined with the result of establishing the mapping relationship, that is, the restored color of each measured point, thereby realizing three-dimensional reconstruction and color restoration of the object.
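A minimal sketch of the fusion in step S250, assuming the global three-dimensional coordinates and the restored colors are aligned per measured point:

```python
def fuse(global_xyz, colors):
    """Attach each measured point's restored color to its global 3D position,
    keeping only points whose color lookup succeeded."""
    return [(tuple(p), tuple(c)) for p, c in zip(global_xyz, colors) if c is not None]
```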
  • the three-dimensional reconstruction method provided in this embodiment can realize the restoration of the color texture of the object by establishing the mapping relationship of the image information of the adjacent acquisition time.
  • FIG. 5 is a structural block diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present invention.
  • the three-dimensional reconstruction apparatus 100 mainly includes a camera 41, a sensor 42, a processor 43, and a line laser projector 44.
  • the above three-dimensional reconstruction device can be a mobile terminal.
  • the line laser projector 44 is mainly used to project a linear laser to the object; the camera 41 is mainly used for continuously collecting image information of the object illuminated by the linear laser from at least two angles; the sensor 42 is mainly used for continuously acquiring the motion information of the camera 41;
  • the processor 43 is connected to the camera 41 and the sensor 42 and the line laser projector 44, and may include the following modules: an image information processing module 431, a motion information processing module 432, and a three-dimensional reconstruction module 433.
  • the image information processing module 431 is mainly configured to obtain, according to the image information collected by the camera 41, three-dimensional coordinates of the object at each acquisition time in a camera coordinate system corresponding to each acquisition time;
  • The motion information processing module 432 is mainly configured to obtain, according to the motion information, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time; the three-dimensional reconstruction module 433 is mainly configured to perform three-dimensional reconstruction on the object according to the three-dimensional coordinates and the positional relationship.
  • The camera 41, the sensor 42, and the processor 43 can all be built into the three-dimensional reconstruction device 100, and the three-dimensional reconstruction device 100 can be a mobile terminal, such as a smartphone, on which the line laser projector 44 is mounted.
  • the line laser projector 44 can be connected to an external interface of the mobile terminal such as an audio interface.
  • A three-dimensional scanning control switch can also be provided to control the three-dimensional reconstruction device to perform three-dimensional scanning. When three-dimensional scanning and reconstruction of the object is required, the three-dimensional scanning is started by the three-dimensional scanning control switch, and the line laser projector 44 controls the linear light source to emit a linear laser toward the object, forming a laser projection plane.
  • Fig. 6 is a schematic view showing the working principle of the line laser of the line laser projector 44.
  • the line laser projector 44 mounted on the smartphone is taken as an example to describe the working principle of the line laser.
  • the line laser projector 44 is mounted on the smart phone and connected to an external interface of the smart phone, such as an audio interface.
  • When the three-dimensional scanning control switch is used to start the three-dimensional scanning, the left channel or the right channel of the smartphone generates a square wave of a certain frequency; the micro-transformer 441 and the rectifier 442 built into the line laser projector 44 use this signal to supply power to the laser diode 443, and the cylindrical mirror 444 then converts the laser light emitted by the laser diode 443 into a linear laser. The laser projection plane intersects the object to form a bright scan line, that is, the light strip, and the three-dimensional scan can then begin.
  • The scan then proceeds as follows: the camera 41 collects image information of the object reflecting the laser, and the sensor 42 collects motion information of the camera.
  • The three-dimensional reconstruction device can be moved around the object, so that the camera 41 continuously acquires image information of the object and continuous, omnidirectional image information of the object is collected.
  • The image information acquisition by the camera 41 and the motion information acquisition by the sensor 42 can be performed simultaneously.
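As an illustration only (not the patent's actual drive circuit or firmware), the following sketch generates a fixed-frequency square wave and writes it to a WAV file; playing such a file on one audio channel is one way a phone could produce the square-wave signal described above. The frequency, duration, sample rate, and file name are assumptions.

```python
import wave
import numpy as np

# Assumed parameters: 1 kHz square wave, 2 seconds long, 44.1 kHz sample rate.
rate, freq, seconds = 44100, 1000, 2
t = np.arange(int(rate * seconds)) / rate
square = np.where((t * freq) % 1.0 < 0.5, 32767, -32767).astype(np.int16)

with wave.open("laser_drive.wav", "wb") as f:
    f.setnchannels(1)      # single (e.g. left) channel
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(rate)
    f.writeframes(square.tobytes())
```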
  • FIG. 7 is a schematic diagram of the principle of three-dimensional scanning and reconstruction of a mobile terminal.
  • the mobile terminal can mainly include a camera, a line laser projector, an audio interface, and a sensor.
  • the above line laser projector can be externally connected to the audio interface.
  • With the mobile terminal, omnidirectional, continuous, and fast collection of the image information of the object and the motion information of the camera can be realized, so that the coordinates of the object in the global three-dimensional coordinate system can be calculated, finally realizing three-dimensional scanning and reconstruction of the object.
  • The three-dimensional reconstruction device of the embodiments of the present invention can continuously collect image information of an object from different angles and continuously acquire motion information of the camera, and can perform fast and omnidirectional three-dimensional scanning and reconstruction based on the collected information.
  • FIG. 8 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention. Components in FIG. 8 that are the same as in FIG. 5 have the same functions. As shown in FIG. 8, the main difference between this embodiment and the previous embodiment is that the processor 43 of the three-dimensional reconstruction apparatus 200 of this embodiment may further include a calibration module 434, which is mainly used to calibrate the internal parameters and the external parameters of the camera 41.
  • The image information processing module 431 can be specifically configured to calculate the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time according to the image information collected by the camera 41 at each acquisition time and the internal parameters and the external parameters.
  • the image information processing module 431 first needs to calibrate the internal parameters and the external parameters of the camera 41 by using the calibration module 434 before determining the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.
  • the definitions of the internal parameters and the external parameters can be referred to the related description of the above embodiments.
  • The image information processing module 431 can correct the collected image information using the internal parameters, identify and calculate the image coordinates of the light-strip center, and, using these image coordinates together with the internal and external parameters calibrated by the calibration module 434, calculate the coordinates of the object in the laser projection plane and the three-dimensional coordinates of the object in the camera coordinate system at the corresponding acquisition time.
  • the motion information includes acceleration, angular velocity, and heading.
  • The motion information processing module 432 may be specifically configured to calculate, by using a dead reckoning algorithm according to the acceleration, the angular velocity, and the heading, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time.
  • the three-dimensional reconstruction device obtains the motion information of the camera 41 at each acquisition time by the sensor 42.
  • The sensor 42 may mainly include a three-axis acceleration sensor 421, a three-axis gyroscope 422, and a three-axis electronic compass 423.
  • the three-axis acceleration sensor 421 is mainly used to measure the acceleration of the camera 41
  • the three-axis gyroscope 422 is mainly used to measure the angular velocity of the camera 41
  • the three-axis electronic compass 423 is used to measure the heading of the camera 41.
  • The positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time is calculated by using the dead reckoning algorithm, so that the relative positional relationship of the camera coordinate systems at different acquisition times can also be obtained.
  • The three-dimensional reconstruction module 433 may be specifically configured to convert, according to the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system, so as to perform three-dimensional reconstruction of the object.
  • The three-dimensional reconstruction device of this embodiment of the invention can continuously collect image information of the object from different angles to obtain the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time, and continuously collect motion information of the camera at each acquisition time to obtain the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system, thereby realizing fast and omnidirectional three-dimensional scanning and reconstruction of the object.
  • FIG. 9 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention. Components in FIG. 9 that are the same as in FIG. 8 and FIG. 5 have the same functions.
  • The main difference between this embodiment and the previous embodiment is that the processor 43 of the three-dimensional reconstruction apparatus 300 of this embodiment further includes a color map restoration module 435, which is mainly configured to calculate the relative relationship between the camera coordinate systems at different acquisition times, and establish, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.
  • A continuous sequence of camera coordinate systems can be obtained, from which the relative relationship between the camera coordinate systems at any two acquisition times can be determined, and color restoration of the measured points can be realized; for the color restoration method, reference may be made to the related description in the above embodiments.
  • The two acquisition times selected by the color map restoration module 435 may be adjacent times, so that color restoration of each measured point is realized, then color restoration of the entire scan line, and in turn color restoration of the entire object.
  • the processor 43 may further include a fusion module 436, and the fusion module 436 is mainly used to combine the result of the three-dimensional reconstruction with the result of establishing a mapping relationship.
  • the three-dimensional reconstruction apparatus provided in this embodiment can realize the restoration of the color texture of the object by establishing the mapping relationship of the image information of the adjacent acquisition time.
  • the camera, the sensor and the processor may be built in the three-dimensional reconstruction device, and the line laser projector may be installed on the three-dimensional reconstruction device, and the three-dimensional reconstruction device may be a mobile terminal, such as a smart phone, a PAD or the like.
  • the three-dimensional reconstruction device of the present application has the advantage of portability.
  • FIG. 10 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention.
  • the three-dimensional reconstruction device 700 may be a host server having a computing capability, a personal computer PC, or a portable computer or terminal that can be carried.
  • The specific embodiments of the present invention do not limit the specific implementation of the computing node.
  • the three-dimensional reconstruction apparatus 700 includes a processor 710, a communications interface 720, a memory array 730, and a bus 740.
  • the processor 710, the communication interface 720, and the memory 730 complete communication with each other through the bus 740.
  • the communication interface 720 is for communicating with a network element, wherein the network element includes, for example, a virtual machine management center, shared storage, and the like.
  • the processor 710 is for executing a program.
  • the processor 710 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the memory 730 is used to store files.
  • Memory 730 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
  • Memory 730 can also be a memory array.
  • Memory 730 may also be partitioned, and the blocks may be combined into virtual volumes according to certain rules.
  • the above program may be a program code that includes computer operating instructions. This program can be used to perform the following steps:
  • Projecting a linear laser onto an object; continuously collecting image information of the object illuminated by the linear laser from at least two angles, and continuously collecting motion information of the camera; obtaining, according to the image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time; obtaining, according to the motion information, the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time; and performing three-dimensional reconstruction on the object according to the three-dimensional coordinates and the positional relationship.
  • Before obtaining, according to the image information, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time, the program further performs: calibrating the internal parameters and the external parameters of the camera; obtaining, according to the image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time then includes: converting, according to the image information collected at each acquisition time and the internal parameters and the external parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.
  • the motion information includes an acceleration, an angular velocity, and a heading, and according to the motion information, obtaining a positional relationship of a camera coordinate system at each acquisition time relative to a global three-dimensional coordinate system, including: according to the acceleration, The angular velocity and the heading are calculated by using the dead reckoning algorithm to calculate the positional relationship of the camera coordinate system with respect to the global three-dimensional coordinate system at each acquisition time.
  • the three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship includes: according to a positional relationship of a camera coordinate system of each acquisition time relative to a global three-dimensional coordinate system, The three-dimensional coordinates of the object under the camera coordinate system corresponding to each acquisition time are converted into three-dimensional coordinates under the global three-dimensional coordinate system.
  • The method further includes: calculating the relative relationship between the camera coordinate systems at different acquisition times, and establishing, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.
  • the mapping relationship between the image information of the object at different acquisition times according to the relative relationship includes: setting a point on the scan line of the object at the i-1th acquisition time at the camera The three-dimensional coordinates in the coordinate system are mapped to the camera coordinate system at the ith acquisition time, and the image coordinates of the point at the ith acquisition time are calculated, and the points are obtained according to the image coordinates of the point at the ith acquisition time. Pixel value, where i is any integer greater than one.
  • the method further includes: combining the result of the three-dimensional reconstruction with the result of establishing the mapping relationship.
  • When the functions are implemented in the form of computer software and sold or used as a stand-alone product, it can be considered, to some extent, that all or part of the technical solution of the present invention (for example, the part contributing beyond the prior art) is embodied in the form of a computer software product.
  • The computer software product is typically stored in a computer-readable storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Abstract

Disclosed are a three-dimensional reconstruction method and device, and a mobile terminal. The three-dimensional reconstruction method comprises: projecting linear laser to an object; continuously collecting image information about the object irradiated by the linear laser from at least two angles, and continuously collecting motion information about a camera; according to the image information, obtaining the three-dimensional coordinates of the object under a camera coordinate system corresponding to each collection time at each collection time; and according to the motion information, obtaining a position relationship of the camera coordinate system relative to a global three-dimensional coordinate system at each collection time, and conducting three-dimensional reconstruction of the object according to the three-dimensional coordinates and the position relationship. By means of the three-dimensional reconstruction method and device, and the mobile terminal in the embodiments of the present invention, image information about an object can be continuously collected from different angles and motion information about a camera can be continuously collected, and quick and omnidirectional three-dimensional scanning and reconstruction can be conducted on the object according to the collected information.

Description

Three-dimensional reconstruction method and device, and mobile terminal
This application claims priority to Chinese Patent Application No. 201310364666.0, filed with the Chinese Patent Office on August 20, 2013 and entitled "Three-dimensional reconstruction method and device, and mobile terminal", the entire contents of which are incorporated herein by reference.
技术领域 本发明涉及三维信息技术领域, 尤其涉及一种三维重建方法及装置、 移动终端。 The present invention relates to the field of three-dimensional information technology, and in particular, to a three-dimensional reconstruction method and apparatus, and a mobile terminal.
背景技术 三维扫描和重建是集光、 机、 电和计算机技术于一体的高新技术, 主要用于对物体外部结构及色彩进行扫描, 以获得物体的空间坐标。 其 重要意义在于能够将物体的立体信息转换为计算机能直接处理的数字信 号, 为实物数字化提供了相当方便快捷的手段。 三维扫描和重建技术在 4艮多领域都有广泛的应用, 如在工业上用于逆向工程计算, 在医疗上用 于面形检测, 在生产中用于产品质量控制等。 BACKGROUND OF THE INVENTION Three-dimensional scanning and reconstruction is a high-tech integrating light, machine, electricity and computer technology. It is mainly used to scan the external structure and color of an object to obtain the spatial coordinates of the object. Its important significance is that it can convert the stereo information of an object into a digital signal that can be directly processed by a computer, which provides a convenient and quick means for digital digitization. 3D scanning and reconstruction technology is widely used in more than 4 fields, such as industrial reverse engineering calculations, medical surface inspection, and product quality control in production.
现有技术中,通常采用以下两种装置实现对物体的三维扫描和重建。 其一是由线激光投射器、 摄像头以及外部辅助定位装置组成的手持式三 维扫描装置, 通过外部辅助定位装置进行激光跟踪或者在室内进行无线 定位来实现三维扫描和重建。 该装置的主要缺点是设备体积较大因而便 携性较差, 容易受空间范围限制, 而且其三维重建结果没有实现颜色信 息的采集, 因此, 颜色信息缺失严重。 其二是具有后置摄像头和微型投 影仪的手机, 利用投射出的多幅结构光实现三维扫描和重建。 该装置的 主要缺点是成本比较高, 而且只能测量一个面, 由于没有将所采集的图 像信息关联起来, 因此, 没有实现对物体的全方位的三维重建, 也没有 获得物体的颜色信息。  In the prior art, the following two devices are generally used to realize three-dimensional scanning and reconstruction of an object. One is a hand-held three-dimensional scanning device consisting of a line laser projector, a camera and an external auxiliary positioning device, which performs laser tracking by external auxiliary positioning device or wireless positioning indoors to realize three-dimensional scanning and reconstruction. The main disadvantage of the device is that the device is bulky and thus has poor portability, is easily limited by the spatial extent, and the three-dimensional reconstruction result does not realize the collection of color information, and therefore, the color information is seriously missing. The second is a mobile phone with a rear camera and a micro-projector that uses three projected structured lights to achieve three-dimensional scanning and reconstruction. The main disadvantage of this device is that it is relatively expensive and can only measure one surface. Since the acquired image information is not correlated, the omnidirectional three-dimensional reconstruction of the object is not achieved, and the color information of the object is not obtained.
发明内容 Summary of the invention
¾术问题 3⁄4 problem
有鉴于此, 本发明要解决的技术问题是如何实现一种三维重建方法 和装置, 可以对物体进行快速、 全方位的三维扫描和重建。  In view of this, the technical problem to be solved by the present invention is how to implement a three-dimensional reconstruction method and apparatus for performing fast and omnidirectional three-dimensional scanning and reconstruction of an object.
解决方案 为了解决上述问题, 在第一方面, 本发明提出了一种三维重建方法, 包括: 向物体投射线状激光; 从至少角度连续采集被线状激光照射的物 体的图像信息, 并连续采集摄像头的运动信息; 根据所述图像信息, 得 到每个采集时刻所述物体在对应于每个采集时刻的摄像头坐标系下的三 维坐标; 根据所述运动信息, 得到每个采集时刻的摄像头坐标系相对全 局三维坐标系的位置关系; 以及根据所述三维坐标和所述位置关系对所 述物体进行三维重建。 solution In order to solve the above problems, in a first aspect, the present invention provides a three-dimensional reconstruction method, comprising: projecting a linear laser to an object; continuously acquiring image information of an object illuminated by the linear laser from at least an angle, and continuously acquiring the image of the camera According to the image information, obtaining three-dimensional coordinates of the object at each acquisition time in a camera coordinate system corresponding to each acquisition time; according to the motion information, obtaining a camera coordinate system at each acquisition time is relatively global a positional relationship of the three-dimensional coordinate system; and three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship.
With reference to the first aspect, in a possible implementation, before the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time are obtained from the image information, the method further includes: calibrating the intrinsic and extrinsic parameters of the camera. Obtaining the three-dimensional coordinates then includes: converting the image information collected at each acquisition time, together with the intrinsic and extrinsic parameters, into the three-dimensional coordinates of the object in the camera coordinate system corresponding to that acquisition time.

With reference to the first aspect, in a possible implementation, the motion information includes acceleration, angular velocity and heading, and obtaining the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system includes: deriving that positional relationship from the acceleration, angular velocity and heading by dead reckoning.

With reference to the first aspect, in a possible implementation, reconstructing the object in three dimensions from the three-dimensional coordinates and the positional relationship includes: converting, according to the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.

With reference to the first aspect, in a possible implementation, after the object is reconstructed in three dimensions from the three-dimensional coordinates and the positional relationship, the method further includes: computing the relative relationship between the camera coordinate systems at different acquisition times, and establishing, according to that relative relationship, a mapping between the image information of the object at different acquisition times.

With reference to the first aspect, in a possible implementation, establishing the mapping between the image information of the object at different acquisition times according to the relative relationship includes: mapping the three-dimensional coordinates, in the camera coordinate system, of a point on the object's scan line at the (i-1)-th acquisition time into the camera coordinate system of the i-th acquisition time, computing the image coordinates of that point at the i-th acquisition time, and obtaining the pixel value of the point from those image coordinates, where i is any integer greater than 1.

With reference to the first aspect, in a possible implementation, after the mapping between the image information of the object at different acquisition times is established, the method further includes: fusing the result of the three-dimensional reconstruction with the result of establishing the mapping.
In a second aspect, the present invention provides a three-dimensional reconstruction apparatus, including: a line laser projector configured to project a linear laser onto an object; a camera configured to continuously capture, from different angles, image information of the object illuminated by the linear laser; a sensor configured to continuously collect motion information of the camera; and a processor connected to the camera, the line laser projector and the sensor. The processor includes: an image information processing module configured to obtain, from the image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to that acquisition time; a motion information processing module configured to obtain, from the motion information, the positional relationship of the camera coordinate system at each acquisition time relative to a global three-dimensional coordinate system; and a three-dimensional reconstruction module configured to reconstruct the object in three dimensions from the three-dimensional coordinates and the positional relationship.

With reference to the second aspect, in a possible implementation, the processor further includes a calibration module configured to calibrate the intrinsic and extrinsic parameters of the camera; the image information processing module is then configured to compute, from the image information collected at each acquisition time and the intrinsic and extrinsic parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to that acquisition time.

With reference to the second aspect, in a possible implementation, the motion information includes acceleration, angular velocity and heading, and the motion information processing module is configured to derive, by dead reckoning from the acceleration, angular velocity and heading, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.

With reference to the second aspect, in a possible implementation, the three-dimensional reconstruction module is configured to convert, according to the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.

With reference to the second aspect, in a possible implementation, the processor further includes a color mapping restoration module configured to compute the relative relationship between the camera coordinate systems at different acquisition times and to establish, according to that relative relationship, a mapping between the image information of the object at different acquisition times. In a possible implementation, the color mapping restoration module is configured to map the three-dimensional coordinates, in the camera coordinate system, of a point on the object's scan line at the (i-1)-th acquisition time into the camera coordinate system of the i-th acquisition time, to compute the image coordinates of that point at the i-th acquisition time, and to obtain the pixel value of the point from those image coordinates, where i is any integer greater than 1.

With reference to the second aspect, in a possible implementation, the processor further includes a fusion module configured to fuse the result of the three-dimensional reconstruction with the result of establishing the mapping.

In a third aspect, the present invention provides a mobile terminal that includes the above three-dimensional reconstruction apparatus.
Beneficial Effects

With the three-dimensional reconstruction method and apparatus and the mobile terminal of the embodiments of the present invention, image information of an object can be captured continuously from different angles while the motion information of the camera is collected continuously, and from the collected information the object can be scanned and reconstructed in three dimensions quickly and omnidirectionally.

Other features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present invention together with the description, and serve to explain the principles of the invention.

FIG. 1 is a flowchart of a three-dimensional reconstruction method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a three-dimensional reconstruction method provided by another embodiment of the present invention;
FIG. 3 is a flowchart of a three-dimensional reconstruction method provided by a further embodiment of the present invention;
FIG. 4 is a schematic diagram of a color mapping restoration method provided by a further embodiment of the present invention;
FIG. 5 is a structural block diagram of a three-dimensional reconstruction apparatus provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the line-laser working principle of the line laser projector in FIG. 5;
FIG. 7 is a schematic diagram of the three-dimensional scanning principle of a mobile terminal provided by another embodiment of the present invention;
FIG. 8 is a structural block diagram of a three-dimensional reconstruction apparatus provided by a further embodiment of the present invention;
FIG. 9 is a structural block diagram of a three-dimensional reconstruction apparatus provided by a further embodiment of the present invention;
FIG. 10 is a structural block diagram of a three-dimensional reconstruction apparatus provided by a further embodiment of the present invention.
Detailed Description of the Embodiments

Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.

The word "exemplary" is used here to mean "serving as an example, embodiment or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred over or superior to other embodiments.

In addition, numerous specific details are given in the following description to better explain the present invention. Those skilled in the art will understand that the present invention can be practiced without these specific details. In other instances, well-known methods, means, elements and circuits are not described in detail, so as to highlight the gist of the present invention.

To aid understanding, the basic principle of line-structured-light vision detection on which the present invention is based, the coordinate systems involved, and the relationships between their coordinates are introduced first.
1. Basic principle of line-structured-light vision detection

During three-dimensional scanning, the laser projector first projects a linear laser onto the object. The projected linear laser forms a laser projection plane; where this plane intersects the surface of the object, a bright scan line, the light stripe, is formed on the surface. Because the light stripe contains all the surface points at which the laser projection plane intersects the object, the three-dimensional coordinates (x_w, y_w, z_w) of the corresponding surface points can be obtained from the coordinates of the light stripe. The stripe is observed by the camera as a two-dimensional image, and a point on that image is denoted (u, v); from the image point coordinates (u, v) the three-dimensional coordinates (x_w, y_w, z_w) of the corresponding surface point can be computed. This is the basic principle of line-structured-light vision detection. The computation of the three-dimensional coordinates (x_w, y_w, z_w) is expressed as Equation 1.
$$(x_w,\ y_w,\ z_w) = f(u, v) \qquad \text{(Equation 1)}$$

2. Common coordinate systems and their relationships

(1) Image coordinate system

A digital image captured by the camera can be stored in the computer as an array; the value of each element (pixel) of the array is the brightness (grey level) of the corresponding image point. A rectangular coordinate system u-v is defined on the image as the image coordinate system, in which the coordinates (u, v) of a pixel are its column and row indices in the array, so (u, v) are image coordinates in units of pixels. Because (u, v) only indicate the column and row of the pixel and do not express its physical position in the image, an image coordinate system expressed in physical units (for example centimeters) is also established; its coordinates are written (x, y). In this coordinate system the origin O_1 is defined at the intersection of the camera's optical axis and the image plane, called the principal point of the image, which generally lies at the image center, and the x and y axes are parallel to the u and v axes respectively. If the coordinates of O_1 in the u-v system are (u_0, v_0), and the physical size of one pixel along the x and y axes is dx and dy respectively, the coordinates of any pixel in the two coordinate systems are related by Equation 2.
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad \text{(Equation 2)}$$
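As a minimal illustration of Equation 2, the conversion between pixel and physical image coordinates can be sketched as follows; the parameter names dx, dy, u0 and v0 follow the notation above, and the functions themselves are only an illustrative sketch, not part of the original disclosure.

```python
def pixel_to_physical(u, v, dx, dy, u0, v0):
    """Convert pixel coordinates (u, v) to physical image coordinates (x, y).

    dx, dy are the physical size of one pixel along x and y; (u0, v0) is the
    principal point, i.e. the pixel coordinates of the image center.
    """
    x = (u - u0) * dx
    y = (v - v0) * dy
    return x, y


def physical_to_pixel(x, y, dx, dy, u0, v0):
    """Forward mapping of Equation 2: physical image coordinates to pixels."""
    u = x / dx + u0
    v = y / dy + v0
    return u, v
```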
(2) Camera coordinate system

Coordinates in the camera coordinate system are written (X_c, Y_c, Z_c). This coordinate system takes the optical center of the camera as its origin O_c; the X_c and Y_c axes are parallel to the x and y axes of the image coordinate system, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the optical axis with the image plane is the origin O_1 of the image coordinate system, and the rectangular coordinate system formed by the camera's optical center and the X_c, Y_c, Z_c axes is called the camera coordinate system. The distance O_cO_1 is the focal length f of the camera. The relationship between the camera coordinate system and the image coordinate system is given by Equations 3 and 4:
$$x = \frac{f\,X_c}{Z_c} \qquad \text{(Equation 3)}$$

$$y = \frac{f\,Y_c}{Z_c} \qquad \text{(Equation 4)}$$

Using homogeneous coordinates and matrix form, Equations 3 and 4 can be written as Equation 5:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \qquad \text{(Equation 5)}$$
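Equations 3 to 5 correspond to the standard pinhole projection. A small sketch, assuming the focal length f and a point already expressed in the camera coordinate system, might look like this (illustrative only):

```python
import numpy as np

def project_to_image(Xc, Yc, Zc, f):
    """Equations 3 and 4: a camera-frame point (Xc, Yc, Zc) with Zc > 0 maps
    to physical image coordinates (x, y) for focal length f."""
    return f * Xc / Zc, f * Yc / Zc


def project_homogeneous(P_c, f):
    """Matrix form of Equation 5: Zc * [x, y, 1]^T = M [Xc, Yc, Zc, 1]^T."""
    M = np.array([[f, 0.0, 0.0, 0.0],
                  [0.0, f, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    Xc, Yc, Zc = P_c
    p = M @ np.array([Xc, Yc, Zc, 1.0])   # = [f*Xc, f*Yc, Zc]
    return p[0] / p[2], p[1] / p[2]       # divide by Zc to get (x, y)
```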
(3) Global three-dimensional coordinate system

Because the camera can be placed at any position, a reference coordinate system is also needed to describe the position of the camera and of any object in the environment. In this application the world coordinate system is chosen as the reference coordinate system; the world coordinate system, denoted O_w-X_wY_wZ_w, is also called the global three-dimensional coordinate system and can be specified arbitrarily.

FIG. 1 is a flowchart of a three-dimensional reconstruction method provided by an embodiment of the present invention. As shown in FIG. 1, the method mainly includes:
Step S100: project a linear laser onto the object;

Step S110: continuously capture, from at least two angles, image information of the object illuminated by the linear laser, and continuously collect motion information of the camera;

Step S120: obtain, from the captured image information, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to that acquisition time;

Step S130: obtain, from the collected motion information, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system; and

Step S140: reconstruct the object in three dimensions from the above three-dimensional coordinates and positional relationships.

Specifically, the camera may be built into the three-dimensional reconstruction apparatus, which may be a mobile terminal such as a smartphone on which a line laser projector is mounted; the line laser projector may be connected to an external interface of the mobile terminal, for example the audio interface. When the object is to be scanned and reconstructed in three dimensions, the line laser projector drives its linear light source to emit a linear laser towards the object; the linear laser forms a laser projection plane, and where this plane intersects the object a bright scan line, the light stripe, appears on the object. The camera built into the apparatus then continuously captures, within a predetermined length of time, for example 0.5 s, image information of the scan line reflected from the object, while a sensor built into the apparatus continuously collects the motion information of the camera over the same span; the motion information mainly includes acceleration, angular velocity and heading, from which the apparatus can determine the position and attitude of the camera in space. The apparatus then processes the image information and motion information collected at each acquisition time. From the image information collected at each acquisition time it obtains the three-dimensional coordinates of the object in the camera coordinate system corresponding to that time: for example, from the image information collected at acquisition time T1 it obtains the coordinates of the object in the camera coordinate system at T1, from the image information collected at T2 the coordinates in the camera coordinate system at T2, and so on for every acquisition time. From the motion information collected at each acquisition time it obtains the positional relationship of the camera coordinate system at that time relative to the global three-dimensional coordinate system: for example, from the motion information collected at T1 the positional relationship of the T1 camera coordinate system, from the motion information collected at T2 that of the T2 camera coordinate system, and so on. In a possible implementation, the camera coordinate system at the moment three-dimensional scanning is started can be taken as the global three-dimensional coordinate system. Finally, from the three-dimensional coordinates of the object in the camera coordinate system of each acquisition time and the positional relationship of each such camera coordinate system relative to the global coordinate system, the global three-dimensional coordinates of the object are obtained, and the object is thereby reconstructed in three dimensions.
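Read together, steps S100 to S140 form a simple acquisition-and-fusion loop. The sketch below is only an illustration of that loop, under the assumption that the global coordinate system is the camera coordinate system of the first acquisition; the helpers stripe_to_camera_coords (step S120) and pose_from_motion (step S130) are hypothetical names, not functions defined in this disclosure.

```python
import numpy as np

def reconstruct(frames, motions, stripe_to_camera_coords, pose_from_motion):
    """Sketch of steps S100-S140: fuse per-frame stripe points into one cloud.

    frames[i]  : image captured at acquisition time i (laser stripe visible)
    motions[i] : accelerometer / gyroscope / compass readings at time i
    Returns an (N, 3) array of points in the global coordinate system, taken
    here to be the camera coordinate system of the first acquisition.
    """
    cloud = []
    R_global, T_global = np.eye(3), np.zeros(3)   # pose of frame 0 = global frame
    for i, (frame, motion) in enumerate(zip(frames, motions)):
        # S120: 3-D points of the laser stripe in this frame's camera coordinates
        points_cam = stripe_to_camera_coords(frame)          # (M, 3)
        if i > 0:
            # S130: incremental pose of frame i relative to frame i-1
            R_step, T_step = pose_from_motion(motion)
            # accumulate into the pose of frame i relative to the global frame
            T_global = R_global @ T_step + T_global
            R_global = R_global @ R_step
        # S140: move this frame's points into the global coordinate system
        cloud.append(points_cam @ R_global.T + T_global)
    return np.vstack(cloud)
```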
Preferably, the three-dimensional reconstruction apparatus can be moved once around the object while continuously capturing image information, so that image information of the object is captured continuously and from all directions.

Further, to increase the speed and efficiency of scanning and reconstruction, the capture of image information and the collection of motion information may be performed simultaneously.

With the three-dimensional reconstruction method of this embodiment of the present invention, the apparatus continuously captures image information of the object from different angles and continuously collects the motion information of the camera, and from the collected information the object can be scanned and reconstructed in three dimensions quickly and omnidirectionally.
FIG. 2 is a flowchart of a three-dimensional reconstruction method provided by another embodiment of the present invention. Steps in FIG. 2 with the same reference numerals as in FIG. 1 have the same meaning. As shown in FIG. 2, the main difference between this embodiment and the previous one is that, before the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time are obtained from the captured image information, the method further includes:

Step S200: calibrate the intrinsic and extrinsic parameters of the camera.

Accordingly, obtaining from the captured image information the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to that acquisition time may specifically include:

Step S210: compute, from the image information collected at each acquisition time and the above intrinsic and extrinsic parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to that acquisition time.
For step S200, before the three-dimensional coordinates of the object in the camera coordinate system are determined, the intrinsic and extrinsic parameters of the camera must first be calibrated. The intrinsic parameters reflect the internal properties of the camera itself, while the extrinsic parameters express the positional relationship between the module coordinate system and the camera coordinate system. The module coordinate system O_L-X_LY_LZ_L is attached to the laser projection module: its O_LX_LY_L coordinate plane is the laser projection plane, and its Z_L axis is orthogonal to the laser projection plane. The extrinsic parameters are usually expressed by a rotation matrix R and a translation vector T; R expresses the rotation of the module coordinate system relative to the camera coordinate system and is a 3x3 orthogonal matrix written as
$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \qquad \text{(Equation 6)}$$

where (r_1, r_4, r_7), (r_2, r_5, r_8) and (r_3, r_6, r_9) are the unit vectors of the X_L, Y_L and Z_L axes of the module coordinate system O_L-X_LY_LZ_L. T is the three-dimensional translation of the module coordinate system relative to the camera coordinate system, written as

$$T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \qquad \text{(Equation 7)}$$

where t_x, t_y and t_z are the coordinates of the origin of the module coordinate system in the camera coordinate system.

For step S210, while the three-dimensional reconstruction apparatus continuously scans the object, each captured image is corrected using the intrinsic parameters and the center of the light stripe is computed row by row, giving an array of stripe-center coordinates, that is, a set of stripe-center image coordinates. From the image coordinates of each stripe center and the calibrated intrinsic and extrinsic parameters, the coordinates of the corresponding point in the module coordinate system are computed, and from these its three-dimensional coordinates in the camera coordinate system corresponding to that acquisition time. From one set of stripe-center image coordinates the three-dimensional coordinates, in the camera coordinate system, of the whole scan line on the object can be computed; by scanning the object continuously and repeating this procedure, the three-dimensional coordinates of the entire object are obtained. Taking one stripe center as an example: suppose that at acquisition time T1 the computed stripe-center coordinate is (x, y), that its coordinate in the module coordinate system at T1 is (x_L, y_L, z_L) (where z_L = 0), and that its coordinate in the camera coordinate system at T1 is (X_c, Y_c, Z_c). According to Equation 8, the coordinate (x_L, y_L, z_L) in the module coordinate system at T1 is computed from the stripe-center coordinate (x, y). According to Equation 9, the coordinate (x_L, y_L, z_L) in the module coordinate system is then converted into the coordinate (X_c, Y_c, Z_c) in the camera coordinate system at T1:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} x_L \\ y_L \\ z_L \end{bmatrix} + T \qquad \text{(Equation 9)}$$

Repeating this calculation yields the three-dimensional coordinates of the entire object in the camera coordinate system at every acquisition time.
8可以由光条中心的坐标( x,y )计算出映射在 T1采集时刻模块坐标系下 的坐 (¾,Λ, );
Figure imgf000011_0003
根据式 9可以由模块坐标系下的坐标 ( , , 换算得到其映射在 T1 采集时刻的摄像头坐标系下的坐标 , , zc)。
8 can calculate the coordinates (3⁄4, Λ, ) mapped in the module coordinate system of the T1 acquisition time by the coordinates (x, y) of the center of the light bar;
Figure imgf000011_0003
According to Equation 9, the coordinates ( , , converted in the module coordinate system can be converted to the coordinates of the camera coordinate system at the T1 acquisition time, z c).
Figure imgf000012_0001
重复使用上述计算方法就可以计算得到整个物体在每个采集时刻摄 像头坐标系下的三维坐标。
Figure imgf000012_0001
By repeating the above calculation method, the three-dimensional coordinates of the entire object in the camera coordinate system at each acquisition time can be calculated.
In a possible implementation, the motion information includes acceleration, angular velocity and heading, and obtaining from the collected motion information the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system may specifically include:
Step S220: derive, by dead reckoning from the acceleration, angular velocity and heading, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.
Specifically, while the three-dimensional reconstruction apparatus continuously scans the object, the motion information of the camera at each acquisition time is obtained through the sensor. The sensor may mainly include a three-axis acceleration sensor, a three-axis gyroscope and a three-axis electronic compass: the acceleration sensor measures the three-axis acceleration of the camera, the gyroscope its three-axis angular velocity, and the electronic compass its three-axis heading. For example, at the k-th acquisition time the acceleration sensor measures the three-axis acceleration, the gyroscope the three-axis angular velocity and the compass the three-axis heading of the camera, and dead reckoning then gives the positional relationship of the camera coordinate system at the k-th acquisition time relative to the global three-dimensional coordinate system. Concretely, the measured accelerations a_x, a_y, a_z are integrated twice to obtain the displacements x, y, z; differencing then gives the three-axis displacement Δx, Δy, Δz and the three-axis rotation angles Δφ, Δθ, Δψ of the apparatus between two adjacent measurement times. The calculation is given by Equations 10 to 12.
$$\begin{aligned} v_x(k) &= v_x(k-1) + \Delta t \cdot a_x(k-1) \\ v_y(k) &= v_y(k-1) + \Delta t \cdot a_y(k-1) \\ v_z(k) &= v_z(k-1) + \Delta t \cdot a_z(k-1) \end{aligned} \qquad \text{(Equation 10)}$$

$$\begin{aligned} \Delta x &= x(k) - x(k-1) = \Delta t \cdot v_x(k-1) \\ \Delta y &= y(k) - y(k-1) = \Delta t \cdot v_y(k-1) \\ \Delta z &= z(k) - z(k-1) = \Delta t \cdot v_z(k-1) \end{aligned} \qquad \text{(Equation 11)}$$

$$\begin{aligned} \Delta\varphi &= \varphi(k) - \varphi(k-1) = \Delta t \cdot \omega_x(k-1) \\ \Delta\theta &= \theta(k) - \theta(k-1) = \Delta t \cdot \omega_y(k-1) \\ \Delta\psi &= \psi(k) - \psi(k-1) = \Delta t \cdot \omega_z(k-1) \end{aligned} \qquad \text{(Equation 12)}$$

where Δt is the sampling interval, that is, the interval between the k-th and the (k-1)-th acquisition times.
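A minimal dead-reckoning sketch of Equations 10 to 12, assuming the accelerometer and gyroscope samples are already expressed in a common frame and gravity-compensated (both assumptions, not stated in the original text):

```python
import numpy as np

class DeadReckoner:
    """Keeps the running velocity v(k) = v(k-1) + dt * a(k-1) (Equation 10)."""

    def __init__(self):
        self.v = np.zeros(3)

    def step(self, accel, gyro, dt):
        """One sampling interval: returns (dpos, dang).

        dpos : three-axis displacement (dx, dy, dz) over dt   (Equation 11)
        dang : three-axis rotation increment (dphi, dtheta, dpsi) (Equation 12)
        """
        accel = np.asarray(accel, dtype=float)
        gyro = np.asarray(gyro, dtype=float)
        dpos = dt * self.v                 # uses v(k-1), as in Equation 11
        dang = dt * gyro                   # Equation 12
        self.v = self.v + dt * accel       # update to v(k), Equation 10
        return dpos, dang
```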
The relative positional relationship between the camera coordinate systems at two adjacent measurement times can be expressed by a rotation and a translation, as given by Equations 13 and 14:

$$\begin{bmatrix} x_{k-1} \\ y_{k-1} \\ z_{k-1} \end{bmatrix} = R_k \begin{bmatrix} x_k \\ y_k \\ z_k \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} \qquad \text{(Equation 13)}$$

$$R_k = \begin{bmatrix} \cos\Delta\varphi\cos\Delta\theta & \cos\Delta\varphi\sin\Delta\theta\sin\Delta\psi - \sin\Delta\varphi\cos\Delta\psi & \cos\Delta\varphi\sin\Delta\theta\cos\Delta\psi + \sin\Delta\varphi\sin\Delta\psi \\ \sin\Delta\varphi\cos\Delta\theta & \sin\Delta\varphi\sin\Delta\theta\sin\Delta\psi + \cos\Delta\varphi\cos\Delta\psi & \sin\Delta\varphi\sin\Delta\theta\cos\Delta\psi - \cos\Delta\varphi\sin\Delta\psi \\ -\sin\Delta\theta & \cos\Delta\theta\sin\Delta\psi & \cos\Delta\theta\cos\Delta\psi \end{bmatrix} \qquad \text{(Equation 14)}$$

where (x_{k-1}, y_{k-1}, z_{k-1}) and (x_k, y_k, z_k) denote the coordinates of a point on the object in the camera coordinate systems of the (k-1)-th and the k-th acquisition times respectively.
-sinA^ 式 14
Figure imgf000013_0001
表示物体在第 k-l采集时刻、 第 k采集时刻的摄 像头坐标系下的坐标。
-sinA^ Equation 14
Figure imgf000013_0001
Indicates the coordinates of the object in the camera coordinate system at the kl acquisition time and the kth acquisition time.
通过多次迭代, 可以得到当前测量时刻摄像头坐标系相对全局三维 坐标系的相对位置关系。 同理, 可以根据所测量到每个采集时刻摄像头 的加速度、 角速度, 同样可以利用航位推算法推算出对应于每个采集时 刻的摄像头坐标系相对全局三维坐标系的位置关系。 这样, 就可以得到 不同采集时刻摄像头坐标系的相对位置关系, 从而得到一个连续的摄像 头坐标系序列。 其中, 可以通过三轴角速度推算出三轴旋转角度, 也可 以通过三轴航向推算出三轴旋转角度。  Through multiple iterations, the relative positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at the current measurement time can be obtained. Similarly, according to the measured acceleration and angular velocity of the camera at each acquisition time, the positional relationship of the camera coordinate system corresponding to the global three-dimensional coordinate system corresponding to each acquisition time can also be calculated by using the dead reckoning algorithm. In this way, the relative positional relationship of the camera coordinate system at different acquisition times can be obtained, thereby obtaining a continuous sequence of camera coordinate systems. Among them, the three-axis rotation angle can be calculated from the three-axis angular velocity, or the three-axis rotation angle can be derived from the three-axis heading.
在一种可能的具体实施方式中, 上述根据三维坐标和位置关系对物 体进行三维重建包括:  In a possible specific implementation, the foregoing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship includes:
步骤 S230、 根据每个采集时刻的摄像头坐标系相对全局三维坐标系 的位置关系, 将物体在对应于每个采集时刻的摄像头坐标系下的三维坐 标转换成全局三维坐标系下的三维坐标。 对于上述步骤 S230 , 在步骤 S210得到每个采集时刻物体在对应于 每个采集时刻的摄像头坐标系下的三维坐标以及步骤 S220 得到每个采 集时刻的摄像头坐标系相对全局三维坐标系的位置关系后, 就可以换算 出每个采集时刻物体在全局三维坐标系下的三维坐标, 从而进行物体的 三维重建。 Step S230: Convert the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into the three-dimensional coordinates in the global three-dimensional coordinate system according to the positional relationship of the camera coordinate system with respect to the global three-dimensional coordinate system at each acquisition time. For the above step S230, in step S210, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time are obtained, and in step S220, the positional relationship between the camera coordinate system and the global three-dimensional coordinate system of each acquisition time is obtained. Then, the three-dimensional coordinates of the object in the global three-dimensional coordinate system at each acquisition time can be converted, thereby performing three-dimensional reconstruction of the object.
本发明实施例的三维重建方法, 三维重建装置可以从不同角度连续 采集到物体的图像信息, 得到每个采集时刻该物体在对应于每个采集时 刻的摄像头坐标系下的三维坐标以及连续采集摄像头的运动信息, 得到 每个采集时刻的摄像头坐标系相对全局三维坐标系的位置关系, 从而实 现对物体快速和全方位的三维扫描和重建。  In the three-dimensional reconstruction method of the embodiment of the present invention, the three-dimensional reconstruction device can continuously collect image information of the object from different angles, and obtain the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time and the continuous acquisition camera at each acquisition time. The motion information obtains the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system at each acquisition time, thereby realizing fast and comprehensive three-dimensional scanning and reconstruction of the object.
FIG. 3 is a flowchart of a three-dimensional reconstruction method provided by a further embodiment of the present invention. Steps in FIG. 3 with the same reference numerals as in FIGS. 1 and 2 have the same meaning. As shown in FIG. 3, the main difference between this embodiment and the previous one is that, after the object is reconstructed in three dimensions from the three-dimensional coordinates and the positional relationship, the method further includes:

Step S240: compute the relative relationship between the camera coordinate systems at different acquisition times, and establish, according to that relative relationship, a mapping between the image information of the object at different acquisition times.
Specifically, as shown in FIG. 4, step S220 yields a continuous sequence of camera coordinate systems, so the relative positional relationship between the camera coordinate systems of any two acquisition times can be obtained. For example, take two acquisition times T0 and T1; the relative positional relationship between their camera coordinate systems can be expressed by a 3x3 rotation matrix R_0 and a 3x1 translation matrix G_0. At acquisition time T0 a measured point on the scan line of the object is covered by the linear laser, and its three-dimensional coordinate in the camera coordinate system is (x_0, y_0, z_0); at acquisition time T1 the same point shows its true color. Let its three-dimensional coordinate in the camera coordinate system at T1 be (x_1, y_1, z_1); the mapping between the two camera coordinate systems is given by Equation 15.
$$\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} = R_0 \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} + G_0 \qquad \text{(Equation 15)}$$
According to Equation 15, (x_1, y_1, z_1) is obtained. Substituting (x_1, y_1, z_1) into Equations 3 and 4 gives the image coordinates of the measured point at time T1, and Equation 2 then converts these into pixel coordinates, whose pixel value gives the true color information of the measured point. The color of the measured point is thereby restored, and traversing the measured points on the scan line restores the color of the entire scan line.
Specifically, as shown in FIG. 4, according to the above step S220, a continuous sequence of camera coordinate systems can be obtained, that is, the relative positional relationship of the camera coordinate system at any two acquisition times can be obtained, for example, two acquisition times T0 can be obtained. , T1, the relative positional relationship between the camera coordinate systems of the two acquisition moments can use the rotation matrix and the translation matrix G. To represent, where, is a 3 * 3 rotation matrix, G . A 3* 1 translation matrix. At the time of TO acquisition, a certain measured point on the scanning line on the object is covered by the linear laser, and its three-dimensional coordinate in the camera coordinate system is p . 0. , z. At the time of T1 acquisition, the measured point shows its true color, ^^ sets the three-dimensional coordinates of the measured point in the camera coordinate system at the time of T1 acquisition as ( χ ι, , ζ ι), the measured point is The mapping relationship in the camera coordinate system at the two acquisition moments is as shown in Equation 15.
Figure imgf000014_0001
According to Equation 10, it can be obtained that if the image coordinate of the measured point at time T1 is SX M , then ( ^ can be substituted into Equation 3 and Equation 4 to calculate 1 ^(Α, , finally, according to Equation 2 Obtaining the pixel value of the coordinate, thereby obtaining the true color information of the measured point, that is, realizing the color restoration of the measured point, and traversing the measured point on the scanning line to realize the whole scanning The color reproduction of the line.
Similarly, the camera-coordinate-system coordinates of a measured point on the object's scan line at acquisition time T1 are recorded; if the relative positional relationship between the camera coordinate systems at times T1 and T2 is expressed by a rotation matrix R_1 and a translation matrix G_1, then replacing R_0 and G_0 with R_1 and G_1, and replacing (x_0, y_0, z_0) with the coordinates of the point in the camera coordinate system at T1, Equation 15 likewise gives the coordinates of that point in the camera coordinate system at T2, hence its image coordinates at T2 and, by Equation 2, its pixel value, that is, its true color information. The color of the point is thereby restored, traversing the points on the scan line restores the color of the whole scan line, and repeating the procedure restores the color of the entire object.
Preferably, in a possible implementation, the two acquisition times selected in step S240 are adjacent acquisition times, which improves the accuracy of the color restoration of the measured points.
In a possible implementation, after the mapping between the image information of the object at different acquisition times is established, the method further includes:
Step S250: fuse the result of the three-dimensional reconstruction with the result of establishing the mapping.

Specifically, the result of the three-dimensional reconstruction, that is, the position of each measured point on the object, is associated with the result of establishing the mapping, that is, the color of each measured point obtained by color restoration, so that both the three-dimensional reconstruction and the color restoration of the object are achieved.
By establishing the mapping between the image information of adjacent acquisition times, the three-dimensional reconstruction method provided by this embodiment can restore the color texture of the object.
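The color restoration of Equation 15 combined with Equations 2 to 4 can be sketched for a single measured point as follows (an illustrative sketch; parameter names follow the notation used above):

```python
import numpy as np

def color_for_point(p_t0, R01, G01, f, dx, dy, u0, v0, image_t1):
    """Color restoration sketch for one measured point.

    p_t0     : 3-D point in the camera frame of time T0 (on the stripe at T0).
    R01, G01 : rotation and translation relating the two camera frames, so
               that p_t1 = R01 @ p_t0 + G01 (Equation 15).
    f, dx, dy, u0, v0 : parameters used in Equations 2 to 4.
    image_t1 : image captured at T1, indexed as image_t1[row, col].
    """
    x1, y1, z1 = np.asarray(R01) @ np.asarray(p_t0) + np.asarray(G01)
    # Equations 3 and 4: project into the image plane at T1
    x_img, y_img = f * x1 / z1, f * y1 / z1
    # Equation 2: physical image coordinates -> pixel coordinates
    u, v = x_img / dx + u0, y_img / dy + v0
    # sample the pixel value (nearest neighbour); this is the point's color
    return image_t1[int(round(v)), int(round(u))]
```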
FIG. 5 is a structural block diagram of a three-dimensional reconstruction apparatus provided by an embodiment of the present invention. As shown in FIG. 5, the three-dimensional reconstruction apparatus 100 mainly includes a camera 41, a sensor 42, a processor 43 and a line laser projector 44; the apparatus may be a mobile terminal. The line laser projector 44 is mainly used to project a linear laser onto the object; the camera 41 is mainly used to continuously capture, from at least two angles, image information of the object illuminated by the linear laser; the sensor 42 is mainly used to continuously collect the motion information of the camera 41. The processor 43 is connected to the camera 41, the sensor 42 and the line laser projector 44 and may include an image information processing module 431, a motion information processing module 432 and a three-dimensional reconstruction module 433. The image information processing module 431 obtains, from the image information captured by the camera 41, the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to that acquisition time; the motion information processing module 432 obtains, from the motion information, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system; and the three-dimensional reconstruction module 433 reconstructs the object in three dimensions from the three-dimensional coordinates and the positional relationship.
Specifically, the camera 41, the sensor 42 and the processor 43 may all be built into the three-dimensional reconstruction apparatus 100, which may be a mobile terminal, for example a smartphone, on which a line laser projector 44 can be mounted. The line laser projector 44 may be connected to an external interface of the mobile terminal, for example the audio interface. A three-dimensional scanning control switch may also be provided to control the apparatus: when the object is to be scanned and reconstructed in three dimensions, the switch starts the three-dimensional scan, and the line laser projector 44 drives its linear light source to emit a line laser towards the object, forming a laser projection plane.
FIG. 6 is a schematic diagram of the line-laser working principle of the line laser projector 44, taking a line laser projector 44 mounted on a smartphone as an example. The line laser projector 44 is mounted on the smartphone and connected to an external interface of the phone, for example the audio interface. When the object is to be scanned and reconstructed in three dimensions, the three-dimensional scanning control switch starts the scan: the left or right audio channel of the smartphone outputs a square wave of a certain frequency which, through the miniature transformer 441 and rectifier 442 built into the line laser projector 44, powers its laser diode 443, and the cylindrical lens 444 then spreads the laser emitted by the laser diode 443 into a line laser. Where the laser projection plane intersects the object, a bright scan line, the light stripe, is formed and the three-dimensional scan can begin: the camera 41 captures image information of the light reflected from the object, and the sensor 42 collects the motion information of the camera.
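The audio-interface powering scheme just described can be illustrated with a short sketch that only generates the driving square wave; the frequency, sample rate and buffer layout are illustrative assumptions rather than values given in the original text.

```python
import numpy as np

def square_wave_buffer(freq_hz=1000.0, duration_s=1.0, sample_rate=44100):
    """Stereo buffer with a square wave on the left channel and silence on the
    right. Played through the headphone jack, this drives the projector's
    transformer/rectifier stage that powers the laser diode."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    left = np.sign(np.sin(2.0 * np.pi * freq_hz * t)).astype(np.float32)
    right = np.zeros_like(left)
    return np.stack([left, right], axis=1)   # shape: (samples, 2)
```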
Preferably, the three-dimensional reconstruction apparatus can be moved once around the object so that the camera 41 continuously captures image information of the object, realizing continuous, omnidirectional capture of the object's image information.

Further, to increase the speed and efficiency of scanning and reconstruction, the capture of image information by the camera 41 and the collection of motion information by the sensor 42 may be performed simultaneously.

FIG. 7 is a schematic diagram of the principle of three-dimensional scanning and reconstruction with a mobile terminal. As shown in FIG. 7, the mobile terminal mainly includes a camera, a line laser projector, an audio interface and a sensor; the line laser projector can be attached externally to the audio interface. By moving the mobile terminal, image information of the object and motion information of the camera are captured comprehensively, continuously and quickly, from which the coordinates of the object in the global three-dimensional coordinate system are computed and the object is finally scanned and reconstructed in three dimensions.

With the three-dimensional reconstruction apparatus of this embodiment of the present invention, image information of the object is captured continuously from different angles while the motion information of the camera is collected continuously, and from the collected information the object can be scanned and reconstructed in three dimensions quickly and omnidirectionally.
图 8示出了本发明又一个实施例提供的三维重建装置的结构框图。 图 8与图 5标号相同的组件具有相同的功能, 如图 8所示, 本实施例与 上一实施例的主要区别在于, 本实施例的三维重建装置 200的处理器 43 还可以包括标定模块 434 , 该标定模块 434主要用于标定摄像头 41的内 参数和外参数。 相应地, 图像信息处理模块 431 具体可以用于根据摄像 头 41每个采集时刻采集到的图像信息以及内参数和外参数,换算出该物 体在对应于每个采集时刻的摄像头坐标系下的三维坐标。  FIG. 8 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention. 8 and FIG. 5 have the same functions, and as shown in FIG. 8, the main difference between this embodiment and the previous embodiment is that the processor 43 of the three-dimensional reconstruction apparatus 200 of the present embodiment may further include a calibration module. 434, the calibration module 434 is mainly used to calibrate the internal parameters and the external parameters of the camera 41. Correspondingly, the image information processing module 431 can be specifically configured to convert the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time according to the image information collected by each camera camera 41 and the internal parameters and the external parameters. .
具体地, 图像信息处理模块 431 在确定物体在对应于每个采集时刻 的摄像头坐标系下的三维坐标之前, 需要首先利用标定模块 434对摄像 头 41的内参数和外参数进行标定。 其中, 内参数和外参数的定义可以参 照上述实施例的相关说明。 之后, 图像信息处理模块 431 可以内参数对 采集到的各图像信息进行校正, 识别并计算出光条中心的图像坐标, 利 用光条中心的图像坐标, 并根据标定模块 434标定的内参数和外参数计 算出物体在激光投射平面下的坐标以及物体在相应采集时刻的摄像头坐 标系下的三维坐标。  Specifically, the image information processing module 431 first needs to calibrate the internal parameters and the external parameters of the camera 41 by using the calibration module 434 before determining the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time. The definitions of the internal parameters and the external parameters can be referred to the related description of the above embodiments. After that, the image information processing module 431 can correct the collected image information by internal parameters, identify and calculate the image coordinates of the center of the light bar, use the image coordinates of the center of the light bar, and calibrate the internal parameters and external parameters according to the calibration module 434. Calculate the coordinates of the object under the laser projection plane and the three-dimensional coordinates of the object in the camera coordinate system at the corresponding acquisition time.
在一种可能的具体实施方式中, 所述运动信息包括加速度、 角速度 以及航向, 运动信息处理模块 432具体可以用于根据所述加速度、 角速 度以及航向, 采用航位推算法推算出每个采集时刻的摄像头坐标系相对 全局三维坐标系的位置关系。  In a possible implementation, the motion information includes acceleration, angular velocity, and heading. The motion information processing module 432 may be specifically configured to calculate each acquisition time by using a dead reckoning algorithm according to the acceleration, angular velocity, and heading. The positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system.
具体地, 在三维重建装置连续扫描物体的过程中, 三维重建装置通 过传感器 42得到每一采集时刻的摄像头 41的运动信息。该传感器 42具 体可以包括三轴加速度传感器 421、 三轴陀螺仪 422 以及三轴电子罗盘 423。 三轴加速度传感器 421主要用于测量摄像头 41 的加速度, 三轴陀 螺仪 422主要用于测量摄像头 41的角速度, 三轴电子罗盘 423用于测量 摄像头 41 的航向。 然后根据所测量到的摄像头 41 的加速度、 角速度以 及航向, 利用航位推算法推算出每个采集时刻的摄像头坐标系相对全局 三维坐标系的位置关系, 这样, 就可以得到不同采集时刻摄像头坐标系 的相对位置关系, 从而得到一个连续的摄像头坐标系序列。  Specifically, in the process of continuously scanning the object by the three-dimensional reconstruction device, the three-dimensional reconstruction device obtains the motion information of the camera 41 at each acquisition time by the sensor 42. The sensor 42 body may include a three-axis acceleration sensor 421, a three-axis gyroscope 422, and a three-axis electronic compass 423. The three-axis acceleration sensor 421 is mainly used to measure the acceleration of the camera 41, the three-axis gyroscope 422 is mainly used to measure the angular velocity of the camera 41, and the three-axis electronic compass 423 is used to measure the heading of the camera 41. Then, based on the measured acceleration, angular velocity and heading of the camera 41, the positional relationship of the camera coordinate system with respect to the global three-dimensional coordinate system at each acquisition time is calculated by using the dead reckoning algorithm, so that the camera coordinate system at different acquisition times can be obtained. The relative positional relationship, resulting in a continuous sequence of camera coordinate systems.
在一种可能的具体实施方式中, 三维重建模块 433 具体可以用于根 据摄像头坐标系相对全局三维坐标系的位置关系, 将每个采集时刻物体 在对应于每个采集时刻的摄像头坐标系下的三维坐标转换成全局三维坐 标系下的三维坐标, 从而进行物体的三维重建。 In a possible implementation manner, the three-dimensional reconstruction module 433 may be specifically configured to: according to the positional relationship of the camera coordinate system relative to the global three-dimensional coordinate system, each of the collection time objects is in a camera coordinate system corresponding to each acquisition time. Convert three-dimensional coordinates into global three-dimensional coordinates The three-dimensional coordinates under the standard are used to perform three-dimensional reconstruction of the object.
With the three-dimensional reconstruction device of this embodiment of the present invention, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time are obtained from image information of the object continuously collected from different angles, and the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system is obtained from the continuously collected motion information of the camera, thereby achieving fast and omnidirectional three-dimensional scanning and reconstruction of the object.

FIG. 9 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention. Components in FIG. 9 having the same reference numerals as in FIG. 8 and FIG. 5 have the same functions. As shown in FIG. 9, the main difference between this embodiment and the previous one is that the processor 43 of the three-dimensional reconstruction apparatus 300 of this embodiment further includes a color mapping restoration module 435, which is mainly configured to calculate the relative relationship between the camera coordinate systems at different acquisition times and to establish, according to that relative relationship, a mapping relationship between the image information of the object at different acquisition times.

Specifically, a continuous sequence of camera coordinate systems can be obtained from the processing result of the motion information processing module 432. For the camera coordinate systems of any two acquisition times, the relative relationship between the two camera coordinate systems and the method of restoring the color of the measured point may be found in the related description of the foregoing embodiments.

Preferably, in a possible specific implementation, the two acquisition times selected by the color mapping restoration module 435 may be adjacent times, so that color restoration of the measured point is achieved, then of the entire scan line, and in turn of the entire object.

In a possible specific implementation, the processor 43 may further include a fusion module 436, which is mainly configured to fuse the result of the three-dimensional reconstruction with the result of establishing the mapping relationship.

The three-dimensional reconstruction apparatus provided in this embodiment can restore the color texture of the object by establishing a mapping relationship between the image information of adjacent acquisition times.

It should be noted that the camera, the sensor, and the processor may all be built into the three-dimensional reconstruction device, the line laser projector may also be mounted on the three-dimensional reconstruction device, and the three-dimensional reconstruction device may be a mobile terminal, such as a smartphone or a PAD. Thus, the three-dimensional reconstruction device of the present application has the advantage of portability.
FIG. 10 is a structural block diagram of a three-dimensional reconstruction apparatus according to still another embodiment of the present invention. The three-dimensional reconstruction apparatus 700 may be a host server having computing capability, a personal computer (PC), or a portable computer or terminal. The specific embodiments of the present invention do not limit the specific implementation of the computing node.

The three-dimensional reconstruction apparatus 700 includes a processor 710, a communications interface 720, a memory 730, and a bus 740. The processor 710, the communications interface 720, and the memory 730 communicate with one another through the bus 740.

The communications interface 720 is configured to communicate with a network element, where the network element includes, for example, a virtual machine management center, shared storage, and the like.

The processor 710 is configured to execute a program. The processor 710 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.

The memory 730 is configured to store files. The memory 730 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least one disk memory. The memory 730 may also be a memory array. The memory 730 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules.
In a possible implementation, the foregoing program may be program code including computer operation instructions. The program may specifically be configured to perform the following steps:
projecting a linear laser onto an object;

continuously collecting, from at least two angles, image information of the object irradiated by the linear laser, and continuously collecting motion information of a camera;

obtaining, according to the image information, three-dimensional coordinates of the object at each acquisition time in a camera coordinate system corresponding to each acquisition time;

obtaining, according to the motion information, a positional relationship of the camera coordinate system at each acquisition time relative to a global three-dimensional coordinate system; and

performing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship.
In a possible implementation, before the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time are obtained according to the image information, the foregoing program further includes: calibrating internal parameters and external parameters of the camera. In this case, the obtaining, according to the image information, of the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time includes: calculating, according to the image information collected at each acquisition time and the internal parameters and external parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.

In a possible implementation, the motion information includes acceleration, angular velocity, and heading, and the obtaining, according to the motion information, of the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system includes: calculating, by a dead reckoning algorithm and according to the acceleration, angular velocity, and heading, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.
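The program does not prescribe a particular calibration procedure; the sketch below only illustrates one common way to obtain the internal parameters with OpenCV's chessboard calibration. The board layout, image file names, and the choice of OpenCV are assumptions for illustration, not part of the claimed method.

```python
import glob
import cv2
import numpy as np

# Chessboard with 9x6 inner corners, 25 mm squares (assumed board).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_points, img_points, img_size = [], [], None
for path in sorted(glob.glob("calib_*.png")):      # assumed file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]                    # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds the internal parameters; rvecs/tvecs give per-view external parameters.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
```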
In a possible implementation, the performing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship includes: converting, according to the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.

In a possible implementation, after the performing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship, the method further includes: calculating a relative relationship between the camera coordinate systems at different acquisition times, and establishing, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.
In a possible implementation, the establishing, according to the relative relationship, of the mapping relationship between the image information of the object at different acquisition times includes: mapping three-dimensional coordinates, in the camera coordinate system, of a point on a scan line of the object at the (i-1)-th acquisition time into the camera coordinate system at the i-th acquisition time, calculating image coordinates of the point at the i-th acquisition time, and obtaining a pixel value of the point according to the image coordinates of the point at the i-th acquisition time, where i is any integer greater than 1.
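In effect, each scan-line point reconstructed at acquisition time i-1 is re-projected into the image captured at acquisition time i to pick up its pixel value. A minimal sketch of that re-projection follows, assuming the relative pose (R, t) between the two camera frames comes from the dead-reckoned coordinate-system sequence and K is the calibrated internal parameter matrix; the names are illustrative.

```python
import numpy as np

def color_from_next_frame(point_cam_prev, R_rel, t_rel, K, image_i):
    """Fetch the pixel value for a scan-line point measured at time i-1.

    point_cam_prev : 3D point in the camera frame at acquisition time i-1
    R_rel, t_rel   : pose of frame i-1 expressed in frame i
    K              : 3x3 internal parameter matrix
    image_i        : HxWx3 color image captured at acquisition time i
    """
    # Map the point into the camera coordinate system at time i.
    point_cam_i = R_rel @ point_cam_prev + t_rel
    # Project to image coordinates at time i.
    uvw = K @ point_cam_i
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    # Nearest-neighbour lookup of the pixel value (bounds check omitted).
    return image_i[int(round(v)), int(round(u))]
```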
In a possible implementation, after the establishing of the mapping relationship between the image information of the object at different acquisition times, the method further includes: fusing the result of the three-dimensional reconstruction with the result of establishing the mapping relationship.
A person of ordinary skill in the art may be aware that the exemplary units and algorithm steps in the embodiments described herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation shall not be considered to go beyond the scope of the present invention.

If the functions are implemented in the form of computer software and sold or used as an independent product, all or part of the technical solutions of the present invention (for example, the part contributing to the prior art) may, to that extent, be considered to be embodied in the form of a computer software product. The computer software product is typically stored in a computer-readable storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A three-dimensional reconstruction method, characterized by comprising:
projecting a linear laser onto an object;
continuously collecting, from at least two angles, image information of the object irradiated by the linear laser, and continuously collecting motion information of a camera;
obtaining, according to the image information, three-dimensional coordinates of the object at each acquisition time in a camera coordinate system corresponding to each acquisition time;
obtaining, according to the motion information, a positional relationship of the camera coordinate system at each acquisition time relative to a global three-dimensional coordinate system; and
performing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship.

2. The three-dimensional reconstruction method according to claim 1, characterized in that, before the obtaining, according to the image information, of the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time, the method further comprises:
calibrating internal parameters and external parameters of the camera;
wherein the obtaining, according to the image information, of the three-dimensional coordinates of the object at each acquisition time in the camera coordinate system corresponding to each acquisition time comprises:
calculating, according to the image information collected at each acquisition time and the internal parameters and external parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.

3. The three-dimensional reconstruction method according to claim 2, characterized in that the motion information comprises acceleration, angular velocity, and heading, and the obtaining, according to the motion information, of the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system comprises:
calculating, by a dead reckoning algorithm and according to the acceleration, angular velocity, and heading, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.

4. The three-dimensional reconstruction method according to claim 3, characterized in that the performing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship comprises:
converting, according to the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.

5. The three-dimensional reconstruction method according to any one of claims 1 to 4, characterized in that, after the performing three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship, the method further comprises:
calculating a relative relationship between the camera coordinate systems at different acquisition times, and establishing, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.

6. The three-dimensional reconstruction method according to claim 5, characterized in that the establishing, according to the relative relationship, of the mapping relationship between the image information of the object at different acquisition times comprises:
mapping three-dimensional coordinates, in the camera coordinate system, of a point on a scan line of the object at the (i-1)-th acquisition time into the camera coordinate system at the i-th acquisition time, calculating image coordinates of the point at the i-th acquisition time, and obtaining a pixel value of the point according to the image coordinates of the point at the i-th acquisition time, where i is any integer greater than 1.

7. A three-dimensional reconstruction apparatus, characterized by comprising:
a line laser projector, configured to project a linear laser onto an object;
a camera, configured to continuously collect, from at least two angles, image information of the object irradiated by the linear laser;
a sensor, configured to continuously collect motion information of the camera; and
a processor, connected to the camera, the line laser projector, and the sensor, wherein the processor comprises:
an image information processing module, configured to obtain, according to the image information, three-dimensional coordinates of the object at each acquisition time in a camera coordinate system corresponding to each acquisition time;
a motion information processing module, configured to obtain, according to the motion information, a positional relationship of the camera coordinate system at each acquisition time relative to a global three-dimensional coordinate system; and
a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction of the object according to the three-dimensional coordinates and the positional relationship.

8. The three-dimensional reconstruction apparatus according to claim 7, characterized in that the processor further comprises a calibration module, configured to calibrate internal parameters and external parameters of the camera;
wherein the image information processing module is specifically configured to calculate, according to the image information collected at each acquisition time and the internal parameters and external parameters, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time.

9. The three-dimensional reconstruction apparatus according to claim 8, characterized in that the motion information comprises acceleration, angular velocity, and heading, and the motion information processing module is specifically configured to calculate, by a dead reckoning algorithm and according to the acceleration, angular velocity, and heading, the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system.

10. The three-dimensional reconstruction apparatus according to claim 9, characterized in that the three-dimensional reconstruction module is specifically configured to convert, according to the positional relationship of the camera coordinate system at each acquisition time relative to the global three-dimensional coordinate system, the three-dimensional coordinates of the object in the camera coordinate system corresponding to each acquisition time into three-dimensional coordinates in the global three-dimensional coordinate system.

11. The three-dimensional reconstruction apparatus according to any one of claims 7 to 9, characterized in that the processor further comprises a color mapping restoration module, configured to calculate a relative relationship between the camera coordinate systems at different acquisition times, and to establish, according to the relative relationship, a mapping relationship between the image information of the object at different acquisition times.

12. The three-dimensional reconstruction apparatus according to claim 11, characterized in that the color mapping restoration module is specifically configured to map three-dimensional coordinates, in the camera coordinate system, of a point on a scan line of the object at the (i-1)-th acquisition time into the camera coordinate system at the i-th acquisition time, calculate image coordinates of the point at the i-th acquisition time, and obtain a pixel value of the point according to the image coordinates of the point at the i-th acquisition time, where i is any integer greater than 1.

13. A mobile terminal, characterized in that the mobile terminal comprises the three-dimensional reconstruction apparatus according to any one of claims 7 to 12.
PCT/CN2014/070135 2013-08-20 2014-01-06 Three-dimensional reconstruction method and device, and mobile terminal WO2015024361A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310364666.0A CN104424630A (en) 2013-08-20 2013-08-20 Three-dimension reconstruction method and device, and mobile terminal
CN201310364666.0 2013-08-20

Publications (1)

Publication Number Publication Date
WO2015024361A1 true WO2015024361A1 (en) 2015-02-26

Family

ID=52483013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/070135 WO2015024361A1 (en) 2013-08-20 2014-01-06 Three-dimensional reconstruction method and device, and mobile terminal

Country Status (2)

Country Link
CN (1) CN104424630A (en)
WO (1) WO2015024361A1 (en)

Families Citing this family (23)

Publication number Priority date Publication date Assignee Title
CN107404643B (en) * 2016-05-18 2019-01-29 上海宽翼通信科技有限公司 A kind of three-dimensional camera shooting system and its image capture method
CN106123802A (en) * 2016-06-13 2016-11-16 天津大学 A kind of autonomous flow-type 3 D measuring method
DE102017113473A1 (en) 2016-06-20 2017-12-21 Cognex Corporation Method for the three-dimensional measurement of moving objects in a known movement
CN113884080A (en) * 2016-11-01 2022-01-04 北京墨土科技有限公司 Method and equipment for determining three-dimensional coordinates of positioning point and photoelectric measuring instrument
CN108474658B (en) * 2017-06-16 2021-01-12 深圳市大疆创新科技有限公司 Ground form detection method and system, unmanned aerial vehicle landing method and unmanned aerial vehicle
CN109495733B (en) * 2017-09-12 2020-11-06 宏达国际电子股份有限公司 Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof
CN108124489B (en) * 2017-12-27 2023-05-12 达闼机器人股份有限公司 Information processing method, apparatus, cloud processing device and computer program product
CN108062790B (en) * 2018-01-02 2021-07-16 广东嘉铭智能科技有限公司 Three-dimensional coordinate system establishing method applied to object three-dimensional reconstruction
CN108413917B (en) * 2018-03-15 2020-08-07 中国人民解放军国防科技大学 Non-contact three-dimensional measurement system, non-contact three-dimensional measurement method and measurement device
CN108717724A (en) * 2018-04-02 2018-10-30 珠海格力电器股份有限公司 A kind of measurement method and device
CN109285214A (en) * 2018-08-16 2019-01-29 Oppo广东移动通信有限公司 Processing method, device, electronic equipment and the readable storage medium storing program for executing of threedimensional model
CN109493277A (en) * 2018-09-30 2019-03-19 先临三维科技股份有限公司 Probe data joining method, device, computer equipment and storage medium
WO2020064015A1 (en) * 2018-09-30 2020-04-02 先临三维科技股份有限公司 Scanner head data stitching method, scanning device, computer apparatus, storage medium, and image acquisition device
CN109544679B (en) * 2018-11-09 2023-04-18 深圳先进技术研究院 Three-dimensional reconstruction method for inner wall of pipeline
CN109951695B (en) * 2018-11-12 2020-11-24 北京航空航天大学 Mobile phone-based free-moving light field modulation three-dimensional imaging method and imaging system
CN110319772B (en) * 2019-07-12 2020-12-15 上海电力大学 Visual large-span distance measurement method based on unmanned aerial vehicle
CN110230983B (en) * 2019-07-16 2021-01-08 北京欧比邻科技有限公司 Vibration-resisting optical three-dimensional positioning method and device
CN113140030A (en) * 2020-01-17 2021-07-20 北京小米移动软件有限公司 Three-dimensional model generation method and device and storage medium
CN111397528B (en) * 2020-03-26 2021-03-09 北京航空航天大学 Portable train wheel regular section contour structure optical vision measurement system and method
CN111383332B (en) * 2020-03-26 2023-10-13 深圳市菲森科技有限公司 Three-dimensional scanning and reconstruction system, computer device and readable storage medium
CN112964196B (en) * 2021-02-05 2023-01-03 杭州思锐迪科技有限公司 Three-dimensional scanning method, system, electronic device and computer equipment
CN113706692B (en) * 2021-08-25 2023-10-24 北京百度网讯科技有限公司 Three-dimensional image reconstruction method, three-dimensional image reconstruction device, electronic equipment and storage medium
CN114160506A (en) * 2021-11-12 2022-03-11 国能铁路装备有限责任公司 Brake beam cleaning line and method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1567384A (en) * 2003-06-27 2005-01-19 史中超 Method of image acquisition, digitized measure and reconstruction of three-dimensional object
CN101178815A (en) * 2007-12-10 2008-05-14 电子科技大学 Accurate measurement method for three-dimensional image rebuilding body
CN102831637A (en) * 2012-06-28 2012-12-19 北京理工大学 Three-dimensional reconstruction method based on mobile device
CN102999939A (en) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system, real-time three-dimensional reconstruction method and three-dimensional interactive equipment

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP3624353B2 (en) * 2002-11-14 2005-03-02 有限会社テクノドリーム二十一 Three-dimensional shape measuring method and apparatus
EP2230482B1 (en) * 2005-03-11 2013-10-30 Creaform Inc. Auto-referenced system and apparatus for three-dimensional scanning
CN101969523B (en) * 2010-10-21 2012-10-03 西北农林科技大学 Three-dimensional scanning device and three-dimensional scanning method
CN102184566A (en) * 2011-04-28 2011-09-14 湘潭大学 Micro projector mobile phone platform-based portable three-dimensional scanning system and method
CN103047969B (en) * 2012-12-07 2016-03-16 北京百度网讯科技有限公司 By method and the mobile terminal of mobile terminal generating three-dimensional figures picture

Cited By (25)

Publication number Priority date Publication date Assignee Title
CN109934931A (en) * 2017-12-19 2019-06-25 阿里巴巴集团控股有限公司 Acquisition image, the method and device for establishing target object identification model
CN110619807A (en) * 2018-06-20 2019-12-27 北京京东尚科信息技术有限公司 Method and device for generating global thermodynamic diagram
CN109118573A (en) * 2018-07-02 2019-01-01 景致三维(江苏)股份有限公司 Image modeling method in fill-in light matrix array three-dimensional reconstruction system
CN110838164B (en) * 2018-08-31 2023-03-24 金钱猫科技股份有限公司 Monocular image three-dimensional reconstruction method, system and device based on object point depth
CN110827392A (en) * 2018-08-31 2020-02-21 金钱猫科技股份有限公司 Monocular image three-dimensional reconstruction method, system and device with good scene usability
CN110838164A (en) * 2018-08-31 2020-02-25 金钱猫科技股份有限公司 Monocular image three-dimensional reconstruction method, system and device based on object point depth
CN110827392B (en) * 2018-08-31 2023-03-24 金钱猫科技股份有限公司 Monocular image three-dimensional reconstruction method, system and device
CN111045000A (en) * 2018-10-11 2020-04-21 阿里巴巴集团控股有限公司 Monitoring system and method
CN110379013A (en) * 2019-06-17 2019-10-25 杭州电子科技大学 A kind of three-dimensional reconfiguration system based on multi-angle laser line scanning
CN110379013B (en) * 2019-06-17 2023-04-07 杭州电子科技大学 Three-dimensional reconstruction system based on multi-angle laser line scanning
CN110288713B (en) * 2019-07-03 2022-12-23 北京机械设备研究所 Rapid three-dimensional model reconstruction method and system based on multi-view vision
CN110288713A (en) * 2019-07-03 2019-09-27 北京机械设备研究所 A kind of quick three-dimensional model reconstruction method and system based on multi-vision visual
CN111260781A (en) * 2020-01-15 2020-06-09 北京云迹科技有限公司 Method and device for generating image information and electronic equipment
CN111260781B (en) * 2020-01-15 2024-04-19 北京云迹科技股份有限公司 Method and device for generating image information and electronic equipment
CN111429570A (en) * 2020-04-14 2020-07-17 深圳市亿道信息股份有限公司 Method and system for realizing modeling function based on 3D camera scanning
CN111429570B (en) * 2020-04-14 2023-04-18 深圳市亿道信息股份有限公司 Method and system for realizing modeling function based on 3D camera scanning
CN111612692A (en) * 2020-04-24 2020-09-01 西安理工大学 Cell image reconstruction method based on double-linear-array scanning imaging system
CN113870338A (en) * 2020-06-30 2021-12-31 北京瓦特曼科技有限公司 Three-dimensional reconstruction-based zinc bath slagging-off method
CN112330721B (en) * 2020-11-11 2023-02-17 北京市商汤科技开发有限公司 Three-dimensional coordinate recovery method and device, electronic equipment and storage medium
CN112330721A (en) * 2020-11-11 2021-02-05 北京市商汤科技开发有限公司 Three-dimensional coordinate recovery method and device, electronic equipment and storage medium
CN112631431A (en) * 2021-01-04 2021-04-09 杭州光粒科技有限公司 AR (augmented reality) glasses pose determination method, device and equipment and storage medium
CN112631431B (en) * 2021-01-04 2023-06-16 杭州光粒科技有限公司 Method, device and equipment for determining pose of AR (augmented reality) glasses and storage medium
CN112815868A (en) * 2021-01-05 2021-05-18 长安大学 Three-dimensional detection method for pavement
CN114155349A (en) * 2021-12-14 2022-03-08 杭州联吉技术有限公司 Three-dimensional mapping method, three-dimensional mapping device and robot
CN114155349B (en) * 2021-12-14 2024-03-22 杭州联吉技术有限公司 Three-dimensional image construction method, three-dimensional image construction device and robot

Also Published As

Publication number Publication date
CN104424630A (en) 2015-03-18

Similar Documents

Publication Publication Date Title
WO2015024361A1 (en) Three-dimensional reconstruction method and device, and mobile terminal
CN109000582B (en) Scanning method and system of tracking type three-dimensional scanning device, storage medium and equipment
EP3650807B1 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photography measurement and three-dimensional scanning functions
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN110462686B (en) Apparatus and method for obtaining depth information from a scene
CN102278946B (en) Imaging device, distance measuring method
WO2016037486A1 (en) Three-dimensional imaging method and system for human body
JP6708199B2 (en) Imaging device, image processing device, and image processing method
KR101827046B1 (en) Mobile device configured to compute 3d models based on motion sensor data
CN102679959B (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN105115560B (en) A kind of non-contact measurement method of cabin volume of compartment
EP2959315A2 (en) Generation of 3d models of an environment
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
JP2014013146A5 (en)
CN110419208B (en) Imaging system, imaging control method, image processing apparatus, and computer readable medium
WO2018098772A1 (en) Method and apparatus for determining viewpoint, electronic device, and computer program product
CN111854636B (en) Multi-camera array three-dimensional detection system and method
CN111445529B (en) Calibration equipment and method based on multi-laser ranging
CN110490943B (en) Rapid and accurate calibration method and system of 4D holographic capture system and storage medium
Núnez et al. Data Fusion Calibration for a 3D Laser Range Finder and a Camera using Inertial Data.
WO2019087253A1 (en) Stereo camera calibration method
KR101578891B1 (en) Apparatus and Method Matching Dimension of One Image Up with Dimension of the Other Image Using Pattern Recognition
JP2019118090A (en) Imaging apparatus and control method of imaging apparatus
TWM594322U (en) Camera configuration system with omnidirectional stereo vision
CN114993207B (en) Three-dimensional reconstruction method based on binocular measurement system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14837580; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 14837580; Country of ref document: EP; Kind code of ref document: A1)