WO2020024144A1 - Three-dimensional imaging method and device, and terminal device - Google Patents

Three-dimensional imaging method and device, and terminal device

Info

Publication number
WO2020024144A1
WO2020024144A1 (application PCT/CN2018/098013, CN2018098013W)
Authority
WO
WIPO (PCT)
Prior art keywords: dimensional, channel, point, cloud, point cloud
Prior art date
Application number
PCT/CN2018/098013
Other languages
English (en)
French (fr)
Inventor
吕键
Original Assignee
广东朗呈医疗器械科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东朗呈医疗器械科技有限公司 filed Critical 广东朗呈医疗器械科技有限公司
Priority to PCT/CN2018/098013 priority Critical patent/WO2020024144A1/zh
Publication of WO2020024144A1 publication Critical patent/WO2020024144A1/zh

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the present application belongs to the field of three-dimensional imaging technology, and particularly relates to a three-dimensional imaging method, device, and terminal device.
  • In three-dimensional imaging, when the camera's field of view is smaller than the object, or multiple viewing angles are needed to capture the object completely, a mobile camera, such as a handheld scanning device, is typically used to obtain a complete three-dimensional image of the object.
  • In 3D scanning imaging, in addition to performing 3D point cloud reconstruction at a single moment, it is also necessary to accurately measure the relative positions of the camera at different times, so that multiple groups of 3D point clouds can be superimposed to finally reconstruct the complete 3D shape of the object.
  • Taking an intraoral scanner for oral orthodontics and prosthodontics as an example: because the imaged objects, such as teeth, have a large depth, the depth of field of the imaging system needs to exceed 10 mm, and image blur caused by hand shake or movement of the imaged object is more pronounced. Moreover, the surface of a human tooth is translucent; when the handheld intraoral scanner is extended into the oral cavity to illuminate a tooth, some light passes through the tooth surface, affecting the system's ability to distinguish tooth contours in the captured images and the accuracy of subsequent 3D data reconstruction.
  • embodiments of the present application provide a three-dimensional imaging method and system to solve the problem that the existing three-dimensional imaging systems and methods have insufficient accuracy for three-dimensional data reconstruction.
  • a first aspect of the present application provides a three-dimensional imaging method, including:
  • acquiring two-dimensional images, each frame captured at a shooting moment while an imaging unit scans a target object; dividing the two-dimensional image into left-and-right or top-and-bottom sub-images; and acquiring, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of surface feature points of the target object under multiple channels;
  • determining an initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;
  • warping the current frame, comparing the warped current frame with a matching frame whose imaging-unit position is known to obtain an accurate imaging-unit position, and transforming the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel, forming a three-dimensional point cloud under each channel of the current frame;
  • selecting one frame of each channel as a spatial root node; with the root node as the center, searching the other frames in the space of the same channel in order from near to far and comparing their feature points with the root node; if another frame does not have enough feature points matching the root node, marking it as a child node of the root node; if another frame has enough feature points matching the root node, marking it as an adjacent frame of the root node, then taking the adjacent frame as the center and searching the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;
  • warping the accurate imaging-unit position of each adjacent frame and cross-correlating it with the child nodes and/or root node of that adjacent frame to obtain a post-processing position; comparing and averaging the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merging the point clouds centered on the child nodes, then comparing and averaging them with the root node, to merge them into a complete three-dimensional point cloud of the channel;
  • forming a three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.
  • a second aspect of the present application provides a three-dimensional imaging device, including:
  • an acquisition module, configured to acquire a frame of two-dimensional image captured at each shooting moment while the imaging unit scans the target object, divide the two-dimensional image into left-and-right or top-and-bottom sub-images, and acquire, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of surface feature points of the target object under multiple channels;
  • a determining module, configured to determine an initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;
  • a forming module, configured to warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain an accurate imaging-unit position, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel, forming a three-dimensional point cloud under each channel of the current frame;
  • a labeling module, configured to select one frame of each channel as a spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far and compare their feature points with the root node; if another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;
  • a merging module, configured to warp the accurate imaging-unit position of each adjacent frame and cross-correlate it with the child nodes and/or root node of that adjacent frame to obtain a post-processing position; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merge the point clouds centered on the child nodes, then compare and average them with the root node, to merge them into a complete three-dimensional point cloud of the channel;
  • a modeling module, configured to form a three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.
  • a third aspect of the present application provides a terminal device including a memory and a processor, the memory storing a computer program executable on the processor; when the processor executes the computer program, the three-dimensional imaging method described in the first aspect is implemented.
  • a fourth aspect of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the three-dimensional imaging method described in the first aspect is implemented.
  • the embodiment of the present application uses the multi-channel feature information of the target object to obtain the three-dimensional point cloud data of the target object. Since the acquired original data is more comprehensive, the obtained three-dimensional model is more accurate, and the accuracy of the three-dimensional data reconstruction is improved.
  • FIG. 1 is a method flowchart of a three-dimensional imaging method according to an embodiment of the present application
  • FIG. 2 is a flowchart of the implementation of step 12 in a three-dimensional imaging method according to an embodiment of the present application;
  • FIG. 3 is a method flowchart of another three-dimensional imaging method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a three-dimensional imaging device according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • an embodiment of the present application provides a three-dimensional imaging method.
  • the three-dimensional imaging method is applicable to situations in which the three-dimensional shape and position data of a target object are acquired from color images to achieve three-dimensional imaging.
  • the three-dimensional imaging method is executed by a three-dimensional imaging device.
  • the three-dimensional imaging device is usually configured in a terminal device and may be implemented in software and/or hardware. Terminal devices include handheld scanning devices, personal computers, or other terminals capable of computation.
  • the three-dimensional imaging method includes two parts: real-time processing and post-processing.
  • the real-time processing includes steps 11 to 13, and the post-processing includes steps 14 to 16.
  • the specific implementation principle of each step is as follows.
  • Step 11: Acquire a two-dimensional image, one frame captured at each shooting moment while the imaging unit scans the target object; divide the two-dimensional image into left-and-right or top-and-bottom images; and acquire, from the two images of the two-dimensional image, the three-dimensional coordinates of the surface feature points of the target object under multiple channels.
  • An imaging unit, such as a camera, captures one frame of two-dimensional image at each shooting moment of the three-dimensional scan of the target object, such as t0, t1, t2, ..., tn, and each frame is divided into left-and-right or top-and-bottom images. Both images contain the distribution information of multiple channels of feature points on the surface of the target object. The two images of the same channel are processed, for example by cross-correlation or least-squares comparison of the left/right or top/bottom images, to obtain a dense three-dimensional point cloud of the target object under each channel at each shooting moment.
  • The dense three-dimensional point cloud does not necessarily consist of physically existing three-dimensional points; rather, it contains the three-dimensional coordinates of surface feature points obtained by comparing the patterns in the two-dimensional images.
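For illustration only, here is a minimal sketch of how comparing the two sub-images of one channel could yield dense 3D coordinates via normalized cross-correlation block matching; the window size, search range, focal length f, and baseline are assumed calibration values, not figures from the patent.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def dense_points(left, right, f=800.0, baseline=5.0, win=5, max_disp=32):
    """Block-matching disparity via NCC, then triangulation z = f*b/d.
    left/right: grayscale float arrays of one channel."""
    h, w = left.shape
    r = win // 2
    pts = []
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            scores = [ncc(patch, right[y - r:y + r + 1, x - d - r:x - d + r + 1])
                      for d in range(1, max_disp)]
            d = 1 + int(np.argmax(scores))      # best-matching disparity
            z = f * baseline / d                # triangulated depth
            pts.append((z * (x - w / 2) / f, z * (y - h / 2) / f, z))
    return np.array(pts)
```

Running this per channel would give one dense cloud per channel at each shooting moment, matching the per-channel treatment described above.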
  • Feature algorithms such as the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), or others are used to give a rough feature description of the feature points in the image; the distance and angle information of the specific three-dimensional point cloud near each feature point is then added to the feature description as extra features, yielding more refined feature points.
  • Three-dimensional data reconstruction, such as cross-correlation or least-squares processing, is performed on the refined feature points to obtain the three-dimensional coordinates of each feature point on the surface of the target object under the different channels.
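The sketch below shows one plausible reading of this refinement step: OpenCV's SIFT provides the rough description, and the mean distance and elevation angle of nearby cloud points are appended as extra features. The neighborhood size and the dict-based cloud layout are our assumptions, not the patent's exact encoding.

```python
import cv2
import numpy as np

def augmented_features(gray, cloud_xyz):
    """Detect SIFT keypoints, then append the mean distance and mean elevation
    angle of nearby 3D cloud points to each descriptor as extra features.
    gray: uint8 image; cloud_xyz: dict mapping (x, y) pixel -> 3D point."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return []
    out = []
    for kp, d in zip(kps, desc):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        near = [p for (u, v), p in cloud_xyz.items()
                if abs(u - x) < 8 and abs(v - y) < 8]   # assumed 8-px window
        if near:
            near = np.array(near)
            dist = np.linalg.norm(near, axis=1).mean()  # mean point distance
            ang = np.arctan2(near[:, 2],
                             np.hypot(near[:, 0], near[:, 1])).mean()  # mean angle
            d = np.concatenate([d, [dist, ang]])        # enriched descriptor
        out.append((kp.pt, d))
    return out
```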
  • the imaging unit obtains multi-channel two-dimensional images or video of the surface feature points of the target object after single-lens multi-view imaging or multi-lens stereo imaging.
  • For example, a Bayer pattern or RGB mode with the three single colors red, green, and blue may be used; or the four single colors of CMYK mode (C: cyan; M: magenta; Y: yellow; K: black); or specially customized color or polarization modes.
  • Correlations and differences between multiple independent channels are also acquired, such as between red and blue, red and green, green and blue, other specially customized colors, or different polarizations.
  • The correlations and differences between the independent channels of the foregoing embodiments are used as new channels.
  • The feature information of all the channels is used to obtain the 3D point cloud data of the object surface, to perform 3D color imaging of the object, and to build its 3D model.
  • The channels may include multiple independent channels, such as the three single-color channels red, green, and blue, or the four single-color channels cyan, magenta, yellow, and black; they may also include, in addition to multiple independent channels, new channels formed by comparisons between the independent channels.
  • the handheld three-dimensional imaging system shown in FIG. 1 of utility model patent CN203606569U (granted May 21, 2014) is used to acquire the two images, i.e., an optical system with monocular stereo imaging is adopted.
  • Its optical structure is a single camera lens; an aperture plate with two or more light-passing holes is placed in front of or behind the lens, and the size of the holes is chosen according to the required depth of field.
  • The image of the object enters the lens through a right-angle prism or mirror, is then split into two images by the aperture plate, and forms two two-dimensional images on the imaging element through the oblique prisms behind the holes.
  • These two two-dimensional images may be captured by two separate imaging chips, or captured by a single imaging chip and then split into two parts.
  • The two-dimensional images may be uploaded to a terminal device with computing capability, such as a personal computer or server, for the subsequent processing steps, or the subsequent processing may be performed directly in an imaging unit with computing capability, such as a three-dimensional scanning imager.
  • The present application places no restriction on where the processing steps are performed.
  • Illumination uses white light or a combination of light of multiple colors to image feature points with contrast on the surface of the target object.
  • The feature points may be points where the texture or color of the object itself changes strongly, or micro-particles in contrasting colors, such as black and white, may be sprayed onto the object surface to form strongly contrasting features.
  • The feature points may be physically existing points, such as the object's own texture or sprayed micro-particles, or patterns formed by physical features. These features may be strongly reflective and almost non-reflective, respectively, and this obvious difference serves as the reference for obtaining three-dimensional data by optical methods.
  • the imaging element is a color imaging element.
  • One pixel on the element contains photosensitive elements of different colors and can capture video or images of multiple colors; for example, red, green, and blue photosensitive elements capture video or images of three colors.
  • When white light or multi-color light illumination is used, the feature points of different colors in the light pass through optical components such as the lens; because light of different colors has different wavelengths, after passing through the same optical component the imaging positions on the optical axis differ, and the image sizes (magnifications) differ slightly.
  • A color imaging element, such as a CMOS chip, captures images of the surface feature points in multiple colors, giving images or videos of three (or more) single-color sets of feature points on the object surface. Note that although the images of the feature points of different colors differ slightly, the images captured by the system all reflect the same target object.
  • the entire system also collects the correlation and difference between different channels of the feature points, such as red and blue, red and green, and green and blue, as new feature information, and as new channels.
  • The feature information of the multiple channels is used to obtain three-dimensional point cloud data of the object surface, perform three-dimensional color imaging of the object, and build its three-dimensional model. In this embodiment, because data of more channels are collected, the acquired raw data are more comprehensive and the resulting 3D model is more accurate.
  • the two-dimensional images and feature points obtained by each channel are slightly different.
  • the chromatic aberration of the optical system can be removed by calibration, and the difference of the characteristic points of different channels obtained after calibration reflects the characteristics of the object surface.
  • Using the differences and correlations between the feature points of different channels, such as between the red-green, red-blue, and green-blue color channels, at most three additional "relationship" channels of data can be obtained; these data are based on differences or combinations of intensity, position, etc., between the channel feature points, and these differences and correlations serve as new feature recognition points.
  • The newly established feature recognition points all reflect the combined characteristics of the object's surface texture and appearance. The present application therefore effectively collects information of six channels of the object surface (the three original color channels plus the three newly added relationship channels), greatly increasing the information relative to the single monochrome channel collected by a black-and-white imaging system. Because the acquired raw data are six times those of a monochrome channel, the computed point cloud and coordinate data are more accurate and comprehensive.
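To make the six-channel idea concrete, here is a minimal sketch that treats each "relationship" channel as a normalized per-pixel difference of two color channels; the patent leaves the exact difference or combination open, so this particular encoding is an assumption.

```python
import numpy as np

def six_channels(rgb):
    """Split an RGB image (H x W x 3, float in [0, 1]) into three monochrome
    channels plus three inter-channel 'relationship' channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = (r - g + 1.0) / 2.0   # red-green relationship, rescaled to [0, 1]
    rb = (r - b + 1.0) / 2.0   # red-blue relationship
    gb = (g - b + 1.0) / 2.0   # green-blue relationship
    return np.stack([r, g, b, rg, rb, gb], axis=-1)
```

Each of the six planes can then be fed through the same per-channel reconstruction pipeline described above.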
  • By contrast, when an ordinary single-lens, monochrome-channel image is converted to coordinates, the signals obtained by several photosensitive elements are simply averaged into one two-dimensional image, and the two images of this two-dimensional image are then processed to obtain the dense three-dimensional point cloud and feature point coordinates of the object at that moment.
  • The raw data thus obtained cannot remove errors caused by factors such as chromatic aberration in the optical system, and the accuracy depends only on the raw data acquired by a single channel or a single color.
  • Step 12: Determine the initial position of the imaging unit according to the two-dimensional image of the current frame and a matching frame that matches enough feature points of the current frame.
  • This step is a step of restoring the position of the imaging unit and obtaining a rough position of the imaging unit. After the three-dimensional imaging unit finishes processing the two-dimensional image of the first frame, the processing of the two-dimensional images of the second frame and subsequent frames will increase the operation of restoring the position of the imaging unit.
  • step 12 includes steps 121 to 123.
  • Step 121 Obtain the feature points of the two-dimensional image of the current frame, and determine whether the number of feature points is sufficient and whether the contained feature information is abundant.
  • If the number of feature points is sufficient and the contained feature information is rich, step 122 is performed; if not, step 123 is performed.
  • Step 122: If so, compare the feature points in the current frame with those of the most recent frame whose imaging-unit position is known. If the current frame and the most recent frame have enough matching feature points, calculate for each channel the relative position of the current frame and the most recent frame to obtain the rough position of the imaging unit. If the current frame does not have enough feature points matching the most recent frame, take the position of the current frame as the center and search, from near to far in spatial position, for a matching frame whose feature points match the current frame. If, within the preset search range, a matching frame with a known imaging-unit position is found whose two-dimensional image has enough feature points matching the current frame, calculate for each channel the relative position of the current frame and the matching frame to obtain the rough position of the imaging unit. If no matching frame with enough matching feature points is found within the preset search range, the imaging unit of the current frame has moved far from the scanned area; discard the current frame and capture a new two-dimensional image.
  • Step 123 If not, discard the current frame and acquire a new two-dimensional image.
  • The system judges whether the number of feature points is sufficient and whether the contained feature information is rich. If so, the current frame is compared with the previous frame. If the current frame has no matching information with the previous frame, the system takes the position of the current frame as the center and searches, from near to far in spatial position, for a frame that matches the feature information of this frame.
  • The multi-channel acquisition used in this application greatly improves the efficiency and accuracy of inter-frame comparison, making it easier to find two matching frames than comparison using a single channel.
  • For example, the feature points of three channels (red, blue, and green) are matched and compared simultaneously; or the feature points of six channels (red, blue, green, red-green, red-blue, and green-blue) are matched and compared simultaneously.
  • The relative positions of the two matched frames are computed independently for each channel to obtain the rough position of the imaging unit: for example, the rough imaging-unit positions of the three channels red, blue, and green are recorded; or those of six channels such as red, green, blue, red-green, red-blue, and green-blue. At this point, the imaging-unit position of the current frame has been restored, establishing the rough position of each monochrome channel. If, however, the search range for frames matching the current frame exceeds the set value and enough feature point matches still cannot be obtained, the camera position of the current frame is far from the scanned area, so the current frame is discarded and a new frame is acquired.
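One standard way the per-channel relative position of two matched frames could be recovered is from the matched 2D feature points via the essential matrix; the sketch below uses OpenCV with an assumed camera intrinsic matrix K and is only illustrative of this rough-position step, not the patent's prescribed algorithm.

```python
import cv2
import numpy as np

def rough_relative_pose(pts_cur, pts_ref, K):
    """Estimate rotation R and (unit-scale) translation t between the current
    frame and a matched reference frame from matched 2D feature coordinates.
    pts_cur, pts_ref: N x 2 float arrays of corresponding points."""
    E, mask = cv2.findEssentialMat(pts_cur, pts_ref, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_cur, pts_ref, K, mask=mask)
    return R, t

# One rough pose could be recorded per channel, e.g.:
# poses = {ch: rough_relative_pose(m[ch][0], m[ch][1], K) for ch in ("R", "G", "B")}
```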
  • The object-side focal lengths of light of different colors differ, so their effective working regions differ in depth but overlap.
  • The multi-channel approach not only facilitates feature point matching during two-dimensional image processing, but also increases the chance of finding matching points along the longitudinal imaging-depth direction; that is, changes of distance in the longitudinal direction (nearness to the object) are less likely to produce mismatches.
  • Blue light has the shortest image-side focal length but the largest depth of field, green light comes next, and red light has the longest image-side focal length but the smallest depth of field. From the defocus condition of each color in the image, i.e., the degree of blur, the distance of the photographed object from the focal plane can be obtained and fed back to the operator as a reminder to maintain an appropriate working distance. Exploiting the larger depth of field of blue light, the matching of the blue image frames is compared first, which effectively improves the matching success rate.
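A minimal sketch of the distance-feedback idea: a per-channel sharpness measure (variance of the Laplacian, our choice, not the patent's) indicates which color plane is nearest to focus, and hence whether the operator should move closer or farther.

```python
import cv2

def channel_sharpness(bgr):
    """Variance of the Laplacian per color channel; the sharpest channel
    hints at where the object sits relative to the per-color focal planes."""
    return {name: cv2.Laplacian(bgr[..., i], cv2.CV_64F).var()
            for i, name in enumerate(("blue", "green", "red"))}

# If red is much sharper than blue, the object is near the red focal plane;
# the operator can be prompted to adjust the working distance accordingly.
```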
  • the multi-channel acquisition method can effectively solve the problem that the handheld device cannot match the previous and subsequent frames due to human factors such as jitter during operation.
  • Step 13: Warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain the accurate imaging-unit position of the current frame, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel, forming the three-dimensional point cloud under each channel of the current frame.
  • Step 13 is a step of refining the camera position and establishing a three-dimensional point cloud. After obtaining a rough position, the current frame is made more similar to a matching frame of a known imaging unit position by an image warping method.
  • In one embodiment, the warping process uses the three monochrome channels, such as red, green, and blue, to warp the image.
  • In other embodiments, the warping process also uses the newly added feature channels, such as the correlations between red and green, green and blue, and red and blue, i.e., the correlations between the monochrome channels, to assist the image warping and make it more accurate.
  • A cross-correlation comparison is then performed between the warped current frame and a matching frame with a known imaging-unit position to obtain a more accurate camera position, i.e., the real-time position of each channel of the current frame, and the real-time position of each channel is recorded.
  • For example, the real-time positions of the three monochrome channels red, green, and blue are recorded; or the real-time positions of the six channels red, green, blue, red-green, red-blue, and green-blue.
  • Using the obtained real-time positions, the three-dimensional coordinates of each channel of the current frame are transformed into the overall coordinate system of the respective channel.
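In the usual formulation, this per-channel transform is a rigid motion by the recovered frame pose; a minimal sketch, assuming the pose is given as (R, t) and points are stored as rows:

```python
import numpy as np

def to_global(points, R, t):
    """Map N x 3 camera-frame points into the channel's overall coordinate
    system, given the frame's recovered rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t
```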
  • When the real-time scan ends, the reconstruction of the object's 3D model is preliminarily complete, and the point cloud reconstruction of each channel in the "post-processing part" follows.
  • During real-time processing, some frames are superimposed based on the temporal adjacency of different frames and others based on the principle of spatial adjacency.
  • The temporal-adjacency method tends to accumulate errors over a long scan, causing large deviations in the reconstruction.
  • At that point, all collected data are post-processed to obtain more accurate 3D data.
  • Since the model obtained from real-time processing already connects the frames of all moments into a spatial whole, post-processing can use spatial continuity to rearrange and compare frames of different moments according to their spatial distance, thereby reducing the error accumulation caused by comparing frames adjacent or close in time, and further improving imaging accuracy.
  • Step 14: Select one frame of each channel as the spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far, and compare their feature points with the root node. If another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes.
  • Step 15: Warp the accurate imaging-unit positions of the adjacent frames and cross-correlate them with the child nodes and/or root node of each adjacent frame to obtain post-processing positions; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merge the point clouds centered on the child nodes, then compare and average them with the root node, merging them into the complete three-dimensional point cloud of the channel.
  • A frame of one of the channels is first selected at random as the spatial "root node"; it may be the first frame, the last frame, or a frame at the spatial center.
  • With the root node as the center, the feature point comparison between each frame in the space of the same channel's data and the root node is searched progressively from near to far. If a frame has enough feature points matching the root node that it can be warped and cross-correlated more precisely with the root node, the frame is considered an "adjacent frame" of the root node; once marked as an adjacent frame, it is not searched again.
  • If a frame matches some feature points with the root node but not enough for further cross-correlation comparison, it is considered a "child node" of the root node. Then, with each adjacent frame as the center, the adjacent frames and child nodes of each adjacent frame are likewise searched progressively from near to far, until all frames in the space have been searched and marked as adjacent frames or child nodes. After all frames collected by the channel have been searched and labeled, the real-time position of each adjacent frame is cross-correlated with its nodes, including child nodes and/or the root node, to obtain a more accurate "post-processing position".
  • The three-dimensional point cloud of each adjacent frame is also compared and averaged with its child nodes using the post-processing position, merging into a more accurate, larger three-dimensional point cloud centered on the child nodes. The point cloud obtained by merging the point clouds centered on the child nodes is then compared and averaged with the root node, finally merging into a more accurate, complete three-dimensional point cloud of the channel. The same processing is applied to the frame data of all collected channels to obtain the accurate three-dimensional point cloud of each channel: for example, three accurate point clouds for three channels (the three monochrome channels), or six accurate point clouds for six channels (the three monochrome channels plus the three inter-channel relationship channels).
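A compact sketch of the labeling search described above, under stated assumptions: each frame carries a position, count_matches counts matched feature points between two frames, a single threshold separates adjacent frames from child nodes, each center searches only within a radius (the patent fixes no radius), and each frame is labeled by the first center that examines it.

```python
import numpy as np

def label_frames(frames, root, count_matches, enough=50, radius=30.0):
    """Label each frame of one channel as an 'adjacent frame' or a 'child
    node', searching near-to-far from the root and then from each newly
    found adjacent frame. frames: objects with a .pos (3,) ndarray."""
    pool = [f for f in frames if f is not root]     # not yet labeled
    adjacent, children, centers = {}, {}, [root]
    while centers and pool:
        c = centers.pop(0)
        near = sorted((f for f in pool
                       if np.linalg.norm(f.pos - c.pos) <= radius),
                      key=lambda f: np.linalg.norm(f.pos - c.pos))
        for f in near:
            if count_matches(c, f) >= enough:
                adjacent.setdefault(c, []).append(f)  # enough matches: adjacent
                centers.append(f)                     # becomes a new center
            else:
                children.setdefault(c, []).append(f)  # too few: child node
        pool = [f for f in pool if f not in near]
    return adjacent, children
```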
  • Step 16 Use the accurate three-dimensional point cloud of each channel to form a three-dimensional model of the target object.
  • Step 16 is a step of reconstructing multi-channel three-dimensional data.
  • In one embodiment, step 16 includes: according to the accurate three-dimensional point cloud of each channel, averaging the point-cloud spatial coordinates of the same surface feature point of the target object across the channels to obtain the three-dimensional coordinates of each surface feature point, thereby forming the three-dimensional model of the target object.
  • In another embodiment, step 16 includes: according to the accurate three-dimensional point cloud of each channel, if a feature point of the target object has point-cloud spatial coordinates in every channel simultaneously, determining the feature point to be a trunk feature cloud point, and averaging the point-cloud spatial coordinates of the trunk feature cloud point across the channels to obtain its trunk point-cloud coordinates.
  • If a feature point of the target object does not have point-cloud spatial coordinates in every channel, it is determined to be a branch feature cloud point; the relative position of the branch feature cloud point and an adjacent trunk feature cloud point is obtained to give the branch point-cloud coordinates; and combining the trunk point-cloud coordinates and the branch point-cloud coordinates gives the three-dimensional coordinates of each surface feature point, thereby forming the three-dimensional model of the target object.
  • For example, the nine surface feature points 11 to 33 of the target object have point-cloud spatial coordinates in all three channels simultaneously, so these nine feature points are determined to be trunk feature cloud points, and the point-cloud spatial coordinates of the red, green, and blue channels (the R, G, and B channels) of the nine trunk feature cloud points are averaged to obtain the nine trunk point-cloud coordinates K11 to K33.
  • G31, B31, and R31 are the point-cloud spatial coordinates of the same feature point of the target object under the green, blue, and red channels, respectively, so the trunk point-cloud coordinate of trunk feature cloud point 31 is obtained by averaging the three point-cloud spatial coordinates G31, B31, and R31.
  • For a feature point, such as point 41, that exists only in the red channel, the adjacent trunk feature cloud point is 31: the relative-position vectors from R41 to R31, B31, and G31 are averaged, and the result is combined with the trunk point-cloud coordinate of point 31 to obtain the branch point-cloud coordinate of point 41.
  • Compared with directly averaging the point-cloud spatial coordinates of the feature points to obtain their three-dimensional coordinates, the three-dimensional coordinate point-cloud matrix established this way contains more coordinate information of the feature point cloud and is more accurate.
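A minimal sketch of this trunk/branch merging rule: features observed in every channel are averaged directly into trunk coordinates, while features observed in only some channels are placed by the averaged relative vectors to an adjacent trunk point. The data layout and the given trunk_neighbor mapping are our assumptions.

```python
import numpy as np

def merge_channels(per_channel, trunk_neighbor):
    """per_channel: dict feature_id -> {channel: xyz ndarray}. A feature with
    all channels present is a trunk point (coordinates averaged); otherwise
    it is a branch point placed relative to an adjacent trunk point.
    trunk_neighbor: dict branch_id -> trunk_id (adjacency assumed given)."""
    n_channels = max(len(obs) for obs in per_channel.values())
    trunk = {fid: np.mean(list(obs.values()), axis=0)
             for fid, obs in per_channel.items() if len(obs) == n_channels}
    merged = dict(trunk)
    for fid, obs in per_channel.items():
        if fid in trunk:
            continue
        tid = trunk_neighbor[fid]
        # average the relative vectors from every trunk-channel coordinate
        # to each observed branch-channel coordinate (cf. (b + c + d) / 3)
        vecs = [p - q for p in obs.values() for q in per_channel[tid].values()]
        merged[fid] = trunk[tid] + np.mean(vecs, axis=0)
    return merged
```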
  • the method further includes: combining color information of each feature point of the target object surface extracted from the two-dimensional image to generate a color three-dimensional model of the target object.
  • After the three-dimensional model of the target object is formed from the accurate three-dimensional point cloud of each channel, the complete spatial coordinates of the target object have been obtained. On this basis, combined with the color information extracted from the color two-dimensional images, the color data of the target object are obtained at the same time as its complete spatial coordinates are reproduced, giving a color three-dimensional model of the target object. With the complete spatial coordinates and color data of the reproduced object, the color three-dimensional model can be displayed on a screen.
  • FIG. 4 shows a schematic diagram of a three-dimensional imaging device according to an embodiment of the present application.
  • the three-dimensional imaging device executes any of the foregoing three-dimensional imaging methods, and is usually configured on a terminal device.
  • the three-dimensional imaging device includes the following modules.
  • An acquisition module 41, configured to acquire a frame of two-dimensional image captured at each shooting moment while the imaging unit scans the target object, divide the two-dimensional image into left-and-right or top-and-bottom sub-images, and acquire, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of the surface feature points of the target object under multiple channels;
  • A determining module 42, configured to determine the initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;
  • A forming module 43, configured to warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain the accurate imaging-unit position, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel, forming the three-dimensional point cloud under each channel of the current frame;
  • A labeling module 44, configured to select one frame of each channel as the spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far, and compare their feature points with the root node; if another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;
  • A merging module 45, configured to warp the accurate imaging-unit position of each adjacent frame and cross-correlate it with the child nodes and/or root node of that adjacent frame to obtain the post-processing position; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merge the point clouds centered on the child nodes, then compare and average them with the root node, merging them into the complete three-dimensional point cloud of the channel;
  • A modeling module 46, configured to form the three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.
  • The modeling module 46 is specifically configured to: according to the accurate three-dimensional point cloud of each channel, if a feature point of the target object has point-cloud spatial coordinates in every channel simultaneously, determine the feature point to be a trunk feature cloud point, and average the point-cloud spatial coordinates of the trunk feature cloud point across the channels to obtain its trunk point-cloud coordinates; if a feature point does not have point-cloud spatial coordinates in every channel simultaneously, determine it to be a branch feature cloud point, and obtain the relative position of the branch feature cloud point and an adjacent trunk feature cloud point to give the branch point-cloud coordinates; and combine the trunk point-cloud coordinates and the branch point-cloud coordinates to obtain the three-dimensional coordinates of each surface feature point of the target object, thereby forming the three-dimensional model of the target object.
  • FIG. 5 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • The terminal device 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50, such as a three-dimensional imaging program.
  • When the processor 50 executes the computer program 52, the steps in the embodiments of the three-dimensional imaging method are implemented, for example steps 11 to 16 shown in FIG. 1; alternatively, the functions of the modules/units in the embodiments of the three-dimensional imaging device described above are implemented, for example the functions of modules 41 to 46 shown in FIG. 4.
  • the computer program 52 may be divided into one or more modules / units, and the one or more modules / units are stored in the memory 51 and executed by the processor 50 to complete This application.
  • the one or more modules / units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 52 in the terminal device 5.
  • the terminal device 5 may be a computing device such as a handheld three-dimensional imager, a scanning three-dimensional imager, a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 50 and a memory 51.
  • FIG. 5 is only an example of the terminal device 5, and does not constitute a limitation on the terminal device 5. It may include more or fewer components than shown in the figure, or combine some components or different components.
  • the terminal device may further include an input / output device, a network access device, a bus, and the like.
  • The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5.
  • The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 5. Further, the memory 51 may include both an internal storage unit of the terminal device 5 and an external storage device.
  • the memory 51 is configured to store the computer program and other programs and data required by the terminal device.
  • the memory 51 may also be used to temporarily store data that has been output or is to be output.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • When the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program.
  • The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the foregoing method embodiments.

Abstract

The present application is applicable to the field of three-dimensional imaging technology, and specifically relates to a three-dimensional imaging method and device, and a terminal device. The three-dimensional imaging method comprises a real-time processing part and a post-processing part. The real-time processing part comprises: acquiring two-dimensional images captured by an imaging unit during a three-dimensional scan, rapidly processing the two-dimensional images, and computing in real time the positions of the imaging unit and the three-dimensional point clouds under multiple channels. The post-processing part comprises performing secondary processing on the captured two-dimensional images to obtain a complete three-dimensional point cloud under each channel, and reconstructing a three-dimensional model of the target object from the complete three-dimensional point clouds. Embodiments of the present application use multi-channel feature information to acquire the three-dimensional point cloud data of the target object; because the acquired raw data are more comprehensive, the resulting three-dimensional model is more accurate and the precision of three-dimensional data reconstruction is improved.

Description

Three-dimensional imaging method and device, and terminal device

Technical Field

The present application belongs to the field of three-dimensional imaging technology, and specifically relates to a three-dimensional imaging method and device, and a terminal device.

Background

Various existing methods obtain three-dimensional data from two-dimensional data, such as desktop three-dimensional laser scanners based on laser reflection, photographic three-dimensional scanning systems based on fringe projection, and point-by-point three-dimensional scanning systems based on optical coherence tomography.

In the field of three-dimensional imaging, when the field of view of a 3D camera is smaller than the shape of the object, or the camera needs multiple viewing angles to capture the object completely, a mobile camera, such as a handheld scanning device, is usually required to obtain a complete three-dimensional image of the object. In 3D scanning imaging, in addition to performing 3D point cloud reconstruction at a single moment, the relative positions of the camera at different times must also be measured accurately, so that multiple groups of 3D point clouds can be superimposed to finally reconstruct the complete 3D shape of the object. There are two sources of error here: the error in the acquired camera positions, and the error in the acquired 3D point clouds. In current 3D data reconstruction, these two factors stack together, and together with the superposition of the multiple groups of 3D point clouds themselves, the accumulated error often grows nonlinearly, leaving the final 3D image with large accuracy errors and severe imaging distortion.

There are many optical methods for 3D scanning imaging, such as multi-view stereo imaging, single-lens stereo imaging, fringe projection imaging, defocus imaging, dynamic wavefront sampling, and confocal imaging. However, the 3D data reconstruction accuracy of these methods is not necessarily adequate for every practical application. Take an intraoral scanner used for intraoral impression-taking in orthodontics and prosthodontics as an example: when collecting three-dimensional data of teeth inside the oral cavity, the imaged objects, such as teeth, have a large depth, so the depth of field of the imaging system must exceed 10 mm, and image blur caused by hand shake or movement of the imaged object is more pronounced. Moreover, the surface of a human tooth is translucent: when a handheld intraoral scanner is extended into the oral cavity and illuminates a tooth, part of the light passes through the tooth surface, which affects the system's ability to distinguish tooth contours in the captured images and the accuracy of subsequent 3D data reconstruction.

Technical Problem

In view of this, embodiments of the present application provide a three-dimensional imaging method and system to solve the problem that existing three-dimensional imaging systems and methods provide insufficient accuracy in three-dimensional data reconstruction.
Technical Solution

A first aspect of the present application provides a three-dimensional imaging method, comprising:

acquiring two-dimensional images, each frame of which is captured at a shooting moment while an imaging unit scans a target object; dividing the two-dimensional image into left-and-right or top-and-bottom sub-images; and acquiring, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of surface feature points of the target object under multiple channels;

determining an initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;

warping the current frame, comparing the warped current frame with a matching frame whose imaging-unit position is known to obtain an accurate imaging-unit position, and transforming the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel to form a three-dimensional point cloud under each channel of the current frame;

selecting one frame of each channel as a spatial root node; with the root node as the center, searching the other frames in the space of the same channel in order from near to far and comparing their feature points with the root node; if another frame does not have enough feature points matching the root node, marking it as a child node of the root node; if another frame has enough feature points matching the root node, marking it as an adjacent frame of the root node, then taking the adjacent frame as the center and searching the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;

warping the accurate imaging-unit position of each adjacent frame and cross-correlating it with the child nodes and/or root node of that adjacent frame to obtain a post-processing position; comparing and averaging the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merging the three-dimensional point clouds centered on the child nodes, then comparing and averaging them with the root node, to merge them into a complete three-dimensional point cloud of the channel; and

forming a three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.
A second aspect of the present application provides a three-dimensional imaging device, comprising:

an acquisition module, configured to acquire a frame of two-dimensional image captured at each shooting moment while an imaging unit scans a target object, divide the two-dimensional image into left-and-right or top-and-bottom sub-images, and acquire, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of surface feature points of the target object under multiple channels;

a determining module, configured to determine an initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;

a forming module, configured to warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain an accurate imaging-unit position, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel to form a three-dimensional point cloud under each channel of the current frame;

a labeling module, configured to select one frame of each channel as a spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far and compare their feature points with the root node; if another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;

a merging module, configured to warp the accurate imaging-unit position of each adjacent frame and cross-correlate it with the child nodes and/or root node of that adjacent frame to obtain a post-processing position; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merge the three-dimensional point clouds centered on the child nodes, then compare and average them with the root node, to merge them into a complete three-dimensional point cloud of the channel; and

a modeling module, configured to form a three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.

A third aspect of the present application provides a terminal device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the three-dimensional imaging method described in the first aspect.

A fourth aspect of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the three-dimensional imaging method described in the first aspect.
Beneficial Effects

Embodiments of the present application use the multi-channel feature information of the target object to obtain its three-dimensional point cloud data; because the acquired raw data are more comprehensive, the resulting three-dimensional model is more accurate, and the precision of three-dimensional data reconstruction is improved.

Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed to describe the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart of a three-dimensional imaging method according to an embodiment of the present application;

FIG. 2 is a flowchart of the implementation of step 12 in a three-dimensional imaging method according to an embodiment of the present application;

FIG. 3 is a flowchart of another three-dimensional imaging method according to an embodiment of the present application;

FIG. 4 is a schematic diagram of a three-dimensional imaging device according to an embodiment of the present application;

FIG. 5 is a schematic diagram of a terminal device according to an embodiment of the present application.
Embodiments of the Invention

In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.

As shown in FIG. 1, an embodiment of the present application provides a three-dimensional imaging method, applicable to situations in which the three-dimensional shape and position data of a target object are obtained from color images to achieve three-dimensional imaging. The method is executed by a three-dimensional imaging device, which is usually configured in a terminal device and may be implemented in software and/or hardware. Terminal devices include handheld scanning devices, personal computers, or other terminals capable of computation. The three-dimensional imaging method comprises a real-time processing part and a post-processing part, where the real-time processing part comprises steps 11 to 13 and the post-processing part comprises steps 14 to 16; the specific implementation principle of each step is as follows.
Step 11: Acquire a frame of two-dimensional image, captured at each shooting moment while the imaging unit scans the target object; divide the two-dimensional image into left-and-right or top-and-bottom images; and acquire, from the two images of the two-dimensional image, the three-dimensional coordinates of the surface feature points of the target object under multiple channels.

Here, an imaging unit, such as a camera, captures one frame of two-dimensional image at each shooting moment of the three-dimensional scan of the target object, such as t0, t1, t2, ..., tn, and each frame is divided into left-and-right or top-and-bottom images. Both images contain the distribution information of multiple channels of the surface feature points of the target object. The two images of the same channel are processed, for example by cross-correlation or least-squares comparison of the left/right or top/bottom images, to obtain a dense three-dimensional point cloud of the target object under each channel at each shooting moment. Note that the dense three-dimensional point cloud does not necessarily consist of physically existing three-dimensional points; rather, it contains the three-dimensional coordinates of surface feature points obtained by comparing the patterns in the two-dimensional images. Meanwhile, feature algorithms such as the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), or others are used to roughly describe the feature points in the image, and the distance and angle information of the specific three-dimensional point cloud near each feature point is added to the feature description as extra features, yielding more refined feature points. Three-dimensional data reconstruction, such as cross-correlation or least-squares processing, is then performed on the refined feature points to obtain the three-dimensional coordinates of each surface feature point of the target object under the different channels.
As an embodiment of the present application, the imaging unit obtains multi-channel two-dimensional images or video of the surface feature points of the target object after single-lens multi-view imaging or multi-lens stereo imaging, for example: three single colors of red, green, and blue in a Bayer pattern or RGB mode; four single colors of CMYK mode (C: cyan; M: magenta; Y: yellow; K: black); or specially customized color or polarization modes.

As another embodiment of the present application, on the basis of the foregoing embodiment, the correlations and differences between multiple independent channels, such as between red and blue, red and green, green and blue, other specially customized colors, or different polarizations, are acquired as new feature information; that is, the correlations and differences between the independent channels of the foregoing embodiment are used as new channels. The feature information of all these channels is used to obtain the three-dimensional point cloud data of the object surface, to perform three-dimensional color imaging of the object, and to build its three-dimensional model.

The present application does not specifically limit the mode or number of channels. The channels may include multiple independent channels, such as the three single-color channels of red, green, and blue, or the four single-color channels of cyan, magenta, yellow, and black; they may also include, in addition to multiple independent channels, new channels formed by comparisons between the independent channels.

As an embodiment of the present application, the handheld three-dimensional imaging system shown in FIG. 1 of utility model patent CN203606569U (granted May 21, 2014) is used to acquire the two images, i.e., an optical system with monocular stereo imaging. Its optical structure is a single camera lens; an aperture plate with two or more light-passing holes is placed in front of or behind the lens, and the size of the holes is chosen according to the required depth of field. As shown in FIG. 1 of CN203606569U, the image of the object enters the lens through a right-angle prism or mirror, is then split into two images by the aperture plate, and forms two two-dimensional images on the imaging element through the oblique prisms behind the holes. Note that these two two-dimensional images may be captured by two separate imaging chips, or captured by a single imaging chip and then split into two parts. The two-dimensional images may be uploaded to a terminal device with computing capability, such as a personal computer or server, for the subsequent processing steps, or the subsequent processing may be performed directly in an imaging unit with computing capability, such as a three-dimensional scanning imager; the present application places no restriction on this.
Illumination uses white light or a combination of light of multiple colors to image feature points with contrast on the surface of the target object. The feature points may be points where the texture or color of the object itself changes strongly, or micro-particles in contrasting colors, such as black and white, may be sprayed onto the object surface to form strongly contrasting features. Note that the feature points may be physically existing points, such as the object's own texture or sprayed micro-particles, or patterns formed by physical features. These features may be strongly reflective and almost non-reflective, respectively, and this obvious difference serves as the reference for obtaining three-dimensional data by optical methods.

The imaging element is a color imaging element: one pixel on the element contains photosensitive elements of different colors and can capture video or images of multiple colors; for example, photosensitive elements of red, green, and blue capture video or images of three colors. When white light (or multi-color light) illumination is used, as the feature points of different colors in the white light (or multi-color light) pass through optical components such as the lens, the different colors of light have different wavelengths, so after passing through the same optical component their imaging positions on the optical axis differ, and the sizes (magnifications) of their images also differ slightly. The photosensitive elements of different colors on a color imaging element, such as a CMOS chip, separately capture images of the surface feature points in multiple colors, giving three (or more) single-color images or videos of the surface feature points. Note that although the images of the feature points of different colors differ slightly, the images captured by the system all reflect the same target object.

It should be noted that in other embodiments of the present application, the whole system also captures the correlations and differences between different channels of the feature points, such as between red and blue, red and green, and green and blue, as new feature information and as new channels. The feature information of these multiple channels is used to obtain the three-dimensional point cloud data of the object surface, perform three-dimensional color imaging of the object, and build its three-dimensional model. In such embodiments, because data of more channels are collected, the acquired raw data are more comprehensive and the resulting three-dimensional model is more accurate.

Owing to the chromatic aberration of the optical system and the different characteristics of the surface feature points, such as different colors and textures, the two-dimensional images and feature points obtained by the channels differ subtly. The chromatic aberration of the optical system can be removed by calibration, and the differences between the feature points of different channels obtained after calibration reflect the characteristics of the object surface.

Using the differences and correlations between the feature points of the different channels above, such as between the red-green, red-blue, and green-blue color channels, at most three channels of "inter-feature relationship" data can be obtained. These data are based on differences or combinations of intensity, position, etc., between the channel feature points, and these differences and correlations are taken as new feature recognition points, which all reflect the combined characteristics of the object's surface texture and appearance. The present application therefore effectively collects information of six channels of the object surface (the three original color channels plus the three newly added relationship channels), a great increase over the single monochrome channel collected by a black-and-white imaging system. Because the acquired raw data are six times those of a monochrome channel, the computed point clouds and coordinate data are more accurate and more comprehensive. When an ordinary single-lens, monochrome-channel image is converted to coordinates, the signals obtained by several photosensitive elements are simply averaged into one two-dimensional image, and the two images of this two-dimensional image are then processed to obtain the dense three-dimensional point cloud and three-dimensional feature point coordinates of the object at that moment; the raw data thus acquired cannot remove errors caused by factors such as the chromatic aberration of the optical system, and the accuracy depends only on the raw data of a single channel or single color.
Step 12: Determine the initial position of the imaging unit according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame.

This step restores the position of the imaging unit and obtains its rough position. After the three-dimensional imaging unit finishes processing the first frame of two-dimensional image, the processing of the second and subsequent frames adds the operation of restoring the imaging-unit position.

Specifically, as shown in FIG. 2, step 12 includes steps 121 to 123.

Step 121: Obtain the feature points of the two-dimensional image of the current frame, and judge whether the number of feature points is sufficient and whether the feature information they contain is rich.

If the number of feature points is sufficient and the feature information is rich, perform step 122; otherwise, perform step 123.

Step 122: If so, compare the feature points in the current frame with those of the most recent frame whose imaging-unit position is known. If the current frame and the most recent frame have enough matching feature points, calculate for each channel the relative position of the current frame and the most recent frame to obtain the rough position of the imaging unit. If the current frame does not have enough feature points matching the most recent frame, take the position of the current frame as the center and search, from near to far in spatial position, for a matching frame whose feature points match the current frame. If, within a preset search range, a matching frame with a known imaging-unit position is found whose two-dimensional image has enough feature points matching the current frame, calculate for each channel the relative position of the current frame and the matching frame to obtain the rough position of the imaging unit. If no matching frame with enough matching feature points is found within the preset search range, the imaging unit of the current frame has moved far from the scanned area; discard the current frame and capture a new frame of two-dimensional image.

Step 123: If not, discard the current frame and capture a new frame of two-dimensional image.

Here, the system judges whether the number of feature points is sufficient and whether the contained feature information is rich; if so, the current frame is compared with the previous frame. If the current frame has no matching information with the previous frame, the system takes the position of the current frame as the center and searches from near to far in spatial position for a frame matching the feature information of this frame. The multi-channel acquisition adopted in the present application greatly improves the efficiency and accuracy of inter-frame comparison, making it easier to find two matching frames than comparison using a single channel. For example, the feature points of three channels (red, blue, and green) are matched and compared simultaneously; or the feature points of six channels (red, blue, green, red-green, red-blue, and green-blue) are matched and compared simultaneously. Once the current frame and the previous frame with a known imaging-unit position obtain enough matching points, the relative position of the two matched frames is computed independently for each channel to obtain the rough position of the imaging unit. For example, the rough imaging-unit positions of three channels (red, blue, and green) are recorded; or the rough imaging-unit positions of six channels (red, green, blue, red-green, red-blue, and green-blue). At this point, the imaging-unit position of the current frame has been restored, establishing the rough position of each monochrome channel. If, however, the search range for frames matching the current frame exceeds the set value and enough feature point matches still cannot be obtained, the camera position of the current frame is far from the scanned area, so the current frame is discarded and a new frame is captured.

It should be noted that "enough" matching feature points between the current frame and the most recent frame with a known imaging-unit position is an empirical quantity, preset in the terminal and changeable when needed; the present application places no specific restriction on it.

As mentioned above, the object-side focal lengths of light of different colors differ, so the depths of their effective working regions differ but overlap. The multi-channel approach not only facilitates feature point matching during two-dimensional image processing, but also increases the chance of finding matching points along the longitudinal imaging-depth direction; that is, changes of distance in the longitudinal direction (nearness to the object) are less likely to produce mismatches. Blue light has the shortest image-side focal length but the largest depth of field, green light comes next, and red light has the longest image-side focal length but the smallest depth of field. From the defocus condition, i.e., degree of blur, of each color in the image, the distance of the photographed object from the focal plane can be obtained and fed back to the operator as a reminder to maintain an appropriate working distance. Exploiting the larger depth of field of blue light, the matching of the blue image frames is compared first, which effectively improves the efficiency of successful matching. The multi-channel acquisition method can effectively solve the problem that, with handheld devices, consecutive frames fail to match because of human factors such as shake during operation.
Step 13: Warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain the accurate imaging-unit position of the current frame, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel to form the three-dimensional point cloud under each channel of the current frame.

Step 13 refines the camera position and builds the three-dimensional point cloud. Once the rough position is obtained, image warping makes the current frame more similar to the matching frame with the known imaging-unit position. In one embodiment of the present application, the warping uses the three monochrome channels, such as red, green, and blue; in other embodiments, the warping also uses the newly added feature channels, such as the correlations of red-green, green-blue, and red-blue, i.e., the correlations between the monochrome channels, to assist the warping and make it more accurate. The warped current frame is then cross-correlated with the matching frame of known imaging-unit position to obtain a more accurate camera position, i.e., the real-time position of each channel of the current frame, and the real-time position of each channel is recorded: for example, the real-time positions of the three monochrome channels red, green, and blue; or the real-time positions of the six channels red, green, blue, red-green, red-blue, and green-blue. Using the obtained real-time positions, the three-dimensional coordinates of each channel of the current frame are transformed into the overall coordinate system of the respective channel.

When the real-time scan ends, the reconstruction of the object's three-dimensional model is preliminarily complete, and the point cloud reconstruction of each channel in the "post-processing part" follows. During real-time processing, some frames are superimposed based on the temporal adjacency of frames and others on the principle of spatial adjacency. The temporal-adjacency method tends to accumulate reconstruction errors over long scans, causing large deviations. At that point, all collected data are post-processed to obtain more accurate three-dimensional data. Since the three-dimensional model obtained after real-time processing has already connected the frames of all moments captured during the scan into a spatial whole, post-processing can exploit spatial continuity to rearrange and compare frames of different moments according to their spatial distance, thereby reducing the error accumulation caused by comparing frames adjacent or close in time and further improving imaging accuracy.
Step 14: Select one frame of each channel as the spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far, and compare their feature points with the root node; if another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes.

Step 15: Warp the accurate imaging-unit position of each adjacent frame and cross-correlate it with the child nodes and/or root node of that adjacent frame to obtain the post-processing position; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; merge the three-dimensional point clouds centered on the child nodes, then compare and average them with the root node, merging them into the complete three-dimensional point cloud of the channel.

The present application first randomly selects one frame of one channel as the spatial "root node"; it may be the first frame, the last frame, or a frame at the spatial center, etc. With the root node as the center, the feature point comparison between each frame in the space of the same channel's data and the root node is searched progressively from near to far. If a frame has enough feature points matching the root node that it can be warped and cross-correlated more precisely with the root node, the frame is considered an "adjacent frame" of the root node; once marked as an adjacent frame, it is not searched again. If a frame matches some feature points with the root node but not enough for further cross-correlation comparison, it is considered a "child node" of the root node. Then, with each adjacent frame as the center, the adjacent frames and child nodes of each adjacent frame are likewise searched progressively from near to far, until all frames in the space have been searched and marked as adjacent frames or child nodes. After all frames collected by the channel have been searched and labeled, the real-time position of each adjacent frame is warped and cross-correlated with its nodes, including child nodes and/or the root node, to obtain a more accurate "post-processing position". Meanwhile, the three-dimensional point cloud of each adjacent frame is also compared and averaged with its child nodes using the post-processing position, merging into a more accurate, larger three-dimensional point cloud centered on the child nodes. The three-dimensional point cloud obtained by merging the point clouds centered on the child nodes is then compared and averaged with the root node, finally merging into a more accurate, complete three-dimensional point cloud of the channel. The above processing is then applied to the frame data of all collected channels to obtain the accurate three-dimensional point cloud of each channel: for example, three accurate point clouds for three channels (the three monochrome channels), or six accurate point clouds for six channels (the three monochrome channels plus the three inter-channel relationship channels).
Step 16: Form the three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.

Step 16 is the reconstruction step for the multi-channel three-dimensional data.

As an embodiment of the present application, step 16 comprises: according to the accurate three-dimensional point cloud of each channel, averaging the point-cloud spatial coordinates of the same surface feature point of the target object across the channels to obtain the three-dimensional coordinates of each surface feature point, thereby forming the three-dimensional model of the target object.

As another embodiment of the present application, step 16 comprises: according to the accurate three-dimensional point cloud of each channel, if a feature point of the target object is determined to have point-cloud spatial coordinates in every channel simultaneously, determining the feature point to be a trunk feature cloud point, and averaging the point-cloud spatial coordinates of the trunk feature cloud point across the channels to obtain the trunk point-cloud coordinates of the trunk feature cloud point; if a feature point of the target object is determined not to have point-cloud spatial coordinates in every channel simultaneously, determining the feature point to be a branch feature cloud point, and obtaining the relative position of the branch feature cloud point and an adjacent trunk feature cloud point to obtain the branch point-cloud coordinates; and combining the trunk point-cloud coordinates and the branch point-cloud coordinates to obtain the three-dimensional coordinates of each surface feature point of the target object, thereby forming the three-dimensional model of the target object.

Here, obtaining the relative position of the branch feature cloud point and the adjacent trunk feature cloud point to obtain the branch point-cloud coordinates comprises:

separately calculating the relative-position sub-vectors between the point-cloud spatial coordinates of the branch feature cloud point in each of its channels and the point-cloud spatial coordinates of the adjacent trunk feature cloud point in each channel, and averaging the relative-position sub-vectors to obtain the relative position vector of the branch feature cloud point and the trunk feature cloud point; and combining the relative position vector with the trunk point-cloud coordinates of the trunk feature cloud point to obtain the branch point-cloud coordinates of the branch feature cloud point.
To better explain this calculation of branch point-cloud coordinates, the scheme with accurate three-dimensional point clouds of three monochrome channels, such as red, green, and blue, is taken as an example.

As shown in FIG. 3, the nine surface feature points 11 to 33 of the target object have point-cloud spatial coordinates in all three channels simultaneously, so these nine feature points are determined to be trunk feature cloud points, and the point-cloud spatial coordinates of the red, green, and blue channels (i.e., the R, G, and B channels) of the nine trunk feature cloud points are averaged to obtain the nine trunk point-cloud coordinates K11 to K33. For trunk feature cloud point 31, G31, B31, and R31 are the point-cloud spatial coordinates of the same feature point of the target object under the green, blue, and red channels respectively, so the trunk point-cloud coordinate of trunk feature cloud point 31 is obtained by averaging the three point-cloud spatial coordinates G31, B31, and R31.

Meanwhile, the three surface feature points 41 to 43 of the target object have feature point clouds only in the red channel, so these three feature points are determined to be branch feature cloud points, corresponding to the three point-cloud spatial coordinates R41, R42, and R43 of the red channel in FIG. 3. For branch feature cloud point 41, the adjacent trunk feature cloud point is 31. From the point-cloud spatial coordinate R41 and the coordinate R31 of the adjacent trunk feature cloud point 31, the relative position vector b between R41 and R31 in the red-channel data is calculated; from R41 and the coordinate B31 of trunk point 31, the relative position vector c between R41 in the red channel and B31 in the blue channel is calculated; from R41 and the coordinate G31 of trunk point 31, the relative position vector d between R41 in the red channel and G31 in the green channel is calculated. The relative position of branch feature cloud point 41 with respect to the adjacent trunk feature cloud point 31 is therefore (b + c + d)/3. Combining this with the trunk point-cloud coordinate of trunk feature cloud point 31 yields the branch point-cloud coordinate of branch feature cloud point 41.
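As a quick numeric check of the (b + c + d)/3 rule with made-up coordinates (purely illustrative, not values from the patent):

```python
import numpy as np

# Hypothetical coordinates for the worked example above (not from the patent):
R31, G31, B31 = np.array([1.0, 0, 0]), np.array([1.1, 0, 0]), np.array([0.9, 0, 0])
R41 = np.array([1.0, 1.0, 0])

K31 = (R31 + G31 + B31) / 3                 # trunk point-cloud coordinate
b, c, d = R41 - R31, R41 - B31, R41 - G31   # per-channel relative vectors
K41 = K31 + (b + c + d) / 3                 # branch point-cloud coordinate
print(K41)                                  # -> [1. 1. 0.]
```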
It can be seen that, compared with the foregoing embodiment, which directly averages the point-cloud spatial coordinates of the feature points to obtain their three-dimensional coordinates, the three-dimensional coordinate point-cloud matrix established by this embodiment contains more coordinate information of the feature point cloud and is more accurate.

On the basis of the above embodiments, after step 16 the method further comprises: generating a color three-dimensional model of the target object by combining the color information of each surface feature point extracted from the two-dimensional images.

Here, after the three-dimensional model of the target object is formed using the accurate three-dimensional point cloud of each channel, the complete spatial coordinates of the target object have been obtained. On this basis, combined with the color information extracted from the color two-dimensional images, the color data of the target object are obtained at the same time as its complete spatial coordinates are reproduced, giving a color three-dimensional model of the target object. Moreover, with the complete spatial coordinates and color data of the reproduced object, a color three-dimensional model can be displayed on a screen.

It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present application in any way.

FIG. 4 shows a schematic diagram of a three-dimensional imaging device according to an embodiment of the present application. The device executes any of the foregoing three-dimensional imaging methods and is usually configured in a terminal device. For details not described in this device embodiment, refer to the related descriptions of the method embodiments. As shown in FIG. 4, the three-dimensional imaging device includes the following modules.
An acquisition module 41, configured to acquire a frame of two-dimensional image captured at each shooting moment while the imaging unit scans the target object, divide the two-dimensional image into left-and-right or top-and-bottom sub-images, and acquire, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of the surface feature points of the target object under multiple channels;

a determining module 42, configured to determine the initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;

a forming module 43, configured to warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain the accurate imaging-unit position, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel to form the three-dimensional point cloud under each channel of the current frame;

a labeling module 44, configured to select one frame of each channel as the spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far, and compare their feature points with the root node; if another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;

a merging module 45, configured to warp the accurate imaging-unit position of each adjacent frame and cross-correlate it with the child nodes and/or root node of that adjacent frame to obtain the post-processing position; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merge the point clouds centered on the child nodes, then compare and average them with the root node, merging them into the complete three-dimensional point cloud of the channel;

a modeling module 46, configured to form the three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.

Optionally, the modeling module 46 is specifically configured to:

according to the accurate three-dimensional point cloud of each channel, if a feature point of the target object has point-cloud spatial coordinates in every channel simultaneously, determine the feature point to be a trunk feature cloud point, and average the point-cloud spatial coordinates of the trunk feature cloud point across the channels to obtain its trunk point-cloud coordinates; if a feature point does not have point-cloud spatial coordinates in every channel simultaneously, determine it to be a branch feature cloud point, and obtain the relative position of the branch feature cloud point and an adjacent trunk feature cloud point to obtain the branch point-cloud coordinates; and combine the trunk point-cloud coordinates and the branch point-cloud coordinates to obtain the three-dimensional coordinates of each surface feature point of the target object, thereby forming the three-dimensional model of the target object.
FIG. 5 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in FIG. 5, the terminal device 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50, such as a three-dimensional imaging program. When executing the computer program 52, the processor 50 implements the steps in the above three-dimensional imaging method embodiments, such as steps 11 to 16 shown in FIG. 1; alternatively, when executing the computer program 52, the processor 50 implements the functions of the modules/units in the above three-dimensional imaging device embodiments, such as the functions of modules 41 to 46 shown in FIG. 4.

Exemplarily, the computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 52 in the terminal device 5.

The terminal device 5 may be a computing device such as a handheld three-dimensional imager, a scanning three-dimensional imager, a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will understand that FIG. 5 is merely an example of the terminal device 5 and does not limit it; it may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, and the like.

The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, or flash card equipped on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been or will be output.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is only an example; in practical applications, the above functions may be assigned to different functional units or modules as needed, i.e., the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and do not limit the scope of protection of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.

If implemented in the form of software functional units and sold or used as independent products, the integrated modules/units may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the above method embodiments.

The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A three-dimensional imaging method, characterized by comprising:
    acquiring two-dimensional images, each frame of which is captured at a shooting moment while an imaging unit scans a target object; dividing the two-dimensional image into left-and-right or top-and-bottom sub-images; and acquiring, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of surface feature points of the target object under multiple channels;
    determining an initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;
    warping the current frame, comparing the warped current frame with a matching frame whose imaging-unit position is known to obtain an accurate imaging-unit position, and transforming the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel to form a three-dimensional point cloud under each channel of the current frame;
    selecting one frame of each channel as a spatial root node; with the root node as the center, searching the other frames in the space of the same channel in order from near to far and comparing their feature points with the root node; if another frame does not have enough feature points matching the root node, marking it as a child node of the root node; if another frame has enough feature points matching the root node, marking it as an adjacent frame of the root node, then taking the adjacent frame as the center and searching the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;
    warping the accurate imaging-unit position of each adjacent frame and cross-correlating it with the child nodes and/or root node of that adjacent frame to obtain a post-processing position; comparing and averaging the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merging the three-dimensional point clouds centered on the child nodes, then comparing and averaging them with the root node, to merge them into a complete three-dimensional point cloud of the channel; and
    forming a three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.
  2. The three-dimensional imaging method according to claim 1, wherein determining the initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known comprises:
    obtaining the feature points of the two-dimensional image of the current frame, and judging whether the number of feature points is sufficient and whether the feature information they contain is rich;
    if so, comparing the feature points in the current frame with those of the most recent frame whose imaging-unit position is known; if the current frame and the most recent frame have enough matching feature points, calculating for each channel the relative position of the current frame and the most recent frame to obtain the rough position of the imaging unit; if the current frame does not have enough feature points matching the most recent frame, taking the position of the current frame as the center and searching, from near to far in spatial position, for a matching frame whose feature points match the current frame; if, within a preset search range, a matching frame with a known imaging-unit position is found whose two-dimensional image has enough feature points matching the current frame, calculating for each channel the relative position of the current frame and the matching frame to obtain the rough position of the imaging unit; and if no matching frame with enough matching feature points is found within the preset search range, which indicates that the imaging unit of the current frame has moved far from the scanned area, discarding the current frame and capturing a new frame of two-dimensional image; and
    if not, discarding the current frame and capturing a new frame of two-dimensional image.
  3. The three-dimensional imaging method according to claim 1 or 2, wherein forming the three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel comprises:
    according to the accurate three-dimensional point cloud of each channel, averaging the point-cloud spatial coordinates of the same surface feature point of the target object across the channels to obtain the three-dimensional coordinates of each surface feature point of the target object, thereby forming the three-dimensional model of the target object.
  4. The three-dimensional imaging method according to claim 1 or 2, wherein forming the three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel comprises:
    according to the accurate three-dimensional point cloud of each channel, if a feature point of the target object is determined to have point-cloud spatial coordinates in every channel simultaneously, determining the feature point to be a trunk feature cloud point, and averaging the point-cloud spatial coordinates of the trunk feature cloud point across the channels to obtain the trunk point-cloud coordinates of the trunk feature cloud point; if a feature point of the target object is determined not to have point-cloud spatial coordinates in every channel simultaneously, determining the feature point to be a branch feature cloud point, and obtaining the relative position of the branch feature cloud point and an adjacent trunk feature cloud point to obtain the branch point-cloud coordinates; and combining the trunk point-cloud coordinates and the branch point-cloud coordinates to obtain the three-dimensional coordinates of each surface feature point of the target object, thereby forming the three-dimensional model of the target object.
  5. The three-dimensional imaging method according to claim 4, wherein obtaining the relative position of the branch feature cloud point and the adjacent trunk feature cloud point to obtain the branch point-cloud coordinates comprises:
    separately calculating the relative-position sub-vectors between the point-cloud spatial coordinates of the branch feature cloud point in each of its channels and the point-cloud spatial coordinates of the adjacent trunk feature cloud point in each channel, and averaging the relative-position sub-vectors to obtain the relative position vector of the branch feature cloud point and the trunk feature cloud point; and
    combining the relative position vector with the trunk point-cloud coordinates of the trunk feature cloud point to obtain the branch point-cloud coordinates of the branch feature cloud point.
  6. The three-dimensional imaging method according to claim 1 or 2, further comprising: generating a color three-dimensional model of the target object by combining the color information of each surface feature point of the target object extracted from the two-dimensional images.
  7. A three-dimensional imaging device, characterized by comprising:
    an acquisition module, configured to acquire a frame of two-dimensional image captured at each shooting moment while an imaging unit scans a target object, divide the two-dimensional image into left-and-right or top-and-bottom sub-images, and acquire, from the two sub-images of the two-dimensional image, the three-dimensional coordinates of surface feature points of the target object under multiple channels;
    a determining module, configured to determine an initial position of the imaging unit under each channel according to the two-dimensional image of the current frame and a matching frame that has enough feature points matching the current frame and whose imaging-unit position is known;
    a forming module, configured to warp the current frame, compare the warped current frame with a matching frame whose imaging-unit position is known to obtain an accurate imaging-unit position, and transform the three-dimensional coordinates of each channel of the current frame into the overall coordinate system of its respective channel to form a three-dimensional point cloud under each channel of the current frame;
    a labeling module, configured to select one frame of each channel as a spatial root node; with the root node as the center, search the other frames in the space of the same channel in order from near to far and compare their feature points with the root node; if another frame does not have enough feature points matching the root node, mark it as a child node of the root node; if another frame has enough feature points matching the root node, mark it as an adjacent frame of the root node, then take the adjacent frame as the center and search the other frames in the space of the same channel from near to far to determine the adjacent frames and child nodes of the adjacent frame, until all nodes in the space of the channel have been searched and marked as adjacent frames or child nodes;
    a merging module, configured to warp the accurate imaging-unit position of each adjacent frame and cross-correlate it with the child nodes and/or root node of that adjacent frame to obtain a post-processing position; compare and average the three-dimensional point cloud of each adjacent frame, using the post-processing position, with the child nodes of that adjacent frame to obtain three-dimensional point clouds centered on the child nodes; and merge the three-dimensional point clouds centered on the child nodes, then compare and average them with the root node, to merge them into a complete three-dimensional point cloud of the channel; and
    a modeling module, configured to form a three-dimensional model of the target object using the accurate three-dimensional point cloud of each channel.
  8. The three-dimensional imaging device according to claim 7, wherein the modeling module is specifically configured to:
    according to the accurate three-dimensional point cloud of each channel, if a feature point of the target object is determined to have point-cloud spatial coordinates in every channel simultaneously, determine the feature point to be a trunk feature cloud point, and average the point-cloud spatial coordinates of the trunk feature cloud point across the channels to obtain the trunk point-cloud coordinates of the trunk feature cloud point; if a feature point of the target object is determined not to have point-cloud spatial coordinates in every channel simultaneously, determine the feature point to be a branch feature cloud point, and obtain the relative position of the branch feature cloud point and an adjacent trunk feature cloud point to obtain the branch point-cloud coordinates; and combine the trunk point-cloud coordinates and the branch point-cloud coordinates to obtain the three-dimensional coordinates of each surface feature point of the target object, thereby forming the three-dimensional model of the target object.
  9. A terminal device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the three-dimensional imaging method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the three-dimensional imaging method according to any one of claims 1 to 7.
PCT/CN2018/098013 2018-08-01 2018-08-01 Three-dimensional imaging method and device, and terminal device WO2020024144A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/098013 WO2020024144A1 (zh) 2018-08-01 2018-08-01 Three-dimensional imaging method and device, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/098013 WO2020024144A1 (zh) 2018-08-01 2018-08-01 Three-dimensional imaging method and device, and terminal device

Publications (1)

Publication Number Publication Date
WO2020024144A1 (zh)

Family

ID=69232130

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098013 WO2020024144A1 (zh) 2018-08-01 2018-08-01 Three-dimensional imaging method and device, and terminal device

Country Status (1)

Country Link
WO (1) WO2020024144A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2497517B (en) * 2011-12-06 2016-05-25 Toshiba Res Europe Ltd A reconstruction system and method
CN104517280A (zh) * 2013-11-14 2015-04-15 广东朗呈医疗器械科技有限公司 Three-dimensional imaging method
CN104574498A (zh) * 2013-11-27 2015-04-29 广东朗呈医疗器械科技有限公司 Post-processing method for image data
CN106033621A (zh) * 2015-03-17 2016-10-19 阿里巴巴集团控股有限公司 Three-dimensional modeling method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112067007A (zh) * 2020-11-12 2020-12-11 Map generation method, computer storage medium, and electronic device
CN112067007B (zh) * 2020-11-12 2021-01-29 Map generation method, computer storage medium, and electronic device
CN115222799A (zh) * 2021-08-12 2022-10-21 Method and apparatus for acquiring the gravity direction of an image, electronic device, and storage medium
CN115131507A (zh) * 2022-07-27 2022-09-30 Image processing method, image processing device, and metaverse three-dimensional reconstruction method

Similar Documents

Publication Publication Date Title
CN110111262A (zh) Projector distortion correction method and device, and projector
TWI699117B (zh) Method of determining at least one refocused-image color component for a micro-image area partially covered by a color filter array of a sensor calibrated to be mounted on a plenoptic camera, light-field data acquisition device, computer program product, and non-transitory computer-readable carrier medium
CN111023970A (zh) Multi-mode three-dimensional scanning method and system
CN112150528A (zh) Depth image acquisition method, terminal, and computer-readable storage medium
JP7227969B2 (ja) Three-dimensional reconstruction method and three-dimensional reconstruction device
US9008412B2 (en) Image processing device, image processing method and recording medium for combining image data using depth and color information
WO2020024144A1 (zh) Three-dimensional imaging method and device, and terminal device
CN102227746A (zh) Stereoscopic image processing device and method, recording medium, and stereoscopic imaging apparatus
KR102632960B1 (ko) Method and system for calibrating a plenoptic camera system
JP7170224B2 (ja) Three-dimensional generation method and three-dimensional generation device
JP2013026844A (ja) Image generation method and device, program, recording medium, and electronic camera
JPWO2019065260A1 (ja) Information processing device, information processing method, program, and interchangeable lens
CN108805921A (zh) Image acquisition system and method
CN111757086A (zh) Active binocular camera, and RGB-D image determination method and device
JP2015188251A (ja) Image processing device, imaging device, image processing method, and program
JP2012142952A (ja) Imaging device
US10332269B2 (en) Color correction of preview images for plenoptic imaging systems
CN110796726B (zh) Three-dimensional imaging method and device, and terminal device
JP2018147480A (ja) Real-time color preview generation for plenoptic imaging systems
JP2016134661A (ja) Image processing method, image processing device, imaging device, program, and storage medium
JP7300895B2 (ja) Image processing device, image processing method, program, and storage medium
CN103841327B (zh) Four-dimensional light-field decoding preprocessing method based on raw images
JP6732440B2 (ja) Image processing device, image processing method, and program therefor
JP2016162833A (ja) Imaging element, imaging device, and image processing device
CN1441314A (zh) Multi-lens digital stereo camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928789

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.06.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18928789

Country of ref document: EP

Kind code of ref document: A1