WO2024066471A1 - Data acquisition device, method, apparatus and storage medium - Google Patents

Data acquisition device, method, apparatus and storage medium

Info

Publication number
WO2024066471A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
line laser
images
measured
depth image
Prior art date
Application number
PCT/CN2023/099341
Other languages
English (en)
French (fr)
Other versions
WO2024066471A9 (zh)
Inventor
王春茂
张文聪
Original Assignee
杭州海康机器人股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康机器人股份有限公司 filed Critical 杭州海康机器人股份有限公司
Publication of WO2024066471A1
Publication of WO2024066471A9

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • the present application relates to the field of data processing technology, and in particular to a data acquisition device, method, apparatus and storage medium.
  • a depth image of the above objects can be obtained, and the three-dimensional data such as length, width and height of the objects can be determined based on the depth image.
  • a laser light source is usually used to emit laser light to an object, and then a camera is used to capture an image of the object, detect deformation information of the laser light in the image, and obtain a depth image of the object based on the deformation information. Subsequently, three-dimensional data of the object can be obtained based on the depth information in the depth image.
  • although the above-mentioned solution can be used to obtain a depth image of an object, reflection, refraction, and other phenomena on the surface of the object may interfere with the laser light irradiating it. The deformation information detected from the laser light in the image then has low accuracy, and the resulting depth image of the object has low accuracy as well.
  • the purpose of the embodiments of the present application is to provide a data acquisition device, method, apparatus and storage medium to improve the accuracy of the depth image of the object obtained.
  • the specific technical solution is as follows:
  • an embodiment of the present application provides a data acquisition device, the data acquisition device comprising: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein:
  • the depth camera is used to: collect an initial depth image of the object to be measured;
  • the laser light source is used to emit laser light to the scanning mirror, and the scanning mirror is used to reflect the laser light to the object to be measured in a scanning manner;
  • the laser camera is used to: collect a line laser image of the object to be measured during scanning;
  • the processor is used to: fuse different line laser images based on the initial depth image to obtain a target depth image of the object to be measured.
  • the processor is specifically configured to:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • in a static scanning scenario, the processor is specifically configured to:
  • the same initial depth image is used as the reference to fuse each line laser image.
  • the depth camera is specifically used to: collect multiple frames of initial depth images of the object to be measured;
  • the processor is specifically used for:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • in a dynamic scanning scenario, the depth camera is specifically used to: collect multiple frames of initial depth images of the object to be measured during the dynamic scanning process, wherein the collection time of each frame of the initial depth image is synchronized with the collection time of each frame of the line laser image;
  • the processor is specifically used for:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the processor is specifically used to:
  • differences between the different initial depth images are obtained and used to calculate the pose change information of the depth camera, and the calculated pose change information is used to correct the positions of the matching areas corresponding to the different line laser images in the different initial depth images;
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the depth camera comprises: a time-of-flight TOF light source and a TOF camera, wherein the TOF light source is used to: emit TOF light to the object to be measured;
  • the TOF camera is used to obtain an initial depth image of the object to be measured according to the flight time of the TOF light, wherein the flight time is the time between the emission moment of the TOF light from the TOF light source and the reception moment of the TOF light reflected by the object to be measured by the TOF camera.
  • the laser camera is a binocular camera.
  • the laser light source is a multi-line laser light source or a single-line laser light source.
  • the scanning mirror includes a reflecting mirror and a motion mechanism, and the motion mechanism is used to drive the reflecting mirror to move.
  • an embodiment of the present application provides a data acquisition method, the method is applied to a processor in a data acquisition device, the data acquisition device further comprising: a depth camera, a laser light source, a scanning mirror, and a laser camera;
  • the method comprises:
  • the method of fusing different line laser images based on the initial depth map to obtain a target depth image of the object to be measured includes:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the initial depth map is used as a reference to fuse different line laser images to obtain a target depth image of the object to be measured, including:
  • each line laser image is fused to obtain a target depth image of the object to be measured;
  • the received multiple frames of initial depth images are fused to obtain a fused depth image; for each line laser image, a matching area of the line laser image in the fused depth image is determined; according to the distribution positions of the matching areas corresponding to different line laser images in the fused depth image, different line laser images are fused to obtain a target depth image of the object to be measured.
  • the initial depth map is used as a reference to fuse different line laser images to obtain a target depth image of the object to be measured, including:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the different line laser images are fused according to the distribution positions of the matching areas corresponding to the different line laser images in the different initial depth images to obtain the target depth image of the object to be measured, including:
  • differences between the different initial depth images are obtained and used to calculate the pose change information of the depth camera, and the calculated pose change information is used to correct the positions of the matching areas corresponding to the different line laser images in the different initial depth images;
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the present application provides a data acquisition apparatus, which is arranged in a data acquisition device.
  • the data acquisition device further comprises: a depth camera, a laser light source, a scanning mirror, and a laser camera;
  • the device comprises:
  • a first image receiving module configured to receive an initial depth image of the object to be measured obtained by the depth camera;
  • a second image receiving module is used to receive a line laser image of the object to be measured acquired by the laser camera, wherein the line laser image is acquired by the laser camera while the scanning mirror reflects, in a scanning manner, the laser light emitted to it by the laser light source toward the object to be measured;
  • the target depth map determination module is used to fuse different line laser images based on the initial depth map to obtain a target depth image of the object to be measured.
  • the target depth map determination module is specifically used to:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the target depth map determination module is specifically used to:
  • each line laser image is fused to obtain a target depth image of the object to be measured;
  • the received multiple frames of initial depth images are fused to obtain a fused depth image; for each line laser image, a matching area of the line laser image in the fused depth image is determined; according to the distribution positions of the matching areas corresponding to different line laser images in the fused depth image, different line laser images are fused to obtain a target depth image of the object to be measured.
  • the target depth map determination module includes:
  • the region determination submodule is used to determine, for each line laser image, a matching region of the line laser image in the synchronously acquired initial depth image;
  • the target depth map determination submodule is used to fuse different line laser images according to the distribution positions of the matching areas corresponding to the different line laser images in different initial depth images to obtain the target depth image of the object to be measured.
  • the target depth map determination submodule is specifically used to:
  • differences between the different initial depth images are obtained and used to calculate the pose change information of the depth camera, and the calculated pose change information is used to correct the positions of the matching areas corresponding to the different line laser images in the different initial depth images;
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the method steps described in any implementation of the second aspect are implemented.
  • an embodiment of the present application further provides a computer program product comprising instructions, which, when executed on a computer, enables the computer to execute any of the method steps described in the second aspect above.
  • the data acquisition device includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein: the depth camera is used to: acquire the initial depth image of the object to be measured; the laser light source is used to: emit laser light to the scanning mirror; the scanning mirror is used to: reflect the laser light to the object to be measured in a scanning manner; the laser camera is used to: acquire the line laser image of the object to be measured during the scanning process; the processor is used to: fuse different line laser images based on the initial depth image to obtain the target depth image of the object to be measured.
  • the depth camera is used to: acquire the initial depth image of the object to be measured
  • the laser light source is used to: emit laser light to the scanning mirror
  • the scanning mirror is used to: reflect the laser light to the object to be measured in a scanning manner
  • the laser camera is used to: acquire the line laser image of the object to be measured during the scanning process
  • the processor is used to: fuse different line laser images based on the initial depth image to obtain a target depth image of the object to be measured
  • the initial depth image of the object to be measured can be obtained by using the depth camera. The laser light source emits laser light to the scanning mirror, the scanning mirror reflects the laser light to the object to be measured in a scanning manner, the laser camera acquires line laser images of the object to be measured during the scanning process, and the processor combines the initial depth image with the line laser images to obtain the target depth image of the object to be measured. Because the energy of a laser line is concentrated, the imaging quality of a line laser image is relatively high; for example, a line laser image retains good imaging quality on a highly reflective surface or a black surface. The embodiment of the present application can therefore obtain a more accurate target depth image based on line laser images of high imaging quality. It can be seen that applying the solution provided in the embodiment of the present application can improve the accuracy of the target depth image of the object obtained.
  • FIG. 1 is a schematic diagram of the structure of a data acquisition device provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the structure of another data acquisition device provided in an embodiment of the present application.
  • FIG. 3 is a flow chart of a data acquisition method provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the structure of a data acquisition device provided in an embodiment of the present application.
  • the embodiments of the present application provide a data acquisition device, method, apparatus and storage medium, which are respectively introduced in detail below.
  • the embodiment of the present application provides a data acquisition device, which includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein:
  • the depth camera is used to: collect the initial depth image of the object to be measured;
  • the laser light source is used to emit laser light to the scanning mirror, and the scanning mirror is used to reflect the laser light to the object to be measured in a scanning manner;
  • the laser camera is used to: collect a line laser image of the object to be measured during the scanning process;
  • the processor is used to: fuse different line laser images based on the initial depth image to obtain a target depth image of the object to be measured.
  • the depth camera can be used to obtain the initial depth image of the object to be measured, and then the laser light source can be used to emit laser light to the scanning mirror, and the scanning mirror can reflect the laser light to the object to be measured in a scanning manner.
  • the laser camera collects the line laser image of the object to be measured during the scanning process, and the processor combines the initial depth image and the line laser image to obtain the target depth image of the object to be measured. It can be seen that the accuracy of the target depth image of the object obtained by applying the solution provided in the embodiment of the present application can be improved.
  • FIG. 1 is a schematic diagram of the structure of a data acquisition device provided in an embodiment of the present application.
  • the data acquisition device includes: a depth camera 101, a laser light source 102, a scanning mirror 103, a laser camera 104, and a processor 105, wherein:
  • the depth camera 101 is used to: collect an initial depth image of the object to be measured;
  • the laser light source 102 is used to emit laser light to the scanning mirror 103, and the scanning mirror 103 is used to reflect the laser light to the object to be measured in a scanning manner;
  • the process in which the laser light source 102 emits laser light to the scanning mirror 103 to scan the object to be measured can be referred to as galvanometer scanning.
  • the laser camera 104 is used to: collect a line laser image of the object to be measured during the scanning process;
  • the processor 105 is used to fuse different line laser images based on the initial depth image to obtain a target depth image of the object to be measured.
  • the laser light source 102 is a multi-line laser light source or a single-line laser light source.
  • the laser light emitted by the laser light source 102 may be a linear light, a planar light, a dot matrix light, etc.
  • the laser camera 104 may be a monocular camera, a binocular camera, etc.
  • the relative position relationship between the depth camera 101 and the laser camera 104 remains unchanged during the image acquisition process.
  • the depth camera 101 and the laser camera 104 may be fixedly connected via connecting rods, connecting plates or other connecting parts to ensure that the relative position relationship between the depth camera 101 and the laser camera 104 remains unchanged.
  • the depth camera 101 and the laser camera 104 can be installed separately.
  • the poses of the two cameras can be calibrated when installing them to ensure that the relative pose relationship of the two cameras remains unchanged during the image acquisition process.
  • the relative pose relationship of the two cameras can be preset. When installing the two cameras, one camera is installed first, and the pose of the other camera is then determined from the pose of the installed camera and the preset relative pose relationship, so that the other camera can be installed according to the determined pose.
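The preset-relative-pose installation described above amounts to composing rigid transforms. A minimal sketch in Python (the 4×4 homogeneous matrices, function name, and offsets are illustrative assumptions, not values from the patent):

```python
import numpy as np

def pose_of_second_camera(T_first, T_rel):
    # Compose 4x4 homogeneous transforms: the world pose of the installed
    # camera times the preset relative pose gives the mounting pose of the
    # second camera.
    return T_first @ T_rel

T_first = np.eye(4)
T_first[:3, 3] = [0.0, 0.0, 1.0]   # first camera mounted 1 m up (illustrative)
T_rel = np.eye(4)
T_rel[:3, 3] = [0.1, 0.0, 0.0]     # second camera 10 cm to the side (illustrative)
T_second = pose_of_second_camera(T_first, T_rel)
```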
  • the depth camera 101 and the laser camera 104 are respectively connected to the processor 105 in a wired or wireless manner, so that the processor 105 can obtain the image of the object to be measured captured by the depth camera 101 and the laser camera 104 .
  • the above-mentioned objects to be measured may be people, bolts, wrenches, etc.
  • the processor 105 can process different line laser images based on the camera parameters of the depth camera 101 and the laser camera 104.
  • the camera parameters may include: the intrinsic parameters of the depth camera 101, the intrinsic parameters of the laser camera 104, the pose transformation relationship between the depth camera 101 and the laser camera 104, etc.
  • the intrinsic parameters include the focal length, principal point, distortion coefficients, etc.
  • the camera parameters may be pre-calibrated.
  • the laser light source 102 can emit laser light to the scanning mirror 103, and the scanning mirror 103 reflects the laser light onto the object to be measured.
  • the laser camera 104 then collects a line laser image of the object to be measured. Since the surface of the object to be measured is often not a plane, the laser light distributed on the surface of the object to be measured in the line laser image will be deformed. The deformation can reflect the depth information of the surface of the object to be measured.
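The deformation-to-depth relationship can be illustrated with a toy triangulation model: under a pinhole/disparity assumption, a laser line displaced by d pixels from its reference position corresponds to depth z = f·b/d. A hedged sketch (the function name, the disparity model, and all numbers are illustrative; the patent does not prescribe this formula):

```python
import numpy as np

def depth_from_line_offset(pixel_offset, focal_px, baseline_m):
    # Toy pinhole/disparity relation z = f * b / d: the larger the
    # displacement of the laser line, the closer the surface.
    offset = np.asarray(pixel_offset, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(offset != 0.0, focal_px * baseline_m / offset, np.inf)

# 800 px focal length, 5 cm camera-to-laser baseline (made-up numbers).
depths = depth_from_line_offset([10.0, 20.0, 40.0], focal_px=800.0, baseline_m=0.05)
```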
  • the laser camera 104 can send the line laser image to the processor 105.
  • the above-mentioned scanning mirror 103 can be a mechanical mirror or a MEMS (Micro-Electro-Mechanical System) mirror.
  • the scanning mirror 103 includes a reflecting mirror and a motion mechanism, and the motion mechanism is used to drive the reflecting mirror to move.
  • the processor 105 may receive the initial depth image and the line laser image, and then determine the target depth image of the object to be measured based on the initial depth image, the line laser image, and the camera parameters of the depth camera 101 and the laser camera 104 .
  • the processor 105 may fuse the line laser images of different frames based on the same frame of initial depth image, or may fuse the line laser images based on different frame of initial depth images.
  • processor 105 for fusing the line laser images can be found in the subsequent embodiments and will not be described in detail here.
  • the depth camera 101 is used to collect an initial depth image of the object to be measured.
  • the depth camera 101 may be a TOF (Time of Flight) camera, including a TOF light source and a TOF camera.
  • the TOF light emitted by the TOF light source may be a point light, a linear light, a surface light, a dot matrix light, a light spot, etc.
  • the TOF camera may be a monocular camera, a binocular camera, etc.
  • the above-mentioned TOF camera can adopt iTOF (Indirect Time-of-Flight) technology or DTOF (Direct Time-of-Flight) technology to acquire depth images, and can obtain initial depth images at the cm (centimeter) level.
  • the TOF camera is used to obtain an initial depth image of the object to be measured based on the flight time of the TOF light, wherein the flight time is the time between the emission moment of the TOF light from the TOF light source and the reception moment of the TOF light reflected by the object to be measured by the TOF camera.
  • the TOF light source can emit TOF light to the object to be measured
  • the object to be measured can reflect the TOF light
  • the TOF camera can receive the TOF light reflected by the object to be measured. It determines the emission moment of the TOF light from the TOF light source and the reception moment of the reflected TOF light at the TOF camera, and calculates the time difference between the two as the flight time of the TOF light from the TOF light source to the object to be measured and back to the TOF camera. From the flight time and the propagation speed of the TOF light, the distance between the TOF camera and the object to be measured is calculated, and a depth image of the object to be measured is obtained from that distance and used as the initial depth image.
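The flight-time-to-distance step above reduces to the standard round-trip formula d = c·t/2. A minimal sketch (the function name and the example timing are illustrative, not from the patent):

```python
# Speed of light in m/s.
C = 299_792_458.0

def tof_distance(emit_time_s, receive_time_s):
    # The measured interval covers the trip to the object and back,
    # so the one-way distance is c * t / 2.
    flight_time = receive_time_s - emit_time_s
    return C * flight_time / 2.0

# A round trip of ~6.671 ns corresponds to a distance of about 1 m.
d = tof_distance(0.0, 6.671e-9)
```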
  • the TOF camera then sends the initial depth image to the processor 105.
  • the depth camera 101 can be used to first obtain the initial depth image of the object to be measured, and then the laser light source 102 can be used to emit laser light to the scanning mirror 103, and the scanning mirror 103 can reflect the laser light to the object to be measured in a scanning manner.
  • the laser camera 104 collects the line laser image of the object to be measured during the scanning process, and the processor 105 combines the initial depth image and the line laser image to obtain the target depth image of the object to be measured. It can be seen that the solution provided in the embodiment of the present application can improve the accuracy of the target depth image of the object obtained.
  • the processor 105 is specifically configured to:
  • a matching area of the line laser image in the initial depth image is determined.
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the line laser image and the initial depth image are images of the same object to be measured acquired by different cameras. Therefore, both the line laser image and the initial depth image contain the object area corresponding to the same object to be measured. Since the laser camera acquires the image of the object to be measured in the process of scanning the object to be measured with laser light, the area where the laser light is located in the line laser image belongs to the object area corresponding to the object to be measured. In view of this, the matching area of the above-mentioned line laser image in the initial depth image can be understood as the area corresponding to the part of the object to be measured corresponding to the area where the laser light is located in the line laser image in the initial depth image.
  • the object to be measured may be a bolt
  • the line laser image may be an image of the bolt captured by a laser camera when the laser light scans the nut of the bolt
  • the initial depth image may be an image of the bolt captured by the depth camera.
  • the matching area of the laser image in the initial depth image may be understood as the area corresponding to the nut of the bolt in the initial depth image.
  • Different line laser images are collected at different times, and the areas where the laser light is located in the images are also different.
  • the matching area of the line laser image can be determined in the initial depth image.
  • image feature matching can be performed between the line laser image and the initial depth image to determine the area in the initial depth image that corresponds to the same real scene as the line laser image. That is, image features of the area where the laser light is located in the line laser image are extracted, image features of the initial depth image are extracted, and feature matching is performed between the two sets of features. Features of the initial depth image that match the image features of the line laser image can thereby be determined, and the image area represented by the matched features is determined in the initial depth image as the above-mentioned matching area.
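As a rough stand-in for the feature matching described above, the sketch below locates a matching area by an exhaustive sum-of-squared-differences template search (a real system would use proper feature descriptors; all names and data here are illustrative assumptions):

```python
import numpy as np

def find_matching_area(template, image):
    # Exhaustive sliding-window search minimizing the sum of squared
    # differences; returns the top-left (row, col) of the best match.
    th, tw = template.shape
    ih, iw = image.shape
    best_score, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = float(np.sum((image[r:r + th, c:c + tw] - template) ** 2))
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
depth = rng.random((20, 20))          # stand-in for the initial depth image
patch = depth[5:9, 7:12].copy()       # stand-in for the laser-line region
pos = find_matching_area(patch, depth)
```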
  • the same real scene can be understood as the object area of the object to be measured.
  • the object to be measured is a bolt
  • the object area of the object to be measured can be the area where the nut of the bolt is located.
  • the area in the initial depth image corresponding to the same real scene is the matching area.
  • the area content of the region where the laser light is located in each line laser image can be spliced together according to the distribution position of the matching area corresponding to that line laser image in the initial depth image, and the resulting spliced image is used as the above-mentioned target depth image.
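The splicing step can be sketched as accumulating per-frame depth strips onto a canvas at their matched positions, averaging any overlaps (an illustrative simplification, not the patent's exact procedure):

```python
import numpy as np

def stitch_strips(canvas_shape, strips):
    # Each entry is ((row, col), strip): the depth values recovered from one
    # line laser image plus the position of its matching area in the
    # reference depth image; overlapping pixels are averaged.
    acc = np.zeros(canvas_shape)
    cnt = np.zeros(canvas_shape)
    for (r, c), strip in strips:
        h, w = strip.shape
        acc[r:r + h, c:c + w] += strip
        cnt[r:r + h, c:c + w] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

target = stitch_strips((4, 6), [((0, 0), np.full((2, 6), 1.0)),
                                ((2, 0), np.full((2, 6), 2.0))])
```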
  • the line laser images may be fused based on a solution in the related art, which will not be described in detail here.
  • Scanning the object to be measured using laser light can be divided into static scanning and dynamic scanning.
  • in a static scanning scenario, the object to be measured and the data acquisition device remain relatively still during the scanning process;
  • in a dynamic scanning scenario, the object to be measured and the data acquisition device move relative to each other during the scanning process.
  • the processor 105 may fuse the line laser images in either of the following two implementations.
  • the processor 105 is specifically configured to:
  • the same initial depth image is used as a reference to fuse the line laser images to obtain the target depth image of the object to be measured.
  • the depth camera 101 is specifically used to: collect multiple frames of initial depth images of the object to be measured;
  • the processor 105 is specifically configured to:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the depth camera 101 is specifically used to: collect multiple frames of initial depth images of the object to be measured during the dynamic scanning process, wherein the collection time of each frame of the initial depth image is synchronized with the collection time of each frame of the line laser image;
  • the processor 105 is specifically used for:
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the process of performing image feature matching based on the line laser images collected at different times and the initial depth image can be called feature matching in the time domain.
  • the processor 105 is specifically configured to:
  • differences between the different initial depth images are obtained and used to calculate the pose change information of the depth camera 101, and the calculated pose change information is used to correct the positions of the matching areas corresponding to the different line laser images in the different initial depth images;
  • the different line laser images are fused to obtain the target depth image of the object to be measured.
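As a crude illustration of correcting matching-area positions with pose-change information, the sketch below estimates an in-plane shift between two initial depth frames from foreground centroids and moves a matching area back into the first frame's coordinates (a real system would estimate full 6-DoF pose; everything here is an illustrative assumption):

```python
import numpy as np

def estimate_shift(depth_a, depth_b):
    # Centroid of the nonzero (foreground) pixels in each frame; their
    # difference approximates the in-plane translation between the frames.
    ca = np.argwhere(depth_a > 0).mean(axis=0)
    cb = np.argwhere(depth_b > 0).mean(axis=0)
    return cb - ca

def correct_position(pos, shift):
    # Move a matching-area position back into the first frame's coordinates.
    return tuple(int(v) for v in (np.asarray(pos, dtype=float) - shift))

a = np.zeros((10, 10)); a[2:5, 2:5] = 1.0
b = np.zeros((10, 10)); b[4:7, 3:6] = 1.0   # same object, shifted by (2, 1)
shift = estimate_shift(a, b)
corrected = correct_position((6, 4), shift)
```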
  • FIG. 2 is a schematic diagram of the structure of another data acquisition device provided in an embodiment of the present application, wherein the data acquisition device comprises a TOF light source, a TOF camera, a laser light source, a laser camera, a processor, and a galvanometer.
  • the laser camera can be divided into a left camera and a right camera.
  • the TOF light source, the TOF camera, the laser light source, the laser camera, the processor, and the galvanometer are fixedly mounted on a connecting board, wherein:
  • the galvanometer is similar to the scanning mirror mentioned above, and is used to reflect laser light toward the object to be measured in a scanning manner.
  • the laser light source is used to: emit laser light to the galvanometer, and the galvanometer reflects the received laser light to the object to be measured, so as to scan the object to be measured by using the laser light;
  • the laser camera is used to: collect multiple frames of laser images of the object to be measured during the scanning process;
  • the TOF light source is used to: emit TOF light to the object to be measured;
  • the TOF camera is used to: during the scanning process, obtain multiple frames of initial depth images of the object to be measured according to the flight time of the TOF light, wherein the acquisition time of each frame of the initial depth image is synchronized with the acquisition time of each frame of the laser image;
  • the processor is used to: for each laser image, take the initial depth map collected synchronously with the laser image as a reference, and obtain a reference depth map of the object to be measured according to the laser image and the camera parameters of the TOF camera and the laser camera; fuse multiple reference depth maps to obtain a target depth map of the object to be measured.
  • the data acquisition device includes: a time-of-flight TOF light source, a TOF camera, a laser light source, a laser camera, and a processor, wherein: the TOF light source is used to: emit TOF light to the object to be measured; the TOF camera is used to: obtain the initial depth image of the object to be measured according to the flight time of the TOF light, wherein the flight time is: the time between the emission moment of the TOF light emitted by the TOF light source and the reception moment of the TOF light reflected by the object to be measured; the laser light source is used to: emit laser light to the object to be measured; the laser camera is used to: collect the laser image of the object to be measured; the processor is used to: determine the target depth map of the object to be measured based on the laser image and the camera parameters of the TOF camera and the laser camera based on the initial depth map.
  • the TOF light source is used to: emit TOF light to the object to be measured
  • the TOF camera is used to: obtain the initial depth image of the object to be measured according to the flight time of the TOF light
  • the initial depth image of the object to be measured can be first obtained by using the TOF light, and then the initial depth image and the laser image are fused based on the camera parameters of the TOF camera and the laser camera, and the depth image of the object to be measured is obtained by combining the initial depth image and the laser image. It can be seen that the accuracy of the depth image of the object obtained by applying the scheme provided in the above embodiment can be improved.
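The flight-time-to-depth relationship these bullets rely on can be sketched as follows. This is a minimal illustration under stated assumptions, not code from the disclosure: the function name, the sample timing, and the `LIGHT_SPEED` constant are chosen for the example.

```python
# Minimal sketch of TOF depth recovery: depth is half the round-trip
# distance travelled by the light pulse (emitter -> object -> sensor).

LIGHT_SPEED = 299_792_458.0  # metres per second (c)

def tof_depth(emit_time_s: float, receive_time_s: float) -> float:
    """Return the camera-to-object distance implied by one TOF sample."""
    flight_time = receive_time_s - emit_time_s
    if flight_time <= 0:
        raise ValueError("reception must follow emission")
    return LIGHT_SPEED * flight_time / 2.0  # halve the round trip

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(tof_depth(0.0, 10e-9))
```

A direct-TOF sensor measures this time difference per pixel; an indirect-TOF sensor infers it from the phase shift of a modulated signal, but the depth conversion above is the same in spirit.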
  • the data acquisition device further includes: a depth camera, a laser light source, a scanning mirror, and a laser camera.
  • the relative posture relationship between the depth camera and the laser camera remains unchanged during the image acquisition process, and in one case, the depth camera is fixedly connected to the laser camera.
  • the method comprises:
  • the line laser image is collected by the laser camera while the scanning mirror reflects, in a scanning manner and toward the object to be measured, the laser light emitted by the laser light source toward the scanning mirror.
  • step S301 can be executed first, and then step S302; step S302 can be executed first, and then step S301; step S301 and step S302 can also be executed in parallel.
  • the method of fusing different line laser images based on the initial depth image to obtain a target depth image of the object to be measured includes:
  • for each line laser image, the matching area of the line laser image in the initial depth image is determined; according to the distribution positions of the matching areas corresponding to different line laser images in the initial depth image, the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the data acquisition solution provided in the embodiments of the present application can improve the accuracy of obtaining the target depth image of the object to be measured.
  • the initial depth map is used as a reference to fuse different line laser images to obtain a target depth image of the object to be measured, including:
  • for different line laser images, each line laser image is fused with the same initial depth image as the reference, to obtain the target depth image of the object to be measured.
  • the data acquisition solution provided in the embodiments of the present application can improve the accuracy of obtaining the target depth image of the object to be measured.
  • the initial depth map is used as a reference to fuse different line laser images to obtain a target depth image of the object to be measured, including:
  • the collected multiple frames of initial depth images are fused to obtain a fused depth image; for each line laser image, the matching area of the line laser image in the fused depth image is determined; according to the distribution positions of the matching areas corresponding to different line laser images in the fused depth image, different line laser images are fused to obtain a target depth image of the object to be measured.
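The multi-frame fusion step described above can be sketched as a per-pixel average over valid readings. This is a hypothetical simplification of the fusion the patent leaves unspecified; the zero-means-missing convention and the averaging rule are assumptions for the example.

```python
# Hedged sketch: fuse several noisy depth frames into one by averaging
# each pixel over the frames where it holds a valid reading (0 marks a
# missing depth sample). Pure-Python lists stand in for depth images.

def fuse_depth_frames(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            samples = [f[r][c] for f in frames if f[r][c] > 0]
            if samples:
                fused[r][c] = sum(samples) / len(samples)
    return fused

a = [[1.0, 0.0], [2.0, 4.0]]
b = [[3.0, 5.0], [0.0, 4.0]]
print(fuse_depth_frames([a, b]))  # [[2.0, 5.0], [2.0, 4.0]]
```

Averaging also fills holes that appear in only some frames, which is one way the fused depth image can carry "more comprehensive" object information than any single frame.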
  • the fused depth image contains more comprehensive three-dimensional object information of the object to be measured.
  • the matching area corresponding to the line laser image can be accurately determined in the fused depth image, so that different line laser images are fused according to the distribution positions of the more accurate matching areas corresponding to each line laser image to obtain the above-mentioned target depth image, which can improve the accuracy of the obtained target depth image.
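The matching-area lookup underlying this step can be illustrated with an exhaustive sum-of-absolute-differences search. This is a stand-in for the (unspecified) feature matching in the disclosure, included only to make the "matching region" idea concrete; the data and function names are invented for the example.

```python
# Illustrative sketch: locate a small patch's matching region inside a
# larger reference depth image via exhaustive sum-of-absolute-differences,
# a toy substitute for the feature matching the text describes.

def find_matching_region(reference, patch):
    ph, pw = len(patch), len(patch[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(reference) - ph + 1):
        for c in range(len(reference[0]) - pw + 1):
            sad = sum(
                abs(reference[r + i][c + j] - patch[i][j])
                for i in range(ph) for j in range(pw)
            )
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos  # top-left corner of the matching region

ref = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 6, 0],
    [0, 0, 0, 0],
]
print(find_matching_region(ref, [[9, 8], [7, 6]]))  # (1, 1)
```

A production system would more likely match descriptors (e.g., corner or edge features) rather than raw intensities, but the output is the same kind of object: a position in the reference image for each line laser image.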
  • the initial depth map is used as a reference to fuse different line laser images to obtain a target depth image of the object to be measured, including:
  • for each line laser image, the matching area of the line laser image in the synchronously acquired initial depth image is determined; according to the distribution positions of the matching areas corresponding to different line laser images in the different initial depth images, the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the data acquisition solution provided in the embodiments of the present application can improve the accuracy of obtaining the target depth image of the object to be measured.
  • the different line laser images are fused according to the distribution positions of the matching areas corresponding to the different line laser images in the different initial depth images to obtain the target depth image of the object to be measured, including:
  • for different initial depth images, the differences between the initial depth images are obtained and used to compute pose change information of the depth camera; the computed pose change information is used to correct the positions of the matching areas corresponding to the different line laser images in the different initial depth images; according to the distribution positions of the position-corrected matching areas in the different initial depth images, the different line laser images are fused to obtain the target depth image of the object to be measured.
  • the position of the matching areas corresponding to different line laser images in different initial depth images can be corrected. This can improve the accuracy of the matching areas corresponding to different line laser images in different initial depth images, and thus the different line laser images can be fused according to the distribution positions of the matching areas after the position correction, which can improve the accuracy of the obtained target depth image.
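The position-correction step can be sketched as follows. This is a heavily simplified assumption: the full 6-DoF pose change the text describes is reduced here to a per-frame 2-D pixel offset, and all names are invented for the illustration.

```python
# Hedged sketch: once an inter-frame camera motion has been estimated
# from the differences between initial depth images (here reduced to a
# 2-D pixel offset per frame), each matching region recorded in a later
# frame is shifted back into a common reference frame before fusion.

def correct_region_positions(regions, frame_offsets):
    """regions: {frame_index: (row, col)} matching-region corners;
    frame_offsets: {frame_index: (drow, dcol)} cumulative camera drift."""
    corrected = {}
    for idx, (r, c) in regions.items():
        dr, dc = frame_offsets[idx]
        corrected[idx] = (r - dr, c - dc)  # undo the camera's drift
    return corrected

regions = {0: (10, 10), 1: (12, 13)}
offsets = {0: (0, 0), 1: (2, 3)}  # frame 1 drifted 2 rows, 3 cols
print(correct_region_positions(regions, offsets))
# {0: (10, 10), 1: (10, 10)}
```

After correction, the same physical part of the object maps to the same position regardless of which frame it was observed in, which is what makes fusing stripes from a moving rig consistent.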
  • each of the above method embodiments also has the following beneficial effects:
  • the depth camera can be used to obtain the initial depth image of the object to be measured
  • the laser light source can be used to emit laser light to the scanning mirror, and the scanning mirror reflects the laser light to the object to be measured in a scanning manner.
  • the laser camera collects the line laser image of the object to be measured during the scanning process, and the processor combines the initial depth image and the line laser image to obtain the target depth image of the object to be measured. It can be seen that the accuracy of the target depth image of the object obtained by applying the solution provided in the embodiment of the present application can be improved.
  • FIG 4 is a schematic diagram of the structure of a data acquisition device provided in an embodiment of the present application.
  • the device is set in a processor in a data acquisition device, and the data acquisition device also includes: a depth camera, a laser light source, a scanning mirror, and a laser camera.
  • the relative posture relationship between the depth camera and the laser camera remains unchanged during the image acquisition process.
  • the depth camera is fixedly connected to the laser camera.
  • the device comprises:
  • a first image receiving module 401 is used to receive an initial depth image of the object to be measured obtained by the depth camera;
  • the second image receiving module 402 is used to receive the line laser image of the object to be measured acquired by the laser camera, wherein the line laser image is acquired by the laser camera while the scanning mirror reflects, in a scanning manner and toward the object to be measured, the laser light emitted by the laser light source toward the scanning mirror;
  • the target depth map determination module 403 is used to fuse different line laser images based on the initial depth map to obtain a target depth image of the object to be measured.
  • the embodiment of the present application can obtain a more accurate target depth image based on the line laser image with higher imaging quality. It can be seen that the accuracy of the target depth image of the object obtained by applying the scheme provided in the embodiment of the present application can be improved.
  • the target depth map determination module 403 is specifically used to:
  • for each line laser image, determine the matching area of the line laser image in the initial depth image; according to the distribution positions of the matching areas corresponding to different line laser images in the initial depth image, fuse the different line laser images to obtain the target depth image of the object to be measured.
  • the data acquisition solution provided in the embodiments of the present application can improve the accuracy of obtaining the target depth image of the object to be measured.
  • the target depth map determination module 403 is specifically used to:
  • for different line laser images, fuse each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured.
  • the data acquisition solution provided in the embodiments of the present application can improve the accuracy of obtaining the target depth image of the object to be measured.
  • the target depth map determination module 403 is specifically used to:
  • the collected multiple frames of initial depth images are fused to obtain a fused depth image; for each line laser image, the matching area of the line laser image in the fused depth image is determined; according to the distribution positions of the matching areas corresponding to different line laser images in the fused depth image, different line laser images are fused to obtain a target depth image of the object to be measured.
  • the fused depth image contains more comprehensive three-dimensional object information of the object to be measured.
  • the matching area corresponding to the line laser image can be accurately determined in the fused depth image, so that the different line laser images are fused according to the distribution positions of the more accurate matching areas corresponding to each line laser image to obtain the above-mentioned target depth image, which can improve the accuracy of the obtained target depth image.
  • the target depth map determination module 403 includes:
  • the region determination submodule is used to determine, for each line laser image, a matching region of the line laser image in the synchronously acquired initial depth image;
  • the target depth map determination submodule is used to fuse different line laser images according to the distribution positions of the matching areas corresponding to the different line laser images in different initial depth images to obtain the target depth image of the object to be measured.
  • the data acquisition solution provided in the embodiments of the present application can improve the accuracy of obtaining the target depth image of the object to be measured.
  • the target depth map determination submodule is specifically used to:
  • for different initial depth images, obtain the differences between the initial depth images and use them to compute pose change information of the depth camera; use the computed pose change information to correct the positions of the matching areas corresponding to the different line laser images in the different initial depth images; according to the distribution positions of the position-corrected matching areas in the different initial depth images, fuse the different line laser images to obtain the target depth image of the object to be measured.
  • the position of the matching areas corresponding to different line laser images in different initial depth images can be corrected. This can improve the accuracy of the matching areas corresponding to different line laser images in different initial depth images, and thus the different line laser images can be fused according to the distribution positions of the matching areas after the position correction, which can improve the accuracy of the obtained target depth image.
  • each of the above device embodiments also has the following beneficial effects:
  • a depth camera can be used to obtain an initial depth image of the object to be measured, and then a laser light source is used to emit laser light to a scanning mirror, and the scanning mirror reflects the laser light to the object to be measured in a scanning manner.
  • the laser camera collects a line laser image of the object to be measured during the scanning process, and the processor combines the initial depth image and the line laser image to obtain a target depth image of the object to be measured. It can be seen that the accuracy of the target depth image of the object obtained by applying the solution provided in the embodiment of the present application can be improved.
  • a computer-readable storage medium in which a computer program is stored.
  • the computer program is executed by a processor, the steps of any of the above-mentioned data acquisition methods are implemented.
  • a computer program product including instructions is also provided, which, when executed on a computer, enables the computer to execute any of the data acquisition methods in the above embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium can be any available medium that a computer can access or a data storage device such as a server, a data center, etc. that contains one or more available media integrated.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), etc.
  • the data acquisition device includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein: the depth camera is used to: acquire an initial depth image of the object to be measured; the laser light source is used to: emit laser light to the scanning mirror; the scanning mirror is used to: reflect the laser light to the object to be measured in a scanning manner; the laser camera is used to: acquire a line laser image of the object to be measured during the scanning process; the processor is used to: fuse different line laser images based on the initial depth image to obtain a target depth image of the object to be measured.
  • the depth camera is used to: acquire an initial depth image of the object to be measured
  • the laser light source is used to: emit laser light to the scanning mirror
  • the scanning mirror is used to: reflect the laser light to the object to be measured in a scanning manner
  • the laser camera is used to: acquire a line laser image of the object to be measured during the scanning process
  • the processor is used to: fuse different line laser images based on the initial depth image to obtain a target depth image of the object to be measured
  • the depth camera can be used to first obtain the initial depth image of the object to be measured, and then the laser light source can be used to emit laser light to the scanning mirror, the scanning mirror can reflect the laser light to the object to be measured in a scanning manner, the laser camera can acquire a line laser image of the object to be measured during the scanning process, and the processor can combine the initial depth image to obtain the target depth image of the object to be measured.
  • it can be seen that applying the solution provided in the embodiment of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
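The final assembly step summarized above (stripes of line laser depth placed according to their matching-region positions) can be illustrated as follows. This is a toy sketch; the stripe representation, row-indexed placement, and all names are assumptions made for the example, not the disclosed implementation.

```python
# Toy sketch of the final fusion step: each line laser "stripe" carries
# a row of depth values plus the row index of its matching region in the
# reference depth image; stripes are placed at those rows to assemble
# the target depth image.

def assemble_target_depth(stripes, rows, cols):
    target = [[0.0] * cols for _ in range(rows)]
    for row_index, depths in stripes:
        target[row_index] = list(depths)  # drop the stripe at its region
    return target

stripes = [(0, [1.0, 1.1]), (1, [1.2, 1.3])]
print(assemble_target_depth(stripes, 2, 2))  # [[1.0, 1.1], [1.2, 1.3]]
```

The key idea carried over from the text: the coarse depth image supplies the *placement* of each stripe, while the stripes themselves supply the high-quality depth values, which is why surface glare that corrupts a single stripe does not corrupt the overall layout.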

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A data acquisition device, method, apparatus, and storage medium, relating to the technical field of data processing. The data acquisition device includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein: the depth camera is used to acquire an initial depth image of an object to be measured; the laser light source is used to emit laser light toward the scanning mirror; the scanning mirror is used to reflect the laser light toward the object to be measured in a scanning manner; the laser camera is used to acquire line laser images of the object to be measured during the scanning process; and the processor is used to fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured. Applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained depth image.

Description

Data acquisition device, method, apparatus, and storage medium
This application claims priority to the Chinese patent application No. 202211182006.6, filed with the China National Intellectual Property Administration on September 27, 2022 and entitled "Data acquisition device, method, apparatus, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of data processing, and in particular to a data acquisition device, method, apparatus, and storage medium.
Background
To obtain three-dimensional data of an object such as a bolt or a wrench, a depth image of the object can be obtained, and three-dimensional data such as the length, width, and height of the object can be determined from the depth image.
In the related art, a laser light source is typically used to emit laser light toward the object, a camera then captures an image of the object, deformation information of the laser light in the image is detected, and a depth image of the object is obtained from the deformation information; the three-dimensional data of the object can subsequently be derived from the depth information in the depth image.
Although a depth image of the object can be obtained with the above solution, reflection, refraction, and the like at the object surface may interfere with the laser light projected onto the object, so that the deformation information detected in the image has low accuracy, and the obtained depth image of the object is therefore of low accuracy.
Summary
The purpose of the embodiments of the present application is to provide a data acquisition device, method, apparatus, and storage medium, so as to improve the accuracy of the obtained depth image of an object. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present application provides a data acquisition device, the data acquisition device including: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein:
the depth camera is used to: acquire an initial depth image of an object to be measured;
the laser light source is used to: emit laser light toward the scanning mirror; the scanning mirror is used to: reflect the laser light toward the object to be measured in a scanning manner;
the laser camera is used to: acquire line laser images of the object to be measured during the scanning process;
the processor is used to: fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
In one embodiment of the present application, the processor is specifically used to:
for each line laser image, determine a matching region of the line laser image in the initial depth image;
fuse the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, the processor is specifically used to:
for different line laser images, fuse each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured;
the depth camera is specifically used to: acquire multiple frames of initial depth images of the object to be measured;
the processor is specifically used to:
fuse the acquired multiple frames of initial depth images to obtain a fused depth image;
for each line laser image, determine a matching region of the line laser image in the fused depth image;
fuse the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, in a dynamic scanning scenario, the depth camera is specifically used to: during the dynamic scanning process, acquire multiple frames of initial depth images of the object to be measured, wherein the acquisition moment of each frame of initial depth image is synchronized with the acquisition moment of each frame of line laser image;
the processor is specifically used to:
for each line laser image, determine a matching region of the line laser image in the synchronously acquired initial depth image;
fuse the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, the processor is specifically used to:
for each line laser image, determine a matching region of the line laser image in the synchronously acquired initial depth image;
for different initial depth images, obtain the differences between the different initial depth images, compute pose change information of the depth camera using those differences, and use the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
fuse the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, the depth camera includes: a time-of-flight (TOF) light source and a TOF camera, wherein the TOF light source is used to: emit TOF light toward the object to be measured;
the TOF camera is used to: obtain the initial depth image of the object to be measured according to the flight time of the TOF light, wherein the flight time is: the duration from the moment the TOF light source emits the TOF light to the moment the TOF camera receives the TOF light reflected by the object to be measured.
In one embodiment of the present application, the laser camera is a binocular camera;
and/or
the laser light source is a multi-line laser light source or a single-line laser light source.
In one embodiment of the present application, the scanning mirror includes a reflector and a motion mechanism, and the motion mechanism is used to drive the reflector to move.
In a second aspect, an embodiment of the present application provides a data acquisition method, the method being applied to a processor in a data acquisition device, the data acquisition device further including: a depth camera, a laser light source, a scanning mirror, and a laser camera;
the method includes:
receiving an initial depth image of an object to be measured obtained by the depth camera;
receiving line laser images of the object to be measured acquired by the laser camera, wherein the line laser images are acquired by the laser camera while the scanning mirror reflects, in a scanning manner and toward the object to be measured, the laser light emitted by the laser light source toward the scanning mirror;
fusing different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
In one embodiment of the present application, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
for each line laser image, determining a matching region of the line laser image in the initial depth image;
fusing the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
for different line laser images, fusing each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured;
fusing the received multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determining a matching region of the line laser image in the fused depth image; and fusing the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, in a dynamic scanning scenario, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
for each line laser image, determining a matching region of the line laser image in the synchronously acquired initial depth image;
fusing the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, fusing the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured, includes:
for different initial depth images, obtaining the differences between the different initial depth images, computing pose change information of the depth camera using those differences, and using the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
fusing the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In a third aspect, an embodiment of the present application provides a data acquisition apparatus, the apparatus being arranged in a processor in a data acquisition device, the data acquisition device further including: a depth camera, a laser light source, a scanning mirror, and a laser camera;
the apparatus includes:
a first image receiving module, used to receive an initial depth image of an object to be measured obtained by the depth camera;
a second image receiving module, used to receive line laser images of the object to be measured acquired by the laser camera, wherein the line laser images are acquired by the laser camera while the scanning mirror reflects, in a scanning manner and toward the object to be measured, the laser light emitted by the laser light source toward the scanning mirror;
a target depth map determination module, used to fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
In one embodiment of the present application, the target depth map determination module is specifically used to:
for each line laser image, determine a matching region of the line laser image in the initial depth image;
fuse the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, the target depth map determination module is specifically used to:
for different line laser images, fuse each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured;
fuse the received multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determine a matching region of the line laser image in the fused depth image; and fuse the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, in a dynamic scanning scenario, the target depth map determination module includes:
a region determination submodule, used to determine, for each line laser image, a matching region of the line laser image in the synchronously acquired initial depth image;
a target depth map determination submodule, used to fuse the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In one embodiment of the present application, the target depth map determination submodule is specifically used to:
for different initial depth images, obtain the differences between the different initial depth images, compute pose change information of the depth camera using those differences, and use the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
fuse the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method steps of any one of the second aspect.
In a fifth aspect, an embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, causes the computer to execute the method steps of any one of the second aspect.
Beneficial effects of the embodiments of the present application:
In the data acquisition solution provided by the embodiments of the present application, the data acquisition device includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein: the depth camera is used to acquire an initial depth image of an object to be measured; the laser light source is used to emit laser light toward the scanning mirror; the scanning mirror is used to reflect the laser light toward the object to be measured in a scanning manner; the laser camera is used to acquire line laser images of the object to be measured during the scanning process; and the processor is used to fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured. In this way, the depth camera can be used to obtain the initial depth image of the object to be measured, the laser light source emits laser light toward the scanning mirror, the scanning mirror reflects the laser light toward the object in a scanning manner, the laser camera acquires line laser images of the object during the scanning process, and the processor combines the initial depth image and the line laser images to obtain the target depth image of the object. Since the energy of a laser line is concentrated, the imaging quality of line laser images is high; for example, line laser images can achieve good imaging quality on highly reflective or black surfaces. Based on line laser images of higher imaging quality, the embodiments of the present application can obtain a more accurate target depth image. It can thus be seen that applying the solutions provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present application and constitute a part of the present application; the illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an undue limitation of the present application.
FIG. 1 is a schematic structural diagram of a data acquisition device provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of another data acquisition device provided in an embodiment of the present application;
FIG. 3 is a schematic flowchart of a data acquisition method provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a data acquisition apparatus provided in an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
To improve the accuracy of the obtained depth image of an object, the embodiments of the present application provide a data acquisition device, method, apparatus, and storage medium, which are described in detail below.
An embodiment of the present application provides a data acquisition device, the data acquisition device including: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein:
the depth camera is used to: acquire an initial depth image of an object to be measured;
the laser light source is used to: emit laser light toward the scanning mirror; the scanning mirror is used to: reflect the laser light toward the object to be measured in a scanning manner;
the laser camera is used to: acquire line laser images of the object to be measured during the scanning process;
the processor is used to: fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
In this way, the depth camera can first be used to obtain the initial depth image of the object to be measured, the laser light source then emits laser light toward the scanning mirror, the scanning mirror reflects the laser light toward the object in a scanning manner, the laser camera acquires line laser images of the object during the scanning process, and the processor combines the initial depth image and the line laser images to obtain the target depth image of the object. It can thus be seen that applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
The data acquisition device is described in detail below.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of a data acquisition device provided in an embodiment of the present application. The data acquisition device includes: a depth camera 101, a laser light source 102, a scanning mirror 103, a laser camera 104, and a processor 105, wherein:
the depth camera 101 is used to: acquire an initial depth image of an object to be measured;
the laser light source 102 is used to: emit laser light toward the scanning mirror 103; the scanning mirror 103 is used to: reflect the laser light toward the object to be measured in a scanning manner;
the process in which the laser light source 102 emits laser light toward the scanning mirror 103 so as to scan the object to be measured may be referred to as galvanometer scanning.
the laser camera 104 is used to: acquire line laser images of the object to be measured during the scanning process;
the processor 105 is used to: fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
The laser light source 102 is a multi-line laser light source or a single-line laser light source.
The laser light emitted by the laser light source 102 may be a line-shaped beam, a planar beam, a dot-matrix beam, etc. The laser camera 104 may be a monocular camera, a binocular camera, etc.
The relative pose relationship between the depth camera 101 and the laser camera 104 remains unchanged during image acquisition.
For example, the depth camera 101 and the laser camera 104 may be fixedly connected by a connector such as a connecting rod or a connecting plate, so as to ensure that the relative pose relationship between the depth camera 101 and the laser camera 104 remains unchanged.
For another example, the depth camera 101 and the laser camera 104 may be installed separately. In this case, the poses of the two cameras may be calibrated at installation time to ensure that their relative pose relationship remains unchanged during image acquisition. For instance, the relative pose relationship of the two cameras may be preset; when installing them, one camera is installed first, and the pose of the other camera is then determined from the pose of the installed camera and the preset relative pose relationship, so that the other camera is installed according to the determined pose.
The depth camera 101 and the laser camera 104 are each communicatively connected with the processor 105 in a wired or wireless manner, so that the processor 105 can obtain the images of the object to be measured acquired by the depth camera 101 and the laser camera 104.
The object to be measured may be a person, a bolt, a wrench, etc.
The processor 105 may fuse the different line laser images based on camera parameters of the depth camera 101 and the laser camera 104. The camera parameters may include: the intrinsic parameters of the depth camera 101, the intrinsic parameters of the laser camera 104, the pose transformation relationship between the depth camera 101 and the laser camera 104, etc., where the intrinsic parameters refer to the focal length, principal point, distortion coefficients, etc. The camera parameters may be calibrated in advance.
The laser light source 102 may emit laser light toward the scanning mirror 103, the scanning mirror 103 reflects the laser light onto the object to be measured, and the laser camera 104 then acquires line laser images of the object. Since the surface of the object is often not flat, the laser line distributed over the object surface in a line laser image is deformed, and this deformation can reflect the depth information of the object surface. The laser camera 104 may send the line laser image to the processor 105.
The scanning mirror 103 may be a mechanical rotating mirror or a MEMS (Micro-Electro-Mechanical System) rotating mirror.
In one embodiment of the present application, the scanning mirror 103 includes a reflector and a motion mechanism, and the motion mechanism is used to drive the reflector to move.
The processor 105 may receive the initial depth image and the line laser images, and then, with the initial depth image as a reference, determine the target depth image of the object to be measured according to the line laser images and the camera parameters of the depth camera 101 and the laser camera 104.
When fusing different line laser images, the processor 105 may fuse different frames of line laser images based on the same frame of initial depth image, or based on different frames of initial depth images.
Various implementations in which the processor 105 fuses the line laser images are described in subsequent embodiments and are not detailed here.
The depth camera 101 is used to acquire the initial depth image of the object to be measured.
In one embodiment of the present application, the depth camera 101 may be a TOF (Time of Flight) camera assembly including a TOF light source and a TOF camera. The TOF light emitted by the TOF light source may be a point-shaped beam, a line-shaped beam, a planar beam, a dot-matrix beam, a light spot, etc. The TOF camera may be a monocular camera, a binocular camera, etc.
Specifically, the TOF camera may use iTOF (Indirect Time-of-Flight) or DTOF (Direct Time-of-Flight) technology for depth image acquisition and can obtain an initial depth image with centimeter-level accuracy.
The TOF camera is used to: obtain the initial depth image of the object to be measured according to the flight time of the TOF light, wherein the flight time is: the duration from the moment the TOF light source emits the TOF light to the moment the TOF camera receives the TOF light reflected by the object to be measured.
Specifically, the TOF light source may emit TOF light toward the object to be measured, the object reflects the TOF light, and the TOF camera receives the TOF light reflected by the object. The TOF camera determines the moment at which the TOF light source emitted the TOF light and the moment at which the TOF camera received the reflected TOF light, and computes the time difference between the emission moment and the reception moment as the flight time of the TOF light from the TOF light source to the object and from the object back to the TOF camera. From this flight time and the propagation speed of the TOF light, the distance between the TOF camera and the object is computed, and from this distance a depth image of the object is obtained as the initial depth image, which the TOF camera may send to the processor 105. In this way, the depth camera 101 can first be used to obtain the initial depth image of the object to be measured, the laser light source 102 then emits laser light toward the scanning mirror 103, the scanning mirror 103 reflects the laser light toward the object in a scanning manner, the laser camera 104 acquires line laser images during the scanning process, and the processor 105 combines the initial depth image and the line laser images to obtain the target depth image of the object. It can thus be seen that applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
The implementation in which the above processor fuses the line laser images is described below.
In one embodiment of the present application, the processor 105 is specifically used to:
for each line laser image, determine a matching region of the line laser image in the initial depth image;
fuse the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Specifically, a line laser image and the initial depth image are images of the same object to be measured acquired by different cameras; both therefore contain an object region corresponding to the same object. Moreover, since the laser camera acquires images of the object while the laser light scans it, the region where the laser line lies in a line laser image belongs to the object region of the object. In view of this, the matching region of a line laser image in the initial depth image can be understood as the region, in the initial depth image, corresponding to the part of the object covered by the laser line in the line laser image.
For example, the object to be measured may be a bolt, the line laser image may be an image of the bolt acquired by the laser camera when the laser line scans the bolt head, and the initial depth image is an image of the bolt acquired by the depth camera. In this case, the matching region of the line laser image in the initial depth image can be understood as the region in the initial depth image corresponding to the bolt head.
Different line laser images are acquired at different moments, so the region where the laser line lies differs between images. For each line laser image, the matching region of that line laser image can be determined in the initial depth image.
In one embodiment of the present application, for each line laser image, image feature matching may be performed between the line laser image and the initial depth image to determine the region in the initial depth image corresponding to the same real-world scene as the line laser image. That is, image features are extracted from the region where the laser line lies in the line laser image, image features are extracted from the initial depth image, and feature matching is performed between the features extracted from the two images; the features of the initial depth image that match the image features of the line laser image are thereby determined, and the image region represented by the matched features is determined in the initial depth image as the matching region.
The same real-world scene can be understood as the object region of the object to be measured; for example, if the object is a bolt, the object region may be the region where the bolt head lies. The region in the initial depth image corresponding to the same real-world scene is the matching region.
After the matching regions corresponding to the line laser images are determined, the region contents where the laser lines lie in the line laser images may be stitched together according to the distribution positions of those matching regions in the initial depth image, so as to obtain a stitched image as the target depth image.
In one embodiment of the present application, the line laser images may be fused based on solutions in the related art, which are not described in detail here.
Scanning the object to be measured with laser light can be divided into static scanning and dynamic scanning. In a static scanning scenario, the object and the data acquisition device remain relatively stationary during scanning; in a dynamic scanning scenario, there is relative motion between the object and the data acquisition device during scanning.
Specific implementations in which the processor 105 fuses the line laser images in the static scanning scenario and the dynamic scanning scenario are described separately below.
For example, in a static scanning scenario, the processor 105 may fuse the line laser images in either of the following two implementations.
In the first implementation, the processor 105 is specifically used to:
for different line laser images, fuse each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured.
In the second implementation, the depth camera 101 is specifically used to: acquire multiple frames of initial depth images of the object to be measured;
the processor 105 is specifically used to:
fuse the acquired multiple frames of initial depth images to obtain a fused depth image;
for each line laser image, determine a matching region of the line laser image in the fused depth image;
fuse the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
For another example, in a dynamic scanning scenario, the depth camera 101 is specifically used to: during the dynamic scanning process, acquire multiple frames of initial depth images of the object to be measured, wherein the acquisition moment of each frame of initial depth image is synchronized with the acquisition moment of each frame of line laser image;
the processor 105 is specifically used to:
for each line laser image, determine a matching region of the line laser image in the synchronously acquired initial depth image;
fuse the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Since different line laser images are acquired at different moments, the process of image feature matching between line laser images acquired at different moments and initial depth images may be referred to as feature matching in the time domain.
In one embodiment of the present application, the processor 105 is specifically used to:
for each line laser image, determine a matching region of the line laser image in the synchronously acquired initial depth image;
for different initial depth images, obtain the differences between the different initial depth images, compute pose change information of the depth camera 101 using those differences, and use the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
fuse the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of another data acquisition device provided in an embodiment of the present application. The data acquisition device includes a TOF light source, a TOF camera, a laser light source, a laser camera, a processor, and a galvanometer; the laser camera can be divided into a left camera and a right camera. The TOF light source, TOF camera, laser light source, laser camera, processor, and galvanometer are fixedly mounted on a connecting plate, wherein:
the galvanometer is similar to the above scanning mirror; both are used to reflect laser light toward the object to be measured in a scanning manner.
the laser light source is used to: emit laser light toward the galvanometer, and the galvanometer reflects the received laser light onto the object to be measured, so as to scan the object to be measured with the laser light;
the laser camera is used to: acquire multiple frames of laser images of the object to be measured during the scanning process;
the TOF light source is used to: emit TOF light toward the object to be measured;
the TOF camera is used to: during the scanning process, obtain multiple frames of initial depth images of the object to be measured according to the flight time of the TOF light, wherein the acquisition moment of each frame of initial depth image is synchronized with the acquisition moment of each frame of laser image;
the processor is used to: for each laser image, with the initial depth image acquired synchronously with that laser image as a reference, obtain a reference depth image of the object to be measured according to the laser image and the camera parameters of the TOF camera and the laser camera; and fuse the multiple reference depth images to obtain the target depth image of the object to be measured.
In the data acquisition solution provided by the above embodiment, the data acquisition device includes: a time-of-flight TOF light source, a TOF camera, a laser light source, a laser camera, and a processor, wherein: the TOF light source is used to: emit TOF light toward the object to be measured; the TOF camera is used to: obtain the initial depth image of the object to be measured according to the flight time of the TOF light, wherein the flight time is: the duration from the moment the TOF light source emits the TOF light to the moment the TOF light reflected by the object to be measured is received; the laser light source is used to: emit laser light toward the object to be measured; the laser camera is used to: acquire laser images of the object to be measured; the processor is used to: with the initial depth image as a reference, determine the target depth image of the object to be measured according to the laser images and the camera parameters of the TOF camera and the laser camera. In this way, the initial depth image of the object can first be obtained using the TOF light, and the initial depth image and the laser images can then be fused based on the camera parameters of the TOF camera and the laser camera, so that the depth image of the object is obtained by combining the initial depth image and the laser images. It can thus be seen that applying the solution provided by the above embodiment can improve the accuracy of the obtained depth image of the object.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a data acquisition method provided in an embodiment of the present application. The method is applied to a processor in a data acquisition device, the data acquisition device further including: a depth camera, a laser light source, a scanning mirror, and a laser camera. Specifically, the relative pose relationship between the depth camera and the laser camera remains unchanged during image acquisition; in one case, the depth camera is fixedly connected to the laser camera.
The method includes:
S301: receiving an initial depth image of an object to be measured obtained by the depth camera.
S302: receiving line laser images of the object to be measured acquired by the laser camera.
The line laser images are acquired by the laser camera while the scanning mirror reflects, in a scanning manner and toward the object to be measured, the laser light emitted by the laser light source toward the scanning mirror.
S303: fusing different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
In addition, the embodiments of the present application do not limit the execution order of step S301 and step S302: step S301 may be executed first and then step S302; step S302 may be executed first and then step S301; or step S301 and step S302 may be executed in parallel.
In one embodiment of the present application, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
for each line laser image, determining a matching region of the line laser image in the initial depth image;
fusing the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Applying the data acquisition solution provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
for different line laser images, fusing each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured.
Applying the data acquisition solution provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
fusing the acquired multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determining a matching region of the line laser image in the fused depth image; and fusing the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
When data is acquired with the solution provided by this embodiment, the fused depth image is derived from multiple frames of initial depth images and therefore contains more comprehensive three-dimensional information of the object to be measured. Thus, for each line laser image, the matching region corresponding to that line laser image can be accurately determined in the fused depth image, and the different line laser images are fused according to the distribution positions of these more accurate matching regions to obtain the above target depth image, which can improve the accuracy of the obtained target depth image.
In one embodiment of the present application, in a dynamic scanning scenario, fusing different line laser images with the initial depth image as a reference to obtain the target depth image of the object to be measured includes:
for each line laser image, determining a matching region of the line laser image in the synchronously acquired initial depth image;
fusing the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Applying the data acquisition solution provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
In one embodiment of the present application, fusing the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured, includes:
for different initial depth images, computing pose change information of the depth camera using the differences between the different initial depth images, and using the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
fusing the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
When data is acquired with the solution provided by this embodiment, the positions of the matching regions corresponding to the different line laser images in the different initial depth images can be corrected. This improves the accuracy of those matching regions, so that fusing the different line laser images according to the distribution positions of the position-corrected matching regions can improve the accuracy of the obtained target depth image.
In addition to the beneficial effects mentioned above, each of the above method embodiments also has the following beneficial effects:
In each of the above method embodiments, a depth camera can be used to obtain the initial depth image of the object to be measured, the laser light source emits laser light toward the scanning mirror, the scanning mirror reflects the laser light toward the object in a scanning manner, the laser camera acquires line laser images of the object during the scanning process, and the processor combines the initial depth image and the line laser images to obtain the target depth image of the object. It can thus be seen that applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a data acquisition apparatus provided in an embodiment of the present application. The apparatus is arranged in a processor in a data acquisition device, the data acquisition device further including: a depth camera, a laser light source, a scanning mirror, and a laser camera. Specifically, the relative pose relationship between the depth camera and the laser camera remains unchanged during image acquisition; in one case, the depth camera is fixedly connected to the laser camera.
The apparatus includes:
a first image receiving module 401, used to receive an initial depth image of an object to be measured obtained by the depth camera;
a second image receiving module 402, used to receive line laser images of the object to be measured acquired by the laser camera, wherein the line laser images are acquired by the laser camera while the scanning mirror reflects, in a scanning manner and toward the object to be measured, the laser light emitted by the laser light source toward the scanning mirror;
a target depth map determination module 403, used to fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
When data is acquired with the solution provided by the embodiments of the present application, since the energy of a laser line is concentrated, the imaging quality of line laser images is high; based on line laser images of higher imaging quality, the embodiments of the present application can obtain a more accurate target depth image. It can thus be seen that applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
In one embodiment of the present application, the target depth map determination module 403 is specifically used to:
for each line laser image, determine a matching region of the line laser image in the initial depth image;
fuse the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Applying the data acquisition solution provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, the target depth map determination module 403 is specifically used to:
for different line laser images, fuse each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured.
Applying the data acquisition solution provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
In one embodiment of the present application, in a static scanning scenario, the target depth map determination module 403 is specifically used to:
fuse the acquired multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determine a matching region of the line laser image in the fused depth image; and fuse the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
When data is acquired with the solution provided by this embodiment, the fused depth image is derived from multiple frames of initial depth images and therefore contains more comprehensive three-dimensional information of the object to be measured. Thus, for each line laser image, the matching region corresponding to that line laser image can be accurately determined in the fused depth image, and the different line laser images are fused according to the distribution positions of these more accurate matching regions to obtain the above target depth image, which can improve the accuracy of the obtained target depth image.
In one embodiment of the present application, in a dynamic scanning scenario, the target depth map determination module 403 includes:
a region determination submodule, used to determine, for each line laser image, a matching region of the line laser image in the synchronously acquired initial depth image;
a target depth map determination submodule, used to fuse the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
Applying the data acquisition solution provided by the embodiments of the present application can improve the accuracy of the obtained target depth image of the object to be measured.
In one embodiment of the present application, the target depth map determination submodule is specifically used to:
for different initial depth images, compute pose change information of the depth camera using the differences between the different initial depth images, and use the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
fuse the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
When data is acquired with the solution provided by this embodiment, the positions of the matching regions corresponding to the different line laser images in the different initial depth images can be corrected. This improves the accuracy of those matching regions, so that fusing the different line laser images according to the distribution positions of the position-corrected matching regions can improve the accuracy of the obtained target depth image.
In addition to the beneficial effects mentioned above, each of the above apparatus embodiments also has the following beneficial effects:
In each of the above apparatus embodiments, a depth camera can be used to obtain the initial depth image of the object to be measured, the laser light source then emits laser light toward the scanning mirror, the scanning mirror reflects the laser light toward the object in a scanning manner, the laser camera acquires line laser images of the object during the scanning process, and the processor combines the initial depth image and the line laser images to obtain the target depth image of the object. It can thus be seen that applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, the computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above data acquisition methods.
In yet another embodiment provided by the present application, a computer program product containing instructions is further provided which, when run on a computer, causes the computer to execute any of the data acquisition methods in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that the computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), etc.
In the data acquisition solution provided by the above embodiments, the data acquisition device includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein: the depth camera is used to: acquire an initial depth image of an object to be measured; the laser light source is used to: emit laser light toward the scanning mirror; the scanning mirror is used to: reflect the laser light toward the object to be measured in a scanning manner; the laser camera is used to: acquire line laser images of the object to be measured during the scanning process; the processor is used to: fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured. In this way, the depth camera can first be used to obtain the initial depth image of the object to be measured, the laser light source then emits laser light toward the scanning mirror, the scanning mirror reflects the laser light toward the object in a scanning manner, the laser camera acquires line laser images of the object during the scanning process, and the processor combines the initial depth image and the line laser images to obtain the target depth image of the object. It can thus be seen that applying the solution provided in the embodiments of the present application can improve the accuracy of the obtained target depth image of the object.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus embodiment, the electronic device embodiment, the computer-readable storage medium embodiment, and the computer program product embodiment are described relatively simply since they are substantially similar to the method embodiment; for relevant parts, reference may be made to the description of the method embodiment.
The above descriptions are only preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (16)

  1. A data acquisition device, characterized in that the data acquisition device includes: a depth camera, a laser light source, a scanning mirror, a laser camera, and a processor, wherein:
    the depth camera is used to: acquire an initial depth image of an object to be measured;
    the laser light source is used to: emit laser light toward the scanning mirror; the scanning mirror is used to: reflect the laser light toward the object to be measured in a scanning manner;
    the laser camera is used to: acquire line laser images of the object to be measured during the scanning process;
    the processor is used to: fuse different line laser images, with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
  2. The device according to claim 1, characterized in that
    the processor is specifically used to: for each line laser image, determine a matching region of the line laser image in the initial depth image; and fuse the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  3. The device according to claim 1, characterized in that, in a static scanning scenario,
    the processor is specifically used to: for different line laser images, fuse each line laser image with the same initial depth image as the reference, to obtain the target depth image of the object to be measured;
    the depth camera is specifically used to: acquire multiple frames of initial depth images of the object to be measured; the processor is specifically used to: fuse the acquired multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determine a matching region of the line laser image in the fused depth image; and fuse the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  4. The device according to claim 1, characterized in that, in a dynamic scanning scenario,
    the depth camera is specifically used to: during the dynamic scanning process, acquire multiple frames of initial depth images of the object to be measured, wherein the acquisition moment of each frame of initial depth image is synchronized with the acquisition moment of each frame of line laser image;
    the processor is specifically used to: for each line laser image, determine a matching region of the line laser image in the synchronously acquired initial depth image; and fuse the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  5. The device according to claim 4, characterized in that
    the processor is specifically used to: for each line laser image, determine a matching region of the line laser image in the synchronously acquired initial depth image; for different initial depth images, obtain the differences between the different initial depth images, compute pose change information of the depth camera using those differences, and use the computed pose change information to correct the positions of the matching regions corresponding to the different line laser images in the different initial depth images; and fuse the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  6. The device according to any one of claims 1-5, characterized in that the depth camera includes: a time-of-flight TOF light source and a TOF camera, wherein the TOF light source is used to: emit TOF light toward the object to be measured;
    the TOF camera is used to: obtain the initial depth image of the object to be measured according to the flight time of the TOF light, wherein the flight time is: the duration from the moment the TOF light source emits the TOF light to the moment the TOF camera receives the TOF light reflected by the object to be measured.
  7. The device according to any one of claims 1-5, characterized in that the laser camera is a binocular camera;
    and/or
    the laser light source is a multi-line laser light source or a single-line laser light source.
  8. The device according to any one of claims 1-5, characterized in that the scanning mirror includes a reflector and a motion mechanism, and the motion mechanism is used to drive the reflector to move.
  9. 一种数据采集方法,其特征在于,所述方法应用于数据采集设备中的处理器,所述数据采集设备还包括:深度相机、激光光源、扫描转镜、激光相机;
    所述方法包括:
    接收所述深度相机获得的待测对象的初始深度图像;
    接收所述激光相机采集的所述待测对象的线激光图像,其中,所述线激光图像是:所述激光相机在所述扫描转镜以扫描的方式向所述待测对象反射所述激光光源向所述扫描转镜发射的激光光线的过程中采集的;
    以所述初始深度图为基准,对不同线激光图像进行融合,得到所述待测对象的目标深度图像。
  10. The method according to claim 9, characterized in that the fusing different line laser images with the initial depth image as a reference, to obtain a target depth image of the object to be measured, comprises:
    for each line laser image, determining a matching region of that line laser image in the initial depth image;
    fusing the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
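As a rough illustration of the two steps in claim 10 (not the claimed implementation), the sketch below matches each high-accuracy line laser profile to the row of the coarse reference depth image it most resembles, then assembles the profiles into the target depth image at those matched positions. The L1 row-matching criterion and the row-wise data layout are assumptions made purely for illustration:

```python
def match_row(reference: list[list[float]], profile: list[float]) -> int:
    """Index of the reference-depth-image row closest (L1 distance)
    to the given line laser profile: the 'matching region'."""
    def l1(row: list[float]) -> float:
        return sum(abs(a - b) for a, b in zip(row, profile))
    return min(range(len(reference)), key=lambda i: l1(reference[i]))

def fuse(reference: list[list[float]],
         profiles: list[list[float]]) -> list[list[float]]:
    """Start from the coarse reference image and overwrite each matched
    row with the corresponding high-accuracy line laser profile."""
    target = [row[:] for row in reference]
    for profile in profiles:
        target[match_row(reference, profile)] = profile[:]
    return target

reference = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]   # coarse depth image
profiles = [[2.05, 1.95], [0.98, 1.02]]             # line laser profiles
fused = fuse(reference, profiles)
print(fused)  # each profile lands on the row it best matches
```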
  11. The method according to claim 9, characterized in that, in a static scanning scenario, the fusing different line laser images with the initial depth image as a reference, to obtain a target depth image of the object to be measured, comprises:
    for the different line laser images, fusing the individual line laser images with the same initial depth image as a reference, to obtain the target depth image of the object to be measured;
    fusing the received multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determining a matching region of that line laser image in the fused depth image; and fusing the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  12. The method according to claim 9, characterized in that, in a dynamic scanning scenario, the fusing different line laser images with the initial depth image as a reference, to obtain a target depth image of the object to be measured, comprises:
    for each line laser image, determining a matching region of that line laser image in the synchronously acquired initial depth image;
    fusing the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  13. The method according to claim 12, characterized in that the fusing the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured, comprises:
    for the different initial depth images, obtaining differences between the different initial depth images, calculating pose change information of the depth camera using the differences between the different initial depth images, and correcting, using the calculated pose change information, the positions of the matching regions corresponding to the different line laser images in the different initial depth images;
    fusing the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
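The position correction of claim 13 can be hinted at with a deliberately simplified sketch: the difference between two depth frames yields a pose-change estimate (here collapsed to a 1-D integer row shift, which is an illustrative assumption, not the claimed pose computation), and a matched region's position is corrected by that estimate before fusion:

```python
def estimate_shift(prev: list[float], curr: list[float],
                   max_shift: int = 3) -> int:
    """Integer shift minimising the mean L1 difference between two
    depth profiles: a toy stand-in for estimating the depth camera's
    pose change from the difference between initial depth images."""
    def cost(s: int) -> float:
        pairs = [(prev[i], curr[i + s]) for i in range(len(prev))
                 if 0 <= i + s < len(curr)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=cost)

def correct_region(region_start: int, shift: int) -> int:
    """Apply the estimated pose change to a matched region's position."""
    return region_start + shift

prev = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
curr = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]  # prev shifted by two samples
shift = estimate_shift(prev, curr)
print(shift, correct_region(10, shift))
```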
  14. A data acquisition apparatus, characterized in that the apparatus is provided in a processor in a data acquisition device, the data acquisition device further comprising: a depth camera, a laser light source, a scanning rotating mirror, and a laser camera;
    the apparatus comprises:
    a first image receiving module, configured to receive an initial depth image of an object to be measured obtained by the depth camera;
    a second image receiving module, configured to receive line laser images of the object to be measured acquired by the laser camera, wherein the line laser images are: acquired by the laser camera during the process in which the scanning rotating mirror reflects, in a scanning manner, toward the object to be measured the laser light emitted by the laser light source toward the scanning rotating mirror;
    a target depth image determining module, configured to fuse different line laser images with the initial depth image as a reference, to obtain a target depth image of the object to be measured.
  15. The apparatus according to claim 14, characterized in that
    the target depth image determining module is specifically configured to: for each line laser image, determine a matching region of that line laser image in the initial depth image; and fuse the different line laser images according to the distribution positions, in the initial depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured;
    or,
    in a static scanning scenario, the target depth image determining module is specifically configured to: for the different line laser images, fuse the individual line laser images with the same initial depth image as a reference, to obtain the target depth image of the object to be measured; or fuse the received multiple frames of initial depth images to obtain a fused depth image; for each line laser image, determine a matching region of that line laser image in the fused depth image; and fuse the different line laser images according to the distribution positions, in the fused depth image, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured;
    or,
    in a dynamic scanning scenario, the target depth image determining module comprises: a region determining submodule, configured to, for each line laser image, determine a matching region of that line laser image in the synchronously acquired initial depth image; and a target depth image determining submodule, configured to fuse the different line laser images according to the distribution positions, in the different initial depth images, of the matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured;
    or,
    the target depth image determining submodule is specifically configured to: for the different initial depth images, obtain differences between the different initial depth images, calculate pose change information of the depth camera using the differences between the different initial depth images, and correct, using the calculated pose change information, the positions of the matching regions corresponding to the different line laser images in the different initial depth images; and fuse the different line laser images according to the distribution positions, in the different initial depth images, of the position-corrected matching regions corresponding to the different line laser images, to obtain the target depth image of the object to be measured.
  16. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 9-13 are implemented.
PCT/CN2023/099341 2022-09-27 2023-06-09 Data acquisition device, method and apparatus, and storage medium WO2024066471A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211182006.6A CN115588037A (zh) 2022-09-27 2022-09-27 Data acquisition device, method and apparatus, and storage medium
CN202211182006.6 2022-09-27

Publications (2)

Publication Number Publication Date
WO2024066471A1 true WO2024066471A1 (zh) 2024-04-04
WO2024066471A9 WO2024066471A9 (zh) 2024-05-16

Family

ID=84772687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/099341 WO2024066471A1 (zh) 2022-09-27 2023-06-09 Data acquisition device, method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN115588037A (zh)
WO (1) WO2024066471A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588037A (zh) * 2022-09-27 2023-01-10 杭州海康机器人股份有限公司 Data acquisition device, method and apparatus, and storage medium
CN116984628B (zh) * 2023-09-28 2023-12-29 西安空天机电智能制造有限公司 Powder-spreading defect detection method based on laser feature fusion imaging

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021212916A1 (zh) * 2020-04-20 2021-10-28 奥比中光科技集团股份有限公司 TOF depth measurement apparatus and method, and electronic device
WO2022017366A1 (zh) * 2020-07-23 2022-01-27 华为技术有限公司 Depth imaging method and depth imaging system
CN114998499A (zh) * 2022-06-08 2022-09-02 深圳大学 Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN115588037A (zh) * 2022-09-27 2023-01-10 杭州海康机器人股份有限公司 Data acquisition device, method and apparatus, and storage medium

Also Published As

Publication number Publication date
WO2024066471A9 (zh) 2024-05-16
CN115588037A (zh) 2023-01-10

Similar Documents

Publication Publication Date Title
WO2024066471A9 (zh) Data acquisition device, method and apparatus, and storage medium
CN104483676B (zh) 3D/2D non-scanning lidar composite imaging device
JP2003130621A (ja) Three-dimensional shape measurement method and device therefor
CN1507742A (zh) Infrared camera sensitive to infrared radiation
CN107063117A (zh) Underwater laser synchronous-scanning triangulation ranging imaging system and method based on light-field imaging
CN209156612U (zh) Remote automatic laser cleaning system
CN109387354A (zh) Optical scanner testing device and testing method
CN115597551B (zh) Handheld laser-assisted binocular scanning device and method
CN201594861U (zh) Multi-band image fusion infrared imaging system
JP2005216160A (ja) Image generation device, intruder monitoring device, and image generation method
CN111024242A (zh) Infrared thermal imager and continuous autofocus method therefor
JP3986748B2 (ja) Three-dimensional image detection device
TW200817651A (en) Distance measurement system and method
WO2022100668A1 (zh) Temperature measurement method, apparatus and system, storage medium, and program product
WO2022127212A1 (zh) Three-dimensional scanning and ranging device and method
CN107608072A (zh) Optical scanning device of an image scanning system
CN117553919A (zh) Relay temperature field measurement method based on three-dimensional point cloud and thermal image
CN112230244B (zh) Fused depth measurement method and measurement device
CN104048813A (zh) Method and device for recording the process of laser damage to an optical element
WO2023185375A1 (zh) Depth map generation system and method, and autonomous mobile device
CN107462967A (zh) Focusing method and system for laser ranging
CN112749610A (zh) Method and apparatus for generating depth images and reference structured-light images, and electronic device
CN206348459U (zh) Three-dimensional visual sensing device based on multi-sensor fusion
CN115190285B (zh) 3D image acquisition system and method
US11367220B1 (en) Localization of lens focus parameter estimation and subsequent camera calibration

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23869720

Country of ref document: EP

Kind code of ref document: A1