CN114782632A - Image reconstruction method, device and equipment - Google Patents

Image reconstruction method, device and equipment

Info

Publication number
CN114782632A
CN114782632A (application CN202210470234.7A)
Authority
CN
China
Prior art keywords
texture
point
image
pixel
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210470234.7A
Other languages
Chinese (zh)
Inventor
李耿磊
陈澄
盛鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikrobot Technology Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd filed Critical Hangzhou Hikrobot Technology Co Ltd
Priority to CN202210470234.7A priority Critical patent/CN114782632A/en
Publication of CN114782632A publication Critical patent/CN114782632A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light


Abstract

The application provides an image reconstruction method, apparatus and device, including the following steps: determining a plurality of key point pairs based on N first light bar center lines in a first target image and N second light bar center lines in a second target image, wherein each key point pair includes a first pixel point in a first light bar center line and a second pixel point in a second light bar center line; for each first pixel point in a first light bar center line, determining a plurality of candidate matching points corresponding to the first pixel point from all the second light bar center lines, determining a texture fusion feature corresponding to the first pixel point based on the first texture image, and determining a texture fusion feature corresponding to each candidate matching point based on the second texture image; selecting, from the candidate matching points, the second pixel point corresponding to the first pixel point based on the texture fusion feature corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points; and generating a three-dimensional reconstructed image based on the three-dimensional points corresponding to the plurality of key point pairs. The scheme of the application solves the multi-line laser mismatching problem.

Description

Image reconstruction method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image reconstruction method, apparatus, and device.
Background
The three-dimensional imaging device may be composed of a laser and a camera. The laser projects line-structured light onto the surface of a measured object, and the camera photographs the measured object to obtain an image carrying the line-structured light, i.e., a line-structured light image. After the line-structured light image is obtained, the light bar center line of the line-structured light image can be extracted and converted according to pre-calibrated sensor parameters, so that the spatial coordinates (i.e., three-dimensional coordinates) of the measured object at the current position are obtained. Based on the spatial coordinates of the measured object at the current position, three-dimensional reconstruction of the measured object can be achieved.
To achieve three-dimensional reconstruction of the measured object, line-structured light images at different positions of the measured object need to be collected; that is, the laser projects line-structured light to different positions of the measured object, each position corresponding to one line-structured light image. Because the camera only collects the line-structured light image corresponding to one position at a time, the camera needs to collect multiple line-structured light images to complete the three-dimensional reconstruction, so the three-dimensional reconstruction takes a long time.
Disclosure of Invention
The application provides an image reconstruction method, which is applied to three-dimensional imaging equipment, wherein the three-dimensional imaging equipment comprises at least one texture projector, a first camera, a second camera and a multi-line laser, and the method comprises the following steps:
for each texture projector, when the texture projector projects a regular texture pattern to a measured object, acquiring a first background image acquired by a first camera and a second background image acquired by a second camera; determining a first texture image corresponding to the first background image and a second texture image corresponding to the second background image;
when the multi-line laser projects N lines of structured light to the measured object, acquiring a first original image acquired by the first camera and a second original image acquired by the second camera; and determining a first target image corresponding to the first original image and a second target image corresponding to the second original image;
determining a plurality of key point pairs based on N first light bar center lines in the first target image and N second light bar center lines in the second target image, wherein each key point pair comprises a first pixel point in a first light bar center line and a second pixel point in a second light bar center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object; for each first pixel point in a first light bar center line, determining a plurality of candidate matching points corresponding to the first pixel point from all the second light bar center lines, determining a texture fusion feature corresponding to the first pixel point based on all the first texture images, and determining a texture fusion feature corresponding to each candidate matching point based on all the second texture images; selecting the second pixel point corresponding to the first pixel point from the candidate matching points based on the texture fusion feature corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points; and generating a three-dimensional reconstructed image based on the three-dimensional points corresponding to the plurality of key point pairs.
The application further provides an image reconstruction apparatus applied to a three-dimensional imaging device, the three-dimensional imaging device comprising at least one texture projector, a first camera, a second camera and a multi-line laser, and the apparatus comprising:
the acquiring module is used for, for each texture projector, acquiring a first background image acquired by the first camera and a second background image acquired by the second camera when the texture projector projects a regular texture pattern to the measured object; and, when the multi-line laser projects N lines of structured light to the measured object, acquiring a first original image acquired by the first camera and a second original image acquired by the second camera; the determining module is used for determining a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image and a second target image corresponding to the second original image; determining a plurality of key point pairs based on N first light bar center lines in the first target image and N second light bar center lines in the second target image, wherein each key point pair comprises a first pixel point in a first light bar center line and a second pixel point in a second light bar center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object; for each first pixel point in a first light bar center line, determining a plurality of candidate matching points corresponding to the first pixel point from all the second light bar center lines, determining a texture fusion feature corresponding to the first pixel point based on all the first texture images, and determining a texture fusion feature corresponding to each candidate matching point based on all the second texture images; and selecting the second pixel point corresponding to the first pixel point from the candidate matching points based on the texture fusion feature corresponding to the first pixel point and the texture fusion features corresponding to each candidate matching point; and the generating module is used for generating a three-dimensional reconstructed image based on the three-dimensional points corresponding to the plurality of key point pairs.
The application provides a three-dimensional imaging device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to implement the image reconstruction method disclosed in the above example of the present application.
It can be seen from the above technical solutions that, in the embodiments of the present application, the multi-line laser projects N lines of structured light to the measured object each time, where N is a positive integer greater than 1, such as 7, 11 or 15, so that each line-structured light image acquired by the camera includes N light bar center lines. One such image is equivalent to line-structured light images at N positions of the measured object, which reduces the number of acquisitions and the three-dimensional reconstruction time. When the multi-line laser scans the surface of the measured object, the entire contour data of the measured object can be acquired rapidly and the three-dimensional image information of the measured object can be output, improving both detection precision and detection speed. By using the first camera and the second camera to acquire line-structured light images simultaneously, the three-dimensional information of the measured object can be obtained by triangulation based on the images acquired by the two cameras; since the depth information for all N laser lines is obtained from a single acquisition, the scanning efficiency is increased N times per acquisition, and full-width scanning of the entire contour of the measured object is achieved quickly. By acquiring the first texture image and the second texture image corresponding to the regular texture pattern, when a first pixel point corresponds to a plurality of candidate matching points, the second pixel point corresponding to the first pixel point can be selected from the candidate matching points based on all the first texture images and all the second texture images. This solves the multi-line laser mismatching problem, in particular the mismatching of ultra-multi-line lasers (such as 11 lines and above) in complex environments (such as large depth-of-field occlusion and reflective metal workpieces), so that high-precision, high-efficiency and high-quality point cloud and depth map data can be acquired.
Drawings
Fig. 1 is a schematic structural diagram of a three-dimensional imaging apparatus in an embodiment of the present application;
FIG. 2 is a schematic view of a regular texture pattern in one embodiment of the present application;
FIG. 3 is a schematic control diagram of a texture projector and a multi-line laser in one embodiment of the present application;
FIG. 4 is a flowchart illustrating an image reconstruction method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating an image reconstruction method according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a center point corresponding to a first pixel point in one embodiment of the present application;
FIG. 7 is a schematic representation of triangulation in one embodiment of the present application;
fig. 8 is a schematic structural diagram of an image reconstruction apparatus according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In order to acquire a three-dimensional reconstructed image, the related art may employ a surface projection structured light method, a binocular speckle method, a Time of Flight (TOF) method, a single-line laser profile scanning method, and the like. The surface projection structured light method uses DLP (Digital Light Processing) or LCD (Liquid Crystal Display) projection with an LED (Light Emitting Diode) light source; the projector is bulky, the projected energy is diffused, and under a large field of view and at a long distance the volume and power consumption are large, which is unfavorable for three-dimensional positioning applications. The binocular speckle method relies on binocular parallax and laser speckle binocular matching; its detection precision is low and edge contours are poorly resolved, which is unfavorable for profile scanning and three-dimensional positioning applications. The TOF method is limited by the camera resolution, its detection precision is at the centimeter level, and it does not meet the requirements of automatic high-precision positioning. The single-line laser profile scanning method scans the depth information of an object with a single laser line, but the scanning speed is slow and the stability is poor, which does not meet the positioning requirements.
Taking the single-line laser profile scanning method as an example, a laser can be used to project line-structured light onto the surface of the measured object, and a camera photographs the measured object to obtain an image with line-structured light, i.e., a line-structured light image. After the line-structured light image is obtained, the spatial coordinates (i.e., three-dimensional coordinates) of the measured object at the current position can be obtained based on the line-structured light image, so as to realize three-dimensional reconstruction of the measured object. However, to achieve this, line-structured light images at different positions of the measured object need to be collected, that is, the laser projects line-structured light to different positions of the measured object, and each position corresponds to one line-structured light image. Because the camera only collects the line-structured light image corresponding to one position at a time, multiple acquisitions are needed to complete the three-dimensional reconstruction, so the reconstruction time is long, the scanning speed is slow, the stability is poor, and the positioning requirements of three-dimensional reconstruction are not met.
In view of the above findings, the embodiment of the present application provides a three-dimensional imaging method using multi-line laser scanning, which can acquire depth information of an object to be measured by using a triangulation method, and form optical scanning on the surface of the object to be measured by using multi-line laser, thereby quickly acquiring entire profile data of the object to be measured and outputting three-dimensional image information of the object to be measured. The three-dimensional imaging method of the multi-line laser scanning can be applied to the field of machine vision and the field of industrial automation, can be used for realizing three-dimensional measurement and robot positioning, and is not limited to the application scene.
In this embodiment, the depth information of the measured object can be obtained by triangulation. Because the depth information for all laser lines of the multi-line laser (for example, 10 lines or 20 lines) is obtained from a single acquisition, the scanning efficiency is improved 10-20 times per acquisition, and the entire contour of the measured object can then be scanned across its full width by driving the multi-line laser to scan.
In the embodiment, the problems of large volume, large power consumption and the like of the surface projection structured light method can be solved, the problems of low detection precision, poor edge cutting edge profile and the like of the binocular speckle method can be solved, the problems of low detection precision and the like of the TOF method can be solved, and the problems of low scanning speed, poor stability and the like of the single-line laser profile scanning method can be solved. In summary, the three-dimensional imaging method of multi-line laser scanning in the embodiment is a three-dimensional scanning imaging method with high precision, low cost, small volume and low power consumption, and has faster detection speed and higher detection precision.
In this embodiment, when one pixel point corresponds to multiple candidate matching points, its matching pixel point is selected from the multiple candidate matching points based on the texture images, so that the multi-line laser mismatching problem can be solved, especially the mismatching of ultra-multi-line lasers (such as 11 lines or more) in complex environments (such as large depth-of-field occlusion and reflective metal workpieces), and high-precision, high-efficiency, high-quality point cloud and depth map data can be obtained.
The embodiment of the application provides a three-dimensional imaging method of multi-line laser scanning, which can be applied to three-dimensional imaging equipment, does not limit the type of the three-dimensional imaging equipment, and can be any equipment with a three-dimensional imaging function, such as any equipment in the field of machine vision or industrial automation, and the like.
Referring to fig. 1, a schematic diagram of a three-dimensional imaging device is shown, which may include, but is not limited to: the device comprises a left camera, a right camera, a multi-line laser, at least one texture projector, a rotating mechanism, a high reflector, a control unit, a data processing unit, an external output interface and the like.
Illustratively, the left camera and the right camera may be monochrome cameras, each fitted at the front end with a band-pass filter matched to the wavelength of the multi-line laser, so that only light within the laser wavelength range is allowed to pass, i.e., only the laser light reflected by the measured object is received. A reflected image of the laser lines is thus obtained, which improves contrast and reduces ambient light interference.
Illustratively, the multi-line laser is a laser capable of emitting multiple laser lines simultaneously, and may be composed of a laser diode, a collimating lens, and a multi-line DOE (Diffractive Optical Element). The laser diode may be a high-power red laser diode with a wavelength of 635 nm, 660 nm or another wavelength. Furthermore, the multi-line DOE may emit 10 laser lines, 11 laser lines, 25 laser lines, etc. simultaneously, which is not limited.
Illustratively, the texture projector is used for projecting a regular texture pattern to the object to be measured, and the regular texture pattern may be a regular texture pattern composed of at least one of points, lines, triangles, parallelograms, and circles. For example, based on basic geometric shapes such as points, lines, triangles, parallelograms, circles, and the like, regular texture patterns may be formed according to a certain rule, as shown in fig. 2, which shows several regular texture patterns, such as regular texture patterns of a regular lattice, regular texture patterns of a regular linear array, regular texture patterns of a regular grid, and regular texture patterns of a regular concentric circle. Of course, the regular texture pattern shown in fig. 2 is only a few examples, and the regular texture pattern is not limited in this embodiment, and may be a regular texture pattern of any shape.
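As an illustration only, a regular dot-lattice pattern of the kind shown in fig. 2 can be rendered in a few lines of Python; the image size, spacing and dot radius below are arbitrary illustrative values, not parameters taken from this application.

```python
import cv2
import numpy as np

def make_dot_lattice(width=1280, height=800, pitch=40, radius=4):
    """Render a regular dot-lattice texture pattern (white dots on black).

    pitch and radius are illustrative values only.
    """
    pattern = np.zeros((height, width), dtype=np.uint8)
    for y in range(pitch // 2, height, pitch):
        for x in range(pitch // 2, width, pitch):
            cv2.circle(pattern, (x, y), radius, 255, thickness=-1)
    return pattern

if __name__ == "__main__":
    cv2.imwrite("regular_dot_lattice.png", make_dot_lattice())
```

A regular linear array, grid or concentric-circle pattern could be drawn in the same way with cv2.line or cv2.circle.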
For the at least one texture projector, when each texture projector projects a regular texture pattern to the measured object, the regular texture patterns projected by different texture projectors may be the same or different.
For example, taking as an example that the at least one texture projector is a first texture projector and a second texture projector, the regular texture pattern projected by the first texture projector may be a regular texture pattern of a regular grid, and the regular texture pattern projected by the second texture projector may be a regular texture pattern of regular concentric circles.
Illustratively, the control unit is used for controlling the turning on and off of the multi-line laser, the texture projector, the left camera and the right camera and controlling the rotating operation of the rotating mechanism. Assuming that the at least one texture projector is a first texture projector and a second texture projector, the control unit may, as illustrated in fig. 3:
first, the control unit controls the rotating mechanism to move to the initial position a, and when the rotating mechanism is located at the initial position a, the control unit controls the first texture projector to start, controls the left camera to acquire the background image Lb1 of the object to be measured, and controls the right camera to acquire the background image Rb1 of the object to be measured.
Then, the control unit controls the first texture projector to be turned off, controls the second texture projector to be turned on, controls the left camera to acquire a background image Lb2 of the object to be measured, and controls the right camera to acquire a background image Rb2 of the object to be measured in a case where the first texture projector is turned off and the second texture projector is turned on.
Then, the control unit controls the second texture projector to be turned off and controls the multi-line laser to be started, under the conditions that the first texture projector is turned off, the second texture projector is turned off and the multi-line laser is started, the multi-line laser can emit multi-line laser (namely, line structure light), the multi-line laser irradiates the surface of the measured object after being reflected by the high reflector, and on the basis, the control unit controls the left camera to collect a line structure light image L1 of the measured object and controls the right camera to collect a line structure light image R1 of the measured object.
Then, the control unit controls the rotating mechanism to move to a position B next to the initial position A, when the rotating mechanism is located at the position B, the control unit controls the multi-line laser to start, the first texture projector and the second texture projector are both closed at the moment, the multi-line laser emits multi-line laser, the control unit controls the left camera to collect a line structure light image L2 of the measured object, and controls the right camera to collect a line structure light image R2 of the measured object.
Then, the control unit controls the rotating mechanism to move to the next position C of the position B, when the rotating mechanism is located at the position C, the control unit controls the multi-line laser to start, controls the left camera to collect a line structure light image L3 of the measured object, and controls the right camera to collect a line structure light image R3 of the measured object.
And analogizing until the control unit controls the rotating mechanism to move to the last position, controlling the left camera to collect the line structure light image Lm of the measured object, and controlling the right camera to collect the line structure light image Rm of the measured object, so that the image collection process is completed, and the control unit controls the multi-line laser to be turned off.
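The acquisition sequence of fig. 3 can be summarised in pseudocode-style Python; the rig object and its methods below are a hypothetical abstraction of the control unit, texture projectors, cameras, multi-line laser and rotating mechanism, not an interface defined by this application.

```python
# Hypothetical sketch of the acquisition sequence of Fig. 3. The `rig` object and
# its attributes (texture_projectors, laser, left_camera, right_camera, rotate_to)
# are assumed abstractions, not an API defined by this application.
def acquire_sequence(rig, positions):
    frames = {}
    rig.rotate_to(positions[0])                       # initial position A
    for idx, projector in enumerate(rig.texture_projectors, start=1):
        projector.on()
        frames[f"Lb{idx}"] = rig.left_camera.grab()   # background image, left camera
        frames[f"Rb{idx}"] = rig.right_camera.grab()  # background image, right camera
        projector.off()
    rig.laser.on()                                    # all projectors off, multi-line laser on
    for i, pos in enumerate(positions, start=1):
        rig.rotate_to(pos)                            # positions A, B, C, ...
        frames[f"L{i}"] = rig.left_camera.grab()      # line-structured light image, left
        frames[f"R{i}"] = rig.right_camera.grab()     # line-structured light image, right
    rig.laser.off()
    return frames
```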
Illustratively, the data processing unit may be a processor, such as a CPU or GPU, and the data processing unit may acquire the above-described images, such as the background image Lb1, the background image Rb1, the background image Lb2, the background image Rb2, the line-structured light image L1, the line-structured light image R1, the line-structured light image L2, the line-structured light image R2, and so on. Based on these images, the data processing unit can obtain a three-dimensional reconstructed image and output it through the external output interface.
For example, based on the background image Lb1, the background image Rb1, the background image Lb2, the background image Rb2, the line-structured light image L1, and the line-structured light image R1, the data processing unit may determine a three-dimensional point (i.e., a three-dimensional point cloud) at the initial position a. Based on the background image Lb1, the background image Rb1, the background image Lb2, the background image Rb2, the line-structured light image L2, and the line-structured light image R2, the data processing unit may determine a three-dimensional point at the position B, and so on, may obtain three-dimensional points at all positions. On the basis, the three-dimensional points at all positions are spliced to obtain complete three-dimensional points of the surface of the measured object, and then the three-dimensional reconstruction image is obtained.
The following describes an image reconstruction method according to an embodiment of the present application with reference to specific embodiments. Referring to fig. 4, which is a flowchart illustrating an image reconstruction method, the method may be applied to a three-dimensional imaging device, the three-dimensional imaging device may include at least one texture projector, a first video camera, a second video camera, and a multi-line laser, the first video camera may be a left camera, the second video camera may be a right camera, or the first video camera may be a right camera, and the second video camera may be a left camera, and the method may include:
step 401, for each texture projector, when the texture projector projects a regular texture pattern to a measured object, acquiring a first background image acquired by a first camera and a second background image acquired by a second camera; and determining a first texture image corresponding to the first background image and a second texture image corresponding to the second background image. When each texture projector projects a regular texture pattern to a measured object, the regular texture patterns projected by different texture projectors can be the same or different; illustratively, the regular texture pattern may be a regular texture pattern composed of at least one shape of a dot, a line, a triangle, a parallelogram, and a circle.
Step 402, when the multi-line laser projects N lines of structured light (i.e., laser lines) to the measured object, acquiring a first original image of the measured object acquired by the first camera and a second original image of the measured object acquired by the second camera; and determining a first target image corresponding to the first original image and a second target image corresponding to the second original image. Wherein N may be a positive integer greater than 1.
For example, epipolar correction may be performed on the first background image, the second background image, the first original image, and the second original image to obtain a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image, and a second target image corresponding to the second original image; and the epipolar line correction is used for enabling the same position point on the measured object to have the same pixel height in the first texture image, the second texture image, the first target image and the second target image.
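As an illustration of the epipolar correction described above, the following sketch performs stereo rectification with OpenCV; it assumes the intrinsics K1, K2, distortion vectors d1, d2 and extrinsics R, T come from the stored camera calibration parameters, and is a minimal sketch rather than the exact implementation used by the device.

```python
import cv2

def rectify_pair(img_left, img_right, K1, d1, K2, d2, R, T):
    """Row-align a left/right image pair so that the same object point
    falls on the same pixel row (pixel height) in both rectified images."""
    h, w = img_left.shape[:2]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_left = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    rect_right = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    return rect_left, rect_right
```

The same remapping would be applied to the background images and the original images so that all of them share the row correspondence.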
Step 403, determining a plurality of key point pairs based on the N first light bar center lines in the first target image and the N second light bar center lines in the second target image. Each key point pair may comprise a first pixel point in a first light bar center line and a second pixel point in a second light bar center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object. For each first pixel point in a first light bar center line, a plurality of candidate matching points corresponding to the first pixel point can be determined from all the second light bar center lines, a texture fusion feature corresponding to the first pixel point can be determined based on all the first texture images, and a texture fusion feature corresponding to each candidate matching point can be determined based on all the second texture images. On this basis, the second pixel point corresponding to the first pixel point is selected from the candidate matching points based on the texture fusion feature corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points.
For example, since N lines of structured light are projected to the measured object by the multi-line laser, the first original image includes N first light bar regions corresponding to the N lines of structured light, the second original image includes N second light bar regions corresponding to the N lines of structured light, the first target image also includes N first light bar regions corresponding to the N lines of structured light, and the second target image also includes N second light bar regions corresponding to the N lines of structured light. On this basis, the first light bar center line corresponding to each first light bar region in the first target image can be determined, i.e., N first light bar center lines are obtained, and the second light bar center line corresponding to each second light bar region in the second target image can be determined, i.e., N second light bar center lines are obtained.
For example, determining the texture fusion feature corresponding to the first pixel point based on all the first texture images and determining the texture fusion feature corresponding to each candidate matching point based on all the second texture images may include, but is not limited to: and determining a first texture point corresponding to the first pixel point from the first texture image, wherein the pixel coordinate of the first texture point in the first texture image is the same as the pixel coordinate of the first pixel point in the first target image. Acquiring texture features of a texture area taking the first texture point as a center; and fusing the texture features corresponding to all the first texture images to obtain the texture fusion features corresponding to the first pixel points. For each candidate matching point, determining a second texture point corresponding to the candidate matching point from a second texture image, wherein the pixel coordinate of the second texture point in the second texture image is the same as the pixel coordinate of the candidate matching point in a second target image; acquiring texture features of a texture area taking the second texture point as a center; and fusing texture features corresponding to all the second texture images to obtain texture fusion features corresponding to the candidate matching points.
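The text does not fix how the texture features are computed or fused; one plausible reading, sketched below, takes a fixed-size patch centred on the texture point in every texture image and concatenates the mean-normalised patches into a single fused feature vector. The patch size and the concatenation-based fusion are assumptions for illustration.

```python
import numpy as np

def texture_fusion_feature(texture_images, x, y, half=5):
    """Concatenate the patches centred on pixel (x, y) from every texture image
    into one fused feature vector. Patch size (2*half+1) is illustrative; the
    point is assumed to lie far enough from the image border."""
    feats = []
    for img in texture_images:                # all first (or all second) texture images
        patch = img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
        patch -= patch.mean()                 # simple photometric normalisation (assumption)
        feats.append(patch.ravel())
    return np.concatenate(feats)
```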
For example, the selecting a second pixel point corresponding to the first pixel point from the multiple candidate matching points based on the texture fusion feature corresponding to the first pixel point and the texture fusion feature corresponding to each candidate matching point may include, but is not limited to: and determining the similarity between the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points, selecting the maximum similarity from the similarities corresponding to the candidate matching points, and selecting the candidate matching point corresponding to the maximum similarity as a second pixel point corresponding to the first pixel point.
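Under the same assumptions, selecting the second pixel point reduces to picking the candidate with the highest similarity; the sketch below uses cosine similarity of the fused features, one common choice, since the text does not prescribe a particular similarity measure.

```python
import numpy as np

def pick_best_match(feat_first, candidate_feats):
    """Return the index (and score) of the candidate matching point whose texture
    fusion feature is most similar to that of the first pixel point."""
    sims = []
    for feat in candidate_feats:
        denom = np.linalg.norm(feat_first) * np.linalg.norm(feat) + 1e-12
        sims.append(float(np.dot(feat_first, feat) / denom))
    best = int(np.argmax(sims))
    return best, sims[best]
```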
For example, determining a plurality of candidate matching points corresponding to the first pixel point from all the second light bar centerlines may include, but is not limited to: converting the first pixel point into a plurality of target three-dimensional reconstruction points based on a plurality of calibration equations; and each calibration equation represents the functional relationship between a pixel point in the first target image and the three-dimensional reconstruction point. Converting the target three-dimensional reconstruction points into a plurality of projection pixel points in a second target image; and determining a central point with the same pixel height as the first pixel point from the central line of the second optical bar based on the central line of each second optical bar, and determining whether the central point is a candidate matching point corresponding to the first pixel point or not based on the distance between the central point and each projection pixel point. Or, based on the central line of each second optical bar, determining a central point with the same pixel height as the first pixel point from the central line of the second optical bar, and determining the central point as a candidate matching point, that is, the central points with the same pixel height as the first pixel point are all used as candidate matching points, and the candidate matching points are not required to be selected from all the central points based on the distance between the projection pixel point and the central point.
For example, determining that the central point is a candidate matching point corresponding to the first pixel point or is not a candidate matching point corresponding to the first pixel point based on the distance between the central point and each of the projected pixel points may include: if the distance between the central point and each projection pixel point is larger than a first distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point; if the distance between the central point and any projection pixel point is not larger than a first distance threshold value, determining that the central point is a candidate matching point corresponding to the first pixel point; or determining a distance average value based on the distance between the center point and each projection pixel point, if the distance average value is greater than a second distance threshold, determining that the center point is not a candidate matching point corresponding to the first pixel point, and if the distance average value is not greater than the second distance threshold, determining that the center point is the candidate matching point corresponding to the first pixel point.
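Both screening rules above can be sketched in a few lines; the thresholds are left as parameters because the text does not fix their values.

```python
import numpy as np

def is_candidate(center_pt, projected_pts, dist_thresh=None, mean_thresh=None):
    """Decide whether a center point on a second light bar center line is a
    candidate matching point, given the projections of the first pixel point
    into the second target image. Exactly one of the two thresholds is used:
    dist_thresh implements the per-projection rule, mean_thresh the mean-distance rule."""
    dists = [np.hypot(center_pt[0] - p[0], center_pt[1] - p[1]) for p in projected_pts]
    if dist_thresh is not None:                   # rule 1: keep if close to any projection
        return min(dists) <= dist_thresh
    return float(np.mean(dists)) <= mean_thresh   # rule 2: keep if the mean distance is small
```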
And step 404, generating a three-dimensional reconstruction image based on the three-dimensional points corresponding to the plurality of key point pairs.
For example, for each key point pair, the three-dimensional point corresponding to the key point pair may be determined based on the key point pair and the camera calibration parameter. The camera calibration parameters may include camera internal parameters of the first camera, camera internal parameters of the second camera, and camera external parameters between the first camera and the second camera. Distortion correction can be carried out on the first pixel point through camera internal parameters of the first video camera, and the pixel point after distortion correction is converted into a first homogeneous coordinate; distortion correction is carried out on the second pixel point through camera internal parameters of a second camera, and the pixel point after distortion correction is converted into a second homogeneous coordinate; and determining the three-dimensional points corresponding to the key point pairs by utilizing a triangulation mode based on the first homogeneous coordinate, the second homogeneous coordinate, the camera internal reference of the first video camera, the camera internal reference of the second video camera and the camera external reference between the first video camera and the second video camera.
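A minimal sketch of the distortion correction and triangulation described above, using OpenCV; the first camera is taken as the world origin, which is an assumption consistent with common stereo practice rather than something stated in the text.

```python
import cv2
import numpy as np

def triangulate_pair(pt1, pt2, K1, d1, K2, d2, R, T):
    """Triangulate one key point pair (pt1 in the first image, pt2 in the second
    image) into a 3D point, after per-camera distortion correction."""
    # Undistort and normalise each pixel; the normalised image-plane coordinates
    # play the role of the homogeneous coordinates mentioned in the text.
    n1 = cv2.undistortPoints(np.array([[pt1]], dtype=np.float64), K1, d1)
    n2 = cv2.undistortPoints(np.array([[pt2]], dtype=np.float64), K2, d2)
    # Projection matrices in the first camera's frame (assumed world origin).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, T.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, n1.reshape(2, 1), n2.reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()          # Euclidean 3D point
```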
According to the above technical solution, in the embodiments of the present application, the multi-line laser projects N lines of structured light to the measured object each time, where N is a positive integer greater than 1, such as 7, 11 or 15, so that each line-structured light image acquired by the camera includes N light bar center lines. One such image is equivalent to line-structured light images at N positions of the measured object, which reduces the number of acquisitions and the three-dimensional reconstruction time. When the multi-line laser scans the surface of the measured object, the entire contour data of the measured object can be acquired rapidly and the three-dimensional image information of the measured object can be output, improving both detection precision and detection speed. By using the first camera and the second camera to acquire line-structured light images simultaneously, the three-dimensional information of the measured object can be obtained by triangulation based on the images acquired by the two cameras; since the depth information for all N laser lines is obtained from a single acquisition, the scanning efficiency is increased N times per acquisition, and full-width scanning of the entire contour of the measured object is achieved quickly. By acquiring the first texture image and the second texture image corresponding to the regular texture pattern, when a first pixel point corresponds to a plurality of candidate matching points, the second pixel point corresponding to the first pixel point can be selected from the candidate matching points based on all the first texture images and all the second texture images. This solves the multi-line laser mismatching problem, in particular the mismatching of ultra-multi-line lasers (11 lines and above) in complex environments (such as large depth-of-field occlusion and reflective metal workpieces), so that high-precision, high-efficiency and high-quality point cloud and depth map data can be acquired.
The above technical solution of the embodiment of the present application is described below with reference to specific application scenarios.
In this application scenario, two texture projectors are taken as an example; when there are more texture projectors, the implementation is similar and is not repeated here. The two texture projectors are denoted as a first texture projector and a second texture projector, so the three-dimensional imaging device includes a first camera, a second camera, a multi-line laser, a first texture projector and a second texture projector. The first camera may be the left camera and the second camera the right camera, or the first camera may be the right camera and the second camera the left camera. The camera calibration parameters and the calibration equations corresponding to the three-dimensional imaging device can be obtained in advance and stored.
For example, the camera calibration parameters may include a camera internal parameter of the first video camera, a camera internal parameter of the second video camera, and a camera external parameter between the first video camera and the second video camera. The camera intrinsic parameters of the first video camera may be parameters related to the characteristics of the first video camera itself, such as focal length, pixel size, distortion factor, etc. The camera parameters of the second camera may be parameters related to the characteristics of the second camera itself, such as focal length, pixel size, distortion factor, etc. The camera external parameters between the first camera and the second camera are parameters in a world coordinate system, such as the position and the rotation direction of the first camera, the position and the rotation direction of the second camera, the position relationship between the first camera and the second camera, for example, a rotation matrix and a translation matrix, and the like.
The camera internal reference of the first video camera is an intrinsic parameter of the first video camera, and the camera internal reference of the first video camera is already given by the first video camera when the first video camera is shipped. The camera parameter of the second video camera is an intrinsic parameter of the second video camera, and the camera parameter of the second video camera is already given by the second video camera when the second video camera is shipped.
With respect to camera extrinsic parameters between the first camera and the second camera, such as a rotation matrix and a translation matrix, a plurality of calibration points may be deployed in the target scene, a first calibration image of the target scene is acquired by the first camera, the first calibration image including the plurality of calibration points, and a second calibration image of the target scene is acquired by the second camera, the second calibration image including the plurality of calibration points. Based on the pixel coordinates of the plurality of calibration points in the first calibration image and the pixel coordinates of the plurality of calibration points in the second calibration image, the camera external parameters between the first camera and the second camera can be determined, and the determination process of the camera external parameters is not limited.
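The text leaves the determination process of the extrinsics open; one standard approach, sketched below under the assumption that matched calibration points with known 3D coordinates are available per view, is OpenCV's stereoCalibrate with the factory intrinsics held fixed.

```python
import cv2

def calibrate_extrinsics(obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, image_size):
    """Estimate the rotation R and translation T between the two cameras from
    matched calibration points (one standard approach; the text does not fix
    the method). obj_pts/img_pts1/img_pts2 are lists of per-view point arrays."""
    flags = cv2.CALIB_FIX_INTRINSIC        # intrinsics are already known from the factory
    _ret, _, _, _, _, R, T, _E, _F = cv2.stereoCalibrate(
        obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, image_size, flags=flags)
    return R, T
```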
Illustratively, the calibration equation represents a functional relationship between a pixel point in the image captured by the first camera and a three-dimensional reconstruction point. If the multi-line laser projects N lines of structured light to the measured object, N calibration equations are obtained, and each calibration equation may be a light plane equation. The calibration equations may be obtained as follows:
step S11, when the multi-line laser projects N line structured light onto the white background plate, acquiring an image S1 for the white background plate acquired by the first camera, and acquiring an image S2 for the white background plate acquired by the second camera. The image s1 may include N first light bar regions corresponding to the N striped lights, where the N first light bar regions correspond to the N striped lights one to one, and the image s2 may include N second light bar regions corresponding to the N striped lights, where the N second light bar regions correspond to the N striped lights one to one.
Step S12, determining a first light bar centerline corresponding to each first light bar region in the image S1, and determining a second light bar centerline corresponding to each second light bar region in the image S2, that is, obtaining N first light bar centerlines corresponding to N structured lights, and obtaining N second light bar centerlines corresponding to N structured lights.
Step S13, determining a plurality of key point pairs based on the central lines of all the first light bars and the central lines of all the second light bars, where each key point pair includes a first central point in the central lines of the first light bars and a second central point in the central lines of the second light bars, and the first central point and the second central point are pixel points corresponding to the same position point on the white background plate.
For example, assuming that the N line structured light is line structured light 1 and line structured light 2, image s1 includes first light bar region 1 corresponding to line structured light 1 and first light bar region 2 corresponding to line structured light 2, and image s2 includes second light bar region 1 corresponding to line structured light 1 and second light bar region 2 corresponding to line structured light 2. The first light bar region 1 corresponds to a first light bar central line 1, the first light bar region 2 corresponds to a first light bar central line 2, the second light bar region 1 corresponds to a second light bar central line 1, and the second light bar region 2 corresponds to a second light bar central line 2.
For example, since the measured object is a white background plate, the light bar regions are relatively clear and no spurious points are generated when the multi-line laser projects the 2 lines of structured light onto the white background plate. Therefore, when the first light bar center line 1 is determined based on the first light bar region 1, each row of the first light bar center line 1 has only 1 center point, and similarly, each row of the second light bar center line 1 has only 1 center point. On this basis, a key point pair 11 is formed by the first-row center point of the first light bar center line 1 and the first-row center point of the second light bar center line 1, a key point pair 12 is formed by the second-row center point of the first light bar center line 1 and the second-row center point of the second light bar center line 1, and so on. Similarly, the key point pair 21 may be formed by the first-row center point of the first light bar center line 2 and the first-row center point of the second light bar center line 2, the key point pair 22 may be formed by the second-row center point of the first light bar center line 2 and the second-row center point of the second light bar center line 2, and so on.
Step S14, for each key point pair, determining a three-dimensional point corresponding to the key point pair based on the key point pair and the camera calibration parameter, for example, determining the three-dimensional point corresponding to the key point pair in a triangularization manner, which may be referred to in subsequent embodiments, and is not described herein again.
And step S15, determining a calibration equation corresponding to each line structure light based on the key point pairs and the three-dimensional points corresponding to the key point pairs. For example, based on a plurality of key point pairs (e.g., key point pair 11, key point pair 12, etc.) between the first light bar centerline 1 and the second light bar centerline 1 and the three-dimensional points corresponding to each key point pair, the calibration equation corresponding to the line structured light 1 can be determined. For example, based on a plurality of central points of the first optical stripe centerline 1 and the three-dimensional point corresponding to each central point, a calibration equation may be determined, where the calibration equation is used to represent a functional relationship between the pixel point (i.e. the central point of the first optical stripe centerline 1) in the image s1 and the three-dimensional reconstruction point (i.e. the three-dimensional point corresponding to the central point), for example, a planar model or a quadratic model is used for fitting, so as to obtain the calibration equation. Similarly, the calibration equation corresponding to the linear structured light 2 can be obtained, and by analogy, the calibration equation corresponding to each linear structured light can be obtained, that is, N calibration equations are obtained.
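For the planar-model case mentioned above, each calibration equation can be obtained by least-squares fitting a light plane to the three-dimensional points recovered for one laser line; a minimal sketch follows (the quadratic-model case would replace the plane with a quadric fit).

```python
import numpy as np

def fit_light_plane(points_3d):
    """Least-squares fit of a light plane a*x + b*y + c*z + d = 0 to the 3D
    points reconstructed for one laser line (the planar-model calibration
    equation; a quadratic surface could be fitted in the same spirit)."""
    pts = np.asarray(points_3d, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                        # direction of least variance = plane normal
    d = -float(normal @ centroid)
    return normal[0], normal[1], normal[2], d
```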
In the application scenario, referring to fig. 5, the image reconstruction method of the present embodiment may include:
step 501, when the first texture projector projects the regular texture pattern to the object to be measured, a first background image Lb1 collected by the first camera and a second background image Rb1 collected by the second camera are acquired. When the second texture projector projects the regular texture pattern to the object to be measured, a first background image Lb2 captured by the first camera, a second background image Rb2 captured by the second camera are acquired. Wherein the regular texture pattern projected by the first texture projector and the regular texture pattern projected by the second texture projector may be the same or different.
Wherein, the capturing timings of the first background image Lb1 and the second background image Rb1 may be the same, and the capturing timings of the first background image Lb2 and the second background image Rb2 may be the same.
Step 502, when the multi-line laser projects the N-line structured light to the object to be measured, acquiring a first original image of the object to be measured acquired by the first camera, and acquiring a second original image of the object to be measured acquired by the second camera, where the acquisition time of the first original image may be the same as the acquisition time of the second original image. For example, the first original image is a line-structured light image L1 and the second original image is a line-structured light image R1, or the first original image is a line-structured light image L2, the second original image is a line-structured light image R2, and so on.
Illustratively, the first original image includes N first light bar regions corresponding to N line structured lights, such as first light bar region 1 corresponding to line structured light 1, first light bar region 2 corresponding to line structured light 2, and so on. The second original image includes N second light bar regions corresponding to the N line structured lights, such as the second light bar region 1 corresponding to the line structured light 1, the second light bar region 2 corresponding to the line structured light 2, and so on.
Step 503, performing epipolar correction on the first background image Lb1, the second background image Rb1, the first background image Lb2, the second background image Rb2, the first original image and the second original image to obtain a first texture image Lb1' corresponding to the first background image Lb1, a second texture image Rb1' corresponding to the second background image Rb1, a first texture image Lb2' corresponding to the first background image Lb2, a second texture image Rb2' corresponding to the second background image Rb2, a first target image corresponding to the first original image and a second target image corresponding to the second original image. Illustratively, the epipolar correction is used to make the same position point on the measured object have the same pixel height in the first texture image Lb1', the second texture image Rb1', the first texture image Lb2', the second texture image Rb2', the first target image and the second target image.
For example, matching corresponding points in a two-dimensional space is very time-consuming. In order to reduce the matching search range, an epipolar constraint can be used to reduce corresponding-point matching from a two-dimensional search to a one-dimensional search. The epipolar correction performs row alignment on the first background image Lb1, the second background image Rb1, the first background image Lb2, the second background image Rb2, the first original image and the second original image, so as to obtain the first texture image Lb1', the second texture image Rb1', the first texture image Lb2', the second texture image Rb2', the first target image and the second target image, such that the epipolar lines of these images lie exactly on the same horizontal lines. Any point in one image and its corresponding point in another image therefore necessarily have the same row number, and only a one-dimensional search along that row is needed.
Illustratively, the first target image includes N first light bar regions corresponding to N line structured light, such as first light bar region 1 corresponding to line structured light 1, first light bar region 2 corresponding to line structured light 2, and so on. The second target image includes N second light bar regions corresponding to N line structured lights, such as second light bar region 1 corresponding to line structured light 1, second light bar region 2 corresponding to line structured light 2, and so on.
Step 504, determining a first light strip center line corresponding to each first light strip region in the first target image, namely obtaining N first light strip center lines in the first target image; and determining a second light bar central line corresponding to each second light bar area in the second target image to obtain N second light bar central lines in the second target image.
For example, each row of a first light bar region may include a plurality of pixel points, a center point of the row may be selected from the plurality of pixel points, and the center points of all rows of the first light bar region form a first light bar center line, so that the first light bar center line 1 corresponding to the first light bar region 1, the first light bar center line 2 corresponding to the first light bar region 2, and so on, are obtained. Similarly, the second light bar center line 1 corresponding to the second light bar region 1, the second light bar center line 2 corresponding to the second light bar region 2, and so on, are obtained.
For example, the light bar center line corresponding to the light bar region may be determined by using a light bar center line extraction algorithm, for example, the center point of each line of the light bar region may be extracted by using a gaussian fitting method, a COG method, or a STEGER method, so as to obtain the light bar center line.
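As one concrete possibility, the center-of-gravity (COG) extraction mentioned above can be sketched as follows; the intensity threshold and the per-row search are illustrative assumptions and are not prescribed by the patent.

import numpy as np

def cog_centerline(light_bar_region, threshold=30):
    """Center-of-gravity (COG) sketch: for every image row, take the
    intensity-weighted mean column of the pixels belonging to the light bar."""
    centers = []
    for row_idx, row in enumerate(light_bar_region):
        cols = np.where(row > threshold)[0]
        if cols.size == 0:
            continue  # no light bar pixels in this row
        weights = row[cols].astype(np.float64)
        center_col = float(np.sum(cols * weights) / np.sum(weights))
        centers.append((row_idx, center_col))  # sub-pixel column per row
    return centers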
For example, assuming that the height of the first target image and the second target image is H, each first light bar center line includes H center points (one per row), and each second light bar center line likewise includes H center points.
Step 505, for each first light bar center line (the processing of each first light bar center line is the same, so one first light bar center line is taken as an example below), for each first pixel point in the first light bar center line, the first pixel point is converted into a plurality of target three-dimensional reconstruction points based on a plurality of calibration equations. Each calibration equation represents the functional relationship between a pixel point in the first target image and a three-dimensional reconstruction point.
Exemplarily, since it is unknown which line structured light the first light bar center line corresponds to, the calibration equations corresponding to all line structured lights, that is, N calibration equations, may be obtained. Because each calibration equation represents a functional relationship between a pixel point in the first target image and a three-dimensional reconstruction point, the first pixel point may be converted into a target three-dimensional reconstruction point based on each calibration equation. For example, the first pixel point is converted into a target three-dimensional reconstruction point based on the calibration equation corresponding to line structured light 1, into another target three-dimensional reconstruction point based on the calibration equation corresponding to line structured light 2, and so on; thus the first pixel point can be converted into N target three-dimensional reconstruction points based on the N calibration equations corresponding to the N line structured lights.
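The patent does not fix the concrete form of the calibration equation. One common choice for line-structured light, used purely as an assumption for illustration, is a laser-plane model: each line laser is calibrated as a plane in the first camera's coordinate system, and a pixel is reconstructed by intersecting its back-projected viewing ray with that plane. The sketch below follows this assumption; the plane parameters, intrinsic matrix K1 and helper names are illustrative.

import numpy as np

def pixel_to_3d_on_plane(u, v, K1, plane):
    """Back-project pixel (u, v) of the first target image and intersect the
    viewing ray with the calibrated laser plane n.X + d = 0 (camera coordinates).
    Assumes the ray is not parallel to the plane."""
    n, d = np.asarray(plane[:3], dtype=np.float64), float(plane[3])
    ray = np.linalg.inv(K1) @ np.array([u, v, 1.0])  # direction of the viewing ray
    t = -d / (n @ ray)                               # ray parameter at the plane
    return t * ray                                   # target three-dimensional reconstruction point

# One candidate 3-D point per line laser (N calibration equations), e.g.:
# target_points = [pixel_to_3d_on_plane(u, v, K1, plane_i) for plane_i in laser_planes]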
Step 506, based on the plurality of target three-dimensional reconstruction points corresponding to the first pixel point, the plurality of target three-dimensional reconstruction points may be converted into a plurality of projection pixel points in the second target image.
For example, for each target three-dimensional reconstruction point, the target three-dimensional reconstruction point is a three-dimensional point in a world coordinate system, based on the position of the second camera (obtained based on camera calibration parameters), the target three-dimensional reconstruction point may be converted into a pixel point in the second target image, and the pixel point is marked as a projection pixel point corresponding to the target three-dimensional reconstruction point, which is not limited in this embodiment.
Obviously, for each target three-dimensional reconstruction point, a projection pixel point corresponding to the target three-dimensional reconstruction point can be obtained, that is, N target three-dimensional reconstruction points can correspond to N projection pixel points, that is, N projection pixel points corresponding to the first pixel point. To sum up, for each first pixel point in the center line of the first optical stripe, the first pixel point may be projected to the second target image to obtain N projected pixel points.
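The projection of the target three-dimensional reconstruction points into the second target image can be done with a standard pinhole projection. A minimal sketch is shown below, assuming the rotation R and translation T of the second camera relative to the reconstruction coordinate system and its intrinsics K2 are known from camera calibration; cv2.projectPoints is a standard OpenCV routine, while the surrounding names are illustrative.

import cv2
import numpy as np

def project_to_second_image(points_3d, R, T, K2, dist2=None):
    """Project the target three-dimensional reconstruction points into the second
    target image, yielding one projection pixel point per candidate 3-D point."""
    rvec, _ = cv2.Rodrigues(np.asarray(R, dtype=np.float64))  # rotation matrix -> rotation vector
    tvec = np.asarray(T, dtype=np.float64).reshape(3, 1)
    if dist2 is None:
        dist2 = np.zeros(5)                                   # rectified images: no distortion assumed
    pts = np.asarray(points_3d, dtype=np.float64).reshape(-1, 1, 3)
    img_pts, _ = cv2.projectPoints(pts, rvec, tvec, K2, dist2)
    return img_pts.reshape(-1, 2)                             # N projection pixel points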
Step 507, for each first pixel point in the first light bar center line, a center point with the same pixel height as the first pixel point is determined from each second light bar center line; since there are N second light bar center lines, a plurality of center points corresponding to the first pixel point can be obtained.
In summary, based on steps 506 and 507, for each first pixel point in the first light bar centerline, N projection pixel points corresponding to the first pixel point can be determined from the second target image, and a plurality of central points corresponding to the first pixel point can be determined from the second target image.
Step 508, candidate matching points corresponding to the first pixel point are selected from all the center points corresponding to the first pixel point, based on all the projection pixel points and all the center points corresponding to the first pixel point.
For example, for each center point corresponding to the first pixel point, a distance between the center point and each projection pixel point, such as an euclidean distance, may be calculated, and based on the distance between the center point and each projection pixel point, it may be determined that the center point is a candidate matching point corresponding to the first pixel point, or the center point is not a candidate matching point corresponding to the first pixel point. Obviously, after the above-mentioned processing is performed on each center point, the candidate matching points corresponding to the first pixel point may be obtained, and the number of the candidate matching points may be at least one, so that at least one candidate matching point corresponding to the first pixel point may be determined from all the second light bar center lines.
In a possible implementation manner, for each central point corresponding to the first pixel point, if the distance between the central point and each projection pixel point is greater than a first distance threshold, it is determined that the central point is not a candidate matching point corresponding to the first pixel point; and if the distance between the central point and any projection pixel point is not greater than a first distance threshold, determining that the central point is a candidate matching point corresponding to the first pixel point.
Referring to FIG. 6, the first pixel point is c_l, the N line structured lights are p_1, ..., p_N from left to right, and the center points corresponding to the first pixel point in the second target image are c_r1, c_r2, ..., c_rM, where M is the total number of center points. On this basis, the set of candidate matching points corresponding to the first pixel point satisfies the following formula:

{ c_rj | ∃ i ∈ {1, ..., N}: f(c_l, p_i, c_rj) ≤ thresh, 1 ≤ j ≤ M }

In the above formula, f(c_l, p_i, c_rj) represents the Euclidean distance between the center point c_rj and the projection pixel point of the first pixel point c_l in the second target image obtained from the calibration equation of line structured light p_i, and thresh represents the first distance threshold, which is configured empirically.

As can be seen from the above formula, for each center point c_rj corresponding to the first pixel point: if the Euclidean distance between the center point c_rj and every projection pixel point is greater than the first distance threshold, the center point c_rj is determined not to be a candidate matching point corresponding to the first pixel point; if the Euclidean distance between the center point c_rj and any projection pixel point is not greater than the first distance threshold, the center point c_rj is determined to be a candidate matching point corresponding to the first pixel point.
In summary, the candidate matching point corresponding to the first pixel point can be determined from the second target image.
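The screening rule above can be written compactly. The sketch below assumes the N projection pixel points and the M candidate center points have already been computed for the current first pixel point, and thresh is the empirically configured first distance threshold; the helper name is illustrative.

import numpy as np

def select_candidates(projection_points, center_points, thresh):
    """Keep a center point c_rj as a candidate matching point if its Euclidean
    distance to at least one projection pixel point is not greater than thresh."""
    proj = np.asarray(projection_points, dtype=np.float64)   # shape (N, 2)
    cent = np.asarray(center_points, dtype=np.float64)       # shape (M, 2)
    dists = np.linalg.norm(cent[:, None, :] - proj[None, :, :], axis=2)  # (M, N) distances
    keep = (dists <= thresh).any(axis=1)
    return [tuple(c) for c, k in zip(center_points, keep) if k]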
In another possible implementation manner, for each first pixel point in the first light bar center line, a center point having the same pixel height as the first pixel point may be determined from each second light bar center line; since there are N second light bar center lines, a plurality of center points corresponding to the first pixel point may be obtained, and these center points may be directly taken as the candidate matching points corresponding to the first pixel point (i.e., without screening against the projection pixel points).
Step 509, determining the texture fusion features corresponding to the first pixel points based on all the first texture images, and determining the texture fusion features corresponding to the candidate matching points based on all the second texture images.
In one possible embodiment, the texture fusion feature may be determined by:
in step 5091, for the first pixel point, a first texture point corresponding to the first pixel point may be determined from the first texture image (i.e. each first texture image), and a pixel coordinate of the first texture point in the first texture image is the same as a pixel coordinate of the first pixel point in the first target image.
For example, a first texture point x11 corresponding to the first pixel point may be determined from the first texture image Lb1', and a first texture point x12 corresponding to the first pixel point may be determined from the first texture image Lb2'.
At step 5092, texture features of the texture region centered on the first texture point are obtained.
For example, the texture region may include a plurality of texture points in the first texture image, centered on the first texture point, such as a texture region of size r × r centered on the first texture point; r may be 3, 4, 5, 6, etc., without limitation. After the texture region is obtained, the texture feature of the texture region, that is, the texture feature of a local region in the first texture image, is determined. The type of texture feature is not limited; a texture feature is a kind of global feature used to describe the surface properties of the scene corresponding to the image or image region, such as the thickness and density of the texture. The texture features may include, but are not limited to, gray level co-occurrence matrices, autoregressive texture models, Tamura texture features, wavelet transforms, and the like.
For example, a texture region centered on the first texture point x11 is determined from the first texture image Lb1', and a texture feature x13 of this texture region is determined; a texture region centered on the first texture point x12 is determined from the first texture image Lb2', and a texture feature x14 of this texture region is determined.
Step 5093, the texture features corresponding to all the first texture images are fused to obtain the texture fusion feature corresponding to the first pixel point. For example, the texture feature x13 corresponding to the first texture image Lb1' and the texture feature x14 corresponding to the first texture image Lb2' are fused to obtain the texture fusion feature corresponding to the first pixel point.
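A minimal sketch of steps 5091–5093 is given below; the same sketch applies symmetrically to steps 5094–5096 for the candidate matching points. The patch size r, the simple normalized-patch descriptor and averaging as the fusion mode are assumptions made only for illustration; the patent leaves the texture feature (e.g. gray level co-occurrence matrix, Tamura features, wavelet transform) and the fusion mode open.

import numpy as np

def patch_feature(image, x, y, r=5):
    """Texture feature of the r x r region centered on (x, y): here simply the
    zero-mean, unit-norm gray patch (an illustrative choice). Assumes the point
    lies at least r // 2 pixels away from the image border."""
    half = r // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    patch = patch - patch.mean()
    norm = np.linalg.norm(patch)
    return (patch / norm).ravel() if norm > 0 else patch.ravel()

def fused_feature(texture_images, x, y, r=5):
    """Texture fusion feature of a point: average the features computed from
    every texture image at the same pixel coordinates."""
    feats = [patch_feature(img, x, y, r) for img in texture_images]
    return np.mean(feats, axis=0)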
Step 5094, for each candidate matching point corresponding to the first pixel point, a second texture point corresponding to the candidate matching point may be determined from the second texture image (i.e., each second texture image); the pixel coordinates of the second texture point in the second texture image are the same as the pixel coordinates of the candidate matching point in the second target image. For example, the second texture point y11 corresponding to the candidate matching point may be determined from the second texture image Rb1', and the second texture point y12 corresponding to the candidate matching point may be determined from the second texture image Rb2'.
Step 5095, texture features of the texture region centered at the second texture point are obtained.
For example, the texture region may include a plurality of texture points in the second texture image, the texture points being centered on the second texture point, such as a texture region of size r × r centered on the second texture point, and the texture feature of the texture region, i.e., the texture feature of the local region in the second texture image, may be determined.
For example, a texture region centered on the second texture point y11 is determined from the second texture image Rb1' and the texture feature y13 of this texture region is determined, and a texture region centered on the second texture point y12 is determined from the second texture image Rb2' and the texture feature y14 of this texture region is determined.
Step 5096, the texture features corresponding to all the second texture images are fused to obtain the texture fusion feature corresponding to the candidate matching point. For example, the texture feature y13 and the texture feature y14 are fused to obtain the texture fusion feature corresponding to the candidate matching point; the fusion mode is not limited, e.g. weighted fusion.
In summary, the texture fusion features corresponding to the first pixel point can be obtained, and the texture fusion features corresponding to each candidate matching point corresponding to the first pixel point are obtained, so that step 509 is completed.
Step 510, a second pixel point corresponding to the first pixel point is selected from the candidate matching points based on the texture fusion feature corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points.
Illustratively, the similarity between the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points is determined, the maximum similarity is selected from the similarities corresponding to the candidate matching points, and the candidate matching point corresponding to the maximum similarity is selected as the second pixel point corresponding to the first pixel point.
A second pixel point corresponding to the first pixel point is thus obtained, and the first pixel point and the second pixel point form a key point pair. The key point pair includes a first pixel point in the first light bar center line and a second pixel point in the second light bar center line; the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object, the first pixel point is located in the first target image, and the second pixel point is located in the second target image.
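The selection in step 510 then reduces to a nearest-neighbour search in feature space. A minimal sketch follows, assuming cosine similarity on the fusion features from the earlier sketch; the patent does not fix the similarity measure, so this choice is an assumption.

import numpy as np

def pick_second_pixel(first_feature, candidate_features, candidates):
    """Select the candidate matching point whose texture fusion feature is most
    similar to that of the first pixel point (cosine similarity, illustrative)."""
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0
    sims = [cosine(first_feature, f) for f in candidate_features]
    best = int(np.argmax(sims))          # maximum similarity
    return candidates[best], sims[best]  # second pixel point and its similarity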
Step 511, the three-dimensional point corresponding to each key point pair is determined based on the key point pair and the camera calibration parameters.
Illustratively, for each key point pair, the key point pair includes a first pixel point in a first target image and a second pixel point in a second target image, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the object to be measured, on this basis, a triangularization mode may be adopted to determine a three-dimensional point corresponding to the key point pair, and the following describes the process in combination with specific steps.
Step S21, distortion correction is carried out on the first pixel point through the camera internal parameters of the first video camera, and the distortion-corrected pixel point is converted into a first homogeneous coordinate; distortion correction is carried out on the second pixel point through the camera internal parameters of the second video camera, and the distortion-corrected pixel point is converted into a second homogeneous coordinate.
For example, due to lens manufacturing accuracy and assembly process deviation, the image collected by the first video camera has distortion, such as radial distortion and tangential distortion. To address the distortion problem, the camera internal parameters of the first video camera include distortion parameters, such as radial distortion parameters k1, k2, k3 and tangential distortion parameters p1, p2. Based on this, in this embodiment, distortion correction may be performed on the first pixel point by using the camera internal parameters of the first video camera, so as to obtain the undistorted pixel coordinates. After the undistorted pixel coordinates are obtained, they may be converted into the first homogeneous coordinate. Similarly, distortion correction may be performed on the second pixel point using the camera internal parameters of the second video camera, and the distortion-corrected pixel coordinates may be converted into the second homogeneous coordinate. In summary, the homogeneous coordinates of a key point pair may comprise the first homogeneous coordinate of the first pixel point and the second homogeneous coordinate of the second pixel point.
Step S22, the three-dimensional point corresponding to the key point pair is determined by means of triangulation based on the first homogeneous coordinate, the second homogeneous coordinate, the camera internal parameters of the first video camera, the camera internal parameters of the second video camera, and the camera external parameters (i.e., the camera external parameters between the first video camera and the second video camera, such as their positional relationship).
For example, FIG. 7 is a schematic diagram of the triangularization method. O_L is the position of the first camera and O_R is the position of the second camera; the positional relationship between O_L and O_R can be obtained from the camera external parameters between the first camera and the second camera. For a three-dimensional point P in three-dimensional space, its imaging position in the image plane of the first camera is p_l and its imaging position in the image plane of the second camera is p_r. p_l serves as the first pixel point and p_r as the second pixel point; the first pixel point and the second pixel point form a key point pair, and the coordinates of the three-dimensional point P are the three-dimensional point corresponding to the key point pair. O_L, O_R, p_l and p_r are converted into the same coordinate system. In this coordinate system, O_L and p_l determine a straight line a1, and O_R and p_r determine a straight line a2. If straight line a1 and straight line a2 intersect, their intersection is the three-dimensional point P; if they do not intersect, the three-dimensional point P is taken as the point closest to both straight line a1 and straight line a2. Based on this, the three-dimensional space coordinates of the three-dimensional point P can be obtained by triangularization, thereby obtaining the three-dimensional point corresponding to the key point pair. Of course, the above implementation is only an example of the triangularization manner, and the implementation of triangularization is not limited thereto.
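Steps S21–S22 correspond to the standard undistort-then-triangulate procedure. A minimal sketch with OpenCV follows; since cv2.undistortPoints returns normalized camera coordinates, the projection matrices reduce to [I|0] and [R|T]. The helper name and argument layout are illustrative, not the patent's implementation.

import cv2
import numpy as np

def triangulate_pair(pt1, pt2, K1, D1, K2, D2, R, T):
    """Undistort the key point pair, then triangulate the corresponding 3-D point."""
    p1 = cv2.undistortPoints(np.array([[pt1]], dtype=np.float64), K1, D1)  # normalized coords
    p2 = cv2.undistortPoints(np.array([[pt2]], dtype=np.float64), K2, D2)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # first camera as reference
    P2 = np.hstack([np.asarray(R, dtype=np.float64),
                    np.asarray(T, dtype=np.float64).reshape(3, 1)])  # second camera extrinsics
    X_h = cv2.triangulatePoints(P1, P2, p1.reshape(2, 1), p2.reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()                                # homogeneous -> 3-D point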
Of course, the above triangularization manner is only one example of determining the three-dimensional points, and is not limiting; in practical applications, the three-dimensional points may also be determined by using the parallax (disparity) method in binocular stereo vision. The parallax formula in binocular stereo vision is z = b·f / d, where b represents the binocular baseline distance, i.e., the distance between the first video camera and the second video camera, which can be known from the camera external parameters; f represents the camera focal length, i.e., the focal length of the first video camera or of the second video camera (the two focal lengths are the same); d represents the parallax, i.e., the distance between the first pixel point and the second pixel point (since they are located at the same pixel height, the parallax d is the horizontal distance between them); and z represents the z-direction coordinate of the three-dimensional point.
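For the parallax alternative, a one-function worked sketch under the same assumptions (rectified images, identical focal lengths, non-zero disparity) is:

def depth_from_disparity(x_left, x_right, baseline_b, focal_f):
    """z = b * f / d, where d is the horizontal distance between the first and
    second pixel points at the same pixel height (rectified images)."""
    d = abs(x_left - x_right)          # disparity in pixels
    return baseline_b * focal_f / d    # z-coordinate of the three-dimensional point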
In summary, for each key point pair, a three-dimensional point corresponding to the key point pair may be obtained, where the first target image includes N first light bar center lines, and each first light bar center line includes H first pixel points, so that N × H key point pairs may be obtained, where the N × H key point pairs correspond to N × H three-dimensional points.
And step 512, generating a three-dimensional reconstruction image based on the three-dimensional points corresponding to the plurality of key point pairs.
For example, for each position of the rotating mechanism, such as position A, position B, etc., steps 501 to 511 may be used to determine the N × H three-dimensional points at that position. During the movement of the rotating mechanism, a set of original images is acquired at each position and the above operations are performed; if the rotating mechanism has M positions in total, M × N × H three-dimensional points for the M positions can be obtained. On this basis, a three-dimensional reconstructed image is generated based on the M × N × H three-dimensional points; the three-dimensional reconstructed image is point cloud data, and the three-dimensional reconstructed image is output. Alternatively, the three-dimensional reconstructed image may be projected onto a certain camera to obtain a depth image, and the depth image may be output.
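The final accumulation over rotating-mechanism positions and the optional depth projection can be sketched as follows; both helpers are illustrative assumptions rather than the patent's implementation, and the nearest-surface rule in the depth projection is one possible choice.

import numpy as np

def build_point_cloud(per_position_points):
    """Concatenate the N*H three-dimensional points recovered at each of the M
    positions into one M*N*H point cloud (the three-dimensional reconstructed image)."""
    return np.vstack([np.asarray(p, dtype=np.float64) for p in per_position_points])

def point_cloud_to_depth(points, K, image_size):
    """Optionally project the cloud onto a camera with intrinsics K to obtain a depth image."""
    h, w = image_size
    depth = np.full((h, w), np.inf)
    for X, Y, Z in points:
        if Z <= 0:
            continue  # behind the camera
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h:
            depth[v, u] = min(depth[v, u], Z)   # keep the nearest surface
    depth[np.isinf(depth)] = 0.0
    return depth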
According to the technical scheme, when the surface of the measured object is scanned by the multi-line laser, the whole profile data of the measured object is rapidly acquired, the three-dimensional image information of the measured object is output, and the detection precision and the detection speed are improved. The first camera and the second camera collect line-structured-light images simultaneously, the three-dimensional information of the measured object can be obtained by triangulation, and the depth information of the measured object is obtained; thus the depth information on all the laser lines is obtained from a single image acquisition, the scanning efficiency per acquisition is improved by a factor of N, and the whole outline of the measured object is rapidly scanned over the full width. By obtaining the first texture images and the second texture images corresponding to the regular texture patterns, when a first pixel point corresponds to a plurality of candidate matching points, the second pixel point corresponding to the first pixel point can be selected from the plurality of candidate matching points based on all the first texture images and all the second texture images, thereby solving the problem of multi-line laser mismatching, in particular the problem of mismatching with very many laser lines in a complex environment, and obtaining high-precision, high-efficiency and high-quality point cloud and depth map data.
Based on the same application concept as the method, an image reconstruction apparatus is provided in the embodiment of the present application, which is applied to a three-dimensional imaging device, where the three-dimensional imaging device includes at least one texture projector, a first camera, a second camera, and a multi-line laser, and as shown in fig. 8, is a schematic structural diagram of the apparatus, including:
the acquiring module 81 is configured to acquire, for each texture projector, a first background image acquired by a first camera and a second background image acquired by a second camera when the texture projector projects a regular texture pattern to a measured object; when the multi-line laser projects light with N line structures to a measured object, acquiring a first original image acquired by a first camera and a second original image acquired by a second camera;
a determining module 82, configured to determine a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image, and a second target image corresponding to the second original image; determining a plurality of key point pairs based on N first optical strip center lines in the first target image and N second optical strip center lines in the second target image, wherein the key point pairs comprise first pixel points in the first optical strip center lines and second pixel points in the second optical strip center lines, and the first pixel points and the second pixel points are pixel points corresponding to the same position point on a measured object; determining a plurality of candidate matching points corresponding to the first pixel point from all second light bar center lines aiming at each first pixel point in the first light bar center line, determining texture fusion features corresponding to the first pixel point based on all first texture images, and determining texture fusion features corresponding to the candidate matching points based on all second texture images; selecting a second pixel point corresponding to the first pixel point from the candidate matching points based on the texture fusion characteristics corresponding to the first pixel point and the texture fusion characteristics corresponding to the candidate matching points;
and a generating module 83, configured to generate a three-dimensional reconstructed image based on the three-dimensional points corresponding to the plurality of key point pairs.
For example, when determining the first texture image corresponding to the first background image, the second texture image corresponding to the second background image, the first target image corresponding to the first original image, and the second target image corresponding to the second original image, the determining module 82 is specifically configured to: perform epipolar line correction on the first background image, the second background image, the first original image and the second original image to obtain a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image and a second target image corresponding to the second original image; wherein the epipolar line correction is used for enabling the same position point on the measured object to have the same pixel height in the first texture image, the second texture image, the first target image and the second target image.
For example, the determining module 82 is specifically configured to, when determining the texture fusion features corresponding to the first pixel point based on all the first texture images, and determining the texture fusion features corresponding to each candidate matching point based on all the second texture images: determining a first texture point corresponding to the first pixel point from a first texture image, wherein the pixel coordinate of the first texture point in the first texture image is the same as the pixel coordinate of the first pixel point in a first target image; acquiring texture features of a texture region taking the first texture point as a center; fusing texture features corresponding to all the first texture images to obtain texture fusion features corresponding to the first pixel points; for each candidate matching point, determining a second texture point corresponding to the candidate matching point from a second texture image, wherein the pixel coordinate of the second texture point in the second texture image is the same as the pixel coordinate of the candidate matching point in a second target image; acquiring texture features of a texture region taking the second texture point as a center; and fusing texture features corresponding to all the second texture images to obtain texture fusion features corresponding to the candidate matching points.
For example, the determining module 82 is specifically configured to, based on the texture fusion feature corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points, select a second pixel point corresponding to the first pixel point from the candidate matching points: and determining the similarity between the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points, selecting the maximum similarity from the similarities corresponding to the candidate matching points, and selecting the candidate matching point corresponding to the maximum similarity as a second pixel point corresponding to the first pixel point.
For example, when determining a plurality of candidate matching points corresponding to the first pixel point from all the second light bar center lines, the determining module 82 is specifically configured to: determine a central point with the same pixel height as the first pixel point from each second light bar center line, and determine the central point as a candidate matching point; or convert the first pixel point into a plurality of target three-dimensional reconstruction points based on a plurality of calibration equations, each calibration equation representing a functional relation between a pixel point in the first target image and a three-dimensional reconstruction point; convert the plurality of target three-dimensional reconstruction points into a plurality of projection pixel points in the second target image; and determine a central point with the same pixel height as the first pixel point from each second light bar center line, and determine, based on the distance between the central point and each projection pixel point, whether the central point is a candidate matching point corresponding to the first pixel point.
For example, the determining module 82 is specifically configured to, based on the distance between the central point and each projection pixel point, determine that the central point is a candidate matching point corresponding to the first pixel point, or is not a candidate matching point corresponding to the first pixel point: if the distance between the central point and each projection pixel point is larger than a first distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point; if the distance between the central point and any projection pixel point is not larger than a first distance threshold value, determining that the central point is a candidate matching point corresponding to the first pixel point; or determining a distance average value based on the distance between the central point and each projection pixel point, if the distance average value is greater than a second distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point, otherwise, determining that the central point is the candidate matching point corresponding to the first pixel point.
Illustratively, the generating module 83 is further configured to determine the three-dimensional point corresponding to each key point pair by: performing distortion correction on the first pixel point through the camera internal parameters of the first video camera, and converting the distortion-corrected pixel point into a first homogeneous coordinate; performing distortion correction on the second pixel point through the camera internal parameters of the second video camera, and converting the distortion-corrected pixel point into a second homogeneous coordinate; and determining the three-dimensional point corresponding to the key point pair by means of triangulation based on the first homogeneous coordinate, the second homogeneous coordinate, the camera internal parameters of the first video camera, the camera internal parameters of the second video camera, and the camera external parameters between the first video camera and the second video camera.
Illustratively, when each texture projector projects a regular texture pattern to the object to be measured, the regular texture patterns projected by different texture projectors are the same or different; wherein the regular texture pattern is a regular texture pattern composed of at least one of points, lines, triangles, parallelograms and circles.
Based on the same application concept as the method, the embodiment of the application provides a three-dimensional imaging device, which may include a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions capable of being executed by the processor; the processor is configured to execute machine executable instructions to implement the image reconstruction method disclosed in the above example of the present application.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the image reconstruction method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may be in the form of a personal computer, laptop, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (11)

1. An image reconstruction method, applied to a three-dimensional imaging device including at least one texture projector, a first camera, a second camera, and a multi-line laser, comprising:
for each texture projector, when the texture projector projects regular texture patterns to a measured object, acquiring a first background image acquired by a first camera and a second background image acquired by a second camera; determining a first texture image corresponding to the first background image and a second texture image corresponding to the second background image;
when the multi-line laser projects N line-structured light to a measured object, acquiring a first original image acquired by a first camera and a second original image acquired by a second camera; determining a first target image corresponding to the first original image and a second target image corresponding to the second original image;
determining a plurality of key point pairs based on N first optical strip center lines in the first target image and N second optical strip center lines in the second target image, wherein the key point pairs comprise first pixel points in the first optical strip center lines and second pixel points in the second optical strip center lines, and the first pixel points and the second pixel points are pixel points corresponding to the same position point on a measured object; determining a plurality of candidate matching points corresponding to the first pixel point from all second light bar center lines aiming at each first pixel point in the first light bar center line, determining texture fusion features corresponding to the first pixel point based on all first texture images, and determining texture fusion features corresponding to the candidate matching points based on all second texture images; selecting a second pixel point corresponding to the first pixel point from the candidate matching points based on the texture fusion characteristics corresponding to the first pixel point and the texture fusion characteristics corresponding to the candidate matching points;
and generating a three-dimensional reconstruction image based on the three-dimensional points corresponding to the plurality of key point pairs.
2. The method according to claim 1, wherein the determining a first texture image corresponding to a first background image and a second texture image corresponding to a second background image and determining a first target image corresponding to the first original image and a second target image corresponding to the second original image comprises:
performing epipolar line correction on the first background image, the second background image, the first original image and the second original image to obtain a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image and a second target image corresponding to the second original image; wherein the epipolar line correction is used for enabling the same position point on the measured object to have the same pixel height in the first texture image, the second texture image, the first target image and the second target image.
3. The method of claim 1,
the determining the texture fusion features corresponding to the first pixel points based on all the first texture images and the determining the texture fusion features corresponding to the candidate matching points based on all the second texture images includes:
determining a first texture point corresponding to the first pixel point from a first texture image, wherein the pixel coordinate of the first texture point in the first texture image is the same as the pixel coordinate of the first pixel point in a first target image; acquiring texture features of a texture region taking the first texture point as a center; fusing texture features corresponding to all the first texture images to obtain texture fusion features corresponding to the first pixel points;
for each candidate matching point, determining a second texture point corresponding to the candidate matching point from a second texture image, wherein the pixel coordinate of the second texture point in the second texture image is the same as the pixel coordinate of the candidate matching point in a second target image; acquiring texture features of a texture region taking the second texture point as a center; and fusing the texture features corresponding to all the second texture images to obtain the texture fusion features corresponding to the candidate matching points.
4. The method of claim 1 or 3,
the selecting a second pixel point corresponding to the first pixel point from the candidate matching points based on the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points comprises:
and determining the similarity between the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points, selecting the maximum similarity from the similarities corresponding to the candidate matching points, and selecting the candidate matching point corresponding to the maximum similarity as a second pixel point corresponding to the first pixel point.
5. The method of claim 1, wherein determining a plurality of candidate matching points corresponding to the first pixel point from all second light bar centerlines comprises:
determining a central point with the same pixel height as the first pixel point from the central lines of the second optical strips on the basis of the central lines of the second optical strips, and determining the central point as a candidate matching point; or,
converting the first pixel point into a plurality of target three-dimensional reconstruction points based on a plurality of calibration equations; each calibration equation represents a functional relationship between a pixel point in the first target image and a three-dimensional reconstruction point; converting the plurality of target three-dimensional reconstruction points into a plurality of projection pixel points in the second target image; and determining a central point with the same pixel height as the first pixel point from the central line of the second optical bar based on the central line of each second optical bar, and determining whether the central point is a candidate matching point corresponding to the first pixel point or not based on the distance between the central point and each projection pixel point.
6. The method of claim 5,
the determining that the center point is the candidate matching point corresponding to the first pixel point or not based on the distance between the center point and each projection pixel point includes:
if the distance between the central point and each projection pixel point is larger than a first distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point; if the distance between the central point and any projection pixel point is not larger than a first distance threshold value, determining that the central point is a candidate matching point corresponding to the first pixel point; or determining a distance average value based on the distance between the central point and each projection pixel point, if the distance average value is greater than a second distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point, otherwise, determining that the central point is a candidate matching point corresponding to the first pixel point.
7. The method of claim 1,
before generating a three-dimensional reconstructed image based on the plurality of key point pairs corresponding to the three-dimensional points, the method further comprises: determining the corresponding three-dimensional point of each key point pair by adopting the following method:
distortion correction is carried out on the first pixel point through camera internal parameters of the first video camera, and the pixel point after distortion correction is converted into a first homogeneous coordinate; distortion correction is carried out on the second pixel point through camera internal parameters of a second camera, and the pixel point after the distortion correction is converted into a second homogeneous coordinate;
and determining the three-dimensional points corresponding to the key point pairs by utilizing a triangulation mode based on the first homogeneous coordinate, the second homogeneous coordinate, the camera internal parameters of the first video camera, the camera internal parameters of the second video camera and the camera external parameters between the first video camera and the second video camera.
8. The method of claim 1,
when each texture projector projects regular texture patterns to a measured object, the regular texture patterns projected by different texture projectors are the same or different; wherein the regular texture pattern is a regular texture pattern composed of at least one of points, lines, triangles, parallelograms and circles.
9. An image reconstruction apparatus, applied to a three-dimensional imaging device including at least one texture projector, a first camera, a second camera and a multi-line laser, comprising:
the acquiring module is used for acquiring a first background image acquired by the first camera and a second background image acquired by the second camera when the texture projector projects a regular texture pattern to the measured object for each texture projector; when the multi-line laser projects N line-structured light to a measured object, acquiring a first original image acquired by a first camera and a second original image acquired by a second camera;
the determining module is used for determining a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image and a second target image corresponding to the second original image; determining a plurality of key point pairs based on N first optical strip center lines in the first target image and N second optical strip center lines in the second target image, wherein the key point pairs comprise first pixel points in the first optical strip center lines and second pixel points in the second optical strip center lines, and the first pixel points and the second pixel points are pixel points corresponding to the same position point on a measured object; aiming at each first pixel point in the center line of the first light bar, determining a plurality of candidate matching points corresponding to the first pixel point from all the center lines of the second light bar, determining texture fusion features corresponding to the first pixel point based on all the first texture images, and determining the texture fusion features corresponding to the candidate matching points based on all the second texture images; selecting a second pixel point corresponding to the first pixel point from a plurality of candidate matching points based on the texture fusion characteristics corresponding to the first pixel point and the texture fusion characteristics corresponding to each candidate matching point;
and the generating module is used for generating a three-dimensional reconstruction image based on the three-dimensional points corresponding to the plurality of key point pairs.
10. The apparatus of claim 9,
when the determining module determines the first texture image corresponding to the first background image, the second texture image corresponding to the second background image, the first target image corresponding to the first original image, and the second target image corresponding to the second original image, the determining module is specifically configured to: performing epipolar line correction on the first background image, the second background image, the first original image and the second original image to obtain a first texture image corresponding to the first background image, a second texture image corresponding to the second background image, a first target image corresponding to the first original image and a second target image corresponding to the second original image; wherein the epipolar line correction is used for enabling the same position point on the measured object to have the same pixel height in the first texture image, the second texture image, the first target image and the second target image;
the determining module is configured to determine texture fusion features corresponding to the first pixel points based on all the first texture images, and is specifically configured to, when determining texture fusion features corresponding to the candidate matching points based on all the second texture images: determining a first texture point corresponding to the first pixel point from a first texture image, wherein the pixel coordinate of the first texture point in the first texture image is the same as the pixel coordinate of the first pixel point in a first target image; acquiring texture features of a texture region taking the first texture point as a center; fusing texture features corresponding to all the first texture images to obtain texture fusion features corresponding to the first pixel points; for each candidate matching point, determining a second texture point corresponding to the candidate matching point from a second texture image, wherein the pixel coordinate of the second texture point in the second texture image is the same as the pixel coordinate of the candidate matching point in a second target image; acquiring texture features of a texture region taking the second texture point as a center; fusing texture features corresponding to all second texture images to obtain texture fusion features corresponding to the candidate matching points;
the determining module is specifically configured to, based on the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points, select a second pixel point corresponding to the first pixel point from the candidate matching points: determining the similarity between the texture fusion features corresponding to the first pixel point and the texture fusion features corresponding to the candidate matching points, selecting the maximum similarity from the similarities corresponding to the candidate matching points, and selecting the candidate matching point corresponding to the maximum similarity as a second pixel point corresponding to the first pixel point;
the determining module is specifically configured to, when determining a plurality of candidate matching points corresponding to the first pixel point from all second light bar centerlines: determining a central point with the same pixel height as the first pixel point from the central lines of the second optical strips on the basis of the central lines of the second optical strips, and determining the central point as a candidate matching point; or converting the first pixel point into a plurality of target three-dimensional reconstruction points based on a plurality of calibration equations; each calibration equation represents a functional relationship between a pixel point in the first target image and a three-dimensional reconstruction point; converting the target three-dimensional reconstruction points into a plurality of projection pixel points in the second target image; determining a central point with the same pixel height as the first pixel point from the central line of the second optical bar based on the central line of each second optical bar, and determining whether the central point is a candidate matching point corresponding to the first pixel point or not based on the distance between the central point and each projection pixel point;
the determining module is configured to determine, based on a distance between the center point and each projection pixel point, whether the center point is a candidate matching point corresponding to the first pixel point, or not: if the distance between the central point and each projection pixel point is larger than a first distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point; if the distance between the central point and any projection pixel point is not larger than a first distance threshold value, determining that the central point is a candidate matching point corresponding to the first pixel point; or determining a distance average value based on the distance between the central point and each projection pixel point, if the distance average value is greater than a second distance threshold value, determining that the central point is not a candidate matching point corresponding to the first pixel point, otherwise determining that the central point is the candidate matching point corresponding to the first pixel point;
the generating module is further configured to determine a three-dimensional point corresponding to each key point pair by using the following method: distortion correction is carried out on the first pixel point through camera internal parameters of the first video camera, and the pixel point after distortion correction is converted into a first homogeneous coordinate; distortion correction is carried out on the second pixel point through camera internal parameters of a second video camera, and the pixel point after distortion correction is converted into a second homogeneous coordinate; determining three-dimensional points corresponding to the key point pairs by utilizing a triangularization mode based on the first homogeneous coordinate, the second homogeneous coordinate, the camera internal parameter of the first video camera, the camera internal parameter of the second video camera and the camera external parameter between the first video camera and the second video camera;
when each texture projector projects regular texture patterns to a measured object, the regular texture patterns projected by different texture projectors are the same or different; wherein the regular texture pattern is a regular texture pattern composed of at least one of points, lines, triangles, parallelograms, and circles.
11. A three-dimensional imaging apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to perform the method steps of any of claims 1 to 8.
CN202210470234.7A 2022-04-28 2022-04-28 Image reconstruction method, device and equipment Pending CN114782632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210470234.7A CN114782632A (en) 2022-04-28 2022-04-28 Image reconstruction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210470234.7A CN114782632A (en) 2022-04-28 2022-04-28 Image reconstruction method, device and equipment

Publications (1)

Publication Number Publication Date
CN114782632A true CN114782632A (en) 2022-07-22

Family

ID=82434777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210470234.7A Pending CN114782632A (en) 2022-04-28 2022-04-28 Image reconstruction method, device and equipment

Country Status (1)

Country Link
CN (1) CN114782632A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131507A (en) * 2022-07-27 2022-09-30 北京百度网讯科技有限公司 Image processing method, image processing apparatus, and three-dimensional reconstruction method for the metaverse
WO2023207756A1 (en) * 2022-04-28 2023-11-02 杭州海康机器人股份有限公司 Image reconstruction method and apparatus, and device


Similar Documents

Publication Publication Date Title
US20210112229A1 (en) Three-dimensional scanning device and methods
EP3444560B1 (en) Three-dimensional scanning system and scanning method thereof
EP2568253B1 (en) Structured-light measuring method and system
US8848201B1 (en) Multi-modal three-dimensional scanning of objects
CN106548489B (en) A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image
CN111566437B (en) Three-dimensional measurement system and three-dimensional measurement method
CN114782632A (en) Image reconstruction method, device and equipment
US20120176478A1 (en) Forming range maps using periodic illumination patterns
US20120176380A1 (en) Forming 3d models using periodic illumination patterns
WO2013008804A1 (en) Measurement device and information processing device
CN112525107B (en) Structured light three-dimensional measurement method based on event camera
CN107860337B (en) Structured light three-dimensional reconstruction method and device based on array camera
CN114820939A (en) Image reconstruction method, device and equipment
CN102368137B (en) Embedded calibrating stereoscopic vision system
WO2023207756A1 (en) Image reconstruction method and apparatus, and device
KR20230065978A (en) Systems, methods and media for directly repairing planar surfaces in a scene using structured light
CN111189415A (en) Multifunctional three-dimensional measurement reconstruction system and method based on line structured light
JP6009206B2 (en) 3D measuring device
EP3951314A1 (en) Three-dimensional measurement system and three-dimensional measurement method
US9245375B2 (en) Active lighting for stereo reconstruction of edges
KR102160340B1 (en) Method and apparatus for generating 3-dimensional data of moving object
CN110310371B (en) Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image
CN111583388B (en) Scanning method and equipment of three-dimensional scanning system
KR20240118827A (en) Intraoral scanners, intraoral scanning systems, methods of performing intraoral scans, and computer program products
JP6671589B2 (en) Three-dimensional measurement system, three-dimensional measurement method, and three-dimensional measurement program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.