CN113052889A - Depth calculation method and system - Google Patents

Depth calculation method and system

Info

Publication number
CN113052889A
Authority
CN
China
Prior art keywords
image
speckle
blob
spot
region
Prior art date
Legal status: Pending
Application number
CN202110314157.1A
Other languages
Chinese (zh)
Inventor
兰富洋
李秋平
王兆民
杨鹏
黄源浩
肖振中
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date: 2021-03-24
Filing date: 2021-03-24
Publication date: 2021-06-29
Application filed by Orbbec Inc
Priority to CN202110314157.1A
Publication of CN113052889A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/564: Depth or shape recovery from multiple images, from contours
    • G06T 7/514: Depth or shape recovery from specularities
    • G06T 7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Abstract

The application is applicable to the field of image processing and relates to a depth calculation method comprising the following steps: acquiring a first speckle image and a second speckle image; acquiring first position information and second position information of the speckle in each corresponding region of the first speckle image and the second speckle image; calculating, based on the first position information and the second position information, the parallax between the speckle in each region of the first speckle image and the speckle in the corresponding region of the second speckle image; and calculating the depth of the target area according to the parallax. Because the position information of the speckles in the first speckle image and the second speckle image is calculated per region, the parallax calculation is simple and fast.

Description

Depth calculation method and system
Technical Field
The application belongs to the field of image processing, and particularly relates to a depth calculation method and system.
Background
In the prior art, the main algorithm for obtaining a disparity map from a binocular system is stereo matching, but the accuracy and the speed of such algorithms constrain each other. The calculation speed of the disparity map determines how quickly the binocular vision system can process the acquired information. However, most high-precision disparity algorithms rely on global methods such as graph cuts or belief propagation, whose operation speed is slow, so the real-time requirement cannot be met. In addition, for a weakly textured target, accurate depth information is often difficult to obtain because the binocular images lack feature points for matching, and the projection modules in existing binocular systems project dense spots, which suffer from high power consumption, low single-point optical power, and similar drawbacks.
Disclosure of Invention
The embodiments of the application provide a depth calculation method and a depth calculation system, which can address the technical problems of prior-art depth calculation methods: the computation is complex and slow, errors easily occur when speckle blocks are matched, and the real-time requirement cannot be met.
In a first aspect, an embodiment of the present application provides a depth calculation method, including:
acquiring a first speckle image and a second speckle image, wherein the first speckle image and the second speckle image are formed by regular speckles that are projected by a projection module and reflected by a target area onto the imaging areas of a first camera unit and a second camera unit, respectively;
acquiring first position information and second position information of the speckle in each corresponding region of the first speckle image and the second speckle image;
calculating, based on the first position information and the second position information, the parallax between the speckle in each region of the first speckle image and the speckle in the corresponding region of the second speckle image;
calculating the depth of the target area according to the parallax;
wherein the imaging region has been previously divided into a plurality of regions, each of the regions comprising only one spot.
In a possible implementation manner of the first aspect, processing the first speckle image to obtain the first position information of the speckle in each of the regions of the first speckle image, and processing the second speckle image to obtain the second position information of the speckle in each of the regions of the second speckle image, comprises:
calculating coordinate information of the speckle contour edge points of the speckle in each of the regions of the first speckle image and the second speckle image;
and calculating the center coordinates of the spots of each region based on the coordinate information of the edge points of the spot outline.
Wherein the calculating of the coordinate information of the blob of each of the regions at the blob outline edge points of the first and second blob images comprises:
performing Gaussian filtering on the first speckle image and the second speckle image by using a Gaussian kernel with variance σ to obtain a filtered first speckle image and a filtered second speckle image;
performing a laplacian transform on the filtered first and second speckle images to obtain laplacian images of the filtered first and second speckle images;
combining the filtered first speckle image and the Laplace image of the filtered first speckle image to obtain a contour edge point solving equation of the first speckle image;
solving an equation for the contour edge points of the first spot image to obtain coordinate information of the contour edge points of the first spot image;
combining the filtered second speckle image and the Laplace image of the filtered second speckle image to obtain a contour edge point solving equation of the second speckle image;
and solving an equation for the contour edge points of the second spot image to obtain the coordinate information of the contour edge points of the second spot image.
Wherein the calculating the center coordinates of the blobs for each region based on the coordinate information of the edge points of the blob outline comprises:
calculating center coordinates of the blobs of each region of the first blob image based on the coordinate information of the edge points of the first blob outline;
calculating center coordinates of blobs for each of the regions of the second blob image based on the coordinate information of the edge points of the second blob outline.
In a possible implementation manner of the first aspect, the calculating, based on the first position information and the second position information, a disparity between a blob of each of the regions of the first blob image and a blob of each of the regions of the corresponding second blob image includes:
calculating a disparity of the blobs of each of the regions of the first blob image with the blobs of each of the regions of the corresponding second blob image based on the center coordinates of the blobs of each of the regions of the first blob image and the center coordinates of the blobs of each of the regions of the second blob image.
In a second aspect, an embodiment of the present application provides a depth calculation system, including:
a projection module for projecting a regular speckle pattern to a target area;
a camera module including a first camera unit and a second camera unit, an imaging area of which has been divided into a plurality of areas, for acquiring a speckle image reflected back through a target area and generating a first speckle image and a second speckle image;
a control and processing module for controlling the projection module and the camera module and for calculating a disparity in each corresponding region from the first and second speckle images to further obtain a depth;
wherein each of the regions of the imaging areas comprises only one spot.
Compared with the prior art, the embodiment of the application has the advantages that:
When the method is used for calculating the depth, the imaging area is divided into regions, the position information of the spot in each region of the first spot image and the position information of the spot in each region of the second spot image are determined directly, and the depth of the target area can then be calculated; the spots do not need to be matched between the two images, so the computation is simple and fast.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1a is a schematic structural diagram of a depth calculation system according to an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of an imaging region divided into a plurality of regions according to an embodiment of the present disclosure;
FIG. 2a is a flowchart illustrating steps of a depth calculation method according to an embodiment of the present disclosure;
FIG. 2b is a schematic view of a first speckle image provided by an embodiment of the present application;
FIG. 3 is a flowchart of method steps provided by an embodiment of the present application for processing a first speckle image and a second speckle image;
FIG. 4 is a schematic diagram illustrating coordinate information for calculating edge points of a blob outline according to an embodiment of the present application;
FIG. 5 is a schematic diagram of obtaining the coordinates of the center of each region blob according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a principle of binocular structured light triangulation according to an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Reference throughout this specification to "one embodiment of the present application" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "one embodiment of the present application", "other embodiments of the present application", "in other embodiments", and the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
FIG. 1a is a schematic diagram of a depth calculation system according to an embodiment of the present application. The system includes a projection module 110, a camera module, and a control and processing module 130. The projection module 110 is used for projecting a regular speckle pattern onto a target area; the camera module includes a first camera unit 121 and a second camera unit 122, whose imaging areas have been divided into a plurality of regions, and is used for acquiring the speckle pattern reflected back from the target area and generating a first speckle image and a second speckle image; the control and processing module 130 is used, on the one hand, to control the projection module 110 and the camera module and, on the other hand, to calculate the parallax in each region from the speckle images and the depth of the target area from the parallax. The speckle pattern projected by the projection module 110 is designed according to the divided regions of the imaging areas of the camera module, so that each region includes only one speckle.
In some embodiments, projection module 110 includes a light source 111 and an optical assembly 112. The light source 111 may be an edge-emitting laser, a vertical cavity surface-emitting laser, or the like, or may be a light source array composed of a plurality of light sources, and a light beam emitted by the light source may be laser, visible light, infrared light, ultraviolet light, or the like, and the embodiment of the present application does not limit the configuration of the light source 111. In the embodiments of the present application, a light source is taken as a vertical cavity surface emitting laser for example, and laser light emitted by the vertical cavity surface emitting laser has characteristics that common light does not have, such as good monochromaticity, good coherence, good directivity, and high brightness. It is because of these characteristics of laser light that when the laser light irradiates a rough surface or passes through a projection body having uneven refraction, a speckle pattern is generated. It should be noted that the light source may be a single-point laser or a regular array laser, which is not limited herein.
In one embodiment, the optical assembly 112 includes a lens element and a diffractive optical element: the lens element receives the light beam emitted by the light source and focuses it onto the diffractive optical element, and the diffractive optical element receives the focused light beam and projects a regular speckle pattern onto the target area. The speckle pattern formed in the embodiment of the present application is a speckle pattern with a sparse lattice; the number of lens elements can be designed according to specific conditions; the diffractive optical element and the lens element may be independent elements or an integrated element, which is not limited herein.
It should be appreciated that, compared with the existing dense-lattice speckle patterns, actively projecting a sparse-lattice speckle pattern onto the target area reduces the total power consumption of the projection module or, at the same power consumption, provides higher single-point optical power, yields a speckle image with a higher signal-to-noise ratio, and achieves a longer detection distance and stronger noise immunity.
In yet another embodiment, the optical assembly 112 includes a microlens array comprising a plurality of microlens elements, and the light source can be a single point laser when the microlens element size is much smaller than the size of a single spot emitted by the light source; when the microlens unit size is similar or equal to the size of a single spot emitted by the light source, the light source is a regular or irregular array laser. The microlens array receives the multiple light beams emitted by the light source, shapes the light beams into uniform spots and projects the uniform spots onto a target area. It is to be understood that the optical assembly 112 may further include a lens element that receives the uniform spot shaped by the microlens array and collimates the projection onto the target area, which is not limited herein.
In another embodiment, when the optical assembly 112 includes only lens elements, the light source 111 is a regularly arrayed laser. The lens element receives the multiple beams emitted by the array laser and collimates the beams into parallel beams to project the parallel beams to a target area to form a regular spot in the target area. It should be noted that the number of lens elements can be designed according to the specific situation.
In some embodiments, the camera module includes a first camera unit 121 and a second camera unit 122 located on the left and right sides of the projection module 110 (as shown in FIG. 1a), wherein the first camera unit 121 may also be referred to as the left camera and the second camera unit 122 as the right camera. It should be noted that the camera module may also be a trinocular or multi-view camera, and the positions of the cameras are not fixed and may be arranged according to the actual situation.
In one embodiment, the first camera unit 121 and the second camera unit 122 each include an image sensor, an imaging area in the image sensor is divided into a plurality of areas in advance, and is used for receiving at least part of the spots reflected by the object in the target area and imaging the spots on the image sensor, and forming a first spot image and a second spot image to further acquire the depth of the target area. The image sensor may be an image sensor composed of a Charge Coupled Device (CCD), a complementary metal-oxide-semiconductor (CMOS), an Avalanche Diode (AD), a Single Photon Avalanche Diode (SPAD), and the like, and the embodiment of the present application does not limit the composition of the image sensor.
In one embodiment, the imaging area of the first camera unit 121 and the imaging area of the second camera unit 122 are divided based on the binocular structured light trigonometry principle, and the specific areas are divided as follows:
the minimum working distance of the first camera unit 121 and the second camera unit 122 is determined, and the working distance refers to the range of the front-back distance of the object in the target area measured by the imaging that the first camera unit 121 and the second camera unit 122 can obtain a clear image.
The embodiment of the present application determines the minimum working distance z_min of the first camera unit 121 and the second camera unit 122. Based on the principle of binocular structured light triangulation, the maximum parallax width formed by the first camera unit 121 and the second camera unit 122 within the working distance is:

d_max = f · b / z_min

where d_max is the maximum parallax width formed by the first camera unit and the second camera unit within the working distance, f is the focal length of the camera, b is the baseline length, and z_min is the minimum working distance.
The imaging areas of the first camera unit 121 and the second camera unit 122 are divided into a plurality of regions according to the maximum parallax width, as shown in FIG. 1b. The width of each region is preferably d_max, and the height of each region is larger than the spot diameter D, which ensures that each region contains only one spot when the first camera unit 121 and the second camera unit 122 operate between the maximum and minimum working distances.
In addition, d_max and D may be regarded as thresholds: the width and height of the regions may also be set larger than these thresholds, as long as each region includes only one spot, which is not limited herein.
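As an illustration of this region-sizing rule, the following Python sketch computes the maximum parallax width and a corresponding region grid; the function name, the unit choices (focal length in pixels, baseline and minimum working distance in the same length unit), and the one-pixel height margin are illustrative assumptions, not part of the patent.

import math

def region_grid(f_px, baseline, z_min, spot_diameter_px, image_w, image_h):
    # Maximum parallax width d_max = f * b / z_min
    d_max = f_px * baseline / z_min
    region_w = math.ceil(d_max)                 # region width >= d_max
    region_h = math.ceil(spot_diameter_px) + 1  # region height > spot diameter D
    cols = image_w // region_w                  # number of regions per image row
    rows = image_h // region_h                  # number of regions per image column
    return region_w, region_h, rows, cols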
In one embodiment, the control and processing module 130 may be a Central Processing Unit (CPU), and the control and processing module 130 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), field-programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It should be noted that, in other embodiments of the present application, the camera module itself may have computing capability, in which case the functions of the control and processing module 130 may be integrated into the camera module. The camera module may also include, but is not limited to, a processing unit, a storage unit, and a computer program stored in and executable on the storage unit. The processing unit implements the steps in the embodiments of the depth calculation method described below when executing the computer program. It will be understood by those skilled in the art that a camera module including the image sensor, the lens unit, the processing unit, the storage unit, and the computer program is only an example and does not constitute a limitation of the camera module, which may include more or fewer components, combine some components, or use different components; for example, an input/output device, a network access device, and the like may also be included. A processing unit in the camera may calculate the depth of the target area.
This document does not limit the specific configuration of the depth calculation system, which may include more or fewer components than the example shown in FIG. 1a, or some components in combination, or different components. FIG. 1a is an exemplary depiction only, and should not be construed as a specific limitation to the present application.
In summary, in the depth calculation system provided by the embodiment of the present application, the projection module actively projects a speckle pattern with a regular sparse lattice. Compared with the dense-lattice speckle patterns currently projected, this reduces the total power consumption of the projection module or, at the same power consumption, provides higher single-point optical power, yields speckle images with a higher signal-to-noise ratio, and achieves a longer detection distance and stronger noise immunity; compared with existing passive triangulation techniques, it ensures that there are sufficient feature points on the measured object. The first camera unit and the second camera unit collect at least part of the spots reflected by the object in the target area and form a first spot image and a second spot image on their imaging areas. Because the imaging areas are divided into a plurality of regions and the position information of the spots in the first and second spot images is calculated per region, the parallax calculation is simple and fast, and the spots in each region of the first spot image do not need to be matched with the spots in each region of the second spot image, so the real-time requirement is met.
Fig. 2a is a flowchart illustrating steps of a depth calculation method according to an embodiment of the present application. According to the embodiment of the application, imaging areas of the first camera unit and the second camera unit are divided into a plurality of areas respectively, and the parallax is further calculated according to corresponding spots in the areas of the first camera unit and the second camera unit, and the depth is acquired. As an implementation, the method in fig. 2a may be performed by the control and processing module 130 in fig. 1 a. As other implementations, the method in fig. 2a may be performed by a camera. The method more specifically includes S201 to S204:
s201: a first speckle image and a second speckle image are acquired.
In the embodiment of the present application, the first speckle image and the second speckle image are obtained by projecting encoded laser light, i.e. a speckle pattern with a sparse dot matrix, onto an object in the target area through the projection module; preferably, the radius of a speckle in the projected pattern is D/2. The laser speckles are projected onto the target area and reflected by the target area to the first camera unit and the second camera unit, respectively; a first speckle image is formed in the regions of the imaging area of the first camera unit (as shown in FIG. 2b), and a second speckle image is formed in the regions of the imaging area of the second camera unit.
S202: first position information and second position information of the blobs in each of the regions in the first blob image and the second blob image are obtained. The blobs of each of the regions of the first blob image correspond to the blobs of each of the regions of the second blob image.
In an embodiment, after the first speckle image and the second speckle image are acquired, the first speckle image and the second speckle image are respectively processed, the first speckle image is processed to obtain first position information, the second speckle image is processed to obtain second position information, the specific processing steps of the first speckle image and the second speckle image refer to fig. 3, fig. 3 is a flowchart of method steps for processing the first speckle image and the second speckle image provided in an embodiment of the present application, and includes S301 to S302.
S301, calculating coordinate information of the speckle contour edge points of the speckles of each region in the first speckle image and the second speckle image.
In the embodiment of the present application, the position information of the blobs in each region may be detected by using a blob detection algorithm, such as a laplacian of gaussian algorithm, a Surf algorithm, or a Sift algorithm, which is not limited in the present application.
Fig. 4 is an embodiment of S301 in fig. 3. In one embodiment, the position information of the blobs in the respective regions is detected using the laplacian of gaussian algorithm, and more particularly includes S401 to S406:
S401, Gaussian filtering is performed on the first speckle image and the second speckle image by using a Gaussian kernel with variance σ to obtain the filtered first speckle image and the filtered second speckle image.
Specifically, the speckle image I (x, y) is gaussian-filtered by the gaussian kernel G (x, y, σ) of the variance σ, and the speckle image I (x, y) described in the embodiment of the present application may be the first speckle image or the second speckle image, and the method of gaussian-filtering the first speckle image and the second speckle image is the same, and the first speckle image is exemplified in the present application. The first speckle image and the second speckle image are subjected to Gaussian filtering, and Gaussian noise can be eliminated. Thus, the filtered first speckle image is:
L_σ(x, y) = I(x, y) * G_σ(x, y)
where I represents the gray values of the first speckle image and * denotes convolution. It should be noted that the variance σ may be chosen according to the spot radius D/2, which is not limited herein.
S402, performing laplacian transform on the filtered first and second speckle images to obtain a laplacian image of the filtered first and second speckle images.
Specifically, the laplacian image of the first speckle image after gaussian filtering is:
∇²L_σ = ∂²L_σ/∂x² + ∂²L_σ/∂y²
it should be noted that: the method of performing laplace transform on the filtered second speckle image is the same as the method of performing laplace transform on the filtered first speckle image, and is not described herein again.
And S403, combining the filtered first speckle image and the Laplace image of the filtered first speckle image to obtain a contour edge point solving equation of the first speckle image.
Specifically, combining the filtered first speckle image with the Laplacian image of the filtered first speckle image gives the contour edge point solving equation:

∇²L_σ(x, y) = I(x, y) * ∇²G_σ(x, y)
s404, solving an equation for the contour edge points of the first speckle image to obtain the coordinate information of the contour edge points of the first speckle image.
Specifically, the extreme points of the equation in S403 are the speckle contour edge points of each region in the first speckle image; substituting a contour edge point into the equation in S403 gives the corresponding gray value of the speckle image.
And S405, combining the filtered second speckle image and the Laplace image of the filtered second speckle image to obtain a contour edge point solving equation of the second speckle image.
The specific method of S405 and S403 is the same, and is not described herein again.
And S406, solving an equation for the contour edge points of the second speckle image to obtain the coordinate information of the contour edge points of the second speckle image.
The specific methods of S406 and S404 are the same, and are not described herein again.
It should be noted that, when the specific methods of S401 to S406 are used, there is no required order for calculating the coordinate information of the contour edge points of the spots in each region of the first spot image and of the corresponding spots in each region of the second spot image: the coordinate information may be calculated for the first spot image before the second, for the second before the first, or for both images simultaneously.
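A minimal Python (NumPy/SciPy) sketch of the Laplacian-of-Gaussian processing in S401 to S406 is given below, assuming the spot contour edge points are taken at zero-crossings of the LoG response; the zero-crossing criterion and the function names are illustrative assumptions standing in for the patent's contour edge point solving equation. The same routine is applied to the first and second speckle images, in either order or in parallel, as noted above.

import numpy as np
from scipy import ndimage

def spot_contour_edge_points(speckle_img, sigma):
    # S401 + S402: Gaussian filtering followed by the Laplacian, done in one step
    log = ndimage.gaussian_laplace(speckle_img.astype(np.float64), sigma=sigma)
    # S403 + S404 (approximation): take edge points where the LoG response changes sign
    zc = np.zeros(log.shape, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    ys, xs = np.nonzero(zc)
    return np.stack([xs, ys], axis=1)   # (x, y) coordinates of candidate contour edge points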
S302, based on the coordinate information of the edge points of the spot contour, the center coordinates of the spots of each area are calculated.
The central coordinates of each region spot can be obtained by using a least square ellipse center quadratic fitting method or a least square Gaussian distribution fitting method and the like. The present application does not limit the method of obtaining the center coordinates of the blobs in each region.
FIG. 5 is a schematic diagram of obtaining the coordinates of the center of each region blob according to an embodiment of the present application. In one embodiment, the method of ellipse center quadratic fitting using least squares obtains the center coordinates of each region spot, and more particularly includes S501 to S502:
s501, based on the coordinate information of the edge points of the first speckle contour, the center coordinates of the speckles in each area of the first speckle image are calculated.
Specifically, based on S301, the coordinate information of the contour edge points of the spots in the first spot image and the gray values of the first spot image are obtained. In the embodiment of the present application, the center coordinate of a spot in the first spot image is denoted (x_0, y_0), and the pixel size of the spot in the first spot image is m × n. The contour edge points are fitted by least squares to the ellipse equation

A·x² + B·x·y + C·y² + D·x + E·y + F = 0

and solving the fit yields the center coordinates of the spot in each region of the first spot image.
S502, based on the coordinate information of the edge points of the second speckle contour, the center coordinates of the speckles of each area of the second speckle image are calculated.
The method for calculating the center coordinates of the blobs in each region of the second blob image is the same as the method for calculating the center coordinates of the blobs in each region of the first blob image, and will not be described herein again.
The embodiment of the present application may calculate the center coordinates of the blobs in each region in the first blob image and the center coordinates of the blobs in each region in the second blob image corresponding to each region in the first blob image by using the specific method of steps S501 to S502.
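A least-squares ellipse-center fit of the kind named in S302 can be sketched in Python as follows; the conic parameterisation and the SVD-based solution are one common formulation and are assumptions here rather than the patent's exact fitting equation. The least-squares Gaussian-distribution fitting mentioned above could be substituted without changing the rest of the pipeline.

import numpy as np

def ellipse_center_lsq(edge_points):
    # Fit the conic A·x² + B·x·y + C·y² + D·x + E·y + F = 0 to the contour edge points
    x = edge_points[:, 0].astype(np.float64)
    y = edge_points[:, 1].astype(np.float64)
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    A, B, C, D, E, _ = vt[-1]                  # conic coefficients up to scale (smallest singular vector)
    den = 4.0 * A * C - B * B
    xc = (B * E - 2.0 * C * D) / den           # centre of the fitted ellipse
    yc = (B * D - 2.0 * A * E) / den
    return xc, yc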
It should be noted that, in S202, when the first and second speckle images are processed, there is no required order for obtaining the first position information of the speckles in each region of the first speckle image and the second position information of the speckles in each region of the second speckle image: the first speckle image may be processed before the second, the second before the first, or both images may be processed simultaneously to obtain the first and second position information.
S203, based on the first position information and the second position information, calculating a disparity between the blob of each region of the first blob image and the blob of each region of the corresponding second blob image.
Specifically, for each region, the parallax is calculated from the center coordinates of the blob in that region of the first blob image and the center coordinates of the blob in the corresponding region of the second blob image, both obtained in S202.
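For rectified camera units, the per-region parallax reduces to the difference of the x coordinates of corresponding spot centres; a short Python sketch is given below, where the dictionary-of-regions data layout and the function name are assumptions for illustration.

def region_disparities(centers_left, centers_right):
    # centers_left / centers_right map a region index (row, col) to the spot centre (x, y)
    disparities = {}
    for region, (xl, _yl) in centers_left.items():
        if region in centers_right:
            xr, _yr = centers_right[region]
            disparities[region] = xl - xr   # parallax d = X_L - X_R for that region
    return disparities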
And S204, calculating the depth of the target area according to the parallax.
In one embodiment, the depth image is calculated based on the principle of binocular structured light triangulation, which is shown in FIG. 6. Point P is the object to be measured in the target area. The line connecting point P and the optical center C_L of the first camera unit 121 intersects the image plane of the first camera unit 121 at point P_L, which is the projection of point P on the first camera unit. The line connecting point P and the optical center C_R of the second camera unit 122 intersects the image plane of the second camera unit 122 at point P_R, which is the projection of point P on the second camera unit 122. In FIG. 6, the difference between X_L and X_R is the parallax d.
Let the distance from point P_L to point P_R be dis; then:
dis = b - (X_L - X_R)
as can be seen from the figure: delta PCLCRAnd delta PPLPRSimilarly, then:
Figure BDA0002990440230000131
where X_L - X_R is the parallax d, b is the distance between the optical centers of the first camera unit 121 and the second camera unit 122, i.e. the baseline length, f is the focal length of the first camera unit 121 and the second camera unit 122, and z is the depth of the object to be measured in the target area.
The following can be obtained:
z = f · b / (X_L - X_R)
it can be seen that, if the control and processing module 130 in the depth calculating system in fig. 1a needs to calculate the depth value of the object to be measured in the target area, the focal length f of the first camera unit 121 and the second camera unit 122, the reference line b of the first camera unit 121 and the second camera unit 122, and the parallax d need to be determined. The focal length f and the reference line b of the first camera unit 121 and the second camera unit 122 are clearly obtained, so that the depth value can be obtained by calculating the parallax between the first spot image and the second spot image. However, the depth calculation formula is an ideal model derived under ideal conditions, and when the depth is calculated by using the above formula, the depth is affected by the distortion of the camera lens, whether the optical axes of the first camera unit and the second camera unit are parallel, and other factors, therefore, in this embodiment, when the calculation formula is performed by using the above formula, the camera needs to be calibrated in advance, and the calibration of the camera is performed in order to solve the problems of the distortion of the camera lens and the non-parallel optical axes of the first camera unit and the second camera unit, and the calibration method of the camera is not limited in this embodiment of the present application.
In one embodiment, the disparity between the coordinates of the center of a blob in one region in the first blob image and the coordinates of the center of a blob in a region in the second blob image corresponding to one region in the first blob image is denoted as d.
Substituting the disparity d into the formula derived from fig. 6:
z = f · b / d
where f is the focal length of the camera, b is the baseline length between the optical centers of the first camera unit and the second camera unit, and d is the parallax calculated in S203.
This gives the depth of the spot in one region; taking either the first spot image or the second spot image as the reference image and traversing the spots in all regions yields the depth information of the whole target area.
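The conversion from per-region parallax to depth, z = f·b/d, can be sketched as follows; the cameras are assumed to have been calibrated as discussed above, and the function and parameter names are illustrative.

def region_depths(disparities, f_px, baseline):
    # z = f * b / d per region; focal length in pixels, baseline in metres gives depth in metres
    depths = {}
    for region, d in disparities.items():
        if d > 0:                 # skip regions with an invalid (non-positive) parallax
            depths[region] = f_px * baseline / d
    return depths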
In summary, the embodiment of the present application provides a depth calculation method: based on imaging areas divided into regions, the center coordinates of the spot in each region of the first spot image and of the corresponding spot in each region of the second spot image are determined, and the depth of the target area is calculated from these center coordinates. Because the parallax is calculated from per-region spot center coordinates, the computation is simple and fast, and the spots in each region of the first spot image do not need to be matched with the spots in each region of the second spot image, so the real-time requirement is met.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program may implement the steps in the depth calculation method embodiment described above.
The embodiment of the present application provides a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the depth calculation method embodiment when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be implemented by a computer program, which can be stored in a computer readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer memory, read-only memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunication signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A depth calculation method, comprising:
acquiring a first speckle image and a second speckle image, wherein the first speckle image and the second speckle image are images formed by projection of regular speckles by a projection module and reflection of the regular speckles to imaging areas of a first camera unit and a second camera unit respectively through a target area;
acquiring first position information and second position information of each corresponding spot in the region in the first spot image and the second spot image;
calculating a disparity of the blobs of the first blob image and the second blob image, each corresponding to the region, based on the first position information and the second position information;
calculating the depth of the target area according to the parallax;
wherein the imaging region has been previously divided into a plurality of regions, each of the regions comprising only one spot.
2. The depth calculation method according to claim 1, wherein the acquiring of the first position information and the second position information of the speckle in each corresponding region of the first speckle image and the second speckle image comprises:
calculating coordinate information of the spots of each region at the spot contour edge points of the first spot image and the second spot image;
and calculating the center coordinates of the spots of each region based on the coordinate information of the edge points of the spot outline.
3. The depth calculation method according to claim 2, wherein the calculating of the coordinate information of the blob of each of the regions at the blob outline edge points of the first and second blob images comprises:
performing Gaussian filtering on the first speckle image and the second speckle image by using a Gaussian kernel with variance σ to obtain a filtered first speckle image and a filtered second speckle image;
performing a laplacian transform on the filtered first and second speckle images to obtain laplacian images of the filtered first and second speckle images;
combining the filtered first speckle image and the Laplace image of the filtered first speckle image to obtain a contour edge point solving equation of the first speckle image;
solving an equation for the contour edge points of the first spot image to obtain coordinate information of the contour edge points of the first spot image;
combining the filtered second speckle image and the Laplace image of the filtered second speckle image to obtain a contour edge point solving equation of the second speckle image;
and solving an equation for the contour edge points of the second spot image to obtain the coordinate information of the contour edge points of the second spot image.
4. The depth calculation method according to claim 2, wherein the calculating the center coordinates of the blob for each of the regions based on the coordinate information of the edge points of the blob outline comprises:
calculating center coordinates of the blobs of each region of the first blob image based on the coordinate information of the edge points of the first blob outline;
calculating center coordinates of blobs for each of the regions of the second blob image based on the coordinate information of the edge points of the second blob outline.
5. The depth calculation method of claim 4, wherein the calculating the disparity of the blobs of each of the regions of the first blob image and the corresponding blobs of each of the regions of the second blob image based on the first position information and the second position information comprises:
calculating a disparity of the blobs of each of the regions of the first blob image with the blobs of each of the regions of the corresponding second blob image based on the center coordinates of the blobs of each of the regions of the first blob image and the center coordinates of the blobs of each of the regions of the second blob image.
6. A depth calculation system, comprising:
a projection module for projecting a regular speckle pattern to a target area;
a camera module including a first camera unit and a second camera unit, an imaging area of which has been divided into a plurality of areas, for acquiring a speckle image reflected back through a target area and generating a first speckle image and a second speckle image;
a control and processing module for controlling the projection module and the camera module and for calculating a disparity in each corresponding region from the first and second speckle images to further obtain a depth;
wherein each of the regions of the imaging areas comprises only one spot.
7. The depth calculation system of claim 6, wherein the projection module comprises a light source and an optical assembly, wherein the optical assembly comprises at least one of a lens element, an optical diffraction element, or a microlens array.
8. The depth calculation system of claim 6, wherein the width of each region into which the imaging area is divided is greater than or equal to the maximum disparity value of the first camera unit and the second camera unit, and the height of each region is greater than or equal to the diameter of the spots projected by the projection module.
9. The depth calculation system of claim 6, wherein the calculating of the disparity in the corresponding region from the first and second speckle images comprises:
acquiring first position information and second position information of each corresponding spot in the region in the first spot image and the second spot image;
calculating a disparity of the blobs of the first blob image and the second blob image, each corresponding to the region, based on the first position information and the second position information.
10. The depth calculation system of claim 9, wherein the acquiring of the first position information and the second position information of the speckle in each corresponding region of the first speckle image and the second speckle image comprises:
calculating coordinate information of the spots of each region at the spot contour edge points of the first spot image and the second spot image;
and calculating the center coordinates of the spots of each region based on the coordinate information of the edge points of the spot outline.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110314157.1A CN113052889A (en) 2021-03-24 2021-03-24 Depth calculation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110314157.1A CN113052889A (en) 2021-03-24 2021-03-24 Depth calculation method and system

Publications (1)

Publication Number Publication Date
CN113052889A true CN113052889A (en) 2021-06-29

Family

ID=76514911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110314157.1A Pending CN113052889A (en) 2021-03-24 2021-03-24 Depth calculation method and system

Country Status (1)

Country Link
CN (1) CN113052889A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496161A (en) * 2011-12-13 2012-06-13 浙江欧威科技有限公司 Method for extracting contour of image of printed circuit board (PCB)
CN105160680A (en) * 2015-09-08 2015-12-16 北京航空航天大学 Design method of camera with no interference depth based on structured light
WO2017138210A1 (en) * 2016-02-12 2017-08-17 ソニー株式会社 Image pickup apparatus, image pickup method, and image pickup system
CN107564091A (en) * 2017-07-26 2018-01-09 深圳大学 A kind of three-dimensional rebuilding method and device based on quick corresponding point search
CN109405765A (en) * 2018-10-23 2019-03-01 北京的卢深视科技有限公司 A kind of high accuracy depth calculation method and system based on pattern light
CN110657785A (en) * 2019-09-02 2020-01-07 清华大学 Efficient scene depth information acquisition method and system
CN111079772A (en) * 2019-12-18 2020-04-28 深圳科瑞技术股份有限公司 Image edge extraction processing method, device and storage medium
CN111145342A (en) * 2019-12-27 2020-05-12 山东中科先进技术研究院有限公司 Binocular speckle structured light three-dimensional reconstruction method and system
CN111561872A (en) * 2020-05-25 2020-08-21 中科微至智能制造科技江苏股份有限公司 Method, device and system for measuring package volume based on speckle coding structured light
CN112233063A (en) * 2020-09-14 2021-01-15 东南大学 Circle center positioning method for large-size round object
CN112487893A (en) * 2020-11-17 2021-03-12 北京的卢深视科技有限公司 Three-dimensional target identification method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Shanwen et al. (eds.), "Image Pattern Recognition", Xidian University Press, pages 79-80 *

Similar Documents

Publication Publication Date Title
CN108957911B (en) Speckle structure light projection module and 3D degree of depth camera
US9817159B2 (en) Structured light pattern generation
CN102203551B (en) Method and system for providing three-dimensional and range inter-planar estimation
US20210358157A1 (en) Three-dimensional measurement system and three-dimensional measurement method
WO1997006406A1 (en) Distance measuring apparatus and shape measuring apparatus
CN108924408B (en) Depth imaging method and system
CN108881717B (en) Depth imaging method and system
US11698441B2 (en) Time of flight-based three-dimensional sensing system
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
CN102088617B (en) A three-dimensional imaging apparatus and a method of generating a three-dimensional image of an object
EP3596425B1 (en) Optoelectronic devices for collecting three-dimensional data
US11663734B2 (en) Systems and methods of measuring an object in a scene of a captured image
JP2023522755A (en) Irradiation pattern for object depth measurement
CN113052887A (en) Depth calculation method and system
JP2022552238A (en) Projector for illuminating at least one object
CN113052889A (en) Depth calculation method and system
JP2006220603A (en) Imaging apparatus
CN108924407B (en) Depth imaging method and system
Kawasaki et al. Optimized aperture for estimating depth from projector's defocus
CN213091888U (en) Depth measurement system and electronic device
JP2006308452A (en) Method and apparatus for measuring three-dimensional shape
US20200088508A1 (en) Three-dimensional information generating device and method capable of self-calibration
CN113513988B (en) Laser radar target detection method and device, vehicle and storage medium
US11920918B2 (en) One shot calibration
KR101840328B1 (en) 3-dimensional laser scanner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination