CN113888614B - Depth recovery method, electronic device, and computer-readable storage medium - Google Patents
- Publication number: CN113888614B (application CN202111117448.8A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
- G06T7/593 — Depth or shape recovery from multiple images, from stereo images
- G06T7/194 — Segmentation; edge detection involving foreground-background segmentation
- G06T2207/10048 — Infrared image (indexing scheme: image acquisition modality)
Abstract
The embodiments of the present application relate to the technical field of image processing and disclose a depth recovery method, an electronic device, and a computer-readable storage medium. The depth recovery method comprises the following steps: segmenting a filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region; performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determining the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determining the depth value of each point in the foreground region according to the disparity values. The depth recovery method provided by the embodiments of the present application can greatly reduce the amount of computation while ensuring high depth-recovery accuracy, avoid increasing chip cost, and effectively improve the efficiency of depth recovery.
Description
Technical Field
The embodiments of the present application relate to the technical field of image processing, and in particular to a depth recovery method, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of 3D technology, 3D structured-light technology, which projects structured light into three-dimensional space and obtains three-dimensional images from it, has matured. Unlike purely passive three-dimensional measurement techniques such as binocular stereo vision, 3D structured light mainly uses a near-infrared laser to emit light with certain structural characteristics, projects it onto the photographed object, and collects the result with a dedicated infrared camera. Because structured light deforms differently at different depths of the object, the structure of the image collected by the infrared camera changes relative to the projected pattern; an arithmetic unit converts this structural change into depth information, from which the three-dimensional structure of the photographed object can be determined. Speckle structured light is one such pattern. At present, 3D structured light is mainly applied to unlocking of smart devices, human body measurement, object volume measurement, face modeling, and the like.
The depth recovery methods adopted by depth cameras are mostly extensions of stereoscopic binocular matching algorithms, such as the semi-global matching (SGM) algorithm, region-growing algorithms, and global search optimization algorithms; depth recovery and depth calculation of a scene are achieved through these methods.
However, as the output resolution demanded of depth cameras continues to increase, the computing resources they require multiply accordingly. Depth recovery methods based on stereoscopic binocular matching algorithms cannot meet the actual demands of depth cameras, the accuracy of their depth recovery is low, and optimizing and upgrading the camera's computing unit greatly increases chip cost.
Disclosure of Invention
An object of the embodiments of the present application is to provide a depth recovery method, an electronic device, and a computer-readable storage medium, which can greatly reduce the amount of computation while ensuring high accuracy of depth recovery, avoid increasing the cost of a chip, and effectively improve the efficiency of depth recovery.
To solve the above technical problem, an embodiment of the present application provides a depth recovery method comprising the following steps: segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region; performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determining the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determining the depth value of each point in the foreground region according to the disparity values.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described depth recovery method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the above-described depth recovery method.
In the depth recovery method, electronic device, and computer-readable storage medium provided by the embodiments of the present application, the server segments the filtered infrared image according to a preset image segmentation algorithm, determines the foreground region of the infrared image, and generates a mask of the foreground region; performs multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determines the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determines the depth value of each point in the foreground region according to those disparity values. Because the foreground region is determined first and only points within its mask are assigned depth values, the background region neither participates in the calculation nor undergoes depth recovery, which greatly reduces the amount of computation, lowers the cost of the depth-camera chip, and improves the efficiency of depth recovery. Meanwhile, segmenting the filtered infrared image makes the determined foreground region more accurate and clear. In addition, the speckle pattern used in depth recovery has undergone multi-valued processing; compared with a binarized speckle pattern, a multi-valued speckle pattern improves the robustness of depth recovery and ensures its high accuracy.
In addition, performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain the multi-valued speckle pattern includes: traversing each point in the speckle pattern corresponding to the infrared image, taking each point in turn as the point to be processed; calculating the SAD (sum of absolute differences) value of the gray value of the point to be processed; calculating, according to a preset first-window size, the mean of the gray values and the mean of the SAD values of the gray values of the points in the first window corresponding to the point to be processed; and performing multi-valued assignment on the point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values, to obtain the multi-valued speckle pattern. Using these three quantities together makes the multi-valued assignment more accurate, further improving the robustness and precision of depth recovery.
In addition, the preset image segmentation algorithm is a watershed segmentation algorithm, and segmenting the filtered infrared image according to the preset image segmentation algorithm, determining the foreground region of the infrared image, and generating a mask of the foreground region comprises the following steps: performing graying and binarization on the filtered infrared image to obtain a binarized infrared image; performing image dilation and a distance transform on the binarized infrared image to obtain a determined-foreground map, a determined-background map, and an uncertain-region map; marking the determined-foreground map according to a connected-component labeling algorithm to obtain a marker map; and performing distance judgment on the uncertain-region map according to the marker map, the uncertain-region map, the binarized infrared image, and the watershed segmentation algorithm, thereby determining the foreground region of the infrared image and generating its mask. Segmenting the filtered infrared image with the watershed segmentation algorithm makes the determined foreground region more accurate, better meets the actual needs of depth recovery, and improves the user experience.
In addition, before segmenting the filtered infrared image according to a preset image segmentation algorithm and determining the foreground region of the infrared image, the method includes: performing selective mask smoothing filtering on the acquired infrared image to obtain the filtered infrared image. Filtering methods such as mean filtering and weighted-mean filtering eliminate noise from the infrared image but inevitably bring the drawback of averaging, blurring sharply changing edges or lines; selective mask smoothing filtering avoids this.
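As a concrete illustration of this filtering step, the sketch below implements a simplified selective mask smoothing filter in Python with NumPy: each pixel is replaced by the mean of whichever candidate window around it has the smallest gray-value variance, so uniform areas are smoothed while windows straddling an edge are never chosen. The nine-window layout used here (the nine 3×3 sub-windows of each pixel's 5×5 neighborhood) is an assumption for illustration; the patent's actual nine template windows (FIG. 4) may be shaped differently.

```python
import numpy as np

def selective_mask_smooth(img):
    """Edge-preserving smoothing: replace each pixel with the mean of
    whichever of the nine 3x3 sub-windows of its 5x5 neighborhood has
    the smallest gray-value variance. Uniform regions get averaged;
    a window straddling an edge has high variance and is never chosen."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    pad = np.pad(img, 2, mode='edge')
    # all 3x3 windows of the padded image, indexed by top-left corner
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    best_mean = np.zeros_like(img)
    best_var = np.full(img.shape, np.inf)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # 3x3 window centered at (y+dy, x+dx) for every pixel (y, x)
            sub = win[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            mean = sub.mean(axis=(2, 3))
            var = sub.var(axis=(2, 3))
            take = var < best_var
            best_mean[take] = mean[take]
            best_var[take] = var[take]
    return best_mean
```

On a hard step edge at least one candidate window lies entirely on one side, so the edge is preserved exactly rather than averaged across, which is the property the text contrasts against plain mean filtering.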
In addition, determining the disparity value of each point in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask includes: generating a disparity cost matrix according to the width and height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern, and a preset disparity search range; determining the points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask, where the pixel value of the point on the mask corresponding to a point to be matched is 1 (each point in the mask has a pixel value of 0 or 1, and points with value 1 lie in the foreground region); determining the matching cost values between each point to be matched and its target points according to the disparity cost matrix, and finding the minimum matching cost value, where the target points are the points in the second window of the multi-valued reference speckle pattern corresponding to the point to be matched, the second window being the disparity search range; and determining the disparity value of the point to be matched according to the disparity between the target point corresponding to the minimum value and the point to be matched. After the disparity cost matrix is generated, only a small number of matching cost values need to be calculated for each point to be matched to determine its disparity value, which further reduces the amount of computation and further improves the efficiency of depth recovery.
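The matching procedure described above can be sketched as mask-restricted block matching. In this hypothetical Python sketch, `mv_img` and `mv_ref` stand for the multi-valued speckle pattern and the multi-valued reference pattern, `max_disp` plays the role of the preset disparity search range, and a plain SAD cost stands in for whatever cost the patent's disparity cost matrix encodes; all of these concrete choices are assumptions for illustration.

```python
import numpy as np

def disparity_map(mv_img, mv_ref, mask, max_disp, win=2):
    """Mask-restricted block matching: for each foreground point, compare
    a (2*win+1)^2 patch of the live multi-valued speckle pattern against
    horizontally shifted patches of the multi-valued reference pattern
    and keep the shift with the smallest SAD matching cost.
    Background points (mask != 1) are skipped entirely."""
    h, w = mv_img.shape
    disp = np.full((h, w), -1, dtype=np.int32)    # -1 marks "not computed"
    for y in range(win, h - win):
        for x in range(win, w - win):
            if mask[y, x] != 1:
                continue
            patch = mv_img[y - win:y + win + 1,
                           x - win:x + win + 1].astype(np.int64)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                xr = x - d                         # shift along the epipolar line
                if xr - win < 0:
                    break
                ref = mv_ref[y - win:y + win + 1, xr - win:xr + win + 1]
                cost = np.abs(patch - ref).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Skipping every point whose mask value is 0 is exactly where the claimed computation saving comes from: background pixels contribute no matching cost evaluations at all.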
In addition, determining the disparity value of the point to be matched according to the disparity between the target point corresponding to the minimum value and the point to be matched includes: performing sub-pixel interpolation on that disparity, and using the interpolated disparity as the disparity value of the point to be matched, so that the obtained disparity value is more precise and accurate.
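Sub-pixel interpolation of a disparity at a cost minimum is commonly done with a three-point parabolic fit through the costs at the best integer disparity and its two neighbors; the patent does not spell out its interpolation formula, so the parabolic fit below is an assumed, illustrative choice.

```python
def subpixel_disparity(d, c_prev, c_min, c_next):
    """Refine integer disparity d by fitting a parabola through the
    matching costs at d-1, d, and d+1 and returning the abscissa of
    the parabola's minimum."""
    denom = c_prev - 2.0 * c_min + c_next
    if denom <= 0:                  # flat or degenerate cost curve
        return float(d)
    return d + (c_prev - c_next) / (2.0 * denom)
```

The refinement is bounded to within half a pixel of the integer minimum whenever the integer minimum is genuine, which is why it can only sharpen, never relocate, the match.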
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a first flowchart of a depth recovery method according to an embodiment of the present application;
FIG. 2 is a flowchart of performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern, according to an embodiment of the present application;
FIG. 3 is a flowchart of segmenting the filtered infrared image according to a preset image segmentation algorithm, determining the foreground region of the infrared image, and generating a mask of the foreground region, according to an embodiment of the present application;
FIG. 4 is a schematic diagram of 9 template windows for selective mask smoothing filtering provided in an embodiment of the present application;
FIG. 5 is a flow chart for determining disparity values for points in a foreground region based on a multi-valued speckle pattern, a pre-set multi-valued reference speckle pattern, and a mask, according to an embodiment of the present application;
FIG. 6 is a flowchart of determining the disparity value of a point to be matched from the disparity between that point and the target point corresponding to the minimum matching cost value, according to an embodiment of the present application;
FIG. 7 is a flow chart two of a depth recovery method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments to provide a better understanding of the present application; however, the claimed technical solution can be implemented without these details, and with various changes and modifications based on the following embodiments. The division into embodiments below is only for convenience of description, should not limit the specific implementation of the present application, and the embodiments may be combined and cross-referenced where not contradictory.
One embodiment of the present application relates to a depth recovery method applied to an electronic device. The electronic device may be a terminal or a server; in this embodiment and the following embodiments, a server is taken as the example. Implementation details of the depth recovery method of this embodiment are described below; the following description is provided only for ease of understanding and is not necessary for implementing this embodiment.
A specific flow of the depth recovery method of this embodiment may be as shown in fig. 1, and includes:
Step 101: segment the filtered infrared image according to a preset image segmentation algorithm, determine the foreground region of the infrared image, and generate a mask of the foreground region.

Specifically, after the server acquires the infrared image captured by the depth camera, it may filter the acquired image according to a preset filtering method to obtain the filtered infrared image. The server may then segment the filtered infrared image according to a preset image segmentation algorithm to determine the foreground and background regions of the infrared image, and generate a mask of the foreground region from the foreground partition of the infrared image. The preset filtering method and the preset image segmentation algorithm may be set by those skilled in the art according to actual needs.
In a specific implementation, when the server segments the filtered infrared image according to the preset image segmentation algorithm, it divides the image into regions according to gray-level features, texture features, shape features, and the like, so that different regions show differences while each region shows internal similarity. This determines the foreground and background regions of the infrared image: the foreground region contains the information the user needs, while the information in the background region is irrelevant to the user's needs. The server generates the mask of the foreground region based on the foreground region of the infrared image, and with this mask only the foreground region needs to be processed.
In one example, the preset image segmentation algorithm may include, but is not limited to: a threshold segmentation algorithm, an edge segmentation algorithm, a region segmentation algorithm, a graph theory segmentation algorithm, an energy flooding segmentation algorithm, a watershed segmentation algorithm, and the like.
Step 102: perform multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern.
Specifically, after the foreground area of the infrared image is determined by the server and the mask of the foreground area is generated, multi-valued processing can be performed on the speckle image corresponding to the infrared image to obtain the multi-valued speckle image.
In a specific implementation, the server may first determine the foreground region of the infrared image and generate its mask, and then perform multi-valued processing on the speckle pattern corresponding to the infrared image; it may instead first perform multi-valued processing on the speckle pattern corresponding to the acquired infrared image, and then segment the filtered infrared image, determine the foreground region, and generate the mask; or it may segment the filtered infrared image and multi-value the speckle pattern at the same time.
In an example, the server performs multivalued processing on the speckle pattern corresponding to the infrared pattern, and may perform multivalued assignment on each point of the speckle pattern according to a relationship between a pixel value (i.e., a gray value) of each point of the speckle pattern and a preset threshold, where the preset threshold may be set by a person skilled in the art according to actual needs and experience, and the embodiment of the present application is not particularly limited thereto.
Step 103: determine the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask.
Specifically, after obtaining the multi-valued speckle pattern and generating the mask of the foreground region, the server may determine the disparity values of the points in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask of the foreground region, where the preset multi-valued reference speckle pattern may be set by a person skilled in the art according to actual needs.
In one example, after the server obtains the infrared map and the speckle pattern corresponding to the infrared map, a preset reference speckle pattern corresponding to the speckle pattern may be obtained, and when the server performs multi-valued processing on the speckle pattern, the server simultaneously performs the same multi-valued processing on the reference speckle pattern to obtain a multi-valued speckle pattern and a multi-valued reference speckle pattern.
In one example, the server may combine the mask with the multi-valued speckle pattern to obtain the multi-valued speckle pattern only including the foreground region, traverse each point in the multi-valued speckle pattern only including the foreground region, sequentially compare each point in the multi-valued speckle pattern only including the foreground region with a corresponding point in the multi-valued reference speckle pattern, and thereby calculate the disparity value of each point in the foreground region.
Step 104: determine the depth value of each point in the foreground region according to the disparity values.
Specifically, after the server determines the disparity values of the points in the foreground region, the server may determine the depth values of the points in the foreground region according to the disparity values of the points in the foreground region.
In a specific implementation, based on the triangulation principle, the server may determine the depth value of each point in the foreground region from its disparity value, the distance to the reference plane, the camera's calibrated focal length, and the camera baseline distance, by the following formula:

Z = (z0 · f · L) / (f · L + z0 · d)

where z0 is the distance from the reference plane, d is the disparity value, f is the camera's calibrated focal length, L is the camera baseline distance, and Z is the depth value in millimeters.
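The triangulation step can be written as a one-line function. The original formula is an image lost from this text, so the standard structured-light relation 1/Z = 1/z0 + d/(f·L) is assumed here, and the sign convention for the disparity d may differ from the patent's.

```python
def depth_from_disparity(d, z0, f, L):
    """Depth of a point from its disparity d against the reference plane,
    assuming the standard structured-light triangulation relation
    1/Z = 1/z0 + d/(f*L), i.e. Z = z0*f*L / (f*L + z0*d).
    z0: distance to the reference plane (mm), f: calibrated focal length
    (pixels), L: camera baseline (mm); the returned depth Z is in mm."""
    return (z0 * f * L) / (f * L + z0 * d)
```

A quick sanity check of the convention: zero disparity returns exactly the reference-plane distance, and positive disparity moves the point closer to the camera.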
In this embodiment, compared with the technical solution of performing depth recovery based on a stereoscopic binocular matching algorithm, the server segments the filtered infrared image according to a preset image segmentation algorithm, determines the foreground region of the infrared image, and generates a mask of the foreground region; performs multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determines the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determines the depth value of each point in the foreground region according to those disparity values. Since the foreground region is determined first, the mask generated, and depth values computed only for points in the foreground region, depth recovery is performed only on the foreground region of the infrared image; the background region neither participates in the calculation nor undergoes depth recovery. This greatly reduces the amount of computation, lowers the cost of the depth-camera chip, and improves the efficiency of depth recovery. Meanwhile, segmenting the filtered infrared image makes the determined foreground region more accurate and clear. In addition, the speckle pattern used in depth recovery has undergone multi-valued processing; compared with a binarized speckle pattern, it improves the robustness of depth recovery and ensures its high accuracy.
In an embodiment, the server performs multi-valued processing on the speckle pattern corresponding to the infrared image to obtain the multi-valued speckle pattern through the steps shown in FIG. 2, which specifically include:

Step 201: traverse each point in the speckle pattern corresponding to the infrared image, taking each point in turn as the point to be processed.

Step 202: calculate the SAD value of the gray value of the point to be processed.
In a specific implementation, when performing multi-valued processing on the speckle pattern corresponding to the infrared image, the server may traverse each point in the speckle pattern, take each point in turn as the point to be processed, and calculate the SAD value of the gray value of each point to be processed, where the SAD (sum of absolute differences) value of a gray value is the sum of the absolute differences between the gray value of the point to be processed and the gray values of the points in its neighborhood.
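The SAD definition just given maps directly to code. The sketch below computes the SAD value of every point at once with shifted views of the image; the neighborhood radius and the replicated-edge padding are assumptions, since the text does not fix the neighborhood size.

```python
import numpy as np

def gray_sad_map(img, radius=1):
    """SAD value of each pixel's gray value: the sum of absolute
    differences between the pixel and every other point in its
    (2*radius+1)^2 neighborhood, with a replicated border."""
    img = np.asarray(img).astype(np.int64)
    pad = np.pad(img, radius, mode='edge')
    sad = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # shifted[y, x] == img value at the (dy, dx) neighbor of (y, x)
            shifted = pad[radius + dy:radius + dy + img.shape[0],
                          radius + dx:radius + dx + img.shape[1]]
            sad += np.abs(img - shifted)
    return sad
```

A flat image yields an all-zero SAD map, while an isolated bright speckle dot produces a sharp SAD peak, which is why this quantity responds to speckle texture.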
Step 203: according to the preset size of the first window, calculate the mean of the gray values and the mean of the SAD values of the gray values of the points in the first window corresponding to the point to be processed.

In a specific implementation, after calculating the SAD value of the gray value of each point to be processed, the server may calculate, according to the preset first-window size, the mean of the gray values of the points in the first window of the speckle pattern corresponding to the point to be processed, as well as the mean of the SAD values of those points' gray values. The preset first-window size may be set by those skilled in the art according to actual needs.
In one example, the preset first window size may be 15 px × 15 px.
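Both window means of this step (over the gray values and over the SAD values) are plain box-filter means, which can be computed in constant time per pixel with an integral image. The sketch below is one way to do it; applying it once to the gray image and once to the SAD map gives both quantities. The replicated border is an assumption.

```python
import numpy as np

def box_mean(img, win=15):
    """Mean over each pixel's win x win first window (the text's example
    uses 15 x 15), computed with an integral image so the cost per pixel
    is independent of the window size. Edge pixels use a replicated
    border."""
    r = win // 2
    pad = np.pad(np.asarray(img, dtype=np.float64), r, mode='edge')
    # integral image with a leading row/column of zeros
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    h, w = np.asarray(img).shape
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    return s / (win * win)
```

Usage: `gray_mean = box_mean(speckle)` and `sad_mean = box_mean(sad_map)` would give the two per-point means step 204 consumes.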
Step 204: perform multi-valued assignment on the point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values of the gray values, to obtain the multi-valued speckle pattern.
In a specific implementation, after calculating the SAD value of the gray value of each point to be processed, the mean of the gray values of the points in the corresponding first window, and the mean of those points' SAD values, the server may perform multi-valued assignment on the point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values of the gray values, thereby obtaining the multi-valued speckle pattern.
In one example, the server may perform octal (eight-level) processing on the speckle pattern corresponding to the infrared image, performing the octal assignment of each point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values of the gray values, by the following formula:
β1 < β2 < β3 < β4 < β5 < β6 < β7

where X(i, j) is the gray value of the point to be processed, X̄(i, j) is the mean of the gray values, D̄(i, j) is the mean of the SAD values of the gray values, β1 to β7 are preset parameters, and B(i, j) is the octal assignment of the point to be processed. β1 to β7 can be set by those skilled in the art according to actual needs and experimental experience.
In one example, the server sets β1 = -0.8, β2 = -0.3, β3 = 0.2, β4 = 0.6, β5 = 1, β6 = 1.4, β7 = 2.
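The assignment formula itself is an image lost from this text, so the sketch below rests on an assumed reading: the deviation of the point's gray value from the window mean, normalized by the window's mean SAD value, is bracketed by the increasing thresholds β1 to β7, giving eight levels 0 to 7. Only the thresholds come from the text; the normalization is hypothetical.

```python
# Example threshold values from the text
BETAS = [-0.8, -0.3, 0.2, 0.6, 1.0, 1.4, 2.0]

def octal_assign(x, gray_mean, sad_mean, betas=BETAS):
    """Eight-level (octal) assignment of one speckle point, assuming the
    normalized deviation t = (x - gray_mean) / sad_mean is bracketed by
    the increasing thresholds beta_1 < ... < beta_7. B(i, j) is then the
    number of thresholds that t meets or exceeds, a level in 0..7."""
    t = (x - gray_mean) / sad_mean if sad_mean != 0 else 0.0
    level = 0
    for b in betas:
        if t >= b:
            level += 1
    return level
```

Under this reading, points far below the local mean map to 0, points far above map to 7, and the mean-SAD normalization makes the levels insensitive to local contrast, which is consistent with the robustness claim for multi-valued over binarized patterns.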
In this embodiment, performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain the multi-valued speckle pattern includes: traversing each point in the speckle pattern, taking each point in turn as the point to be processed; calculating the SAD value of the gray value of the point to be processed; calculating, according to the preset first-window size, the mean of the gray values and the mean of the SAD values of the gray values of the points in the first window corresponding to the point to be processed; and performing multi-valued assignment on the point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values, to obtain the multi-valued speckle pattern. Performing the assignment from these three quantities together makes the multi-valued assignment more accurate, further improving the robustness and precision of depth recovery.
In an embodiment, the preset image segmentation algorithm is a watershed segmentation algorithm. The server segments the filtered infrared image according to the preset image segmentation algorithm, determines the foreground region of the infrared image, and generates a mask of the foreground region; this may be implemented by the steps shown in fig. 3, which specifically include:
Step 301: perform graying and binarization processing on the filtered infrared image to obtain a binarized infrared image.
Step 302: perform image dilation and a distance transform on the binarized infrared image to obtain a determined foreground region image, a determined background region image, and an uncertain region image.
Specifically, the watershed segmentation algorithm is a segmentation method based on the mathematical morphology of topological theory. Its basic idea is to regard the image as a topographic landform, in which the gray value of each point represents that point's altitude; each local minimum and its zone of influence are called a catchment basin, and the boundaries of the catchment basins form the watershed. The algorithm can be realized as a flooding process: the lowest points of the image are submerged first, and the flood then gradually fills each valley; when the water level reaches a certain height it would overflow, so a dam is built wherever water would spill over. This process is repeated until every point of the image is submerged, and the dams built along the way become the watershed lines separating the basins.
In a specific implementation, when segmenting the filtered infrared image based on the watershed segmentation algorithm, the server first performs graying and binarization on the filtered infrared image to obtain a binarized infrared image. It then performs an image dilation operation on the binarized infrared image to obtain the determined background region and generate a determined background region image, and performs a distance transform on the binarized infrared image to obtain the determined foreground region and generate a determined foreground region image. The part of the infrared image remaining after removing the determined foreground region and the determined background region is the uncertain region, for which the server generates an uncertain region image.
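Steps 301 and 302 can be sketched with SciPy as follows; the binarization threshold, dilation iterations, and distance-transform threshold are assumed example parameters:

```python
import numpy as np
from scipy import ndimage

def split_regions(gray, thresh=128, dilate_iter=3, dist_frac=0.5):
    """Binarize, dilate to get the determined background region, and
    threshold the distance transform to get the determined foreground
    region; whatever lies in neither is the uncertain region."""
    binary = gray > thresh                                      # binarization
    sure_bg = ndimage.binary_dilation(binary, iterations=dilate_iter)
    dist = ndimage.distance_transform_edt(binary)               # distance map
    sure_fg = dist > dist_frac * dist.max()                     # deep interior only
    unknown = sure_bg & ~sure_fg                                # undecided band
    return sure_fg, sure_bg, unknown
```

The sure-foreground region is the interior of each blob far from any boundary, which is exactly what the later watershed flooding needs as seeds.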
Step 303: mark the determined foreground region image according to a connected component labeling algorithm to obtain a marked image.
Step 304: according to the marked image, the uncertain region image, the binarized infrared image, and the watershed segmentation algorithm, perform distance judgment on the uncertain region image, determine the foreground region of the infrared image, and generate a mask of the foreground region.
In a specific implementation, after obtaining the determined foreground region, the determined background region, and the uncertain region, the server further resolves the uncertain region. First, the server marks the determined foreground region image according to a connected component labeling algorithm to obtain a marked image. Then, according to the marked image, the uncertain region image, the binarized infrared image, and the watershed segmentation algorithm, the server performs distance judgment on the uncertain region image to determine the foreground part of the uncertain region, thereby obtaining the complete foreground region of the infrared image and generating a mask of the foreground region.
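Step 303 can be sketched with SciPy's connected component labeling as a stand-in for the connected component marking algorithm:

```python
import numpy as np
from scipy import ndimage

# Two separate blobs in a determined foreground region image
sure_fg = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 1],
                    [0, 0, 0, 1]], dtype=bool)

# Each connected blob receives its own integer label 1..n; background stays 0.
markers, n_components = ndimage.label(sure_fg)
```

These markers are what a watershed implementation floods from, with the uncertain region left as label 0 to be decided.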
In this embodiment, the preset image segmentation algorithm is a watershed segmentation algorithm, and segmenting the filtered infrared image according to the preset image segmentation algorithm, determining the foreground region of the infrared image, and generating a mask of the foreground region includes: performing graying and binarization processing on the filtered infrared image to obtain a binarized infrared image; performing image dilation and a distance transform on the binarized infrared image to obtain a determined foreground region image, a determined background region image, and an uncertain region image; marking the determined foreground region image according to a connected component labeling algorithm to obtain a marked image; and, according to the marked image, the uncertain region image, the binarized infrared image, and the watershed segmentation algorithm, performing distance judgment on the uncertain region image, determining the foreground region of the infrared image, and generating the mask of the foreground region. Because the filtered infrared image is segmented with the watershed segmentation algorithm, the determined foreground region of the infrared image is more accurate, which better meets the actual requirements of depth recovery and improves the user experience.
In an embodiment, before the server segments the filtered infrared image according to a preset image segmentation algorithm and determines a foreground region of the infrared image, the server may perform selective mask smoothing filtering on the acquired infrared image to obtain the filtered infrared image.
In a specific implementation, the selective mask smoothing filter uses a 5 px × 5 px template window with the central pixel as the reference point and forms 9 mask windows: 4 pentagons, 4 hexagons, and one square with side length 3. The mean and variance are calculated within each window; because the variance of a region containing a sharp edge is larger than that of a flat region, the server averages using the mask window with the minimum variance, so the filtering operation is completed without destroying the details of region boundaries. The 9 template windows of the selective mask smoothing filter may be as shown in fig. 4.
In this embodiment, before the segmenting of the filtered infrared image according to a preset image segmentation algorithm and the determining of the foreground region of the infrared image, the method includes: performing selective mask smoothing filtering on the acquired infrared image to obtain the filtered infrared image. Filtering modes such as mean filtering and weighted mean filtering can remove the noise of the infrared image, but inevitably bring the drawback of averaging, making sharply changing edges or lines blurred; selective mask smoothing filtering suppresses noise while avoiding this drawback.
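The minimum-variance idea can be illustrated with a simplified Kuwahara-style filter using four 2×2 quadrants rather than the nine pentagon/hexagon/square masks of fig. 4; this is a sketch of the principle, not the patent's exact filter:

```python
import numpy as np

def min_variance_smooth(img):
    """Selective smoothing sketch: each interior pixel takes the mean of the
    neighbouring 2x2 quadrant with the smallest variance, so flat regions
    are averaged while sharp boundaries are preserved."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            quads = [img[y-1:y+1, x-1:x+1], img[y-1:y+1, x:x+2],
                     img[y:y+2, x-1:x+1], img[y:y+2, x:x+2]]
            best = min(quads, key=lambda q: q.var())  # lowest-variance window
            out[y, x] = best.mean()
    return out
```

On a hard step edge every pixel finds at least one quadrant entirely on its own side of the edge, so the edge is not blurred.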
In an embodiment, the server determines the disparity values of the points in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask; this may be implemented by the steps shown in fig. 5, which specifically include:
Step 401: generate a parallax cost matrix according to the width and height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern, and a preset parallax search range.
Step 402: determine points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask.
In a specific implementation, the disparity cost matrix CostVolume generated by the server is a three-dimensional cube of size width × height × disparity range, where the width and height are those of the speckle pattern and the disparity range is a preset parallax search range; each position of the parallax cost matrix stores an SAD value of the gray values within a window. The preset parallax search range can be set by a person skilled in the art according to actual needs.
In one example, the preset disparity search range may be 165.
Specifically, the pixel values of the points in the mask are 0 and 1: a point with pixel value 1 lies in the foreground region and a point with pixel value 0 lies in the background region. A point to be matched is a point whose corresponding point on the mask has a pixel value of 1.
In a specific implementation, before calculating the matching cost value pixel by pixel, the server may perform screening according to the mask, that is, determine whether the pixel value of a point corresponding to each point in the multi-valued speckle pattern in the mask is 1, and if the pixel value of a point corresponding to a certain point in the multi-valued speckle pattern in the mask is 1, take the point as a point to be matched; if the pixel value of a corresponding point in the mask at a certain point in the multivalued speckle pattern is 0, the point is ignored and no calculation is performed.
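The mask screening described above amounts to keeping only the pixels whose mask value is 1:

```python
import numpy as np

# Sketch: only points whose corresponding mask value is 1 (foreground)
# become points to be matched; mask-0 points are skipped entirely,
# which is where the computation saving comes from.
mask = np.array([[1, 0, 1],
                 [0, 0, 1]], dtype=np.uint8)

points_to_match = np.argwhere(mask == 1)  # (row, col) of foreground points
```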
Step 403: determine the matching cost values between the point to be matched and the target points according to the parallax cost matrix, and determine the minimum of the matching cost values.
Step 404: determine the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched.
Specifically, the target points are the points within the second window corresponding to the point to be matched in the multi-valued reference speckle pattern, where the second window covers the parallax search range.
In one example, the server may match each point to be matched against the reference speckle pattern, allowing the parallax to fluctuate by 16 pixels to the left and right, and calculate the matching cost value by the following formula:
C_SAD(p, d) = Σ_{q ∈ N_p} | binary_L(q + d) − binary_R(q) |
wherein p represents the point to be matched, d is the parallax value, binary_L(·) is the multi-valued reference speckle pattern, binary_R(·) is the multi-valued speckle pattern, N_p represents the neighborhood of p, and C_SAD(p, d) is the matching cost value.
In one example, the preset parallax search range is 165, that is, each point to be matched has 165 matching cost values, and the server selects a parallax value between a target point corresponding to the minimum value of the matching cost values and the point to be matched as the parallax value of the point to be matched.
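The matching loop for a single point can be sketched as follows; the window size and search range here are illustrative, not the patent's values, and the shift direction between the captured and reference patterns is an assumption:

```python
import numpy as np

def match_point(binary_ref, binary_img, y, x, disp_range=16, win=3):
    """SAD block matching sketch for one point to be matched: compare the
    window around (y, x) in the multi-valued speckle pattern against windows
    shifted by each candidate disparity in the multi-valued reference
    pattern, and keep the disparity whose cost is minimal."""
    r = win // 2
    patch = binary_img[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
    costs = np.empty(disp_range)
    for d in range(disp_range):
        ref = binary_ref[y - r:y + r + 1,
                         x - r + d:x + r + 1 + d].astype(np.int32)
        costs[d] = np.abs(patch - ref).sum()  # SAD over the window
    return int(costs.argmin()), costs
```

With the preset range of 165 this loop would produce the 165 matching cost values per point described above.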
In this embodiment, determining the parallax values of the points in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask includes: generating a parallax cost matrix according to the width and height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern, and a preset parallax search range; determining points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask, where the pixel value of the point corresponding to a point to be matched on the mask is 1, the pixel values of the points in the mask are 0 and 1, and points with pixel value 1 lie in the foreground region; determining the matching cost values between each point to be matched and the target points according to the parallax cost matrix and determining the minimum of the matching cost values, the target points being the points corresponding to the point to be matched in the multi-valued reference speckle pattern within the parallax search range; and determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched. After the parallax cost matrix is generated, only the matching cost values of the points to be matched need to be calculated, so the parallax value of each point to be matched is determined with a limited number of calculations; the calculation amount is thereby further reduced and the efficiency of depth recovery is further improved.
In an embodiment, the server determines the disparity value of the point to be matched according to the disparity value between the target point corresponding to the minimum value and the point to be matched, which may be implemented by the steps shown in fig. 6, and specifically includes:
Step 501: perform sub-pixel interpolation on the parallax value between the target point corresponding to the minimum value and the point to be matched.
Step 502: take the interpolated parallax value between the target point corresponding to the minimum value and the point to be matched as the parallax value of the point to be matched.
In a specific implementation, in order to further improve the accuracy of depth recovery, the server may perform sub-pixel interpolation on a disparity value between a target point corresponding to the minimum value and a point to be matched, and use the disparity value between the target point corresponding to the minimum value and the point to be matched after interpolation as the disparity value of the point to be matched, so as to obtain a more accurate disparity value of the point to be matched.
In one example, the server may perform sub-pixel interpolation on the parallax value between the target point corresponding to the minimum value and the point to be matched by the following formulas:
C_L = cost(d − 1) − cost(d),  C_R = cost(d + 1) − cost(d),  d′ = d + (C_L − C_R) / (2 × (C_L + C_R))
wherein cost(d) is the matching cost value at the current candidate disparity, cost(d − 1) is the matching cost value at the previous candidate, cost(d + 1) is the matching cost value at the next candidate, d is the parallax value between the target point corresponding to the minimum value and the point to be matched, and d′ is that parallax value after sub-pixel interpolation.
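A sub-pixel refinement consistent with the C_L and C_R definitions above is the standard three-point parabola fit, which can be written directly as:

```python
def subpixel_disparity(cost_prev, cost_min, cost_next, d):
    """Refine the integer disparity d (the matching-cost minimum) by fitting
    a parabola through the costs at d-1, d, and d+1."""
    c_l = cost_prev - cost_min   # C_L
    c_r = cost_next - cost_min   # C_R
    denom = 2.0 * (c_l + c_r)
    if denom == 0.0:             # flat cost neighbourhood: keep integer d
        return float(d)
    return d + (c_l - c_r) / denom
```

A symmetric cost curve leaves d unchanged, while a curve that falls off more steeply on one side shifts d′ toward the shallower side.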
In this embodiment, determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched includes: performing sub-pixel interpolation on the parallax value between the target point corresponding to the minimum value and the point to be matched; and taking the interpolated parallax value between the target point corresponding to the minimum value and the point to be matched as the parallax value of the point to be matched. By performing sub-pixel interpolation on this parallax value, the obtained parallax value is more accurate.
Another embodiment of the present application relates to a depth recovery method. Implementation details of the depth recovery method of this embodiment are described below; these details are provided only to facilitate understanding and are not necessary for implementing this embodiment. A specific flow of the depth recovery method of this embodiment may be as shown in fig. 7, and includes:
Step 603: traverse each point in the speckle pattern corresponding to the infrared image, taking each point in the speckle pattern in turn as a point to be processed.
Step 606: perform multi-valued assignment on the point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values of the gray values, to obtain a multi-valued speckle pattern.
The steps of the above methods are divided only for clarity of description; in implementation, they may be combined into one step, or a step may be split into multiple steps, and as long as the same logical relationship is included, such variants are within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant designs, without changing the core design of the algorithm and process, is also within the protection scope of this patent.
Another embodiment of the present application relates to an electronic device, as shown in fig. 8, including: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the depth recovery method in the embodiments described above.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting together one or more of the various circuits of the processor and the memory. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
Another embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps in the method for implementing the embodiments described above may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.
Claims (9)
1. A method of depth recovery, comprising:
segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region;
multi-valued processing is carried out on the speckle pattern corresponding to the infrared pattern to obtain a multi-valued speckle pattern;
determining the parallax value of each point in the foreground area according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern and the mask;
determining the depth value of each point in the foreground area according to the parallax value;
wherein the determining the parallax value of each point in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern and the mask comprises:
generating a parallax cost matrix according to the width and the height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern and a preset parallax search range;
determining points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask; the pixel value of the point corresponding to the point to be matched on the mask is 1, the pixel value of each point in the mask comprises 0 and 1, and the point with the pixel value of 1 is located in the foreground area;
determining a matching cost value between the point to be matched and a target point according to the parallax cost matrix, and determining a minimum value of the matching cost value; the target point is a point of the point to be matched in a corresponding second window in the multi-valued reference speckle pattern, and the second window is the parallax search range;
and determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched.
2. The depth recovery method according to claim 1, wherein the multivalued processing of the speckle pattern corresponding to the infrared image to obtain a multivalued speckle pattern includes:
traversing each point in the speckle pattern corresponding to the infrared image, and sequentially taking each point in the speckle pattern as a point to be processed;
calculating the SAD value of the gray value of the point to be processed;
calculating the mean value of the gray values of all points in the first window corresponding to the point to be processed and the mean value of the SAD values of the gray values of all points according to the size of a preset first window;
and performing multi-valued assignment on the points to be processed according to the gray value of the points to be processed, the mean value of the gray value and the mean value of the SAD value of the gray value to obtain a multi-valued speckle pattern.
3. The depth restoration method according to claim 2, wherein the multi-valued processing includes octalization processing, and the multi-valued assignment is performed for the point to be processed according to the gray value of the point to be processed, the mean value of the gray values, and the mean value of the SAD values of the gray values by the following formulas:
4. The depth restoration method according to any one of claims 1 to 3, wherein the preset image segmentation algorithm is a watershed segmentation algorithm;
the method for segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image and generating a mask of the foreground region comprises the following steps:
carrying out graying and binarization processing on the filtered infrared image to obtain a binarized infrared image;
performing image dilation and a distance transform on the binarized infrared image to obtain a determined foreground region image, a determined background region image and an uncertain region image;
marking the determined foreground area image according to a connected component marking algorithm to obtain a marked image;
and according to the marker map, the uncertain region map, the binarized infrared map and a watershed segmentation algorithm, performing distance judgment on the uncertain region map, determining a foreground region of the infrared map, and generating a mask of the foreground region.
5. The depth restoration method according to any one of claims 1 to 3, wherein before the segmenting the filtered infrared image according to a preset image segmentation algorithm and determining the foreground region of the infrared image, the method comprises:
and carrying out selective mask smoothing filtering on the obtained infrared image to obtain the filtered infrared image.
6. The depth recovery method according to claim 1, wherein the determining the disparity value of the point to be matched according to the disparity value between the target point corresponding to the minimum value and the point to be matched comprises:
performing sub-pixel interpolation on a parallax value between the target point corresponding to the minimum value and the point to be matched;
and taking the parallax value between the target point corresponding to the minimum value after interpolation and the point to be matched as the parallax value of the point to be matched.
7. The depth restoration method according to claim 1, wherein the preset image segmentation algorithm is any one of the following: a threshold segmentation algorithm, an edge segmentation algorithm, a region segmentation algorithm, a graph theory segmentation algorithm, an energy flooding segmentation algorithm, and a watershed segmentation algorithm.
8. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the depth recovery method of any one of claims 1 to 7.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the depth recovery method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111117448.8A CN113888614B (en) | 2021-09-23 | 2021-09-23 | Depth recovery method, electronic device, and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113888614A CN113888614A (en) | 2022-01-04 |
CN113888614B (en) | 2022-05-31
Family
ID=79010441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111117448.8A Active CN113888614B (en) | 2021-09-23 | 2021-09-23 | Depth recovery method, electronic device, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113888614B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115393224A (en) * | 2022-09-02 | 2022-11-25 | 点昀技术(南通)有限公司 | Depth image filtering method and device |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103268608A (en) * | 2013-05-17 | 2013-08-28 | 清华大学 | Depth estimation method and device based on near-infrared laser speckles |
CN103424083A (en) * | 2012-05-24 | 2013-12-04 | 北京数码视讯科技股份有限公司 | Object depth detection method, device and system |
CN103581653A (en) * | 2013-11-01 | 2014-02-12 | 北京航空航天大学 | Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation |
CN103810708A (en) * | 2014-02-13 | 2014-05-21 | 西安交通大学 | Method and device for perceiving depth of laser speckle image |
AU2013206597A1 (en) * | 2013-06-28 | 2015-01-22 | Canon Kabushiki Kaisha | Depth constrained superpixel-based depth map refinement |
CN109583304A (en) * | 2018-10-23 | 2019-04-05 | 宁波盈芯信息科技有限公司 | A kind of quick 3D face point cloud generation method and device based on structure optical mode group |
CN109658443A (en) * | 2018-11-01 | 2019-04-19 | 北京华捷艾米科技有限公司 | Stereo vision matching method and system |
CN110288564A (en) * | 2019-05-22 | 2019-09-27 | 南京理工大学 | Binaryzation speckle quality evaluating method based on power spectrumanalysis |
CN110853133A (en) * | 2019-10-25 | 2020-02-28 | 深圳奥比中光科技有限公司 | Method, device, system and readable storage medium for reconstructing three-dimensional model of human body |
CN111402313A (en) * | 2020-03-13 | 2020-07-10 | 合肥的卢深视科技有限公司 | Image depth recovery method and device |
CN112330751A (en) * | 2020-10-30 | 2021-02-05 | 合肥的卢深视科技有限公司 | Line deviation detection method and device for structured light camera |
CN112465723A (en) * | 2020-12-04 | 2021-03-09 | 北京华捷艾米科技有限公司 | Method and device for repairing depth image, electronic equipment and computer storage medium |
CN112700484A (en) * | 2020-12-31 | 2021-04-23 | 南京理工大学智能计算成像研究院有限公司 | Depth map colorization method based on monocular depth camera |
CN112927280A (en) * | 2021-03-11 | 2021-06-08 | 北京的卢深视科技有限公司 | Method and device for acquiring depth image and monocular speckle structured light system |
CN113379816A (en) * | 2021-06-29 | 2021-09-10 | 北京的卢深视科技有限公司 | Structure change detection method, electronic device, and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2014104445A (en) * | 2014-02-07 | 2015-08-20 | ЭлЭсАй Корпорейшн | FORMING DEPTH IMAGES USING INFORMATION ABOUT DEPTH RECOVERED FROM AMPLITUDE IMAGE |
CN105205786B (en) * | 2014-06-19 | 2019-02-05 | 联想(北京)有限公司 | A kind of picture depth restoration methods and electronic equipment |
CN104268871A (en) * | 2014-09-23 | 2015-01-07 | 清华大学 | Method and device for depth estimation based on near-infrared laser speckles |
EP3680853A4 (en) * | 2017-09-11 | 2020-11-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic device, and computer-readable storage medium |
CN108701361A (en) * | 2017-11-30 | 2018-10-23 | 深圳市大疆创新科技有限公司 | Depth value determines method and apparatus |
CN109461181B (en) * | 2018-10-17 | 2020-10-27 | 北京华捷艾米科技有限公司 | Depth image acquisition method and system based on speckle structured light |
CN112771573B (en) * | 2019-04-12 | 2023-01-20 | 深圳市汇顶科技股份有限公司 | Depth estimation method and device based on speckle images and face recognition system |
CN111105452B (en) * | 2019-11-26 | 2023-05-09 | 中山大学 | Binocular vision-based high-low resolution fusion stereo matching method |
2021-09-23: CN202111117448.8A (CN) granted as CN113888614B, status Active
Non-Patent Citations (6)
- Guijin Wang et al., "Depth estimation for speckle projection system using progressive reliable points growing matching," Applied Optics, vol. 52, no. 3, pp. 516-524, 2013.
- Xuanwu Yin et al., "Efficient active depth sensing by laser speckle projection system," Optical Engineering, vol. 53, no. 1, pp. 1-10, 2014.
- Wu Qing et al., "Speckle-based 3D somatosensory interaction system," Journal of Computer-Aided Design & Computer Graphics, vol. 28, no. 7, pp. 1105-1114, 2016.
- Zhong Jinxin et al., "Speckle projection profilometry based on deep learning," Infrared and Laser Engineering, vol. 49, no. 6, pp. 1-11, 2020.
- Gu Jiawei et al., "Semi-dense depth map acquisition algorithm based on laser speckle," Chinese Journal of Lasers, vol. 47, no. 3, pp. 1-9, 2020.
- Hao Dingding, "Research on laser-speckle-based coal flow load monitoring for belt conveyors," China Master's Theses Full-text Database, Basic Sciences, no. 1, 2021.
Also Published As
Publication number | Publication date |
---|---|
CN113888614A (en) | 2022-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111209770B (en) | Lane line identification method and device | |
Zhang et al. | Critical regularizations for neural surface reconstruction in the wild | |
CN111063021A (en) | Method and device for establishing three-dimensional reconstruction model of space moving target | |
CN109658515A (en) | Point cloud gridding method, device, equipment and computer storage medium | |
CN107369204B (en) | Method for recovering basic three-dimensional structure of scene from single photo | |
CN103106651B (en) | Method for obtaining parallax error plane based on three-dimensional hough | |
CN101082988A (en) | Automatic deepness image registration method | |
CN110322572A (en) | A kind of underwater culvert tunnel inner wall three dimensional signal space method based on binocular vision | |
CN111105451B (en) | Driving scene binocular depth estimation method for overcoming occlusion effect | |
Rossi et al. | Joint graph-based depth refinement and normal estimation | |
CN115239870A (en) | Multi-view stereo network three-dimensional reconstruction method based on attention cost body pyramid | |
Vu et al. | Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing | |
CN115222889A (en) | 3D reconstruction method and device based on multi-view image and related equipment | |
CN101765019A (en) | Stereo matching algorithm for motion blur and illumination change image | |
CN113888614B (en) | Depth recovery method, electronic device, and computer-readable storage medium | |
CN112270701A (en) | Packet distance network-based parallax prediction method, system and storage medium | |
CN113920270B (en) | Layout reconstruction method and system based on multi-view panorama | |
CN114494582B (en) | Three-dimensional model dynamic updating method based on visual perception | |
CN111739071A (en) | Rapid iterative registration method, medium, terminal and device based on initial value | |
Ikonen et al. | Distance and nearest neighbor transforms on gray-level surfaces | |
CN108805841B (en) | Depth map recovery and viewpoint synthesis optimization method based on color map guide | |
CN117788731A (en) | Road reconstruction method, device and equipment | |
CN113344941A (en) | Depth estimation method based on focused image and image processing device | |
CN111738061A (en) | Binocular vision stereo matching method based on regional feature extraction and storage medium | |
CN116704123A (en) | Three-dimensional reconstruction method combined with image main body extraction technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20220419
Address after: 230091 Room 611-217, R & D Center Building, China (Hefei) International Intelligent Voice Industrial Park, 3333 Xiyou Road, High-tech Zone, Hefei, Anhui Province
Applicant after: Hefei lushenshi Technology Co., Ltd.
Address before: 100083 Room 3032, North B, bungalow, Building 2, A5 Xueyuan Road, Haidian District, Beijing
Applicant before: BEIJING DILUSENSE TECHNOLOGY CO., LTD.
Applicant before: Hefei lushenshi Technology Co., Ltd.
GR01 | Patent grant | ||