CN113126944A - Depth map display method, display device, electronic device, and storage medium - Google Patents

Depth map display method, display device, electronic device, and storage medium

Info

Publication number
CN113126944A
Authority
CN
China
Prior art keywords
pixel
depth map
normal vector
depth
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110535531.0A
Other languages
Chinese (zh)
Other versions
CN113126944B (en)
Inventor
户磊
曹天宇
王亚运
季栋
薛远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilusense Technology Co Ltd
Hefei Dilusense Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dilusense Technology Co Ltd, Hefei Dilusense Technology Co Ltd filed Critical Beijing Dilusense Technology Co Ltd
Priority to CN202110535531.0A priority Critical patent/CN113126944B/en
Publication of CN113126944A publication Critical patent/CN113126944A/en
Application granted
Publication of CN113126944B publication Critical patent/CN113126944B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of the present invention relate to the field of image processing, and disclose a depth map display method and display device, an electronic device, and a storage medium. In some embodiments of the present invention, a method for displaying a depth map includes: acquiring a normal vector of each pixel in the depth map; enhancing, based on a preset illumination model, the depth information represented by the normal vector of each pixel to obtain an intensity coefficient of each pixel; and displaying the depth map according to the intensity coefficient of each pixel. This embodiment makes it possible to show the details of regions whose depth changes relatively little.

Description

Depth map display method, display device, electronic device, and storage medium
Technical Field
Embodiments of the present invention relate to the field of image processing, and in particular, to a method and an apparatus for displaying a depth map, an electronic device, and a storage medium.
Background
Currently, depth maps are typically displayed by mapping depth values either to gray values or to a color space. A pixel value in a depth map represents the distance from the corresponding point in the scene to the sensor. When a depth map is displayed through the depth-value-to-gray-value mapping, each distance value is mapped to a gray value and the result is shown as a grayscale image. However, in high-precision scenes where the distance unit is the millimeter, a conventional display cannot present more than 256 gray levels, which is a severe limitation; the details of regions with relatively small depth changes are rendered weakly, so the visualization effect is unsatisfactory in some scenes. When a depth map is displayed through the depth-value-to-color-space mapping, depth values within a certain range are mapped to a color space and the result is shown as a color image. More distinct depth values can then be displayed, and by introducing color information, regions with large depth differences appear in completely different colors, making the result more intuitive. Nevertheless, this approach still renders the details of regions with relatively small depth changes weakly, so its visualization effect is likewise unsatisfactory in some scenes.
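For concreteness, the conventional depth-value-to-gray-value mapping described above can be sketched as follows; this is an illustrative sketch only, and the linear scaling and the NumPy representation are assumptions rather than part of this disclosure:

```python
import numpy as np

def depth_to_gray(depth_mm: np.ndarray) -> np.ndarray:
    """Map millimeter depth values linearly onto the 256 gray levels of a
    conventional display; fine depth differences collapse into one level."""
    d_min, d_max = float(depth_mm.min()), float(depth_mm.max())
    # The whole (d_max - d_min) mm range shares only 256 gray levels, so a
    # region whose depth varies by a few millimeters looks nearly uniform.
    gray = (depth_mm - d_min) / max(d_max - d_min, 1e-6) * 255.0
    return gray.astype(np.uint8)
```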
Therefore, the existing depth map display methods render the details of regions with relatively small depth changes weakly; such visualizations are often suitable only for observing the depth information of large regions, and details cannot be intuitively observed in regions where the depth changes little.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a display method, a display device, an electronic device, and a storage medium for a depth map, which enable display of area details with relatively small depth variations.
In order to solve the above technical problem, an embodiment of the present invention provides a method for displaying a depth map, including: acquiring a normal vector of each pixel in the depth map; based on a preset illumination model, enhancing depth information represented by a normal vector of each pixel to obtain an intensity coefficient of each pixel; according to the intensity coefficient of each pixel, a depth map is displayed.
An embodiment of the present invention also provides a display device of a depth map, including: the device comprises an acquisition module, a calculation module and a display module; the acquisition module is used for acquiring a normal vector of each pixel in the depth map; the calculation module is used for enhancing the depth information represented by the normal vector of each pixel based on a preset illumination model to obtain the intensity coefficient of each pixel; the display module is used for displaying the depth map according to the intensity coefficient of each pixel.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of displaying a depth map as mentioned in the above embodiments.
The embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for displaying the depth map mentioned in the above embodiment is implemented.
According to the depth map display method, the depth map display device, the electronic device and the storage medium, the intensity coefficient of the pixel is calculated according to the normal vector of the pixel in the depth map based on the illumination model. The depth map is displayed by referring to the intensity coefficient of each pixel, so that the depth change can be strengthened, and further, the area details with relatively small depth change can be better displayed.
In addition, acquiring a normal vector of each pixel in the depth map includes: acquiring an initial normal vector of each pixel; and, for each pixel, calculating the normal vector of the pixel from the initial normal vector of the pixel and the initial normal vectors of the neighborhood pixels of the pixel.
In addition, acquiring a normal vector of each pixel in the depth map includes: for each pixel, calculating the normal vector of the pixel according to the depth value of the pixel and the depth values of the neighborhood pixels in a preset region of the pixel, wherein the side length of the preset region is greater than 3 times the diameter of a pixel.
In addition, calculating, for each pixel, the normal vector of the pixel according to the depth value of the pixel and the depth values of the neighborhood pixels in the preset region of the pixel includes: for each pixel, calculating an initial normal vector of the pixel according to the depth value of the pixel and the depth values of the neighborhood pixels in the preset region of the pixel; and calculating the normal vector of the pixel according to the initial normal vector of the pixel and the initial normal vectors of the neighborhood pixels in the preset region of the pixel.
In addition, before the acquiring of the normal vector of each pixel in the depth map, the method for displaying the depth map further includes: invoking different processing threads to separately read the depth values of the depth map, so as to calculate the normal vectors of the pixels of the depth map in parallel.
In addition, enhancing, based on a preset illumination model, the depth information represented by the normal vector of each pixel to obtain the intensity coefficient of each pixel includes: obtaining the light direction vector, at the pixel, of the simulated light source corresponding to the illumination model; and calculating the intensity coefficient of the pixel according to the light direction vector of the pixel and the normal vector of the pixel.
In addition, calculating the intensity coefficient of the pixel according to the light direction vector of the pixel and the normal vector of the pixel includes: calculating the scalar product (dot product) of the light direction vector of the pixel and the normal vector of the pixel as the intensity coefficient of the pixel.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a flowchart of a display method of a depth map of a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional model constructed during implementation of the method for displaying a depth map shown in FIG. 1;
FIG. 3 is a schematic diagram illustrating a depth map displayed by using a grayscale image according to a first embodiment of the present invention;
fig. 4 is a schematic view showing an effect when a depth map is shown by the display method of the depth map shown in fig. 1;
fig. 5a and 5b are schematic diagrams illustrating another effect of displaying a depth map by using a grayscale image according to the first embodiment of the present invention;
fig. 6a and 6b are schematic views illustrating another effect when a depth map is displayed by the display method of the depth map shown in fig. 1;
fig. 7 is a flowchart of a display method of a depth map of the second embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating the locations of pixels in the neighborhood of the ith row and the jth column of the second embodiment of the present invention;
FIG. 9 is a schematic diagram of the positions of the pixels in the second area of the pixels in the ith row and the jth column according to the second embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating the positions of pixels in a predetermined area of the pixels in the ith row and the jth column according to the second embodiment of the present invention;
FIG. 11 is a depth map after two-level optimization for the second embodiment of the present invention;
FIG. 12 is a diagram illustrating the reading of pixel information when 4 processing threads are processed in parallel according to the second embodiment of the present invention;
fig. 13 is a schematic view of a display method of a depth map of the third embodiment of the present invention;
FIG. 14 is a schematic diagram of the positional relationship of the light rays of a simulated light source and a three-dimensional model according to a third embodiment of the invention;
fig. 15 is a schematic structural view of a display device of a depth map according to a fourth embodiment of the present invention;
fig. 16 is a schematic configuration diagram of an electronic apparatus according to a fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The following embodiments are divided for convenience of description, and should not constitute any limitation to the specific implementation of the present invention, and the embodiments may be mutually incorporated and referred to without contradiction.
A first embodiment of the present invention relates to a display method of a depth map, including the steps of: acquiring a normal vector of each pixel in the depth map; based on a preset illumination model, enhancing depth information represented by a normal vector of each pixel to obtain an intensity coefficient of each pixel; according to the intensity coefficient of each pixel, a depth map is displayed. In this embodiment, the intensity coefficients of the pixels are calculated from the normal vectors of the pixels in the depth map based on the illumination model. The depth map is displayed by referring to the intensity coefficient of each pixel, so that the depth change can be strengthened, and further, the area details with relatively small depth change can be better displayed.
The implementation details of the depth map display method of this embodiment are described below. These details are provided to facilitate understanding and are not required to practice the present solution.
The method for displaying a depth map in this embodiment is applied to an electronic device. The electronic device may be a terminal, a server, a cloud server, or the like. As shown in fig. 1, the display method of the depth map specifically includes the following steps:
step 101: and acquiring a normal vector of each pixel in the depth map.
In particular, in an illumination model, the normal vector of the object surface is an important variable. In this embodiment, the electronic device may establish for the depth map a spatial rectangular coordinate system O-WHD whose three dimensions are width, height, and depth value; the pixel $p_{ij}$ in the ith row and jth column of the depth map corresponds to the point $(j, i, p_{ij})$ in O-WHD. All the pixels of the depth map together form a columnar three-dimensional model in O-WHD, as shown in fig. 2. The normal vector of a pixel is the normal vector of the pixel on this three-dimensional model, and can be calculated from the depth value of the pixel and the depth values of its neighborhood pixels.
Step 102: and based on a preset illumination model, enhancing the depth information represented by the normal vector of each pixel to obtain the intensity coefficient of each pixel.
Specifically, an illumination model is a computational model, designed in a computer using physical laws of light in the real world, that can render a three-dimensional model into an image matching human visual habits. Calculating the intensity coefficients of the pixels based on the illumination model strengthens the depth changes in the depth map and markedly improves the perceptual experience of viewing the content captured by a depth camera.
Step 103: according to the intensity coefficient of each pixel, a depth map is displayed.
Specifically, according to the intensity coefficient of each pixel, the intensity coefficients are applied to the grayscale image or color image used for displaying the depth map, yielding a grayscale or color image with the illumination model added, which can better show the details of regions with small depth changes.
In one example, the electronic device multiplies each point $G_{ij}$ of the grayscale image G, or each point $C_{ij}$ of the color image C, by the illumination-model-based pixel intensity coefficient $m_{ij}$. This improves the display effect obtained when the depth map is shown through the depth-value-to-gray-value or depth-value-to-color-space mapping, and the resulting images G' and C' with the illumination model added better show the details of regions with small depth changes.
For example, when the depth map is presented by mapping depth values to gray values, i.e. the depth map is presented using a gray image, the presentation effect of the depth map is as shown in fig. 3. When the depth map is displayed by using the depth map display method according to the present embodiment, that is, when the gray scale value of each pixel in the gray scale image shown in fig. 3 is recalculated according to the formula a and the depth map is displayed, the display effect of the depth map is shown in fig. 4.
Formula a: g'ij=Gij*mij
Wherein, G'ijRepresenting the gray value, G, of the pixel in the ith row and jth column of the improved depth map (i.e. the optimized gray image)ijRepresenting the gray value, m, of the pixel in the ith row and jth column in a depth map (i.e. a gray image obtained by mapping depth values to gray values)ijRepresenting the intensity coefficients of the pixels in row i and column j of the depth map.
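For illustration, formula a can be applied to a whole grayscale image as in the following sketch; the NumPy representation and the clipping back to the displayable gray range are assumptions, since the disclosure does not specify how out-of-range products are handled:

```python
import numpy as np

def apply_intensity(gray: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Formula a: G'_ij = G_ij * m_ij, applied to the whole image.

    gray: grayscale image obtained by mapping depth values to gray values.
    m:    per-pixel intensity coefficients from the illumination model.
    """
    enhanced = gray.astype(np.float32) * m
    # Clamp back into the displayable gray range (an assumption, not part
    # of the disclosure).
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```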
As can be seen by comparing fig. 3 and fig. 4, this embodiment is a depth map visualization method based on an illumination model, and it can improve on techniques that display a depth map as a grayscale image or as a color image. A depth map displayed with the method of this embodiment better shows the details of regions with small depth changes. In addition, the method involves little computation and can be used for real-time display. When the content captured by a depth camera needs to be observed, the method markedly improves the perceptual experience of viewing that content.
In addition, for the same scene, the performance of different depth perception systems can be compared using the display method of this embodiment. For example, when the depth map is shown only through the depth-value-to-gray-value mapping, i.e., as a grayscale image as in fig. 5a and fig. 5b, it is difficult to judge the performance of the depth perception systems from fig. 5a and fig. 5b. With the display method of this embodiment, the display effects are as shown in fig. 6a and fig. 6b, respectively, and it can be clearly observed that the data obtained by the depth perception system corresponding to fig. 6a has high accuracy, while the data obtained by the system corresponding to fig. 6b has low accuracy.
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, the display method of the depth map provided in the embodiment calculates the intensity coefficient of the pixel according to the normal vector of the pixel in the depth map based on the illumination model. The depth map is displayed by referring to the intensity coefficient of each pixel, so that the depth change can be strengthened, and further, the area details with relatively small depth change can be better displayed.
A second embodiment of the present invention relates to a method of displaying a depth map. This embodiment is a detailed example of the first embodiment, and illustrates an acquisition method of a normal vector of each pixel in a depth map.
Specifically, as shown in fig. 7, in the present embodiment, the method for displaying a depth map includes steps 201 to 204, where step 203 and step 204 are substantially the same as step 102 and step 103 of the first embodiment and are not repeated here; the differences are mainly described below:
step 201: an initial normal vector for each pixel is obtained.
Specifically, for each pixel, the electronic device can calculate the initial normal vector of the pixel according to formula b.

Formula b: $\vec{n}^{\,0}_{ij} = \dfrac{\vec{v}_{ij}}{\lVert \vec{v}_{ij} \rVert}$

where $\vec{n}^{\,0}_{ij}$ is the initial normal vector of the pixel in the ith row and jth column, and $\vec{v}_{ij}$ is calculated according to formula c:

Formula c: $\vec{v}_{ij} = \sum_{k=1}^{K} \overrightarrow{p_{ij}p_{k}} \times \overrightarrow{p_{ij}p_{k+1}}$, with $p_{K+1} = p_{1}$

where K is the number of neighborhood pixels in the first region of the pixel, $p_{ij}$ is the pixel in the ith row and jth column, and $p_{k}$ is the kth neighborhood pixel of $p_{ij}$, the neighbors being ordered so that going from $p_{k}$ to $p_{k+1}$ winds around $p_{ij}$ in a fixed direction.
For example, take the case where going from $p_{k}$ to $p_{k+1}$ winds around $p_{ij}$ counter-clockwise, and let the first region of the pixel in the ith row and jth column be a circular region of radius 1 centered on that pixel, the unit length being the side length of a single pixel; the positions of the neighborhood pixels of $p_{ij}$ are shown schematically in fig. 8. The neighborhood pixels in the first region are the pixels, other than the center pixel in the ith row and jth column, located within the first region, and K = 4. That is, the neighborhood pixels of the pixel $p_{ij}$ in the ith row and jth column include $p_{1}$, $p_{2}$, $p_{3}$ and $p_{4}$.
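As a minimal sketch of formulas b and c for the K = 4 neighborhood of fig. 8, each pixel $p_{ij}$ is treated as the 3D point (j, i, depth[i, j]) in O-WHD; the boundary handling and the counter-clockwise neighbor order used here are assumptions:

```python
import numpy as np

def initial_normals(depth: np.ndarray) -> np.ndarray:
    """Formula c: sum the cross products of vectors to consecutive
    neighborhood pixels ordered around p_ij; formula b: normalize the sum
    to obtain the initial normal vector of each pixel."""
    h, w = depth.shape
    normals = np.zeros((h, w, 3), dtype=np.float32)
    # Counter-clockwise K = 4 neighborhood: right, up, left, down.
    offsets = [(0, 1), (-1, 0), (0, -1), (1, 0)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            p = np.array([j, i, depth[i, j]], dtype=np.float64)
            v = np.zeros(3, dtype=np.float64)
            for k in range(4):
                di1, dj1 = offsets[k]
                di2, dj2 = offsets[(k + 1) % 4]  # p_{K+1} wraps to p_1
                a = np.array([j + dj1, i + di1, depth[i + di1, j + dj1]]) - p
                b = np.array([j + dj2, i + di2, depth[i + di2, j + dj2]]) - p
                v += np.cross(a, b)
            n = np.linalg.norm(v)
            normals[i, j] = v / n if n > 0 else (0.0, 0.0, 1.0)
    return normals
```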
It should be noted that, as can be understood by those skilled in the art, in practical applications, the size of the first area may be determined according to project requirements, computing capabilities of the electronic device, and the like, and the present embodiment is not limited.
Step 202: for each pixel, a normal vector of the pixel is calculated from the initial normal vector of the pixel and the initial normal vector of the domain pixel of the pixel.
Specifically, the inventors found that when the initial normal vector is used directly as the normal vector of a pixel in subsequent operations, a layering effect such as that in fig. 4 appears in regions where the depth changes little, owing to integer precision. Layering refers to what is observed in the values: the values in different areas cluster into bands, with an observable jump between different bands, so the transitions are not smooth and natural. To optimize the display effect of the depth map, after obtaining the initial normal vector map, the inventors take $\vec{n}^{\,0}_{ij}$ as the center and combine $\vec{n}^{\,0}_{ij}$ with the initial normal vectors within a specified surrounding range to obtain a new normal vector, which is then used as the normal vector of the pixel in subsequent operations.
For example, let the second region of the pixel in the ith row and jth column be a 3 × 3 square region centered on that pixel, the unit length being the side length of a single pixel. Fig. 9 schematically shows the positions of the pixels in the second region of the pixel in the ith row and jth column, where $\vec{n}^{\,0}_{ij}$ is the initial normal vector of the pixel in the ith row and jth column, and the other vectors are the initial normal vectors of the neighborhood pixels within the second region.
It should be noted that, as can be understood by those skilled in the art, in practical applications, the size of the second area may be determined according to project requirements, computing capabilities of the electronic device, and the like, and the embodiment is not limited.
It should be noted that, as can be understood by those skilled in the art, the size of the first region and the size of the second region may be the same or different, and the present embodiment is not limited thereto.
In one example, the normal vector of a pixel is calculated according to formula d from the initial normal vector of the pixel and the initial normal vectors of the neighborhood pixels within the second region of the pixel.

Formula d: $\vec{n}_{ij} = \dfrac{\sum_{l=1}^{L} \vec{n}^{\,0}_{l}}{\left\lVert \sum_{l=1}^{L} \vec{n}^{\,0}_{l} \right\rVert}$

where $\vec{n}_{ij}$ is the normal vector of the pixel, L is the number of pixels in the second region of the pixel in the ith row and jth column, and $\vec{n}^{\,0}_{l}$ is the initial normal vector of the lth pixel in the second region.
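A minimal sketch of formula d follows, smoothing the initial normal vectors over a square second region; the renormalized-sum form matches the reconstruction above, and leaving boundary pixels unchanged is an assumption:

```python
import numpy as np

def smooth_normals(init_normals: np.ndarray, radius: int = 1) -> np.ndarray:
    """Formula d: replace each initial normal vector by the normalized sum
    of the L initial normal vectors inside its second region (a square of
    side 2 * radius + 1; radius = 1 gives the 3 x 3 region of fig. 9)."""
    h, w, _ = init_normals.shape
    out = init_normals.copy()
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            window = init_normals[i - radius:i + radius + 1,
                                  j - radius:j + radius + 1]
            s = window.reshape(-1, 3).sum(axis=0)  # sum of the L vectors
            n = np.linalg.norm(s)
            if n > 0:
                out[i, j] = s / n
    return out
```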
It should be noted that, as will be understood by those skilled in the art, in practical applications, a new normal vector may also be calculated based on other constraint relationships, and this embodiment is merely an example, and does not limit the constraint relationship between the initial normal vector of the pixel and the normal vector of the pixel used in the subsequent operation.
It should be noted that, as will be understood by those skilled in the art, in practical applications, the normal vector of the pixel may also be calculated in other manners, and this embodiment is merely an example.
In another embodiment, to reduce the degree of layering, the electronic device calculates the normal vector of the pixel in another way: for each pixel, it calculates the normal vector of the pixel based on the depth value of the pixel and the depth values of the neighborhood pixels within a preset region of the pixel, where the side length of the preset region is greater than 3 times the diameter of a pixel. Specifically, when calculating the normal vector or the initial normal vector of a pixel, the neighborhood radius of $p_{ij}$ is enlarged, increasing the number K of neighborhood pixels, so as to reduce the jitter of the normal vectors within continuous regions and enhance their stability. For example, as shown in fig. 10, the preset region is a 5 × 5 square region centered on the pixel in the ith row and jth column, the unit length being the side length of a single pixel; the neighborhood pixels of the pixel in the ith row and jth column are $p_{1}$ to $p_{23}$, i.e., their number is increased to 23, so as to reduce the jitter of the normal vectors within continuous regions and enhance their stability.
Alternatively, the electronic device may optimize the display effect at two levels. Specifically, for each pixel, the electronic device calculates an initial normal vector of the pixel according to the depth value of the pixel and the depth values of the neighborhood pixels in the preset region of the pixel, and then calculates the normal vector of the pixel according to the initial normal vector of the pixel and the initial normal vectors of the neighborhood pixels in the preset region of the pixel. The way the final normal vector of a pixel is calculated from its initial normal vector and those of its neighborhood pixels is as described for formula d and is not repeated here. Fig. 11 shows the depth map after this two-level optimization; as can be seen from fig. 11, the layering phenomenon is significantly reduced after the two-level optimization, so the displayed depth map is smoother.
It should be noted that, as will be understood by those skilled in the art, in practical applications, either embodiment may be selected as needed to reduce the degree of layering.
In one example, before the electronic device acquires the normal vector of each pixel in the depth map, the display method further includes: invoking different processing threads to separately read the depth values of the depth map, so as to calculate the normal vectors of the pixels of the depth map in parallel. Electronic devices need to browse and view large numbers of depth maps, process depth maps of higher resolution, and frequently zoom and display depth maps. These requirements involve heavy computation; conventionally, the results of batch processing are written to new files on the storage device and the new files are then browsed. This embodiment instead provides a parallel implementation to achieve a real-time browsing experience. Specifically, the electronic device performs the calculations on the pixel grid and obtains the visualized depth map based on the illumination model. In each computation step, the value calculated at a pixel does not depend on the result of the same step at neighboring pixels, and the values used in the calculation are only read, never written, so a parallel processing scheme is possible. The electronic device is therefore provided with a plurality of processing threads, each responsible for calculating the value of one unit pixel, and within the same computation step all processing threads are mutually independent across the whole image. Depending on the deployment platform, multiple processing threads can run in parallel to achieve real-time processing. For example, taking 4 parallel processing threads as shown in fig. 12, the threads T1, T2, T3 and T4 simultaneously perform the same computation step, producing the results for the pixels shown with gray shading; the text in the pixel grid indicates the current pixel data each thread reads, and since T1 and T2 perform read-only accesses, even overlapping reads do not affect the parallelism.
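The read-only parallel scheme can be sketched as follows; splitting the work by rows rather than by single pixels, and the use of Python threads, are adaptations assumed for illustration (the disclosure only requires that, within a step, threads read but never write the shared values):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def normals_parallel(depth: np.ndarray, n_threads: int = 4) -> np.ndarray:
    """Each thread computes normals for its own band of rows; all threads
    only read the shared depth array, so no locking is needed per step."""
    h, w = depth.shape
    normals = np.zeros((h, w, 3), dtype=np.float32)

    def work(rows):
        for i in rows:
            for j in range(1, w - 1):
                # Central-difference normal as a simple stand-in for the
                # neighborhood formulas above (an illustrative assumption).
                dzdx = (depth[i, j + 1] - depth[i, j - 1]) / 2.0
                dzdy = (depth[i + 1, j] - depth[i - 1, j]) / 2.0
                v = np.array([-dzdx, -dzdy, 1.0], dtype=np.float32)
                normals[i, j] = v / np.linalg.norm(v)

    bands = np.array_split(np.arange(1, h - 1), n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(work, bands))  # threads write disjoint rows
    return normals
```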
Step 203: and based on a preset illumination model, enhancing the depth information represented by the normal vector of each pixel to obtain the intensity coefficient of each pixel.
Step 204: according to the intensity coefficient of each pixel, a depth map is displayed.
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, the display method of the depth map provided in this embodiment calculates the intensity coefficient of each pixel from the normal vector of the pixel in the depth map based on an illumination model. Displaying the depth map with reference to the intensity coefficients of the pixels strengthens the depth changes, so the details of regions with relatively small depth changes can be better displayed. In addition, calculating the normal vector of each pixel from its initial normal vector and the initial normal vectors of its neighborhood pixels reduces the jitter of the normal vectors in continuous regions and enhances their stability, which in turn weakens the degree of layering and optimizes the display effect.
A third embodiment of the present invention relates to a method for displaying a depth map. This embodiment is a detailed example of the first embodiment, and illustrates another method for acquiring a normal vector of each pixel in a depth map.
Specifically, as shown in fig. 13, in the present embodiment, the method for displaying a depth map includes steps 301 to 304, where steps 301 and 304 are substantially the same as steps 101 and 103 of the first embodiment and are not repeated here; the differences are mainly described below:
step 301: and acquiring a normal vector of each pixel in the depth map. The process of obtaining the normal vector of each pixel may refer to the description of the second embodiment, and is not described herein again.
Step 302: and obtaining the light direction vector of the simulated light source corresponding to the illumination model on the pixel.
Specifically, a simulated light source may be set in the camera coordinate system; it may be a point light source or a parallel light source. The light direction vector of the simulated light source at a pixel is calculated from the relative position information of the simulated light source and the pixel.
Step 303: and calculating the intensity coefficient of the pixel according to the light direction vector of the pixel and the normal vector of the pixel.
In one example, the scalar product (dot product) of the light direction vector of the pixel and the normal vector of the pixel is calculated as the intensity coefficient of the pixel.
The calculation of the intensity coefficients of the pixels is illustrated below by setting, in the camera coordinate system, a parallel light source pointing at the object perpendicular to the XOY plane. As shown in fig. 14, the light direction vector of the parallel light source is $\vec{l}$, and the normal vector corresponding to a point $p_{ij}$ on the depth map is $\vec{n}_{ij}$. The intensity coefficient $m_{ij}$ of the pixel can then be calculated according to formula e:

Formula e: $m_{ij} = \vec{l} \cdot \vec{n}_{ij}$

After the intensity coefficients of all pixels have been calculated, the coefficients $m_{ij}$ are mapped onto the grayscale image or the color image.
It should be noted that, as can be understood by those skilled in the art, in practical applications, other existing more complex illumination models may also be used, so as to obtain different styles of depth map visualization effects based on the illumination models. The present embodiment does not limit the illumination model used specifically.
Step 304: according to the intensity coefficient of each pixel, a depth map is displayed.
Compared with the prior art, the display method of the depth map provided in the embodiment calculates the intensity coefficient of the pixel according to the normal vector of the pixel in the depth map based on the illumination model. The depth map is displayed by referring to the intensity coefficient of each pixel, so that the depth change can be strengthened, and further, the area details with relatively small depth change can be better displayed. In addition, by enlarging the neighborhood range, the jitter amplitude of the normal vector in a continuous region is reduced, the stability of the normal vector is enhanced, the layering degree is further weakened, and the display effect is optimized.
The steps of the above methods are divided as they are for clarity of description. In implementation, they may be combined into a single step, or a step may be split into multiple steps; as long as the same logical relationship is included, they fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without changing its core design, also falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to a depth map display device, as shown in fig. 15, including: an acquisition module 401, a calculation module 402 and a display module 403. The obtaining module 401 is configured to obtain a normal vector of each pixel in the depth map; the calculating module 402 is configured to enhance depth information represented by a normal vector of each pixel based on a preset illumination model to obtain an intensity coefficient of each pixel; the display module 403 is configured to display a depth map according to the intensity coefficient of each pixel.
It should be noted that this embodiment is an apparatus embodiment corresponding to the first to third embodiments and may be implemented in cooperation with them. The related technical details mentioned in the first to third embodiments remain valid in this embodiment and, to reduce repetition, are not described here again. Accordingly, the related technical details mentioned in this embodiment also apply to the first to third embodiments.
It should be noted that each module referred to in this embodiment is a logical module. In practical applications, a logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units less closely related to solving the technical problem addressed by the present invention are not introduced in this embodiment, which does not mean that no other units exist in this embodiment.
A fifth embodiment of the present invention relates to an electronic apparatus, as shown in fig. 16, including: at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can perform the depth map display method according to the above embodiments.
The electronic device includes: one or more processors 501 and a memory 502, with one processor 501 being an example in fig. 16. The processor 501 and the memory 502 may be connected by a bus or other means, and fig. 16 illustrates the connection by a bus as an example. Memory 502, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 501 executes various functional applications and data processing of the device, that is, implements the above-described display method of the depth map, by executing the nonvolatile software program, instructions, and modules stored in the memory 502.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 502 may optionally include memory located remotely from processor 501, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502 and when executed by the one or more processors 501 perform the method of displaying a depth map in any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to the method; for technical details not described in this embodiment, refer to the method provided by the embodiments of the present application.
A sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to cause a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for practicing the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A method for displaying a depth map, comprising:
acquiring a normal vector of each pixel in the depth map;
based on a preset illumination model, enhancing depth information represented by a normal vector of each pixel to obtain an intensity coefficient of each pixel;
and displaying the depth map according to the intensity coefficient of each pixel.
2. The method for displaying the depth map according to claim 1, wherein the obtaining a normal vector of each pixel in the depth map comprises:
acquiring an initial normal vector of each pixel;
and aiming at each pixel, calculating the normal vector of the pixel according to the initial normal vector of the pixel and the initial normal vectors of the neighborhood pixels of the pixel.
3. The method for displaying the depth map according to claim 1, wherein the obtaining a normal vector of each pixel in the depth map comprises:
and aiming at each pixel, calculating a normal vector of the pixel according to the depth value of the pixel and the depth value of a neighborhood pixel in a preset area of the pixel, wherein the side length of the preset area is more than 3 times of the diameter of the pixel.
4. The method for displaying the depth map according to claim 3, wherein for each of the pixels, the calculating the normal vector of the pixel according to the depth value of the pixel and the depth values of the neighboring pixels in the preset region of the pixel comprises:
aiming at each pixel, calculating an initial normal vector of the pixel according to the depth value of the pixel and the depth value of a neighborhood pixel in a preset area of the pixel;
and calculating the normal vector of the pixel according to the initial normal vector of the pixel and the initial normal vector of the neighborhood pixel in the preset area of the pixel.
5. The method for displaying the depth map according to any one of claims 1 to 4, wherein before the obtaining the normal vector of each pixel in the depth map, the method for displaying the depth map further comprises:
and calling different processing threads to respectively read the depth values of the depth map so as to calculate the normal vectors of all pixels of the depth map in parallel.
6. The method for displaying the depth map according to any one of claims 1 to 4, wherein the enhancing depth information represented by the normal vector of each pixel based on a preset illumination model to obtain the intensity coefficient of each pixel comprises:
obtaining a light direction vector of a simulated light source corresponding to the illumination model on the pixel;
and calculating the intensity coefficient of the pixel according to the light direction vector of the pixel and the normal vector of the pixel.
7. The method for displaying the depth map according to claim 6, wherein the calculating the intensity coefficient of the pixel according to the light direction vector of the pixel and the normal vector of the pixel comprises:
and calculating the scalar product (dot product) of the light direction vector of the pixel and the normal vector of the pixel as the intensity coefficient of the pixel.
8. A display device for depth maps, comprising: the device comprises an acquisition module, a calculation module and a display module;
the acquisition module is used for acquiring a normal vector of each pixel in the depth map;
the calculation module is used for enhancing the depth information represented by the normal vector of each pixel based on a preset illumination model to obtain the intensity coefficient of each pixel;
the display module is used for displaying the depth map according to the intensity coefficient of each pixel.
9. An electronic device, comprising: at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of displaying a depth map of any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the method of displaying a depth map of any one of claims 1 to 7.
CN202110535531.0A 2021-05-17 2021-05-17 Depth map display method, display device, electronic device, and storage medium Active CN113126944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110535531.0A CN113126944B (en) 2021-05-17 2021-05-17 Depth map display method, display device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110535531.0A CN113126944B (en) 2021-05-17 2021-05-17 Depth map display method, display device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN113126944A true CN113126944A (en) 2021-07-16
CN113126944B CN113126944B (en) 2021-11-09

Family

ID=76782088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110535531.0A Active CN113126944B (en) 2021-05-17 2021-05-17 Depth map display method, display device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113126944B (en)

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6288721B1 (en) * 1999-07-07 2001-09-11 Litton Systems, Inc. Rendering process and method for digital map illumination intensity shading
US20110254843A1 (en) * 2003-07-28 2011-10-20 Landmark Graphics Corporation System and Method for Real-Time Co-Rendering of Multiple Attributes
US9536345B2 (en) * 2012-12-26 2017-01-03 Intel Corporation Apparatus for enhancement of 3-D images using depth mapping and light source synthesis
WO2014105542A1 (en) * 2012-12-26 2014-07-03 Intel Corporation Apparatus for enhancement of 3-d images using depth mapping and light source synthesis
CN105474261A (en) * 2013-05-23 2016-04-06 生物梅里埃公司 Method, system and computer program product for improving the quality of an image
CN103400351A (en) * 2013-07-30 2013-11-20 武汉大学 Low illumination image enhancing method and system based on KINECT depth graph
US20160300326A1 (en) * 2015-04-10 2016-10-13 Realtek Semiconductor Corporation Image processing device and method thereof
CN106408524A (en) * 2016-08-17 2017-02-15 南京理工大学 Two-dimensional image-assisted depth image enhancement method
CN107169933A (en) * 2017-04-14 2017-09-15 杭州光珀智能科技有限公司 A kind of edge reflections pixel correction method based on TOF depth cameras
CN107807806A (en) * 2017-10-27 2018-03-16 广东欧珀移动通信有限公司 Display parameters method of adjustment, device and electronic installation
CN108335267A (en) * 2017-12-29 2018-07-27 上海玮舟微电子科技有限公司 A kind of processing method of depth image, device, equipment and storage medium
CN108961390A (en) * 2018-06-08 2018-12-07 华中科技大学 Real-time three-dimensional method for reconstructing based on depth map
CN109118582A (en) * 2018-09-19 2019-01-01 东北大学 A kind of commodity three-dimensional reconstruction system and method for reconstructing
CN110956603A (en) * 2018-09-25 2020-04-03 Oppo广东移动通信有限公司 Method and device for detecting edge flying spot of depth image and electronic equipment
CN109683699A (en) * 2019-01-07 2019-04-26 深圳增强现实技术有限公司 The method, device and mobile terminal of augmented reality are realized based on deep learning
CN109816781A (en) * 2019-02-01 2019-05-28 武汉大学 A kind of multiple view solid geometry method enhanced based on image detail and structure
CN110211061A (en) * 2019-05-20 2019-09-06 清华大学 List depth camera depth map real time enhancing method and device neural network based
CN110378945A (en) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment
CN112258565A (en) * 2019-07-22 2021-01-22 华为技术有限公司 Image processing method and device
CN110544233A (en) * 2019-07-30 2019-12-06 北京的卢深视科技有限公司 Depth image quality evaluation method based on face recognition application
CN111145119A (en) * 2019-12-25 2020-05-12 维沃移动通信(杭州)有限公司 Image processing method and electronic equipment
CN111145094A (en) * 2019-12-26 2020-05-12 北京工业大学 Depth map enhancement method based on surface normal guidance and graph Laplace prior constraint
CN111402392A (en) * 2020-01-06 2020-07-10 香港光云科技有限公司 Illumination model calculation method, material parameter processing method and material parameter processing device
CN111402313A (en) * 2020-03-13 2020-07-10 合肥的卢深视科技有限公司 Image depth recovery method and device
CN111563950A (en) * 2020-05-07 2020-08-21 贝壳技术有限公司 Texture mapping strategy determination method and device and computer readable storage medium
CN112116602A (en) * 2020-08-31 2020-12-22 北京的卢深视科技有限公司 Depth map repairing method and device and readable storage medium
CN112070889A (en) * 2020-11-13 2020-12-11 季华实验室 Three-dimensional reconstruction method, device and system, electronic equipment and storage medium
CN112489103A (en) * 2020-11-19 2021-03-12 北京的卢深视科技有限公司 High-resolution depth map acquisition method and system
CN112801907A (en) * 2021-02-03 2021-05-14 北京字节跳动网络技术有限公司 Depth image processing method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yan Zengqiang, "Research on Depth Image Enhancement Algorithms", China Excellent Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881878A (en) * 2022-05-12 2022-08-09 厦门微图软件科技有限公司 Depth image enhancement method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113126944B (en) 2021-11-09

Similar Documents

Publication Publication Date Title
US10529117B2 (en) Systems and methods for rendering optical distortion effects
EP2973423B1 (en) System and method for display of a repeating texture stored in a texture atlas
US8970586B2 (en) Building controllable clairvoyance device in virtual world
US20190318530A1 (en) Systems and Methods for Reducing Rendering Latency
CN111640180B (en) Three-dimensional reconstruction method and device and terminal equipment
US10699467B2 (en) Computer-graphics based on hierarchical ray casting
US10553012B2 (en) Systems and methods for rendering foveated effects
CN111047506B (en) Environmental map generation and hole filling
US11579466B2 (en) Method, device, apparatus and computer readable storage medium of simulating volumetric 3D display
US20160343155A1 (en) Dynamic filling of shapes for graphical display of data
CN109979013B (en) Three-dimensional face mapping method and terminal equipment
JP2019536162A (en) System and method for representing a point cloud of a scene
CN111161398B (en) Image generation method, device, equipment and storage medium
CN110458954B (en) Contour line generation method, device and equipment
CN113126944B (en) Depth map display method, display device, electronic device, and storage medium
CN113129420B (en) Ray tracing rendering method based on depth buffer acceleration
CN104915948A (en) System and method for selecting a two-dimensional region of interest using a range sensor
CN107464278B (en) Full-view sphere light field rendering method
CN110738719A (en) Web3D model rendering method based on visual range hierarchical optimization
KR101658852B1 (en) Three-dimensional image generation apparatus and three-dimensional image generation method
CN116012483A (en) Image rendering method and device, storage medium and electronic equipment
CN112634439B (en) 3D information display method and device
JP2024521816A (en) Unrestricted image stabilization
Oliveira et al. Incremental texture mapping for autonomous driving
CN111028357B (en) Soft shadow processing method and device of augmented reality equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210716

Assignee: Anhui Xingtai Financial Leasing Co.,Ltd.

Assignor: Hefei lushenshi Technology Co.,Ltd.

Contract record no.: X2022980006062

Denomination of invention: Depth map display method, display device, electronic equipment and storage medium

Granted publication date: 20211109

License type: Exclusive License

Record date: 20220523

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Depth map display method, display device, electronic equipment and storage medium

Effective date of registration: 20220525

Granted publication date: 20211109

Pledgee: Anhui Xingtai Financial Leasing Co.,Ltd.

Pledgor: Hefei lushenshi Technology Co.,Ltd.

Registration number: Y2022980006214

TR01 Transfer of patent right

Effective date of registration: 20230404

Address after: 230091 room 611-217, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei, Anhui Province

Patentee after: Hefei lushenshi Technology Co.,Ltd.

Address before: 100083 room 3032, North B, bungalow, building 2, A5 Xueyuan Road, Haidian District, Beijing

Patentee before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD.

Patentee before: Hefei lushenshi Technology Co.,Ltd.

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230719

Granted publication date: 20211109

Pledgee: Anhui Xingtai Financial Leasing Co.,Ltd.

Pledgor: Hefei lushenshi Technology Co.,Ltd.

Registration number: Y2022980006214

EC01 Cancellation of recordation of patent licensing contract

Assignee: Anhui Xingtai Financial Leasing Co.,Ltd.

Assignor: Hefei lushenshi Technology Co.,Ltd.

Contract record no.: X2022980006062

Date of cancellation: 20230720