CN113965679A - Depth map acquisition method, structured light camera, electronic device, and storage medium - Google Patents


Info

Publication number
CN113965679A
Authority
CN
China
Prior art keywords
main
speckle pattern
auxiliary
camera
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111217814.7A
Other languages
Chinese (zh)
Other versions
CN113965679B (en)
Inventor
付贤强
化雪诚
薛远
户磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilusense Technology Co Ltd
Hefei Dilusense Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dilusense Technology Co Ltd, Hefei Dilusense Technology Co Ltd filed Critical Beijing Dilusense Technology Co Ltd
Priority to CN202111217814.7A
Publication of CN113965679A
Application granted
Publication of CN113965679B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An embodiment of the invention relates to the field of computer vision and discloses a depth map acquisition method, a structured light camera, an electronic device, and a storage medium. The depth map acquisition method is applied to a structured light camera that comprises a main camera and N auxiliary cameras arranged in different directions of the main camera, and comprises the following steps: acquiring a main speckle pattern through the main camera and evaluating its quality to obtain a quality evaluation result; when the result indicates that the quality of the main speckle pattern meets the standard, acquiring a depth map from the main speckle pattern; when the result indicates that the quality does not meet the standard, detecting the illumination intensity of the environment through light sensors preset in the N different directions of the main camera; selectively starting the auxiliary camera in one direction according to the detected illumination intensity values to obtain an auxiliary speckle pattern; and acquiring a depth map from the main speckle pattern and the auxiliary speckle pattern. The method balances the quality of the depth map, processing efficiency, and resource consumption.

Description

Depth map acquisition method, structured light camera, electronic device, and storage medium
Technical Field
The embodiment of the invention relates to the field of computer vision, in particular to a depth map acquisition method, a structured light camera, electronic equipment and a storage medium.
Background
A structured light camera is typically mounted on an electronic device to acquire a depth image of a scene. A structured light camera generally comprises a laser projector and a camera: the laser projector projects laser light outward, the camera receives the laser light reflected back from the scene to obtain a speckle pattern, and the depth image is then computed from the speckle pattern. However, when the scene is shot at night, under strong lamplight or sunlight, or when the subject is too far from or too close to the camera, the acquired speckle pattern is noisy and some speckle points merge together and cannot be distinguished or accurately extracted, so the quality of the resulting depth image is poor.
Disclosure of Invention
An object of embodiments of the present invention is to provide a depth map acquisition method, a structured light camera, an electronic device, and a storage medium, which improve the quality and processing efficiency of a depth map and reduce resource consumption.
In order to solve the above technical problem, an embodiment of the present invention provides a depth map acquisition method applied to a structured light camera, where the structured light camera includes a main camera and N auxiliary cameras arranged in different directions of the main camera. The method includes: acquiring a main speckle pattern through the main camera, performing quality evaluation on the main speckle pattern, and obtaining a quality evaluation result of the main speckle pattern; when the quality evaluation result indicates that the quality of the main speckle pattern meets the standard, acquiring a depth map from the main speckle pattern; when the quality evaluation result indicates that the quality does not meet the standard, detecting illumination intensity values of the environment through optical sensors preset in the N different directions of the main camera; selectively starting the auxiliary camera in one direction according to the illumination intensity value detected by each optical sensor, and acquiring an auxiliary speckle pattern through the started auxiliary camera; and acquiring a depth map from the main speckle pattern and the auxiliary speckle pattern.
Embodiments of the present invention also provide a structured light camera, including:
the main camera is used for acquiring a main speckle pattern;
the auxiliary cameras are arranged in N different directions of the main camera and are used for acquiring auxiliary speckle patterns;
at least one processor, configured to perform quality evaluation on the main speckle pattern and obtain a quality evaluation result; to acquire a depth map from the main speckle pattern when the quality evaluation result indicates that its quality meets the standard; and, when the quality evaluation result indicates that its quality does not meet the standard, to acquire illumination intensity values of the environment from the optical sensors preset in the N different directions of the main camera, selectively start the auxiliary camera in one direction, and acquire a depth map from the main speckle pattern and the auxiliary speckle pattern acquired by the started auxiliary camera.
An embodiment of the present invention also provides an electronic device, including:
a housing; and
the structured light camera according to the above embodiment is disposed on the housing, and the structured light camera is capable of executing the depth map acquisition method according to the above embodiment.
The embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the depth map acquisition method mentioned in the above embodiment is implemented.
According to the depth map acquisition method provided by the embodiment of the invention, the quality of the main speckle pattern acquired by the main camera of the structured light camera is evaluated, and whether the auxiliary camera needs to be started is determined from the quality evaluation result, which saves the resources of the structured light camera without affecting the quality of the depth map. When the quality of the main speckle pattern meets the standard, the depth map is obtained directly from the main speckle pattern, which guarantees the quality of the depth map while improving processing efficiency. When the quality of the main speckle pattern does not meet the standard, the auxiliary camera is started to obtain an auxiliary speckle pattern, and the depth map is obtained from both the main and auxiliary speckle patterns. The method as a whole balances the quality of the depth map, processing efficiency, and resource consumption.
In addition, in the depth map acquisition method provided in an embodiment of the present invention, selectively starting the auxiliary camera in one direction according to the illumination intensity value detected by each optical sensor includes: when the quality evaluation result indicates that the main speckle pattern is substandard because of underexposure, starting the auxiliary camera in the direction with the maximum illumination intensity value; and when it indicates that the main speckle pattern is substandard because of overexposure, starting the auxiliary camera in the direction with the minimum illumination intensity value. When the quality of the main speckle pattern does not meet the standard, the underlying cause is analyzed from the quality evaluation result, and the appropriate auxiliary camera is started according to that cause to obtain a corresponding auxiliary speckle pattern. When the depth map is subsequently obtained from the main and auxiliary speckle patterns, its quality is markedly better than that of a depth map obtained from an auxiliary camera chosen without regard to the cause.
In addition, in the depth map acquisition method provided by the embodiment of the invention, the quality evaluation result comprises a sharpness evaluation value and a brightness evaluation value. When the sharpness evaluation value of the main speckle pattern is smaller than a preset sharpness threshold, the quality of the main speckle pattern does not meet the standard. In that case, when the brightness evaluation value of the main speckle pattern falls within a preset underexposure brightness range, the cause of the substandard quality is underexposure; when it falls within a preset overexposure brightness range, the cause is overexposure. Whether the quality of the main speckle pattern meets the standard is thus indicated by the sharpness evaluation value, and the cause of substandard quality by the brightness evaluation value, so the quality condition of the speckle pattern can be determined simply and quickly from these two parameters.
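The two-parameter decision just described can be sketched as a small classifier. This is a minimal illustration, not the patent's implementation; the sharpness threshold and the brightness ranges below are hypothetical values, since the patent does not fix concrete numbers.

```python
def evaluate_quality(sharpness, brightness,
                     sharpness_threshold=5.0,
                     underexposure_range=(0.0, 4.5),
                     overexposure_range=(9.0, 10.0)):
    """Classify a main speckle pattern from its two evaluation values.

    Returns "ok" when quality meets the standard, otherwise the cause of
    the substandard quality. All numeric defaults are hypothetical.
    """
    if sharpness >= sharpness_threshold:
        return "ok"  # quality meets the standard; depth map from main pattern
    lo, hi = underexposure_range
    if lo <= brightness <= hi:
        return "underexposed"  # start aux camera where illumination is highest
    lo, hi = overexposure_range
    if lo <= brightness <= hi:
        return "overexposed"   # start aux camera where illumination is lowest
    return "substandard"  # sharpness too low but cause undetermined
```

For instance, `evaluate_quality(3.0, 4.2)` returns `"underexposed"`, matching the example brightness value discussed later in the description.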
In addition, in the method provided by the embodiment of the present invention, acquiring a depth map from the main speckle pattern includes: if there is one main camera, acquiring one main speckle pattern through it and acquiring the depth map from that main speckle pattern and a preset reference speckle pattern; if there are two main cameras, acquiring two main speckle patterns through them and acquiring the depth map from the two main speckle patterns. When the quality of the main speckle pattern meets the standard and the auxiliary cameras do not need to be started, a depth map acquisition method matched to the number of main speckle patterns is provided, balancing depth map quality against processing efficiency.
In addition, in the method provided by the embodiment of the present invention, acquiring a depth map from the main speckle pattern and the auxiliary speckle pattern includes: when there is one main speckle pattern and one started auxiliary camera, performing matching calculation between the main speckle pattern and the auxiliary speckle pattern to acquire the depth map, or fusing the main and auxiliary speckle patterns into a fused speckle pattern and matching it against a preset reference speckle pattern; when there is one main speckle pattern and two or more started auxiliary cameras, fusing the auxiliary speckle patterns into a fused speckle pattern and matching it against the main speckle pattern; when there are two main speckle patterns and one started auxiliary camera, fusing the main speckle patterns into a fused speckle pattern and matching it against the auxiliary speckle pattern; and when there are two main speckle patterns and two or more started auxiliary cameras, fusing the main speckle patterns into a first fused speckle pattern, fusing the auxiliary speckle patterns into a second fused speckle pattern, and matching the first fused speckle pattern against the second to acquire the depth map.
When the quality of the main speckle pattern does not meet the standard and the auxiliary cameras need to be started, one or more depth map acquisition modes are provided according to the number of main speckle patterns and started auxiliary cameras; a suitable processing mode can be chosen according to user requirements and the configuration of the structured light camera, effectively improving depth map quality while saving resources and improving efficiency.
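The case analysis above reduces to a dispatch on the two counts. The sketch below only names the plan; `fuse` and `match` stand in for the image-fusion and matching-calculation steps, which the patent does not specify further.

```python
def plan_depth_computation(n_main, n_aux):
    """Return the fusion/matching plan for the given numbers of main
    speckle patterns and started auxiliary cameras, as a short string."""
    if n_main == 1 and n_aux == 1:
        # alternative in the same case: match(fuse(main, aux), reference)
        return "match(main, aux)"
    if n_main == 1 and n_aux >= 2:
        return "match(main, fuse(aux...))"
    if n_main == 2 and n_aux == 1:
        return "match(fuse(main...), aux)"
    if n_main == 2 and n_aux >= 2:
        return "match(fuse(main...), fuse(aux...))"
    raise ValueError("combination not covered by this embodiment")
```

For example, `plan_depth_computation(2, 3)` returns `"match(fuse(main...), fuse(aux...))"`: both sides are fused before matching.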
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which elements having the same reference numerals represent similar elements, and in which the figures are not drawn to scale unless otherwise specified.
Fig. 1 is a first flowchart of a depth map acquisition method provided in an embodiment of the present invention;
fig. 2 is a second flowchart of a depth map obtaining method according to an embodiment of the present invention;
fig. 3 is a flowchart three of a depth map acquisition method provided in the embodiment of the present invention;
fig. 4 is a fourth flowchart of a depth map acquisition method provided in the embodiment of the present invention;
FIG. 5 is a schematic diagram of a structure of a structured light camera provided in accordance with an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention more apparent, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solution claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The depth map acquisition method according to the present embodiment is described in detail below. The following implementation details are provided to facilitate understanding and are not required to practice the present solution.
The depth map acquisition method is suitable for structured light cameras of various configurations, such as monocular and binocular. When the structured light camera is used in a high-brightness environment, such as strong lamplight or strong sunlight, the brightness of the collected speckle pattern is affected: the speckle pattern may be submerged in the background, or its overall brightness may be so high that parts of it are overexposed and adjacent speckle points merge together and cannot be distinguished, which degrades the accuracy of the image matching algorithm and hence the quality of the depth image. When the subject is far from the structured light camera, the light energy it reflects is weak and the collected speckle pattern is underexposed; its brightness is low and the speckle points cannot be accurately extracted, so the quality of the depth image is poor. When the subject is close to the structured light camera, the reflected light energy is too strong and the overall brightness of the speckle pattern is high; parts of it may be overexposed, with adjacent speckle points merging together, which again degrades the accuracy of the image matching algorithm and the quality of the depth image.
An embodiment of the invention relates to a depth map acquisition method applied to a structured light camera, where the structured light camera comprises a main camera and N auxiliary cameras arranged in different directions of the main camera. The number of main cameras may be one, two, or more. There may be one or more auxiliary cameras in each direction, and the number in each direction may be the same or different. The positional relationship between the main camera and the auxiliary cameras is not limited here and may take any form, and the cameras in each direction may be arranged uniformly or non-uniformly. For example, when N is 4, the structured light camera includes a main camera and auxiliary cameras in 4 different directions, the 4 directions being the front, back, left, and right of the main camera in the plane in which it lies. The present embodiment places no particular limitation on the structural composition or appearance of the structured light camera.
As shown in fig. 1, the depth map acquisition method according to the present embodiment includes:
and 101, acquiring a main speckle pattern through a main camera, performing quality evaluation on the main speckle pattern, and acquiring a quality evaluation result of the main speckle pattern.
Specifically, the number of main cameras included in the structured light camera is not limited in the present embodiment. Generally, when the structured light camera works, the main camera is always on, while the auxiliary cameras are turned on according to working conditions. When the structured light camera has only one main camera, one main speckle pattern is obtained; when there are two or more main cameras, multiple main speckle patterns can be obtained, or one main camera can be selected to shoot a single main speckle pattern in order to save resources such as device battery power.
And step 102, when the quality evaluation result of the main speckle pattern indicates that the quality of the main speckle pattern meets the standard, acquiring a depth map according to the main speckle pattern.
Specifically, when two or more main speckle patterns are obtained, the quality of each is evaluated, and if several main speckle patterns meet the standard, the one with the best quality can be selected according to its quality score to obtain the depth map.
In addition, before the depth map is acquired from the main speckle pattern, the speckle pattern can be filtered to improve the quality of the depth map, or the main and auxiliary cameras can be calibrated to obtain their intrinsic, extrinsic, and distortion parameters, which are then used to rectify the acquired speckle patterns. To improve processing efficiency, the speckle pattern can also be binarized.
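As one example of the preprocessing mentioned here, a fixed-threshold binarization of a grayscale speckle pattern might look as follows; the threshold of 128 is an arbitrary illustrative choice, and a real pipeline would likely tune or adapt it.

```python
def binarize(pattern, threshold=128):
    """Binarize a grayscale speckle pattern given as a list of rows of
    0-255 intensity values; speckle points become 255, background 0."""
    return [[255 if px >= threshold else 0 for px in row] for row in pattern]
```

For example, `binarize([[30, 200, 131]])` yields `[[0, 255, 255]]`, reducing each pixel to a single speckle/background bit before matching.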
And step 103, when the quality evaluation result of the main speckle pattern indicates that its quality does not meet the standard, detecting the illumination intensity values of the environment through the optical sensors preset in the N different directions of the main camera.
Specifically, the optical sensors in this embodiment are disposed either on the electronic device to which the structured light camera belongs or in the environment in which that device is located. When disposed on the electronic device, the optical sensors can be arranged according to the position of the main camera, for example above, below, to the left of, and to the right of the main camera. When disposed in the environment, the positions of the optical sensors can be determined comprehensively from factors such as the shooting direction, the viewing angle of the main camera, and the position of the subject.
And step 104, selectively starting the auxiliary camera in one direction according to the illumination intensity value detected by each optical sensor, and acquiring an auxiliary speckle pattern through the started auxiliary camera.
Specifically, when the auxiliary cameras in one direction are selected to be turned on according to the illumination intensity values detected by the optical sensors, either all auxiliary cameras in that direction or only one of them can be turned on.
And step 105, acquiring a depth map according to the main speckle pattern and the auxiliary speckle pattern.
Specifically, when the depth map is acquired from the main speckle pattern and the auxiliary speckle pattern, an appropriate processing mode can be selected according to the number of main speckle patterns and the number of started auxiliary cameras.
According to the depth map acquisition method provided by the embodiment of the invention, the quality of the main speckle pattern acquired by the main camera of the structured light camera is evaluated, and whether the auxiliary camera needs to be started is determined from the quality evaluation result, which saves the resources of the structured light camera without affecting the quality of the depth map. When the quality of the main speckle pattern meets the standard, the depth map is obtained directly from the main speckle pattern, which guarantees the quality of the depth map while improving processing efficiency. When the quality of the main speckle pattern does not meet the standard, the auxiliary camera is started to obtain an auxiliary speckle pattern, and the depth map is obtained from both the main and auxiliary speckle patterns. The method as a whole balances the quality of the depth map, processing efficiency, and resource consumption.
An embodiment of the invention relates to a depth map acquisition method applied to a structured light camera, where the structured light camera comprises a main camera and N auxiliary cameras arranged in different directions of the main camera. This embodiment mainly details how the auxiliary camera is selected and turned on according to the illumination intensity values when the quality of the main speckle pattern does not meet the standard. As shown in fig. 2, the method includes:
step 201, acquiring a main speckle pattern through a main camera, performing quality evaluation on the main speckle pattern, and acquiring a quality evaluation result of the main speckle pattern.
Specifically, when the quality of the main speckle pattern is evaluated, its sharpness and brightness are primarily evaluated to obtain a sharpness evaluation value and a brightness evaluation value. Information such as the shape of the speckles, the number of speckle points, and the regularity of the speckle points may also be acquired and included among the quality evaluation indices.
In addition, the sharpness evaluation value may be obtained from the mean gray-level gradient, the mean gray-level second derivative, the Brenner gradient function value, and the Laplacian gradient function value of the main speckle pattern. The mean gray-level gradient reflects the amount and significance of the gray-level information contained in the speckle pattern, while the mean gray-level second derivative reflects the distribution of that information, that is, the gray-level fluctuation. A high-quality speckle pattern should have a high mean gray-level gradient or a low mean gray-level second derivative.
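Of the focus measures listed, the Brenner gradient is the simplest to sketch. A pure-Python version over a list-of-rows image might look like this; real implementations would use vectorized array operations, and this is only an illustration of the measure itself.

```python
def brenner_gradient(pattern):
    """Brenner focus measure: the sum of squared differences between
    pixels two columns apart. Sharper speckle patterns, with steeper
    gray-level transitions, yield larger values."""
    total = 0
    for row in pattern:
        for x in range(len(row) - 2):
            diff = row[x + 2] - row[x]
            total += diff * diff
    return total
```

A flat pattern scores 0, while a speckle edge such as `[[0, 0, 255]]` scores 255², so comparing the score against a threshold gives the sharpness check described above.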
In addition, the brightness evaluation value of the main speckle pattern may be computed directly from the mean of the brightness values of the speckle pattern; or the brightness value of the brightest region of the speckle pattern may be taken as the brightness evaluation value; or the speckle pattern may be divided into several regions with different weight coefficients, the central region having the largest coefficient and the edge regions the smallest. As those skilled in the art will understand, because of the lens configuration of a structured light camera, when the lens receives the laser light and forms an image, the brightness of the central area is generally higher than that of the edge area. Setting a larger weight coefficient for the central area and a smaller one for the edge area therefore makes the acquired brightness evaluation value reflect the actual brightness of the speckle pattern more reasonably.
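The region-weighted variant can be sketched as follows. The center weight, edge weight, and one-pixel border are illustrative assumptions rather than values from the patent, which only requires that the central region weigh more than the edges.

```python
def weighted_brightness(pattern, center_weight=2.0, edge_weight=1.0, border=1):
    """Weighted mean brightness of a speckle pattern (list of rows).

    Interior pixels are weighted more heavily than border pixels, since
    lens vignetting makes the center of the image brighter than the edges.
    Weights and border width are hypothetical tuning parameters.
    """
    h, w = len(pattern), len(pattern[0])
    total = weight_sum = 0.0
    for y in range(h):
        for x in range(w):
            interior = border <= y < h - border and border <= x < w - border
            wgt = center_weight if interior else edge_weight
            total += wgt * pattern[y][x]
            weight_sum += wgt
    return total / weight_sum
```

On a uniform pattern the result equals the plain mean, while brightness concentrated in the center is emphasized, as the vignetting argument above suggests.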
And step 202, when the quality evaluation result of the main speckle pattern indicates that the quality of the main speckle pattern meets the standard, acquiring a depth map according to the main speckle pattern.
Specifically, when the sharpness evaluation value of the main speckle pattern is smaller than a preset sharpness threshold, the quality of the main speckle pattern does not meet the standard; when it is greater than or equal to the threshold, the quality meets the standard.
And step 203, when the quality evaluation result of the main speckle pattern indicates that its quality does not meet the standard, detecting the illumination intensity values of the environment through the optical sensors preset in the N different directions of the main camera.
Specifically, when the sharpness evaluation value of the main speckle pattern is smaller than the preset sharpness threshold, that is, the quality of the main speckle pattern does not meet the standard: if the brightness evaluation value falls within the preset underexposure brightness range, the cause of the substandard quality is underexposure; if it falls within the preset overexposure brightness range, the cause is overexposure.
And step 204, when the quality evaluation result of the main speckle pattern indicates that the cause of the substandard quality is underexposure, starting the auxiliary camera in the direction with the maximum illumination intensity value, and acquiring the auxiliary speckle pattern through the started auxiliary camera.
Specifically, this embodiment determines whether the main speckle pattern is overexposed or underexposed from its brightness evaluation value and turns on the corresponding auxiliary camera accordingly. When the main speckle pattern is underexposed, the auxiliary camera in the direction with the maximum illumination intensity value is started. Which cameras in that direction to turn on can be determined by the degree of underexposure. For example, if the brightness evaluation value of the main speckle pattern is 4.2 and the underexposure brightness range is 0-4.5, the degree of underexposure is small, so one auxiliary camera in the direction with the maximum illumination intensity value can be started and one auxiliary speckle pattern obtained through it. If the brightness evaluation value is 2.1 with the same underexposure range of 0-4.5, the degree of underexposure is large, so several auxiliary cameras in that direction can be started and several auxiliary speckle patterns obtained. Overexposure is handled similarly and is not described again here. That is, the degree of underexposure or overexposure can be determined from the magnitude of the brightness evaluation value.
Step 205: when the quality evaluation result of the main speckle pattern indicates that the quality does not meet the standard because of overexposure, turning on the auxiliary camera in the direction with the minimum illumination intensity value, and acquiring an auxiliary speckle pattern through the turned-on auxiliary camera.
Step 206: acquiring a depth map from the main speckle pattern and the auxiliary speckle pattern acquired by the turned-on auxiliary camera.
In the depth map acquisition method provided by this embodiment of the invention, the quality of the main speckle pattern acquired by the main camera of the structured light camera is evaluated, and whether an auxiliary camera needs to be turned on is decided from the quality evaluation result; this saves resource consumption of the structured light camera without affecting depth map quality. When the quality of the main speckle pattern meets the standard, the depth map is obtained directly from the main speckle pattern, which guarantees depth map quality while improving processing efficiency. When the quality does not meet the standard, the underlying cause (underexposure or overexposure) is further analyzed from the quality evaluation result, and the corresponding auxiliary camera is turned on to acquire the corresponding auxiliary speckle pattern. The method as a whole balances depth map quality, processing efficiency, and resource consumption.
An embodiment of the invention relates to a depth map acquisition method applied to a structured light camera, where the structured light camera includes a main camera and N auxiliary cameras arranged in different directions of the main camera. As shown in fig. 3, the depth map acquisition method of this embodiment includes:
Step 301: acquiring a main speckle pattern through the main camera, performing quality evaluation on the main speckle pattern, and obtaining a quality evaluation result of the main speckle pattern.
Specifically, the implementation details of step 301 in this embodiment are substantially the same as those of steps 101 and 201 and are not repeated here.
Step 302: when the quality evaluation result indicates that the quality of the main speckle pattern meets the standard and the number of main cameras is one, acquiring one main speckle pattern through the main camera and obtaining the depth map from the main speckle pattern and a preset reference speckle pattern.
Specifically, the main speckle pattern is matched against the reference speckle pattern to determine the corresponding (homonymous) points in the two images; the disparity value is obtained from the pixel-coordinate difference of each pair of corresponding points, and the depth map is then computed from the disparity values. The matching may use a local matching algorithm, a global matching algorithm, or a semi-global matching algorithm.
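As an illustration of the local matching option, the sketch below searches along the horizontal epipolar line for the shift that maximizes zero-mean normalized cross-correlation between a window in the main speckle pattern and the reference speckle pattern. The window size and disparity search range are illustrative assumptions, and a production implementation would add sub-pixel refinement and consistency checks.

```python
import numpy as np

def local_match_disparity(main, ref, win=7, max_disp=16):
    """Block matching: for each pixel of `main`, find the horizontal shift d
    such that the window around (y, x) in `main` best correlates (zero-mean
    NCC) with the window around (y, x - d) in `ref`."""
    h, w = main.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = main[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
            patch -= patch.mean()
            best_score, best_d = -np.inf, 0
            # only shifts that keep the candidate window inside the image
            for d in range(0, min(max_disp, x - r) + 1):
                cand = ref[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.float64)
                cand -= cand.mean()
                denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum())
                if denom == 0:
                    continue  # textureless window, no reliable score
                score = (patch * cand).sum() / denom
                if score > best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp
```

On a synthetic reference pattern and a copy of it shifted right by 3 pixels, the interior of the recovered disparity map comes out at 3, as expected.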
Step 303: when the quality evaluation result indicates that the quality of the main speckle pattern meets the standard and the number of main cameras is two, acquiring two main speckle patterns through the two main cameras and obtaining the depth map from the two main speckle patterns.
Specifically, the two main speckle patterns are matched to determine the corresponding points in the two images, the disparity value is obtained from the pixel-coordinate difference of the corresponding points, and the depth map is computed from the disparity values.
Step 304: when the quality evaluation result indicates that the quality of the main speckle pattern does not meet the standard, detecting the illumination intensity values in the environment through N optical sensors preset in different directions of the main camera.
Specifically, the implementation details of step 304 in this embodiment are substantially the same as those of steps 103 and 203 and are not repeated here.
Step 305: selectively turning on the auxiliary camera in one direction according to the illumination intensity value detected by each optical sensor, and acquiring an auxiliary speckle pattern through the turned-on auxiliary camera.
Specifically, the implementation details of step 305 in this embodiment are substantially the same as those of steps 204 and 205 and are not repeated here.
Step 306: acquiring the depth map from the main speckle pattern and the auxiliary speckle pattern.
Specifically, the main speckle pattern and the auxiliary speckle pattern are matched to determine the corresponding points in the two images, the disparity value is obtained from the pixel-coordinate difference of the corresponding points, and the depth map is computed from the disparity values with the following formula:
depth = (f × S) / Δx
where f is the focal length of the cameras, S is the baseline distance between the two cameras, and Δx is the pixel-coordinate difference (disparity) of the corresponding points.
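In code, the triangulation formula above is direct; the focal length and baseline in the example values below are illustrative assumptions, not parameters of any particular device.

```python
def disparity_to_depth(f_pixels, baseline, delta_x):
    """depth = f * S / delta_x, with f (focal length) in pixels, S (baseline)
    in meters, and delta_x (disparity) in pixels, giving depth in meters.
    Zero disparity corresponds to a point at infinity."""
    if delta_x == 0:
        return float("inf")
    return f_pixels * baseline / delta_x
```

For example, with an assumed f = 500 px and S = 0.05 m, a 10-pixel disparity gives a depth of 2.5 m; larger disparities give smaller depths, as expected for closer points.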
In the depth map acquisition method provided by this embodiment of the invention, the quality of the main speckle pattern acquired by the main camera of the structured light camera is evaluated, and whether an auxiliary camera needs to be turned on is decided from the quality evaluation result; this saves resource consumption of the structured light camera without affecting depth map quality. When the quality of the main speckle pattern does not meet the standard, an auxiliary camera is turned on to acquire an auxiliary speckle pattern, and the depth map is obtained from the main and auxiliary speckle patterns. The method as a whole balances depth map quality, processing efficiency, and resource consumption. In addition, when the quality of the main speckle pattern meets the standard and no auxiliary camera is needed, a corresponding way of obtaining the depth map is provided according to the number of main speckle patterns, balancing depth map quality against processing efficiency.
An embodiment of the invention relates to a depth map acquisition method applied to a structured light camera, where the structured light camera includes a main camera and N auxiliary cameras arranged in different directions of the main camera. As shown in fig. 4, the depth map acquisition method of this embodiment includes:
Step 401: acquiring a main speckle pattern through the main camera, performing quality evaluation on the main speckle pattern, and obtaining a quality evaluation result of the main speckle pattern.
Step 402: when the quality evaluation result indicates that the quality of the main speckle pattern meets the standard, obtaining the depth map from the main speckle pattern.
Step 403: when the quality evaluation result indicates that the quality of the main speckle pattern does not meet the standard, detecting the illumination intensity values in the environment through N optical sensors preset in different directions of the main camera.
Step 404: selectively turning on the auxiliary camera in one direction according to the illumination intensity value detected by each optical sensor, and acquiring an auxiliary speckle pattern through the turned-on auxiliary camera.
Step 405: when the number of main speckle patterns and the number of turned-on auxiliary cameras are both 1, performing matching calculation on the main speckle pattern and the auxiliary speckle pattern acquired by the auxiliary camera to obtain the depth map; or, fusing the main speckle pattern with the auxiliary speckle pattern acquired by the auxiliary camera to obtain a fused speckle pattern, and performing matching calculation on the fused speckle pattern and a preset reference speckle pattern to obtain the depth map.
Specifically, when the main speckle pattern is underexposed, the auxiliary speckle pattern acquired in the direction with the maximum illumination intensity value is fused with the main speckle pattern to obtain a fused speckle pattern. Compared with the main speckle pattern alone, the fused speckle pattern markedly relieves the underexposure, so that its brightness reaches the normal brightness standard and the quality of the depth map is improved. The main and auxiliary speckle patterns may be fused by any of a logic filter method, an image algebra method, a high-pass filter method, a pyramid decomposition method, a wavelet transform method, and the like. Of course, before fusion the two images must be accurately registered; during registration, sub-pixel interpolation may be performed to obtain more accurate sub-pixel coordinates and thus improve accuracy and precision.
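The text does not detail any of the listed fusion methods. As a minimal sketch of the underlying idea — letting a better-exposed auxiliary pattern brighten an underexposed main pattern — the following per-pixel well-exposedness weighting (a heavily simplified relative of pyramid-based exposure fusion) assumes the images are already registered; the Gaussian width `sigma` is an assumed parameter.

```python
import numpy as np

def fuse_exposures(imgs, sigma=0.2):
    """Blend registered 8-bit images, weighting each pixel by how close its
    normalized intensity is to mid-gray (0.5), so well-exposed pixels dominate."""
    stack = np.stack([im.astype(np.float64) / 255.0 for im in imgs])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-12
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across images
    fused = (weights * stack).sum(axis=0)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```

Fusing a very dark pattern with a well-exposed one pulls the result toward the well-exposed values, which is the brightness improvement the text describes.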
Step 406: when the number of main speckle patterns is 1 and the number of turned-on auxiliary cameras is two or more, performing image fusion on the plurality of auxiliary speckle patterns acquired by the auxiliary cameras to obtain a fused speckle pattern, and performing matching calculation on the fused speckle pattern and the main speckle pattern to obtain the depth map.
Specifically, when a plurality of auxiliary speckle patterns are acquired, part of them may be selected arbitrarily for fusion, or the quality of the auxiliary speckle patterns may be evaluated and the two of the best quality selected for fusion.
Step 407: when the number of main speckle patterns is two or more and the number of turned-on auxiliary cameras is 1, performing image fusion on the plurality of main speckle patterns to obtain a fused speckle pattern, and performing matching calculation on the fused speckle pattern and the auxiliary speckle pattern to obtain the depth map.
Step 408: when the number of main speckle patterns and the number of turned-on auxiliary cameras are both two or more, performing image fusion on the plurality of main speckle patterns to obtain a first fused speckle pattern, performing image fusion on the plurality of auxiliary speckle patterns acquired by the auxiliary cameras to obtain a second fused speckle pattern, and performing matching calculation on the first and second fused speckle patterns to obtain the depth map.
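Steps 405-408 amount to a dispatch on the two counts. The sketch below only returns which inputs get fused and which pair is then matched; the plan names are ours, not the patent's, and for the 1-and-1 case (step 405, which offers two alternatives) it returns the direct-matching variant.

```python
def depth_pipeline_plan(n_main, n_aux):
    """Map (number of main speckle patterns, number of turned-on auxiliary
    cameras) to the pair of images that get matched in steps 405-408."""
    if n_main == 1 and n_aux == 1:
        return ("main", "aux")              # step 405 (direct matching variant)
    if n_main == 1 and n_aux >= 2:
        return ("main", "fused_aux")        # step 406: fuse auxiliary patterns
    if n_main >= 2 and n_aux == 1:
        return ("fused_main", "aux")        # step 407: fuse main patterns
    return ("fused_main", "fused_aux")      # step 408: fuse both sides
```

In a real system, each returned plan would then drive the fusion and matching routines described above.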
Specifically, this embodiment provides a corresponding depth map acquisition method for each combination of the number of main speckle patterns and the number of turned-on auxiliary cameras; which method to use can be selected according to user requirements, the parameters of the structured light camera, and the condition (such as battery level, processor performance, memory performance, and the like) of the electronic device to which the structured light camera belongs.
In the depth map acquisition method provided by this embodiment of the invention, the quality of the main speckle pattern acquired by the main camera of the structured light camera is evaluated, and whether an auxiliary camera needs to be turned on is decided from the quality evaluation result; this saves resource consumption of the structured light camera without affecting depth map quality. When the quality of the main speckle pattern meets the standard, the depth map is obtained directly from the main speckle pattern, which guarantees depth map quality while improving processing efficiency. When the quality does not meet the standard and auxiliary cameras must be turned on, one or even several depth map acquisition modes are provided according to the number of main speckle patterns and the number of auxiliary cameras; a suitable mode can be chosen according to user requirements and the condition of the structured light camera, effectively improving depth map quality while saving resources and improving efficiency. The method as a whole balances depth map quality, processing efficiency, and resource consumption.
The steps of the above methods are divided as they are for clarity of description; in implementation they may be merged into one step, or a step may be split into several steps, and all such variants fall within the protection scope of this patent as long as the same logical relationship is preserved. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm or flow also falls within the protection scope of this patent.
An embodiment of the present invention relates to a structured light camera, as shown in fig. 5, including:
a main camera 501 for acquiring a main speckle pattern;
the auxiliary cameras 502 in the N different directions of the main camera are used for acquiring auxiliary speckle patterns;
at least one processor 503 and a memory 504 communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the depth map acquisition method of the above embodiments.
It should be noted that fig. 5 shows, by way of example, only one auxiliary camera in one direction; this does not mean that auxiliary cameras in other directions are absent in other embodiments. This embodiment can be implemented in cooperation with the depth map acquisition method embodiments described above, and the technical details mentioned there remain valid here and are not repeated in order to reduce repetition. Likewise, the technical details mentioned in this embodiment can also be applied to the above embodiments.
In addition, in order to highlight the innovative part of the invention, units or modules not closely related to solving the technical problem proposed by the invention are not introduced in this embodiment; this does not mean, however, that no other units or modules exist in this embodiment.
An embodiment of the present invention relates to an electronic apparatus, as shown in fig. 6, including:
the structured light camera 601 comprises a main camera and N auxiliary cameras in different directions of the main camera, wherein the main camera is used for acquiring a main speckle pattern, and the auxiliary cameras are used for acquiring auxiliary speckle patterns;
at least one processor 602 and a memory 603 communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the depth map acquisition method of the above embodiments.
The electronic device includes one or more processors and a memory, such as the processor 602 in fig. 6. The processor 602 and the memory 603 may be connected by a bus or in another manner; fig. 6 takes a bus connection as an example. The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the algorithms corresponding to the processing policies in the policy space in the embodiment of the present application. The processor executes the various functional applications and data processing of the device, i.e. implements the depth map acquisition method, by running the non-volatile software programs, instructions, and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the depth map acquisition method of any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects of executing the method; for technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
Embodiments of the present invention relate to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements an embodiment of the above-described depth map acquisition method.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It will be understood by those of ordinary skill in the art that the above embodiments are specific examples of carrying out the invention, and that various changes in form and detail may be made in practice without departing from the spirit and scope of the invention.

Claims (10)

1. A depth map acquisition method is applied to a structured light camera, wherein the structured light camera comprises a main camera and N auxiliary cameras arranged in different directions of the main camera, and the method comprises the following steps:
acquiring a main speckle pattern through the main camera, performing quality evaluation on the main speckle pattern, and acquiring a quality evaluation result of the main speckle pattern;
when the quality evaluation result of the main speckle pattern indicates that the quality of the main speckle pattern meets the standard, acquiring a depth map according to the main speckle pattern;
when the quality evaluation result of the main speckle pattern indicates that the quality of the main speckle pattern does not meet the standard, detecting illumination intensity values in the environment through N optical sensors in different directions preset on the main camera;
according to the illumination intensity value detected by each optical sensor, selectively starting the auxiliary camera in one direction, and acquiring an auxiliary speckle pattern through the started auxiliary camera;
and acquiring a depth map according to the main speckle pattern and the auxiliary speckle pattern.
2. The method according to claim 1, wherein the selectively turning on the auxiliary camera in one direction according to the illumination intensity value detected by each of the optical sensors comprises:
when the reason that the quality indicated by the quality evaluation result of the main speckle pattern does not reach the standard is underexposure, starting an auxiliary camera in the direction with the maximum illumination intensity value;
and when the reason that the quality indicated by the quality evaluation result of the main speckle pattern does not meet the standard is overexposure, starting the auxiliary camera in the direction with the minimum illumination intensity value.
3. The depth map acquisition method according to claim 2, wherein the quality evaluation result includes a sharpness evaluation value and a brightness evaluation value;
when the definition evaluation value of the main speckle pattern is smaller than a preset definition threshold value, indicating that the quality of the main speckle pattern does not reach the standard;
under the condition that the quality of the main speckle pattern does not reach the standard, when the brightness evaluation value of the main speckle pattern belongs to a preset under-exposure brightness value range, indicating that the reason that the quality of the main speckle pattern does not reach the standard is under-exposure; and when the brightness evaluation value of the main speckle pattern belongs to a preset overexposure brightness value range, indicating that the reason why the quality of the main speckle pattern does not meet the standard is overexposure.
4. The method according to claim 1, wherein the acquiring the depth map according to the main speckle pattern comprises:
if the number of the main cameras is one, acquiring a main speckle pattern through the main cameras, and acquiring a depth map according to the main speckle pattern and a preset reference speckle pattern;
if the number of the main cameras is two, the two main speckle patterns are obtained through the two main cameras, and a depth map is obtained according to the two main speckle patterns.
5. The method according to any one of claims 1 to 3, wherein the acquiring the depth map according to the main speckle pattern and the auxiliary speckle pattern comprises:
when the number of the main speckle patterns and the number of the started auxiliary cameras are both 1, performing matching calculation on the main speckle patterns and the auxiliary speckle patterns acquired by the auxiliary cameras to acquire a depth map; or, carrying out image fusion on the main speckle pattern and the auxiliary speckle pattern obtained by the auxiliary camera to obtain a fusion speckle pattern, and carrying out matching calculation on the fusion speckle pattern and a preset reference speckle pattern to obtain a depth map;
when the number of the main speckle patterns is 1 and the number of the opened auxiliary cameras is 2 or more than 2, carrying out image fusion on a plurality of auxiliary speckle patterns obtained by the auxiliary cameras to obtain a fused speckle pattern, and carrying out matching calculation on the fused speckle pattern and the main speckle pattern to obtain a depth map;
when the number of the main speckle patterns is 2 and the number of the started auxiliary cameras is 1, carrying out image fusion on the plurality of main speckle patterns to obtain a fusion speckle pattern, and carrying out matching calculation on the fusion speckle pattern and the auxiliary speckle pattern to obtain a depth map;
when the number of the main speckle patterns is 2 and the number of the opened auxiliary cameras is 2 or more than 2, carrying out image fusion on the plurality of main speckle patterns to obtain a first fusion speckle pattern, carrying out image fusion on the plurality of auxiliary speckle patterns obtained by the auxiliary cameras to obtain a second fusion speckle pattern, and carrying out matching calculation on the first fusion speckle pattern and the second fusion speckle pattern to obtain a depth map.
6. The method according to claim 1, wherein the light sensor is disposed on an electronic device to which the structured light camera belongs, or the light sensor is disposed in an environment in which the electronic device to which the structured light camera belongs.
7. The depth map acquisition method according to claim 1, wherein when the auxiliary camera is disposed in 4 different directions of the main camera, the 4 different directions are front, rear, left, and right directions of the main camera in a plane in which the main camera is located.
8. A structured light camera, comprising:
the main camera is used for acquiring a main speckle pattern;
the auxiliary cameras in the N different directions of the main camera are used for acquiring auxiliary speckle patterns;
at least one processor and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions for execution by the at least one processor to enable the at least one processor to perform the depth map acquisition method of any one of claims 1 to 7.
9. An electronic device, comprising:
the structured light camera comprises a main camera and N auxiliary cameras in different directions of the main camera, wherein the main camera is used for acquiring a main speckle pattern, and the auxiliary cameras are used for acquiring auxiliary speckle patterns;
at least one processor and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions for execution by the at least one processor to enable the at least one processor to perform the depth map acquisition method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the depth map acquisition method of any one of claims 1 to 7.
CN202111217814.7A 2021-10-19 2021-10-19 Depth map acquisition method, structured light camera, electronic device, and storage medium Active CN113965679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111217814.7A CN113965679B (en) 2021-10-19 2021-10-19 Depth map acquisition method, structured light camera, electronic device, and storage medium


Publications (2)

Publication Number Publication Date
CN113965679A true CN113965679A (en) 2022-01-21
CN113965679B CN113965679B (en) 2022-09-23

Family

ID=79464636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111217814.7A Active CN113965679B (en) 2021-10-19 2021-10-19 Depth map acquisition method, structured light camera, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113965679B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693683A (en) * 2022-06-01 2022-07-01 合肥的卢深视科技有限公司 Depth camera anomaly detection method, electronic device and storage medium
CN114783041A (en) * 2022-06-23 2022-07-22 合肥的卢深视科技有限公司 Target object recognition method, electronic device, and computer-readable storage medium
CN116883249A (en) * 2023-09-07 2023-10-13 南京诺源医疗器械有限公司 Super-resolution endoscope imaging device and method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845449A (en) * 2017-02-22 2017-06-13 浙江维尔科技有限公司 A kind of image processing apparatus, method and face identification system
CN107682607A (en) * 2017-10-27 2018-02-09 广东欧珀移动通信有限公司 Image acquiring method, device, mobile terminal and storage medium
CN109889809A (en) * 2019-04-12 2019-06-14 深圳市光微科技有限公司 Depth camera mould group, depth camera, depth picture capturing method and depth camera mould group forming method
CN110505402A (en) * 2019-08-19 2019-11-26 Oppo广东移动通信有限公司 Control method, depth camera and electronic device
US20200082520A1 (en) * 2018-03-12 2020-03-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Projector, detection method thereof, and electronic device
CN111161205A (en) * 2018-10-19 2020-05-15 阿里巴巴集团控股有限公司 Image processing and face image recognition method, device and equipment
US20200267305A1 (en) * 2016-02-12 2020-08-20 Sony Corporation Imaging apparatus, imaging method, and imaging system
CN112118438A (en) * 2020-06-30 2020-12-22 中兴通讯股份有限公司 Camera system, mobile terminal and three-dimensional image acquisition method
CN113240630A (en) * 2021-04-16 2021-08-10 深圳市安思疆科技有限公司 Speckle image quality evaluation method and device, terminal equipment and readable storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200267305A1 (en) * 2016-02-12 2020-08-20 Sony Corporation Imaging apparatus, imaging method, and imaging system
CN106845449A (en) * 2017-02-22 2017-06-13 浙江维尔科技有限公司 A kind of image processing apparatus, method and face identification system
CN107682607A (en) * 2017-10-27 2018-02-09 广东欧珀移动通信有限公司 Image acquiring method, device, mobile terminal and storage medium
US20200082520A1 (en) * 2018-03-12 2020-03-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Projector, detection method thereof, and electronic device
CN111161205A (en) * 2018-10-19 2020-05-15 阿里巴巴集团控股有限公司 Image processing and face image recognition method, device and equipment
CN109889809A (en) * 2019-04-12 2019-06-14 深圳市光微科技有限公司 Depth camera mould group, depth camera, depth picture capturing method and depth camera mould group forming method
CN110505402A (en) * 2019-08-19 2019-11-26 Oppo广东移动通信有限公司 Control method, depth camera and electronic device
CN112118438A (en) * 2020-06-30 2020-12-22 中兴通讯股份有限公司 Camera system, mobile terminal and three-dimensional image acquisition method
CN113240630A (en) * 2021-04-16 2021-08-10 深圳市安思疆科技有限公司 Speckle image quality evaluation method and device, terminal equipment and readable storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693683A (en) * 2022-06-01 2022-07-01 合肥的卢深视科技有限公司 Depth camera anomaly detection method, electronic device and storage medium
CN114783041A (en) * 2022-06-23 2022-07-22 合肥的卢深视科技有限公司 Target object recognition method, electronic device, and computer-readable storage medium
CN116883249A (en) * 2023-09-07 2023-10-13 南京诺源医疗器械有限公司 Super-resolution endoscope imaging device and method
CN116883249B (en) * 2023-09-07 2023-11-14 南京诺源医疗器械有限公司 Super-resolution endoscope imaging device and method

Also Published As

Publication number Publication date
CN113965679B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN113965679B (en) Depth map acquisition method, structured light camera, electronic device, and storage medium
CN107977940B (en) Background blurring processing method, device and equipment
KR102278776B1 (en) Image processing method, apparatus, and apparatus
KR102306272B1 (en) Dual camera-based imaging method, mobile terminal and storage medium
US10621729B2 (en) Adaptive focus sweep techniques for foreground/background separation
CN107370958A (en) Image virtualization processing method, device and camera terminal
WO2019105261A1 (en) Background blurring method and apparatus, and device
CN108961383B (en) Three-dimensional reconstruction method and device
CN108024057B (en) Background blurring processing method, device and equipment
WO2018093785A1 (en) Fast fourier color constancy
KR20170005009A (en) Generation and use of a 3d radon image
TW201419853A (en) Image processor and image dead pixel detection method thereof
CN103390290B (en) Messaging device and information processing method
CN207766424U (en) Photographing apparatus and imaging device
CN109118463B (en) SAR image and optical image fusion method based on HSL and image entropy
US8929685B2 (en) Device having image reconstructing function, method, and recording medium
CN108053438B (en) Depth of field acquisition method, device and equipment
CN114697623B (en) Projection plane selection and projection image correction method, device, projector and medium
CN112361990B (en) Laser pattern extraction method and device, laser measurement equipment and system
CN113129241A (en) Image processing method and device, computer readable medium and electronic equipment
CN114119436A (en) Infrared image and visible light image fusion method and device, electronic equipment and medium
CN112188175B (en) Shooting device and image processing method
CN110096995A (en) The multispectral more mesh camera Antiforge recognizing methods of one kind and device
JP6624785B2 (en) Image processing method, image processing device, imaging device, program, and storage medium
CN109741384B (en) Multi-distance detection device and method for depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220507

Address after: 230091 room 611-217, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei, Anhui Province

Applicant after: Hefei lushenshi Technology Co.,Ltd.

Address before: 100083 room 3032, North B, bungalow, building 2, A5 Xueyuan Road, Haidian District, Beijing

Applicant before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD.

Applicant before: Hefei lushenshi Technology Co.,Ltd.

GR01 Patent grant