CN116704001A - Multi-frame depth reconstruction method, system, equipment and storage medium


Info

Publication number: CN116704001A
Application number: CN202210173591.7A
Authority: CN (China)
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 李志彬, 朱力, 吕方璐, 汪博
Assignee (current and original): Shenzhen Guangjian Technology Co Ltd
Prior art keywords: image, depth, frame, frame frequency, value
Application filed by Shenzhen Guangjian Technology Co Ltd

Classifications

    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10048 Infrared image
    • G06T2207/10052 Images from lightfield camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a multi-frame depth reconstruction method, system, device and storage medium, comprising the following steps: acquiring a preset frame frequency value, and continuously and sequentially acquiring a first depth image, a background image and a second depth image of the same target according to the frame frequency value; generating a first effective image by subtracting the gray values of corresponding pixels of the background image from the first depth image; generating a second effective image by subtracting the gray values of corresponding pixels of the background image from the second depth image; and performing multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image. The application controls the image acquisition sequence and frequency and processes the images accordingly, thereby improving the multi-frame depth reconstruction result of the measured target under strong background light, shortening the acquisition time interval between two adjacent frames of images, and increasing the difficulty of attacking the depth camera.

Description

Multi-frame depth reconstruction method, system, equipment and storage medium
Technical Field
The present application relates to three-dimensional reconstruction of structured light, and in particular, to a method, a system, an apparatus, and a storage medium for multi-frame depth reconstruction.
Background
Mobile payment has become the mainstream payment method in China and plays an increasingly important role in more and more fields. As the core device of a face-scanning payment terminal, the face recognition camera module is critical, and the mature face recognition camera modules currently available adopt a structured light scheme.
The structured light scheme is based on the principle of optical triangulation. An optical projector casts structured light of a certain pattern onto the surface of the object, forming a three-dimensional image of light stripes modulated by the surface shape of the measured object. This image is detected by a camera at another position, yielding a two-dimensional distorted image of the light stripes. The degree of distortion depends on the relative position between the optical projector and the camera and on the object's surface profile (height). Intuitively, the displacement (or offset) along a light stripe is proportional to the surface height, kinks indicate changes of plane, and discontinuities indicate physical gaps in the surface. When the relative position between the optical projector and the camera is fixed, the three-dimensional shape of the object surface can be reproduced from the coordinates of the distorted two-dimensional light-stripe image.
The depth camera module widens the dimension of front-end perception and addresses two problems encountered by 2D face recognition: vulnerability to spoofing attacks with fake bodies, and reduced recognition accuracy under extreme conditions. Its effectiveness has been accepted by the market, demand is strong, and 3D face recognition can be applied to scenarios such as door locks, access control, and payment. For requirements such as face recognition, both a depth image and a gray-level image of the target are needed; the gray-level image may be captured by the same camera as the depth image, by a different camera, or both. In multi-frame depth reconstruction applications, the acquired texture image contains the texture pattern projected by the projector, which is the effective signal, and background light, which is noise interference. In some cases, for example under strong illumination, the background light is strong, the interference is severe, and the signal-to-noise ratio is low.
Disclosure of Invention
To this end, the present application controls the image acquisition sequence and frequency and processes the images accordingly. This improves the multi-frame depth reconstruction result of the measured target under strong background light, shortens the acquisition time interval between two adjacent frames of images, and increases the difficulty of attacking the depth camera, so that the depth camera is more secure and its data are clearer.
In a first aspect, the present application provides a multi-frame depth reconstruction method, which is characterized by comprising the following steps:
step S1: acquiring a preset frame frequency value, and continuously and sequentially acquiring a first depth image, a background image and a second depth image of the same target according to the frame frequency value;
step S2: generating a first effective image by subtracting the gray values of corresponding pixels of the background image from the first depth image; generating a second effective image by subtracting the gray values of corresponding pixels of the background image from the second depth image;
step S3: and performing multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image.
Optionally, in the above multi-frame depth reconstruction method, the step S1 includes the following steps:
step S101: acquiring a preset frame frequency threshold value and an original frame frequency value, wherein the original frame frequency value is a frame frequency value of a depth camera for acquiring a first depth image and a second depth image;
step S102: determining a multiple value between the original frame frequency value and the frame frequency threshold, when the multiple value is smaller than or equal to a preset multiple threshold, determining that the preset frame frequency value is the product of the multiple threshold and the frame frequency threshold, and when the multiple value is larger than the preset multiple threshold, determining that the preset frame frequency value is the original frame frequency value;
step S103: and continuously and sequentially acquiring the first depth image, the background image and the second depth image of the same target in a frame frequency threshold value sampling period according to the frame frequency value.
Optionally, in the above multi-frame depth reconstruction method, the step S103 includes the following steps:
step S1031: projecting structural light and floodlight to a target by a light projector of a depth camera;
step S1032: when the multiple value is smaller than or equal to a preset multiple threshold, continuously and sequentially acquiring infrared structured light images, background images and infrared images of the same target, or continuously and sequentially acquiring infrared images, background images and infrared structured light images of the same target, within a plurality of sampling periods of the frame frequency threshold according to the frame frequency value;
step S1033: when the multiple value is larger than the preset multiple threshold, sequentially acquiring, in each of a plurality of sampling periods determined according to the frame frequency threshold and according to the frame frequency value, any three continuous frames comprising an infrared structured light image, a background image and an infrared image of the same target, or an infrared image, a background image and an infrared structured light image of the same target.
Optionally, in the above multi-frame depth reconstruction method, the step S2 includes the following steps:
step S201: determining a pixel value of each pixel in the first depth image, the second depth image and the background image;
step S202: performing pixel level alignment on the first depth image, the second depth image and the background image;
step S203: subtracting gray values of corresponding pixels of the background image from each pixel in the first depth image to generate a first effective image; and subtracting the gray value of the corresponding pixel of the background image from each pixel in the second depth image to generate a second effective image.
Optionally, the step S3 includes the following steps:
step S301: calculating the infrared structured light image and known calibration information to obtain a parallax image of the infrared structured light image;
step S302: determining the distance between the optical center of the depth camera and each parallax value in the parallax map according to the triangulation principle, and generating the depth information of each pixel;
step S303: and carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the depth information of each pixel to generate a depth image.
Optionally, the step S3 includes the following steps:
step S31: synthesizing the first effective image and the second effective image to obtain a third effective image;
step S32: and carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the third effective image to generate a depth image.
Optionally, in the above multi-frame depth reconstruction method, the preset frame frequency threshold is 15 FPS and the preset multiple threshold is 3.
In a second aspect, the present application provides a multi-frame depth reconstruction system for implementing the above multi-frame depth reconstruction method, the system comprising:
the image acquisition module is used for acquiring a preset frame frequency value, and continuously and sequentially acquiring a first depth image, a background image and a second depth image of the same target according to the frame frequency value;
the image enhancement module is used for generating a first effective image according to the subtraction of gray values of corresponding pixels of the first depth image and the background image; generating a second effective image according to the subtraction of gray values of corresponding pixels of the second depth image and the background image;
and the multi-frame depth reconstruction module is used for carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image.
In a third aspect, the present application provides a multi-frame depth reconstruction apparatus, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of a multi-frame depth reconstruction method as described in any one of the preceding claims via execution of the executable instructions.
In a fourth aspect, the present application provides a computer readable storage medium storing a program, wherein the program when executed implements the steps of a multi-frame depth reconstruction method as described in any one of the preceding claims.
Compared with the prior art, the application has the following beneficial effects:
according to the application, the first depth image, the background image and the second depth image of the same target are continuously and sequentially acquired according to the preset frame frequency value, realizing continuous acquisition of three frames of images, shortening the acquisition time interval between two adjacent frames, and increasing the difficulty of attacking the depth camera, so that the depth camera is more secure;
according to the application, the first and second effective images are generated by subtracting the gray values of corresponding pixels of the background image from the first and second depth images respectively, and multi-frame depth reconstruction or three-dimensional reconstruction is then performed on the first and second effective images to generate the depth image, which reduces the interference of background light and makes the depth camera suitable for environments with stronger illumination intensity.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art. Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a flowchart illustrating steps of a multi-frame depth reconstruction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of transmitting and receiving according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps for determining frame rate values according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating steps for capturing images according to frame rate values according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating steps for generating a structured light image of a target in an embodiment of the present application;
FIG. 6 is a flowchart illustrating steps for generating a depth image by multi-frame depth reconstruction in accordance with an embodiment of the present application;
FIG. 7 is a flow chart of the combination of a first valid image and a second valid image in an embodiment of the application;
FIG. 8 is a schematic diagram of an acquired image of a depth camera according to an embodiment of the present application;
FIG. 9 is a block diagram of a multi-frame depth reconstruction system according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a multi-frame depth reconstruction device according to an embodiment of the present application; and
fig. 11 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The present application will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit the application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the application is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
The application provides a multi-frame depth reconstruction method, which aims to solve the problems in the prior art.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating steps of a multi-frame depth reconstruction method according to an embodiment of the present application. As shown in fig. 1, the multi-frame depth reconstruction method provided by the application comprises the following steps:
step S1: and acquiring a preset frame frequency value, and continuously and sequentially acquiring a first depth image, a background image and a second depth image of the same target according to the frame frequency value.
In this step, the first depth image and the second depth image may be depth images of the same type or of different types; for example, both may be infrared images, or the first may be an infrared image and the second a structured light image. The first and second depth images are obtained by illuminating the target object with active laser light. The background image is captured without active laser illumination and is typically an infrared image obtained by an infrared sensor.
Fig. 2 is a schematic diagram of transmitting and receiving according to an embodiment of the present application. As can be seen from Fig. 2, f1 and f4 are first depth images, f2 is a background image, and f3 and f5 are second depth images; f1, f2, f3 form one group of signals and f4, f5, f6 form another. In this way the signal acquisition frequency can be made higher, and the frame rate is increased by sharing the background signal. Note that f1 and f3 each represent the emission of a group of signals, which is not limited to a single signal but may be a combination of two or more signals; that is, f1 and f3 may each be formed by combining two or more signals, giving this embodiment more application forms and a wider range of use.
Step S2: generating a first effective image by subtracting the gray values of corresponding pixels of the background image from the first depth image; and generating a second effective image by subtracting the gray values of corresponding pixels of the background image from the second depth image.
In this step, the first depth image, the second depth image and the background image all have the same image size; that is, every pixel in the first and second depth images corresponds uniquely to a pixel in the background image. Because the first and second depth images share one background image, the data acquisition frequency can be increased and image data obtained more quickly.
Step S3: and performing multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image.
In this step, the first effective image and the second effective image are depth data obtained at different moments, and may be mutually matched to perform multi-frame depth reconstruction or three-dimensional reconstruction.
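The three steps above can be sketched end to end as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `capture_frame` and `reconstruct` are hypothetical placeholders for the camera driver and the reconstruction back-end, which the method does not prescribe.

```python
import numpy as np

def multi_frame_depth(capture_frame, reconstruct, fps):
    # Step S1: capture three consecutive frames at the preset frame rate.
    first_depth = capture_frame("structured_light", fps)
    background = capture_frame("none", fps)   # no active illumination
    second_depth = capture_frame("flood", fps)
    # Step S2: subtract the shared background frame from both depth frames
    # to suppress ambient light (clamped at zero to keep a valid gray image).
    first_valid = np.clip(first_depth.astype(np.int16) - background, 0, 255).astype(np.uint8)
    second_valid = np.clip(second_depth.astype(np.int16) - background, 0, 255).astype(np.uint8)
    # Step S3: hand both effective images to the reconstruction back-end.
    return reconstruct(first_valid, second_valid)
```

The point of the shared `background` frame is that one ambient-light measurement serves both effective images, so only three exposures are needed per reconstruction group.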
Fig. 3 is a flowchart illustrating steps for determining a frame rate according to an embodiment of the present application. In comparison with the step S1 in the previous embodiment, the present embodiment further includes the following steps:
step S101: and acquiring a preset frame frequency threshold value and an original frame frequency value, wherein the original frame frequency value is a frame frequency value of a depth camera for acquiring the first depth image and the second depth image.
Step S102: and determining a multiple value between the original frame frequency value and the frame frequency threshold, when the multiple value is smaller than or equal to a preset multiple threshold, determining that the preset frame frequency value is the product of the multiple threshold and the frame frequency threshold, and when the multiple value is larger than or equal to the preset multiple threshold, determining that the preset frame frequency value is the original frame frequency value.
Step S103: and continuously and sequentially acquiring the first depth image, the background image and the second depth image of the same target in a frame frequency threshold value sampling period according to the frame frequency value.
In this embodiment of the application, the preset frame frequency threshold is 15 FPS and the preset multiple threshold is 3. If the original frame frequency value is 30 FPS, the multiple value is 2; since the multiple value 2 is smaller than the multiple threshold 3, the preset frame frequency value is determined to be the multiple threshold 3 multiplied by the frame frequency threshold 15, i.e. 45 FPS. When the original frame frequency value is 60 FPS, the multiple value is 4; since the multiple value 4 is greater than the multiple threshold 3, the preset frame frequency value is determined to be 60 FPS.
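The selection rule of steps S101–S102, with the thresholds used in this embodiment (15 FPS and 3), can be written as a small function. The function name and parameter names are illustrative, not from the patent:

```python
def preset_frame_rate(original_fps, frame_rate_threshold=15.0, multiple_threshold=3.0):
    """Frame-rate selection rule of steps S101-S102."""
    multiple = original_fps / frame_rate_threshold
    if multiple <= multiple_threshold:
        # e.g. 30 FPS -> multiple 2 <= 3 -> use 3 * 15 = 45 FPS
        return multiple_threshold * frame_rate_threshold
    # e.g. 60 FPS -> multiple 4 > 3 -> keep the camera's original rate
    return original_fps

print(preset_frame_rate(30))  # 45.0
print(preset_frame_rate(60))  # 60
```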
Fig. 4 is a flowchart illustrating steps for capturing an image according to a frame rate value according to an embodiment of the present application. Compared to step S103 in the above embodiment, the present embodiment further includes the following steps:
step S1031: structured light and floodlight are projected towards a target by a light projector of a depth camera.
In this step, two different illumination types are used, namely structured light and flood light, which illuminate the target in turn.
Step S1032: and when the multiple value is smaller than or equal to a preset multiple threshold value, continuously and sequentially acquiring infrared structure light images, background images and infrared images of the same target or continuously and sequentially acquiring the infrared images, background images and infrared structure light images of the same target in a plurality of sampling periods of the frame frequency threshold value according to the frame frequency value.
Step S1033: when the multiple value is larger than a preset multiple threshold value, any three continuous frames of infrared structure light images, background images and infrared images of the same target are sequentially acquired in each sampling period in a plurality of sampling periods determined according to the frame frequency threshold value according to the frame frequency value, or the infrared images, the background images and the infrared structure light images of the same target are sequentially acquired in a continuous mode.
In this embodiment of the application, when the multiple value is 2 and is therefore smaller than or equal to the preset multiple threshold, the background image, infrared structured light image and infrared image of the same target (or the infrared image, infrared structured light image and background image) are continuously and sequentially acquired within the sampling periods of the 15 FPS frame frequency threshold according to the frame frequency value of 45 FPS.
By analogy, when the multiple value is 4 and is therefore greater than the preset multiple threshold, the background image, infrared structured light image and infrared image of the same target (or the infrared image, infrared structured light image and background image) are continuously and sequentially acquired every 4 frames within the sampling periods of the 15 FPS frame frequency threshold according to the frame frequency value of 60 FPS.
FIG. 5 is a flowchart illustrating steps for generating a structured light image of a target in accordance with an embodiment of the present application. In comparison with step S2 in the above embodiment, the present embodiment further includes the following steps:
step S201: a pixel value of each pixel in the first depth image, the second depth image, and the background image is determined.
Step S202: and carrying out pixel level alignment on the first depth image, the second depth image and the background image.
Step S203: subtracting gray values of corresponding pixels of the background image from each pixel in the first depth image to generate a first effective image; and subtracting the gray value of the corresponding pixel of the background image from each pixel in the second depth image to generate a second effective image.
Fig. 6 is a flowchart illustrating steps for generating a depth image by performing multi-frame depth reconstruction according to an embodiment of the present application. In comparison with step S3 in the above embodiment, the present embodiment further includes the following steps:
step S301: and calculating the infrared structured light image and the known calibration information to obtain a parallax image of the infrared structured light image.
Step S302: and determining the distance between the optical center of the depth camera and each parallax value in the parallax map according to the triangulation principle, and generating the depth information of each pixel.
Step S303: and carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the depth information of each pixel to generate a depth image.
Fig. 7 is a flowchart of a combination of a first effective image and a second effective image in an embodiment of the present application. Compared to step S3 in the above embodiment, the present embodiment further includes the following steps:
step S31: and synthesizing the first effective image and the second effective image to obtain a third effective image.
In this step, the first effective image and the second effective image are combined so that each group of signals has only one effective image, namely the third effective image. Compared with the first and second effective images, the third effective image has a better signal-to-noise ratio and is better suited to recognition in environments with strong ambient light.
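The patent does not fix the synthesis rule for step S31. One plausible reading, averaging the two effective images so that uncorrelated noise partially cancels while the projected pattern is reinforced, looks like this (the averaging choice is an assumption):

```python
import numpy as np

def synthesize(first_valid, second_valid):
    # Sum in a wider type so the addition cannot overflow uint8,
    # then take the integer mean to stay in the 8-bit gray range.
    total = first_valid.astype(np.uint16) + second_valid.astype(np.uint16)
    return (total // 2).astype(np.uint8)

a = np.array([[10, 250]], dtype=np.uint8)
b = np.array([[30, 200]], dtype=np.uint8)
print(synthesize(a, b))  # [[ 20 225]]
```

A straight sum (with clipping) would be an equally defensible reading; averaging is used here because it keeps the result in the same brightness range as the inputs.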
Step S32: and carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the third effective image to generate a depth image.
In this step, the third effective image is critical to the reconstructed depth image, so its quality must be considered: different module current values, exposure times and gain values are selected so that the pixel values of the resulting third effective image stay within a range that avoids overexposure.
In some embodiments, the intensity value and judgment threshold pre-stored in the original system are used to compute a judgment threshold applicable to the third effective image. In the prior art, a depth camera generally carries an intensity value and a judgment threshold that are adjusted within a certain range according to the ambient light intensity. Reusing this existing information allows effective reconstruction even in strongly noisy external environments, requires no additional calculation or testing at the terminal, improves deployment efficiency, and reduces deployment cost.
In this embodiment, the two signal frames share the same background frame, which reduces the inter-frame time difference and prevents signal failure caused by motion blur; at the same time, superimposing the three frames of signals increases the signal-to-noise ratio twofold.
Fig. 8 is a schematic diagram of an acquired image of a depth camera according to an embodiment of the present application. As shown in Fig. 8, when the depth camera provided by the application is used, a background image may first be collected by the infrared camera; structured light is then projected toward the target by the structured light projector and an infrared structured light image is collected; finally, flood light is projected toward the target by the flood light projector and an infrared image is collected. The infrared camera operates at 940 nm, and the flood light projector uses an LED light source.
Fig. 9 is a schematic block diagram of a multi-frame depth reconstruction system according to an embodiment of the present application, which is configured to implement the multi-frame depth reconstruction method and comprises:
the image acquisition module 101, configured to acquire a preset frame frequency value and continuously and sequentially acquire a first depth image, a background image, and a second depth image of the same target according to the frame frequency value;
the image enhancement module 102, configured to generate a first effective image by subtracting the gray values of corresponding pixels of the background image from those of the first depth image, and to generate a second effective image by subtracting the gray values of corresponding pixels of the background image from those of the second depth image; and
the multi-frame depth reconstruction module 103, configured to perform multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image.
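A minimal sketch of the three modules as plain functions follows (an editorial illustration, not the patent's implementation: the capture callable, the clamped 8-bit subtraction, and the stubbed reconstruction step are all assumptions):

```python
import numpy as np

def acquire_frames(capture):
    """Module 101: capture the first depth image, the background image and the
    second depth image back-to-back (frame-rate pacing omitted in this sketch).
    `capture` is a hypothetical callable returning one grayscale frame."""
    return capture(), capture(), capture()

def enhance(depth_img, background_img):
    """Module 102: subtract the background gray value pixel by pixel, clamping
    at zero so the result remains a valid 8-bit gray image."""
    diff = depth_img.astype(np.int16) - background_img.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

def reconstruct(effective1, effective2):
    """Module 103 (stub): the patent leaves the decoding details to the
    structured-light pipeline; here the two effective images are simply summed."""
    return effective1.astype(np.uint16) + effective2.astype(np.uint16)

eff = enhance(np.array([[100, 10]], dtype=np.uint8),
              np.array([[40, 30]], dtype=np.uint8))
print(eff)  # [[60  0]] -- negative differences clamp to 0
```

The widening to `int16` before subtraction avoids the wrap-around that plain `uint8` arithmetic would produce on negative differences.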
The embodiment of the application also provides a multi-frame depth reconstruction device, which comprises a processor and a memory storing executable instructions of the processor, wherein the processor is configured to perform the steps of the multi-frame depth reconstruction method by executing the executable instructions.
As described above, according to this embodiment, the background image, the infrared structured light image and the infrared image of the same target, or the infrared image, the infrared structured light image and the background image of the same target, can be continuously and sequentially acquired according to the preset frame frequency value. Continuous acquisition of three frames of images is thus realized, which shortens the acquisition time interval between two adjacent frames, increases the difficulty of attacking the depth camera, and makes the depth camera more secure.
Those skilled in the art will appreciate that various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "platform."
Fig. 10 is a schematic structural diagram of a multi-frame depth reconstruction apparatus in an embodiment of the present application. An electronic device 600 according to this embodiment of the application is described below with reference to fig. 10. The electronic device 600 shown in fig. 10 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 10, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including memory unit 620 and processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610, such that the processing unit 610 performs the steps according to various exemplary embodiments of the present application described in the multi-frame depth reconstruction method section of this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit bus, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in fig. 10, other hardware and/or software modules may be used in connection with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
The embodiment of the application also provides a computer readable storage medium storing a program, wherein the steps of the multi-frame depth reconstruction method are implemented when the program is executed. In some possible embodiments, aspects of the application may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the application described in the multi-frame depth reconstruction method section above.
As described above, when the program of the computer readable storage medium of this embodiment is executed, the background image, the infrared structured light image and the infrared image of the same target, or the infrared image, the infrared structured light image and the background image of the same target, are continuously and sequentially acquired according to the preset frame frequency value. Continuous acquisition of three frames of images is thus realized, which shortens the acquisition time interval between two adjacent frames, increases the difficulty of attacking the depth camera, and makes the depth camera more secure.
Fig. 11 is a schematic structural view of a computer-readable storage medium in an embodiment of the present application. Referring to fig. 11, a program product 800 for implementing the above-described method according to an embodiment of the present application is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including but not limited to electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
According to the embodiment of the application, the background image, the infrared structured light image and the infrared image of the same target, or the infrared image, the infrared structured light image and the background image of the same target, are continuously and sequentially acquired according to the preset frame frequency value, so that continuous acquisition of three frames of images is realized, the acquisition time interval between two adjacent frames is shortened, the difficulty of attacking the depth camera is increased, and the depth camera is more secure. Further, the target structured light image is generated by subtracting the gray values of corresponding pixels of the background image from those of the infrared structured light image, and multi-frame depth reconstruction or three-dimensional reconstruction is then performed according to the target structured light image to generate a depth image, which reduces the interference of background light and allows the depth camera to be used in environments with strong illumination intensity.
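The reconstruction step ultimately converts each parallax (disparity) value in the parallax map to a distance by triangulation, as claim 5 below recites. A hedged sketch of that conversion using the standard pinhole relation depth = focal length × baseline / disparity (the calibration numbers are illustrative, not values from the patent):

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Return per-pixel depth in meters; near-zero disparities map to 0 (invalid)."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > eps
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Illustrative calibration: 800 px focal length, 5 cm projector-camera baseline.
depth_map = disparity_to_depth([[40.0, 0.0]], focal_length_px=800.0, baseline_m=0.05)
print(depth_map)  # 800 * 0.05 / 40 = 1.0 m for the valid pixel, 0 for the invalid one
```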
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for identical and similar parts, the embodiments may be referred to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes and modifications may be made by those skilled in the art within the scope of the claims without affecting the substance of the application.

Claims (10)

1. A multi-frame depth reconstruction method, characterized by comprising the following steps:
step S1: acquiring a preset frame frequency value, and continuously and sequentially acquiring a first depth image, a background image and a second depth image of the same target according to the frame frequency value;
step S2: generating a first effective image by subtracting the gray values of corresponding pixels of the background image from those of the first depth image; and generating a second effective image by subtracting the gray values of corresponding pixels of the background image from those of the second depth image;
step S3: and performing multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image.
2. The multi-frame depth reconstruction method according to claim 1, wherein the step S1 includes the steps of:
step S101: acquiring a preset frame frequency threshold value and an original frame frequency value, wherein the original frame frequency value is a frame frequency value of a depth camera for acquiring a first depth image and a second depth image;
step S102: determining a multiple value between the original frame frequency value and the frame frequency threshold; when the multiple value is smaller than or equal to a preset multiple threshold, determining the preset frame frequency value to be the product of the multiple threshold and the frame frequency threshold, and when the multiple value is larger than the preset multiple threshold, determining the preset frame frequency value to be the original frame frequency value;
step S103: continuously and sequentially acquiring, according to the frame frequency value, the first depth image, the background image and the second depth image of the same target within a sampling period of the frame frequency threshold.
3. The multi-frame depth reconstruction method according to claim 2, wherein the step S103 includes the steps of:
step S1031: projecting structured light and flood light toward a target by a light projector of the depth camera;
step S1032: when the multiple value is smaller than or equal to the preset multiple threshold, continuously and sequentially acquiring, according to the frame frequency value and within a plurality of sampling periods of the frame frequency threshold, an infrared structured light image, a background image and an infrared image of the same target, or an infrared image, a background image and an infrared structured light image of the same target;
step S1033: when the multiple value is larger than the preset multiple threshold, continuously and sequentially acquiring, according to the frame frequency value and within each of a plurality of sampling periods determined according to the frame frequency threshold, any three continuous frames comprising an infrared structured light image, a background image and an infrared image of the same target, or an infrared image, a background image and an infrared structured light image of the same target.
4. The multi-frame depth reconstruction method according to claim 1, wherein the step S2 comprises the steps of:
step S201: determining a pixel value of each pixel in the first depth image, the second depth image and the background image;
step S202: performing pixel level alignment on the first depth image, the second depth image and the background image;
step S203: subtracting gray values of corresponding pixels of the background image from each pixel in the first depth image to generate a first effective image; and subtracting the gray value of the corresponding pixel of the background image from each pixel in the second depth image to generate a second effective image.
5. A multi-frame depth reconstruction method according to claim 3, wherein said step S3 comprises the steps of:
step S301: calculating a parallax map of the infrared structured light image according to the infrared structured light image and known calibration information;
step S302: determining, according to the triangulation principle, the distance from the optical center of the depth camera corresponding to each parallax value in the parallax map, and generating depth information of each pixel;
step S303: and carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the depth information of each pixel to generate a depth image.
6. The multi-frame depth reconstruction method according to claim 1, wherein the step S3 includes the steps of:
step S31: synthesizing the first effective image and the second effective image to obtain a third effective image;
step S32: and carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the third effective image to generate a depth image.
7. A multi-frame depth reconstruction method according to claim 2, wherein the preset frame rate threshold is 15FPS and the preset multiple threshold is 3.
8. A multi-frame depth reconstruction system for implementing a multi-frame depth reconstruction method according to any one of claims 1 to 7, comprising:
the image acquisition module is used for acquiring a preset frame frequency value, and continuously and sequentially acquiring a first depth image, a background image and a second depth image of the same target according to the frame frequency value;
the image enhancement module, used for generating a first effective image by subtracting the gray values of corresponding pixels of the background image from those of the first depth image, and generating a second effective image by subtracting the gray values of corresponding pixels of the background image from those of the second depth image;
and the multi-frame depth reconstruction module is used for carrying out multi-frame depth reconstruction or three-dimensional reconstruction according to the first effective image and the second effective image to generate a depth image.
9. A multi-frame depth reconstruction apparatus, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of a multi-frame depth reconstruction method as claimed in any one of claims 1 to 7 via execution of the executable instructions.
10. A computer readable storage medium storing a program, wherein the program when executed implements the steps of a multi-frame depth reconstruction method as claimed in any one of claims 1 to 7.
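The frame-rate selection rule of claims 2 and 7 above can be sketched as follows (one reading of the claim language; the claim text itself is the authority, and the function and parameter names are the editor's). With the claim-7 defaults, a camera whose original rate is at most 3 × 15 = 45 FPS is driven at 45 FPS; a faster camera keeps its original rate:

```python
def preset_frame_rate(original_fps, frame_rate_threshold=15.0, multiple_threshold=3.0):
    """Select the preset frame frequency value per claims 2 and 7 (sketch)."""
    multiple = original_fps / frame_rate_threshold  # the "multiple value" of claim 2
    if multiple <= multiple_threshold:
        # slow camera: drive it at multiple_threshold * frame_rate_threshold
        return multiple_threshold * frame_rate_threshold
    # fast camera: keep its original frame rate
    return original_fps

print(preset_frame_rate(30.0))  # 30/15 = 2 <= 3, so 3 * 15 = 45.0
print(preset_frame_rate(60.0))  # 60/15 = 4 > 3, so the original 60.0
```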
CN202210173591.7A 2022-02-24 2022-02-24 Multi-frame depth reconstruction method, system, equipment and storage medium Pending CN116704001A (en)

Publications (1)

Publication Number Publication Date
CN116704001A true CN116704001A (en) 2023-09-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination