CN113031001B - Depth information processing method, depth information processing device, medium and electronic apparatus - Google Patents

Info

Publication number
CN113031001B
CN113031001B (application CN202110209182.3A)
Authority
CN
China
Prior art keywords
depth information
pixel point
original
signal
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110209182.3A
Other languages
Chinese (zh)
Other versions
CN113031001A (en)
Inventor
侯烨
胡池
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110209182.3A priority Critical patent/CN113031001B/en
Publication of CN113031001A publication Critical patent/CN113031001A/en
Application granted granted Critical
Publication of CN113031001B publication Critical patent/CN113031001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The disclosure provides a depth information processing method, a depth information processing device, a computer-readable storage medium, and an electronic device, and relates to the technical field of image processing. The depth information processing method comprises the following steps: acquiring original signals of a plurality of first pixel points collected by a time-of-flight (TOF) sensor; superposing the original signals of adjacent first pixel points to generate an enhanced signal of a second pixel point; and outputting depth information based on the enhanced signal. The method can determine depth information accurately and effectively, improving the completeness and accuracy of the depth image.

Description

Depth information processing method, depth information processing device, medium and electronic apparatus
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a depth information processing method, a depth information processing apparatus, a computer-readable storage medium, and an electronic device.
Background
Depth information, as key information describing three-dimensional images and scenes, is widely applied in 3D (three-dimensional) vision fields such as intelligent security, human-computer interaction, and robotics. Existing schemes typically determine the depth information of an image based on binocular vision, structured light, or TOF (Time of Flight) techniques. With TOF, a sensor actively emits infrared light toward the surface of an object and receives the light signal reflected back, so that depth information is determined from the flight time. However, this approach places high demands on the hardware of the terminal device: when the object is far from the device, or near the device but of low reflectivity, the generated depth information tends to be inaccurate and of poor quality; moreover, strong ambient noise also degrades the accuracy of the generated depth information.
Disclosure of Invention
The present disclosure provides a depth information processing method, a depth information processing apparatus, a computer-readable storage medium, and an electronic device, thereby mitigating, at least to some extent, the difficulty in the prior art of accurately and effectively determining depth information for a distant object or a nearby low-reflectivity object.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a depth information processing method including: acquiring original signals of a plurality of first pixel points acquired by a time-of-flight TOF sensor; overlapping the original signals of the adjacent first pixel points to generate an enhanced signal of the second pixel point; and outputting depth information based on the enhanced signal.
According to a second aspect of the present disclosure, there is provided a depth information processing apparatus including: the original signal acquisition module is used for acquiring original signals of a plurality of first pixel points acquired by the time-of-flight TOF sensor; the enhancement signal generation module is used for superposing original signals of adjacent first pixel points to generate enhancement signals of second pixel points; and the depth information output module is used for outputting depth information based on the enhanced signal.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the depth information processing method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and the memory is used for storing executable instructions of the processor. Wherein the processor is configured to perform the depth information processing method of the first aspect and possible implementations thereof via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
The original signals of a plurality of first pixel points collected by a time-of-flight TOF sensor are acquired; the original signals of adjacent first pixel points are superposed to generate an enhanced signal of a second pixel point; and depth information is output based on the enhanced signal. On the one hand, the present exemplary embodiment proposes a new depth information processing method: by superposing the original signals of adjacent first pixel points to generate an enhanced signal of a second pixel point, and then determining depth information from the enhanced signal, the original signals are enhanced algorithmically without changing the hardware structure of the TOF sensor or increasing the power consumption of the terminal device, so that the depth information determined from the enhanced signal has higher accuracy and reliability. On the other hand, superposing original signals into an enhanced signal allows accurate depth information to be determined for distant objects, nearby low-reflectivity objects, and other low-power application scenarios, giving the method a wide range of application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
fig. 2 shows a structural diagram of an electronic device in the present exemplary embodiment;
fig. 3 shows a flowchart of a depth information processing method in the present exemplary embodiment;
fig. 4 shows a schematic diagram of original image data in the present exemplary embodiment;
fig. 5 shows a schematic diagram of merging adjacent first pixel points into a second pixel point in the present exemplary embodiment;
fig. 6 shows a sub-flowchart of a depth information processing method in the present exemplary embodiment;
fig. 7 shows a sub-flowchart of another depth information processing method in the present exemplary embodiment;
fig. 8 shows a sub-flowchart of still another depth information processing method in the present exemplary embodiment;
fig. 9 is a diagram showing determination of target depth information in the present exemplary embodiment;
fig. 10 shows a flowchart of another depth information processing method in the present exemplary embodiment;
fig. 11 shows a structural diagram of a depth information processing apparatus in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Exemplary embodiments of the present disclosure provide a depth information processing method. Fig. 1 shows a system architecture diagram of an operating environment of the present exemplary embodiment. As shown in fig. 1, the system architecture 100 may include a server 110 and a terminal 120, which communicate with each other over a network; for example, the server 110 sends depth information to the terminal 120, and the terminal 120 displays a corresponding depth image based on the depth information. The server 110 refers to a background server that provides internet services; the terminal 120 may be an electronic device including, but not limited to, a smart phone, tablet computer, game console, wearable device, and the like.
It should be understood that the number of devices in fig. 1 is merely exemplary. Any number of clients may be set, or the server may be a cluster formed by a plurality of servers, according to implementation requirements.
The depth information processing method provided by the embodiments of the present disclosure may be executed by the server 110: for example, after the terminal 120 collects an original signal, it sends the original signal to the server 110; the server 110 processes the original signal to generate an enhanced signal, determines depth information from the enhanced signal, and returns the depth information to the terminal 120. The method may also be performed by the terminal 120: for example, the original signal is acquired through a TOF sensor configured on the terminal 120, and the terminal 120 directly processes the original signal and determines the depth information. This disclosure does not specifically limit the executing entity.
The exemplary embodiments of the present disclosure also provide an electronic device for performing the above depth information processing method. The electronic device may be the server 110 or the terminal 120 described above. Generally, an electronic device includes a processor and a memory. The memory is used for storing executable instructions of the processor, and can also store application data, such as image data, game data and the like; the processor is configured to execute the depth information processing method in the present exemplary embodiment via execution of the executable instructions.
The configuration of the above-described electronic device will be exemplarily described below taking the terminal 200 in fig. 2 as an example. It will be appreciated by those skilled in the art that the configuration of fig. 2 can also be applied to stationary type devices in addition to components specifically for mobile purposes.
As shown in fig. 2, the terminal 200 may specifically include: processor 210, internal memory 221, external memory interface 222, USB (Universal Serial Bus) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, indicator 292, motor 293, keys 294, and SIM (Subscriber Identity Module) card interface 295, and the like.
Processor 210 may include one or more processing units, such as: an AP (Application Processor), modem processor, GPU (Graphics Processing Unit), ISP (Image Signal Processor), controller, encoder, decoder, DSP (Digital Signal Processor), baseband processor and/or NPU (Neural-Network Processing Unit), and the like. An encoder may encode (i.e., compress) image or video data; a decoder may decode (i.e., decompress) the code stream data of an image or video to restore the image or video data. The terminal 200 may support one or more encoders and decoders.
In some embodiments, processor 210 may include one or more interfaces through which connections are made with other components of terminal 200.
The internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a volatile memory, a nonvolatile memory, and the like. The processor 210 performs various functional applications of the terminal 200 and data processing by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to extend the storage capability of the terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing music and video files.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the terminal 200, or may be connected to a headset or other electronic device.
The charge management module 240 is configured to receive a charge input from a charger. The charging management module 240 may also supply power to the device through the power management module 241 while charging the battery 242; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide solutions for 2G/3G/4G/5G wireless communication applied on the terminal 200. The wireless communication module 260 may provide wireless communication solutions applied on the terminal 200, including WLAN (Wireless Local Area Network, e.g., a Wi-Fi network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), IR (Infrared), etc.
The terminal 200 may implement a display function through a GPU, a display screen 290, an AP, and the like, and display a user interface. The terminal 200 may implement a photographing function through an ISP, a camera module 291, an encoder, a decoder, a GPU, a display screen 290, an AP, etc., and may implement an audio function through an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, an AP, etc.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, etc. to implement different sensing functions.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The motor 293 may generate vibration cues, may also be used for touch vibration feedback, or the like. The keys 294 include a power on key, a volume key, etc.
The terminal 200 may support one or more SIM card interfaces 295 for interfacing with a SIM card to perform functions such as telephony and data communications.
Fig. 3 shows an exemplary flow of the depth information processing method, which may be performed by the server 110 or the terminal 120, including the following steps S310 to S330:
in step S310, raw signals of a plurality of first pixel points acquired by the time-of-flight TOF sensor are acquired.
The TOF sensor can emit infrared pulse signals to the surrounding environment through a miniature laser. When an infrared pulse signal reaches an object in the environment, part of it is absorbed and re-radiated, and the other part is reflected back; the flight time is solved by an underlying algorithm, from which the distance between the object and the sensor can be determined. In the present exemplary embodiment, the TOF sensor may include an ITOF (Indirect TOF) sensor, a DTOF (Direct TOF) sensor, and the like: ITOF measures the time of flight indirectly, from the phase shift between the transmitted and received sine/square wave; DTOF measures the time of flight directly, as the time interval between transmitting an infrared pulse and receiving its reflection.
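The two readouts can be sketched as follows. This is a minimal, hypothetical illustration (the function names and the 20 MHz modulation frequency in the usage note are assumptions, not from the patent), converting a DTOF time interval or an ITOF phase shift into distance:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_time_s):
    """DTOF: distance from the measured emit-to-receive time interval.
    Halved because the light travels to the object and back."""
    return C * round_trip_time_s / 2.0

def itof_distance(phase_shift_rad, modulation_freq_hz):
    """ITOF: distance from the phase shift of the modulated wave
    (unambiguous only within half the modulation wavelength)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)
```

For instance, a 20 ns round trip corresponds to roughly 3 m, and an ITOF phase shift of π at an assumed 20 MHz modulation frequency gives about 3.75 m.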
In practical application, the image sensor may first obtain original image data, which may be an image in RAW format into which the sensor converts the collected optical signals as digital signals. A first pixel point is any pixel point in the pixel array of the original image data, i.e., a pixel point of the original image data at the original resolution. Further, the present exemplary embodiment may acquire the original signals of the first pixel points of the original image data collected by the TOF sensor. The original signals may take various forms: for example, the collected original optical or electrical signals, or other signals obtained by processing the collected photoelectric signals, such as the time-photon-count histogram determined when a DTOF sensor is used.
Step S320, the original signals of the adjacent first pixel points are overlapped to generate the enhancement signal of the second pixel point.
In general, the original image data includes a plurality of first pixel points; for example, the pixel array of the original image data shown in fig. 4 contains 8×6 pixel points, each of which may serve as a first pixel point. Adjacent first pixel points may be a plurality of first pixel points in a direct or indirect adjacency relationship; in fig. 4, for example, pixel point 1 and pixel point 2, pixel point 1 and pixel point 3, or pixel points 1, 2, 3, and 4 may all be considered first pixel points in an adjacency relationship. Adjacent first pixel points can be regarded as one larger pixel point, i.e., combined into a second pixel point: as shown in fig. 5, the 4 first pixel points in the region 510 in the left diagram are combined to determine the second pixel point 520 in the right diagram, and the enhanced signal of the second pixel point is the superposition of the original signals of the adjacent first pixel points used to form it. For example, when an ITOF sensor is used, the voltage values of the reaction capacitances of adjacent first pixel points in the raw image data may be accumulated; when a DTOF sensor is used, the histograms of the photon counts received by adjacent first pixel points may be accumulated. In other words, the present exemplary embodiment accumulates the original signals of adjacent first pixel points by reducing the resolution of the original image data and thereby generates the enhanced signal of the second pixel point, improving the signal-to-noise ratio: the second pixel point receives more photoelectric signal than a single first pixel point, which facilitates the subsequent determination of more accurate depth information.
It should be noted that different numbers of adjacent first pixel points may be superposed, according to actual needs, to generate the enhanced signal of a second pixel point. The number may be preset: for example, the original signals of 2 adjacent first pixel points may be superposed, or of the 4 first pixel points in the region 510 shown in fig. 5, or of 6 or 8 first pixel points, and so on. The adjacent first pixel points may extend horizontally, vertically, or in a combination of both on the pixel array; for example, in fig. 4, 2 adjacent first pixel points may be pixel points 1 and 2, or pixel points 1 and 3, and 4 adjacent first pixel points may be pixel points 1, 2, 3, and 4. The number of adjacent first pixel points may also be determined according to the size of the pixel lattice of the actual original image data: when the pixel lattice is large, the original signals of more adjacent first pixel points may be superposed; when it is small, fewer may be superposed; and so on. This disclosure does not specifically limit the number.
In addition, when superposing the original signals of adjacent first pixel points within the same original image data, the number of first pixel points may be the same everywhere (for example, the original signals of 4 adjacent first pixel points are superposed throughout), or may differ (for example, the original signals of 2 first pixel points are superposed at the edge of the pixel lattice and of 4 adjacent first pixel points elsewhere).
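The 2×2 merge of fig. 5 can be sketched as follows. This is a minimal illustration under assumed names, for scalar original signals such as ITOF capacitor voltages; each second pixel point's enhanced signal is the sum of the original signals of the four adjacent first pixel points it merges:

```python
def bin_2x2(raw):
    """raw: list of rows of scalar original signals (even dimensions assumed).
    Returns the half-resolution grid of enhanced signals."""
    h, w = len(raw), len(raw[0])
    return [
        [raw[i][j] + raw[i][j + 1] + raw[i + 1][j] + raw[i + 1][j + 1]
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]
```

Each output value aggregates four measurements, so shot noise averages down while the summed signal grows, which is the signal-to-noise gain described above.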
In an exemplary embodiment, if the original signal includes a reflected light signal, the step S320 may include:
and superposing the reflected light signals of the adjacent first pixel points to generate the reflected light signal of the second pixel point.
The present exemplary embodiment obtains the reflected light signal of each first pixel point and accumulates the reflected light signals of adjacent first pixel points to obtain the reflected light signal of the second pixel point; the enhanced signal of the second pixel point may then be determined from this reflected light. The reflected light signal can be obtained by processing the received photoelectric signal: for example, when a DTOF sensor is used, the histogram of the photon counts received by a first pixel point within one exposure period can serve as that pixel point's reflected light signal over the time range of the exposure period for the superposition of original signals; or, when an ITOF sensor is used, the voltage value of the reaction capacitance output by the first pixel point can serve as its reflected light signal for the superposition.
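For the DTOF case, the superposition is an element-wise sum over time bins. A minimal sketch (function name and data layout are assumptions):

```python
def sum_histograms(histograms):
    """Accumulate the photon-count histograms of adjacent first pixel points
    (each a list of per-time-bin counts) into the second pixel point's
    reflected-light signal."""
    return [sum(counts) for counts in zip(*histograms)]
```

The peak of the summed histogram stands out more clearly against background counts than in any single pixel point's histogram.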
Step S330, depth information is output based on the enhanced signal.
Depth information refers to data from which the distance between an object and the terminal device equipped with the TOF sensor can be determined; a depth image including the object can be generated from the depth information. The present exemplary embodiment may determine depth information from the enhanced signal and output a depth image: specifically, the flight time is determined from the enhanced signal, and the depth information of each pixel point is calculated from the flight time.
Merging adjacent first pixel points into a second pixel point reduces the resolution of the depth image. In order to determine the depth information of the first pixel points at the original resolution, in an exemplary embodiment, as shown in fig. 6, the above depth information processing method may further include the following steps:
step S610, determining original depth information based on the original signal;
the step S330 may include:
step S620, determining enhancement depth information based on the enhancement signal;
step S630, outputting target depth information according to the original depth information and the enhanced depth information.
In this exemplary embodiment, the depth information of each first pixel point may be determined from the original signals of the plurality of first pixel points in the original image data. Specifically, the flight time of the infrared pulse signal may be determined from the obtained phase difference or time interval; for example, when a DTOF sensor is used, the flight time may be taken as the time corresponding to the maximum photon count. The corresponding depth information is then calculated from the flight time determined for each first pixel point; this is the original depth information of the first pixel points at the original resolution. After the original signals of adjacent first pixel points are superposed, an enhanced signal of the second pixel point is obtained, and from the enhanced signal the depth information of each second pixel point at the current (reduced) resolution can be calculated. Further, the depth information of the second pixel points may be converted into depth information at the original resolution, i.e., enhanced depth information. Finally, the target depth information may be output by fusing the original depth information with the enhanced depth information.
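The DTOF readout just described — taking the flight time as the time bin with the maximum photon count, then converting it to depth — might look like the following sketch (the bin width and names are illustrative assumptions):

```python
C = 299_792_458.0  # speed of light, m/s

def histogram_depth(photon_hist, bin_width_s):
    """Depth of one pixel point from its time-photon-count histogram."""
    peak_bin = max(range(len(photon_hist)), key=photon_hist.__getitem__)
    time_of_flight = peak_bin * bin_width_s
    return C * time_of_flight / 2.0  # halved: light travels there and back
```

Applied per pixel point to the original histograms this yields the original depth information; applied to the summed histograms of second pixel points it yields the equivalent depth information at reduced resolution.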
In an exemplary embodiment, as shown in fig. 7, the original depth information includes original depth information of the first pixel point, and the step S620 may include the steps of:
step S710, determining equivalent depth information of the second pixel point based on the enhancement signal of the second pixel point;
step S720, up-sampling the equivalent depth information of the second pixel point to obtain the enhanced depth information of the first pixel point.
In this exemplary embodiment, the equivalent depth information of the second pixel point may be determined according to the superimposed enhancement signal, where the equivalent depth information is depth information obtained by superimposing and calculating the original signals based on the plurality of first pixel points, and is depth information under the low resolution depth image, but since the quality of the light received by the second pixel point is improved compared with that of the first pixel point, the signal to noise ratio may be effectively improved.
In order to facilitate the fusion with the original depth information, the present exemplary embodiment may perform an upsampling process on the equivalent depth information of the second pixel, restore the second pixel to the original resolution state, and determine the enhanced depth information of each first pixel under the original resolution according to the equivalent depth information. Different from the original depth information of each first pixel point under the original resolution, the original depth information is directly obtained by the original signals, and the enhanced depth information is the depth information obtained by combining the first pixel points to reduce the resolution and accumulating the original signals.
The upsampling of the equivalent depth information of the second pixel points may be performed in multiple ways, for example by algorithms such as bilinear interpolation, nearest-neighbor interpolation, or transposed convolution. The specific interpolation method may be chosen by weighing factors such as the required depth information quality, the platform's computing power, the acceptable power consumption, and the actual requirements of the back-end application, which is not specifically limited in this disclosure.
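Of the interpolation options listed above, nearest-neighbor is the simplest: each second pixel point's equivalent depth is copied back to the block of first pixel points it was merged from. A minimal numpy sketch (the function name and 2× factor are assumptions for the example):

```python
import numpy as np

def upsample_nearest(eq_depth, factor=2):
    """Nearest-neighbor upsampling: copy each second pixel point's
    equivalent depth to the factor x factor first pixel points it covers."""
    return np.repeat(np.repeat(eq_depth, factor, axis=0), factor, axis=1)

eq = np.array([[1.0, 2.0],
               [3.0, 4.0]])
print(upsample_nearest(eq))
# each low-resolution value fills a 2x2 block at the original resolution
```

Bilinear interpolation or transposed convolution would produce smoother transitions between blocks at a higher computational cost.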
In an exemplary embodiment, as shown in fig. 8, the step S630 may include the steps of:
step S810, determining the confidence of the first pixel point according to the original signal of the first pixel point;
step S820, when the confidence of the first pixel point is greater than a confidence threshold, determining the original depth information of the first pixel point as the target depth information of the first pixel point;
step S830, when the confidence of the first pixel point is less than the confidence threshold, determining the enhanced depth information of the first pixel point as the target depth information of the first pixel point.
The confidence is a parameter describing whether the depth information of a pixel point is reliable. The confidence threshold is the criterion for judging whether the depth information of a pixel point meets a preset standard, and may be set as needed, which is not specifically limited in this disclosure. In this exemplary embodiment, when the TOF sensor acquires the original image data, the confidence of each first pixel point may be calculated: a first pixel point with a strong original signal has a higher confidence, for example a pixel point receiving strong light intensity; a first pixel point with a weak original signal has a lower confidence, for example a pixel point receiving weak light intensity. When adjacent first pixel points are merged to determine a second pixel point, the confidence of the second pixel point may be calculated from its enhanced signal, or obtained by accumulating the confidences of the first pixel points, which is not specifically limited in this disclosure. When the depth information of the second pixel point is upsampled back to the original resolution, the confidence of each upsampled first pixel point may be determined in the same upsampling manner; that is, the confidence of the pixel points changes along with the image resolution.
When the target depth information is determined from the original depth information and the enhanced depth information, the confidence of each first pixel point may be scanned line by line: when the confidence of a first pixel point is greater than the confidence threshold, the original depth information of that first pixel point is determined as its target depth information; when the confidence of a first pixel point is less than the confidence threshold, the enhanced depth information of that first pixel point is determined as its target depth information.
As shown in fig. 9, the confidence threshold may be set to 4. The confidence of each first pixel point in the original image data is shown in fig. 9(a), and the corresponding original depth information is shown in fig. 9(b). The confidences of the 3 first pixel points at the upper right are 2, 3 and 3 respectively, all below the confidence threshold, so the depth information d1, d2, d3 corresponding to these 3 first pixel points may be considered unreliable. In this exemplary embodiment, by superimposing the original signals of the first pixel points in fig. 9(a) and upsampling the equivalent depth information of the resulting second pixel points, the upsampled image data is obtained; the confidences of the first pixel points it contains are shown in fig. 9(c), and the corresponding enhanced depth information is shown in fig. 9(d). The confidences of the 3 first pixel points at the upper right are now c1', c2', c3' respectively, all greater than the set confidence threshold, so the enhanced depth information d1', d2', d3' may be taken as the target depth information, generating the final target depth information shown in fig. 9(e). The present exemplary embodiment schematically illustrates only the target depth information of the first pixel points at the upper right corner; the determination for the other pixel points is similar and will not be described in detail.
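The per-pixel selection described above is a simple masked fusion. A minimal numpy sketch of the rule (the function name and the sample arrays are assumptions; only the threshold value 4 comes from the example in fig. 9):

```python
import numpy as np

def fuse_depth(orig_depth, enh_depth, confidence, threshold=4.0):
    """Per-pixel fusion: keep the original depth where the raw signal is
    reliable (confidence above threshold), otherwise fall back to the
    enhanced depth obtained from the superimposed signals."""
    return np.where(confidence > threshold, orig_depth, enh_depth)

conf = np.array([[9.0, 2.0],
                 [5.0, 3.0]])
orig = np.array([[1.0, 1.0],
                 [1.0, 1.0]])
enh = np.array([[7.0, 7.0],
                [7.0, 7.0]])
print(fuse_depth(orig, enh, conf))  # original depth kept only where conf > 4
```

This preserves full-resolution detail in well-lit regions while replacing unreliable measurements with the higher-SNR enhanced values.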
In an exemplary embodiment, the step S320 may include:
and obtaining an enhancement signal generated by superposition of original signals of non-edge pixel points in the first pixel points.
Considering that when adjacent first pixel points are merged into a second pixel point, original edge information may be lost if a first pixel point is an edge pixel point, the present exemplary embodiment may superimpose only the original signals of non-edge pixel points to generate the enhanced signal of the second pixel point. "Edge" and "non-edge" are relative concepts and may be defined as needed; for example, the edge pixel points may be the single pixel point closest to the edge, or two pixel points, or determined in another specific manner.
Further, in an exemplary embodiment, the depth information processing method may further include:
and when the difference between the original depth information of the first pixel point and the equivalent depth information of the corresponding second pixel point is smaller than a depth difference threshold value, determining the first pixel point as a non-edge pixel point.
The present exemplary embodiment may determine the non-edge pixel points by comparing the original depth information of a first pixel point with the equivalent depth information of the corresponding second pixel point. If the difference between the two exceeds the preset depth difference threshold, the first pixel point is regarded as an edge pixel point and is excluded from the superposition of original signals, thereby preserving the edge information.
Fig. 10 shows a flowchart of another depth information processing method in the present exemplary embodiment, which may specifically include the following steps:
step S1010, obtaining original image data;
step S1020, acquiring original signals of a plurality of first pixel points in original image data acquired by a time-of-flight TOF sensor;
step S1030, overlapping the original signals of the adjacent first pixel points to generate an enhanced signal of the second pixel point;
step S1040, determining original depth information based on the original signal;
step S1050, determining equivalent depth information of the second pixel point based on the enhanced signal of the second pixel point;
step S1060, up-sampling the equivalent depth information of the second pixel to obtain the enhanced depth information of the first pixel;
step S1070, outputting the target depth information according to the original depth information and the enhanced depth information.
Step S1040 is similar to step S1050 in how the depth information of a pixel point is determined: in both, the flight time is calculated and the depth information is determined by the depth estimation algorithm of the TOF sensor. By fusing the original depth information with the enhanced depth information, the present exemplary embodiment alleviates the problem of the low signal-to-noise ratio of the original depth information, so that the determined target depth information has higher accuracy.
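The flowchart steps S1020 through S1070 can be strung together in a short end-to-end sketch. This is a simplification under stated assumptions: the raw signal is represented as one flight time per pixel, merging averages the flight times of a 2×2 block (a real DTOF pipeline would accumulate photon histograms and re-estimate the flight time), and upsampling is nearest-neighbor; the function name is hypothetical.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def process_depth(t_raw, conf, threshold=4.0):
    """End-to-end sketch of steps S1020-S1070 for a per-pixel flight-time map."""
    orig_depth = C * t_raw / 2.0                                      # S1040
    h, w = t_raw.shape
    t_merged = t_raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # S1030
    eq_depth = C * t_merged / 2.0                                     # S1050
    enh_depth = np.repeat(np.repeat(eq_depth, 2, axis=0), 2, axis=1)  # S1060
    return np.where(conf > threshold, orig_depth, enh_depth)          # S1070

t_raw = np.full((4, 4), 20e-9)      # every pixel reports a 20 ns flight time
conf = np.tile([5.0, 1.0], (4, 2))  # alternating reliable / unreliable pixels
target = process_depth(t_raw, conf)
```

In this toy input every path yields the same 3 m depth, but with noisy real data the low-confidence pixels would receive the smoother, higher-SNR merged estimate.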
In summary, in the present exemplary embodiment, original signals of a plurality of first pixel points acquired by a time-of-flight TOF sensor are acquired; the original signals of adjacent first pixel points are superimposed to generate an enhanced signal of a second pixel point; and depth information is output based on the enhanced signal. On the one hand, the present exemplary embodiment proposes a new depth information processing method: by superimposing the original signals of adjacent first pixel points to generate the enhanced signal of a second pixel point and determining depth information from the enhanced signal, the original signals are enhanced algorithmically without changing the TOF sensor hardware or increasing the power consumption of the terminal device, so that the depth information determined from the enhanced signal has higher accuracy and reliability. On the other hand, by superimposing the original signals into an enhanced signal, the present exemplary embodiment can determine accurate depth information for distant objects, close objects with low reflectivity, or other low-power application scenarios, and thus has a wide range of application.
Exemplary embodiments of the present disclosure also provide a depth information processing apparatus. As shown in fig. 11, the depth information processing apparatus 1100 may include: a raw signal acquisition module 1110, configured to acquire raw signals of a plurality of first pixel points acquired by a time-of-flight TOF sensor; the enhanced signal generating module 1120 is configured to superimpose original signals of adjacent first pixel points to generate an enhanced signal of a second pixel point; the depth information output module 1130 is configured to output depth information based on the enhancement signal.
In an exemplary embodiment, the original signal comprises a reflected light signal; the enhanced signal generation module includes: a signal superposition unit for superimposing the reflected light signals of adjacent first pixel points to generate the reflected light distribution signal of the second pixel point.
In an exemplary embodiment, the depth information processing apparatus may further include: an original depth information determining module for determining original depth information based on the original signal; the depth information output module includes: an enhanced depth information determination unit for determining enhanced depth information based on the enhanced signal; and the target depth information output unit is used for outputting target depth information according to the original depth information and the enhanced depth information.
In an exemplary embodiment, the original depth information includes original depth information of the first pixel point; the enhanced depth information determination unit includes: an equivalent depth information determining subunit, configured to determine equivalent depth information of the second pixel point based on the enhancement signal of the second pixel point; and the up-sampling subunit is used for up-sampling the equivalent depth information of the second pixel point to obtain the enhanced depth information of the first pixel point.
In an exemplary embodiment, the target depth information output unit includes: the confidence determining subunit is used for determining the confidence of the first pixel point according to the original signal of the first pixel point; a target depth information determining subunit, configured to determine, when the confidence coefficient of the first pixel point is greater than the confidence coefficient threshold, original depth information of the first pixel point as target depth information of the first pixel point; and determining the enhanced depth information of the first pixel point as the target depth information of the first pixel point when the confidence coefficient of the first pixel point is smaller than the confidence coefficient threshold value.
In an exemplary embodiment, the enhancement signal generation module includes: and the enhancement signal generation unit is used for acquiring an enhancement signal generated by superposition of original signals of non-edge pixel points in the first pixel points.
In an exemplary embodiment, the depth information processing apparatus further includes: and the non-edge pixel point determining module is used for determining the first pixel point as a non-edge pixel point when the difference between the original depth information of the first pixel point and the equivalent depth information of the corresponding second pixel point is smaller than a depth difference threshold value.
The specific details of each part in the above apparatus are already described in the method part embodiments, and thus will not be repeated.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "exemplary method" section above, for example any one or more of the steps of fig. 3, 6, 7, 8 or 10. The program product may employ a portable compact disc read-only memory (CD-ROM), include the program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit", "module" or "system". Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A depth information processing method, comprising:
acquiring original signals of a plurality of first pixel points acquired by a time-of-flight TOF sensor;
overlapping the original signals of the adjacent first pixel points to generate an enhanced signal of the second pixel point;
outputting depth information based on the enhancement signal;
the method further comprises the steps of:
determining original depth information based on the original signal;
the outputting depth information based on the enhanced signal includes:
determining enhancement depth information based on the enhancement signal;
outputting target depth information according to the original depth information and the enhanced depth information;
the original depth information comprises original depth information of the first pixel point;
the determining enhancement depth information based on the enhancement signal includes:
determining equivalent depth information of the second pixel point based on the enhancement signal of the second pixel point;
and up-sampling the equivalent depth information of the second pixel point to obtain the enhanced depth information of the first pixel point.
2. The method of claim 1, wherein the original signal comprises a reflected light signal;
the step of superposing the original signals of the adjacent first pixel points to generate the enhancement signal of the second pixel point comprises the following steps:
and superposing the reflected light signals of the adjacent first pixel points to generate the reflected light signals of the second pixel points.
3. The method of claim 1, wherein the outputting target depth information from the original depth information and the enhanced depth information comprises:
determining the confidence coefficient of the first pixel point according to the original signal of the first pixel point;
when the confidence coefficient of the first pixel point is larger than a confidence coefficient threshold value, determining the original depth information of the first pixel point as target depth information of the first pixel point;
and when the confidence coefficient of the first pixel point is smaller than the confidence coefficient threshold value, determining the enhanced depth information of the first pixel point as target depth information of the first pixel point.
4. The method of claim 1, wherein the superimposing the original signals of adjacent first pixels to generate the enhanced signal of the second pixel comprises:
and obtaining an enhancement signal generated by superposition of original signals of non-edge pixel points in the first pixel points.
5. The method according to claim 4, wherein the method further comprises:
and when the difference between the original depth information of the first pixel point and the equivalent depth information of the corresponding second pixel point is smaller than a depth difference threshold value, determining the first pixel point as a non-edge pixel point.
6. A depth information processing apparatus, comprising:
the original signal acquisition module is used for acquiring original signals of a plurality of first pixel points acquired by the time-of-flight TOF sensor;
the enhancement signal generation module is used for superposing original signals of adjacent first pixel points to generate enhancement signals of second pixel points;
a depth information output module for outputting depth information based on the enhanced signal;
the apparatus is further configured to:
determining original depth information based on the original signal;
a depth information output module configured to:
determining enhancement depth information based on the enhancement signal;
outputting target depth information according to the original depth information and the enhanced depth information;
the original depth information comprises original depth information of the first pixel point;
the determining enhancement depth information based on the enhancement signal is configured to:
determining equivalent depth information of the second pixel point based on the enhancement signal of the second pixel point;
and up-sampling the equivalent depth information of the second pixel point to obtain the enhanced depth information of the first pixel point.
7. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 5.
8. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 5 via execution of the executable instructions.
CN202110209182.3A 2021-02-24 2021-02-24 Depth information processing method, depth information processing device, medium and electronic apparatus Active CN113031001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110209182.3A CN113031001B (en) 2021-02-24 2021-02-24 Depth information processing method, depth information processing device, medium and electronic apparatus

Publications (2)

Publication Number Publication Date
CN113031001A CN113031001A (en) 2021-06-25
CN113031001B true CN113031001B (en) 2024-02-13

Family

ID=76461689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110209182.3A Active CN113031001B (en) 2021-02-24 2021-02-24 Depth information processing method, depth information processing device, medium and electronic apparatus

Country Status (1)

Country Link
CN (1) CN113031001B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110333501A (en) * 2019-07-12 2019-10-15 深圳奥比中光科技有限公司 Depth measurement device and distance measurement method
CN111366941A (en) * 2020-04-20 2020-07-03 深圳奥比中光科技有限公司 TOF depth measuring device and method
CN111766606A (en) * 2020-06-19 2020-10-13 Oppo广东移动通信有限公司 Image processing method, device and equipment of TOF depth image and storage medium
CN111866369A (en) * 2020-05-28 2020-10-30 北京迈格威科技有限公司 Image processing method and device
CN112102386A (en) * 2019-01-22 2020-12-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120084216A (en) * 2011-01-19 2012-07-27 삼성전자주식회사 Method of 3d image signal processing for removing pixel noise of depth information and 3d image processor of the same
RU2012154657A (en) * 2012-12-17 2014-06-27 ЭлЭсАй Корпорейшн METHODS AND DEVICE FOR COMBINING IMAGES WITH DEPTH GENERATED USING DIFFERENT METHODS FOR FORMING IMAGES WITH DEPTH
US10884109B2 (en) * 2018-03-30 2021-01-05 Microsoft Technology Licensing, Llc Analytical-adaptive multifrequency error minimization unwrapping

Also Published As

Publication number Publication date
CN113031001A (en) 2021-06-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant