CN111626929B - Depth image generation method and device, computer readable medium and electronic equipment


Info

Publication number
CN111626929B
CN111626929B
Authority
CN
China
Prior art keywords
depth data
data
depth
quality
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010350139.4A
Other languages
Chinese (zh)
Other versions
CN111626929A
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010350139.4A priority Critical patent/CN111626929B/en
Publication of CN111626929A publication Critical patent/CN111626929A/en
Application granted granted Critical
Publication of CN111626929B publication Critical patent/CN111626929B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The present disclosure relates to the field of image processing technologies, and in particular, to a depth image generating method, a depth image generating device, a computer readable medium, and an electronic device. The method comprises the following steps: acquiring original depth data, and performing quality recovery processing on the original depth data to acquire high-quality depth data corresponding to the original depth data; and determining assignment of each pixel point in the depth image based on the high-quality depth data so as to generate a depth image corresponding to the original depth data. With the method and device, quality recovery processing can be performed on the original depth data before the depth image is generated from it, so that a depth image with higher resolution and better accuracy can be generated from the quality-recovered high-quality depth data; at the same time, the problem of obvious errors appearing in the depth image due to errors in the original depth data can be avoided to a certain extent.

Description

Depth image generation method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a depth image generating method, a depth image generating device, a computer readable medium, and an electronic device.
Background
A depth image, also called range image, refers to an image that uses the distance (depth) from an image acquisition device to points in a scene as pixel values, which can directly reflect the geometry of the visible surface of the scene. With the continuous development of image and video technologies, depth images are also widely used.
In the related art, the actual distance between a sensor and the scene, or objects in the scene, is generally acquired through the sensor, and the pixel values are then determined according to the actual distance to generate the depth image. However, under adverse conditions such as constraints on device power consumption, degradation of sensor accuracy, or a poor collection environment for the sensor, the quality of the data collected by the sensor often suffers. In this case, the output depth image may not only have low resolution, but may also have reduced accuracy or even obvious errors.
Disclosure of Invention
The disclosure aims to provide a depth image generation method, a depth image generation device, a computer readable medium and an electronic device, so as to avoid, at least to a certain extent, the problems of low resolution and low accuracy of a depth image caused by poor quality of the data collected by a sensor in an adverse environment.
According to a first aspect of the present disclosure, there is provided a method of generating a depth image, including: acquiring original depth data, and performing quality recovery processing on the original depth data to acquire high-quality depth data corresponding to the original depth data; the quality recovery processing at least comprises a data cleaning process and a super-resolution reconstruction process; and determining the assignment of each pixel point in the depth image based on the high-quality depth data so as to generate the depth image corresponding to the original depth data.
According to a second aspect of the present disclosure, there is provided a depth image generating apparatus including: the quality recovery module is used for acquiring original depth data and carrying out quality recovery processing on the original depth data so as to acquire high-quality depth data corresponding to the original depth data; the quality recovery processing at least comprises a data cleaning process and a super-resolution reconstruction process; and the image generation module is used for determining the assignment of each pixel point in the depth image based on the high-quality depth data so as to generate the depth image corresponding to the original depth data.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the above-described depth image generation method.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising:
a processor; and
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the depth image generation method described above.
In the method for generating the depth image provided by the embodiment of the disclosure, after the original depth data is obtained, quality recovery processing can be performed on the original depth data before the depth image is generated by the original depth data, so that the depth image with higher resolution and better accuracy can be generated according to the high-quality depth data after quality recovery; in addition, since data cleaning and super-resolution reconstruction have been performed before the depth image is generated, a problem of significant errors in the depth image due to errors in the original depth data can be avoided to some extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied;
fig. 3 schematically illustrates a schematic diagram of a method for generating a depth image in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a method of acquiring high quality depth data in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of a method of fusing first depth data and second depth data according to a weighting strategy in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of a method of fusion calculation according to a first weight and a second weight in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a schematic diagram of another method of acquiring high quality depth data in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a schematic diagram of yet another method of acquiring high quality depth data in an exemplary embodiment of the present disclosure;
fig. 9 schematically illustrates a system frame diagram of a TOF (time of flight) module in the related art;
fig. 10 schematically illustrates a system frame diagram of a TOF module in an exemplary embodiment of the present disclosure;
Fig. 11 schematically illustrates a composition diagram of a depth image generating apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 illustrates a schematic diagram of a system architecture of an exemplary application environment to which a depth image generating method and apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of the terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others. The terminal devices 101, 102, 103 may be various electronic devices having image processing functions including, but not limited to, desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 105 may be a server cluster formed by a plurality of servers.
The method for generating a depth image provided by the embodiments of the present disclosure is generally performed in the terminal devices 101, 102, 103, and accordingly, the generating means of the depth image is generally provided in the terminal devices 101, 102, 103. However, it is easily understood by those skilled in the art that the method for generating a depth image provided in the embodiment of the present disclosure may also be performed by the server 105, and accordingly, the device for generating a depth image may also be disposed in the server 105, which is not particularly limited in the present exemplary embodiment. For example, in an exemplary embodiment, the user may collect the original depth data through a depth sensor included in the terminal device 101, 102, 103 and used for acquiring the depth image, and then upload the original depth data to the server 105, and after the server generates the depth image through the depth image generating method provided by the embodiment of the present disclosure, the depth image is transmitted to the terminal device 101, 102, 103, and so on.
Exemplary embodiments of the present disclosure provide an electronic device for implementing a method of generating a depth image, which may be the terminal device 101, 102, 103 or the server 105 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform a method of generating a depth image via execution of the executable instructions.
The electronic device may be implemented in various forms, and may include mobile devices such as a mobile phone, a tablet computer, a notebook computer, a personal digital assistant (Personal Digital Assistant, PDA), a navigation device, a wearable device, and a drone, as well as fixed devices such as a desktop computer and a smart television. The configuration of the electronic device will be exemplarily described below using the mobile terminal 200 of fig. 2 as an example. It will be appreciated by those skilled in the art that, apart from the components intended specifically for mobile use, the configuration of fig. 2 can also be applied to fixed devices. In other embodiments, the mobile terminal 200 may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is shown schematically only and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also employ an interface arrangement different from that of fig. 2, or a combination of interface arrangements.
As shown in fig. 2, the mobile terminal 200 may specifically include: processor 210, internal memory 221, external memory interface 222, universal serial bus (Universal Serial Bus, USB) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, indicator 292, motor 293, keys 294, and subscriber identity module (Subscriber Identification Module, SIM) card interface 295, and the like. Wherein the sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, and the like.
Processor 210 may include, among other things, one or more processing units, such as: the processor 210 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a Neural network processor (Neural-Network Processing Unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transfer instructions, and notification instructions, whose execution is controlled by the processor 210. In some implementations, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or cycled through. If the processor 210 needs to reuse the instructions or data, it can call them directly from the memory. Repeated accesses are thereby avoided and the waiting time of the processor 210 is reduced, improving the efficiency of the system.
In some implementations, the processor 210 may include one or more interfaces. The interfaces may include an integrated circuit (Inter-Integrated Circuit, I2C) interface, an integrated circuit built-in audio (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver Transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, a mobile industry processor interface (Mobile Industry Processor Interface, MIPI), a General-Purpose Input/Output (GPIO) interface, a subscriber identity module (Subscriber Identity Module, SIM) interface, and/or a universal serial bus (Universal Serial Bus, USB) interface, among others. Connections are made through different interfaces with other components of mobile terminal 200.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a MiniUSB interface, a micro USB interface, a USB type c interface, or the like. The USB interface 230 may be used to connect to a charger to charge the mobile terminal 200, may also be connected to a headset to play audio, and may also be used to connect to other electronic devices, such as a computer, a peripheral device, etc. with the mobile terminal 200.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 240 may receive a charging input of a wired charger through the USB interface 230. In some wireless charging embodiments, the charge management module 240 may receive wireless charging input through a wireless charging coil of the mobile terminal 200. The charging management module 240 may also provide power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the display 290, the camera module 291, the wireless communication module 260, and the like. The power management module 241 may also be configured to monitor battery capacity, battery cycle times, battery health (leakage, impedance), and other parameters. In other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charge management module 240 may be disposed in the same device.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in mobile terminal 200 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the mobile terminal 200. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA), etc. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering and amplifying the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 271, the receiver 272, etc.), or displays images or videos through the display screen 290. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 250 or other functional module, independent of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication including wireless local area network (Wireless Local Area Networks, WLAN) (e.g., wireless fidelity (Wireless Fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field wireless communication technology (Near Field Communication, NFC), infrared technology (IR), etc., applied on the mobile terminal 200. The wireless communication module 260 may be one or more devices that integrate at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 250 of mobile terminal 200 are coupled, and antenna 2 and wireless communication module 260 are coupled, so that mobile terminal 200 may communicate with a network and other devices through wireless communication technology. The wireless communication techniques may include the Global System for Mobile communications (Global System for Mobile communications, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), time-division synchronous code division multiple access (Time Division-Synchronous Code Division Multiple Access, TD-SCDMA), long term evolution (Long Term Evolution, LTE), new radio (New Radio, NR), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (Global Positioning System, GPS), a global navigation satellite system (Global Navigation Satellite System, GLONASS), a Beidou satellite navigation system (Beidou Navigation Satellite System, BDS), a Quasi-zenith satellite system (Quasi-Zenith Satellite System, QZSS) and/or a satellite-based augmentation system (Satellite Based Augmentation Systems, SBAS).
The mobile terminal 200 implements display functions through a GPU, a display screen 290, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 290 is used for displaying images, videos, and the like. The display screen 290 includes a display panel. The display panel may employ a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), an active-matrix organic light-emitting diode (Active-Matrix Organic Light Emitting Diode, AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (Quantum dot Light Emitting Diodes, QLED), or the like. In some embodiments, mobile terminal 200 may include 1 or N displays 290, N being a positive integer greater than 1.
The mobile terminal 200 may implement a photographing function through an ISP, a camera module 291, a video codec, a GPU, a display screen 290, an application processor, and the like.
The ISP is used to process the data fed back by the camera module 291. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some implementations, an ISP may be provided in the camera module 291.
The camera module 291 is used for capturing still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the mobile terminal 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1, and if the mobile terminal 200 includes N cameras, one of the N cameras is a master camera.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the mobile terminal 200 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, etc.
Video codecs are used to compress or decompress digital video. The mobile terminal 200 may support one or more video codecs. In this way, the mobile terminal 200 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (Moving Picture Experts Group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile terminal 200. The external memory card communicates with the processor 210 via an external memory interface 222 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (e.g., audio data, phonebook, etc.) created during use of the mobile terminal 200, and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (Universal Flash Storage, UFS), and the like. The processor 210 performs various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The mobile terminal 200 may implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, an application processor, and the like. Such as music playing, recording, etc.
The audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 270 may also be used to encode and decode audio signals. In some implementations, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
A speaker 271, also called "horn", is used to convert the audio electrical signal into a sound signal. The mobile terminal 200 can listen to music through the speaker 271 or listen to hands-free calls.
A receiver 272, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When the mobile terminal 200 receives a telephone call or voice message, the voice can be received by placing the receiver 272 close to the human ear.
A microphone 273, also called a "mic", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can speak close to the microphone 273 to input a sound signal into the microphone 273. The mobile terminal 200 may be provided with at least one microphone 273. In other embodiments, the mobile terminal 200 may be provided with two microphones 273, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the mobile terminal 200 may further be provided with three, four or more microphones 273 to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and the like.
The earphone interface 274 is used to connect a wired earphone. The headset interface 274 may be the USB interface 230, a 3.5mm open mobile terminal platform (Open Mobile Terminal Platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The depth sensor 2801 is used to acquire depth information of a scene. In some embodiments, a depth sensor may be provided to the camera module 291.
The pressure sensor 2802 is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 2802 may be disposed on display 290. The pressure sensor 2802 is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like.
The gyro sensor 2803 may be used to determine a motion gesture of the mobile terminal 200. In some embodiments, the angular velocity of mobile terminal 200 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 2803. The gyro sensor 2803 can be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 2803 detects the angle of the shake of the mobile terminal 200, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to counteract the shake of the mobile terminal 200 by the reverse motion, thereby realizing anti-shake. The gyro sensor 2803 can also be used for navigation, somatosensory of game scenes.
The air pressure sensor 2804 is used to measure air pressure. In some embodiments, the mobile terminal 200 calculates altitude from barometric pressure values measured by the barometric pressure sensor 2804, aiding in positioning and navigation.
In addition, sensors for other functions, such as magnetic sensors, acceleration sensors, distance sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, bone conduction sensors, etc., may be provided in the sensor module 280 according to actual needs.
The keys 294 include a power on key, a volume key, etc. The keys 294 may be mechanical keys. Or may be a touch key. The mobile terminal 200 may receive key inputs, generating key signal inputs related to user settings and function controls of the mobile terminal 200.
The motor 293 may generate vibration cues, such as vibration cues of a call, an alarm clock, a received message, etc., and may also be used for touch vibration feedback, such as touch operations on different applications (e.g., photographing, gaming, audio playing, etc.), or touch operations on different areas of the display screen 290, which may correspond to different vibration feedback effects. The touch vibration feedback effect may support customization.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc.
The SIM card interface 295 is for interfacing with a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to enable contact with and separation from the mobile terminal 200. The mobile terminal 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support Nano SIM cards, Micro SIM cards, and the like. The same SIM card interface 295 may be used to insert multiple cards simultaneously. The types of the plurality of cards may be the same or different. The SIM card interface 295 may also be compatible with different types of SIM cards. The SIM card interface 295 may also be compatible with external memory cards. The mobile terminal 200 interacts with the network through the SIM card to realize functions such as communication and data communication. In some implementations, the mobile terminal 200 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the mobile terminal 200 and cannot be separated from the mobile terminal 200.
In the related art, in order to obtain a depth image with higher resolution, after the depth image is generated from the original depth data, the depth image is often processed by an upsampling or super-resolution reconstruction technique to improve its resolution. However, since part of the original depth information has already been lost when the depth image is generated, the depth image obtained by either of these two methods, while high in resolution, still has low accuracy. For example, in the system architecture of the conventional TOF module, referring to fig. 9, after the signal transmitting module 910 transmits a detection signal, the signal is reflected back after reaching an object 920; the signal receiving module 930 receives the returned signal and processes it to obtain original depth data; the central processing module 940 receives the original depth data processed by the signal receiver, filters, encodes and converts the original depth data, and then directly inputs the obtained depth data into the depth image generating module 950 to generate a depth image. Since part of the depth data is lost in the process of generating the depth image, the generated depth image still has low accuracy even if it is subsequently processed by an upsampling or super-resolution reconstruction technique.
The method and apparatus for generating a depth image according to exemplary embodiments of the present disclosure will be described in detail.
Fig. 3 shows a flow of a depth image generating method in the present exemplary embodiment, including the following steps S310 and S320:
in step S310, original depth data is acquired, and quality recovery processing is performed on the original depth data to acquire high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, the quality recovery process should include at least a data cleaning process and a super-resolution reconstruction process, and may further include other processes, and the execution order of the data cleaning and super-resolution reconstruction processes, and the other processes included are not particularly limited. For example, the process of data cleaning may be performed on the original depth data, followed by super-resolution reconstruction.
The data cleaning process may include at least one of a filtering process, an encoding process, and a format conversion process. For example, the cleaning process may include a filtering process, an encoding process, and a format conversion process at the same time. The super-resolution reconstruction process may be implemented by a convolutional neural network, directional interpolation, deep learning, and the like, which is not particularly limited in the present disclosure.
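As a minimal, non-limiting sketch (the disclosure does not fix particular algorithms), the two processes might look as follows, with median filtering standing in for the data cleaning process and cubic interpolation standing in for the super-resolution reconstruction process; the function names and parameters are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def clean_depth(raw: np.ndarray) -> np.ndarray:
    """Data cleaning sketch: filter isolated noise values and convert to a common float32 format."""
    return ndimage.median_filter(raw.astype(np.float32), size=3)

def super_resolve(depth: np.ndarray, scale: int = 2) -> np.ndarray:
    """Super-resolution reconstruction sketch: cubic-spline upscaling; a CNN or
    directional interpolation could equally be used."""
    return ndimage.zoom(depth, zoom=scale, order=3)
```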
In an exemplary embodiment, the raw depth data may include two-dimensional image information. Because different sensors or camera modules collect data in different forms, the directly collected data may need to be converted into this form.
Specifically, when the directly collected data is one-dimensional information, the one-dimensional information may be arranged according to a certain order to generate two-dimensional image information. For example, the electrical signals acquired by rows on the CMOS photosensitive element may be arranged in columns to form a two-dimensional charge image, so as to obtain two-dimensional image information; when the directly collected data is two-dimensional information, the two-dimensional information can be rearranged according to a certain sequence to generate two-dimensional image information. For example, the phase images may be rearranged in reverse order; when the directly collected data is three-dimensional information, a series of two-dimensional information can be obtained according to a time axis sequence, and then operations such as pixel-level weighted mapping and the like are performed according to a certain sequence, so that two-dimensional image information is generated. Wherein the certain order may be a time order, a location order, or other order specified in advance.
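For instance, arranging a one-dimensional, row-by-row readout into two-dimensional image information could be sketched as follows; the width and height are hypothetical sensor dimensions:

```python
import numpy as np

def rows_to_image(samples: np.ndarray, width: int, height: int) -> np.ndarray:
    """Arrange 1D samples acquired row by row into a (height, width) 2D array."""
    assert samples.size == width * height
    return samples.reshape(height, width)
```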
In an exemplary embodiment, data collection may be performed by a depth sensor for acquiring a depth image while acquiring raw depth data; the depth sensor may be disposed in the camera module, for example, may collect raw depth data based on the TOF (time of flight) module. In addition, data collection can be performed by other devices, and original depth data can be obtained by receiving or reading the data collected by other devices.
In an exemplary embodiment, referring to fig. 4, performing quality recovery processing on original depth data to obtain high quality depth data corresponding to the original depth data may include the following steps S410 to S430:
in step S410, a data cleaning process is performed on the original depth data, and a super-resolution reconstruction process is performed on the first intermediate data obtained by the data cleaning process, so as to obtain first depth data corresponding to the original depth data.
In an exemplary embodiment, after the original depth data is obtained, a data cleaning process may be performed on the original depth data, and a super-resolution reconstruction process may then be performed on the first intermediate data obtained after cleaning, so as to obtain first depth data corresponding to the original depth data. By performing the data cleaning process on the original depth data, noise data or erroneous data in the original depth data can be removed, and the super-resolution reconstruction process can then be carried out on the correct data, so as to obtain first depth data of better quality.
In step S420, a super-resolution reconstruction process is performed on the original depth data, and a data cleaning process is performed on the reconstructed second intermediate data, so as to obtain second depth data corresponding to the original depth data.
In an exemplary embodiment, after the original depth data is obtained, a super-resolution reconstruction process may be performed on the original depth data, and a data cleaning process may then be performed on the reconstructed second intermediate data, so as to obtain second depth data corresponding to the original depth data. By performing the super-resolution reconstruction process on the original depth data, the reconstruction is based on the unprocessed original depth data and yields second intermediate data that reflects the original depth data; the data cleaning process is then performed on the second intermediate data to remove erroneous data such as noise points, so as to obtain second depth data of better quality.
In step S430, the first depth data and the second depth data are fused according to a preset fusion policy, so as to obtain high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, because of their respective generation processes, the first depth data is biased toward the data remaining after cleaning of the original depth data, while the second depth data is biased toward the original depth data. The first depth data and the second depth data can therefore be fused together according to a preset fusion strategy to obtain high-quality depth data that is not biased in either direction.
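A minimal sketch of this two-path recovery, assuming interchangeable cleaning and reconstruction functions (such as the hypothetical helpers above) and uniform per-pixel weights, might look like this:

```python
import numpy as np

def recover_quality(raw: np.ndarray, clean_fn, sr_fn,
                    w1: float = 0.5, w2: float = 0.5) -> np.ndarray:
    """Two-path quality recovery: clean-then-reconstruct and reconstruct-then-clean, then fuse."""
    first = sr_fn(clean_fn(raw))      # S410: data cleaning, then super-resolution reconstruction
    second = clean_fn(sr_fn(raw))     # S420: super-resolution reconstruction, then data cleaning
    return w1 * first + w2 * second   # S430: fuse according to a (here uniform) weight strategy
```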
In an exemplary embodiment, the preset fusion policy may include a weight policy. At this time, referring to fig. 5, the first depth data and the second depth data are fused according to a preset fusion policy to obtain high quality depth data corresponding to the original depth data, which may include the following steps S510 and S520:
in step S510, a first depth value corresponding to each pixel in the depth image is determined according to the first depth data, and a second depth value corresponding to each pixel in the depth image is determined according to the first depth data.
In an exemplary embodiment, since depth data may be used to determine the depth value corresponding to each pixel in the depth image, when the first depth data and the second depth data exist, the first depth value and the second depth value corresponding to each pixel in the depth image may be determined from the first depth data and the second depth data respectively, and the fusion can then be performed on these depth values.
In step S520, the first depth value and the second depth value corresponding to each pixel point are calculated according to the weight policy, and the high-quality depth data corresponding to the original depth data is determined according to the calculation result.
In an exemplary embodiment, after the first depth value and the second depth value are obtained, they may be calculated according to the weight policy, and the depth value corresponding to each pixel point in the depth image may be determined according to the obtained calculation result. Finally, the high-quality depth data corresponding to the depth image is determined from the depth value determined for each pixel point.
In an exemplary embodiment, the weight policy may include a first weight and a second weight, where the values of the first weight and the second weight may be set according to the requirements of depth images in different scenes, or may be set separately for each pixel point; the specific setting manner of the first weight and the second weight is not limited in this disclosure. At this time, calculating the first depth value and the second depth value corresponding to each pixel point according to the weight policy, and determining the high-quality depth data corresponding to the original depth data according to the calculation result, as shown in fig. 6, may include the following steps S610 and S620:
in step S610, a weighted average of the first depth value and the second depth value corresponding to each pixel is calculated by taking the first weight as the weight of the first depth value and the second weight as the weight of the second depth value.
In step S620, each weighted average is used as a high-quality depth value of the corresponding pixel, and high-quality depth data corresponding to the original depth data is generated according to the high-quality depth value.
In an exemplary embodiment, the first weight and the second weight may be used as the weights of the first depth value and the second depth value, respectively, a weighted average of the first depth data and the second depth data may be calculated, and a result of the calculation of the weighted average may be used as a high quality depth value of the corresponding pixel point, and then the corresponding high quality depth data may be generated according to the high quality depth value. The above calculation process can be expressed by the following formula (1):
D_i = (w_i^(1) · D_i^(1) + w_i^(2) · D_i^(2)) / (w_i^(1) + w_i^(2))    (1)
wherein D_i is the high-quality depth value corresponding to the ith pixel in the high-quality depth data; D_i^(1) is the first depth value corresponding to the ith pixel in the first depth data, and w_i^(1) is the first weight corresponding to the first depth data at the ith pixel location; D_i^(2) is the second depth value corresponding to the ith pixel in the second depth data, and w_i^(2) is the second weight corresponding to the second depth data at the ith pixel location.
When preset weights are adopted for fusion, the first depth value and the second depth value, together with the first weight and the second weight, may also be combined in other ways, so that the high-quality depth value corresponding to each pixel point in the depth image can still be obtained.
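An illustrative sketch of formula (1) with per-pixel weight maps follows; the weight values themselves are application-dependent and not fixed by the disclosure:

```python
import numpy as np

def fuse_depth(first: np.ndarray, second: np.ndarray,
               w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Per-pixel weighted average of the first and second depth values."""
    return (w1 * first + w2 * second) / (w1 + w2)
```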
In an exemplary embodiment, performing the quality recovery processing on the original depth data to obtain the high-quality depth data corresponding to the original depth data may also include, as shown in fig. 7, the following steps S710 and S720:
in step S710, a data cleansing process is performed on the original depth data to obtain third intermediate data.
In an exemplary embodiment, after the original depth data is obtained, a data cleaning process may be performed on the original depth data first, to remove noise data or error data existing in the original depth data, and obtain third intermediate data. By cleaning the data, third intermediate data free of noise data or erroneous data can be obtained.
In step S720, a super-resolution reconstruction process is performed on the third intermediate data to obtain high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, after the third intermediate data is obtained, a super-resolution reconstruction process may be performed on the third intermediate data to obtain high-quality depth data corresponding to the original depth data. By performing the super-resolution reconstruction process on the third intermediate data, super-resolution reconstruction can be performed on the basis of the data without noise data or error data, and the quality of the depth data is effectively improved.
In an exemplary embodiment, performing the quality recovery processing on the original depth data to obtain the high-quality depth data corresponding to the original depth data may also include, as shown in fig. 8, the following steps S810 and S820:
in step S810, a super-resolution reconstruction process is performed on the original depth data to obtain fourth intermediate data.
In an exemplary embodiment, after the original depth data is obtained, a super-resolution reconstruction process may first be performed on the original depth data to restore details of the as-yet-unprocessed original depth data and increase its resolution, so as to obtain fourth intermediate data. By reconstructing the original depth data directly at super resolution, the reconstruction distortion that data loss would cause if the reconstruction were performed only after other processing can be avoided.
In step S820, a data cleaning process is performed on the fourth intermediate data to obtain high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, after the fourth intermediate data is obtained, a data cleaning process may be performed on the fourth intermediate data to obtain high-quality depth data corresponding to the original depth data. Because the fourth intermediate data is obtained by directly reconstructing the super-resolution of the original depth data, some noise data or error data may also exist in the fourth intermediate data, and therefore, the noise data or error data in the fourth intermediate data may be removed through a data cleaning process.
In step S320, the assignment of each pixel point in the depth image is determined based on the high-quality depth data, so as to generate a depth image corresponding to the original depth data.
In an exemplary embodiment, after the high-quality depth data is determined, the assignment of each pixel point in the depth image may be determined according to the high-quality depth data, and a corresponding depth image may be generated. Because the high-quality depth data is depth data obtained through the data cleaning process and the super-resolution reconstruction process, the depth image generated based on the high-quality depth data has higher resolution and higher accuracy.
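As a simple illustration of step S320, assigning pixel values from the high-quality depth data might be sketched as follows; the 16-bit millimetre encoding is an assumed output convention, not one mandated by the disclosure:

```python
import numpy as np

def depth_data_to_image(hq_depth_m: np.ndarray) -> np.ndarray:
    """Assign each pixel of the depth image from the high-quality depth values."""
    depth_mm = np.clip(hq_depth_m * 1000.0, 0.0, 65535.0)  # metres -> millimetres (assumed units)
    return depth_mm.astype(np.uint16)
```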
In an exemplary embodiment, in order to further improve the accuracy of the depth image, a plurality of original depth data with different phases may be obtained through continuous collection when the original depth data is collected; quality recovery processing is then performed on each of the plurality of original depth data, the resulting plurality of high-quality depth data are input into the depth image generating module, the assignment of each pixel point in the depth image is determined, and the corresponding depth image is obtained.
It should be noted that, after generating the depth image based on the high-quality depth data, other processing may be performed on the depth image, and the present disclosure does not particularly limit the subsequent processing procedure.
The following describes the technical solution of the embodiment of the present disclosure in detail with reference to a schematic diagram of a TOF module frame shown in fig. 10 by taking a TOF module as an example.
Referring to fig. 10, after the signal transmitting module 1010 transmits a detection signal, the signal is reflected back after reaching an object 1020; the signal receiving module 1030 receives the returned signal and processes it to obtain original depth data. The obtained original depth data is processed along two paths: in the first path, a data cleaning process is first performed by the central processing module 1040 to obtain first intermediate data, and a super-resolution reconstruction (SR) process is then performed by the SR module 1050 to obtain first depth data; in the second path, the super-resolution reconstruction process is first performed by the SR module 1050 to obtain second intermediate data, and the data cleaning process is then performed by the central processing module 1040 to obtain second depth data. The first depth data and the second depth data obtained from the two paths are input into the fusion module 1060 and fused to obtain high-quality depth data, which is finally input into the depth image generating module 1070 to generate a depth image according to the high-quality depth data.
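Chaining the hypothetical helpers sketched in the earlier sections gives a rough, assumption-laden picture of this two-path pipeline; the synthetic input and sensor resolution below are placeholders for illustration only:

```python
import numpy as np

# Synthetic stand-in for the output of the signal receiving module; a real module
# would provide measured samples at the sensor's true resolution.
samples = np.random.rand(180 * 240).astype(np.float32)

raw = rows_to_image(samples, width=240, height=180)                   # original depth data
hq = recover_quality(raw, clean_fn=clean_depth, sr_fn=super_resolve)  # two paths + fusion
depth_image = depth_data_to_image(hq)                                 # depth image generating module
```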
It is noted that the above-described figures are merely schematic illustrations of processes involved in a method according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Further, referring to fig. 11, in this exemplary embodiment, there is further provided a depth image generating apparatus 1100, including a quality recovery module 1110 and an image generating module 1120. Wherein:
the quality recovery module 1110 may be configured to obtain original depth data, and perform quality recovery processing on the original depth data to obtain high-quality depth data corresponding to the original depth data; the quality recovery processing at least comprises a data cleaning process and a super-resolution reconstruction process.
The image generation module 1120 may be configured to determine, based on the high-quality depth data, assignment of each pixel in the depth image, so as to generate a depth image corresponding to the original depth data.
In an exemplary embodiment, the quality recovery module 1110 may be configured to perform a data cleaning process on the original depth data, and perform a super-resolution reconstruction process on the first intermediate data obtained by the data cleaning process, so as to obtain first depth data corresponding to the original depth data; performing a super-resolution reconstruction process on the original depth data, and performing a data cleaning process on the reconstructed second intermediate data to obtain second depth data corresponding to the original depth data; and fusing the first depth data and the second depth data according to a preset fusion strategy to obtain high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, the quality recovery module 1110 may be configured to determine a first depth value corresponding to each pixel in the depth image according to the first depth data, and determine a second depth value corresponding to each pixel in the depth image according to the second depth data; and calculate the first depth value and the second depth value corresponding to each pixel point according to the weight policy, and determine the high-quality depth data corresponding to the original depth data according to the calculation result.
In an exemplary embodiment, the quality recovery module 1110 may be configured to calculate a weighted average of the first depth value and the second depth value corresponding to each pixel point, using the first weight as the weight of the first depth value and the second weight as the weight of the second depth value; and take each weighted average value as the high-quality depth value of the corresponding pixel point and generate high-quality depth data corresponding to the original depth data from these high-quality depth values.
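A minimal sketch of this weighted-average fusion, assuming the first and second depth data have already been brought to the same resolution; the weight values 0.7 and 0.3 are illustrative only.

```python
import numpy as np

W1, W2 = 0.7, 0.3  # illustrative first and second weights

def fuse_weighted(first_depth: np.ndarray, second_depth: np.ndarray) -> np.ndarray:
    # Weighted average of the first and second depth values at every pixel.
    return W1 * first_depth + W2 * second_depth

# Example: a pixel with first depth 1.00 m and second depth 1.20 m receives
# the high-quality depth 0.7 * 1.00 + 0.3 * 1.20 = 1.06 m.
print(fuse_weighted(np.array([[1.00]]), np.array([[1.20]])))  # [[1.06]]
```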
In an exemplary embodiment, the quality recovery module 1110 may be configured to perform a data cleaning process on the original depth data to obtain third intermediate data; and perform a super-resolution reconstruction process on the third intermediate data to obtain high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, the quality recovery module 1110 may be configured to perform a super-resolution reconstruction process on the original depth data to obtain fourth intermediate data; and perform a data cleaning process on the fourth intermediate data to obtain high-quality depth data corresponding to the original depth data.
In an exemplary embodiment, the data cleaning process includes at least one of a filtering process, an encoding process, and a format conversion process.
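To make these options concrete, here is a minimal sketch assuming the raw depth arrives as 16-bit integers in millimetres: a format conversion to floating-point metres followed by a filtering step; the encoding step is omitted because its details are not specified here, and the function clean_depth is a hypothetical helper.

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_depth(raw_mm: np.ndarray) -> np.ndarray:
    # Format conversion: 16-bit millimetre integers to 32-bit float metres.
    depth_m = raw_mm.astype(np.float32) / 1000.0
    # Filtering: a median filter removes speckle-like outliers from the depth data.
    return median_filter(depth_m, size=3)
```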
The specific details of each module of the above apparatus have already been described in the method section; details not disclosed here can be found in the corresponding method embodiments and are therefore not repeated.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer readable storage medium on which is stored a program product capable of implementing the method described above in this specification. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code; when the program product is run on a terminal device, the program code causes the terminal device to carry out the steps according to the various exemplary embodiments of the disclosure described in the "exemplary methods" section of this specification, for example any one or more of the steps of fig. 3 to fig. 8.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, the program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's computing device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. A method for generating a depth image, comprising:
acquiring original depth data, and performing quality recovery processing on the original depth data to acquire high-quality depth data corresponding to the original depth data; the quality recovery processing at least comprises a data cleaning process and a super-resolution reconstruction process;
determining assignment of each pixel point in the depth image based on the high-quality depth data so as to generate a depth image corresponding to the original depth data;
the quality recovery processing is performed on the original depth data to obtain high-quality depth data corresponding to the original depth data, including:
performing a data cleaning process on the original depth data, and performing a super-resolution reconstruction process on first intermediate data obtained by data cleaning to obtain first depth data corresponding to the original depth data;
performing a super-resolution reconstruction process on the original depth data, and performing a data cleaning process on the reconstructed second intermediate data to obtain second depth data corresponding to the original depth data;
and fusing the first depth data and the second depth data according to a preset fusion strategy to obtain high-quality depth data corresponding to the original depth data.
2. The method of claim 1, wherein the preset fusion policy comprises a weight policy;
fusing the first depth data and the second depth data according to a preset fusion strategy to obtain high-quality depth data corresponding to the original depth data, wherein the method comprises the following steps:
determining a first depth value corresponding to each pixel point in the depth image according to the first depth data, and determining a second depth value corresponding to each pixel point in the depth image according to the second depth data;
and calculating a first depth value and a second depth value corresponding to each pixel point according to a weight strategy, and determining high-quality depth data corresponding to the original depth data according to a calculation result.
3. The method of claim 2, wherein the weight policy comprises a first weight and a second weight;
the calculating the first depth value and the second depth value corresponding to each pixel point according to the weight strategy, and determining the high-quality depth data corresponding to the original depth data according to the calculation result, including:
taking the first weight as the weight of the first depth value, and taking the second weight as the weight of the second depth value, calculating a weighted average value of the first depth value and the second depth value corresponding to each pixel point;
and taking each weighted average value as a high-quality depth value of the corresponding pixel point, and generating high-quality depth data corresponding to the original depth data according to the high-quality depth value.
4. The method according to claim 1, wherein the performing quality recovery processing on the original depth data to obtain high quality depth data corresponding to the original depth data includes:
performing a data cleaning process on the original depth data to obtain third intermediate data;
and performing a super-resolution reconstruction process on the third intermediate data to obtain high-quality depth data corresponding to the original depth data.
5. The method according to claim 1, wherein the performing quality recovery processing on the original depth data to obtain high quality depth data corresponding to the original depth data includes:
performing a super-resolution reconstruction process on the original depth data to obtain fourth intermediate data;
and performing a data cleaning process on the fourth intermediate data to obtain high-quality depth data corresponding to the original depth data.
6. The method of any one of claims 1 to 5, wherein the data cleaning process comprises at least one of a filtering process, an encoding process, a format conversion process.
7. A depth image generating apparatus, comprising:
the quality recovery module is used for acquiring original depth data, and carrying out quality recovery processing on the original depth data so as to acquire high-quality depth data corresponding to the original depth data; the quality recovery processing at least comprises a data cleaning process and a super-resolution reconstruction process;
the image generation module is used for determining assignment of each pixel point in the depth image based on the high-quality depth data so as to generate a depth image corresponding to the original depth data;
the quality recovery processing is performed on the original depth data to obtain high-quality depth data corresponding to the original depth data, including:
performing a data cleaning process on the original depth data, and performing a super-resolution reconstruction process on first intermediate data obtained by data cleaning to obtain first depth data corresponding to the original depth data;
performing a super-resolution reconstruction process on the original depth data, and performing a data cleaning process on the reconstructed second intermediate data to obtain second depth data corresponding to the original depth data;
and fusing the first depth data and the second depth data according to a preset fusion strategy to obtain high-quality depth data corresponding to the original depth data.
8. A computer readable medium on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the depth image generation method according to any one of claims 1 to 6.
9. An electronic device, comprising:
a processor; and
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of generating a depth image as claimed in any one of claims 1 to 6.
CN202010350139.4A 2020-04-28 2020-04-28 Depth image generation method and device, computer readable medium and electronic equipment Active CN111626929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350139.4A CN111626929B (en) 2020-04-28 2020-04-28 Depth image generation method and device, computer readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111626929A CN111626929A (en) 2020-09-04
CN111626929B true CN111626929B (en) 2023-08-08

Family

ID=72270855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350139.4A Active CN111626929B (en) 2020-04-28 2020-04-28 Depth image generation method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111626929B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610079A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN108765548A (en) * 2018-04-25 2018-11-06 安徽大学 Three-dimensional scenic real-time reconstruction method based on depth camera
CN110349087A (en) * 2019-07-08 2019-10-18 华南理工大学 RGB-D image superior quality grid generation method based on adaptability convolution

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI553591B (en) * 2015-12-28 2016-10-11 緯創資通股份有限公司 Depth image processing method and depth image processing system

Also Published As

Publication number Publication date
CN111626929A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111476911B (en) Virtual image realization method, device, storage medium and terminal equipment
WO2020238741A1 (en) Image processing method, related device and computer storage medium
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN111462170B (en) Motion estimation method, motion estimation device, storage medium and electronic equipment
CN111161176B (en) Image processing method and device, storage medium and electronic equipment
CN111741303B (en) Deep video processing method and device, storage medium and electronic equipment
CN111598919B (en) Motion estimation method, motion estimation device, storage medium and electronic equipment
CN111766606A (en) Image processing method, device and equipment of TOF depth image and storage medium
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN114257920B (en) Audio playing method and system and electronic equipment
WO2022148319A1 (en) Video switching method and apparatus, storage medium, and device
CN112533115A (en) Method and device for improving tone quality of loudspeaker
CN111626931B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111626929B (en) Depth image generation method and device, computer readable medium and electronic equipment
CN116703995A (en) Video blurring processing method and device
CN115412678A (en) Exposure processing method and device and electronic equipment
CN114466238B (en) Frame demultiplexing method, electronic device and storage medium
CN111526321A (en) Voice communication method, voice communication device, storage medium and electronic equipment
CN111294905B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN114584913B (en) FOA signal and binaural signal acquisition method, sound field acquisition device and processing device
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111696037B (en) Depth image processing method and device, storage medium and electronic equipment
CN115696067B (en) Image processing method for terminal, terminal device and computer readable storage medium
CN115019803B (en) Audio processing method, electronic device, and storage medium
WO2023024036A1 (en) Method and apparatus for reconstructing three-dimensional model of person

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant