CN112714925A - Image processing method, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN112714925A
Authority
CN
China
Prior art keywords: image, phase, obtaining, pixel, confidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880097727.2A
Other languages
Chinese (zh)
Inventor
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN112714925A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Abstract

An image processing method includes: acquiring at least two phase images corresponding to a target image; filtering the phase image to obtain a filtered phase image; and performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.

Description

Image processing method, electronic device, and computer-readable storage medium

Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an electronic device, and a computer-readable storage medium.
Background
Depth images acquired through a camera module often suffer from poor quality, so the depth image needs to be enhanced. Traditional enhancement methods mainly include image denoising, image smoothing, image super-resolution, and the like. Denoising and smoothing are implemented by applying bilateral filtering or similar operations to the depth image to remove salt-and-pepper noise, so that the three-dimensional space points are evenly distributed. The point cloud data generated from the depth image is then smoothed and filtered to improve the point cloud quality and meet the quality requirement.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an electronic device, and a computer-readable storage medium.
An image processing method comprising:
acquiring at least two phase images corresponding to a target image;
filtering the phase image to obtain a filtered phase image; and
performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to:
acquiring at least two phase images corresponding to a target image;
filtering the phase image to obtain a filtered phase image; and
performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
A computer-readable storage medium having stored thereon a computer program that, when executed by a processor, causes the processor to:
acquiring at least two phase images corresponding to a target image;
filtering the phase image to obtain a filtered phase image; and
performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
With the image processing method, the electronic device, and the computer-readable storage medium of the embodiments of the application, when a target image needs to undergo depth processing, at least two phase images corresponding to the target image are first obtained as the phase images to be processed, the obtained phase images are then filtered to obtain filtered phase images, and finally depth calculation is performed on the filtered phase images to obtain the depth image corresponding to the target image. During the depth processing, the quality of the depth image is improved through guided filtering while the image is denoised and smoothed in the same stage, which improves the efficiency of the depth processing.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an embodiment of an application environment of an image processing method.
FIG. 2 is a flow diagram of a method of image processing in one embodiment.
FIG. 3 is a flow diagram of the steps for obtaining a confidence image in one embodiment.
FIG. 4 is a flow diagram of steps in a filtering process in one embodiment.
FIG. 5 is a flow diagram of steps performed in depth image processing in one embodiment.
FIG. 6 is a block diagram showing an example of the structure of an image processing apparatus.
FIG. 7 is a block diagram of an electronic device in an embodiment.
Fig. 8 is a block diagram showing a part of the structure of a cellular phone according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like, as used in the embodiments of the present application, may be used herein to describe various elements, but the elements are not limited by these terms. These terms are only used to distinguish one element from another.
Fig. 1 is a schematic diagram of an application environment of an image processing method in an embodiment. As shown in fig. 1, the application environment includes an electronic device 100. When the electronic device 100 needs to perform depth processing on a target image, it first obtains at least two phase images corresponding to the target image as the phase images to be processed, then filters the obtained phase images to obtain filtered phase images, and finally performs depth calculation on the filtered phase images to obtain a depth image corresponding to the target image. It is understood that the electronic device 100 may be a smartphone, a tablet computer, a personal digital assistant, or the like.
In one embodiment, an image processing method is provided and is exemplified by being applied to the electronic device 100 described above, and as shown in fig. 2, the method includes the following operations:
In operation 202, at least two phase images corresponding to the target image are acquired.
The target image refers to the initial image from which the depth image is generated.
The electronic device can be provided with cameras and obtain images through them. Cameras can be divided into types such as laser cameras and visible-light cameras according to the images they obtain: a laser camera captures the image formed by laser light irradiating an object, and a visible-light camera captures the image formed by visible light irradiating an object. The electronic device can be provided with several cameras, and their installation positions are not limited. For example, one camera may be installed on the front panel of the electronic device and two on the back panel; cameras may also be installed in an embedded manner inside the electronic device and then exposed by rotating or sliding. Specifically, a front camera and a rear camera can be mounted on the electronic device to acquire images from different viewing angles: the front camera acquires images from the front viewing angle of the electronic device, and the rear camera acquires images from the back viewing angle.
In this embodiment, when the electronic device acquires and generates the target image, it needs to perform corresponding image processing on the acquired initial image to obtain an image that meets the output requirement. Specifically, when the initial image is processed, phase images corresponding to the initial image are obtained first; since the obtained phase images still need further processing, they are called the phase images to be processed, and the initial image is the target image.
Further, acquiring at least two phase images corresponding to the target image includes: obtaining at least two phase images corresponding to the target image based on a time-of-flight (ToF) camera module.
Time of flight (ToF) means continuously emitting light pulses through an infrared emitter and receiving the reflected light pulses, and calculating the time difference between emission and reflection so as to form stereoscopic vision. At least two phase images corresponding to the target image are obtained through the ToF camera module, where the phase images include amplitude images, and the finally needed depth image is obtained by processing these phase images.
Further, after acquiring the at least two phase images corresponding to the target image, the method further includes: obtaining a corresponding confidence image based on the phase images. The confidence image is the reference image used when the phase images are filtered, namely the filter guide map.
When the phase image is filtered, a filter guide map is needed, and to ensure the accuracy of the filtering process, the corresponding filter guide map, namely the confidence image, is obtained from the acquired phase images. All the characteristic parameters and image information of the confidence image are derived from the phase images, including the pixel value corresponding to each pixel point in the confidence image, the saturation of the confidence image, and so on.
In operation 204, the phase image is filtered to obtain a filtered phase image.
The filtering process can denoise the image and effectively improve the quality of the finally generated image.
When a corresponding confidence image has been obtained from the acquired phase images, the phase images are filtered according to the confidence image. In practice, there are various filtering methods, such as bilateral filtering, joint bilateral filtering, median filtering, and guided filtering.
Guided filtering is an edge-preserving filtering algorithm. It requires a filter guide map, which can be either a separate image or the input image itself; when the guide map is the input image itself, guided filtering becomes an edge-preserving filtering operation. In addition, guided filtering can be used in noise reduction, detail smoothing, HDR compression, matting, defogging, and joint upsampling.
In this embodiment, the confidence image is used as the filter guide map to filter the phase images in a guided-filtering manner, so as to obtain denoised phase images.
In operation 206, depth calculation is performed on the filtered phase image to obtain a depth image corresponding to the target image.
When the filtering operation yields the filtered phase images, depth calculation is performed on them, and the depth image corresponding to the target image is then obtained from the depth calculation result. In the actual processing, at least two phase images are obtained first as the basis for processing the target image, and the corresponding image processing is performed on them so as to denoise and smooth the target image.
In this embodiment, when depth processing needs to be performed on the target image, at least two phase images corresponding to the target image are first obtained as the phase images to be processed, the obtained phase images are then filtered to obtain filtered phase images, and finally depth calculation is performed on the filtered phase images to obtain the depth image corresponding to the target image. During the depth processing, the quality of the depth image is improved through guided filtering while the image is denoised and smoothed in the same stage, which improves the efficiency of the depth processing.
In one embodiment, as shown in fig. 3, obtaining a corresponding confidence image based on the phase image includes:
In operation 302, the first pixel values corresponding to the same pixel point position in the phase images are acquired to obtain the confidence pixel value corresponding to that pixel position, where the confidence pixel value is the average of the first pixel values.
In operation 304, the confidence pixel values corresponding to the respective pixel positions are obtained, and the corresponding confidence image is generated based on the confidence pixel values corresponding to the respective pixel positions.
When the confidence image is obtained from the phase images, all the image information contained in the confidence image is determined by the phase images. Specifically, there are several phase images; the first pixel values corresponding to the same pixel position in the phase images are acquired, and the confidence pixel value at the corresponding position is then obtained from the acquired first pixel values, thereby yielding the corresponding confidence image.
When performing depth processing based on the ToF camera module, the number of phase images obtained is usually four. In this embodiment, assuming four phase images are obtained, the first pixel values corresponding to the same pixel point A in the four phase images are Q1, Q2, Q3, and Q4, which respectively represent the pixel values of the different phase images at the same pixel position, and the confidence pixel value at pixel point A is

(Q1 + Q2 + Q3 + Q4) / 4,

i.e., the confidence pixel value is the average of all the pixel values. In practice, the number of phase images need not be four; for N phase images, the confidence pixel value at pixel point A is

(1/N) · Σ Qi, summed over i = 1, …, N,

where N is the number of phase images, i denotes the i-th phase image, and Qi represents the pixel value of the i-th phase image at pixel point A.
The confidence pixel value at every pixel position is calculated in this way, giving the confidence pixel value corresponding to each pixel position in the confidence image, from which the confidence image is generated. In the confidence image, the pixel value at each pixel position is thus correlated with all the phase images. In this embodiment, the subsequent filtering operation is performed based on a confidence image derived from the phase images of the target image, so that the final depth-processed image is more accurate.
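As a concrete illustration, the per-pixel averaging described above can be written in a few lines. The following is a minimal numpy sketch; the function name confidence_image and the array layout are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def confidence_image(phase_images):
    """Confidence pixel value = mean of the N phase images at each pixel.

    phase_images: iterable of N equally sized 2-D arrays (one per phase).
    Returns a 2-D array holding (1/N) * sum_i Qi for every pixel position.
    """
    stack = np.stack(phase_images, axis=0).astype(np.float64)  # (N, H, W)
    return stack.mean(axis=0)                                  # (H, W)
```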
In one embodiment, the filtering of the phase image in operation 204 to obtain a filtered phase image includes: taking the confidence image as the guide map and filtering the phase image to obtain the filtered phase image.
Specifically, as shown in fig. 4, filtering the phase image to obtain a filtered phase image includes:
In operation 402, the original vectors corresponding to the phase images and the guide vector corresponding to the confidence image are acquired.
In operation 404, a corresponding relationship between the original vector and the guiding vector is established.
In operation 406, a filtered phase image is obtained according to the corresponding relationship.
Guided filtering is an image filtering technique in which the input image is filtered with the help of a guide map; the input image is the image being processed, so the final output image is similar to the input image overall, but its texture is similar to that of the guide map. There are two typical applications: edge-preserving smoothing and matting.
When the phase image is filtered, the confidence image is used as the guide map, and the phase image and the confidence image are processed together to obtain the filtered phase image corresponding to the phase image currently being filtered. Specifically, the original vector corresponding to the phase image and the guide vector corresponding to the confidence image are acquired, the correspondence between the original vector of the phase image and the guide vector of the confidence image is established according to the principle of guided filtering, and the filtered phase image is finally generated from the obtained correspondence.
In this embodiment, the actual processing based on guided filtering is as follows. The guide vector of the confidence image (the guide map) is denoted g, the original vector of the phase image (the input image) is denoted p, and the result vector of the filtered phase image (the output image) is denoted q. The goal of guided filtering is to make the output as close as possible to the original input while making its texture similar to that of the guide map.
For target 1: the phase image p and the filtered phase image q should be as similar as possible, which can be described by the formula min ‖q − p‖². The smaller the value of ‖q − p‖², the higher the similarity between the phase image p and the filtered phase image q.
For target 2: the texture of the filtered phase image q should be as similar as possible to that of the confidence image g, which can be described by the formula min ‖∇q − a·∇g‖², where ∇ denotes the image gradient. In this formula, if the filtered phase image is a single-channel image and the confidence image is a multi-channel image, then a is a vector; if q and g have the same number of channels, then a is a scalar or a diagonal matrix. Obviously, the smaller the value of a, the smoother the final output image; the value of a thus determines how similar the texture of the filtered phase image q is to the texture of the confidence image g. The formula in target 2 can also be transformed correspondingly to obtain q = a·g + b.
In this embodiment, the filtering actually denoises the phase image. With the phase image p being the original noisy image and n the noise to be filtered out, we have the formula q = p − n. The principle of guided filtering is to assume that q and g are identically distributed, i.e., that q and g have the linear relationship q = a·g + b described above; then n = p − (a·g + b), and this term is minimized to solve for a and b. The corresponding depth information is determined from the obtained result, and the filtered phase image is then obtained.
Further, obtaining a filtered phase image corresponding to the phase image according to the corresponding relationship includes:
and a, according to the corresponding relation, obtaining pixel values corresponding to the positions of the pixel points in the filtered phase image.
Operation b: generating the filtered phase image based on the pixel values corresponding to the respective pixel positions.
As can be seen from the above description, the established correspondences are q = p − n and q = a·g + b, from which n = p − (a·g + b) is obtained; minimizing this term gives the values of a and b, from which the corresponding filtered phase image is generated. In addition, an energy function may also be defined:

E(a_k, b_k) = Σ_i ( (a_k·g_i + b_k − p_i)² + ε·a_k² ),

where i traverses each pixel point, k denotes the k-th phase image, and ε is a regularization coefficient that penalizes large a_k; solving this energy function is the algorithmic process of guided filtering. In practice, filtering yields the pixel value at each pixel position, i.e., in this embodiment the established correspondences are q_i = p_i − n_i and q_i = a·g_i + b, where i traverses each pixel point. After the pixel values corresponding to the pixel positions are obtained, the corresponding filtered phase image is generated. Compared with other filtering methods, guided filtering avoids gradient reversal at image edges and suppresses noise better in flat regions.
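To make the above concrete, here is a minimal sketch of the classic box-filter solution of this energy function (He et al.'s guided filter). The function name, the use of scipy's uniform_filter as the box filter, and the default radius and eps values are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(p, g, radius=4, eps=1e-3):
    """Filter input p using guide g.

    Minimizes sum_i ((a_k*g_i + b_k - p_i)^2 + eps*a_k^2) in each local
    window in closed form, then averages the per-window coefficients so
    that the output q = a*g + b follows p overall but takes its texture
    from the guide g.
    """
    p = p.astype(np.float64)
    g = g.astype(np.float64)
    size = 2 * radius + 1

    mean_g = uniform_filter(g, size)
    mean_p = uniform_filter(p, size)
    corr_gp = uniform_filter(g * p, size)
    corr_gg = uniform_filter(g * g, size)

    var_g = corr_gg - mean_g * mean_g    # local variance of the guide
    cov_gp = corr_gp - mean_g * mean_p   # local covariance of guide and input

    a = cov_gp / (var_g + eps)           # closed-form minimizer per window
    b = mean_p - a * mean_g

    mean_a = uniform_filter(a, size)     # average coefficients over windows
    mean_b = uniform_filter(b, size)
    return mean_a * g + mean_b           # q_i = mean_a_i * g_i + mean_b_i
```

In the embodiment's terms, p would be one phase image and g the confidence image; the output q keeps the edges indicated by the guide while the noise n = p − q is suppressed.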
In an embodiment, as shown in fig. 5, performing the depth calculation on the filtered phase image in operation 206 to obtain the depth image corresponding to the target image includes:
in operation 502, a second pixel value corresponding to the same pixel point position in the filtered phase image is obtained.
In operation 504, depth information corresponding to the pixel position is obtained based on the second pixel value.
In operation 506, a depth image corresponding to the target image is generated according to the depth information corresponding to the positions of the pixel points.
When depth processing is performed on an image, the corresponding depth image needs to be obtained; therefore, after the filtered phase images are obtained, depth processing is performed on them to obtain the depth image of the target image. Specifically, the second pixel values corresponding to the same pixel position in the filtered phase images are acquired, the depth information corresponding to that pixel position is then obtained from the acquired second pixel values, and the depth information for every pixel position is obtained in the same manner; the depth image corresponding to the target image is then generated from the depth information of all pixel positions.
Further, in operation 504, based on the second pixel value, obtaining depth information corresponding to the pixel position includes:
Operation c: obtaining the corresponding phase angle based on the second pixel values, and obtaining the output frequency of the time-of-flight (ToF) camera module;
Operation d: obtaining the depth information corresponding to the pixel position according to the phase angle and the output frequency, wherein the depth information is proportional to the phase angle and inversely proportional to the output frequency.
Specifically, taking four phase images as an example, the phase angle obtained from the pixel values corresponding to the pixel position is

φ = arctan( (Q4 − Q2) / (Q1 − Q3) ),

where Q1, Q2, Q3, and Q4 are the second pixel values at the same position in the four phase images (here taken as the samples at phase offsets of 0°, 90°, 180°, and 270°, respectively, as is usual in four-phase ToF). When obtaining the depth information, the corresponding depth calculation formula is

d = c·φ / (4π·f),

where d is the depth information, c is the speed of light, and f is the output frequency of the ToF camera module. When the depth information corresponding to each pixel position has been calculated, the corresponding depth image is generated.
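The two formulas above combine into a short routine. The sketch below is our own illustration: the arctan pairing of Q1…Q4 assumes the 0°/90°/180°/270° sampling order mentioned above, and f_mod is a hypothetical name for the module's modulation (output) frequency:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def depth_from_phases(q1, q2, q3, q4, f_mod):
    """Four-phase ToF depth map.

    q1..q4: 2-D arrays of (already filtered) second pixel values.
    f_mod:  output (modulation) frequency of the ToF module in Hz.
    """
    phi = np.arctan2(q4 - q2, q1 - q3)       # wrapped phase angle
    phi = np.mod(phi, 2.0 * np.pi)           # map into [0, 2*pi)
    return C * phi / (4.0 * np.pi * f_mod)   # d = c*phi / (4*pi*f)
```

As the text states, the depth grows with the phase angle and shrinks with the output frequency; for example, at f_mod = 20 MHz the unambiguous range is c/(2·f_mod) ≈ 7.5 m.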
The image processing method improves the quality of the depth image and avoids post-processing at the point cloud stage. Meanwhile, image data can be traversed by array subscript within limited time complexity, which is far more efficient than traversing unordered point cloud data, so the operation efficiency is higher. In addition, in 3D scene reconstruction applications, the image processing method makes the pixel points of the depth image evenly distributed by improving the quality of the depth image, which avoids having to homogenize the point cloud data and improves the running efficiency of the application.
In 3D face reconstruction applications, improving the quality of the depth image reduces the influence of noise on non-rigid surface reconstruction, so the facial features are clearer.
It should be understood that, although the operations in the above-described flowcharts are sequentially shown as indicated by arrows, the operations are not necessarily performed sequentially as indicated by the arrows. The operations may be performed in other sequences without a strict order of limitation unless explicitly stated otherwise. Moreover, at least a portion of the operations in the various flowcharts described above may include multiple sub-operations or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of execution of the sub-operations or stages is not necessarily sequential, but may be performed in turn or alternately with other operations or at least a portion of the sub-operations or stages of other operations.
Fig. 6 is a block diagram showing a configuration of an image processing apparatus according to an embodiment, as shown in fig. 6, the apparatus including: an image acquisition module 602, a filtering processing module 604, and a depth processing module 606, wherein:
The image acquisition module 602 is configured to acquire at least two phase images corresponding to the target image.
The filtering processing module 604 is configured to filter the phase image to obtain a filtered phase image.
The depth processing module 606 is configured to perform depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
In an embodiment, the image acquisition module is further configured to obtain at least two phase images corresponding to the target image based on the time-of-flight (ToF) camera module, where the phase images include amplitude images.
In one embodiment, the image processing apparatus further includes a guide map acquisition module. The guide map acquisition module is configured to obtain a corresponding confidence image based on the phase images, so as to use the confidence image as the filter guide map.
In one embodiment, the guide map acquisition module includes a pixel value calculation module and a guide map generation module. The pixel value calculation module is configured to acquire the first pixel values corresponding to the same pixel position in the phase images to obtain the confidence pixel value corresponding to that pixel position; the guide map generation module is configured to obtain the confidence pixel values corresponding to the respective pixel positions and to generate the corresponding confidence image based on them, so as to use the confidence image as the filter guide map.
In an embodiment, the filtering processing module is further configured to filter the phase image using the confidence image as the filter guide map, so as to obtain a filtered phase image.
In one embodiment, the filtering processing module further includes a vector acquisition module, a relationship establishing module, and an image generation module. The vector acquisition module is configured to acquire the original vectors corresponding to the phase images and the guide vector corresponding to the confidence image; the relationship establishing module is configured to establish the correspondence between the original vectors and the guide vector; and the image generation module is configured to obtain the filtered phase image according to the correspondence.
In one embodiment, the image generation module further comprises a pixel value determining module and a generating module. The pixel value determining module is configured to obtain, according to the correspondence, the pixel values corresponding to the positions of the pixel points in the filtered phase image; the generating module is configured to generate the filtered phase image based on those pixel values.
In one embodiment, the depth processing module further comprises a pixel value acquisition module, a depth information acquisition module, and a depth image generation module. The pixel value acquisition module is configured to acquire the second pixel value corresponding to the same pixel position in the filtered phase image; the depth information acquisition module is configured to obtain the depth information corresponding to the pixel position based on the second pixel value; and the depth image generation module is configured to generate the depth image corresponding to the target image according to the depth information corresponding to the respective pixel positions.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the image processing apparatus may be implemented wholly or partially in software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in the electronic device or server, or stored in software form in a memory of the electronic device or server, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, as shown in fig. 7, a schematic diagram of an internal structure of an electronic device is provided. The electronic device includes a processor, a memory, a display, and a network interface connected by a system bus. Wherein, the processor is used for providing calculation and control capability and supporting the operation of the whole electronic equipment. The memory is used for storing data, programs, instruction codes and/or the like, and at least one computer program is stored on the memory, and the computer program can be executed by the processor to realize the image processing method suitable for the electronic device provided in the embodiment of the application. The Memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random-Access-Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by a processor for implementing an image processing method provided by various embodiments of the present application. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The display may be used to display information, e.g., to display various interfaces, etc. The network interface may be an ethernet card or a wireless network card, etc. for communicating with an external electronic device, such as a server.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the electronic devices or servers to which the subject application may be applied, and that a particular electronic device or server may include more or fewer components than those shown, or may combine certain components, or have a different arrangement of components.
The embodiment of the application also provides an electronic device. As shown in fig. 8, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for technical details not disclosed, please refer to the method part of the embodiments of the present application. The electronic device is exemplified here as a mobile phone:
Fig. 8 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application. Referring to fig. 8, the handset includes: radio frequency (RF) circuit 810, memory 820, input unit 830, display unit 840, sensor 850, audio circuit 860, wireless fidelity (WiFi) module 870, processor 880, and power supply 890. Those skilled in the art will appreciate that the handset configuration shown in fig. 8 is not limiting; the handset may include more or fewer components than shown, combine some components, or arrange the components differently.
The RF circuit 810 may be used for receiving and transmitting signals during information transmission and reception or during a call; it may receive downlink information from a base station and forward it to the processor 880 for processing, and may also transmit uplink data to the base station. Typically, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 820 may be used to store software programs and modules, and may also store an application installation package; the processor 880 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and an address book). Further, the memory 820 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone 800. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, may collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends these to the processor 880, and it can also receive and execute commands from the processor 880. The touch panel 831 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. Besides the touch panel 831, the input unit 830 may include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), and the like.
The display unit 840 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone, and may display at a corresponding resolution. The display unit 840 may include a display panel 841. In one embodiment, the display panel 841 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. In one embodiment, the touch panel 831 can overlay the display panel 841; when the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 based on the type of touch event. Although in fig. 8 the touch panel 831 and the display panel 841 are two separate components implementing the input and output functions of the mobile phone, in some embodiments they may be integrated to implement both functions.
The mobile phone 800 may also include at least one sensor 850, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel 841 according to the ambient light, and a proximity sensor, which turns off the display panel 841 and/or the backlight when the mobile phone is moved to the ear. The motion sensor can include an acceleration sensor, which can detect the magnitude of acceleration in each direction and, when the phone is static, the magnitude and direction of gravity; it can be used in applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait) and in vibration-recognition functions (such as a pedometer or tap detection). The mobile phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 860, speaker 861, and microphone 862 provide an audio interface between the user and the mobile phone. The audio circuit 860 can transmit the electrical signal converted from received audio data to the speaker 861, which converts it into a sound signal for output; conversely, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data. The audio data is then output to the processor 880 for processing, after which it may be transmitted to another mobile phone through the RF circuit 810 or output to the memory 820 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 870, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although fig. 8 shows the WiFi module 870, it is understood that it is not an essential component of the mobile phone 800 and may be omitted as needed.
The processor 880 is the control center of the mobile phone; it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing the software programs and/or modules stored in the memory 820 and calling the data stored in the memory 820, thereby monitoring the phone as a whole. The processor 880 may also perform application updates according to the application installation package. In one embodiment, the processor 880 may include one or more processing units. In one embodiment, the processor 880 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 880.
The cell phone 800 also includes a power supply 890 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 880 via a power management system that may be used to manage charging, discharging, and power consumption.
In one embodiment, the cell phone 800 may also include a camera, a bluetooth module, and the like.
In this embodiment, the processor 880 executes the steps of the image processing method by executing the software programs and modules stored on the memory 820 during use of the cell phone 800.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (18)

  1. A method of image processing, the method comprising:
    acquiring at least two phase images corresponding to a target image;
    filtering the phase image to obtain a filtered phase image; and
    performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
  2. The method of claim 1, wherein the obtaining at least two phase images corresponding to the target image comprises:
    the method comprises the steps of obtaining at least two phase images corresponding to a target image based on a time-of-flight (ToF) camera module, wherein the phase images comprise amplitude images.
  3. The method according to claim 1, wherein, after the obtaining of the at least two phase images corresponding to the target image, the method further comprises:
    obtaining a corresponding confidence image based on the phase image;
    the filtering the phase image to obtain a filtered phase image includes:
    taking the confidence image as a filter guide map, and filtering the phase image to obtain a filtered phase image.
  4. The method of claim 3, wherein the deriving a corresponding confidence image based on the phase image comprises:
    acquiring first pixel values corresponding to the same pixel point position in the phase image to obtain confidence pixel values corresponding to the pixel point position, wherein the confidence pixel values are the average values of the first pixel values; and
    obtaining the confidence pixel values respectively corresponding to the positions of the pixel points, and obtaining a corresponding confidence image based on the confidence pixel values corresponding to the positions of the pixel points.
  5. The method of claim 4, wherein the filtering the phase image by using the confidence image as a filter guide map to obtain a filtered phase image comprises:
    acquiring original vectors respectively corresponding to the phase images, and acquiring a guide vector corresponding to the confidence image;
    establishing a corresponding relation between the original vector and the guide vector; and
    obtaining a filtered phase image according to the corresponding relation.
  6. The method of claim 5, wherein obtaining the filtered phase image according to the correspondence comprises:
    according to the corresponding relation, obtaining pixel values corresponding to the positions of all pixel points in the filtered phase image; and
    generating the filtered phase image based on the pixel values respectively corresponding to the positions of the pixel points.
  7. The method according to any one of claims 1 to 6, wherein the performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image includes:
    obtaining a second pixel value corresponding to the same pixel point position in the filtered phase image;
    obtaining depth information corresponding to the pixel point position based on the second pixel value; and
    generating a depth image corresponding to the target image according to the depth information respectively corresponding to the positions of the pixel points.
  8. The method according to claim 7, wherein obtaining depth information corresponding to the pixel position based on the second pixel value comprises:
    obtaining a corresponding phase angle based on the second pixel value, and obtaining the output frequency of the time-of-flight (ToF) camera module;
    obtaining depth information corresponding to the pixel point position according to the phase angle and the output frequency, wherein the depth information is in direct proportion to the phase angle and in inverse proportion to the output frequency.
  9. An image processing apparatus, characterized in that the apparatus comprises:
    the image acquisition module is used for acquiring at least two phase images corresponding to the target image;
    the filtering processing module is used for filtering the phase image to obtain a filtered phase image;
    and the depth processing module is used for carrying out depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
  10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to:
    acquiring at least two phase images corresponding to a target image;
    filtering the phase image to obtain a filtered phase image; and
    performing depth calculation on the filtered phase image to obtain a depth image corresponding to the target image.
  11. The electronic device according to claim 10, wherein the computer program, when executed by the processor, further causes the processor to perform the following operations when performing the acquiring of the at least two phase images corresponding to the target image:
    the method comprises the steps of obtaining at least two phase images corresponding to a target image based on a time-of-flight (ToF) camera module, wherein the phase images comprise amplitude images.
  12. The electronic device according to claim 10, wherein the computer program, when executed by the processor, further causes the processor to perform the following operations after performing the acquiring of the at least two phase images corresponding to the target image:
    obtaining a corresponding confidence image based on the phase image;
    when the processor performs the filtering processing on the phase image to obtain a filtered phase image, the processor further performs the following operations:
    taking the confidence image as a filter guide map, and filtering the phase image to obtain a filtered phase image.
  13. The electronic device according to claim 12, wherein the computer program, when executed by the processor, further performs the following operations when performing the deriving of the corresponding confidence image from the phase image:
    acquiring first pixel values corresponding to the same pixel point position in the phase image to obtain confidence pixel values corresponding to the pixel point position, wherein the confidence pixel values are the average values of the first pixel values; and
    obtaining the confidence pixel values respectively corresponding to the positions of the pixel points, and obtaining a corresponding confidence image based on the confidence pixel values corresponding to the positions of the pixel points.
  14. The electronic device of claim 12, wherein the computer program, when executed by the processor, further performs the following when performing the filtering of the phase image using the confidence image as a filter guide map to obtain a filtered phase image:
    acquiring original vectors respectively corresponding to the phase images, and acquiring a guide vector corresponding to the confidence image;
    establishing a corresponding relation between the original vector and the guide vector; and
    obtaining a filtered phase image according to the corresponding relation.
  15. The electronic device according to claim 14, wherein the computer program, when executed by the processor, further performs the following operations when performing the obtaining of the filtered phase image according to the correspondence:
    according to the corresponding relation, obtaining pixel values corresponding to the positions of all pixel points in the filtered phase image; and
    generating the filtered phase image based on the pixel values respectively corresponding to the positions of the pixel points.
  16. The electronic device according to any of claims 10 to 15, wherein when the computer program is executed by the processor, the processor performs the following operations when performing the depth calculation on the filtered phase image to obtain a depth image corresponding to the target image:
    obtaining a second pixel value corresponding to the same pixel point position in the filtered phase image;
    obtaining depth information corresponding to the pixel point position based on the second pixel value; and
    generating a depth image corresponding to the target image according to the depth information respectively corresponding to the positions of the pixel points.
  17. The electronic device according to claim 16, wherein when the computer program is executed by the processor, the processor further performs the following operation when performing the obtaining of the depth information corresponding to the pixel position based on the second pixel value:
    obtaining a corresponding phase angle based on the second pixel value, and obtaining the output frequency of the time-of-flight (ToF) camera module;
    obtaining depth information corresponding to the pixel point position according to the phase angle and the output frequency, wherein the depth information is in direct proportion to the phase angle and in inverse proportion to the output frequency.
  18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 8.
CN201880097727.2A 2018-11-19 2018-11-19 Image processing method, electronic device, and computer-readable storage medium Pending CN112714925A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116260 WO2020102950A1 (en) 2018-11-19 2018-11-19 Image processing method, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112714925A true CN112714925A (en) 2021-04-27

Family

ID=70773094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880097727.2A Pending CN112714925A (en) 2018-11-19 2018-11-19 Image processing method, electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112714925A (en)
WO (1) WO2020102950A1 (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007526457A (en) * 2004-03-01 2007-09-13 イアティア イメージング プロプライアタリー リミティド Method and apparatus for generating image including depth information
KR20180021509A (en) * 2016-08-22 2018-03-05 삼성전자주식회사 Method and device for acquiring distance information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184531A (en) * 2010-05-07 2011-09-14 微软公司 Deep map confidence filtering
JP2014106909A (en) * 2012-11-29 2014-06-09 Jvc Kenwood Corp Image enlargement device, image enlargement method, and image enlargement program
CN106133795A (en) * 2014-01-17 2016-11-16 诺基亚技术有限公司 For 3D being rendered the method and apparatus that in application, the media content of geo-location carries out visualization
CN108648222A (en) * 2018-04-27 2018-10-12 华中科技大学 The method for improving and device of structure light depth data spatial resolution

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542739A (en) * 2021-07-15 2021-10-22 Oppo广东移动通信有限公司 Image encoding method and apparatus, image decoding method and apparatus, medium, and device
CN113542739B (en) * 2021-07-15 2023-10-20 Oppo广东移动通信有限公司 Image encoding method and device, image decoding method and device, medium and equipment

Also Published As

Publication number Publication date
WO2020102950A1 (en) 2020-05-28

Similar Documents

Publication Publication Date Title
CN107172364B (en) Image exposure compensation method and device and computer readable storage medium
EP3370204B1 (en) Method for detecting skin region and device for detecting skin region
US20200027226A1 (en) Electronic device and method for processing image
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
US10769464B2 (en) Facial recognition method and related product
CN108366207B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
CN108388849B (en) Method and device for adjusting display image of terminal, electronic equipment and storage medium
CN109002787B (en) Image processing method and device, storage medium and electronic equipment
CN108038825B (en) Image processing method and mobile terminal
CN107124556B (en) Focusing method, focusing device, computer readable storage medium and mobile terminal
CN109165606B (en) Vehicle information acquisition method and device and storage medium
US9589327B2 (en) Apparatus and method for noise reduction in depth images during object segmentation
CN109086761B (en) Image processing method and device, storage medium and electronic equipment
CN105989572B (en) Picture processing method and device
EP2791904A1 (en) Techniques for efficient stereo block matching for gesture recognition
EP3416130B1 (en) Method, device and nonvolatile computer-readable medium for image composition
US11238563B2 (en) Noise processing method and apparatus
US10706282B2 (en) Method and mobile terminal for processing image and storage medium
CN105513098B (en) Image processing method and device
CN106851050B (en) Motion detection method and device and mobile equipment
CN112714925A (en) Image processing method, electronic device, and computer-readable storage medium
CN111145119B (en) Image processing method and electronic equipment
CN111464745B (en) Image processing method and electronic equipment
CN109785226B (en) Image processing method and device and terminal equipment
CN109194943B (en) Image processing method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination