CN116051386B - Image processing method and related device - Google Patents

Image processing method and related device

Info

Publication number: CN116051386B
Authority: CN (China)
Prior art keywords: image, mask image, noise reduction, mask, noise
Legal status: Active (application granted)
Application number: CN202210602996.8A
Other languages: Chinese (zh)
Other versions: CN116051386A
Inventors: 王振兴 (Wang Zhenxing), 荀潇阳 (Xun Xiaoyang), 肖斌 (Xiao Bin)
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd; priority to CN202210602996.8A

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
            • G06T 5/70
            • G06T 5/73
            • G06T 5/77
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30196 Human being; Person
                • G06T 2207/30201 Face
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application provides an image processing method and related device, relating to the field of image technology. The method includes: displaying a first interface, where the first interface includes a first control; detecting a first operation on the first control; acquiring a noise image in response to the first operation; determining a first mask image, a second mask image and a third mask image from the noise image; performing first noise reduction processing on the noise image to obtain a first noise reduction image; and performing detail restoration on the first noise reduction image according to the first mask image, the second mask image and the third mask image to determine a target image. By using the mask images to restore detail in the noise-reduced image, the application balances noise reduction against sharpness.

Description

Image processing method and related device
Technical Field
The present application relates to the field of image technology, and in particular, to an image processing method and related devices.
Background
With the widespread use of electronic devices, shooting with them has become part of daily life. During shooting, the charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor in the camera inevitably heats up. As the chip temperature rises, the noise signal grows too strong and mottled spots of varying brightness appear in the picture, most visibly in dark regions; these are noise points. This thermal noise arises because a thermal current is superimposed on the normal signal current, making the signal of some pixels larger than the normally induced current; those pixels end up with a stronger signal intensity and appear brighter in the picture. After subsequent image processing, such pixels show up as noise points that are darker or brighter than their surroundings.
To remove these noise points, existing methods generally smooth the image, but smoothing reduces sharpness. How to balance noise against sharpness is therefore a major problem.
Disclosure of Invention
The application provides an image processing method and related device that balance noise reduction against sharpness by using mask images to restore detail in the noise-reduced image.
To this end, the application adopts the following technical solutions:
in a first aspect, there is provided an image processing method, the method comprising:
displaying a first interface, wherein the first interface comprises a first control;
detecting a first operation on the first control;
acquiring a noise image in response to the first operation;
determining a first mask image, a second mask image and a third mask image according to the noise image, wherein the first mask image is used for distinguishing texture information, the second mask image is used for distinguishing face information, and the third mask image is used for distinguishing semantic information;
performing first noise reduction processing on the noise image to obtain a first noise reduction image;
and performing detail restoration on the first noise reduction image according to the first mask image, the second mask image and the third mask image, to determine a target image.
An embodiment of the application provides an image processing method that analyzes different kinds of information in a noise image to generate a first mask image that distinguishes texture information, a second mask image that distinguishes face information, and a third mask image that distinguishes semantic information, and then restores detail in the noise-reduced image according to the three mask images, so that detail is preserved while noise is reduced and a sharper image is generated.
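For orientation, the steps of the first aspect map onto the following minimal Python sketch. This is an illustrative assumption, not the patent's implementation: NumPy/OpenCV are assumed, the NLM call is only one admissible choice for the first noise reduction processing, and the helper names texture_mask, face_mask, semantic_mask, combine_masks, smooth_mask and restore_detail are hypothetical functions sketched later in this description.

    import cv2

    def image_processing(noise_img):
        # Determine the three mask images from the noise image
        m1 = texture_mask(noise_img)    # distinguishes texture information
        m2 = face_mask(noise_img)       # distinguishes face information
        m3 = semantic_mask(noise_img)   # distinguishes semantic information

        # First noise reduction processing on the noise image
        first_denoised = cv2.fastNlMeansDenoisingColored(
            noise_img, None, 10, 10, 7, 21)

        # Detail restoration according to the three mask images
        mask = smooth_mask(combine_masks(m1, m2, m3))
        return restore_detail(noise_img, first_denoised, mask)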
In a possible implementation manner of the first aspect, determining the first mask image, the second mask image and the third mask image according to the noise image includes:
performing second noise reduction processing on the noise image to obtain a second noise reduction image;
and determining the first mask image, the second mask image and the third mask image according to the second noise reduction image.
In this implementation, the second noise reduction processing removes noise from the noise image, so that the noise does not interfere with the subsequent generation of the mask images.
In a possible implementation manner of the first aspect, determining the first mask image, the second mask image, and the third mask image according to the second noise reduction image includes:
performing error statistics on the second noise reduction image to determine the first mask image;
performing face detection on the second noise reduction image to determine the second mask image;
and performing semantic segmentation on the second noise reduction image to determine the third mask image.
In this implementation, by analyzing the texture, face and semantic aspects of the noise image, a first mask image distinguishing texture information, a second mask image distinguishing face information and a third mask image distinguishing semantic information are generated; noise reduction processing is then performed on the noise image to obtain a first noise reduction image, and detail is restored in the first noise reduction image by combining the three mask images, so that detail can be recovered while noise is reduced and a sharper target image is generated.
Moreover, because the detail restoration combines three mask images generated by analysis from different angles, the degree of detail restoration in different areas of the first noise reduction image can be controlled, producing a target image with a clear visual hierarchy and a better visual effect.
In a possible implementation manner of the first aspect, the error statistics are variance statistics or standard deviation statistics.
In a possible implementation manner of the first aspect, performing detail restoration on the first noise reduction image according to the first mask image, the second mask image and the third mask image, and determining the target image includes:
generating an initial composite mask image according to the first mask image, the second mask image and the third mask image;
performing third noise reduction processing on the initial composite mask image to obtain an intermediate composite mask image;
and carrying out detail restoration on the first noise reduction image according to the intermediate composite mask image, and determining the target image.
In this implementation, performing the third noise reduction processing on the initial composite mask image removes excessive gray-value differences between image blocks, so that gray values transition more smoothly between adjacent blocks after processing. The subsequent detail restoration then avoids the jarring contrast of one image block having very strong detail while the adjacent block has very weak detail.
In a possible implementation manner of the first aspect, performing detail restoration on the first noise reduction image according to the intermediate composite mask image, and determining the target image, includes:
determining a high-frequency image according to the noise image and the first noise reduction image, wherein the high-frequency image is used to represent high-frequency information in the noise image;
determining a detail restoration image according to the high-frequency image and the intermediate composite mask image;
and determining the target image according to the first noise reduction image and the detail restoration image.
In this implementation, the gray value of each pixel in the intermediate composite mask image corresponds to that pixel's importance, i.e., the degree of detail restoration it should receive. The detail restoration image obtained from the high-frequency image and the intermediate composite mask image is therefore an image in which different areas retain different degrees of detail; from the detail restoration image and the first noise reduction image, a target image that balances noise reduction and sharpness can then be determined.
Optionally, the first operation refers to an operation of clicking the camera application.
In a possible implementation manner of the first aspect, the first interface refers to a photographing interface of the electronic device, and the first control refers to a control for indicating photographing.
Optionally, the first operation refers to an operation of clicking the control for indicating photographing.
In a possible implementation manner of the first aspect, the first interface refers to a video capture interface of the electronic device, and the first control refers to a control for indicating video capture.
Optionally, the first operation refers to an operation of clicking the control for indicating video capture.
The first operation is exemplified above as a click operation; the first operation may also be a voice instruction, or another operation that instructs the electronic device to take a photograph or record a video. The foregoing is illustrative and does not limit the application in any way.
In a second aspect, there is provided an electronic device comprising means for performing the first aspect or any one of the methods of the first aspect.
In a third aspect, an electronic device is provided that includes one or more processors and memory;
the memory is coupled to the one or more processors and is used to store computer program code, the computer program code comprising computer instructions which the one or more processors invoke to cause the electronic device to perform the method of the first aspect or any implementation of the first aspect.
In a fourth aspect, there is provided a chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the method of the first aspect or any of the first aspects.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect or any one of the first aspects.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform the first aspect or any of the methods of the first aspect.
The embodiment of the application provides an image processing method and related device. A first mask image that distinguishes texture information is generated by computing variance or standard deviation statistics on the noise image; a second mask image that distinguishes face information is generated by detecting faces in the noise image; and a third mask image that distinguishes semantic information is generated by semantically segmenting the noise image. Noise reduction processing is then performed on the noise image to obtain a first noise reduction image. The three mask images are combined and smoothed into an intermediate composite mask image, which is multiplied by a high-frequency image derived from the noise image and the first noise reduction image to obtain a detail restoration image. Finally, the detail restoration image is used to restore detail in the first noise reduction image, so that detail is recovered while noise is reduced and the resulting target image balances noise against sharpness.
In addition, because the detail restoration combines three mask images generated by analysis from different angles, the degree of detail restoration in different areas of the first noise reduction image can be controlled and adjusted, producing a target image with a clear visual hierarchy and a better visual effect.
Drawings
Fig. 1 shows two frames of an image before and after noise reduction processing in the related art;
Fig. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of another image processing method provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of determining a first mask image according to an embodiment of the present application;
Fig. 6 is a schematic diagram of determining a second mask image according to an embodiment of the present application;
Fig. 7 is a schematic diagram of determining a third mask image according to an embodiment of the present application;
Fig. 8 is a schematic diagram of determining an initial composite mask image according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a hardware system suitable for use with the electronic device of the present application;
Fig. 10 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
Fig. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a chip provided by an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, unless otherwise indicated, "/" indicates "or"; for example, A/B may represent A or B. Herein, "and/or" merely describes an association between associated objects and indicates that three relationships are possible; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
First, some terms in the embodiments of the present application are explained for easy understanding by those skilled in the art.
1. The RGB (red, green, blue) color space, or RGB domain, refers to a color model related to the structure of the human visual system. Based on the structure of the human eye, all colors are treated as different combinations of red, green and blue.
2. Pixel values refer to a set of color components corresponding to each pixel in a color image in the RGB color space. For example, each pixel corresponds to a set of three primary color components, wherein the three primary color components are red component R, green component G, and blue component B, respectively.
3. A bayer image is an image output by an image sensor based on a bayer-format color filter array. Pixels of multiple colors in such an image are arranged in the bayer format, and each pixel corresponds to the channel signal of only one color. For example, since human vision is sensitive to green, it may be arranged that green pixels (pixels corresponding to the green channel signal) account for 50% of all pixels, while blue pixels (pixels corresponding to the blue channel signal) and red pixels (pixels corresponding to the red channel signal) each account for 25%. The minimal repeating unit of a bayer-format image is one red pixel, two green pixels, and one blue pixel arranged in a 2×2 pattern. Images arranged in bayer format can be considered to be in the RAW domain.
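As an aside for illustration (not part of the patent), the minimal repeating unit can be written as a small NumPy array; the RGGB ordering below is one common arrangement, an assumption, since the text only fixes the 1 red : 2 green : 1 blue proportions:

    import numpy as np

    # One 2x2 minimal repeating unit of a bayer color filter array; each
    # entry names the only color channel sampled at that pixel position.
    bayer_unit = np.array([["R", "G"],
                           ["G", "B"]])   # RGGB ordering assumed

    # Tiling the unit across the sensor yields 50% green pixels and
    # 25% each of red and blue pixels, as described above.
    sensor_cfa = np.tile(bayer_unit, (2, 2))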
The foregoing is a simplified description of the terminology involved in the embodiments of the present application, and is not described in detail below.
With the widespread use of electronic devices, shooting with them has become part of daily life. During shooting, the CCD chip or CMOS chip in the camera inevitably heats up. As the chip temperature rises, the noise signal grows too strong and mottled spots of varying brightness appear in the picture, most visibly in dark regions; these are noise points. This thermal noise arises because a thermal current is superimposed on the normal signal current, making the signal of some pixels larger than the normally induced current; those pixels end up with a stronger signal intensity and appear brighter in the picture. After subsequent image processing, such pixels show up as noise points that are darker or brighter than their surroundings.
At present, existing methods for reducing these noise points generally smooth the image with a traditional algorithm. However, noise usually lies in high-frequency regions and is easily confused with image detail, so such methods tend to lose detail when denoising, leaving the processed image blurred and less sharp.
Fig. 1 shows two frames of an image, before and after noise reduction processing using the related art.
As shown in (a) of fig. 1, the image before noise reduction shows jeans with the texture features of denim fabric. After noise reduction processing using the related art, as shown in (b) of fig. 1, the texture features on the jeans are lost; the fabric becomes very blurred, with no detail.
Therefore, how to balance noise and sharpness is a major problem.
In view of this, an embodiment of the present application provides an image processing method that analyzes different kinds of information in a noise image to generate a first mask image that distinguishes texture information, a second mask image that distinguishes face information, and a third mask image that distinguishes semantic information, and then restores detail in the noise-reduced image according to the three mask images, so that detail is preserved while noise is reduced and a sharper image is generated.
First, an application scenario of the embodiment of the present application is briefly described.
Fig. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application. The image processing method provided by the application can remove noise from an image while retaining its due sharpness.
In one example, the electronic device is illustrated as a mobile phone. Fig. 2 (a) shows a graphical user interface (GUI) of the electronic device. When the electronic device detects an operation in which the user clicks the icon of the camera application on this interface, the camera application may be started and another GUI, shown in (b) of fig. 2, is displayed; this GUI may be referred to as a preview interface.
The preview interface may include a viewfinder window 21. In the preview state, a preview image can be displayed in real time in the viewfinder window 21. The preview interface may also include a plurality of shooting mode options and a first control, i.e., a shooting key 11. The shooting mode options include, for example, a photographing mode and a video recording mode, and the shooting key 11 indicates whether the current shooting mode is the photographing mode, the video recording mode, or another mode. The camera application is generally in the photographing mode by default when opened.
For example, as shown in (b) of fig. 2, after the electronic apparatus starts the camera application, the electronic apparatus runs a program corresponding to the image processing method, and acquires and stores an image in response to a click operation of the photographing key 11 by the user.
It should be understood that, during shooting, noise reduction using the related art loses the sharpness of part of the content. With the image processing method of the present application, the image content can be divided into regions according to texture, semantic information, whether a face is included, and the like, generating mask images whose superposition intensity differs from position to position; combined with these mask images, high-frequency information is superimposed on the noise-reduced image, which enriches the detail of the image and balances noise reduction against sharpness.
It should be understood that the scenario shown in fig. 2 is an illustration of an application scenario, and does not limit the application scenario of the present application. The image processing method provided by the embodiment of the application can be applied to but not limited to the following scenes:
video call, video conference application, long and short video application, video live broadcast application, video net class application, intelligent fortune mirror application scene, shooting scene such as system camera video recording function record video, video monitoring and intelligent cat eye, etc.
The image processing method provided by the embodiment of the application is described in detail below with reference to the accompanying drawings.
Fig. 3 is a flowchart of an image processing method 1 according to an embodiment of the present application. As shown in fig. 3, the image processing method 1 includes: s10 to S60.
S10, the electronic equipment starts the camera and displays a first interface, wherein the first interface comprises a first control.
The first interface may be a preview interface, and the first control may be a shooting key on the preview interface.
S20, the electronic equipment detects a first operation of a first control on a first interface by a user.
The first operation may be a click operation of the shooting key on the preview interface by the user, and of course, may also be other operations, which is not limited in any way in the embodiment of the present application.
S30, responding to the first operation, and acquiring a noise image.
The noise image is an image containing noise points. It may be an image in the RGB domain or an image in the RAW domain; the embodiment of the present application does not limit this.
It should be understood that the noise image may be acquired by using a camera included in the electronic device itself or acquired from another device, and may be specifically set as needed, which is not limited by the embodiment of the present application.
When a camera included in the electronic device itself is used to capture the noise image, the camera may be any one of a main camera, a telephoto camera, or a wide-angle camera; the embodiment of the present application does not limit the type of camera.
The main camera has a large light intake, high resolution, and a moderate field of view, and is typically the default camera of the electronic device. When the electronic device responds to the user's operation of starting the "camera" application, it may start the main camera by default and display the image captured by the main camera on the preview interface. The telephoto camera has a longer focal length and a smaller field of view, and is suitable for shooting subjects far from the phone, i.e., distant objects. The wide-angle camera has a short focal length and a large field of view, and is suitable for shooting subjects close to the phone, i.e., nearby objects.
S40, determining a first mask image, a second mask image and a third mask image according to the noise image.
The first mask image is used for distinguishing texture information, the second mask image is used for distinguishing face information, and the third mask image is used for distinguishing semantic information.
It should be understood that a mask image occludes the selected image locally or globally. Each pixel in the mask image corresponds to a gray value ranging from 0 to 1, where 0 means the pixel is black and the selected image is completely occluded there, and 1 means the pixel is white and the selected image is not occluded there at all.
Here, the first mask image, the second mask image and the third mask image are the mask images, and the noise image is the selected image. Texture information is a visual feature reflecting homogeneity in an image; it expresses the arrangement of slowly or periodically varying surface structures on an object's surface, such as tree texture or cloth texture. Semantic information refers to the category of the image content, such as sand, blue sky, or sea water.
Based on this, by analyzing texture information in the noise image, a first mask image for distinguishing texture information from non-texture information, or distinguishing different texture information can be obtained; by analyzing the face information in the noise image, a second mask image for distinguishing the face from the non-face can be obtained; by analyzing the semantic information in the noise image, a third mask image for distinguishing different semantic information can be obtained.
S50, performing first noise reduction processing on the noise image to obtain a first noise reduction image.
The first noise reduction processing may be one of a Gaussian filtering algorithm, a median filtering algorithm, and a non-local means (NLM) filtering algorithm.
The Gaussian filtering algorithm is a linear smoothing filter: the value of each pixel is replaced by a weighted average of its own value and the values of the other pixels in its neighborhood.
The median filtering algorithm is a nonlinear smoothing filter: it replaces the value of a pixel with the median of the pixel values in its neighborhood.
The NLM algorithm sets two fixed-size windows, a large search window (D×D) and a small neighborhood window (d×d), slides the neighborhood window within the search window, and weights each candidate center pixel's influence on the current pixel according to the similarity between their neighborhoods.
Of course, the first noise reduction process may be other filtering algorithms or a combination of multiple filtering algorithms, which is not limited in any way by the embodiment of the present application.
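As a concrete illustration, any of these standard filters could implement the first noise reduction processing. The following Python/OpenCV calls are an assumption for illustration; the patent prescribes neither a library nor kernel sizes or filter strengths:

    import cv2

    noise_img = cv2.imread("noisy.jpg")  # hypothetical input image

    # Gaussian filtering: linear smoothing by a weighted neighborhood average
    gauss = cv2.GaussianBlur(noise_img, (5, 5), sigmaX=1.5)

    # Median filtering: nonlinear smoothing by the neighborhood median
    median = cv2.medianBlur(noise_img, 5)

    # Non-local means: weights pixels by the similarity of their d x d
    # neighborhoods, searched within a larger D x D search window
    first_denoised = cv2.fastNlMeansDenoisingColored(
        noise_img, None, 10, 10, 7, 21)   # h, hColor, d = 7, D = 21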
S60, performing detail restoration on the first noise reduction image according to the first mask image, the second mask image and the third mask image, and determining a target image.
Detail restoration refers to restoring detail information missing from an image; here, it refers to restoring the detail information missing from the first noise reduction image.
It should be understood that after the first noise reduction processing of the noise image, the high-frequency information that carries detail is coupled with the noise and is removed together with it by the filtering algorithm, so the obtained first noise reduction image has less noise but also less detail. Moreover, because the first mask image, the second mask image and the third mask image are analyzed from different angles, the degree of detail restoration in different areas of the first noise reduction image can be adjusted and controlled, enabling finer-grained detail restoration.
According to the image processing method provided by the embodiment of the application, by analyzing the texture, face and semantic aspects of the noise image, a first mask image distinguishing texture information, a second mask image distinguishing face information and a third mask image distinguishing semantic information are generated; noise reduction processing is then performed on the noise image to obtain a first noise reduction image, and detail is restored in the first noise reduction image by combining the three mask images, so that detail is recovered while noise is reduced and a sharper target image is generated.
Moreover, because the detail restoration combines three mask images generated by analysis from different angles, the degree of detail restoration in different areas of the first noise reduction image can be controlled, producing a target image with a clear visual hierarchy and a better visual effect.
Fig. 4 is a flowchart of another image processing method according to an embodiment of the present application.
As shown in fig. 4, the method 2 includes the following S110 to S210.
S110, acquiring a noise image.
The process of acquiring the noise image may refer to the description of S30, and will not be described herein.
S120, performing second noise reduction processing on the noise image to obtain a second noise reduction image.
It will be appreciated that the second noise reduction process is used to remove noise from the noise image in order to eliminate interference from the noise during subsequent acquisition of the mask image.
The second noise reduction processing may be one of a gaussian filtering algorithm, a median filtering algorithm and an NLM filtering algorithm, and of course, the second noise reduction processing may also be other filtering algorithms or a combination of a plurality of filtering algorithms, which is not limited in any way in the embodiment of the present application. The second noise reduction process may be the same as the first noise reduction process or may be different, for example, the first noise reduction process is an NLM filter algorithm, and the second noise reduction process is a median filter algorithm.
Here, when the second noise reduction process is the same as the first noise reduction process, the second noise reduction image is the same as the first noise reduction image.
Optionally, the noise image may also be preprocessed before the second noise reduction process is performed on the noise image.
The pre-treatment may comprise at least one of: scaling, cropping, rotation, etc.
Scaling refers to enlarging or reducing the image size, cropping refers to removing part of the image, and rotation refers to changing the image's orientation within its plane. Of course, the preprocessing may also include other operations; the embodiment of the present application does not limit this.
For example, the original noise image may first be scaled and then rotated, and the result used as the noise image on which the second noise reduction processing is performed.
S130, performing error statistics on the second noise reduction image to determine a first mask image.
Alternatively, the error statistics may be variance statistics or standard deviation statistics.
Variance statistics means sliding a w×h window over the second noise reduction image and computing the variance of the pixels in the local image block inside the window; standard deviation statistics computes the standard deviation of those pixels instead. w and h may be equal or different.
It should be understood that, in statistics, variance represents the degree of deviation from the mean and measures how much the data fluctuates. For the second noise reduction image, a small local variance means the pixel values in that local area differ little, and the image content there can be considered flat, without much texture information; a large local variance means the pixel values differ considerably, and the image there can be considered rich in texture information, with sharp detail. The standard deviation is the arithmetic square root of the variance and indicates essentially the same thing.
On this basis, the statistical variance or standard deviation can be partitioned by setting a threshold; different gray values are then mapped to the pixels falling in different threshold intervals, and the corresponding first mask image is generated from the gray value of each pixel.
By way of example, fig. 5 shows a schematic illustration of determining a first mask image.
Fig. 5 (a) shows a second noise reduction image obtained after the second noise reduction processing, provided in an embodiment of the present application. When variance statistics are performed on this image with a 3×3 window, the statistics shown in (b) of fig. 5 may be obtained: for example, the variance corresponding to the pixels in region P1 is 0.2, in region P2 is 0.8, in region P3 is 0.3, and in region P4 is 0.7.
A variance threshold of 0.5 can then be set. If the variance of the pixels in a region is greater than or equal to the threshold, the region is rich in texture information and should be preserved during the subsequent detail restoration; if the variance is below the threshold, the region lacks texture information and contributes little to the subsequent detail restoration.
Then, as shown in (c) of fig. 5, the gray values of the pixels in regions P1 and P3, whose variances are below the threshold 0.5, can all be mapped to 0, and the gray values of the pixels in regions P2 and P4, whose variances exceed the threshold, can all be mapped to 1. Since a pixel with gray value 1 is white and a pixel with gray value 0 is black, the image shown in (d) of fig. 5 is obtained; this is the first mask image corresponding to the noise image. The white areas may also be called texture areas and the black areas non-texture areas.
It should be understood that the foregoing is merely an example, and the size of the window, the set variance threshold, the mapped gray value size, and the mapping conditions may be adjusted and modified as needed, which is not limited in any way by the embodiment of the present application.
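A minimal Python sketch of this variance-statistics approach follows (NumPy/OpenCV assumed; the 3×3 window mirrors the fig. 5 example, while the threshold is a tunable assumption, since fig. 5's 0.2 to 0.8 values are illustrative and real variances depend on the intensity scale):

    import cv2
    import numpy as np

    def texture_mask(img, ksize=3, thr=100.0):
        """First mask image: gray value 1 in texture areas, 0 elsewhere."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Local variance over a ksize x ksize window: E[x^2] - (E[x])^2
        mean = cv2.blur(gray, (ksize, ksize))
        mean_of_sq = cv2.blur(gray * gray, (ksize, ksize))
        local_var = mean_of_sq - mean * mean
        # thr is on the 0..255 intensity scale and must be tuned per image;
        # fig. 5's threshold of 0.5 applies to its normalized example values.
        return (local_var >= thr).astype(np.float32)

Standard deviation statistics would threshold np.sqrt(local_var) instead, since the standard deviation is the arithmetic square root of the variance.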
And S140, performing face detection on the second noise reduction image, and determining a second mask image.
Face detection means finding the position of any face in the second noise reduction image. For example, a face detection algorithm can locate the face positions in the image and return one or more rectangular boxes containing faces. Of course, when there is no face, no rectangular box containing a face is obtained.
It will be appreciated that face detection distinguishes the face areas of the second noise reduction image from the non-face areas, so that the face areas and the non-face areas can subsequently receive different degrees of detail restoration. The area inside a face box is a face area, and the area outside the face boxes is a non-face area.
Based on the above, the second noise reduction image may be divided according to the determined face frame position, and then different gray values are mapped to the divided face region and the non-face region, so that a corresponding second mask image may be generated according to the gray value corresponding to each pixel.
By way of example, fig. 6 shows a schematic diagram of determining a second mask image.
Fig. 6 (a) shows a second noise reduction image obtained after the second noise reduction processing, provided in an embodiment of the present application. When face detection is performed on this image with a face detection algorithm, the detection result shown in (b) of fig. 6 may be obtained: for example, R is the face box, region P5 is the face area framed by the face box, and the remaining region (P6) is the non-face area.
As shown in (c) of fig. 6, the gray value of the pixels in region P5 can be mapped to 0.5 and the gray value of the pixels in region P6 to 0. Since a pixel with gray value 0.5 is gray and a pixel with gray value 0 is black, the image shown in (d) of fig. 6 is obtained, i.e., the second mask image corresponding to the second noise reduction image. The black areas may also be called non-face areas and the gray areas face areas.
It should be understood that the foregoing is merely an example, and the mapped gray value size may be adjusted and modified as needed, which is not limited in any way by the embodiment of the present application.
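A minimal sketch of this step follows, using OpenCV's stock Haar-cascade detector as a stand-in, since the patent does not name a specific face detection algorithm; the gray values 0.5 and 0 follow the fig. 6 example:

    import cv2
    import numpy as np

    def face_mask(img):
        """Second mask image: 0.5 inside detected face boxes, 0 elsewhere."""
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        mask = np.zeros(gray.shape, dtype=np.float32)   # non-face areas -> 0
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            mask[y:y + h, x:x + w] = 0.5                # face areas -> 0.5
        return mask   # stays all zeros when no face is detected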
S150, carrying out semantic segmentation on the second noise reduction image to determine a third mask image.
Semantic segmentation divides the second noise reduction image into regions belonging to different categories according to its content. Semantic categories may include: humans, animals, plants, buildings, furniture, vehicles, etc.
A semantic segmentation model may be used to semantically segment the second noise reduction image. The semantic segmentation model may be a fully convolutional network (FCN), SegNet, U-Net, DeepLab v1, DeepLab v2, DeepLab v3, DeepNet, E-Net, LinkNet, a mask region convolutional neural network (Mask R-CNN), a pyramid scene parsing network (PSPNet), RefineNet, a gated feedback refinement network (G-FRNet), a network evolved from these networks, or the like.
On this basis, different gray values can be mapped to the semantically segmented regions of different categories, and the corresponding third mask image can then be generated from the gray value of each pixel.
By way of example, fig. 7 shows a schematic diagram of determining a third mask image.
Fig. 7 (a) shows a second noise reduction image obtained after the second noise reduction processing, according to an embodiment of the present application. When semantic segmentation is performed on this image with the semantic segmentation model, the segmentation result shown in (b) of fig. 7 may be obtained: for example, region Q1 has the category "portrait", region Q2 has the category "tree", and region Q3 has the category "sky".
Thus, as shown in (c) of fig. 7, the gray value of the pixels in region Q1 can be mapped to 0.8, in region Q2 to 0.4, and in region Q3 to 0. Since pixels with gray values 0.8 and 0.4 correspond to grays of different shades and a pixel with gray value 0 corresponds to black, the image shown in (d) of fig. 7 is obtained, i.e., the third mask image corresponding to the second noise reduction image. The light gray areas may be called portrait areas, the dark gray areas tree areas, and the black areas sky areas.
It should be understood that the foregoing is merely an example, and the semantic category and the mapped gray value size may be set and modified as needed, which is not limited in any way by the embodiment of the present application.
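A minimal sketch of the gray-value mapping follows; the segmentation network itself is left as a placeholder, since the patent allows any of the models listed above, and the class-to-gray mapping reuses the fig. 7 example values:

    import numpy as np

    # Example mapping from semantic class ID to mask gray value (fig. 7 values)
    CLASS_TO_GRAY = {0: 0.8,   # "portrait"
                     1: 0.4,   # "tree"
                     2: 0.0}   # "sky"

    def segment(img):
        """Placeholder for any listed model (FCN, U-Net, DeepLab, ...);
        must return an integer class-ID map of shape img.shape[:2]."""
        raise NotImplementedError

    def semantic_mask(img):
        """Third mask image: one gray value per semantic category."""
        labels = segment(img)
        mask = np.zeros(labels.shape, dtype=np.float32)
        for cls, gray in CLASS_TO_GRAY.items():
            mask[labels == cls] = gray
        return mask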
S160, generating an initial composite mask image according to the first mask image, the second mask image and the third mask image.
Optionally, the gray values of the pixels at the same position in the first mask image, the second mask image and the third mask image may be added, and the initial composite mask image generated from the summed gray values.
Optionally, the first mask image, the second mask image and the third mask image may instead be assigned different weights. In that case, the gray values of the pixels at the same position in the three mask images are added together with their respective weights, and the corresponding initial composite mask image is generated from the weighted sums.
By way of example, FIG. 8 shows a schematic diagram of determining an initial composite mask image.
Fig. 8 (a) shows the first mask image determined by the present application, (b) shows the second mask image, and (c) shows the third mask image. The first mask image may be assigned weight W1, the second mask image weight W2, and the third mask image weight W3.
Combining the first mask image, the second mask image and the third mask image with their respective weights and adding them yields the initial composite mask image shown in (d) of fig. 8.
Assuming W1 = 0.1, W2 = 0.8 and W3 = 0.1, consider the pixel in the first row, fourth column of the three mask images: its gray value is 1 in the first mask image and 0 in both the second and third mask images, so the gray value of that pixel in the initial composite mask image is 1×0.1 + 0×0.8 + 0×0.1 = 0.1. The gray values of the other pixels are computed similarly and are not detailed here.
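The weighted combination reduces to a per-pixel weighted sum, sketched below with the illustrative weights from this example; passing w1 = w2 = w3 = 1 gives the unweighted addition described first:

    def combine_masks(m1, m2, m3, w1=0.1, w2=0.8, w3=0.1):
        """Initial composite mask image: per-pixel weighted sum of the
        first, second and third mask images (all of the same size)."""
        return w1 * m1 + w2 * m2 + w3 * m3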
S170, performing third noise reduction processing on the initial composite mask image to obtain a middle composite mask image.
The third noise reduction process may be one of a gaussian filter algorithm, a median filter algorithm, and an NLM filter algorithm. Of course, the third noise reduction process may also be other filtering algorithms or a combination of multiple filtering algorithms, which is not limited in any way by the embodiment of the present application.
The third noise reduction process may be the same as the first noise reduction process and the second noise reduction process, or may be different from the first noise reduction process and the second noise reduction process, which is not limited in any way in the embodiment of the present application. For example, the first noise reduction process is an NLM filter algorithm, the second noise reduction process is a median filter algorithm, and the third noise reduction process is a gaussian filter algorithm.
It should be appreciated that performing the third noise reduction processing on the initial composite mask image removes excessive gray-value differences between its image blocks, so that gray values transition more smoothly between adjacent blocks after processing. The subsequent detail restoration then avoids the jarring contrast of one image block having very strong detail while the adjacent block has very weak detail.
Alternatively, on the basis of the above, the initial composite mask image may be downsampled, then subjected to the third noise reduction process, and then upsampled to obtain the intermediate composite mask image. The intermediate composite mask image has the same size as the initial composite mask image.
Wherein downsampling is used to reduce the image size and upsampling is used to increase the image size.
It should be appreciated that performing the third noise reduction processing on the full-size initial composite mask image would require a larger blur kernel, or blur matrix, incurring a huge amount of computation and a larger performance loss. Reducing the size of the initial composite mask image before the third noise reduction processing reduces the computation, improves processing efficiency, and lowers the performance loss.
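A minimal sketch of this downsample, denoise, upsample sequence follows (the scale factor and Gaussian blur parameters are assumptions; the patent only requires that the intermediate composite mask image come back at the initial size):

    import cv2

    def smooth_mask(initial_mask, scale=0.25, ksize=(9, 9), sigma=3.0):
        """Third noise reduction processing on a shrunken copy of the
        initial composite mask image, then upsampling back to full size."""
        h, w = initial_mask.shape[:2]
        small = cv2.resize(initial_mask, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)
        small = cv2.GaussianBlur(small, ksize, sigma)   # smaller blur kernel
        return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)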
And S190, determining a high-frequency image according to the noise image and the first noise reduction image.
The noise image is an image containing noise points and the first noise reduction image is the image after noise reduction processing; therefore, subtracting the pixel values of the two pixels at the same position in the noise image and the first noise reduction image yields the high-frequency information by which they differ, and the high-frequency image can then be generated from the high-frequency information at each pixel position.
It will be appreciated that high-frequency information corresponds to the parts of an image that change sharply, including edges, noise, and fine detail. The filtering process removes the high-frequency information from the image, so subtracting the filtered image from the original image recovers that high-frequency information.
S200, determining a detail restoration image according to the high-frequency image and the intermediate composite mask image.
The pixel values of the high-frequency image can be multiplied by the gray values at the same positions in the intermediate composite mask image, and the products used as the pixel values of the detail restoration image.
It should be understood that the gray value of each pixel in the intermediate composite mask image corresponds to that pixel's importance, i.e., the degree of detail restoration it should receive. For example, pixels belonging to a face area are more important than pixels outside it, and more detail of the face should be preserved in subsequent processing. Multiplying the pixel values of the face-area pixels in the high-frequency image by the gray values at the same positions in the intermediate composite mask image yields precisely the face detail to be retained. The resulting detail restoration image is therefore an image in which different areas retain different degrees of detail.
S210, determining a target image according to the first noise reduction image and the detail restoration image.
The pixel values at the same position in the first noise reduction image and the detail restoration image can be added, and the sums used as the pixel values of the target image.
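S190 to S210 reduce to three array operations, sketched below; the final clip to the valid pixel range is an implementation assumption:

    import numpy as np

    def restore_detail(noise_img, first_denoised, inter_mask):
        """Target image from the noise image, the first noise reduction
        image and the intermediate composite mask image (S190 to S210)."""
        noise_f = noise_img.astype(np.float32)
        den_f = first_denoised.astype(np.float32)
        high_freq = noise_f - den_f              # S190: high-frequency image
        if inter_mask.ndim == 2 and noise_f.ndim == 3:
            inter_mask = inter_mask[..., None]   # broadcast over color channels
        detail = high_freq * inter_mask          # S200: detail restoration image
        target = den_f + detail                  # S210: target image
        return np.clip(target, 0, 255).astype(np.uint8)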
In the embodiment of the application, a first mask image that distinguishes texture information is generated by computing variance or standard deviation statistics on the noise image; a second mask image that distinguishes face information is generated by detecting faces in the noise image; and a third mask image that distinguishes semantic information is generated by semantically segmenting the noise image. Noise reduction processing is then performed on the noise image to obtain a first noise reduction image. The three mask images are combined and smoothed into an intermediate composite mask image, which is multiplied by a high-frequency image derived from the noise image and the first noise reduction image to obtain a detail restoration image. Finally, the detail restoration image is used to restore detail in the first noise reduction image, so that detail is recovered while noise is reduced and the resulting target image balances noise against sharpness.
In addition, because the detail restoration combines three mask images generated by analysis from different angles, the degree of detail restoration in different areas of the first noise reduction image can be controlled, producing a target image with a clear visual hierarchy and a better visual effect.
It should be understood that the above description is intended to help those skilled in the art understand the embodiments of the present application, not to limit them to the specific values or scenarios illustrated. Those skilled in the art can clearly make various equivalent modifications or variations based on the foregoing description, and such modifications or variations also fall within the scope of the embodiments of the present application.
The image processing method and the related display interface and effect diagram provided by the embodiment of the application are described in detail above with reference to fig. 1 to 8; the electronic device, the device and the chip provided by the embodiment of the application will be described in detail below with reference to fig. 9 to 12. It should be understood that the electronic device, the apparatus and the chip in the embodiments of the present application may perform the various image processing methods in the foregoing embodiments of the present application, that is, the specific working processes of the various products below may refer to the corresponding processes in the foregoing method embodiments.
Fig. 9 shows a schematic structural diagram of an electronic device suitable for use in the present application. The electronic device 100 may be used to implement the methods described in the method embodiments described above.
The electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the specific type of the electronic device 100 is not limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 9 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the application, electronic device 100 may include more or fewer components than those shown in FIG. 9, or electronic device 100 may include a combination of some of the components shown in FIG. 9, or electronic device 100 may include sub-components of some of the components shown in FIG. 9. The components shown in fig. 9 may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and a neural-network processor (neural-network processing unit, NPU). The different processing units may be separate devices or may be integrated into one device.
The controller can generate an operation control signal according to the instruction operation code and the timing signal, and complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened and light is transmitted to the photosensitive element of the camera through the lens; the photosensitive element converts the optical signal into an electrical signal and transmits it to the ISP for processing, where it is converted into an image visible to the naked eye. The ISP can perform algorithm optimization on the noise, brightness, and color of the image, and can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture images or video. The shooting function can be triggered and started by an application program instruction, for example to capture an image of any scene. The camera may include an imaging lens, an optical filter, an image sensor, and the like. Light emitted or reflected by an object enters the imaging lens, passes through the optical filter, and finally converges on the image sensor. The imaging lens is mainly used for converging and imaging the light emitted or reflected by all objects within the shooting view angle (also called the scene to be shot, the target scene, or the scene image the user expects to shoot); the optical filter is mainly used for filtering out redundant light waves (for example, light waves other than visible light, such as infrared light); and the image sensor is mainly used for performing photoelectric conversion on the received optical signal, converting it into an electrical signal, and inputting the electrical signal into the processor 110 for subsequent processing. The cameras 193 may be located on the front of the electronic device 100 or on the back of the electronic device 100, and the specific number and arrangement of the cameras may be set according to requirements, which is not limited in the present application.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, and MPEG4.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS). The processor 110 performs the various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
The internal memory 121 may also store software codes of the image processing method provided in the embodiment of the present application, and when the processor 110 runs the software codes, the process steps of the image processing method are executed, so as to obtain a target image with details repaired.
The internal memory 121 may also store photographed images.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music are stored in an external memory card.
Of course, the software code of the image processing method provided in the embodiment of the present application may also be stored in an external memory, and the processor 110 may execute the software code through the external memory interface 120 to execute the flow steps of the image processing method, so as to obtain the target image with repaired details. The image captured by the electronic device 100 may also be stored in an external memory.
It should be understood that the user may specify whether an image is stored in the internal memory 121 or the external memory. For example, when the electronic device 100 is currently connected to the external memory and the electronic device 100 captures a frame of image, a prompt message may pop up asking the user whether to store the image in the external memory or the internal memory; of course, other designation manners are also possible, which is not limited in the embodiment of the present application. Alternatively, when the electronic device 100 detects that the available space of the internal memory 121 is less than a preset amount, it may automatically store the image in the external memory.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material; when a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the touch operation according to the pressure sensor 180A, and may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation strengths may correspond to different operation instructions. For example, when a touch operation whose strength is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose strength is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B can also be used for scenes such as navigation and motion sensing games.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D, and may set features such as automatic unlocking upon flipping open according to the detected open or closed state of a leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along the x-axis, the y-axis, and the z-axis), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the posture of the electronic device 100 as an input parameter for applications such as landscape/portrait switching and the pedometer.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The LED may be an infrared LED. The electronic device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When reflected light is detected, the electronic device 100 may determine that an object is present nearby; when no reflected light is detected, it may determine that no object is nearby. The electronic device 100 can use the proximity light sensor 180G to detect whether the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used for automatic unlocking and automatic screen locking in holster mode or pocket mode.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing an abnormal shutdown. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a touch panel. The touch sensor 180K may be disposed on the display screen 194, and together they form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human voice part. The bone conduction sensor 180M may also contact the human pulse to receive a blood-pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the voice part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood-pressure beat signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key and a volume key. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input signal and implement a function related to the key input signal.
The motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, which may be used to indicate the charging state and changes in battery level, or to indicate messages, missed calls, and notifications.
In an embodiment of the present application, the camera 193 may capture a noise image, and the processor 110 performs image processing on the noise image; the image processing may include noise reduction, determining a first mask image, a second mask image, and a third mask image, and the like, and a target image with repaired details is obtained through the processing. The processor 110 may then control the display screen 194 to present the processed target image.
The hardware system of the electronic device 100 is described in detail above; the software system of the electronic device 100 is described below. The software system may employ a layered architecture, an event-driven architecture, a microkernel architecture, a micro-service architecture, or a cloud architecture. Taking a layered architecture as an example, the embodiment of the present application exemplarily describes the software system of the electronic device 100.
As shown in fig. 10, the software system using the layered architecture is divided into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the software system may be divided into five layers, which are, from top to bottom, an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250.
The application layer 210 may include cameras, gallery, and may also include calendar, phone, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer 220 provides an application access interface and programming framework for the applications of the application layer 210.
For example, the application framework layer includes a camera access interface for providing a photographing service of a camera through camera management and a camera device.
Camera management in the application framework layer 220 is used to manage cameras. The camera management may obtain parameters of the camera, for example, determine an operating state of the camera, and the like.
The camera devices in the application framework layer 220 are used to provide a data access interface between the camera devices and camera management.
The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera hardware abstraction layer and abstraction layers for other hardware devices; the camera hardware abstraction layer may include a camera device 1, a camera device 2, and the like. The camera hardware abstraction layer may be connected to a camera algorithm library and may invoke algorithms in the camera algorithm library.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the driver layer may include a camera driver, a digital signal processor driver, and a graphics processor driver.
The hardware layer 250 may include sensors, image signal processors, digital signal processors, graphics processors, and other hardware devices. The sensors may include a sensor 1, a sensor 2, and the like, and may also include a depth sensor (time of flight, TOF), a multispectral sensor, and the like, which is not limited in any way here.
The workflow of the software system of the electronic device 100 is exemplarily described below in connection with a photographing scene.
When a user performs a click operation on the touch sensor 180K, the camera APP is awakened by the click operation and invokes each camera device of the camera hardware abstraction layer through the camera access interface. For example, the camera hardware abstraction layer may send an instruction for invoking a certain camera to the camera device driver; at the same time, the camera algorithm library starts to load the image processing method utilized by the embodiment of the present application.
A sensor of the hardware layer is then invoked; for example, the sensor 1 in a certain camera is called to acquire a noise image. The noise image is processed by the image signal processor and returned to the hardware abstraction layer, where the image processing method in the loaded camera algorithm library is used to determine the first mask image, the second mask image, the third mask image, the high-frequency image, the intermediate composite mask image, the detail restoration image, and the like, so as to generate the target image.
The obtained target image is sent back to the camera application for display and storage through the camera hardware abstraction layer and the camera access interface.
An embodiment of the device of the present application will be described in detail below with reference to fig. 11. It should be understood that the apparatus in the embodiments of the present application may perform the methods of the foregoing embodiments of the present application, that is, specific working procedures of the following various products may refer to corresponding procedures in the foregoing method embodiments.
Fig. 11 is a schematic structural diagram of an image processing apparatus 300 according to an embodiment of the present application. The image processing apparatus 300 includes an acquisition module 310 and a processing module 320.
The processing module 320 is configured to detect a first operation and, in response to the first operation, turn on the camera.
The acquisition module 310 is configured to acquire a noise image.
The processing module 320 is configured to determine a first mask image, a second mask image, and a third mask image according to the noise image; performing first noise reduction processing on the noise image to obtain a first noise reduction image; and performing detail restoration on the first noise reduction image according to the first mask image, the second mask image and the third mask image to determine a target image.
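Purely as an orientation aid, the flow implemented by the acquisition module 310 and the processing module 320 can be sketched in Python with NumPy and OpenCV. This is a hedged reading of the description, not the claimed implementation: the non-local-means denoiser, the Haar-cascade face detector, the stubbed segmentation step, and all parameter values are illustrative stand-ins, and the helpers texture_mask, composite_mask, and restore_details are sketched after the corresponding paragraphs below.

    import cv2
    import numpy as np

    def face_mask(img: np.ndarray) -> np.ndarray:
        """Face-region mask via OpenCV's stock Haar cascade (an illustrative
        stand-in for whatever face detection the patent intends)."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        mask = np.zeros(img.shape[:2], np.float32)
        for (x, y, w, h) in cascade.detectMultiScale(gray):
            mask[y:y + h, x:x + w] = 1.0  # mark detected face rectangles
        return mask

    def semantic_mask(img: np.ndarray) -> np.ndarray:
        """Stub: a semantic segmentation network would produce this; zeros here."""
        return np.zeros(img.shape[:2], np.float32)

    def process_noise_image(noise_img: np.ndarray) -> np.ndarray:
        """End-to-end sketch of the module 320 flow (assumed, not the claimed method)."""
        # First noise reduction processing -> first noise reduction image.
        first_denoised = cv2.fastNlMeansDenoisingColored(
            noise_img, None, 10, 10, 7, 21)

        m1 = texture_mask(noise_img)    # first mask image: texture information
        m2 = face_mask(noise_img)       # second mask image: face information
        m3 = semantic_mask(noise_img)   # third mask image: semantic information

        # Detail restoration on the first noise reduction image using the masks.
        mask = composite_mask(m1, m2, m3)
        return restore_details(noise_img, first_denoised, mask)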
Optionally, as an embodiment, the processing module 320 is further configured to:
performing second noise reduction processing on the noise image to obtain a second noise reduction image;
and determining a first mask image, a second mask image and a third mask image according to the second noise reduction image.
Optionally, as an embodiment, the processing module 320 is further configured to:
performing error statistics on the second noise reduction image to determine a first mask image;
performing face detection on the second noise reduction image to determine a second mask image;
and carrying out semantic segmentation on the second noise reduction image to determine a third mask image.
Alternatively, the error statistics are variance statistics or standard deviation statistics.
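As one hedged illustration of the error-statistics branch, a per-pixel local-variance map thresholded into a binary texture mask might look as follows; the window size and threshold are assumptions chosen for illustration, not values taken from the patent.

    import cv2
    import numpy as np

    def texture_mask(img: np.ndarray, win: int = 7, thresh: float = 50.0) -> np.ndarray:
        """First mask image via local variance statistics (illustrative sketch)."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        mean = cv2.boxFilter(gray, -1, (win, win))            # local mean E[x]
        mean_sq = cv2.boxFilter(gray * gray, -1, (win, win))  # local E[x^2]
        var = mean_sq - mean * mean                           # local variance
        return (var > thresh).astype(np.float32)  # 1 = textured region, 0 = flat

A standard-deviation variant would simply threshold np.sqrt(var) instead.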
Optionally, as an embodiment, the processing module 320 is further configured to:
generating an initial composite mask image according to the first mask image, the second mask image and the third mask image;
performing third noise reduction processing on the initial composite mask image to obtain an intermediate composite mask image;
and performing detail restoration on the first noise reduction image according to the intermediate composite mask image to determine the target image.
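One possible reading of these three steps, with the third noise reduction processing realized as a Gaussian smoothing of the combined mask; the equal weighting of the three masks and the blur kernel size are illustrative assumptions:

    import cv2
    import numpy as np

    def composite_mask(m1: np.ndarray, m2: np.ndarray, m3: np.ndarray,
                       blur: int = 15) -> np.ndarray:
        """Initial composite mask image -> intermediate composite mask image (sketch)."""
        initial = np.clip(m1 + m2 + m3, 0.0, 1.0)  # initial composite mask image
        # Smoothing suppresses isolated mask speckle and softens mask borders,
        # which is one plausible reading of the "third noise reduction".
        return cv2.GaussianBlur(initial, (blur, blur), 0)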
Optionally, as an embodiment, the processing module 320 is further configured to:
determining a high-frequency image according to the noise image and the first noise reduction image, wherein the high-frequency image is used for representing high-frequency information in the noise image;
determining a detail restoration image according to the high-frequency image and the intermediate composite mask image;
and determining a target image according to the first noise reduction image and the detail restoration image.
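The three determinations above suggest a natural additive sketch: estimate the high-frequency image as the difference between the noise image and the first noise reduction image, gate it with the intermediate composite mask image to obtain the detail restoration image, and add the result back to the first noise reduction image. The formula below is an interpretation of the text, not necessarily the patent's exact computation.

    import numpy as np

    def restore_details(noise_img: np.ndarray, first_denoised: np.ndarray,
                        mask: np.ndarray) -> np.ndarray:
        """Detail restoration from the intermediate composite mask image (sketch)."""
        noise_f = noise_img.astype(np.float32)
        den_f = first_denoised.astype(np.float32)

        high_freq = noise_f - den_f      # high-frequency image
        if mask.ndim == 2:               # broadcast a single-channel mask over BGR
            mask = mask[..., None]
        detail = high_freq * mask        # detail restoration image

        target = den_f + detail          # target image
        return np.clip(target, 0.0, 255.0).astype(np.uint8)

With the mask near 1 in detail-critical regions and near 0 in flat regions, detail is put back where sharpness matters while flat areas keep the full strength of the noise reduction.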
The image processing apparatus 300 is embodied in the form of a functional module. The term "module" herein may be implemented in software and/or hardware, and is not specifically limited thereto.
For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the functionality described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application also provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions; when the computer instructions run on the image processing apparatus 300, the image processing apparatus 300 is caused to perform the image processing method shown above.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium (e.g., a solid state disk (solid state disk, SSD)), or the like.
The embodiments of the present application also provide a computer program product comprising computer instructions which, when run on the image processing apparatus 300, enable the image processing apparatus 300 to perform the image processing method shown in the foregoing.
Fig. 12 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip shown in fig. 12 may be a general-purpose processor or a special-purpose processor. The chip includes a processor 110. Wherein the processor 110 is configured to support the image processing apparatus 300 to execute the technical solution described above.
Optionally, the chip further includes a transceiver 402, where the transceiver 402 is configured to be controlled by the processor 110 and is configured to support the image processing apparatus 300 to perform the foregoing technical solution.
Optionally, the chip shown in fig. 12 may further include: a storage medium 403.
It should be noted that the chip shown in fig. 12 may be implemented using the following circuits or devices: one or more field programmable gate arrays (field programmable gate array, FPGA), programmable logic devices (programmable logic device, PLD), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit or combination of circuits capable of performing the various functions described throughout this application.
The electronic device, the image processing apparatus 300, the computer storage medium, the computer program product, and the chip provided in the embodiments of the present application are used to execute the method provided above, so that the advantages achieved by the method provided above can be referred to the advantages corresponding to the method provided above, and will not be described herein.
It should be understood that the above description is only intended to help those skilled in the art better understand the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application. It will be apparent to those skilled in the art from the foregoing examples that various equivalent modifications or variations can be made; for example, certain steps in the various embodiments of the methods described above may be unnecessary, certain steps may be newly added, or any two or more of the above embodiments may be combined. Solutions obtained through such modification, variation, or combination are also within the scope of the embodiments of the present application.
It should also be understood that the foregoing description of embodiments of the present application focuses on highlighting differences between the various embodiments and that the same or similar elements not mentioned may be referred to each other and are not repeated herein for brevity.
It should be further understood that the sequence numbers of the above processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic of the processes, and should not be construed as limiting the implementation process of the embodiments of the present application.
It should be further understood that, in the embodiments of the present application, the "preset" and "predefined" may be implemented by pre-storing corresponding codes, tables, or other manners that may be used to indicate relevant information in a device (including, for example, an electronic device), and the present application is not limited to the specific implementation manner thereof.
It should also be understood that the manner, the case, the category, and the division of the embodiments in the embodiments of the present application are merely for convenience of description, should not be construed as a particular limitation, and the features in the various manners, the categories, the cases, and the embodiments may be combined without contradiction.
It is also to be understood that in the various embodiments of the application, where no special description or logic conflict exists, the terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
Finally, it should be noted that: the foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An image processing method, comprising:
displaying a first interface, wherein the first interface comprises a first control;
detecting a first operation of the first control;
acquiring a noise image in response to the first operation;
determining a first mask image, a second mask image and a third mask image according to the noise image, wherein the first mask image is used for distinguishing texture information, the second mask image is used for distinguishing face information, and the third mask image is used for distinguishing semantic information;
performing first noise reduction processing on the noise image to obtain a first noise reduction image;
and carrying out detail restoration on the first noise reduction image according to the first mask image, the second mask image and the third mask image, and determining a target image.
2. The image processing method according to claim 1, wherein determining a first mask image, a second mask image, and a third mask image from the noise image comprises:
performing second noise reduction processing on the noise image to obtain a second noise reduction image;
and determining the first mask image, the second mask image and the third mask image according to the second noise reduction image.
3. The image processing method according to claim 2, wherein determining a first mask image, a second mask image, and a third mask image from the second noise reduction image includes:
performing error statistics on the second noise reduction image to determine the first mask image;
performing face detection on the second noise reduction image, and determining the second mask image;
and carrying out semantic segmentation on the second noise reduction image, and determining the third mask image.
4. The image processing method according to claim 3, wherein the error statistics are variance statistics or standard deviation statistics.
5. The image processing method according to any one of claims 1 to 4, wherein performing detail restoration on the first noise reduction image according to the first mask image, the second mask image, and the third mask image and determining a target image includes:
generating an initial composite mask image according to the first mask image, the second mask image and the third mask image;
performing third noise reduction processing on the initial composite mask image to obtain an intermediate composite mask image;
and carrying out detail restoration on the first noise reduction image according to the intermediate composite mask image, and determining the target image.
6. The image processing method according to claim 5, wherein performing detail restoration on the first noise reduction image according to the intermediate composite mask image and determining the target image includes:
determining a high-frequency image according to the noise image and the first noise reduction image, wherein the high-frequency image is used for representing high-frequency information in the noise image;
determining a detail restoration image according to the high-frequency image and the intermediate composite mask image;
and determining the target image according to the first noise reduction image and the detail restoration image.
7. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program capable of running on the processor;
the processor configured to execute the image processing method according to any one of claims 1 to 6.
8. A chip, comprising: a processor for calling and running a computer program from a memory, so that a device on which the chip is mounted performs the image processing method according to any one of claims 1 to 6.
9. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the image processing method according to any one of claims 1 to 6.
CN202210602996.8A 2022-05-30 2022-05-30 Image processing method and related device Active CN116051386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210602996.8A CN116051386B (en) 2022-05-30 2022-05-30 Image processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210602996.8A CN116051386B (en) 2022-05-30 2022-05-30 Image processing method and related device

Publications (2)

Publication Number Publication Date
CN116051386A CN116051386A (en) 2023-05-02
CN116051386B true CN116051386B (en) 2023-10-20

Family

ID=86113869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210602996.8A Active CN116051386B (en) 2022-05-30 2022-05-30 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN116051386B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI596573B (en) * 2013-04-25 2017-08-21 財團法人工業技術研究院 Image processing device for reducing image noise and the method thereof
EP4032062A4 (en) * 2019-10-25 2022-12-14 Samsung Electronics Co., Ltd. Image processing method, apparatus, electronic device and computer readable storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109804619A (en) * 2016-10-14 2019-05-24 三菱电机株式会社 Image processing apparatus, image processing method and camera
CN111402146A (en) * 2020-02-21 2020-07-10 华为技术有限公司 Image processing method and image processing apparatus
CN113315884A (en) * 2020-02-26 2021-08-27 华为技术有限公司 Real-time video noise reduction method and device, terminal and storage medium
CN111292272A (en) * 2020-03-04 2020-06-16 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, image processing medium, and electronic device
WO2021179820A1 (en) * 2020-03-12 2021-09-16 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium and electronic device
CN113496470A (en) * 2020-04-02 2021-10-12 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN111192201A (en) * 2020-04-08 2020-05-22 腾讯科技(深圳)有限公司 Method and device for generating face image and training model thereof, and electronic equipment
CN114119376A (en) * 2020-08-25 2022-03-01 北京金山云网络技术有限公司 Image processing method and device, electronic equipment and storage medium
CN112508799A (en) * 2020-09-26 2021-03-16 中南林业科技大学 Image enhancement and noise reduction method and system based on softmax function
CN112884637A (en) * 2021-01-29 2021-06-01 北京市商汤科技开发有限公司 Special effect generation method, device, equipment and storage medium
CN113177881A (en) * 2021-04-28 2021-07-27 广州光锥元信息科技有限公司 Processing method and device for improving picture definition
CN114092364A (en) * 2021-08-12 2022-02-25 荣耀终端有限公司 Image processing method and related device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Context Encoders: Feature Learning by Inpainting; Deepak Pathak et al.; 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); pp. 2536-2544 *
Image inpainting for irregular holes using partial convolutions; Guilin Liu et al.; 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA); pp. 778-784 *
In-situ Repair Qualification by Applying Computational Metrology and Inspection (CMI) Technologies; Chen CY et al.; Photomask and Next-Generation Lithography Mask Technology XX; Vol. 8701; pp. 1-16 *
Nonlocal Transform-Domain Filter for Volumetric Data Denoising and Reconstruction; Maggioni M et al.; IEEE; Vol. 22, No. 1; pp. 119-133 *
Image adaptive inpainting method based on global and local similarity retrieval; Yan Ziyu; China Master's Theses Full-text Database, Information Science and Technology; Vol. 2015, No. 3; p. I138-2554 *
Research on speckle noise reduction for B-mode ultrasound based on speckle detection; Guo Qiang; China Excellent Master's and Doctoral Theses Full-text Database, Information Science and Technology; 2017, No. 02; I138-3290 *

Also Published As

Publication number Publication date
CN116051386A (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108305236B (en) Image enhancement processing method and device
CN114092364B (en) Image processing method and related device
CN113538273B (en) Image processing method and image processing apparatus
CN112884666B (en) Image processing method, device and computer storage medium
CN115272138B (en) Image processing method and related device
CN116048244A (en) Gaze point estimation method and related equipment
CN116055895B (en) Image processing method and device, chip system and storage medium
CN115767290B (en) Image processing method and electronic device
CN115633262B (en) Image processing method and electronic device
CN116051386B (en) Image processing method and related device
CN117132515A (en) Image processing method and electronic equipment
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN115550556B (en) Exposure intensity adjusting method and related device
CN116437222A (en) Image processing method and electronic equipment
CN114757866A (en) Definition detection method, device and computer storage medium
CN115526786B (en) Image processing method and related device
CN116245741B (en) Image processing method and related device
CN116668773B (en) Method for enhancing video image quality and electronic equipment
CN115460343B (en) Image processing method, device and storage medium
CN115546042B (en) Video processing method and related equipment thereof
CN117499779B (en) Image preview method, device and storage medium
CN116091572B (en) Method for acquiring image depth information, electronic equipment and storage medium
CN116260927A (en) Video processing method and related equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant