CN111340722A - Image processing method, processing device, terminal device and readable storage medium - Google Patents

Image processing method, processing device, terminal device and readable storage medium

Info

Publication number
CN111340722A
CN111340722A (application number CN202010105128.XA); granted publication CN111340722B
Authority
CN
China
Prior art keywords
image
sample
target object
blurred
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010105128.XA
Other languages
Chinese (zh)
Other versions
CN111340722B (en)
Inventor
金宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010105128.XA priority Critical patent/CN111340722B/en
Publication of CN111340722A publication Critical patent/CN111340722A/en
Application granted granted Critical
Publication of CN111340722B publication Critical patent/CN111340722B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, an image processing apparatus, a terminal device and a readable storage medium. The method comprises the following steps: acquiring a to-be-processed blurred image containing a target object and N sample sharp images each containing the same target object; for each sample sharp image, aligning the blurred image with the sample sharp image to obtain, for every image region forming the target object in the blurred image, the corresponding sample region in the sample sharp image; for each image region, determining fusion weights for the image region and its N corresponding sample regions based on the sharpness of the image region, and performing image fusion based on those fusion weights, wherein the sharpness of an image region is positively correlated with its fusion weight. Because no neural network model is used, the method avoids the long product development cycle that such a model entails, while still preventing distortion during blur correction.

Description

Image processing method, processing device, terminal device and readable storage medium
Technical Field
The present application belongs to the technical field of terminal devices, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background
When a user takes a picture with a terminal device (for example, a mobile phone), the picture may turn out blurred in several situations. For example, when the target object to be photographed is too close to or too far from the camera, the target object may be out of focus, so that it appears blurred in the picture; when an under-display camera is used or the picture is taken at night, the amount of light reaching the camera may be insufficient and the target object may likewise be blurred; and when the target object and the camera move relative to each other, the target object in the picture is blurred as well.
At present, the following blur correction method exists: a neural network model is trained with a large number of blurred sample images and sharp sample images, so that the trained model can perform blur correction.
Performing blur correction with a neural network model can, to a certain extent, prevent the corrected image from being distorted, but a large number of sample images are needed for training, which prolongs the product development cycle.
Therefore, how to avoid the long product development cycle caused by using a neural network model, while still avoiding image distortion during blur correction, is a problem that urgently needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium. The method avoids the long product development cycle caused by using a neural network model while preventing distortion during blur correction.
A first aspect of an embodiment of the present application provides an image processing method, including:
acquiring a to-be-processed blurred image containing a target object and N sample sharp images each containing the target object, wherein N is an integer and N is greater than or equal to 1;
for each sample sharp image, aligning the blurred image with the sample sharp image to obtain, for each image region forming the target object in the blurred image, the corresponding sample region located in the sample sharp image;
determining the sharpness of each image region;
for each image region, determining fusion weights for the image region and its N corresponding sample regions based on the sharpness of the image region, and performing image fusion on the image region and the N sample regions based on those fusion weights, wherein the sharpness of an image region is positively correlated with the fusion weight of that image region.
A second aspect of an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for acquiring a to-be-processed blurred image containing a target object and N sample sharp images each containing the target object, wherein N is an integer and N is greater than or equal to 1;
the alignment module is used for aligning, for each sample sharp image, the blurred image with the sample sharp image, so as to obtain, for each image region forming the target object in the blurred image, the corresponding sample region located in the sample sharp image;
the sharpness determining module is used for determining the sharpness of each image region;
and the fusion module is used for determining, for each image region, fusion weights for the image region and its N corresponding sample regions based on the sharpness of the image region, and performing image fusion on the image region and the N sample regions based on those fusion weights, wherein the sharpness of an image region is positively correlated with the fusion weight of that image region.
A third aspect of embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image processing method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the image processing method according to the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product comprising a computer program that, when executed by one or more processors, performs the steps of the image processing method according to the first aspect.
In view of the above, the present application provides an image processing method. First, a blurred image containing a target object is acquired (a blurred image is an image in which the target object's sharpness is low, for example below a certain threshold), together with N sample sharp images each containing the target object (a sample sharp image is a sample image in which the target object's sharpness is high, for example above a certain threshold); it should be clear to those skilled in the art that the blurred image and every sample sharp image contain the same target object (for example, all of them contain Xiao Ming's face). Second, for each sample sharp image, the blurred image is aligned with the sample sharp image to obtain, for every image region forming the target object in the blurred image, the corresponding sample region located in that sample sharp image; for example, if the image regions are pixels, this step determines which pixels in a sample sharp image correspond to the pixels forming the target object in the blurred image. Then, the sharpness of each image region forming the target object in the blurred image is determined. Finally, for each image region, fusion weights for the image region and its N corresponding sample regions are determined based on the sharpness of the image region, and the image region and the N sample regions are fused based on those weights, where the sharpness of an image region is positively correlated with its fusion weight. That is, for a given image region, if the region is relatively sharp, its own information is largely retained and little information from the sample sharp images is fused in; if the region is relatively blurred, less of its own information is retained and more information from the sample sharp images is fused in.
Therefore, based on the above analysis, the image processing method provided by the application does not use a neural network model when performing blur correction, which avoids the long product development cycle caused by adopting such a model.
In addition, the technical solution provided by the application adopts the following technical means: first, sample sharp images containing the same target object are used; second, the corresponding regions of the sample sharp images are fused to different degrees based on the sharpness of the image regions forming the target object in the blurred image. Because the target object to be corrected in the blurred image is the same as the target object in the sample sharp images, no extraneous image information is introduced; and because the fusion is driven by sharpness, the sharper an image region is, the more of the original blurred image's information is retained.
In conclusion, the technical solution provided by the application avoids the long product development cycle caused by adopting a neural network model while avoiding image distortion during blur correction.
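For orientation only, the overall flow described above can be sketched in Python as below. The helper callables (align_to_blurred, sharpness_map, sharpness_to_weight) are hypothetical placeholders for the steps detailed in the embodiments, and the even split of the remaining weight over the N sample regions is an illustrative assumption, not something prescribed by the application.

```python
import numpy as np

def blur_correct(blurred, sample_sharps, align_to_blurred, sharpness_map, sharpness_to_weight):
    """Sketch of the claimed flow; all three helper callables are hypothetical placeholders."""
    # Align every sample sharp image so its pixels correspond to the blurred image (alignment step).
    aligned = [align_to_blurred(blurred, s) for s in sample_sharps]

    # Per-pixel sharpness of the blurred image, mapped to a fusion weight in [0, 1].
    # For simplicity the map covers the whole image; pixels outside the target object are assumed to get weight 1.
    w = sharpness_to_weight(sharpness_map(blurred))
    if blurred.ndim == 3:                # broadcast a single-channel weight map over colour channels
        w = w[..., None]

    # Sharper pixels keep more of the blurred image; the leftover weight is split over the N samples.
    n = len(aligned)
    fused = w * blurred.astype(np.float64)
    for a in aligned:
        fused += (1.0 - w) / n * a.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```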
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image region and a sample region according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another image processing method according to the second embodiment of the present application;
FIG. 4 is a diagram illustrating the relationship between the absolute value of the gradient value and the degree of sharpness provided in the second embodiment of the present application;
FIG. 5 is a schematic diagram of blur correction for pixel points A-D based on a weight map according to the second embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application;
Detailed Description
In the following description, for purposes of explanation and not limitation, specific technical details are set forth, such as particular examples, in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiment of the application can be applied to terminal equipment, and the terminal equipment includes but is not limited to: smart phones, tablet computers, notebooks, desktop computers, cloud servers, and the like.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Example one
In the following, a description is given of an image processing method provided in an embodiment of the present application, where the image processing method is applied to a terminal device (for example, a smart phone), with reference to fig. 1, and the method includes the following steps:
in step S101, acquiring a to-be-processed blurred image including a target object and N sample sharp images each including the target object, wherein N is an integer and is not less than 1;
the blurred image described in the present application is: an image containing a target object with a sharpness below a first threshold. The clear image of the sample is as follows: an image containing a target object with a sharpness above a second threshold. Those skilled in the art will readily appreciate that the second threshold is greater than or equal to the first threshold.
The step S101 defines that each of the sample sharp image and the blurred image includes a target object, that is, the blurred image and the sample sharp image should be consistent with the target object included in the sample sharp image (for example, both include a small sharp face image, which is explained further that the blurred image and each of the sample sharp images include the same portion of the same subject), but it should be understood by those skilled in the art that this does not mean that the step S101 limits the blurred image and each of the sample sharp images to be identical with each other, for example, in the embodiment of the present application, the blurred image may be an image including a small sharp half-sided face with a slight inclination angle, and the sample sharp image may be a sharp image including a small sharp front face that is photographed in advance.
In the embodiment of the present application, before executing the step S101, the following step a may be executed:
Step A, acquiring an image X, determining whether the photographed subject in the image X belongs to a preset object set, and determining the sharpness of that subject;
accordingly, the step S101 is specifically the following step B:
Step B, if the subject belongs to the preset object set and its sharpness is below the first threshold, determining the image X as the to-be-processed blurred image containing the target object, and acquiring N sample sharp images each containing the target object.
Those skilled in the art will readily understand that the target object contained in the blurred image in step B is the subject of step A.
In step A, when the terminal device acquires the image X, it first determines whether the subject in the image X belongs to the preset object set and determines the sharpness of that subject. It should be understood by those skilled in the art that these two determinations may be performed in parallel or in series (for example, it may first be determined whether the subject belongs to the preset object set, and only when it does is its sharpness determined); the present application does not limit this.
When the subject in the image X belongs to the preset object set and its sharpness is below the first threshold, the image X is determined to be the blurred image of step S101, and N sample sharp images each containing the target object are then acquired.
Those skilled in the art will readily understand why step A checks whether the subject belongs to the preset object set: the subsequent blur correction uses N sample sharp images, which are usually pre-stored, so the correction can only be performed when the subject is of a specific type for which such samples exist. It is therefore necessary to make this check in step A.
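A minimal sketch of this pre-check is given below, assuming a hypothetical subject detector and using the variance of the Laplacian as one possible sharpness measure; the threshold value and the contents of the preset object set are illustrative assumptions only.

```python
import cv2

PRESET_OBJECT_SET = {"face"}      # assumption: blur correction is only supported for faces
FIRST_THRESHOLD = 100.0           # assumption: variance-of-Laplacian threshold chosen empirically

def needs_blur_correction(image_x, detect_subject_type):
    """Steps A/B: decide whether image X is a to-be-processed blurred image.

    detect_subject_type is a hypothetical detector returning a label such as "face".
    """
    subject = detect_subject_type(image_x)
    if subject not in PRESET_OBJECT_SET:
        return False
    gray = cv2.cvtColor(image_x, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # one common sharpness proxy
    return sharpness < FIRST_THRESHOLD
```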
In step S102, for each sample sharp image, the blurred image is aligned with the sample sharp image to obtain, for each image region forming the target object in the blurred image, the corresponding sample region located in that sample sharp image;
As shown in fig. 2, assume that the blurred image acquired in step S101 is an image 201 containing a target object 202, namely Xiao Ming's frontal face, and that two sample sharp images are acquired, namely an image 203 and an image 204, which also contain Xiao Ming's frontal face. In step S102, the sample regions corresponding to each of the image regions forming Xiao Ming's frontal face in the blurred image 201 need to be determined. In the example shown in fig. 2, four image regions form the target object in the image 201, namely image region A, image region B, image region C and image region D, and step S102 must determine the sample regions corresponding to these four image regions in the sample sharp image 203 as well as in the sample sharp image 204. Taking image region A as an example, it corresponds to sample region A1 in the image 203 and to sample region A2 in the image 204.
In this embodiment of the application, if the target object in step S101 is a human face, step S102 may specifically include the following steps:
S1021, extracting facial feature points of the human face in the blurred image, and acquiring facial feature points of the human face in each sample sharp image;
S1022, for each sample sharp image, matching the facial feature points in the blurred image with the facial feature points in that sample sharp image, so as to obtain, for each image region forming the target object in the blurred image, the corresponding sample region located in the sample sharp image, thereby aligning the blurred image with the sample sharp image.
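One common way to realize steps S1021-S1022 is to estimate a similarity transform from matched facial landmarks and warp the sample sharp image onto the blurred image, as sketched below; the landmark detector (detect_face_landmarks) is a hypothetical placeholder, and the RANSAC-fitted similarity transform is an assumption rather than the exact procedure of the application.

```python
import cv2
import numpy as np

def align_sample_to_blurred(blurred, sample_sharp, detect_face_landmarks):
    """Warp a sample sharp image onto the blurred image's coordinate frame (steps S1021-S1022).

    detect_face_landmarks is a hypothetical function returning a (K, 2) float array of
    facial feature points in a fixed order, so points with the same index already match.
    """
    pts_blur = detect_face_landmarks(blurred).astype(np.float32)
    pts_sharp = detect_face_landmarks(sample_sharp).astype(np.float32)

    # Similarity transform (rotation + scale + translation) from sample landmarks to blurred landmarks.
    matrix, _ = cv2.estimateAffinePartial2D(pts_sharp, pts_blur, method=cv2.RANSAC)

    h, w = blurred.shape[:2]
    # After warping, pixel (i, j) of the result is the sample region corresponding to
    # pixel (i, j) of the blurred image.
    return cv2.warpAffine(sample_sharp, matrix, (w, h))
```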
In the embodiment of the present application, the larger the image regions are, the less natural the blur-corrected image may look (mainly along the boundary lines between image regions). Therefore, to make the corrected image more natural, the image regions in step S102 should be as small as possible; for example, each image region in step S102 may be a single pixel.
Furthermore, it should be understood by those skilled in the art that the image regions described in step S102 should not intersect one another. Their union may form the complete target object in the blurred image, as shown in fig. 2, but it may also form only part of the target object: for example, when only part of the target object is of interest, only that part may be blur-corrected, that is, only that part is divided into the image regions. In that case, the union of the image regions of step S102 is not the complete target object but only part of it. Note, therefore, that the union of the image regions in step S102 is not necessarily the complete target object in the blurred image.
In addition, because step S102 needs each image region forming the target object in the blurred image to correspond to a region of each sample sharp image, any image region of the target object contained in the blurred image should also be present in the sample sharp images (for example, if a certain image region in the blurred image is Xiao Ming's eye, each sample sharp image should also contain Xiao Ming's eye). As stated for step S101, the target object contained in each sample sharp image and in the blurred image is the same part of the same subject; beyond that, each sample sharp image should contain sample regions corresponding to the respective image regions in the blurred image, so that the correspondence does not fail for any image region forming the target object. If the target object of step S101 is a human face, then, in order to avoid an image region forming the target object in the blurred image having no counterpart in the sample sharp images, the sample sharp images may be obtained through the following steps C and D after the blurred image is obtained:
Step C, after the to-be-processed blurred image containing the target object is obtained, determining the face pose in the blurred image;
Step D, selecting, from a sample sharp image set, N sample sharp images that also contain the target object and whose face pose is within a preset pose difference of the face pose in the blurred image, wherein the sample sharp image set contains sample sharp images of different face poses.
As will be readily understood by those skilled in the art, when the face pose in a sample sharp image and the face pose in the blurred image are within the preset pose difference (that is, when the two poses are kept as consistent as possible), it can be ensured as far as possible that every image region forming the target object in the blurred image has a counterpart in the sample sharp image.
In addition, to further ensure that the blur-corrected image looks natural, the image brightness of the sample sharp images should also be as consistent as possible with that of the blurred image. That is, before step D is performed, the method may further include step E:
Step E: determining the image brightness of the blurred image;
correspondingly, step D is specifically: selecting, from the sample sharp image set, N sample sharp images that also contain the target object, whose face pose is within the preset pose difference of the face pose in the blurred image, and whose image brightness is within a preset brightness difference of the image brightness of the blurred image.
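A sketch of this selection step is shown below; the pose estimator is a hypothetical placeholder returning (yaw, pitch, roll) angles, the pose difference is measured as a Euclidean distance and brightness as the mean gray level, and both thresholds are illustrative values not taken from the application.

```python
import cv2
import numpy as np

def select_sample_images(blurred, candidates, estimate_pose,
                         max_pose_diff=15.0, max_brightness_diff=30.0, n=1):
    """Steps C-E: pick N sample sharp images whose face pose and brightness match the blurred image.

    estimate_pose is a hypothetical estimator returning (yaw, pitch, roll) in degrees.
    """
    def brightness(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).mean()

    pose_b = np.asarray(estimate_pose(blurred), dtype=np.float64)
    bri_b = brightness(blurred)

    selected = []
    for img in candidates:
        pose_diff = np.linalg.norm(np.asarray(estimate_pose(img), dtype=np.float64) - pose_b)
        if pose_diff <= max_pose_diff and abs(brightness(img) - bri_b) <= max_brightness_diff:
            selected.append((pose_diff, img))
    selected.sort(key=lambda t: t[0])          # prefer the closest poses
    return [img for _, img in selected[:n]]
```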
In step S103, determining the degree of sharpness of each of the image regions;
In this step, the sharpness of an image region may be calculated with an existing image sharpness measure, such as the Laplacian gradient function or the gray variance function; the present application does not specifically limit how the sharpness of an image region is calculated.
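Both measures named above are standard; a sketch of either one applied to a single image region follows, assuming the region is given as a BGR array.

```python
import cv2

def region_sharpness(region_bgr, method="laplacian"):
    """Sharpness of one image region (step S103), using the measures named in the text.

    Both formulas below are standard definitions; the application does not prescribe a specific one.
    """
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY).astype("float64")
    if method == "laplacian":
        # Variance of the Laplacian response: larger means more high-frequency detail.
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    # Gray variance: spread of intensities inside the region.
    return gray.var()
```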
In step S104, for each image region, fusion weights for the image region and its N corresponding sample regions are determined based on the sharpness of the image region, and the image region and the N sample regions are fused based on those fusion weights, wherein, for each image region, the sharpness of the image region is positively correlated with the fusion weight of that image region;
this step S104 will be described in detail below with reference to fig. 2.
As shown in fig. 2, for the image region A, fusion weights corresponding to the image region A, the sample region A1 and the sample region A2 are determined according to the sharpness of the image region A, and then the image region A, the sample region A1 and the sample region A2 are fused based on those fusion weights; image fusion is performed in the same way for the image regions B-D.
In step S104, the fusion weight of an image region is positively correlated with the sharpness of that image region; that is, if the sharpness of image regions A to D decreases in order, their fusion weights should decrease in order as well. The relationship between sharpness and fusion weight is illustrated below with image regions A, B and D. If image region A is very sharp, its fusion weight may be 1, and the fusion weights of sample regions A1 and A2 are both 0. If image region B is slightly blurred, its fusion weight may be 0.8, and the fusion weights of sample regions B1 and B2 may both be 0.1 (sample region B1 is the region corresponding to image region B in the sample sharp image 203, and sample region B2 is the corresponding region in the sample sharp image 204). If image region D is very blurred, its fusion weight may be 0.3, and the fusion weights of sample regions D1 and D2 may both be 0.35 (sample region D1 is the region corresponding to image region D in the sample sharp image 203, and sample region D2 is the corresponding region in the sample sharp image 204).
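The numbers above are only examples; a sketch that reproduces this behaviour is given below, assuming a linear mapping of sharpness to the blurred-image weight and an even split of the remaining weight over the N sample regions (both assumptions, not requirements of the application).

```python
import numpy as np

def fusion_weights(sharpness, max_sharpness, n_samples):
    """Map a region's sharpness to fusion weights (step S104).

    The blurred-image weight grows monotonically with sharpness (positive correlation),
    and the leftover weight is split evenly over the N sample regions, reproducing the
    A/B/D example above (e.g. 0.8 for the region and 0.1 for each of two samples).
    """
    w_region = float(np.clip(sharpness / max_sharpness, 0.0, 1.0))
    w_sample = (1.0 - w_region) / n_samples
    return w_region, [w_sample] * n_samples
```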
Based on the above analysis, the image processing method provided by this embodiment does not use a neural network model when performing blur correction, which avoids the long product development cycle caused by adopting such a model. In addition, the technical solution of the first embodiment adopts the following technical means: first, sample sharp images containing the same target object are used; second, the corresponding regions of the sample sharp images are fused to different degrees based on the sharpness of the image regions forming the target object in the blurred image. Because the target object to be corrected in the blurred image is the same as the target object in the sample sharp images, no extraneous image information is introduced; and because the fusion is driven by sharpness, the sharper an image region is, the more of the original blurred image's information is retained.
In summary, the technical solution of the first embodiment avoids the long product development cycle caused by adopting a neural network model while avoiding image distortion during blur correction.
Example two
An embodiment of the present application provides another image processing method, referring to fig. 3, the image processing method includes:
in step S301, acquiring a to-be-processed blurred image including a target object and N sample sharp images each including the target object, where N is an integer and N is greater than or equal to 1;
the specific implementation of the step S301 is completely the same as the step S101 in the first embodiment, and reference may be specifically made to the description of the first embodiment, which is not repeated herein.
In step S302, for each sample sharp image, aligning the blurred image with the sample sharp image to obtain sample pixel points located in the sample sharp image, corresponding to each pixel point constituting the target object in the blurred image;
compared with the step S102 in the first embodiment, in the step S302, the image area in the step S102 is specifically defined as a pixel. In the first embodiment of the present application, it is discussed that the area of the image region in step S102 should be as small as possible in order to make the image after blur correction more natural. Therefore, in the second embodiment of the present application, the image area described in step S102 is specifically a pixel point, so that the image after the blur correction is more natural.
It is easily understood by those skilled in the art that when the image area described in step S102 in the first embodiment is defined as a pixel, the sample area described in step S102 in the first embodiment should also be a pixel.
In step S303, determining a degree of sharpness of each pixel point constituting the target object in the blurred image;
in the second embodiment of the present application, the step S303 may include the following steps:
S3031, calculating the gradient value corresponding to each pixel point forming the target object in the blurred image;
S3032, determining the sharpness of each pixel point forming the target object according to the absolute value of the gradient value corresponding to that pixel point.
In this embodiment of the application, the sharpness of a pixel point can be measured through its gradient value: the larger the absolute value of the gradient, the sharper, to a certain extent, the pixel point can be considered to be.
Therefore, in step S3032, the sharpness of each pixel point can be determined based on the preset function shown in fig. 4(a).
However, it will be understood by those skilled in the art that sharpness and the absolute value of the gradient need not follow the strictly increasing relationship of fig. 4(a); they may also follow a step-wise functional relationship as shown in fig. 4(b). That is, in step S3032, the sharpness of a pixel point may be determined through the following steps:
S30321, determining, according to the absolute value of the gradient value of each pixel point, the preset range in which that absolute value falls, wherein the preset ranges together form a continuous numerical range, no two preset ranges intersect, and each preset range corresponds to one sharpness value;
S30322, for each pixel point, taking the sharpness value corresponding to the preset range in which the absolute value of its gradient value falls as the sharpness of that pixel point.
Those skilled in the art will readily understand that the number of distinct sharpness values produced by steps S30321-S30322 is smaller than with the method of fig. 4(a). Therefore, compared with the solution of fig. 4(a), the solution defined in steps S30321-S30322 yields fewer distinct sharpness values, so the terminal device has less data to compute when step S104 is subsequently executed, and the blur-corrected image can be obtained more quickly.
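A sketch of steps S3031 and S30321-S30322 follows, using Sobel gradients for the per-pixel gradient magnitude and numpy binning for the preset ranges; the bin edges and the sharpness value assigned to each range are illustrative assumptions, not values from the application.

```python
import cv2
import numpy as np

# Assumed preset ranges for |gradient| and the sharpness assigned to each range.
BIN_EDGES = np.array([20.0, 60.0, 120.0])          # three cut points -> four contiguous, disjoint ranges
SHARPNESS_LEVELS = np.array([0.2, 0.5, 0.8, 1.0])  # one sharpness value per range

def pixel_sharpness_map(blurred_bgr):
    """Steps S3031 and S30321-S30322: per-pixel sharpness from binned gradient magnitudes."""
    gray = cv2.cvtColor(blurred_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    grad_abs = np.sqrt(gx ** 2 + gy ** 2)           # absolute gradient magnitude per pixel

    # Each pixel falls into exactly one preset range; that range's sharpness value is used.
    bins = np.digitize(grad_abs, BIN_EDGES)
    return SHARPNESS_LEVELS[bins]
```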
In step S304, determining, for each pixel point constituting the target object in the blurred image, based on the degree of sharpness of the pixel point, fusion weights corresponding to the pixel point and N sample pixel points corresponding to the pixel point, and performing image fusion on the pixel point and the N sample pixel points based on the fusion weights, where, for each pixel point located in the blurred image and constituting the target object, the degree of sharpness of the pixel point is positively correlated with the fusion weight corresponding to the pixel point;
Compared with step S104 of the first embodiment, step S304 simply specifies the "image area" of step S104 to be a pixel point; apart from that, step S304 is implemented in exactly the same way as step S104 of the first embodiment, so reference may be made to the description of the first embodiment.
In addition, in the second embodiment of the present application, when N is 1, the step S304 may be executed by a technical method of calculating a weight map, that is, the step S304 may include the following steps:
S3041, generating a weight map based on the sharpness of each pixel point forming the target object in the blurred image, wherein the weight map comprises a plurality of fusion weights W_{i,j}, and the value ranges of i and j are determined by the position of the target object in the blurred image and by the size of the target object;
In the second embodiment of the present application, a weight map is first generated; it contains the weights required for the subsequent image fusion, where W_{i,j} is the fusion weight of the pixel point at position (i, j) forming the target object in the blurred image, and 1 - W_{i,j} is the fusion weight of the sample pixel point corresponding to the pixel point at position (i, j) in the blurred image;
S3042, performing image fusion based on the weight map, the pixel points forming the target object in the blurred image, and the sample pixel points in the sample sharp image respectively corresponding to those pixel points, using an image fusion formula to obtain the corrected image of the target object, the image fusion formula being:
RF_{i,j} = W_{i,j} × Fuzzy_{i,j} + (1 - W_{i,j}) × Clear_{i,j}
wherein Fuzzy_{i,j} is the pixel value of the pixel point at position (i, j) forming the target object in the blurred image, Clear_{i,j} is the pixel value of the sample pixel point corresponding to the pixel point at position (i, j) in the blurred image, and RF_{i,j} is the pixel value of the fused image at position (i, j).
The above steps S3041-S3042 are further explained below with reference to fig. 5. As shown in fig. 5, the to-be-processed blurred image is an image 501 and the sample sharp image is an image 502; for convenience of description, assume that only 4 pixel points in the image 501 need blur correction in this example. The four pixel points to be corrected are A, B, C and D in the image 501, and they correspond to the pixel points A1, B1, C1 and D1 in the image 502, respectively. The weight map 502 can then be obtained, containing four elements: W_{Ai,Aj}, W_{Bi,Bj}, W_{Ci,Cj} and W_{Di,Dj}, where (Ai, Aj) is the position of pixel point A in the image 501, (Bi, Bj) the position of pixel point B, (Ci, Cj) the position of pixel point C, and (Di, Dj) the position of pixel point D. The pixel points A, B, C and D are then blur-corrected with the image fusion formula; the corrected pixel value RA of pixel point A in the image 501 is RA = W_{Ai,Aj} × A + (1 - W_{Ai,Aj}) × A1 (where A denotes the pixel value of pixel point A in the image 501 and A1 denotes the pixel value of pixel point A1 in the image 502). The blur correction of pixel points B, C and D in the image 501 is exactly the same as that of pixel point A and is not repeated here.
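A direct sketch of the fusion formula for N = 1 follows; it assumes the weight map has already been built so that pixels outside the target object carry a weight of 1 and are therefore left unchanged (an assumption about how the map is constructed, not something stated above).

```python
import numpy as np

def fuse_with_weight_map(blurred, aligned_sharp, weight_map):
    """Step S3042: RF = W * Fuzzy + (1 - W) * Clear, applied per pixel for N = 1.

    weight_map holds W_{i,j} for pixels of the target object and 1.0 elsewhere,
    so non-target pixels are left untouched (an assumption about how the map is built).
    """
    w = weight_map.astype(np.float64)
    if blurred.ndim == 3:            # broadcast a single-channel weight map over colour channels
        w = w[..., None]
    fused = w * blurred.astype(np.float64) + (1.0 - w) * aligned_sharp.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```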
Performing the blur correction by generating a weight map has the following advantages. First, because the weight map is generated as an intermediate result, the user can conveniently retrieve and modify it, that is, the user can conveniently intervene in the blur correction. Second, when the blurred image of step S101 was captured by the user with a camera, the user can analyse the blurred regions based on the weight map and adjust the shooting manner accordingly.
The technical solution provided by the second embodiment of the application restricts the unit of image fusion to a single pixel point, so that the blur-corrected image looks more natural. In addition, like the first embodiment, the second embodiment also avoids the long product development cycle caused by adopting a neural network model while avoiding image distortion during blur correction.
Example three
The third embodiment of the application provides an image processing device. For convenience of explanation, only a part related to the present application is shown, and as shown in fig. 6, the image processing apparatus 600 includes:
the image obtaining module 601 is configured to obtain a to-be-processed blurred image including a target object and N sample sharp images each including the target object, where N is an integer and N is greater than or equal to 1;
an alignment module 602, configured to align the blurred image with the sample sharp image for each sample sharp image, so as to obtain sample regions, located in the sample sharp image, corresponding to each image region forming the target object in the blurred image;
a sharpness determining module 603, configured to determine the sharpness of each of the image regions;
a fusion module 604, configured to determine, for each image region, fusion weights for the image region and its N corresponding sample regions based on the sharpness of the image region, and to perform image fusion on the image region and the N sample regions based on those fusion weights, wherein, for each image region, the sharpness of the image region is positively correlated with the fusion weight of that image region.
Optionally, each image region constituting the target object is a pixel point;
correspondingly, each of the sample regions corresponding to the image regions is also a single pixel point.
Optionally, the sharpness determining module 603 includes:
a gradient calculation unit, configured to calculate the gradient value corresponding to each pixel point forming the target object in the blurred image;
and a sharpness determining unit, configured to determine the sharpness of each pixel point forming the target object according to the absolute value of the gradient value corresponding to that pixel point.
Optionally, the sharpness determining unit includes:
a preset range determining subunit, configured to determine, according to the absolute value of the gradient value of each pixel point, the preset range in which that absolute value falls, wherein the preset ranges together form a continuous numerical range, no two preset ranges intersect, and each preset range corresponds to one sharpness value;
and a sharpness determining subunit, configured to, for each pixel point, take the sharpness value corresponding to the preset range in which the absolute value of its gradient value falls as the sharpness of that pixel point.
Optionally, N is 1;
accordingly, the fusion module 604 includes:
a weight map generating unit, configured to generate a weight map based on the sharpness of each pixel point forming the target object in the blurred image, wherein the weight map is formed by a plurality of fusion weights W_{i,j}, the value ranges of i and j are determined by the position of the target object in the blurred image and by the size of the target object, W_{i,j} is the fusion weight of the pixel point at position (i, j) forming the target object in the blurred image, and 1 - W_{i,j} is the fusion weight of the sample region in the sample sharp image corresponding to the pixel point at position (i, j) in the blurred image;
and a fusion unit, configured to perform image fusion based on the weight map, the pixel points forming the target object in the blurred image, and the pixel points in the sample sharp image respectively corresponding to those pixel points, using an image fusion formula to obtain the corrected image of the target object, the image fusion formula being:
RF_{i,j} = W_{i,j} × Fuzzy_{i,j} + (1 - W_{i,j}) × Clear_{i,j}
wherein Fuzzy_{i,j} is the pixel value of the pixel point at position (i, j) forming the target object in the blurred image, Clear_{i,j} is the pixel value of the sample pixel point corresponding to the pixel point at position (i, j) in the blurred image, and RF_{i,j} is the pixel value of the fused image at position (i, j).
Optionally, the target object is a human face;
accordingly, the image acquiring module 601 includes:
a pose determining unit, used for acquiring the to-be-processed blurred image and determining the face pose in the blurred image;
and a selecting unit, used for selecting, from a sample sharp image set, N sample sharp images that also contain the target object and whose face pose is within a preset pose difference of the face pose in the blurred image, wherein the sample sharp image set contains sample sharp images of different face poses.
Correspondingly, the image obtaining module 601 further includes:
a brightness determination unit for determining image brightness in the blurred image;
correspondingly, the selecting unit is specifically configured to:
selecting, from the sample sharp image set, N sample sharp images that also contain the target object, whose face pose is within the preset pose difference of the face pose in the blurred image, and whose image brightness is within the preset brightness difference of the image brightness of the blurred image.
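One possible, purely illustrative way to wire these modules together in code is sketched below; the names mirror the modules of fig. 6, and each callable stands in for one of the implementations sketched in the method embodiments.

```python
class ImageProcessingApparatus:
    """Sketch of the module split of fig. 6; each module maps to one step of the method.

    The callables are placeholders for the implementations sketched in the method embodiments.
    """

    def __init__(self, image_acquisition, alignment, sharpness, fusion):
        self.image_acquisition = image_acquisition   # module 601: blurred image + N sample sharp images
        self.alignment = alignment                   # module 602: region correspondence
        self.sharpness = sharpness                   # module 603: per-region sharpness
        self.fusion = fusion                         # module 604: weighted image fusion

    def run(self, image_x):
        blurred, samples = self.image_acquisition(image_x)
        aligned = [self.alignment(blurred, s) for s in samples]
        weights = self.sharpness(blurred)
        return self.fusion(blurred, aligned, weights)
```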
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, since the first method embodiment and the second method embodiment are based on the same concept, specific functions and technical effects thereof may be specifically referred to a corresponding method embodiment part, and details are not described herein again.
Example four
Fig. 7 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 7, the terminal device 700 of this embodiment includes: a processor 701, a memory 702, and a computer program 703 stored in the memory 702 and executable on the processor 701. The steps in the various method embodiments described above are implemented when the processor 701 executes the computer program 703 described above. Alternatively, the processor 701 implements the functions of the modules/units in the device embodiments when executing the computer program 703.
Illustratively, the computer program 703 may be divided into one or more modules/units, which are stored in the memory 702 and executed by the processor 701 to carry out the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 703 in the terminal device 700. For example, the computer program 703 may be divided into an image acquisition module, an alignment module, a sharpness determining module and a fusion module, whose specific functions are as follows:
acquiring a to-be-processed blurred image containing a target object and N sample sharp images each containing the target object, wherein N is an integer and N is greater than or equal to 1;
for each sample sharp image, aligning the blurred image with the sample sharp image to obtain, for each image region forming the target object in the blurred image, the corresponding sample region located in the sample sharp image;
determining the sharpness of each image region;
for each image region, determining fusion weights for the image region and its N corresponding sample regions based on the sharpness of the image region, and performing image fusion on the image region and the N sample regions based on those fusion weights, wherein the sharpness of an image region is positively correlated with the fusion weight of that image region.
The terminal device may include, but is not limited to, a processor 701 and a memory 702. Those skilled in the art will appreciate that fig. 7 is merely an example of a terminal device 700 and does not constitute a limitation of terminal device 700 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 701 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 702 may be an internal storage unit of the terminal device 700, such as a hard disk or a memory of the terminal device 700. The memory 702 may also be an external storage device of the terminal device 700, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the terminal device 700. Further, the memory 702 may include both an internal storage unit and an external storage device of the terminal device 700. The memory 702 is used to store the computer program and other programs and data required by the terminal device. The memory 702 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a processor, so as to implement the steps of the above method embodiments. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-mentioned computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the computer readable medium described above may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring a to-be-processed blurred image containing a target object and N sample sharp images each containing the target object, wherein N is an integer and N is greater than or equal to 1;
for each sample sharp image, aligning the blurred image with the sample sharp image to obtain, for each image region forming the target object in the blurred image, the corresponding sample region located in the sample sharp image;
determining the sharpness of each image region;
for each image region, determining fusion weights for the image region and its N corresponding sample regions based on the sharpness of the image region, and performing image fusion on the image region and the N sample regions based on those fusion weights, wherein the sharpness of an image region is positively correlated with the fusion weight of that image region.
2. The image processing method according to claim 1, wherein each of the image regions constituting the target object is a pixel point;
correspondingly, each of the sample regions corresponding to the image regions is also a single pixel point.
3. The image processing method of claim 2, wherein said determining the sharpness of each of the image regions comprises:
calculating the gradient value corresponding to each pixel point forming the target object in the blurred image;
and determining the sharpness of each pixel point forming the target object according to the absolute value of the gradient value corresponding to that pixel point.
4. The image processing method according to claim 3, wherein said determining the sharpness of each pixel point forming the target object according to the absolute value of the gradient value corresponding to each such pixel point comprises:
determining, according to the absolute value of the gradient value of each pixel point, the preset range in which that absolute value falls, wherein the preset ranges together form a continuous numerical range, no two preset ranges intersect, and each preset range corresponds to one sharpness value;
and, for each pixel point, taking the sharpness value corresponding to the preset range in which the absolute value of its gradient value falls as the sharpness of that pixel point.
5. The image processing method according to claim 2, wherein N is 1;
accordingly, the determining, for each image area and based on its degree of sharpness, fusion weights respectively corresponding to the image area and to the N sample areas corresponding to the image area, and performing image fusion on the image area and the N sample areas based on the fusion weights, comprises:
generating a weight map based on the degree of sharpness of each pixel point constituting the target object in the blurred image, wherein the weight map is composed of a plurality of fusion weights W_{i,j}, the value ranges of i and j are determined by the position of the target object in the blurred image and the size of the target object, W_{i,j} is the fusion weight of the pixel point constituting the target object at position (i, j) in the blurred image, and 1 - W_{i,j} is the fusion weight of the sample area in the sample sharp image corresponding to the pixel point at position (i, j) in the blurred image;
and performing, based on the weight map, image fusion on each pixel point constituting the target object in the blurred image and the pixel point in the sample sharp image corresponding to each such pixel point by using an image fusion formula, so as to obtain an image in which the target object has been corrected, wherein the image fusion formula is:
RF_{i,j} = W_{i,j} × Fuzzy_{i,j} + (1 - W_{i,j}) × Clear_{i,j}
wherein Fuzzy_{i,j} is the pixel value of the pixel point constituting the target object at position (i, j) in the blurred image, Clear_{i,j} is the pixel value of the pixel point in the sample sharp image corresponding to the pixel point at position (i, j) in the blurred image, and RF_{i,j} is the pixel value of the fused image at position (i, j).
6. The image processing method according to any one of claims 1 to 5, wherein the target object is a human face;
correspondingly, the acquiring a to-be-processed blurred image comprising a target object and N sample sharp images each comprising the target object comprises:
acquiring the to-be-processed blurred image, and determining a face pose in the blurred image;
and selecting, from a sample sharp image set, N sample sharp images which also comprise the target object and whose face pose differs from the face pose in the blurred image by no more than a preset pose difference, wherein the sample sharp image set comprises sample sharp images of different face poses.
7. The image processing method according to claim 6, wherein before the selecting, from the sample sharp image set, of the N sample sharp images which also comprise the target object and whose face pose differs from the face pose in the blurred image by no more than the preset pose difference, the method further comprises:
determining an image brightness of the blurred image;
correspondingly, the selecting, from the sample sharp image set, of the N sample sharp images which also comprise the target object and whose face pose differs from the face pose in the blurred image by no more than the preset pose difference comprises:
selecting, from the sample sharp image set, N sample sharp images which also comprise the target object, whose face pose differs from the face pose in the blurred image by no more than the preset pose difference, and whose image brightness differs from the image brightness of the blurred image by no more than a preset brightness difference.
8. An image processing apparatus, comprising:
an image acquisition module, configured to acquire a to-be-processed blurred image comprising a target object and N sample sharp images each comprising the target object, wherein N is an integer and N ≥ 1;
an alignment module, configured to align, for each sample sharp image, the blurred image with the sample sharp image to obtain each image area constituting the target object in the blurred image and the corresponding sample areas located in the sample sharp image;
a sharpness determining module, configured to determine the degree of sharpness of each image area;
and a fusion module, configured to determine, for each image area and based on its degree of sharpness, fusion weights respectively corresponding to the image area and to the N sample areas corresponding to the image area, and to perform image fusion on the image area and the N sample areas based on the fusion weights, wherein, for each image area, the degree of sharpness of the image area is positively correlated with the fusion weight corresponding to the image area.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the image processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
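The following sketches are illustrative only and are not part of the claims or of the granted specification; they restate the claimed steps in Python/NumPy under explicitly labelled assumptions. This first sketch follows the structure of claim 1 for the general case of N sample sharp images: each pixel of the target object in the blurred image receives a fusion weight that grows with its degree of sharpness, and the remaining weight goes to the N aligned sample pixels. The claims do not say how that remaining weight is distributed among the samples; splitting it equally is an assumption made here purely for illustration.

    import numpy as np

    def fuse_with_samples(fuzzy_region, sample_regions, sharpness, max_level=2):
        """Fuse one target-object region of the blurred image with N aligned sample regions.

        fuzzy_region   -- 2-D array: pixels of the target object in the blurred image
        sample_regions -- list of N 2-D arrays, aligned pixel-for-pixel with fuzzy_region
        sharpness      -- 2-D array of discrete sharpness levels in [0, max_level]
        max_level      -- highest possible sharpness level (assumed value)
        """
        n = len(sample_regions)
        # Weight of the blurred pixel is positively correlated with its sharpness.
        w_blur = sharpness.astype(np.float64) / float(max_level)
        fused = w_blur * fuzzy_region.astype(np.float64)
        # Assumption: the remaining weight (1 - w_blur) is shared equally by the N samples.
        for sample in sample_regions:
            fused += ((1.0 - w_blur) / n) * sample.astype(np.float64)
        return fused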
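Claims 3 and 4 obtain the per-pixel degree of sharpness from the absolute value of a gradient, mapped into contiguous, non-overlapping preset ranges, one range per sharpness level. A minimal sketch of that mapping follows; the central-difference gradient, the two threshold values and the three resulting levels are assumptions chosen for illustration, not values taken from the patent.

    import numpy as np

    def sharpness_levels(gray, thresholds=(10.0, 40.0)):
        """Map each pixel's absolute gradient value to a discrete sharpness level.

        gray       -- 2-D float array holding the target-object region of the blurred image
        thresholds -- boundaries of the non-overlapping preset ranges (assumed values)

        Returns an integer array of the same shape: 0 = least sharp, len(thresholds) = sharpest.
        """
        # Central-difference gradients along rows and columns.
        gy, gx = np.gradient(gray.astype(np.float64))
        magnitude = np.abs(gx) + np.abs(gy)   # absolute gradient value per pixel

        # The ranges [0, 10), [10, 40), [40, inf) are contiguous and do not overlap,
        # and each range corresponds to one degree of sharpness, as claim 4 requires.
        return np.digitize(magnitude, thresholds)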
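Claim 5 spells out the N = 1 case with the fusion formula RF_{i,j} = W_{i,j} × Fuzzy_{i,j} + (1 - W_{i,j}) × Clear_{i,j}. The sketch below builds the weight map by linearly rescaling the discrete sharpness levels into [0, 1]; the claims only require the weight to be positively correlated with the degree of sharpness, so this particular rescaling is an assumption.

    import numpy as np

    def fuse_n1(fuzzy, clear, sharpness):
        """Fuse the blurred target-object region with one aligned sample sharp region.

        fuzzy     -- 2-D array: pixel values of the target object in the blurred image
        clear     -- 2-D array of the same shape: aligned pixels from the sample sharp image
        sharpness -- 2-D array of discrete sharpness levels for the blurred pixels
        """
        levels = sharpness.astype(np.float64)
        # Weight map W, positively correlated with sharpness (assumed linear rescaling).
        w = levels / levels.max() if levels.max() > 0 else np.zeros_like(levels)
        # RF = W * Fuzzy + (1 - W) * Clear, applied to the target-object pixels.
        return w * fuzzy.astype(np.float64) + (1.0 - w) * clear.astype(np.float64)

Since the claims only fuse the areas constituting the target object, the remaining pixels of the blurred image would be left unchanged.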
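Claims 6 and 7 pick, from a sample sharp image set, the N images which also contain the target object, whose face pose differs from the pose in the blurred image by no more than a preset pose difference, and whose brightness differs by no more than a preset brightness difference. The sketch below assumes each sample is stored with a precomputed pose angle and mean brightness; the pose-estimation method, the single-angle pose descriptor and the two threshold values are assumptions, since the claims do not fix them.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SharpSample:
        image_id: str
        yaw_deg: float       # assumed single-angle face-pose descriptor
        brightness: float    # e.g. mean gray level of the face region

    def select_samples(samples: List[SharpSample],
                       blurred_yaw: float,
                       blurred_brightness: float,
                       max_pose_diff: float = 15.0,        # preset pose difference (assumed)
                       max_brightness_diff: float = 30.0,  # preset brightness difference (assumed)
                       n: int = 1) -> List[SharpSample]:
        """Pick up to n sample sharp images whose pose and brightness are close to the blurred image."""
        candidates = [
            s for s in samples
            if abs(s.yaw_deg - blurred_yaw) <= max_pose_diff
            and abs(s.brightness - blurred_brightness) <= max_brightness_diff
        ]
        # Prefer the samples whose pose is closest to that of the blurred image.
        candidates.sort(key=lambda s: abs(s.yaw_deg - blurred_yaw))
        return candidates[:n]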
CN202010105128.XA 2020-02-20 2020-02-20 Image processing method, processing device, terminal equipment and readable storage medium Active CN111340722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010105128.XA CN111340722B (en) 2020-02-20 2020-02-20 Image processing method, processing device, terminal equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010105128.XA CN111340722B (en) 2020-02-20 2020-02-20 Image processing method, processing device, terminal equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111340722A (en) 2020-06-26
CN111340722B (en) 2023-05-26

Family

ID=71181666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010105128.XA Active CN111340722B (en) 2020-02-20 2020-02-20 Image processing method, processing device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111340722B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100021076A1 (en) * 2006-08-11 2010-01-28 Panasonic Corporation Method, apparatus and integrated circuit for improving image sharpness
US20090225183A1 (en) * 2008-03-05 2009-09-10 Semiconductor Energy Laboratory Co., Ltd Image Processing Method, Image Processing System, and Computer Program
CN104637064A (en) * 2015-02-28 2015-05-20 Institute of Optics and Electronics, Chinese Academy of Sciences Defocus blurred image definition detection method based on edge intensity weight
CN107133923A (en) * 2017-03-02 2017-09-05 Hangzhou Dianzi University Non-blind deblurring method for blurred images based on an adaptive gradient sparse model
CN107292838A (en) * 2017-06-07 2017-10-24 Shantou University Image deblurring method based on blurred-region segmentation
CN107563978A (en) * 2017-08-31 2018-01-09 Suzhou Keda Technology Co., Ltd. Face deblurring method and device
WO2019192338A1 (en) * 2018-04-04 2019-10-10 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, storage medium and electronic device
CN108921806A (en) * 2018-08-07 2018-11-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and terminal device
CN109272024A (en) * 2018-08-29 2019-01-25 Kunming University of Science and Technology Image fusion method based on convolutional neural networks
CN110136091A (en) * 2019-04-12 2019-08-16 Shenzhen Intellifusion Technologies Co., Ltd. Image processing method and related product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Jingchao et al.: "Comparison of intuitionistic fuzzification processing methods for multi-band image fusion", Infrared Technology (《红外技术》) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815504A (en) * 2020-06-30 2020-10-23 Beijing Kingsoft Cloud Network Technology Co., Ltd. Image generation method and device
CN114119443A (en) * 2021-11-28 2022-03-01 Terminus Technology Group Co., Ltd. Image fusion system based on multispectral camera
CN114119443B (en) * 2021-11-28 2022-07-01 Terminus Technology Group Co., Ltd. Image fusion system based on multispectral camera

Also Published As

Publication number Publication date
CN111340722B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN108921806B (en) Image processing method, image processing device and terminal equipment
CN109064428B (en) Image denoising processing method, terminal device and computer readable storage medium
CN111654594B (en) Image capturing method, image capturing apparatus, mobile terminal, and storage medium
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
CN109166156B (en) Camera calibration image generation method, mobile terminal and storage medium
CN110796600B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN110660066A (en) Network training method, image processing method, network, terminal device, and medium
CN110766679A (en) Lens contamination detection method and device and terminal equipment
CN110675336A (en) Low-illumination image enhancement method and device
CN108765340B (en) Blurred image processing method and device and terminal equipment
CN108961183B (en) Image processing method, terminal device and computer-readable storage medium
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN109286758B (en) High dynamic range image generation method, mobile terminal and storage medium
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN111311482A (en) Background blurring method and device, terminal equipment and storage medium
CN110147708B (en) Image data processing method and related device
CN109005367B (en) High dynamic range image generation method, mobile terminal and storage medium
CN108764139B (en) Face detection method, mobile terminal and computer readable storage medium
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
CN113391779B (en) Parameter adjusting method, device and equipment for paper-like screen
CN108447040A (en) histogram equalization method, device and terminal device
CN111223061A (en) Image correction method, correction device, terminal device and readable storage medium
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium
CN110766153A (en) Neural network model training method and device and terminal equipment
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant