CN110910326B - Image processing method and device, processor, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110910326B
CN110910326B (application CN201911157608.4A; earlier publication CN110910326A)
Authority
CN
China
Prior art keywords
pixel
neighborhood
image
processed
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911157608.4A
Other languages
Chinese (zh)
Other versions
CN110910326A (en)
Inventor
吴佳飞
张广程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN201911157608.4A
Publication of CN110910326A
Application granted
Publication of CN110910326B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The application discloses an image processing method and apparatus, a processor, an electronic device and a storage medium. The method comprises the following steps: acquiring a first image to be processed; obtaining a first filter of a first pixel neighborhood in the first image to be processed according to the similarity between pixels in the first pixel neighborhood and the structural feature data of the first pixel neighborhood, wherein the structural feature data carries edge information of the first pixel neighborhood; and filtering the first pixel point neighborhood with the first filter to obtain an enhanced first image to be processed. Corresponding products are also disclosed, so as to enhance the image quality of the image.

Description

Image processing method and device, processor, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a processor, an electronic device, and a storage medium.
Background
With the rapid development of image processing technology, image processing has been widely applied in fields such as security, terminal entertainment and autonomous driving. Processing an image based on image processing technology makes it possible to perform the corresponding tasks in these fields, such as face recognition, object detection and autonomous driving. The image quality of the image to be processed directly influences how well such tasks are performed: low definition of the image to be processed reduces the accuracy of face recognition, high noise in the image to be processed reduces the accuracy of object detection, and so on. How to enhance the image quality of the image to be processed is therefore of great importance.
Disclosure of Invention
The application provides an image processing method and device, a processor, an electronic device and a storage medium, so as to enhance the image quality of an image.
In a first aspect, there is provided an image processing method, the method comprising:
acquiring a first image to be processed;
obtaining a first filter of a first pixel neighborhood according to similarity between pixels in the first pixel neighborhood and structural feature data of the first pixel neighborhood in the first image to be processed, wherein the structural feature data carries edge information in the first pixel neighborhood;
and filtering the first pixel point neighborhood by using the first filter to obtain an enhanced first image to be processed.
In this aspect, the weights of the pixel points in the first pixel point neighborhood are determined according to the similarity between pixels in the first pixel point neighborhood and the structural feature data of the first pixel point neighborhood, and the first filter of the first pixel point neighborhood is determined from these weights. Filtering the first pixel point neighborhood with the first filter makes better use of the information of the pixel points, other than the pixel point to be processed, in the first pixel point neighborhood: it improves the definition of the pixel point to be processed, removes its noise and enriches its information, thereby enhancing the image quality of the first image to be processed.
In one possible implementation, the similarity includes: spatial similarity and gray scale similarity;
after the first image to be processed is acquired, before the first filter of the first pixel neighborhood is obtained according to the similarity between pixels in the first pixel neighborhood in the first image to be processed and the structural feature data of the first pixel neighborhood, the method further includes:
determining the first pixel point neighborhood from the first image to be processed;
determining a first Euclidean distance between a central pixel point of the first pixel point neighborhood and a first pixel point in the first pixel point neighborhood, and obtaining the spatial similarity;
and determining a first gray scale distance between the central pixel point and the first pixel point to obtain the gray scale similarity.
In this possible implementation, the similarity includes a spatial similarity and a gray-scale similarity: the spatial similarity between pixels can be obtained from the Euclidean distance between them, and the gray-scale similarity from the gray-scale distance between them.
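The two distance-based similarities can be sketched as follows. The Gaussian forms, the parameter names `sigma_s`/`sigma_r` and their default values are assumptions borrowed from standard bilateral filtering, not fixed by this disclosure, which only requires that similarity fall as the distance grows.

```python
import numpy as np

def spatial_similarity(ci, cj, pi, pj, sigma_s=3.0):
    """Spatial similarity between the central pixel (ci, cj) and a
    neighbor (pi, pj): a larger Euclidean distance gives a lower
    similarity. The Gaussian kernel is one common choice."""
    d = np.hypot(pi - ci, pj - cj)               # first Euclidean distance
    return np.exp(-(d ** 2) / (2 * sigma_s ** 2))

def gray_similarity(g_center, g_neighbor, sigma_r=25.0):
    """Gray-scale similarity: a larger gray-scale distance gives a
    lower similarity."""
    d = abs(float(g_center) - float(g_neighbor))  # first gray-scale distance
    return np.exp(-(d ** 2) / (2 * sigma_r ** 2))
```

A pixel at the center position has similarity 1 with itself under both measures, and both measures decay monotonically with distance.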
In another possible implementation, the size of the first filter is the same as the size of the first pixel neighborhood; the structural feature data of the first pixel neighborhood comprises structural feature values of pixels in the first pixel neighborhood, and the structural feature values are used for representing texture information of the pixels in the first pixel neighborhood;
The obtaining a first filter of the first pixel neighborhood according to the similarity between pixels in the first pixel neighborhood in the first image to be processed and the structural feature data of the first pixel neighborhood includes:
determining a value of a first element in the first filter according to the first Euclidean distance, the first gray scale distance, and a first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point; the position of the first pixel in the first filter is the same as the position of the first pixel in the first pixel neighborhood.
In yet another possible implementation manner, the determining the value of the first element in the first filter according to the first euclidean distance, the first gray scale distance, the first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point includes:
obtaining a first initial weight according to the first Euclidean distance, the first gray scale distance and the first difference;
and carrying out normalization processing on the first initial weight to obtain the value of the first element.
In this possible implementation, the first initial weight is determined as the product of the first Euclidean distance, the first gray scale distance and the first difference, and the value of the first element is then obtained by normalizing the first initial weight.
In another possible implementation manner, the normalizing the first initial weight value to obtain the value of the first element includes:
determining a product of a second Euclidean distance between a second pixel point in the neighborhood of the first pixel point and the central pixel point, a second gray scale distance between the second pixel point and the central pixel point, and a second difference between a structural characteristic value of the second pixel point and a structural characteristic value of the central pixel point, so as to obtain a second initial weight;
determining the sum of the first initial weight and the second initial weight to obtain a third initial weight;
and determining the quotient of the first initial weight and the third initial weight to obtain the value of the first element.
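The steps above (an initial weight formed as a product, then normalization by the sum of all initial weights in the neighborhood) can be sketched as below. The function name and the array-of-factors interface are illustrative, and each factor is assumed to be a similarity term already derived from the corresponding distance or structural-feature difference.

```python
import numpy as np

def first_filter(spatial_terms, gray_terms, struct_terms):
    """Sketch of the filter construction: each pixel's initial weight
    is the product of its spatial, gray-scale and structural terms; the
    final element values are the initial weights divided by their sum,
    so the filter elements add up to 1."""
    w = (np.asarray(spatial_terms, dtype=float)
         * np.asarray(gray_terms, dtype=float)
         * np.asarray(struct_terms, dtype=float))
    return w / w.sum()  # quotient of each initial weight and the sum
```

With a 3x3 neighborhood, each argument would hold nine values laid out in the same positions as the pixels, so each filter element lands at the position of its pixel, as the implementation above requires.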
In yet another possible implementation manner, before the determining the value of the first element in the first filter according to the first euclidean distance, the first gray scale distance, the first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point, the method further includes:
assigning a first preset value as the mark value of each pixel point in the first pixel point neighborhood whose gray value is greater than the gray value of the central pixel point, and assigning a second preset value as the mark value of each pixel point whose gray value is less than or equal to the gray value of the central pixel point;
and arranging, in sequence, the mark values of the pixel points in the first pixel point neighborhood other than the central pixel point, to obtain the structural feature value of the central pixel point.
This possible implementation yields the structural feature value of the central pixel point, which can represent the texture information of the pixel points around the central pixel point.
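The marking scheme above is essentially a local binary pattern. A minimal sketch, assuming the first preset value is 1, the second preset value is 0, and a row-major ordering of the marks (the disclosure fixes none of these choices):

```python
import numpy as np

def structural_feature_value(neigh):
    """LBP-style structural feature value of the central pixel of a
    square neighborhood `neigh` (odd side length): pixels brighter than
    the center are marked 1, the rest 0, and the marks of the
    non-center pixels, read in row-major order, form a binary number."""
    neigh = np.asarray(neigh)
    c = neigh[neigh.shape[0] // 2, neigh.shape[1] // 2]
    marks = (neigh > c).astype(int).ravel()
    bits = np.delete(marks, neigh.size // 2)   # drop the center's own mark
    return int("".join(map(str, bits)), 2)     # binary string -> integer
```

For a 3x3 neighborhood this produces an 8-bit value in [0, 255], one bit per surrounding pixel.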
In another possible implementation manner, the filtering the first pixel point neighborhood with the first filter to obtain an enhanced first image to be processed includes:
downsampling the first image to be processed to obtain a second image to be processed;
filtering the first pixel point neighborhood by using the first filter to obtain a first to-be-processed image after filtering, and filtering the second pixel point neighborhood in the second to-be-processed image by using a second filter to obtain a second to-be-processed image after filtering; the second pixel point neighborhood is obtained by performing downsampling on the first pixel point neighborhood; the second filter is obtained according to the similarity between the pixels in the neighborhood of the second pixel and the structural feature data of the neighborhood of the second pixel;
Performing up-sampling processing on the second to-be-processed image after the filtering processing to enable the size of the second to-be-processed image after the filtering processing to be the same as the size of the first to-be-processed image after the filtering processing, so as to obtain a third to-be-processed image;
and carrying out fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image.
In this possible implementation, the first to-be-processed image is downsampled to obtain a second to-be-processed image at a different scale from the first. Filtering the first pixel point neighborhood with the first filter and the second pixel point neighborhood with the second filter filters the first to-be-processed image at different scales, so as to remove noise from image features and details in different frequency bands. The filtered first to-be-processed image and the filtered (and upsampled) second to-be-processed image are then fused to improve the image quality of the enhanced first to-be-processed image.
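A two-scale sketch of this pipeline, with `build_filter` and `apply_filter` as placeholders for the per-neighborhood filter construction and application described earlier; factor-2 striding for downsampling, nearest-neighbor upsampling via `np.kron`, and the plain-average fusion are all simplifying assumptions:

```python
import numpy as np

def enhance(img, build_filter, apply_filter):
    """Filter the image at two scales and fuse the results."""
    second = img[::2, ::2]                         # downsample (factor 2)
    first_f = apply_filter(img, build_filter)      # filter at full scale
    second_f = apply_filter(second, build_filter)  # filter at half scale
    third = np.kron(second_f, np.ones((2, 2)))     # upsample to full size
    third = third[:img.shape[0], :img.shape[1]]
    return (first_f + third) / 2                   # placeholder fusion
```

The implementation described next replaces the plain average with a detail-compensated sum.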
In still another possible implementation manner, before the fusing processing is performed on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image, the method further includes:
Performing up-sampling processing on the second to-be-processed image to enable the size of the second to-be-processed image to be the same as the size of the first to-be-processed image, so as to obtain a fourth to-be-processed image;
determining the difference between the first image to be processed and the fourth image to be processed to obtain a fifth image to be processed;
the fusing processing is performed on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image, which comprises the following steps:
determining the sum of the first image to be processed and the fifth image to be processed to obtain a sixth image to be processed;
and determining the sum of the third to-be-processed image and the sixth to-be-processed image to obtain the enhanced first to-be-processed image.
The third to-be-processed image lacks the detail information lost in the downsampling process, while the fifth to-be-processed image contains exactly that detail information. In this possible implementation, the fifth to-be-processed image is added to the first to-be-processed image to obtain the sixth to-be-processed image, and the sixth to-be-processed image is then added to the third to-be-processed image, 'complementing' the detail information that the third to-be-processed image lost through downsampling and thereby improving the image quality of the enhanced first to-be-processed image.
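Put as code, the detail compensation above might look like the following sketch; nearest-neighbor upsampling via `np.kron` stands in for whatever interpolation an implementation actually uses:

```python
import numpy as np

def fuse_with_detail(first, second, third):
    """Detail-compensated fusion: recover the high-frequency detail
    removed by downsampling as the difference between the first image
    and the upsampled second image, then add it back."""
    fourth = np.kron(second, np.ones((2, 2)))[:first.shape[0],
                                              :first.shape[1]]
    fifth = first - fourth     # detail lost by downsampling
    sixth = first + fifth      # first image plus recovered detail
    return third + sixth       # enhanced first image
```

Here `first` is the original image, `second` its downsampled version, and `third` the filtered-and-upsampled image from the previous implementation.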
In yet another possible implementation, the first image to be processed includes a human face.
In a second aspect, there is provided an image processing apparatus comprising:
an acquisition unit configured to acquire a first image to be processed;
the first processing unit is used for obtaining a first filter of a first pixel neighborhood according to the similarity between pixels in the first pixel neighborhood and the structural feature data of the first pixel neighborhood in the first image to be processed, wherein the structural feature data carries edge information in the first pixel neighborhood;
and the filtering processing unit is used for carrying out filtering processing on the first pixel point neighborhood by using the first filter to obtain an enhanced first image to be processed.
In one possible implementation, the similarity includes: spatial similarity and gray scale similarity; the apparatus further comprises:
the first determining unit is configured to determine the first pixel point neighborhood from the first to-be-processed image after the first to-be-processed image is acquired and before the first filter of the first pixel point neighborhood is obtained according to the similarity between pixels in the first pixel point neighborhood in the first to-be-processed image and the structural feature data of the first pixel point neighborhood;
The second determining unit is used for determining a first Euclidean distance between a central pixel point of the first pixel point neighborhood and a first pixel point in the first pixel point neighborhood to obtain the spatial similarity;
and the third determining unit is used for determining a first gray scale distance between the central pixel point and the first pixel point to obtain the gray scale similarity.
In another possible implementation, the size of the first filter is the same as the size of the first pixel neighborhood; the structural feature data of the first pixel neighborhood comprises structural feature values of pixels in the first pixel neighborhood, and the structural feature values are used for representing texture information of the pixels in the first pixel neighborhood;
the first processing unit is used for:
determining a value of a first element in the first filter according to the first Euclidean distance, the first gray scale distance, and a first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point; the position of the first pixel in the first filter is the same as the position of the first pixel in the first pixel neighborhood.
In a further possible implementation, the first processing unit is configured to:
obtaining a first initial weight according to the first Euclidean distance, the first gray scale distance and the first difference;
and carrying out normalization processing on the first initial weight to obtain the value of the first element.
In a further possible implementation, the first processing unit is configured to:
determining a product of a second Euclidean distance between a second pixel point in the neighborhood of the first pixel point and the central pixel point, a second gray scale distance between the second pixel point and the central pixel point, and a second difference between a structural characteristic value of the second pixel point and a structural characteristic value of the central pixel point, so as to obtain a second initial weight;
determining the sum of the first initial weight and the second initial weight to obtain a third initial weight;
and determining the quotient of the first initial weight and the third initial weight to obtain the value of the first element.
In yet another possible implementation, the apparatus further includes:
the second processing unit is configured to, before the value of the first element in the first filter is determined according to the first Euclidean distance, the first gray scale distance and the first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point, assign a first preset value as the mark value of each pixel point in the first pixel point neighborhood whose gray value is greater than the gray value of the central pixel point, and assign a second preset value as the mark value of each pixel point whose gray value is less than or equal to the gray value of the central pixel point;
and the arrangement unit is configured to arrange, in sequence, the mark values of the pixel points in the first pixel point neighborhood other than the central pixel point, to obtain the structural feature value of the central pixel point.
In a further possible implementation manner, the filtering processing unit is configured to:
downsampling the first image to be processed to obtain a second image to be processed;
filtering the first pixel point neighborhood by using the first filter to obtain a first to-be-processed image after filtering, and filtering the second pixel point neighborhood in the second to-be-processed image by using a second filter to obtain a second to-be-processed image after filtering; the second pixel point neighborhood is obtained by performing downsampling on the first pixel point neighborhood; the second filter is obtained according to the similarity between the pixels in the neighborhood of the second pixel and the structural feature data of the neighborhood of the second pixel;
performing up-sampling processing on the second to-be-processed image after the filtering processing to enable the size of the second to-be-processed image after the filtering processing to be the same as the size of the first to-be-processed image after the filtering processing, so as to obtain a third to-be-processed image;
And carrying out fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image.
In yet another possible implementation, the apparatus further includes:
the up-sampling processing unit is configured to up-sample the second to-be-processed image before the filtered first to-be-processed image and the third to-be-processed image are fused into the enhanced first to-be-processed image, so that the size of the second to-be-processed image is the same as the size of the first to-be-processed image, obtaining a fourth to-be-processed image;
a fourth determining unit, configured to determine a difference between the first to-be-processed image and the fourth to-be-processed image, to obtain a fifth to-be-processed image;
the filtering processing unit is used for:
determining the sum of the first image to be processed and the fifth image to be processed to obtain a sixth image to be processed;
and determining the sum of the third to-be-processed image and the sixth to-be-processed image to obtain the enhanced first to-be-processed image.
In yet another possible implementation, the first image to be processed includes a human face.
In a third aspect, a processor is provided for performing the method of the first aspect and any one of its possible implementation manners described above.
In a fourth aspect, there is provided an electronic device comprising: a processor, a transmitting means, an input means, an output means and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method as described in the first aspect and any one of its possible implementation manners.
In a fifth aspect, a computer readable storage medium is provided, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor of an electronic device, cause the processor to carry out a method as in the first aspect and any one of the possible implementations thereof.
In a sixth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any one of its possible implementations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly describe the technical solutions in the embodiments or the background of the present application, the following description will describe the drawings that are required to be used in the embodiments or the background of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a face image according to an embodiment of the present application;
fig. 3 is a schematic diagram of a filter according to an embodiment of the present application;
FIG. 4 is a schematic diagram of filtering an image using a filter according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a pixel neighborhood and an element (or pixel) at the same position in a filter according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a first filter and a first pixel neighborhood provided in an embodiment of the present application;
fig. 7 is a flowchart of another image processing method according to an embodiment of the present application;
FIG. 8a is a schematic diagram of a pixel neighborhood according to an embodiment of the present disclosure;
fig. 8b is a schematic diagram of adding a marking value to a pixel in a pixel neighborhood according to an embodiment of the present application;
FIG. 9a is a schematic diagram of another pixel neighborhood provided in an embodiment of the present application;
fig. 9b is a schematic diagram of adding a mark value to a pixel in a pixel neighborhood according to another embodiment of the present application;
fig. 10 is a flowchart of another image processing method according to an embodiment of the present application;
fig. 11a is a schematic diagram of a first image to be processed according to an embodiment of the present application;
fig. 11b is a schematic diagram of a second image to be processed obtained by performing downsampling processing on a first image to be processed according to an embodiment of the present application;
fig. 12a is a schematic diagram of a second image to be processed according to an embodiment of the present application;
fig. 12b is a schematic diagram of a third to-be-processed image obtained by performing downsampling processing on a second to-be-processed image according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 14 is a schematic hardware structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of the present disclosure.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Embodiments of the present application are described below with reference to the accompanying drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present application.
101. A first image to be processed is acquired.
The method of this embodiment may be executed by a server or by a terminal such as a mobile phone, a computer or a tablet computer. The first image to be processed may be any digital image. For example, the first image to be processed may contain a person: it may contain a human face and/or a human body (hereinafter, the trunk and limbs are referred to as the human body), it may contain only a human face without a human body, or it may contain only a human body, or only the upper or lower limbs, without a human face. As another example, the first image to be processed may contain an object. The present application does not limit the content of the first image to be processed.
The first image to be processed may be acquired by receiving an image input by a user through an input component, or by receiving an image sent by a terminal. The input component includes a keyboard, a mouse, a touch screen, a touch pad, an audio input device and the like. The terminal includes a mobile phone, a computer, a tablet computer, a server and the like. The present application does not limit the manner of acquiring the first image to be processed.
102. And obtaining a first filter of the first pixel neighborhood according to the similarity between pixels in the first pixel neighborhood in the first image to be processed and the structural feature data of the first pixel neighborhood.
The first pixel neighborhood includes at least two pixels. The structural feature data carries edge information in the neighborhood of the first pixel point.
In this embodiment of the present application, the edge information of the first pixel neighborhood includes: the position, in the first image to be processed, of each edge in the first pixel point neighborhood, and the positional relationship between different edges in the first pixel point neighborhood. For example, in the face shown in fig. 2, the eyebrows, eye sockets, nose bridge and lips are all edges, and the position of the image area covered by an edge in the face image shown in fig. 2 is the position of that edge in the face image. In the face image shown in fig. 2, the upper lip and the lower lip intersect at the left and right mouth corners; that is, the positional relationship between the edge corresponding to the upper lip and the edge corresponding to the lower lip is intersection, and the intersection positions are the positions of the left mouth corner and the right mouth corner in the face image.
The similarity between pixels in the first pixel point neighborhood includes the spatial similarity between pixels in the first pixel point neighborhood and the gray-scale similarity between pixels in the first pixel point neighborhood. The spatial similarity reflects the distance between pixels, and the distance between pixels is inversely related to the spatial similarity between them. For example, if the distance between pixel A and pixel B is d₁, the distance between pixel A and pixel C is d₂, and d₁ is greater than d₂, then the spatial similarity between pixel A and pixel B is smaller than the spatial similarity between pixel A and pixel C. The gray-scale similarity reflects the difference between the gray values of pixels, and this difference is likewise inversely related to the gray-scale similarity between them. For example, if the difference between the gray values of pixel A and pixel B is d₃, the difference between the gray values of pixel A and pixel C is d₄, and d₃ is greater than d₄, then the gray-scale similarity between pixel A and pixel B is smaller than the gray-scale similarity between pixel A and pixel C.
The filter in this embodiment (including the first filter in this embodiment and the second filter to be mentioned later) may be a two-dimensional matrix including at least two weights (i.e., elements). Fig. 3 shows a filter with a size of 3×3, where A, B, C, D, E, F, G, H and I are the weights of the filter.
Performing filtering processing on an image with a filter means sliding the filter over the image, multiplying the pixel value of each pixel point covered by the filter by the corresponding weight in the filter, and taking the average of all the products as the new pixel value of the pixel point in the image corresponding to the middle weight of the filter. After the filter has slid over all the pixel points in the image and the pixel values of all the pixel points have been updated, the filtering processing on the image is finished.
For example, the pixel value of pixel point e obtained by filtering pixel point e in the pixel point region to be processed shown in fig. 4 using the first filter shown in fig. 4 (hereinafter referred to as the filtered pixel value e) satisfies the following equation: filtered pixel value e = (weight a × pixel value of pixel a + weight b × pixel value of pixel b + weight c × pixel value of pixel c + weight d × pixel value of pixel d + weight e × pixel value of pixel e + weight f × pixel value of pixel f + weight g × pixel value of pixel g + weight h × pixel value of pixel h + weight i × pixel value of pixel i)/9.
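As a sketch of this weighted-averaging step (the function name is illustrative and the gray values are made-up numbers, not values from fig. 4):

```python
import numpy as np

def filter_pixel(neighborhood, weights):
    """Weighted average of a pixel neighborhood, as described above:
    each pixel value is multiplied by the weight at the same position,
    and the mean of the products becomes the new center-pixel value."""
    products = neighborhood * weights
    return products.sum() / products.size

# 3x3 neighborhood centered on pixel e (made-up gray values)
neighborhood = np.array([[44., 118., 192.],
                         [32.,  83., 204.],
                         [61., 174., 250.]])
# uniform weights: every surrounding pixel contributes equally
weights = np.ones((3, 3))
filtered_e = filter_pixel(neighborhood, weights)
```

With uniform weights this reduces to a plain mean filter; the point of the embodiment is precisely to replace the uniform weights with ones derived from similarity and structural features.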
As can be seen from the above example, when a filter is used to perform filtering processing on a pixel point to be processed in an image (such as pixel point e in the above example), the pixel value of the pixel point to be processed after the filtering processing is determined using the pixel value of the pixel point to be processed and the pixel values of the pixel points around it. Each weight in the filter corresponds to the contribution of the pixel value of one pixel point around the pixel point to be processed. Obviously, the relationship (such as the spatial similarity) between different pixel points and the pixel point to be processed differs, so the weights of the filter directly determine the effect of the filtering processing performed on the image.
Based on the above consideration, the embodiment of the application determines the weight in the first filter of the first pixel neighborhood by using the information of the similarity between the pixels in the first pixel neighborhood and the information of the structural feature data of the first pixel neighborhood, so as to improve the effect of filtering the first pixel neighborhood by using the first filter.
In one possible implementation, the similarity (hereinafter referred to as the target similarity) between the processed pixel point (i.e., the central pixel point of the first pixel point neighborhood) and a pixel point in the first pixel point neighborhood other than the processed pixel point (hereinafter referred to as the target pixel point) is determined, together with the difference (hereinafter referred to as the first difference) between the structural feature value of the target pixel point and the structural feature value of the processed pixel point. The product of the target similarity and the first difference is normalized so that the normalized value lies in the range from 0 to 1 inclusive, and the normalized value is taken as the weight of the target pixel point, i.e., the value of the first element in the first filter of the first pixel point neighborhood. The position of the first element in the first filter is the same as the position of the target pixel point in the first pixel point neighborhood.
In this embodiment of the present application, referring to fig. 5, the position of pixel point a in pixel point neighborhood A is the same as the position of element j in filter B; the position of pixel point b in pixel point neighborhood A is the same as the position of element k in filter B; the position of pixel point c in pixel point neighborhood A is the same as the position of element l in filter B; the position of pixel point d in pixel point neighborhood A is the same as the position of element m in filter B; the position of pixel point e in pixel point neighborhood A is the same as the position of element n in filter B; the position of pixel point f in pixel point neighborhood A is the same as the position of element o in filter B; the position of pixel point g in pixel point neighborhood A is the same as the position of element p in filter B; the position of pixel point h in pixel point neighborhood A is the same as the position of element q in filter B; and the position of pixel point i in pixel point neighborhood A is the same as the position of element r in filter B.
The structural feature value characterizes edge information of the target pixel point, edge information of the processed pixel point, and a positional relationship between an edge where the target pixel point is located and an edge where the processed pixel point is located. For example, the edge information of the target pixel point may include the following information: whether the target pixel belongs to the edge coverage area, the position information of the target pixel, and if the target pixel belongs to the edge coverage area, the position information of the edge where the target pixel is located. Similarly, the edge information of the processed pixel may include the following information: whether the processed pixel belongs to an edge covered area, position information of the processed pixel, and position information of an edge where the processed pixel is located if the processed pixel belongs to the edge covered area. The positional relationship between the edge where the target pixel point is located and the edge where the processed pixel point is located includes: if the target pixel point and the processed pixel point both belong to the edge covered region, the position relationship between the edge where the target pixel point is located and the edge where the processed pixel point is located (the position relationship may include the positions of intersecting, disjoint, and intersecting regions).
It is to be understood that the sum of all weights in the first filter determined by the above possible implementation is equal to 1.
For example, the central pixel point in the first pixel point neighborhood shown in fig. 6 is pixel point e (i.e., the processed pixel point in the above possible implementation). Let pixel point a be the target pixel point in the above possible implementation, let the similarity between pixel point a and pixel point e be d, and let the difference between the structural feature value of pixel point a and the structural feature value of pixel point e be y. Let t = d × y, and normalize t to obtain s. The weight a in the first filter is then s. Similarly, the weight b, the weight c, the weight d, the weight e, the weight f, the weight g, the weight h and the weight i can be determined, and the sum of the weight a, the weight b, the weight c, the weight d, the weight e, the weight f, the weight g, the weight h and the weight i is equal to 1.
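The normalization described above can be sketched as follows; the raw products t = d × y for the nine filter positions are made-up numbers:

```python
def normalize_weights(raw):
    """Scale raw weights (the products t = d * y of a similarity and a
    structural feature term) so that each lies in [0, 1] and they sum
    to 1, as required of the first filter."""
    total = sum(raw)
    return [t / total for t in raw]

# made-up raw products for the nine filter positions a..i
raw = [0.9, 0.4, 0.1, 0.6, 1.0, 0.6, 0.1, 0.4, 0.9]
weights = normalize_weights(raw)
```

Dividing by the total is one common normalization satisfying both stated constraints (each weight in [0, 1], weights summing to 1); the patent does not spell out which normalization it uses.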
103. And filtering the first pixel point neighborhood by using the first filter to obtain an enhanced first image to be processed.
When the first filter is used to perform filtering processing on the processed pixel point in the first pixel point neighborhood, the information (i.e., the pixel values) of the pixel points other than the processed pixel point in the first pixel point neighborhood needs to be utilized, and the relationship between each of these pixel points and the processed pixel point differs. For example, suppose pixel point a and pixel point b are two pixel points other than the processed pixel point in the first pixel point neighborhood, the spatial similarity between pixel point a and the processed pixel point is d1, the spatial similarity between pixel point b and the processed pixel point is d2, and d1 and d2 are not equal. The weights of the pixel points other than the processed pixel point in the first pixel point neighborhood should therefore be different. For example, in the first pixel point neighborhood shown in fig. 6, if the similarity between pixel point b and pixel point e is greater than the similarity between pixel point h and pixel point e, more of the information of pixel point b than of pixel point h should be used when determining the pixel value of pixel point e through the filtering processing, i.e., the weight of pixel point b should be greater than the weight of pixel point h. For another example, in the first pixel point neighborhood shown in fig. 4, if the structural feature value between pixel point c and pixel point e is greater than the structural feature value between pixel point f and pixel point e, more of the information of pixel point c than of pixel point f should be used when determining the pixel value of pixel point e through the filtering processing, i.e., the weight of pixel point c should be greater than the weight of pixel point f.
In step 102, a first filter is determined according to the target similarity and the first difference, that is, a weight of the target pixel is determined by using a relationship between the target pixel and the processed pixel. In this way, when the first filter is used to perform filtering processing on the first pixel point neighborhood, a better filtering effect can be obtained.
Performing filtering processing on the first pixel point neighborhood with the first filter completes the filtering processing of the central pixel point of the first pixel point neighborhood, which makes the central pixel point clearer, removes noise at the central pixel point and enriches the information of the central pixel point, thereby enhancing the image quality of the first image to be processed and obtaining the enhanced first image to be processed. The image quality includes one or more of the resolution of the image, the signal-to-noise ratio of the image, and the sharpness of the edges in the image. The resolution of the image, the signal-to-noise ratio of the image and the sharpness of the edges in the image are each positively correlated with the image quality.
Optionally, each pixel point in the first to-be-processed image is taken as a central pixel point, a pixel point neighborhood with a preset size is constructed, and filtering processing is performed on each pixel point neighborhood according to the technical scheme provided by the embodiment, so that the effects of enabling each pixel point in the first to-be-processed image to be clearer, removing noise in the first to-be-processed image and enriching information of the first to-be-processed image are achieved, and the enhanced first to-be-processed image is obtained.
According to the embodiment, the weight of the pixel in the first pixel neighborhood is determined according to the similarity between the pixels in the first pixel neighborhood and the structural feature value between the pixels, and then the first filter of the first pixel neighborhood is determined. The first filter is used for carrying out filtering treatment on the neighborhood of the first pixel point, the information of the pixel points except the pixel points to be treated in the neighborhood of the first pixel point can be better utilized to improve the definition of the pixel points to be treated, remove the noise of the pixel points to be treated and enrich the information of the pixel points to be treated, so that the image quality of the first image to be treated is enhanced.
As described in step 102, the similarity between pixels in the first pixel point neighborhood may include the spatial similarity and the gray-scale similarity. Optionally, the spatial similarity between pixel points may be determined by calculating the Euclidean distance between the pixel points, and the gray-scale similarity between pixel points may be determined by calculating the gray-scale distance between the pixel points.
In one implementation of calculating the Euclidean distance between pixel points, the coordinates of pixel point A in the image are (x1, y1) and the coordinates of pixel point B in the image are (x2, y2), and the Euclidean distance term between pixel point A and pixel point B satisfies the following equation:

d1 = exp(−((x1 − x2)^2 + (y1 − y2)^2)/(2σ1^2)) … formula (1)

d1 above is the Euclidean distance term between pixel point A and pixel point B, and the value of σ1 ranges from 0.1 to 10 inclusive.
In one implementation of calculating the gray-scale similarity between pixel points, the gray value of pixel point A is g1 and the gray value of pixel point B is g2, and the gray-scale distance between pixel point A and pixel point B satisfies the following equation:

d2 = exp(−((g1 − g2)^2)/(2σ2^2)) … formula (2)

d2 above is the gray-scale distance between pixel point A and pixel point B, and the value of σ2 ranges from 0.1 to 10 inclusive.
The similarity between the pixel points is then determined according to the Euclidean distance between the pixel points and the gray-scale distance between the pixel points. In one possible implementation, the similarity between pixel point A and pixel point B satisfies the following equation:

d3 = d1 × d2 … formula (3)

d3 above is the similarity between pixel point A and pixel point B. In formula (3), σ1 influences the proportion of d1 (i.e., the Euclidean distance term) in d3 (i.e., the similarity), and σ2 influences the proportion of d2 (i.e., the gray-scale distance term) in d3 (i.e., the similarity). Specifically, σ1 is inversely related to the proportion of d1 in d3, and σ2 is inversely related to the proportion of d2 in d3.
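Assuming Gaussian-kernel forms for the Euclidean-distance and gray-scale terms (an assumption consistent with the σ parameters described above, and the standard choice in bilateral-style filters), the similarity computation can be sketched as follows; the function names and default σ values are illustrative:

```python
import math

def spatial_similarity(x1, y1, x2, y2, sigma1=1.0):
    """Assumed form of formula (1): a Gaussian of the squared
    Euclidean distance; larger sigma1 flattens the term toward 1."""
    return math.exp(-((x1 - x2) ** 2 + (y1 - y2) ** 2) / (2 * sigma1 ** 2))

def gray_similarity(g1, g2, sigma2=25.0):
    """Assumed form of formula (2): a Gaussian of the gray-value
    difference; larger sigma2 flattens the term toward 1."""
    return math.exp(-((g1 - g2) ** 2) / (2 * sigma2 ** 2))

def similarity(x1, y1, g1, x2, y2, g2, sigma1=1.0, sigma2=25.0):
    """Formula (3): d3 = d1 x d2."""
    return (spatial_similarity(x1, y1, x2, y2, sigma1)
            * gray_similarity(g1, g2, sigma2))
```

Because each term decays with its distance, identical pixels get similarity 1 and the similarity drops as either the spatial or the gray-scale gap grows, matching the inverse relations stated earlier.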
Texture feature extraction may be performed on the first pixel point neighborhood to obtain the structural feature data of the first pixel point neighborhood. The texture feature extraction process can be implemented by a gray-level co-occurrence matrix (GLCM) algorithm, a Voronoi checkerboard feature algorithm, or a Tamura texture feature algorithm. In the embodiment of the application, the structural feature data of the first pixel point neighborhood is obtained by calculating the structural feature values among the pixels in the first pixel point neighborhood. Next, how to calculate the structural feature values between the pixels in the first pixel point neighborhood, and how to obtain the weights of the first filter according to the similarity between the pixels in the first pixel point neighborhood and the structural feature values between the pixels in the first pixel point neighborhood, will be described in detail.
Referring to fig. 7, fig. 7 is a flowchart of another image processing method according to the second embodiment of the present application.
701. And taking the first preset value as the marking value of the pixel points in the first pixel point neighborhood whose gray value is greater than that of the central pixel point, and taking the second preset value as the marking value of the pixel points in the first pixel point neighborhood whose gray value is smaller than or equal to that of the central pixel point.
In this step, the comparison is performed with respect to the central pixel point of the first pixel point neighborhood. The first preset value and the second preset value are two different preset numbers. Optionally, the first preset value is 1, and the second preset value is 0.
For example, as shown in fig. 8a, the gray value of pixel a is 44, the gray value of pixel b is 118, the gray value of pixel c is 192, the gray value of pixel d is 32, the gray value of pixel e is 83, the gray value of pixel f is 204, the gray value of pixel g is 61, the gray value of pixel h is 174, and the gray value of pixel i is 250. The pixel point e is a first pixel point. Assuming that the first preset value is 1 and the second preset value is 0, the mark values may be added to the pixels except for the pixel e by comparing the gray value of the pixel a, the gray value of the pixel b, the gray value of the pixel c, the gray value of the pixel d, the gray value of the pixel f, the gray value of the pixel g, the gray value of the pixel h, the gray value of the pixel i, and the gray value of the pixel e. FIG. 8b shows the label values of pixels in the neighborhood of the first pixel except the pixel e.
702. And sequentially arranging the marking values of the pixel points except the central pixel point in the neighborhood of the first pixel point to obtain the structural characteristic value of the central pixel point.
The marking values of the pixel points except pixel point e in the first pixel point neighborhood are sequentially arranged, so that a binary value, namely the structural feature value of pixel point e (namely the central pixel point), can be obtained.
For example (example 1), the marking values of the pixel points a are used as the starting bits, the marking values of the pixel points except the pixel point e in the neighborhood of the first pixel point are sequentially arranged in the clockwise direction, and the obtained binary value is 01111100, namely the structural characteristic value of the pixel point e. For example (example 2), the marking values of the pixels except the pixel e in the neighborhood of the first pixel are sequentially arranged in the clockwise direction with the marking value of the pixel c as the starting bit, and the obtained binary value is 11110001, namely the structural feature value of the pixel e.
Optionally, to facilitate subsequent processing based on the structural feature value, the binary value may be converted into a decimal value, which is used as the structural feature value. For example, the binary value 01111100 in example 1 may be converted to the decimal value 124, and the binary value 11110001 in example 2 may be converted to the decimal value 241.
The structural feature value of a pixel point may be used to describe the texture information of the pixel points around it in its neighborhood.
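Steps 701 and 702 correspond to the classic local binary pattern (LBP) computation. A minimal sketch, using the gray values of fig. 8a and the clockwise arrangement of example 1 (function name illustrative):

```python
import numpy as np

def structural_feature_value(neighborhood):
    """Marking (step 701) plus clockwise arrangement (step 702):
    neighbors with a gray value greater than the center get mark 1,
    others get mark 0; the eight marks, read clockwise starting from
    the top-left neighbor, form a binary number returned in decimal."""
    center = neighborhood[1, 1]
    # clockwise order starting at the top-left neighbor: a b c f i h g d
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ''.join('1' if neighborhood[r, c] > center else '0'
                   for r, c in order)
    return int(bits, 2)

# gray values from fig. 8a; the central pixel e has gray value 83
nb = np.array([[ 44, 118, 192],
               [ 32,  83, 204],
               [ 61, 174, 250]])
value = structural_feature_value(nb)  # binary 01111100 -> decimal 124
```

This reproduces example 1: marks 0,1,1,1,1,1,0,0 read clockwise from pixel a give 01111100, i.e. 124. Starting from a different pixel (example 2) merely rotates the bit string.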
703. And taking the first preset value as a pixel marking value of which the gray value in the neighborhood of the second pixel is larger than that of the first pixel, and taking the second preset value as a pixel marking value of which the gray value in the neighborhood of the second pixel is smaller than or equal to that of the first pixel.
In this embodiment, the first pixel is any one of the pixels in the neighborhood of the first pixel except the center pixel. The second pixel neighborhood is a pixel neighborhood constructed by taking the first pixel as a central pixel, and the size of the second pixel neighborhood is the same as that of the first pixel neighborhood.
For example, the second pixel point neighborhood shown in fig. 9a is a pixel point neighborhood constructed with pixel point h in fig. 8a as the central pixel point. Fig. 9b shows the marking values of the pixel points other than pixel point h in the second pixel point neighborhood.
704. And sequentially arranging the marking values of the pixel points except the first pixel point in the neighborhood of the second pixel point to obtain the structural characteristic value of the first pixel point.
The implementation of this step can be referred to as step 702, and will not be described here.
705. And obtaining the structural feature data of the first pixel point neighborhood according to the structural feature value of the first pixel point and the structural feature value of the central pixel point of the first pixel point neighborhood.
The structural feature distance between the first pixel point and the central pixel point can be determined according to the structural feature values of the pixel points. In one possible implementation, the structural feature value of pixel point A is l1 and the structural feature value of pixel point B is l2, and the structural feature distance between pixel point A and pixel point B satisfies the following equation:

d4 = exp(−((l1 − l2)^2)/(2σ3^2)) … formula (4)

d4 above is the structural feature distance between pixel point A and pixel point B, and the value of σ3 ranges from 0.1 to 10 inclusive.
Substituting the structural feature value of the first pixel point and the structural feature value of the central pixel point of the neighborhood of the first pixel point into the formula (4) to determine the structural feature distance between the first pixel point and the central pixel point of the neighborhood of the first pixel point, and taking the structural feature distance as the structural feature data of the neighborhood of the first pixel point.
Optionally, according to the technical solution provided in the embodiment of the present application, the Euclidean distance between each pixel point other than the central pixel point in the first pixel point neighborhood and the central pixel point, the gray-scale distance between each such pixel point and the central pixel point, and the structural feature distance between each such pixel point and the central pixel point may be determined, so as to determine the weights in the first filter. In one possible implementation, the weights in the first filter satisfy the following equation:

wi = d1(Pc, Pi) × d2(Pc, Pi) × d4(Pc, Pi) / Σj=1..N d1(Pc, Pj) × d2(Pc, Pj) × d4(Pc, Pj) … formula (5)

Pc above is the central pixel point of the first pixel point neighborhood, Pi is a pixel point other than the central pixel point in the first pixel point neighborhood, and wi is the weight in the first filter corresponding to Pi. N is the number of pixel points other than the central pixel point in the first pixel point neighborhood. d1(Pc, Pi) is the Euclidean distance term between Pc and Pi, d2(Pc, Pi) is the gray-scale distance between Pc and Pi, and d4(Pc, Pi) is the structural feature distance between Pc and Pi.
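A sketch of this weight computation, assuming Gaussian-kernel forms for the three distance terms and division by their sum so that the weights total 1 (the function name, pixel representation and default σ values are illustrative):

```python
import math

def filter_weights(center, neighbors, sigma1=1.0, sigma2=25.0, sigma3=16.0):
    """Each neighbor's raw weight is the product of its spatial (d1),
    gray-scale (d2) and structural feature (d4) terms relative to the
    center pixel; normalization makes the weights sum to 1.
    A pixel is a tuple (x, y, gray, structural_feature_value)."""
    def term(p):
        d1 = math.exp(-((p[0] - center[0]) ** 2
                        + (p[1] - center[1]) ** 2) / (2 * sigma1 ** 2))
        d2 = math.exp(-((p[2] - center[2]) ** 2) / (2 * sigma2 ** 2))
        d4 = math.exp(-((p[3] - center[3]) ** 2) / (2 * sigma3 ** 2))
        return d1 * d2 * d4
    raw = [term(p) for p in neighbors]
    total = sum(raw)
    return [r / total for r in raw]

center = (1, 1, 100.0, 80.0)  # (x, y, gray, structural feature value)
neighbors = [(0, 0, 100.0, 80.0), (0, 2, 100.0, 80.0),
             (2, 0, 100.0, 80.0), (2, 2, 100.0, 80.0)]
w = filter_weights(center, neighbors)  # fully symmetric case
```

In the symmetric example all four neighbors are equidistant with identical gray and structural values, so each receives the same weight; in general, a neighbor closer to the center in any of the three senses receives a larger weight.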
In formula (5), σ3 influences the proportion of d4 (i.e., the structural feature distance) in wi (i.e., the weight). Specifically, σ3 is inversely related to the proportion of d4 in wi.
In this embodiment, the structural feature distance between pixel points is determined according to the structural feature values of the pixel points in the first pixel point neighborhood, and the weights of the first filter are obtained according to the similarity between the pixels in the first pixel point neighborhood and the structural feature distance between the pixels in the first pixel point neighborhood. That is, on the basis of using the similarity between the pixel points, the structural feature distance between the pixel points is further used to determine the weights of the first filter, which can improve the accuracy of the weights of the first filter and further improve the filtering effect.
In order to further improve the effect of the filtering processing on the first image to be processed, the application further provides another implementation of the filtering processing on the first image to be processed.
Referring to fig. 10, fig. 10 is a flowchart of another image processing method according to the third embodiment of the present application.
1001. And carrying out downsampling processing on the first image to be processed to obtain a second image to be processed.
Downsampling processing is performed on the first image to be processed to obtain a second image to be processed, and the scale of the second image to be processed is different from that of the first image to be processed. Images of different scales contain image features and details of different frequency bands. In this way, when filtering processing is performed on the first image to be processed and the second image to be processed respectively, the image features and details of different frequency bands can each be filtered, thereby improving the effect of the filtering processing.
Alternatively, the above-described downsampling process may be performed by removing the pixels of the even rows and the pixels of the even columns in the first image to be processed. For example, the downsampling process is performed on the first to-be-processed image shown in fig. 11a, and the pixels in the second column, the fourth column, the second row and the fourth row in the first to-be-processed image may be removed, so as to obtain the second to-be-processed image shown in fig. 11 b.
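A minimal sketch of this downsampling (rows and columns counted from 1, so the even ones are removed and the odd ones kept; the function name and the 5×5 array are illustrative):

```python
import numpy as np

def downsample(image):
    """Remove the even rows and even columns (counting from 1),
    keeping rows/columns 1, 3, 5, ... - i.e. every other index."""
    return image[::2, ::2]

img = np.arange(25).reshape(5, 5)   # made-up 5x5 first image to be processed
small = downsample(img)             # 3x3 second image to be processed
```

A 5×5 input thus yields a 3×3 output, matching the fig. 11a to fig. 11b example in shape.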
Optionally, before the downsampling process is performed on the first to-be-processed image, gaussian filtering process may be performed on the first to-be-processed image to remove gaussian noise in the first to-be-processed image, so as to obtain the first to-be-processed image after gaussian filtering process. And then carrying out downsampling treatment on the first image to be treated after Gaussian filtering treatment to obtain a second image to be treated.
1002. And filtering the first pixel neighborhood by using the first filter to obtain a first image to be processed after filtering, and filtering the second pixel neighborhood in the second image to be processed by using a second filter to obtain a second image to be processed after filtering.
The second pixel neighborhood is a pixel neighborhood obtained by performing downsampling processing on the first pixel neighborhood in the first image to be processed.
And obtaining a second filter of the second pixel neighborhood according to the similarity between the pixels in the second pixel neighborhood and the structural feature data of the second pixel neighborhood. The implementation manner of obtaining the second filter according to the similarity between the pixels in the second pixel neighborhood and the structural feature data of the second pixel neighborhood is the same as the implementation manner of obtaining the first filter according to the similarity between the pixels in the first pixel neighborhood and the structural feature data of the first pixel neighborhood, and the implementation manner can be referred to embodiment (one) and embodiment (two), and will not be described herein.
By using the first filter to filter the first pixel neighborhood and using the second filter to filter the second pixel neighborhood, the filtering processing of the first to-be-processed image under different scales can be realized so as to remove noise in image features and details of different frequency bands.
1003. And carrying out up-sampling processing on the second to-be-processed image after the filtering processing, so that the size of the second to-be-processed image after the filtering processing is the same as the size of the first to-be-processed image after the filtering processing, and obtaining a third to-be-processed image.
The step 1002 is performed to remove noise in the image features and details of different frequency bands in the first to-be-processed image, so that the first to-be-processed image with enhanced image quality can be obtained by fusing the filtered first to-be-processed image with the filtered second to-be-processed image.
Because the size of the second to-be-processed image after the filtering process is different from the size of the first to-be-processed image after the filtering process, the second to-be-processed image after the filtering process needs to be up-sampled before the first to-be-processed image after the filtering process and the second to-be-processed image after the filtering process are fused, so that the size of the second to-be-processed image after the filtering process is the same as the size of the first to-be-processed image after the filtering process.
Alternatively, the upsampling process may be implemented by inserting one row of pixels between every two rows of pixels and inserting one column of pixels between every two columns of pixels. Alternatively, the pixel value of the inserted pixel point may be taken as 0. For example, the second image to be processed after the filtering process shown in fig. 12a is subjected to an upsampling process, so that a third image to be processed shown in fig. 12b can be obtained.
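A minimal sketch of this zero-insertion upsampling (the inserted rows and columns are filled with 0, as described above; the function name is illustrative):

```python
import numpy as np

def upsample_zeros(image):
    """Insert a zero row between every two adjacent rows and a zero
    column between every two adjacent columns; the inserted pixel
    points take the pixel value 0."""
    h, w = image.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=image.dtype)
    out[::2, ::2] = image
    return out
```

This inverts the even-row/even-column removal in shape: a 3×3 image becomes 5×5, and removing the inserted rows and columns again recovers the original.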
Optionally, since the pixel value of the pixel points inserted by the upsampling processing is 0, after the upsampling processing is performed on the filtered second image to be processed to obtain the upsampled image, filtering processing may be performed on the upsampled image using a Gaussian kernel to determine the pixel values of the inserted pixel points, so as to obtain the third image to be processed.
1004. And carrying out fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image.
The above-described fusion processing may be addition, i.e., adding pixel values of pixel points at the same position in the first to-be-processed image after the filtering processing and the third to-be-processed image.
Since the downsampling processing performed on the first image to be processed in step 1001 loses detail information of the first image to be processed, the image quality of the enhanced first image to be processed would be reduced.
Optionally, the details lost due to the downsampling process may be retained before step 1004 is performed, so that the lost details may be used later when the enhanced first image to be processed is obtained: and carrying out up-sampling processing on the second to-be-processed image to enable the size of the second to-be-processed image to be the same as that of the first to-be-processed image, and obtaining a fourth to-be-processed image. And determining the difference between the first to-be-processed image and the fourth to-be-processed image to obtain a fifth to-be-processed image. The up-sampling process for the second image to be processed is implemented by referring to the up-sampling process in step 1003. And determining the difference between the first to-be-processed image and the fourth to-be-processed image, namely subtracting the pixel values of the pixel points at the same position in the first to-be-processed image and the fourth to-be-processed image.
On the basis of retaining the detail information lost due to the downsampling process, the first to-be-processed image and the third to-be-processed image after the filtering process are fused in the step, and the enhanced first to-be-processed image can be obtained by the following steps: and determining the sum of the first to-be-processed image and the fifth to-be-processed image to obtain a sixth to-be-processed image. And determining the sum of the third to-be-processed image and the sixth to-be-processed image to obtain the enhanced first to-be-processed image. In the implementation process, the third to-be-processed image lacks the detail information lost due to the downsampling process, and the fifth to-be-processed image contains the detail information lost due to the downsampling process, so that the fifth to-be-processed image and the first to-be-processed image are added to obtain the sixth to-be-processed image, and then the sixth to-be-processed image and the third to-be-processed image are added to 'complement' the detail information lost due to the downsampling process in the third to-be-processed image, so that the image quality of the enhanced first to-be-processed image is improved.
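The detail-preserving fusion described above can be sketched end to end; the function name and image names are stand-ins, and since the text is ambiguous about whether the raw or the filtered first image enters the sixth-image sum, the filtered first image is assumed here:

```python
import numpy as np

def enhance(first, fourth, filtered_first, third):
    """Fusion with detail compensation, following steps 1003-1004:
    fifth  = first - fourth          (detail lost by downsampling)
    sixth  = filtered_first + fifth  (filtered image plus lost detail)
    result = third + sixth           (add the upsampled filtered branch)
    """
    fifth = first - fourth
    sixth = filtered_first + fifth
    return third + sixth
```

As a sanity check of the design: when upsampling loses nothing (fourth equals first) the detail image is zero and only the two filtered branches are summed, which is exactly the plain fusion of step 1004.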
It should be understood that in step 1001 the first to-be-processed image is downsampled only once; in practical applications the first to-be-processed image may be downsampled at least twice according to the user's requirements, yielding at least two downsampled images. For example (example 3), after the second to-be-processed image is obtained in step 1001, the second to-be-processed image may itself be downsampled to obtain a seventh to-be-processed image. If the first to-be-processed image is downsampled at least twice to obtain at least two downsampled images, then correspondingly a filter can be determined for a pixel neighborhood in each downsampled image, and that filter is used to filter the corresponding pixel region, yielding at least two filtered images. For example (example 4), the second to-be-processed image and the seventh to-be-processed image of example 3 are each filtered, giving a filtered second to-be-processed image and a filtered seventh to-be-processed image. Each filtered image is then upsampled so that its size matches that of the first to-be-processed image, yielding at least two pre-fusion images. Finally, all pre-fusion images and the first to-be-processed image are added together to obtain the enhanced first to-be-processed image. For example (example 5), the filtered second to-be-processed image of example 3 is upsampled to obtain the third to-be-processed image, and the filtered seventh to-be-processed image of example 3 is upsampled to obtain a pre-fusion seventh to-be-processed image. Adding the first to-be-processed image, the third to-be-processed image and the pre-fusion seventh to-be-processed image then yields the enhanced first to-be-processed image.
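The multi-level variant just described (downsample at least twice, filter each scale, upsample everything back to full size, add) can be sketched as below. The decimating downsampler, the nearest-neighbour upsampler and the `filter_fn` placeholder are illustrative assumptions, not methods mandated by the text:

```python
import numpy as np

def downsample(img):
    # Keep every other row and column -- a minimal stand-in; the text
    # does not fix a particular downsampling method.
    return img[::2, ::2]

def upsample_to(img, shape):
    # Nearest-neighbour enlargement back to the reference size.
    ry = -(-shape[0] // img.shape[0])   # ceiling division
    rx = -(-shape[1] // img.shape[1])
    up = np.repeat(np.repeat(img, ry, axis=0), rx, axis=1)
    return up[:shape[0], :shape[1]]

def multiscale_enhance(first, filter_fn, levels=2):
    """Downsample `levels` times, filter each downsampled image with
    `filter_fn` (a placeholder for the per-neighborhood filtering
    described in the text), upsample each filtered image back to the
    size of `first`, and add everything to `first`."""
    pyramid, img = [], first
    for _ in range(levels):
        img = downsample(img)
        pyramid.append(img)
    pre_fusion = [upsample_to(filter_fn(p), first.shape) for p in pyramid]
    return first + sum(pre_fusion)
```

With `levels=2` this reproduces the example-3/example-5 chain: the second and seventh to-be-processed images are the two pyramid levels, and the pre-fusion images are their filtered, upsampled counterparts.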
In this embodiment, the first to-be-processed image is downsampled to obtain a second to-be-processed image at a different scale from the first to-be-processed image. The first pixel neighborhood is filtered with the first filter and the second pixel neighborhood is filtered with the second filter, so that the first to-be-processed image is filtered at different scales and noise is removed from image features and details in different frequency bands. The filtered first to-be-processed image and the filtered second to-be-processed image are then fused to improve the image quality of the enhanced first to-be-processed image.
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments described above, the written order of the steps does not imply a strict order of execution; the actual order of execution should be determined by the functions of the steps and their possible internal logic.
The foregoing details the method of embodiments of the present application, and the apparatus of embodiments of the present application is provided below.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, where the apparatus 1 includes: an acquisition unit 11, a first processing unit 12, a filtering processing unit 13, a first determining unit 14, a second determining unit 15, a third determining unit 16, a second processing unit 17, an arrangement unit 18, an up-sampling processing unit 19, and a fourth determining unit 20, wherein:
An acquisition unit 11 for acquiring a first image to be processed;
the first processing unit 12 is configured to obtain a first filter of a first pixel neighborhood according to a similarity between pixels in the first pixel neighborhood and structural feature data of the first pixel neighborhood in the first image to be processed, where the structural feature data carries edge information in the first pixel neighborhood;
and the filtering processing unit 13 is configured to perform filtering processing on the first pixel point neighborhood by using the first filter, so as to obtain an enhanced first image to be processed.
In one possible implementation, the similarity includes: spatial similarity and gray scale similarity; the device 1 further comprises:
a first determining unit 14, configured to determine the first pixel point neighborhood from the first image to be processed after the first image to be processed is acquired and before the first filter of the first pixel point neighborhood is obtained according to the similarity between pixels in the first pixel point neighborhood and the structural feature data of the first pixel point neighborhood;
a second determining unit 15, configured to determine a first euclidean distance between a central pixel point of the first pixel point neighborhood and a first pixel point in the first pixel point neighborhood, so as to obtain the spatial similarity;
And a third determining unit 16, configured to determine a first gray scale distance between the center pixel point and the first pixel point, so as to obtain the gray scale similarity.
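The two similarity ingredients computed by the second and third determining units can be sketched as plain distances; how the distances are then mapped to similarity kernels (e.g. via a Gaussian) is left to the filter construction, and the exact form is an assumption of this sketch:

```python
import numpy as np

def spatial_and_gray_distances(center_pos, pixel_pos, center_gray, pixel_gray):
    """First Euclidean distance between the positions of the central
    pixel and the first pixel (spatial similarity), and first gray scale
    distance between their gray values (gray scale similarity).
    Smaller distances mean higher similarity."""
    first_euclidean = float(np.hypot(center_pos[0] - pixel_pos[0],
                                     center_pos[1] - pixel_pos[1]))
    first_gray = abs(float(center_gray) - float(pixel_gray))
    return first_euclidean, first_gray
```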
In another possible implementation, the size of the first filter is the same as the size of the first pixel neighborhood; the structural feature data of the first pixel neighborhood comprises structural feature values of pixels in the first pixel neighborhood; the structural characteristic value is used for representing texture information of pixel points in the neighborhood of the first pixel point;
the first processing unit 12 is configured to:
determining a value of a first element in the first filter according to the first Euclidean distance, the first gray scale distance, and a first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point; the position of the first pixel in the first filter is the same as the position of the first pixel in the first pixel neighborhood.
In a further possible implementation, the first processing unit 12 is configured to:
obtaining a first initial weight according to the first Euclidean distance, the first gray scale distance and the first difference;
And carrying out normalization processing on the first initial weight to obtain the value of the first element.
In a further possible implementation, the first processing unit is configured to:
determining a product of a second Euclidean distance between a second pixel point in the neighborhood of the first pixel point and the central pixel point, a second gray scale distance between the second pixel point and the central pixel point, and a second difference between a structural characteristic value of the second pixel point and a structural characteristic value of the central pixel point, so as to obtain a second initial weight;
determining the sum of the first initial weight and the second initial weight to obtain a third initial weight;
and determining the quotient of the first initial weight and the third initial weight to obtain the value of the first element.
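Putting these pieces together, the element values of the first filter can be sketched as initial weights built from the three quantities and then normalised so that the elements sum to one. Mapping each distance through a Gaussian kernel, and the sigma values, are assumptions of this sketch; the text only fixes the combine-and-normalise structure:

```python
import numpy as np

def first_filter(patch, struct_vals, sigma_s=1.0, sigma_r=25.0, sigma_t=3.0):
    """One weight per neighborhood pixel, built from (a) its squared
    Euclidean distance to the center, (b) its squared gray scale distance
    to the center and (c) the squared difference of structural feature
    values, then normalised so the weights sum to one (each element is
    the quotient of its initial weight and the sum of all of them).
    The Gaussian kernels and sigma values are illustrative assumptions."""
    k = patch.shape[0]
    c = k // 2
    ys, xs = np.mgrid[0:k, 0:k]
    d_spatial = (ys - c) ** 2 + (xs - c) ** 2          # Euclidean distance term
    d_gray = (patch - patch[c, c]) ** 2                # gray scale distance term
    d_struct = (struct_vals - struct_vals[c, c]) ** 2  # structural difference term
    w = (np.exp(-d_spatial / (2 * sigma_s ** 2))
         * np.exp(-d_gray / (2 * sigma_r ** 2))
         * np.exp(-d_struct / (2 * sigma_t ** 2)))
    return w / w.sum()                                 # normalization step
```

The structural term is what distinguishes this filter from a plain bilateral filter: pixels across an edge differ in their structural feature value and so receive lower weight even when their gray values are similar.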
In a further possible implementation, the apparatus 1 further comprises:
a second processing unit 17, configured to, before the value of the first element in the first filter is determined according to the first Euclidean distance, the first gray scale distance, and the first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point, take a first preset value as the marking value of pixel points in the first pixel point neighborhood whose gray value is greater than the gray value of the central pixel point, and take a second preset value as the marking value of pixel points in the first pixel point neighborhood whose gray value is less than or equal to the gray value of the central pixel point;
And an arrangement unit 18, configured to sequentially arrange the marking values of the pixels in the neighborhood of the first pixel except for the central pixel, so as to obtain a structural feature value of the central pixel.
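The marking-and-arranging scheme of the second processing unit and the arrangement unit resembles a local binary pattern. A minimal sketch for a 3×3 neighborhood follows; the preset values (1 and 0) and the clockwise read-out order are illustrative choices, since the text only fixes the marking rule:

```python
import numpy as np

def structural_feature_value(neigh, first_preset=1, second_preset=0):
    """Structural feature value of the center of a 3x3 neighborhood
    (center at [1, 1]): neighbors with a gray value greater than the
    center's get `first_preset`, the rest get `second_preset`, and the
    marks are arranged in sequence -- here read clockwise and packed as
    the bits of one integer."""
    center = neigh[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]   # clockwise, skipping the center
    marks = [first_preset if neigh[r, c] > center else second_preset
             for r, c in order]
    return int("".join(str(m) for m in marks), 2)
```

Because the marks encode only the sign of the gray difference, the resulting value characterises the local texture and edge layout while being insensitive to overall brightness.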
In a further possible implementation, the filtering processing unit 13 is configured to:
downsampling the first image to be processed to obtain a second image to be processed;
filtering the first pixel point neighborhood by using the first filter to obtain a first to-be-processed image after filtering, and filtering the second pixel point neighborhood in the second to-be-processed image by using a second filter to obtain a second to-be-processed image after filtering; the second pixel point neighborhood is obtained by performing downsampling on the first pixel point neighborhood; the second filter is obtained according to the similarity between the pixels in the neighborhood of the second pixel and the structural feature data of the neighborhood of the second pixel;
performing up-sampling processing on the second to-be-processed image after the filtering processing to enable the size of the second to-be-processed image after the filtering processing to be the same as the size of the first to-be-processed image after the filtering processing, so as to obtain a third to-be-processed image;
And carrying out fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image.
In a further possible implementation, the apparatus 1 further comprises:
an up-sampling processing unit 19, configured to perform up-sampling processing on the second to-be-processed image before performing fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image, so that the size of the second to-be-processed image is the same as the size of the first to-be-processed image, to obtain a fourth to-be-processed image;
a fourth determining unit 20, configured to determine a difference between the first to-be-processed image and the fourth to-be-processed image, to obtain a fifth to-be-processed image;
the filtering processing unit 13 is configured to:
determining the sum of the first image to be processed and the fifth image to be processed to obtain a sixth image to be processed;
and determining the sum of the third to-be-processed image and the sixth to-be-processed image to obtain the enhanced first to-be-processed image.
In yet another possible implementation, the first image to be processed includes a human face.
According to this embodiment, the weights of the pixels in the first pixel point neighborhood are determined from the similarity between the pixels in the first pixel point neighborhood and from the structural feature values of those pixels, and the first filter of the first pixel point neighborhood is thereby determined. Filtering the first pixel point neighborhood with the first filter makes better use of the information of the pixels in the neighborhood other than the pixel to be processed, improving the definition of the pixel to be processed, removing its noise and enriching its information, so that the image quality of the first to-be-processed image is enhanced.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
Fig. 14 is a schematic hardware structure of an image processing apparatus according to an embodiment of the present application. The image processing device 2 comprises a processor 21, a memory 22, an input device 23 and an output device 24. The processor 21, memory 22, input device 23, and output device 24 are coupled by connectors, which include various interfaces, transmission lines, buses and the like; this is not limited in this application. It should be understood that, in the various embodiments of the present application, "coupled" means interconnected in a particular manner, either directly or indirectly through other devices, for example through various interfaces, transmission lines, buses, etc.
The processor 21 may be one or more graphics processing units (GPUs); in the case where the processor 21 is a GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 21 may be a processor group formed by a plurality of GPUs coupled to each other through one or more buses. The processor may also be another type of processor, which is not limited in the embodiments of the present application.
Memory 22 may be used to store computer program instructions as well as various types of computer program code for executing aspects of the present application. Optionally, the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM) for the associated instructions and data.
The input means 23 are for inputting data and/or signals and the output means 24 are for outputting data and/or signals. The input device 23 and the output device 24 may be separate devices or may be an integral device.
It will be appreciated that, in the embodiment of the present application, the memory 22 may be used to store not only related instructions, but also related data, for example, the memory 22 may be used to store a first to-be-processed image acquired through the input device 23, or the memory 22 may also be used to store an enhanced first to-be-processed image obtained through the processor 21, etc., and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that fig. 14 shows only a simplified design of an image processing apparatus. In practical applications, the image processing apparatus may also include other necessary elements, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses capable of implementing the embodiments of the present application are within the scope of protection of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein. It will be further apparent to those skilled in the art that the descriptions of the various embodiments herein are provided with emphasis, and that the same or similar parts may not be explicitly described in different embodiments for the sake of convenience and brevity of description, and thus, parts not described in one embodiment or in detail may be referred to in the description of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium, and when executed, the program may include the flows of the above method embodiments. The aforementioned storage medium includes: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.

Claims (19)

1. An image processing method, the method comprising:
acquiring a first image to be processed;
obtaining a first filter of a first pixel neighborhood according to similarity between pixels in the first pixel neighborhood and structural feature data of the first pixel neighborhood in the first image to be processed, wherein the structural feature data carries edge information in the first pixel neighborhood; the edge information of the first pixel neighborhood includes: information of the positions of the edges in the first pixel point neighborhood in the first image to be processed, and the position relation between different edges in the first pixel point neighborhood;
And filtering the first pixel point neighborhood by using the first filter to obtain an enhanced first image to be processed.
2. The method of claim 1, wherein the similarity comprises: spatial similarity and gray scale similarity;
after the first image to be processed is acquired, before the first filter of the first pixel neighborhood is obtained according to the similarity between pixels in the first pixel neighborhood in the first image to be processed and the structural feature data of the first pixel neighborhood, the method further includes:
determining the first pixel point neighborhood from the first image to be processed;
determining a first Euclidean distance between a central pixel point of the first pixel point neighborhood and a first pixel point in the first pixel point neighborhood, and obtaining the spatial similarity;
and determining a first gray scale distance between the central pixel point and the first pixel point to obtain the gray scale similarity.
3. The method of claim 2, wherein the first filter has a size that is the same as a size of the first pixel neighborhood; the structural feature data of the first pixel neighborhood comprises structural feature values of pixels in the first pixel neighborhood, and the structural feature values are used for representing texture information of the pixels in the first pixel neighborhood;
The obtaining a first filter of the first pixel neighborhood according to the similarity between pixels in the first pixel neighborhood in the first image to be processed and the structural feature data of the first pixel neighborhood includes:
determining a value of a first element in the first filter according to the first Euclidean distance, the first gray scale distance, and a first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point; the position of the first pixel in the first filter is the same as the position of the first pixel in the first pixel neighborhood.
4. The method of claim 3, wherein determining the value of the first element in the first filter based on the first euclidean distance, the first gray level distance, and a first difference between the structural feature value of the first pixel point and the structural feature value of the center pixel point comprises:
obtaining a first initial weight according to the first Euclidean distance, the first gray scale distance and the first difference;
and carrying out normalization processing on the first initial weight to obtain the value of the first element.
5. The method of claim 4, wherein normalizing the first initial weight value to obtain the value of the first element comprises:
determining a product of a second Euclidean distance between a second pixel point in the neighborhood of the first pixel point and the central pixel point, a second gray scale distance between the second pixel point and the central pixel point, and a second difference between a structural characteristic value of the second pixel point and a structural characteristic value of the central pixel point, so as to obtain a second initial weight;
determining the sum of the first initial weight and the second initial weight to obtain a third initial weight;
and determining the quotient of the first initial weight and the third initial weight to obtain the value of the first element.
6. The method of any one of claims 3 to 5, wherein prior to said determining the value of the first element in the first filter as a function of the first euclidean distance, the first gray scale distance, the first difference between the structural feature value of the first pixel point and the structural feature value of the center pixel point, the method further comprises:
taking a first preset value as the marking value of pixel points in the first pixel point neighborhood whose gray value is greater than the gray value of the central pixel point, and taking a second preset value as the marking value of pixel points in the first pixel point neighborhood whose gray value is less than or equal to the gray value of the central pixel point;
And sequentially arranging the marking values of the pixel points except the central pixel point in the neighborhood of the first pixel point to obtain the structural characteristic value of the central pixel point.
7. The method according to any one of claims 1 to 5, wherein filtering the first pixel neighborhood with the first filter to obtain an enhanced first image to be processed includes:
downsampling the first image to be processed to obtain a second image to be processed;
filtering the first pixel point neighborhood by using the first filter to obtain a first to-be-processed image after filtering, and filtering the second pixel point neighborhood in the second to-be-processed image by using a second filter to obtain a second to-be-processed image after filtering; the second pixel point neighborhood is obtained by performing downsampling on the first pixel point neighborhood; the second filter is obtained according to the similarity between the pixels in the neighborhood of the second pixel and the structural feature data of the neighborhood of the second pixel;
performing up-sampling processing on the second to-be-processed image after the filtering processing to enable the size of the second to-be-processed image after the filtering processing to be the same as the size of the first to-be-processed image after the filtering processing, so as to obtain a third to-be-processed image;
And carrying out fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image.
8. The method of claim 7, wherein prior to said fusing the filtered first to-be-processed image and the third to-be-processed image to obtain the enhanced first to-be-processed image, the method further comprises:
performing up-sampling processing on the second to-be-processed image to enable the size of the second to-be-processed image to be the same as the size of the first to-be-processed image, so as to obtain a fourth to-be-processed image;
determining the difference between the first image to be processed and the fourth image to be processed to obtain a fifth image to be processed;
the fusing processing is performed on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image, which comprises the following steps:
determining the sum of the first image to be processed and the fifth image to be processed to obtain a sixth image to be processed;
and determining the sum of the third to-be-processed image and the sixth to-be-processed image to obtain the enhanced first to-be-processed image.
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire a first image to be processed;
the first processing unit is used for obtaining a first filter of a first pixel neighborhood according to the similarity between pixels in the first pixel neighborhood and the structural feature data of the first pixel neighborhood in the first image to be processed, wherein the structural feature data carries edge information in the first pixel neighborhood; the edge information of the first pixel neighborhood includes: information of the positions of the edges in the first pixel point neighborhood in the first image to be processed, and the position relation between different edges in the first pixel point neighborhood;
and the filtering processing unit is used for carrying out filtering processing on the first pixel point neighborhood by using the first filter to obtain an enhanced first image to be processed.
10. The apparatus of claim 9, wherein the similarity comprises: spatial similarity and gray scale similarity; the apparatus further comprises:
the first determining unit is configured to determine, after the first image to be processed is acquired, a first pixel neighborhood from the first image to be processed according to a similarity between pixels in a first pixel neighborhood in the first image to be processed and structural feature data of the first pixel neighborhood, and before a first filter of the first pixel neighborhood is obtained;
The second determining unit is used for determining a first Euclidean distance between a central pixel point of the first pixel point neighborhood and a first pixel point in the first pixel point neighborhood to obtain the spatial similarity;
and the third determining unit is used for determining a first gray scale distance between the central pixel point and the first pixel point to obtain the gray scale similarity.
11. The apparatus of claim 10, wherein the first filter has a size that is the same as a size of the first pixel neighborhood; the structural feature data of the first pixel neighborhood comprises structural feature values of pixels in the first pixel neighborhood, and the structural feature values are used for representing texture information of the pixels in the first pixel neighborhood;
the first processing unit is used for:
determining a value of a first element in the first filter according to the first Euclidean distance, the first gray scale distance, and a first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point; the position of the first pixel in the first filter is the same as the position of the first pixel in the first pixel neighborhood.
12. The apparatus of claim 11, wherein the first processing unit is configured to:
obtaining a first initial weight according to the first Euclidean distance, the first gray scale distance and the first difference;
and carrying out normalization processing on the first initial weight to obtain the value of the first element.
13. The apparatus of claim 12, wherein the first processing unit is configured to:
determining a product of a second Euclidean distance between a second pixel point in the neighborhood of the first pixel point and the central pixel point, a second gray scale distance between the second pixel point and the central pixel point, and a second difference between a structural characteristic value of the second pixel point and a structural characteristic value of the central pixel point, so as to obtain a second initial weight;
determining the sum of the first initial weight and the second initial weight to obtain a third initial weight;
and determining the quotient of the first initial weight and the third initial weight to obtain the value of the first element.
14. The apparatus according to any one of claims 11 to 13, characterized in that the apparatus further comprises:
the second processing unit is configured to, before the value of the first element in the first filter is determined according to the first Euclidean distance, the first gray scale distance, and the first difference between the structural feature value of the first pixel point and the structural feature value of the central pixel point, take a first preset value as the marking value of pixel points in the first pixel point neighborhood whose gray value is greater than the gray value of the central pixel point, and take a second preset value as the marking value of pixel points in the first pixel point neighborhood whose gray value is less than or equal to the gray value of the central pixel point;
And the arrangement unit is used for sequentially arranging the marking values of the pixel points except the central pixel point in the neighborhood of the first pixel point to obtain the structural characteristic value of the central pixel point.
15. The apparatus according to any one of claims 9 to 13, wherein the filtering processing unit is configured to:
downsampling the first image to be processed to obtain a second image to be processed;
filtering the first pixel point neighborhood by using the first filter to obtain a first to-be-processed image after filtering, and filtering the second pixel point neighborhood in the second to-be-processed image by using a second filter to obtain a second to-be-processed image after filtering; the second pixel point neighborhood is obtained by performing downsampling on the first pixel point neighborhood; the second filter is obtained according to the similarity between the pixels in the neighborhood of the second pixel and the structural feature data of the neighborhood of the second pixel;
performing up-sampling processing on the second to-be-processed image after the filtering processing to enable the size of the second to-be-processed image after the filtering processing to be the same as the size of the first to-be-processed image after the filtering processing, so as to obtain a third to-be-processed image;
And carrying out fusion processing on the first to-be-processed image after the filtering processing and the third to-be-processed image to obtain the enhanced first to-be-processed image.
16. The apparatus of claim 15, wherein the apparatus further comprises:
the up-sampling processing unit is used for up-sampling the second to-be-processed image before the first to-be-processed image after the filtering processing and the third to-be-processed image are fused to obtain the enhanced first to-be-processed image, so that the size of the second to-be-processed image is the same as the size of the first to-be-processed image to obtain a fourth to-be-processed image;
a fourth determining unit, configured to determine a difference between the first to-be-processed image and the fourth to-be-processed image, to obtain a fifth to-be-processed image;
the filtering processing unit is used for:
determining the sum of the first image to be processed and the fifth image to be processed to obtain a sixth image to be processed;
and determining the sum of the third to-be-processed image and the sixth to-be-processed image to obtain the enhanced first to-be-processed image.
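The arithmetic of claim 16 amounts to adding back the high-frequency detail lost by downsampling. A minimal sketch (nearest-neighbor upsampling is an assumption; the claims do not specify the resampling method):

```python
import numpy as np

def upsample(img, shape):
    # nearest-neighbor resize to a target shape (assumed method)
    rows = (np.arange(shape[0]) * img.shape[0]) // shape[0]
    cols = (np.arange(shape[1]) * img.shape[1]) // shape[1]
    return img[np.ix_(rows, cols)]

def fuse(first, second, third):
    # follows claim 16 step by step
    fourth = upsample(second, first.shape)  # fourth to-be-processed image
    fifth = first - fourth                  # fifth: detail (difference) image
    sixth = first + fifth                   # sixth: detail-boosted image
    return third + sixth                    # enhanced first to-be-processed image
```

For example, with a constant first image of 10, a constant downsampled second image of 8, and a zero third image, the detail is 2, the boosted image is 12, and the enhanced output is 12 everywhere.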
17. A processor for performing the method of any one of claims 1 to 8.
18. An electronic device, comprising: a processor, transmission means, input means, output means and memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 8.
19. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program comprising program instructions which, when executed by a processor of an electronic device, cause the processor to perform the method of any of claims 1 to 8.
CN201911157608.4A 2019-11-22 2019-11-22 Image processing method and device, processor, electronic equipment and storage medium Active CN110910326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911157608.4A CN110910326B (en) 2019-11-22 2019-11-22 Image processing method and device, processor, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110910326A CN110910326A (en) 2020-03-24
CN110910326B true CN110910326B (en) 2023-07-28

Family

ID=69818987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911157608.4A Active CN110910326B (en) 2019-11-22 2019-11-22 Image processing method and device, processor, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110910326B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724404A (en) * 2020-06-28 2020-09-29 深圳市慧鲤科技有限公司 Edge detection method and device, electronic equipment and storage medium
CN111724326B (en) * 2020-06-28 2023-08-01 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112446837A (en) * 2020-11-10 2021-03-05 浙江大华技术股份有限公司 Image filtering method, electronic device and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109961406A (en) * 2017-12-25 2019-07-02 深圳市优必选科技有限公司 A kind of method, apparatus and terminal device of image procossing

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN101706954B (en) * 2009-11-13 2014-10-29 北京中星微电子有限公司 Image enhancement method and device thereof as well as image low frequency component computing method and device thereof
KR101248808B1 (en) * 2011-06-03 2013-04-01 주식회사 동부하이텍 Apparatus and method for removing noise on edge area
CN103578089A (en) * 2013-05-22 2014-02-12 武汉大学 Depth map filtering method and system based on local binary pattern operator guidance
CN103729625A (en) * 2013-12-31 2014-04-16 青岛高校信息产业有限公司 Face identification method
CN105913396B (en) * 2016-04-11 2018-10-19 湖南源信光电科技有限公司 A kind of image border holding mixing denoising method of noise estimation
CN108205804B (en) * 2016-12-16 2022-05-31 斑马智行网络(香港)有限公司 Image processing method and device and electronic equipment


Non-Patent Citations (2)

Title
Low level image processing and analysis using radius filters; K. Tsirikolias et al.; Digital Signal Processing; full text *
Anisotropic interpolation of tensor-valued images based on the Riemann metric; Shao Yu et al.; Journal of Tsinghua University; full text *


Similar Documents

Publication Publication Date Title
CN108229497B (en) Image processing method, image processing apparatus, storage medium, computer program, and electronic device
CN110910326B (en) Image processing method and device, processor, electronic equipment and storage medium
JP7446457B2 (en) Image optimization method and device, computer storage medium, computer program, and electronic equipment
US11151780B2 (en) Lighting estimation using an input image and depth map
CN111275784B (en) Method and device for generating image
CN109978077B (en) Visual recognition method, device and system and storage medium
CN111652054B (en) Joint point detection method, gesture recognition method and device
CN110619334B (en) Portrait segmentation method based on deep learning, architecture and related device
CN110414593B (en) Image processing method and device, processor, electronic device and storage medium
CN110751024A (en) User identity identification method and device based on handwritten signature and terminal equipment
CN112149732A (en) Image protection method and device, electronic equipment and storage medium
JP2022550195A (en) Text recognition method, device, equipment, storage medium and computer program
CN111814682A (en) Face living body detection method and device
CN111199169A (en) Image processing method and device
CN111882565A (en) Image binarization method, device, equipment and storage medium
JP7167359B2 (en) Image labeling method, apparatus, electronic device, storage medium and computer program
CN117197405A (en) Augmented reality method, system and storage medium for three-dimensional object
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN111489289B (en) Image processing method, image processing device and terminal equipment
CN111767924A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN113065480B (en) Handwriting style identification method and device, electronic device and storage medium
CN111931794B (en) Sketch-based image matching method
CN111932466B (en) Image defogging method, electronic equipment and storage medium
CN111369425B (en) Image processing method, apparatus, electronic device, and computer readable medium
CN116543246A (en) Training method of image denoising model, image denoising method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant