CN107330905B - Image processing method, device and storage medium


Info

Publication number
CN107330905B
Authority
CN
China
Prior art keywords
target image
boundary value
image
boundary
preset
Prior art date
Legal status
Active
Application number
CN201710455122.3A
Other languages
Chinese (zh)
Other versions
CN107330905A (en)
Inventor
刘华星 (Liu Huaxing)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710455122.3A
Publication of CN107330905A
Application granted
Publication of CN107330905B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10024: Color image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method comprising the following steps: acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information; generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm; comparing the first boundary value set with the second boundary value set to generate a feature comparison result; and determining an image sharpness comparison result of the first target image and the second target image according to the feature comparison result. The embodiment of the invention compares the edges of the two target images quantitatively and at a fine granularity, and thereby compares their image sharpness accurately. The invention also discloses an image processing device and a storage medium.

Description

Image processing method, device and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
Image sharpness is one of the criteria for measuring the quality of an image. Image sharpness refers to how clearly details and their edges are rendered in an image. At the edges of image detail, the faster and the more contrasted the change of optical density or brightness with position, the sharper and more discernible those edges appear. The human visual system is particularly sensitive to boundary regions where optical density or brightness changes abruptly. Specifically, image sharpness refers to the degree of clarity a human observer perceives in an image as a whole; it is a subjective impression produced by the combined objective performance of the imaging system and the display device.
At present, image sharpness is generally judged by the subjective impression of the human visual system, which makes it difficult to determine the sharpness of an image quantitatively and accurately. Furthermore, when comparing the sharpness of two or more images with the same content information, the judgment must be made manually, and inaccurate judgments easily occur.
Disclosure of Invention
The invention aims to provide an image processing method, an image processing device and a storage medium capable of comparing the image sharpness of images accurately and quickly.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
an image processing method, comprising:
acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information;
generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm;
comparing the first boundary value set with the second boundary value set to generate a feature comparison result;
and determining an image sharpness comparison result of the first target image and the second target image according to the feature comparison result.
Further, the image processing method further includes:
adjusting the image size of the second target image so that the image size difference between the second target image and the first target image is smaller than or equal to a preset difference threshold;
the generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm includes:
and generating a first boundary value set of the first target image and a second boundary value set of the adjusted second target image based on a preset image edge detection algorithm.
Further, the image processing method further specifically includes:
dividing the first target image according to a preset division algorithm to form a plurality of first area units with preset sizes, and dividing the second target image to form a plurality of second area units with preset sizes;
generating a first boundary value of the first area unit and a second boundary value of the second area unit based on a preset image edge detection algorithm;
determining all first boundary values of the first target image as a first set of boundary values of the first target image and all second boundary values of the second target image as a second set of boundary values of the second target image.
Further, the image processing method further specifically includes:
acquiring first gray information of the first area unit and second gray information of the second area unit;
and calculating a first boundary value of the first area unit according to the first gray information and calculating a second boundary value of the second area unit according to the second gray information based on a preset image edge detection algorithm.
Further, the image processing method further specifically includes:
filtering the first boundary value set and the second boundary value set according to a preset boundary threshold value;
the comparing the first set of boundary values and the second set of boundary values to generate a feature comparison result includes:
and comparing the filtered first boundary value set with the filtered second boundary value set to generate a feature comparison result.
Further, the image processing method further specifically includes:
comparing the first boundary value and the second boundary value with a preset boundary threshold value respectively;
if the first boundary value is smaller than the preset boundary threshold, setting the first boundary value as a preset value, and if the second boundary value is smaller than the preset boundary threshold, setting the second boundary value as the preset value.
Further, the image processing method further specifically includes:
calling a preset corresponding relation between the first area unit and the second area unit;
comparing first boundary values in the first boundary value set with second boundary values in the second boundary value set one by one based on the corresponding relation;
if the first boundary value is higher than the second boundary value, marking a first area unit corresponding to the first boundary value as a first marking unit; and if the second boundary value is higher than the first boundary value, marking a second area unit corresponding to the second boundary value as a second marking unit.
Further, the image processing method further specifically includes:
respectively counting the number of the first marking units and the number of the second marking units;
if the number of the first marking units is greater than the number of the second marking units, determining that the image sharpness of the first target image is higher than that of the second target image; and if the number of the first marking units is smaller than the number of the second marking units, determining that the image sharpness of the first target image is lower than that of the second target image.
In order to solve the above technical problems, embodiments of the present invention further provide the following technical solutions:
an image processing apparatus, comprising:
an acquisition module, used for acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information;
the generating module is used for generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm;
the first comparison module is used for comparing the first boundary value set with the second boundary value set to generate a feature comparison result;
and the second comparison module is used for determining an image sharpness comparison result of the first target image and the second target image according to the feature comparison result.
Further, the image processing apparatus further includes:
the adjusting module is used for adjusting the image size of the second target image so that the image size difference between the second target image and the first target image is smaller than or equal to a preset difference threshold;
the generating module is used for generating a first boundary value set of the first target image and a second boundary value set of the adjusted second target image based on a preset image edge detection algorithm.
Further, the generating module includes:
the dividing submodule is used for dividing the first target image according to a preset dividing algorithm to form a plurality of first area units with preset sizes, and dividing the second target image to form a plurality of second area units with preset sizes;
the generation submodule is used for generating a first boundary value of the first area unit and a second boundary value of the second area unit based on a preset image edge detection algorithm;
a first determining sub-module for determining all first boundary values of the first target image as a first set of boundary values of the first target image and all second boundary values of the second target image as a second set of boundary values of the second target image.
Further, the generating sub-module includes:
the acquisition submodule is used for acquiring first gray information of the first area unit and second gray information of the second area unit;
and the calculation submodule is used for calculating a first boundary value of the first area unit according to the first gray information and calculating a second boundary value of the second area unit according to the second gray information based on the preset image edge detection algorithm.
Further, the image processing apparatus further includes:
the filtering module is used for filtering the first boundary value set and the second boundary value set according to a preset boundary threshold value;
the first comparison module is configured to compare the filtered first boundary value set and the filtered second boundary value set, and generate a feature comparison result.
Further, the filtering module comprises:
the first comparison submodule is used for comparing the first boundary value and the second boundary value with a preset boundary threshold value respectively;
and the setting submodule is used for setting the first boundary value as a preset value if the first boundary value is smaller than the preset boundary threshold value, and setting the second boundary value as the preset value if the second boundary value is smaller than the preset boundary threshold value.
Further, the first comparing module comprises:
the calling submodule is used for calling a preset corresponding relation between the first area unit and the second area unit;
the second comparison submodule is used for comparing the first boundary values in the first boundary value set with the second boundary values in the second boundary value set one by one on the basis of the corresponding relation;
a marking submodule, configured to mark a first area unit corresponding to the first boundary value as a first marking unit if the first boundary value is higher than the second boundary value, and mark a second area unit corresponding to the second boundary value as a second marking unit if the second boundary value is higher than the first boundary value.
Further, the second comparing module comprises:
the counting submodule is used for respectively counting the number of the first marking units and the number of the second marking units;
the second determining submodule is used for determining that the image sharpness of the first target image is higher than that of the second target image if the number of the first marking units is larger than that of the second marking units, and for determining that the image sharpness of the first target image is lower than that of the second target image if the number of the first marking units is smaller than that of the second marking units.
In order to solve the above technical problems, embodiments of the present invention further provide the following technical solutions:
a storage medium for storing a plurality of instructions adapted to be loaded by a processor and to perform the image processing method as described above.
Compared with the prior art, embodiments of the present invention first acquire a first target image and a second target image, where the two images have the same content information; then generate a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm; and finally compare the two sets to generate a feature comparison result and determine an image sharpness comparison result of the two images from it. Processing the first target image and the second target image with the preset image edge detection algorithm yields a first boundary value set and a second boundary value set that quantitatively characterize the sharpness and richness of the edges of the respective images. Comparing the two sets therefore compares the edges of the two images quantitatively and at a fine granularity, so that their image sharpness can be compared accurately. Embodiments of the invention thus improve both the accuracy and the efficiency of image sharpness comparison.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
FIG. 1a is a schematic view of a scene of an image processing method according to an embodiment of the present invention;
FIG. 1b is a schematic flowchart of an image processing method according to the first embodiment;
FIG. 2a is a flowchart illustrating an image processing method according to a second embodiment of the present invention;
FIG. 2b is a schematic diagram of two target images provided by the second embodiment of the present invention;
FIG. 2c is a schematic diagram of resizing an image according to a second embodiment of the present invention;
FIG. 2d is a schematic diagram of a divided target image provided by a second embodiment of the present invention;
FIG. 2e is a schematic diagram of a gray scale map provided by the second embodiment of the present invention;
FIG. 2f is a schematic illustration of a coordinate representation of a region unit provided by a second embodiment of the present invention;
FIG. 2g is a schematic diagram of filtering noise from a target image according to the second embodiment of the present invention;
FIG. 3a is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention;
FIG. 3b is a schematic diagram of another structure of an image processing apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present invention are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will be referred to, several times, as being performed by a computer, the computer performing operations involving a processing unit of the computer in electronic signals representing data in a structured form. This operation transforms the data or maintains it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data maintains a data structure that is a physical location of the memory that has particular characteristics defined by the data format. While the principles of the invention have been described in the foregoing context, which is not intended to be limiting, those of skill in the art will appreciate that various of the steps and operations described below may be implemented in hardware.
The term "module" as used herein may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present invention.
The embodiment of the invention provides an image processing method and device.
The image processing apparatus may be integrated in a terminal or a server, where the terminal may be a network device having a storage unit and a microprocessor, such as a mobile phone, a notebook computer, a tablet PC, and the like.
For example, referring to fig. 1a, which is a scene schematic diagram of the image processing method provided by the embodiment of the present invention, the image processing apparatus is integrated in a tablet PC, and is mainly used for determining an image sharpness comparison result of a first target image and a second target image.
For example, when the image processing apparatus receives an operation instruction from a user, it first acquires a first target image and a second target image, wherein the two images have the same content information; secondly, it generates a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm; it then compares the first boundary value set with the second boundary value set to generate a feature comparison result; and finally, it determines an image sharpness comparison result of the first target image and the second target image according to the feature comparison result.
The details will be described below separately. The numbers in the following examples are not intended to limit the order of preference of the examples.
First embodiment
In the present embodiment, description will be made from the viewpoint of an image processing apparatus that can be specifically integrated in a terminal or the like, such as a mobile phone, a notebook computer, a tablet PC, or the like.
An image processing method comprising: step S101, acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information; step S102, generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm; step S103, comparing the first boundary value set with the second boundary value set to generate a feature comparison result; and step S104, determining an image sharpness comparison result of the first target image and the second target image according to the feature comparison result.
Referring to fig. 1b, fig. 1b is a schematic flow chart of an image processing method according to a first embodiment of the invention. The method comprises the following steps:
in step S101, a first target image and a second target image are acquired, wherein the first target image and the second target image have the same content information.
For example, taking the case where the image processing apparatus is integrated in a tablet PC in which at least two images having the same content information are stored in advance, the user specifies two of the images having the same content information through the tablet PC as a first target image and a second target image, respectively. In some embodiments, the user may also download two images with the same content information from the internet via the tablet PC.
For another example, in the case where the image processing apparatus is integrated in a tablet PC, at least two images having the same content scene are stored in the server in advance, and the user specifies two of the images having the same content information through the tablet PC, and the two images are respectively used as the first target image and the second target image.
It should be noted that the first target image and the second target image have the same content information, which means that the first target image and the second target image have the same scene, and the scene may be composed of a plurality of objects (such as characters, animals, people, buildings, etc.). That is, the first target image and the second target image have the same content information, which indicates that the first target image and the second target image have the same plurality of objects, and the corresponding objects have the same shape and scale.
It can be understood that the first target image and the second target image have the same image aspect ratio since the first target image and the second target image have the same content information. Wherein the image aspect ratio refers to the ratio between the length and the width of an image.
In this embodiment of the present invention, after the image processing apparatus acquires the first target image and the second target image (i.e. step S101), and before the first boundary value set of the first target image and the second boundary value set of the second target image are generated based on the preset image edge detection algorithm (i.e. step S102), the image size of the second target image also needs to be adjusted, which may specifically include:
and adjusting the image size of the second target image to ensure that the difference value of the image sizes of the second target image and the first target image is less than or equal to a preset difference threshold value.
For example, the size of the second target image is adjusted under the condition that the aspect ratio of the second target image is not changed, so that the difference value between the image sizes of the second target image and the first target image is smaller than or equal to the preset difference threshold.
Wherein the image size refers to the length and width of the image. The length and width of the image may be in pixels or centimeters.
It should be noted that the image size difference value refers to a difference between a length of the first target image and a length of the second target image, or a difference between a width of the first target image and a width of the second target image.
The preset difference threshold indicates that, if the image size difference between the first target image and the second target image is smaller than the threshold, the two images are considered to have the same image size. It is understood that the preset difference threshold may be determined according to attributes such as the image processing performance of the terminal. For example, the preset difference threshold may be set to zero; in that case the difference between the lengths of the first and second target images (or between their widths) is zero, that is, the two images have the same image size.
In some embodiments, the image size of the first target image may be adjusted so that the difference value between the image sizes of the first target image and the second target image is less than or equal to a preset difference threshold.
It is understood that, in some embodiments, the image sizes of the first target image and the second target image may also be adjusted, so that the difference value between the image size of the first target image and the image size of the second target image and the preset image size is smaller than or equal to the preset difference threshold.
In some embodiments, after the first target image and the second target image are acquired, the image sizes of the first target image and the second target image may be compared, and the target image with a larger image size (such as the second target image) may be adjusted so that the difference between the image size of the target image and the image size of the target image with a smaller image size (such as the first target image) is smaller than or equal to the preset difference threshold.
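As a concrete illustration of this resizing step, the following is a minimal sketch assuming Python with OpenCV; the function name and the interpolation choice are illustrative and not part of the patent.

```python
import cv2

def match_size(first_img, second_img):
    """Resize second_img to the image size of first_img (difference
    threshold of zero). The two images are assumed to share the same
    aspect ratio, so the content is not distorted."""
    h, w = first_img.shape[:2]
    # cv2.resize expects (width, height); INTER_AREA suits downscaling
    return cv2.resize(second_img, (w, h), interpolation=cv2.INTER_AREA)
```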
In step S102, a first boundary value set of the first target image and a second boundary value set of the second target image are generated based on a preset image edge detection algorithm.
In some embodiments, generating the first set of boundary values of the first target image and the second set of boundary values of the second target image based on the preset image edge detection algorithm (i.e., step S102) may specifically include:
(11) according to a preset division algorithm, the first target image is divided to form a plurality of first area units with preset sizes, and the second target image is divided to form a plurality of second area units with preset sizes.
(12) And generating a first boundary value of the first area unit and a second boundary value of the second area unit based on a preset image edge detection algorithm.
(13) All first boundary values of the first target image are determined as a first set of boundary values of the first target image and all second boundary values of the second target image are determined as a second set of boundary values of the second target image.
The preset division algorithm is an algorithm for dividing an image into a plurality of area units of a preset size, for example according to the image size of the image and the preset size of an area unit. Because the image size of the second target image is adjusted before the boundary value sets are generated, the first target image and the second target image have the same image size; therefore, after both images are divided with the preset division algorithm, the number of first area units in the first target image equals the number of second area units in the second target image.
In the embodiment of the present invention, the area unit refers to an image area having a specific image size. For example, the area cell represents an image area of 1 pixel by 1 pixel. It will be appreciated that the first region unit in the first object image and the second region unit in the second object image each have a unique coordinate address, which can be considered as a column and row value.
In the embodiment of the present invention, the edge refers to a boundary of a region where a discontinuity of a local characteristic of an image appears, such as a sudden change of gray level, a sudden change of color, a sudden change of texture structure, and the like.
In the embodiment of the present invention, the image edge detection algorithm refers to an algorithm for detecting edge features in an image, such as the Sobel edge detection algorithm, the Roberts edge detection algorithm, the Prewitt edge detection algorithm, and the like. For example, with the preset image edge detection algorithm set to the Sobel edge detection algorithm, the first boundary value of the first region unit and the second boundary value of the second region unit may be generated based on the Sobel edge detection algorithm.
In an embodiment of the invention, the boundary value refers to a gradient magnitude characterizing the sharpness and richness of the edge, which may be calculated from the gradients between the region unit and the surrounding region units, wherein the gradient at a certain point in the scalar field points to the direction in which the scalar field grows fastest, and the gradient magnitude represents the maximum rate of change of the point in that direction. For example, the first boundary value refers to a gradient magnitude representing sharpness and richness of an edge of a first area unit, and the second boundary value refers to a gradient magnitude representing sharpness and richness of an edge of a second area unit, wherein the gradient magnitude is calculated according to a gradient between a corresponding image area and an image area surrounding the image area.
In the embodiment of the present invention, the first boundary value set refers to a set established based on the coordinate address of the first area unit and the first boundary value of the first area unit, that is, each coordinate address in the first target image stores a corresponding first boundary value; the second boundary value set refers to a set established based on the coordinate address of the second region unit and the second boundary value of the second region unit, that is, each coordinate address in the second target image stores a corresponding second boundary value. Since the first boundary value is a gradient magnitude characterizing the sharpness and richness of the edge of the first region unit and the second boundary value is a gradient magnitude characterizing the sharpness and richness of the edge of the second region unit, the first set of boundary values may be used to characterize the sharpness and richness of the edge of the first target image and the second set of boundary values may be used to characterize the sharpness and richness of the edge of the second target image.
In some embodiments, generating the first boundary value of the first region unit and the second boundary value of the second region unit based on a preset image edge detection algorithm may specifically include:
(21) first gray information of a first area unit and second gray information of a second area unit are obtained.
(22) Based on a preset image edge detection algorithm, a first boundary value of the first area unit is calculated according to the first gray information, and a second boundary value of the second area unit is calculated according to the second gray information.
Here, gray information refers to gray levels: the first gray information is the gray level of the first area unit, and the second gray information is the gray level of the second area unit. A gray level is the brightness value corresponding to the light of a given color in an image, covering black, white and the intermediate levels of gray between them. Typically, the gray scale ranges from 0 to 255, with white at 255 and black at 0.
In the embodiment of the present invention, the edge may be a region boundary where the gray level in the image changes sharply, and the boundary value may be characterized by a gradient amplitude calculated according to the gray level.
Further, generating a first boundary value of a first region unit and a second boundary value of a second region unit based on a preset image edge detection algorithm may specifically include: acquiring the gray level of the first area unit and the gray level of the second area unit; and, based on the preset image edge detection algorithm, calculating the gradient magnitude of the first area unit from its gray level and the gradient magnitude of the second area unit from its gray level. In this way, the edges of the first target image are reflected by the gradient magnitudes of the first region units, and the edges of the second target image by the gradient magnitudes of the second region units.
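To make this concrete, here is a hedged sketch of computing a boundary value set with 1 pixel × 1 pixel region units: grayscale conversion followed by per-pixel Sobel gradient magnitudes. It assumes Python with OpenCV and NumPy; combining the components as |Sx|/2 + |Sy|/2 follows the formula given in the second embodiment below.

```python
import cv2
import numpy as np

def boundary_value_set(img):
    """Boundary value (gradient magnitude) for every 1x1 region unit,
    indexed by its coordinate address (row, column)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
    sx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # gradient in x
    sy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # gradient in y
    return 0.5 * np.abs(sx) + 0.5 * np.abs(sy)
```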
In step S103, the first boundary value set and the second boundary value set are compared to generate a feature comparison result.
In some embodiments, before comparing the first boundary value set and the second boundary value set to generate the feature comparison result (step S103), an operation of filtering the first boundary value set and the second boundary value set may further be included, and specifically, the operation may include:
and filtering the first boundary value set and the second boundary value set according to a preset boundary threshold value.
For example, noise points in the first boundary value set and the second boundary value set are filtered out using a preset boundary threshold. A noise point is a first boundary value and/or a second boundary value that does not satisfy the condition for forming an edge.
The preset boundary threshold is the boundary value below which the condition for forming an edge is not met.
Filtering the first boundary value set and the second boundary value set before comparing them removes noise points from the first target image and the second target image, which yields more accurate boundary value sets and therefore a more accurate image sharpness comparison result. Moreover, because the noise points in the two sets have been removed, the amount of data involved in the subsequent comparison is reduced, improving data processing efficiency.
In some embodiments, filtering the first boundary value set and the second boundary value set according to a preset boundary threshold may specifically include:
(31) comparing the first boundary value and the second boundary value with a preset boundary threshold value respectively;
(32) and if the first boundary value is smaller than the preset boundary threshold value, setting the first boundary value as a preset value, and if the second boundary value is smaller than the preset boundary threshold value, setting the second boundary value as a preset value.
It can be understood that the preset value may be set to zero, that is, if the first boundary value is smaller than the preset boundary threshold, the corresponding first area unit is not an edge; and if the second boundary value is smaller than the preset boundary threshold value, the corresponding second area unit is not an edge.
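A minimal sketch of this filtering step, assuming the boundary value sets are NumPy arrays as in the earlier sketch; the default threshold of 100 and preset value of 0 are the example values used in the second embodiment.

```python
import numpy as np

def filter_boundary_values(values, threshold=100.0, preset=0.0):
    """Set every boundary value below the preset boundary threshold to
    the preset value, removing noise points that do not form an edge."""
    filtered = np.asarray(values, dtype=np.float64).copy()
    filtered[filtered < threshold] = preset
    return filtered
```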
In this embodiment of the present invention, comparing the first boundary value set and the second boundary value set to generate a feature comparison result, specifically includes:
(41) calling a preset corresponding relation between the first area unit and the second area unit;
(42) and comparing the first boundary values in the first boundary value set with the second boundary values in the second boundary value set one by one based on the corresponding relation.
(43) If the first boundary value is higher than the second boundary value, marking a first area unit corresponding to the first boundary value as a first marking unit; and if the second boundary value is higher than the first boundary value, marking the second area unit corresponding to the second boundary value as a second marking unit.
The preset correspondence between the first area unit and the second area unit may be a correspondence between a coordinate address of the first area unit and a coordinate address of the second area unit, for example, the first area unit and the second area unit having the same coordinate address are determined to have a correspondence.
Further, comparing the first boundary value set with the second boundary value set to generate a feature comparison result, specifically including: calling a first area unit and a second area unit with the same coordinate address, comparing a first boundary value of the first area unit with a second boundary value of the second area unit, and comparing the first boundary value in the first boundary value set with the second boundary value in the second boundary value set one by adopting the method.
In step S104, an image sharpness comparison result of the first target image and the second target image is determined according to the feature comparison result.
In some embodiments, determining the image sharpness comparison result of the first target image and the second target image according to the feature comparison result may specifically include:
(51) and respectively counting the number of the first marking units and the second marking units.
(52) If the number of the first marking units is greater than the number of the second marking units, it is determined that the image sharpness of the first target image is higher than that of the second target image; if the number of the first marking units is smaller than the number of the second marking units, it is determined that the image sharpness of the first target image is lower than that of the second target image.
It can be understood that if the number of the first marking units is greater than the number of the second marking units, the edges in the first target image are sharper and richer than those in the second target image, i.e. the image sharpness of the first target image is higher than that of the second target image; if the number of the first marking units is smaller, the edges in the first target image are less sharp and rich than those in the second target image, i.e. the image sharpness of the first target image is lower than that of the second target image.
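Because corresponding region units share coordinate addresses, the comparison and counting steps above reduce to an element-wise comparison of the two boundary value sets. A hedged sketch follows; the tie case (equal counts) is not specified in the description, so its handling here is an assumption.

```python
import numpy as np

def compare_sharpness(s, p):
    """Element-wise comparison of two boundary value sets followed by a
    count of marked units, yielding the image sharpness comparison result."""
    first_marks = int(np.count_nonzero(s > p))   # first boundary value higher
    second_marks = int(np.count_nonzero(s < p))  # second boundary value higher
    if first_marks > second_marks:
        return "first target image is sharper"
    if first_marks < second_marks:
        return "second target image is sharper"
    return "tie"  # assumption: this case is not covered by the description
```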
As can be seen from the above, the image processing method provided by the embodiment of the present invention first acquires a first target image and a second target image having the same content information; then generates a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm; compares the two sets to generate a feature comparison result; and finally determines an image sharpness comparison result of the two images according to the feature comparison result. The two boundary value sets quantitatively characterize the sharpness and richness of the edges of the respective target images, so comparing them compares the edges of the two images quantitatively and at a fine granularity, and the image sharpness of the two images can thus be compared accurately. The embodiment of the invention therefore improves both the accuracy and the efficiency of image sharpness comparison.
Second embodiment
The method described in the above embodiments is further illustrated in detail by way of example.
Referring to fig. 2a, fig. 2a is a schematic flowchart of an image processing method according to a second embodiment of the invention. The method comprises the following steps:
in step S201, a first target image and a second target image are acquired, wherein the first target image and the second target image have the same content information.
For example, taking the example that the image processing apparatus is integrated in a tablet PC, at least two images with the same content information are stored in the tablet PC in advance, and a user triggers an operation instruction through the tablet PC to select the two images with the same content information, wherein the two images are respectively used as a first target image and a second target image.
It should be noted that the first target image and the second target image have the same content information, which means that the first target image and the second target image have the same scene, and the scene may be composed of a plurality of objects (such as characters, animals, people, buildings, etc.). That is, the first target image and the second target image have the same content information, and the corresponding objects have the same shape and scale. For example, referring to fig. 2b, the first target image and the second target image have the same scene, and the scene is composed of the sun, the sky, the sea, the island, the stone, and the like.
It can be understood that the first target image and the second target image have the same image aspect ratio since the first target image and the second target image have the same content information. Wherein the image aspect ratio refers to the ratio between the length and the width of an image.
In step S202, the image size of the second target image is adjusted.
In the embodiment of the present invention, adjusting the image size of the second target image specifically includes:
and under the condition of not changing the length-width ratio of the second target image, adjusting the image size of the second target image to enable the image size difference value of the second target image and the first target image to be smaller than or equal to a preset difference threshold value.
It is understood that the preset difference threshold may be determined according to attributes such as image processing performance of the terminal. For example, the preset difference threshold may be set to zero, or may be set to 0.001 mm.
For example, referring to fig. 2c, the image size of the second target image 22a is adjusted to generate an adjusted second target image 22b, so that the image size difference between the adjusted second target image 22b and the image size of the first target image 21 is zero. It should be noted that fig. 2c is a schematic diagram illustrating the image size adjustment, and therefore, the content information of the image is omitted.
In step S203, the first target image and the second target image are divided.
For example, according to a preset division algorithm, a first target image is divided to form a plurality of first area units with preset sizes, and a second target image is divided to form a plurality of second area units with preset sizes.
The preset division algorithm is set to divide the image into a plurality of area units with preset sizes according to the image size of the image and the preset sizes of the area units.
For example, referring to fig. 2d, according to a preset division algorithm, the first target image 21 is divided at a granularity of 8 pixels by 6 pixels, that is, into 8 × 6 first area units, each with an image size of 1 pixel × 1 pixel; the second target image 22 is likewise divided into 8 × 6 second area units, each with an image size of 1 pixel × 1 pixel. In the embodiment of the present invention, a first area unit of 1 pixel × 1 pixel is referred to as a first pixel, and a second area unit of 1 pixel × 1 pixel is referred to as a second pixel. Fig. 2d is a schematic diagram of dividing the target images; the content information of the images is omitted here.
In step S204, a first set of boundary values for the first target image and a second set of boundary values for the second target image are generated.
For example, first, based on a preset image edge detection algorithm, a first boundary value of a first area unit and a second boundary value of a second area unit are generated; all first boundary values of the first target image are then determined as a first set of boundary values of the first target image and all second boundary values of the second target image are determined as a second set of boundary values of the second target image.
And the preset image edge detection algorithm is set as a Sobel edge detection algorithm. Generating a first boundary value set of the first target image and a second boundary value set of the second target image may specifically include:
step a, processing the first target image and the second target image to respectively generate a gray scale image of the first target image and a gray scale image of the second target image. The gray scale maps of the first target image and the second target image may be obtained by a common gray scale transformation method, which is not described herein again. Referring to fig. 2e, the two gray scales in fig. 2e are respectively converted from the first target image and the second target image in fig. 2 b.
And b, acquiring a first gray level of the first area unit and a second gray level of the second area unit. Referring to fig. 2d, in the following steps, the first target image 21 is divided into 8 × 6 first area units, the image size of each first area unit is 1 pixel × 1 pixel, and the second target image 22 is divided into 8 × 6 second area units, the image size of each second area unit is 1 pixel × 1 pixel as an example.
And c, calculating the gradient amplitude of the first area unit according to the gray level of the first area unit and calculating the gradient amplitude of the second area unit according to the gray level of the second area unit based on a Sobel edge detection algorithm, wherein the gradient amplitude of the first area unit is determined as a first boundary value of the first area unit, and the gradient amplitude of the second area unit is determined as a second boundary value of the second area unit.
For example, the gray level of the first pixel is M, the gray level of the second pixel is N, the gradient amplitude of the first pixel is S, and the gradient amplitude of the second pixel is P.
Wherein the component Gx of the Sobel operator in the x direction is shown in Table 1, and the component Gy in the y direction is shown in Table 2.

Table 1 (Gx):
-1  0  +1
-2  0  +2
-1  0  +1

Table 2 (Gy):
+1  +2  +1
 0   0   0
-1  -2  -1
In the embodiment of the present invention, calculating the gradient magnitude of the first pixel with the Sobel edge detection algorithm may specifically include: convolving the gray level M of the first pixel with the Gx component of the Sobel operator to obtain the gradient component Sx of the first pixel in the x direction; convolving the gray level M of the first pixel with the Gy component of the Sobel operator to obtain the gradient component Sy of the first pixel in the y direction; and then calculating the gradient magnitude S of the first pixel from Sx and Sy with the formula S = |Sx|/2 + |Sy|/2, where |Sx| and |Sy| denote the absolute values of Sx and Sy.
By the same method, convolving the gray level N of the second pixel with the Gx and Gy components of the Sobel operator gives the gradient component Px of the second pixel in the x direction and the gradient component Py in the y direction, and the gradient magnitude P of the second pixel is then P = |Px|/2 + |Py|/2, where |Px| and |Py| denote the absolute values of Px and Py.
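The following sketch spells out this calculation with the kernels of Tables 1 and 2, again assuming Python with OpenCV and NumPy. Note that cv2.filter2D computes correlation rather than true convolution; because only absolute values enter S, the resulting sign flip does not change the magnitude.

```python
import cv2
import numpy as np

# Sobel components: Gx from Table 1, Gy from Table 2
GX = np.array([[-1, 0, +1],
               [-2, 0, +2],
               [-1, 0, +1]], dtype=np.float64)
GY = np.array([[+1, +2, +1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=np.float64)

def gradient_magnitude(gray):
    """S = |Sx|/2 + |Sy|/2 for each pixel of a grayscale image."""
    sx = cv2.filter2D(gray, cv2.CV_64F, GX)  # gradient component in x
    sy = cv2.filter2D(gray, cv2.CV_64F, GY)  # gradient component in y
    return 0.5 * np.abs(sx) + 0.5 * np.abs(sy)
```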
And d, establishing the one-to-one corresponding relation of the first area unit and the second area unit according to the coordinate address of the first area unit and the coordinate address of the second area unit.
For example, a first area unit and a second area unit having the same coordinate address are determined to have a correspondence relationship.
For example, referring to fig. 2f, the first target image and the second target image are each divided at a granularity of 8 pixels by 6 pixels, and each first pixel and second pixel has a unique coordinate address, which may be expressed as column and row values denoted x and y respectively. A first pixel and a second pixel with the same coordinate address (x, y) are determined to have a correspondence.
And f, establishing a first boundary value set and a second boundary value set based on the coordinate addresses of the first area unit and the second area unit.
For example, in the first boundary value set, S (x, y) may be used to represent the gradient magnitude of the first pixel point with coordinate address (x, y); in the second boundary value set, the gradient magnitude of the second pixel point with the coordinate address (x, y) may be represented by P (x, y). Thus, the first set of boundary values may be used to characterize the sharpness and richness of the edges of the first target image, and the second set of boundary values may be used to characterize the sharpness and richness of the edges of the second target image.
In some embodiments, the first set of boundary values and the second set of boundary values may exist in the form of a matrix, and the coordinate address of the first zone unit and the coordinate address of the second zone unit may be represented by values of columns and rows of the matrix. For example, in the first boundary value set, S (1,2) is used to represent the gradient magnitude of the first pixel point in the 1 st column and the 2 nd row in the first target image; in the second boundary value set, the gradient magnitude of the second pixel point in the 8 th column and the 3 rd row in the second target image is represented by P (8, 3).
In step S205, the first set of boundary values and the second set of boundary values are filtered.
For example, set the preset boundary threshold to 100, and compare the gradient magnitude of each first pixel in the first boundary value set and of each second pixel in the second boundary value set against the preset boundary threshold. If the gradient magnitude of a first pixel in the first boundary value set is smaller than the preset boundary threshold, set it to a preset value; if the gradient magnitude of a second pixel in the second boundary value set is smaller than the preset boundary threshold, set that second pixel's gradient magnitude to the preset value as well. The preset value may be set to 0. Referring to fig. 2g, fig. 2g is a schematic diagram of filtering noise from an image.
In step S206, the first boundary value set and the second boundary value set are compared based on the correspondence relationship between the first region unit and the second region unit, and a feature comparison result is generated.
For example, the gradient magnitudes of the first pixel and the second pixel with the same coordinate (x, y) are retrieved from the first boundary value set and the second boundary value set and compared; for instance, S(1,1) is compared with P(1,1). If S(x,y) is greater than P(x,y), the first pixel of the first target image is marked as a first marking unit; if S(x,y) is smaller than P(x,y), the second pixel of the second target image is marked as a second marking unit. In fact, the first and second marking units are only used for counting, so a value U can simply count the positions where S(x,y) > P(x,y) and a value V the positions where S(x,y) < P(x,y), without actually marking the first or second pixels.
In step S207, determining an image sharpness comparison result of the first target image and the second target image according to the feature comparison result, which specifically includes:
respectively counting the number of the first marking units and the number of the second marking units;
if the number of the first marking units is greater than the number of the second marking units, determining that the image sharpness of the first target image is higher than that of the second target image; if the number of the first marking units is smaller than the number of the second marking units, determining that the image sharpness of the first target image is lower than that of the second target image.
In some embodiments, the number U of positions where S(x, y) is greater than P(x, y) and the number V of positions where S(x, y) is smaller than P(x, y) may be counted respectively; if U is greater than V, the image definition of the first target image is determined to be higher than that of the second target image, and if U is smaller than V, the image definition of the first target image is determined to be lower than that of the second target image.
In addition, for parts that are not described in detail in this embodiment, reference may be made to the detailed description of the image processing method in the first embodiment, and details are not described here again.
According to the embodiment of the invention, the first boundary value set representing the sharpness and richness of the edges of the first target image and the second boundary value set representing the sharpness and richness of the edges of the second target image are calculated to quantitatively and finely compare the edges of the first target image and the second target image, thereby accurately comparing the definition of the first target image with the definition of the second target image, which improves both the accuracy and the efficiency of comparing image definition.
Third embodiment
In order to better implement the image processing method provided by the embodiment of the present invention, an embodiment of the present invention further provides a device based on the image processing method. The terms have the same meanings as in the image processing method above, and specific implementation details may refer to the description in the method embodiment.
Referring to fig. 3a, fig. 3a is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention, wherein the image processing apparatus 300 may include an obtaining module 301, a generating module 302, a first comparing module 303, and a second comparing module 304.
In the image processing apparatus 300, the obtaining module 301 may be configured to obtain a first target image and a second target image, where the first target image and the second target image have the same content information.
For example, taking an example in which the image processing apparatus is integrated in a tablet PC, at least two images having the same content information are stored in the tablet PC in advance, and the user specifies two of the images having the same content information through the tablet PC as a first target image and a second target image, respectively. In some embodiments, the user may also download two images with the same content information from the internet via the tablet PC.
For another example, in the case where the image processing apparatus is integrated in a tablet PC, at least two images having the same content information are stored in a server in advance, and the user specifies two of these images through the tablet PC as the first target image and the second target image, respectively.
It should be noted that the first target image and the second target image have the same content information, which means that the first target image and the second target image have the same scene, and the scene may be composed of a plurality of objects (such as characters, animals, people, buildings, etc.). That is, the first target image and the second target image have the same content information, which means that the first target image and the second target image have the same plurality of objects and the corresponding objects have the same shape and scale.
It can be understood that, since the first target image and the second target image have the same content information, the first target image and the second target image have the same image aspect ratio. Wherein the image aspect ratio refers to the ratio between the length and the width of an image.
In the embodiment of the present invention, the image processing apparatus further includes an adjusting module 305. The adjusting module 305 may be configured to adjust an image size of the second target image such that a difference value between the image sizes of the second target image and the first target image is smaller than or equal to a preset difference threshold.
For example, the size of the second target image is adjusted under the condition that the aspect ratio of the second target image is not changed, so that the difference value between the image sizes of the second target image and the first target image is smaller than or equal to the preset difference threshold.
Wherein the image size refers to the length and width of the image. The length and width of the image may be in pixels or centimeters.
It should be noted that the image size difference value refers to a difference between a length of the first target image and a length of the second target image, or a difference between a width of the first target image and a width of the second target image.
The preset difference threshold is used for indicating that the first target image and the second target image are considered to have the same image size if the difference value of the image sizes of the first target image and the second target image is smaller than the preset difference threshold. It is understood that the preset difference threshold may be determined according to attributes such as image processing performance of the terminal. For example, the preset difference threshold may be set to zero, that is, the difference between the length of the first target image and the length of the second target image is zero, or the difference between the width of the first target image and the width of the second target image is zero, that is, the first target image and the second target image have the same image size.
In some embodiments, the image size of the first target image may be adjusted so that the difference value between the image sizes of the first target image and the second target image is less than or equal to a preset difference threshold.
It is understood that, in some embodiments, the image sizes of both the first target image and the second target image may be adjusted, so that the difference value between each image's size and a preset image size is smaller than or equal to the preset difference threshold.
In some embodiments, after the first target image and the second target image are acquired, the image sizes of the first target image and the second target image may be compared, and the target image with a larger image size (such as the second target image) may be adjusted so that the difference between the image size of the target image and the image size of the target image with a smaller image size (such as the first target image) is smaller than or equal to the preset difference threshold.
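As an illustrative sketch of this size adjustment (assuming OpenCV; since the two target images share the same aspect ratio, resizing the larger one to the smaller one's dimensions also preserves its aspect ratio, which corresponds to a preset difference threshold of zero):

```python
import cv2

def match_image_sizes(img_a, img_b):
    """Resize the larger target image so both have identical dimensions."""
    h_a, w_a = img_a.shape[:2]
    h_b, w_b = img_b.shape[:2]
    if (h_a, w_a) == (h_b, w_b):
        return img_a, img_b
    if h_a * w_a > h_b * w_b:
        # shrink the first image down to the second image's size
        return cv2.resize(img_a, (w_b, h_b), interpolation=cv2.INTER_AREA), img_b
    # otherwise shrink the second image down to the first image's size
    return img_a, cv2.resize(img_b, (w_a, h_a), interpolation=cv2.INTER_AREA)
```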
In the image processing apparatus 300, the generating module 302 may be configured to generate a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm.
In some embodiments, referring to fig. 3b, the generating module 302 may include a dividing sub-module 3021, a generating sub-module 3022, and a first determining sub-module 3033.
The dividing submodule 3021 may be configured to divide the first target image according to a preset dividing algorithm to form a plurality of first area units of a preset size, and divide the second target image to form a plurality of second area units of the preset size.
The generating sub-module 3022 may be configured to generate a first boundary value of the first region unit and a second boundary value of the second region unit based on a preset image edge detection algorithm.
The first determining submodule 3033 may be configured to determine all first boundary values of the first target image as the first set of boundary values of the first target image, and all second boundary values of the second target image as the second set of boundary values of the second target image.
The preset division algorithm refers to an algorithm for dividing an image into a plurality of area units of a preset size, for example, according to the image size of the image and the preset size of the area unit. Because the image size of the second target image is adjusted to match that of the first target image before the first boundary value of the first area unit and the second boundary value of the second area unit are generated based on the preset image edge detection algorithm, the two images yield the same number of area units after division: the number of first area units of the first target image equals the number of second area units of the second target image.
In the embodiment of the present invention, an area unit refers to an image area having a specific image size. For example, an area unit may represent an image area of 1 pixel by 1 pixel. It will be appreciated that each first area unit in the first target image and each second area unit in the second target image has a unique coordinate address, which can be expressed as a column value and a row value.
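For region units larger than a single pixel, the division step amounts to tiling the image into non-overlapping blocks. A hypothetical sketch with NumPy (the function name and the border-trimming policy are illustrative, not taken from the patent):

```python
import numpy as np

def divide_into_region_units(image, unit_size=1):
    """Tile a 2-D image into unit_size x unit_size region units.

    The result has shape (rows, cols, unit_size, unit_size); the region unit
    at coordinate address (column x, row y) is units[y, x]."""
    h, w = image.shape
    h -= h % unit_size          # trim any ragged border so the tiling is exact
    w -= w % unit_size
    trimmed = image[:h, :w]
    units = trimmed.reshape(h // unit_size, unit_size, w // unit_size, unit_size)
    return units.swapaxes(1, 2)
```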
In the embodiment of the present invention, the edge refers to a boundary of a region where a discontinuity of a local characteristic of an image appears, such as a sudden change of gray level, a sudden change of color, a sudden change of texture structure, and the like.
In the embodiment of the present invention, the image edge detection algorithm refers to an algorithm for detecting edge features in an image, such as a Sobel edge detection algorithm, a Roberts edge detection algorithm, a Prewitt edge detection algorithm, and the like. For example, if the preset image edge detection algorithm is set as the Sobel edge detection algorithm, the first boundary value of the first region unit and the second boundary value of the second region unit may be generated based on the Sobel edge detection algorithm.
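For the Sobel case, a minimal sketch of computing the gradient magnitude (assuming OpenCV on a grayscale input; the 3x3 kernel size is an assumption, since the embodiment does not fix it):

```python
import cv2
import numpy as np

def sobel_gradient_magnitude(gray):
    """Per-pixel gradient magnitude from horizontal and vertical Sobel responses."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # d/dx: responds to vertical edges
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # d/dy: responds to horizontal edges
    return np.sqrt(gx ** 2 + gy ** 2)

# With 1x1-pixel region units, this array is the boundary value set itself.
```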
In an embodiment of the invention, the boundary value refers to a gradient magnitude characterizing the sharpness and richness of the edge, which may be calculated from the gradients between the region unit and the surrounding region units, wherein the gradient at a certain point in the scalar field points to the direction in which the scalar field grows fastest, and the gradient magnitude represents the maximum rate of change of the point in that direction. For example, the first boundary value refers to a gradient magnitude representing sharpness and richness of an edge of a first area unit, and the second boundary value refers to a gradient magnitude representing sharpness and richness of an edge of a second area unit, wherein the gradient magnitude is calculated according to a gradient between a corresponding image area and an image area surrounding the image area.
In the embodiment of the present invention, the first boundary value set refers to a set established based on the coordinate address of the first area unit and the first boundary value of the first area unit, that is, each coordinate address in the first target image stores a corresponding first boundary value; the second boundary value set refers to a set established based on the coordinate address of the second region unit and the second boundary value of the second region unit, that is, each coordinate address in the second target image stores a corresponding second boundary value. Since the first boundary value is a gradient magnitude characterizing the sharpness and richness of the edge of the first region unit and the second boundary value is a gradient magnitude characterizing the sharpness and richness of the edge of the second region unit, the first set of boundary values may be used to characterize the sharpness and richness of the edge of the first target image and the second set of boundary values may be used to characterize the sharpness and richness of the edge of the second target image.
In some embodiments, the generation sub-module 3022 may include an obtaining sub-module 3022a and a calculating sub-module 3022b.
The obtaining sub-module 3022a may be configured to obtain first grayscale information of a first region unit and second grayscale information of a second region unit.
The calculating sub-module 3022b may be configured to calculate a first boundary value of the first region unit according to the first gray information and a second boundary value of the second region unit according to the second gray information based on a preset image edge detection algorithm.
The gray information refers to gray levels; that is, the first gray information refers to the gray levels of the first area unit, and the second gray information refers to the gray levels of the second area unit. A gray level is the brightness value of a pixel, ranging from black through multiple intermediate levels of gray to white. Typically, gray levels range from 0 to 255, with white at 255 and black at 0.
In the embodiment of the present invention, the edge may be a region boundary where the gray level in the image changes sharply, and the boundary value may be characterized by a gradient amplitude calculated according to the gray level.
Further, generating a first boundary value of a first region unit and a second boundary value of a second region unit based on a preset image edge detection algorithm may specifically include: acquiring the gray levels of the first area unit and of the second area unit; and, based on the preset image edge detection algorithm, calculating the gradient magnitude of the first area unit from its gray levels and the gradient magnitude of the second area unit from its gray levels. In this way, the edges of the first target image are reflected by the gradient magnitudes of the first area units, and the edges of the second target image by the gradient magnitudes of the second area units.
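Building on the Sobel sketch above, the gray-information step reduces to a color-space conversion before the gradient is computed (a sketch assuming OpenCV's BGR image layout; the helper name is hypothetical):

```python
import cv2

def boundary_value_set(image_bgr):
    """Gray levels (0-255) first, then gradient magnitudes as boundary values."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return sobel_gradient_magnitude(gray.astype("float64"))
```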
In the image processing apparatus 300, the first comparing module 303 may be configured to compare the first boundary value set and the second boundary value set, and generate a feature comparison result.
In some embodiments, the image processing apparatus 300 may further include a filtering module 306, wherein the filtering module 306 may be configured to filter the first set of boundary values and the second set of boundary values according to a preset boundary threshold.
For example, before comparing the first boundary value set and the second boundary value set to generate the feature comparison result, the filtering module 306 filters noise points out of the first boundary value set and the second boundary value set according to the preset boundary threshold. A noise point refers to a first boundary value and/or a second boundary value that does not satisfy the condition for forming an edge.
The preset boundary threshold is the boundary value below which the condition for forming an edge is considered not to be met.
Filtering the first boundary value set and the second boundary value set before comparing them removes noise points from the first target image and the second target image, which yields more accurate boundary value sets and therefore a more accurate image definition comparison result. In addition, because the noise points have been removed, the amount of data involved in the subsequent comparison of the two sets is reduced, improving data processing efficiency.
In some embodiments, the filter module 306 may include a first comparison submodule 3061 and a settings submodule 3062.
The first comparing sub-module 3061 may be configured to compare the first boundary value and the second boundary value with a preset boundary threshold, respectively.
The setting sub-module 3062 may be configured to set the first boundary value to a preset value if the first boundary value is smaller than a preset boundary threshold, and set the second boundary value to a preset value if the second boundary value is smaller than the preset boundary threshold.
It can be understood that the preset value may be set to zero, that is, if the first boundary value is smaller than the preset boundary threshold, the corresponding first area unit is not an edge; and if the second boundary value is smaller than the preset boundary threshold value, the corresponding second area unit is not an edge.
In some embodiments, the first comparison module 303 may include a calling sub-module 3031, a second comparison sub-module 3032, and a labeling sub-module 3033.
The invoking submodule 3031 may be configured to invoke a preset corresponding relationship between the first region unit and the second region unit.
The second comparing submodule 3032 may be configured to compare the first boundary values in the first boundary value set with the second boundary values in the second boundary value set on a one-to-one basis based on the correspondence relationship.
The marking submodule 3033 may be configured to mark, if the first boundary value is higher than the second boundary value, the first area unit corresponding to the first boundary value as a first marking unit; and if the second boundary value is higher than the first boundary value, marking the second area unit corresponding to the second boundary value as a second marking unit.
The preset correspondence between the first area unit and the second area unit may be a correspondence between a coordinate address of the first area unit and a coordinate address of the second area unit, for example, the first area unit and the second area unit having the same coordinate address are determined to have a correspondence.
Further, comparing the first boundary value set with the second boundary value set to generate a feature comparison result specifically includes: calling a first area unit and a second area unit with the same coordinate address, comparing the first boundary value of the first area unit with the second boundary value of the second area unit, and repeating this for every pair, so that the first boundary values in the first boundary value set are compared one by one with the second boundary values in the second boundary value set.
In the image processing apparatus 300, the second comparing module 304 may be configured to determine an image sharpness comparison result of the first target image and the second target image according to the feature comparison result.
In some embodiments, the second comparison module 304 may include a statistics submodule 3041 and a second determination submodule 3042:
the counting submodule 3041 may be configured to count the number of the first mark units and the number of the second mark units, respectively.
A second determining sub-module 3042, configured to determine that the image sharpness of the first target image is higher than the image sharpness of the second target image if the number of the first marking units is greater than the number of the second marking units; and if the number of the first marking units is less than that of the second marking units, determining that the image definition of the first target image is lower than that of the second target image.
It can be understood that if the number of the first marking units is greater than the number of the second marking units, the sharpness and richness of the edges in the first target image are higher than those in the second target image, i.e. the image definition of the first target image is higher than that of the second target image; if the number of the first marking units is smaller than the number of the second marking units, the sharpness and richness of the edges in the first target image are lower than those in the second target image, i.e. the image definition of the first target image is lower than that of the second target image.
In specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily, and implemented as the same or several entities, and specific implementations of the above units may refer to the foregoing method embodiment, which is not described herein again.
According to the embodiment of the invention, the first target image and the second target image are processed by a preset image edge detection algorithm to obtain a first boundary value set and a second boundary value set, wherein the first boundary value set and the second boundary value set are respectively used for quantitatively representing the sharpness and richness of the edge of the first target image and the edge of the second target image; and comparing the first boundary value set with the second boundary value set to quantitatively and finely compare the edges of the first target image and the second target image, thereby accurately comparing the image definitions of the first target image and the second target image. Therefore, the embodiment of the invention not only improves the accuracy of comparing the image definition, but also improves the efficiency of comparing the image definition.
Fourth embodiment
An embodiment of the present invention further provides a server, in which the image processing apparatus according to an embodiment of the present invention may be integrated, as shown in fig. 4, which illustrates a schematic structural diagram of a server 400 according to an embodiment of the present invention.
The server 400 may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, an input unit 403, a display unit 404, a communication unit 405, and a power supply 406. Those skilled in the art will appreciate that the server architecture shown in FIG. 4 is not meant to be limiting, and that a server may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the server. Optionally, processor 401 may include one or more processing cores; in some embodiments, processor 401 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the server, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The server 400 may further include an input unit 403. The input unit 403 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, the input unit 403 may include one or more input devices of a touch pad, a physical keyboard, a mouse, a joystick, and the like.
The server 400 may also include a display unit 404. The display unit 404 may be used to display information input by or provided to the user and various graphical user interfaces of the server, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 404 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The server 400 may further include a communication unit 405. The communication unit 405 may be used for receiving and transmitting signals during information transmission and reception, and in particular, the communication unit 405 may receive signals transmitted by a terminal and may process the signals by one or more processors 401.
The server 400 may also include a power supply 406 (e.g., a battery) to power the various components, which in some embodiments may be logically connected to the processor 401 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 406 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Specifically, in this embodiment, the processor 401 in the server 400 loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information; generating a first boundary value set of a first target image and a second boundary value set of a second target image based on a preset image edge detection algorithm; comparing the first boundary value set with the second boundary value set to generate a feature comparison result; and determining the image definition comparison result of the first target image and the second target image according to the feature comparison result.
In some embodiments, after the processor 401 acquires the first target image and the second target image, it may further be configured to: and adjusting the image size of the second target image to ensure that the difference value of the image sizes of the second target image and the first target image is less than or equal to a preset difference threshold value.
In some embodiments, the processor 401 generates the first boundary value set of the first target image and the second boundary value set of the second target image based on a preset image edge detection algorithm, which may specifically include:
dividing the first target image according to a preset division algorithm to form a plurality of first area units with preset sizes, and dividing the second target image to form a plurality of second area units with preset sizes;
generating a first boundary value of a first area unit and a second boundary value of a second area unit based on a preset image edge detection algorithm;
all first boundary values of the first target image are determined as a first set of boundary values of the first target image and all second boundary values of the second target image are determined as a second set of boundary values of the second target image.
Further, the processor 401 generates a first boundary value of the first area unit and a second boundary value of the second area unit based on a preset image edge detection algorithm, and specifically may include: acquiring first gray information of a first area unit and second gray information of a second area unit; based on a preset image edge detection algorithm, a first boundary value of the first area unit is calculated according to the first gray information, and a second boundary value of the second area unit is calculated according to the second gray information.
In some embodiments, before the processor 401 compares the first set of boundary values with the second set of boundary values to generate the feature comparison result, it may further be configured to: and filtering the first boundary value set and the second boundary value set according to a preset boundary threshold value.
Further, the processor 401 filters the first boundary value set and the second boundary value set according to a preset boundary threshold, which may specifically include: comparing the first boundary value and the second boundary value with a preset boundary threshold value respectively; and if the first boundary value is smaller than the preset boundary threshold value, setting the first boundary value as a preset value, and if the second boundary value is smaller than the preset boundary threshold value, setting the second boundary value as the preset value.
In some embodiments, the comparing, by the processor 401, the first boundary value set and the second boundary value set to generate the feature comparison result may specifically include: calling a preset corresponding relation between the first area unit and the second area unit; comparing the first boundary values in the first boundary value set with the second boundary values in the second boundary value set one by one based on the corresponding relation; if the first boundary value is higher than the second boundary value, marking a first area unit corresponding to the first boundary value as a first marking unit; and if the second boundary value is higher than the first boundary value, marking the second area unit corresponding to the second boundary value as a second marking unit.
In some embodiments, the determining, by the processor 401, an image sharpness comparison result of the first target image and the second target image according to the feature comparison result may specifically include: respectively counting the number of the first marking units and the number of the second marking units; if the number of the first marking units is larger than that of the second marking units, determining that the image definition of the first target image is higher than that of the second target image; and if the number of the first marking units is smaller than that of the second marking units, determining that the image definition of the first target image is lower than that of the second target image.
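Pulling the pieces together, the functions the processor implements can be sketched end to end as follows (reusing the hypothetical helpers from the earlier sketches; 1x1-pixel region units and a boundary threshold of 100 are assumptions carried over from the examples above):

```python
import numpy as np

def compare_image_definition(img_a, img_b, boundary_threshold=100.0):
    """End-to-end sketch: size adjustment, boundary value sets, filtering,
    feature comparison, and the final image definition verdict."""
    img_a, img_b = match_image_sizes(img_a, img_b)
    S = boundary_value_set(img_a)                 # first boundary value set
    P = boundary_value_set(img_b)                 # second boundary value set
    S[S < boundary_threshold] = 0.0               # filter noise points
    P[P < boundary_threshold] = 0.0
    U = int(np.count_nonzero(S > P))              # first image sharper here
    V = int(np.count_nonzero(S < P))              # second image sharper here
    if U > V:
        return "first image has higher definition"
    if U < V:
        return "second image has higher definition"
    return "equal definition"
```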
As can be seen from the above description, the server provided in the embodiment of the present invention calculates the first boundary value set representing the sharpness and richness of the edges of the first target image and the second boundary value set representing the sharpness and richness of the edges of the second target image to quantitatively and finely compare the edges of the first target image and the second target image, so as to accurately compare the image definitions of the first target image and the second target image, thereby not only improving the accuracy of comparing image definition, but also improving the efficiency of comparing image definition.
Fifth embodiment
An embodiment of the present invention further provides a storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the image processing method of the above embodiments, for example: acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information; generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm; comparing the first boundary value set with the second boundary value set to generate a feature comparison result; and determining the image definition comparison result of the first target image and the second target image according to the feature comparison result.
In the embodiment of the present invention, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the above detailed description of the image processing method, and details are not described here again.
The image processing apparatus provided in the embodiment of the present invention may be, for example, a computer, a tablet computer, or a mobile phone with a touch function. The image processing apparatus and the image processing method in the above embodiments belong to the same concept; any method provided in the image processing method embodiments may be run on the image processing apparatus, and its specific implementation process is described in the image processing method embodiments and is not repeated here.
It should be noted that, for the image processing method of the present invention, it can be understood by a person skilled in the art that all or part of the process of implementing the image processing method of the embodiment of the present invention can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer readable storage medium, such as a memory of a terminal, and executed by at least one processor in the terminal, and the process of executing the process can include the process of the embodiment of the image processing method. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present invention, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided a method, an apparatus, and a storage medium for image processing according to embodiments of the present invention, and the present disclosure has been made in detail by applying specific examples to explain the principles and embodiments of the present invention, and the description of the foregoing embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (15)

1. An image processing method, comprising:
acquiring a first target image and a second target image, wherein the first target image and the second target image have the same content information;
based on a preset image edge detection algorithm, calculating a corresponding first boundary value for each first area unit in the first target image, and calculating a corresponding second boundary value for each second area unit in the second target image, wherein the first boundary value is a gradient amplitude representing the sharpness and richness of the edge in the first area unit, and the second boundary value is a gradient amplitude representing the sharpness and richness of the edge in the second area unit;
generating a first boundary value set of the first target image according to the first boundary value corresponding to each first region unit, and generating a second boundary value set of the second target image according to the second boundary value corresponding to each second region unit;
comparing the first boundary value with a corresponding second boundary value based on the corresponding relation between the coordinate address of the first area unit and the coordinate address of the second area unit to obtain a feature comparison result;
and determining the image definition comparison result of the first target image and the second target image according to the feature comparison result.
2. The image processing method according to claim 1, wherein after the acquiring the first target image and the second target image, further comprising:
adjusting the image size of the second target image to enable the difference value of the image sizes of the second target image and the first target image to be smaller than or equal to a preset difference threshold value;
the generating a first boundary value set of the first target image and a second boundary value set of the second target image based on a preset image edge detection algorithm includes:
and generating a first boundary value set of the first target image and a second boundary value set of the adjusted second target image based on a preset image edge detection algorithm.
3. The image processing method according to claim 2, wherein the generating a first set of boundary values of the first target image and a second set of boundary values of the second target image based on a preset image edge detection algorithm comprises:
dividing the first target image according to a preset division algorithm to form a plurality of first area units with preset sizes, and dividing the second target image to form a plurality of second area units with preset sizes;
generating a first boundary value of the first area unit and a second boundary value of the second area unit based on a preset image edge detection algorithm;
determining all first boundary values of the first target image as a first set of boundary values of the first target image and all second boundary values of the second target image as a second set of boundary values of the second target image.
4. The image processing method according to claim 3, wherein the generating a first boundary value of the first region unit and a second boundary value of the second region unit based on a preset image edge detection algorithm comprises:
acquiring first gray information of the first area unit and second gray information of the second area unit;
and calculating a first boundary value of the first area unit according to the first gray information and calculating a second boundary value of the second area unit according to the second gray information based on a preset image edge detection algorithm.
5. The image processing method according to claim 3, wherein before comparing the first set of boundary values and the second set of boundary values to generate a feature comparison result, further comprising:
filtering the first boundary value set and the second boundary value set according to a preset boundary threshold value;
the comparing the first set of boundary values and the second set of boundary values to generate a feature comparison result includes:
and comparing the filtered first boundary value set with the filtered second boundary value set to generate a feature comparison result.
6. The image processing method according to claim 5, wherein said filtering the first set of boundary values and the second set of boundary values according to a preset boundary threshold comprises:
comparing the first boundary value and the second boundary value with a preset boundary threshold value respectively;
if the first boundary value is smaller than the preset boundary threshold, setting the first boundary value as a preset value, and if the second boundary value is smaller than the preset boundary threshold, setting the second boundary value as the preset value.
7. The image processing method of claim 3, wherein the comparing the first set of boundary values and the second set of boundary values to generate a feature comparison result comprises:
calling a preset corresponding relation between the first area unit and the second area unit;
comparing first boundary values in the first boundary value set with second boundary values in the second boundary value set one by one based on the corresponding relation;
if the first boundary value is higher than the second boundary value, marking a first area unit corresponding to the first boundary value as a first marking unit; and if the second boundary value is higher than the first boundary value, marking a second area unit corresponding to the second boundary value as a second marking unit.
8. The method according to claim 7, wherein the determining a result of image sharpness comparison of the first target image and the second target image according to the result of feature comparison comprises:
respectively counting the number of the first marking units and the number of the second marking units;
if the number of the first marking units is larger than that of the second marking units, determining that the image definition of the first target image is higher than that of the second target image; and if the number of the first marking units is smaller than that of the second marking units, determining that the image definition of the first target image is lower than that of the second target image.
9. An image processing apparatus characterized by comprising:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first target image and a second target image, and the first target image and the second target image have the same content information;
a generating module, configured to calculate, based on a preset image edge detection algorithm, a corresponding first boundary value for each first region unit in the first target image, and a corresponding second boundary value for each second region unit in the second target image, where the first boundary value is a gradient amplitude representing sharpness and richness of an edge in the first region unit, and the second boundary value is a gradient amplitude representing sharpness and richness of an edge in the second region unit; generating a first boundary value set of the first target image according to the first boundary value corresponding to each first region unit, and generating a second boundary value set of the second target image according to the second boundary value corresponding to each second region unit;
the first comparison module is used for comparing the first boundary values with the corresponding second boundary values based on the corresponding relationship between the coordinate addresses of the first area units and the coordinate addresses of the second area units to obtain a feature comparison result between each first boundary value and the corresponding second boundary value;
and the second comparison module is used for determining the image definition comparison result of the first target image and the second target image according to the feature comparison result.
10. The image processing apparatus according to claim 9, characterized in that the apparatus further comprises:
the adjusting module is used for adjusting the image size of the second target image to enable the difference value of the image sizes of the second target image and the first target image to be smaller than or equal to a preset difference threshold value;
the generating module is used for generating a first boundary value set of the first target image and a second boundary value set of the adjusted second target image based on a preset image edge detection algorithm.
11. The image processing apparatus according to claim 10, wherein the generation module includes:
the dividing submodule is used for dividing the first target image according to a preset dividing algorithm to form a plurality of first area units with preset sizes, and dividing the second target image to form a plurality of second area units with preset sizes;
the generating submodule is used for generating a first boundary value of the first area unit and a second boundary value of the second area unit based on a preset image edge detection algorithm;
a first determining sub-module for determining all first boundary values of the first target image as a first set of boundary values of the first target image and all second boundary values of the second target image as a second set of boundary values of the second target image.
12. The image processing apparatus according to claim 11, wherein the generation sub-module includes:
the acquisition submodule is used for acquiring first gray information of the first area unit and second gray information of the second area unit;
and the calculation submodule is used for calculating a first boundary value of the first area unit according to the first gray information and calculating a second boundary value of the second area unit according to the second gray information based on the preset image edge detection algorithm.
13. The image processing apparatus according to claim 11, wherein the first comparison module includes:
the calling submodule is used for calling a preset corresponding relation between the first area unit and the second area unit;
the second comparison submodule is used for comparing the first boundary values in the first boundary value set with the second boundary values in the second boundary value set one by one on the basis of the corresponding relation;
a marking submodule, configured to mark a first area unit corresponding to the first boundary value as a first marking unit if the first boundary value is higher than the second boundary value, and mark a second area unit corresponding to the second boundary value as a second marking unit if the second boundary value is higher than the first boundary value.
14. The image processing apparatus according to claim 13, wherein the second comparison module comprises:
the counting submodule is used for respectively counting the number of the first marking units and the number of the second marking units;
the second determining submodule is used for determining that the image definition of the first target image is higher than that of the second target image if the number of the first marking units is larger than that of the second marking units; and if the number of the first marking units is smaller than that of the second marking units, determining that the image definition of the first target image is lower than that of the second target image.
15. A storage medium for storing a plurality of instructions adapted to be loaded by a processor and to perform the method of any one of claims 1 to 8.
CN201710455122.3A 2017-06-16 2017-06-16 Image processing method, device and storage medium Active CN107330905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710455122.3A CN107330905B (en) 2017-06-16 2017-06-16 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN107330905A (en) 2017-11-07
CN107330905B (en) 2022-05-06

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679702A (en) * 2013-11-20 2014-03-26 华中科技大学 Matching method based on image edge vectors
CN103793918A (en) * 2014-03-07 2014-05-14 深圳市辰卓科技有限公司 Image definition detecting method and device
CN103795920A (en) * 2014-01-21 2014-05-14 宇龙计算机通信科技(深圳)有限公司 Photo processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant