CN113781428A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113781428A
CN113781428A, CN202111054581.3A, CN202111054581A
Authority
CN
China
Prior art keywords
image
target
labeling
determining
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111054581.3A
Other languages
Chinese (zh)
Inventor
阮国恒
叶万余
钟业荣
江嘉铭
阮伟聪
朱婷婷
杨伟山
钟恒辉
李文航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Qingyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Qingyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Qingyuan Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202111054581.3A
Publication of CN113781428A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention discloses an image processing method, an image processing apparatus, a computer device, and a storage medium. The image processing method comprises the following steps: acquiring a target region image of an image to be processed; determining multi-dimensional image annotation parameters of the target region image, the parameters comprising at least one of image definition, image symmetry, image brightness, and image noise; determining a region labeling result of the target region image according to the image annotation parameters; determining a weight for each image annotation parameter; and determining a target image labeling result of the image to be processed according to the weights of the image annotation parameters and the region labeling results. The technical scheme of the embodiment can determine image quality according to multi-dimensional image annotation parameters, thereby improving the accuracy and reasonableness of image quality determination, and in turn the accuracy and reasonableness of image processing.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image processing method, an image processing device, computer equipment and a storage medium.
Background
In today's society, pictures and video have become increasingly important as information carriers, and image processing has become a widespread and fundamental problem. Existing image processing typically determines image quality by labeling the image. However, the image labeling in existing image processing methods is not reasonable, so the accuracy and reasonableness of the resulting image quality determination are low, and the accuracy and reasonableness of image processing are low as well.
Disclosure of Invention
Embodiments of the present invention provide an image processing method and apparatus, an electronic device, and a storage medium, which can improve the accuracy and reasonableness of image quality determination, and further improve the accuracy and reasonableness of image processing.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a target area image of an image to be processed;
determining multi-dimensional image annotation parameters of the target area image;
determining a region labeling result of the target region image according to the image labeling parameters; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
determining the weight of each image labeling parameter;
and determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the target area image acquisition module is used for acquiring a target area image of the image to be processed;
the image annotation parameter determination module is used for determining multi-dimensional image annotation parameters of the target area image;
the region labeling result determining module is used for determining a region labeling result of the target region image according to the image labeling parameters; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
the image labeling parameter weight determining module is used for determining the weight of each image labeling parameter;
and the target image annotation result determining module is used for determining a target image annotation result of the image to be processed according to the weight of the image annotation parameter and the region annotation result.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method provided by any embodiment of the present invention.
In a fourth aspect, the present invention further provides a computer storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the image processing method provided in any embodiment of the present invention.
According to the embodiment of the invention, a target region image of the image to be processed is acquired, multi-dimensional image annotation parameters of the target region image are determined, and a region labeling result of the target region image is determined according to those parameters; then, after the weight of each image annotation parameter is determined, the target image labeling result of the image to be processed is determined according to the weights and the region labeling results. This solves the problems of poor accuracy and reasonableness of image processing caused by unreasonable image labeling in existing image processing methods: image quality can be determined according to multi-dimensional image annotation parameters, improving the accuracy and reasonableness of image quality determination, and in turn the accuracy and reasonableness of image processing.
Drawings
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms "first" and "second," and the like in the description and claims of embodiments of the invention and in the drawings, are used for distinguishing between different objects, not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include steps or elements not listed.
Example one
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where this embodiment may be applied to a case where image quality is determined according to a multi-dimensional image annotation parameter to process an image, and the method may be executed by an image processing apparatus, and the apparatus may be implemented by software and/or hardware, and may generally be directly integrated in an electronic device that executes the method. As shown in fig. 1, the image processing method may include the steps of:
and S110, acquiring a target area image of the image to be processed.
The image to be processed may be any image that needs to be processed, for example, a picture image, a video image, or the like. The target area image may be an image of a certain area of the image to be processed, for example, a preset area image, or a randomly selected area image, and the like, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, acquiring the target area image of the image to be processed may be acquiring it from an image to be processed that is stored in a database, or acquiring it from an image to be processed that is photographed in real time. The embodiment of the invention does not limit the acquisition mode of the image to be processed.
In an optional implementation manner of the embodiment of the present invention, before acquiring the target area image of the image to be processed, the method may further include: acquiring an original gray image of an image to be processed; dividing an original gray level image into a first set number of regional images; and respectively carrying out normalization processing on the images of the regions to obtain images of the target regions.
The original grayscale image may be the original image represented in grayscale. The first set number may be any preset number: a fixed value, or a value determined according to the specific application scenario, for example; the embodiment of the present invention does not limit this. The region images are images obtained by dividing the original grayscale image and may, for example, be of the same size or of different sizes. The normalization process may be an image conversion performed to reduce or even eliminate gray-level inconsistency in the image while retaining gray-level differences of diagnostic value.
Specifically, obtaining the original grayscale image of the image to be processed may be obtaining an original grayscale image stored in a database, or obtaining one captured in real time. After the original grayscale image of the image to be processed is obtained, it can further be divided into a first set number of region images, and normalization can be performed on each region image to obtain the target region images, so that processing of the target region images is realized. Optionally, the original grayscale image may be divided into nine region images of the same size. For example, if the original grayscale image has size 3M × 3N, each region image obtained after division has size M × N.
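The division-and-normalization step above can be sketched as follows. This is a minimal illustration assuming a 3 × 3 grid of equal-size regions and min-max normalization; the patent does not fix the normalization formula, so that choice is an assumption.

```python
import numpy as np

def split_and_normalize(gray, rows=3, cols=3):
    """Divide a grayscale image into rows*cols equal-size region images
    and min-max normalize each region to [0, 1]."""
    h, w = gray.shape
    m, n = h // rows, w // cols  # each region image is M x N
    regions = []
    for i in range(rows):
        for j in range(cols):
            r = gray[i * m:(i + 1) * m, j * n:(j + 1) * n].astype(np.float64)
            lo, hi = r.min(), r.max()
            # guard against flat regions to avoid division by zero
            regions.append((r - lo) / (hi - lo) if hi > lo else np.zeros_like(r))
    return regions

# a 6x6 image (3M x 3N with M = N = 2) yields nine 2x2 normalized regions
img = np.arange(36, dtype=np.uint8).reshape(6, 6)
parts = split_and_normalize(img)
```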
S120, determining multi-dimensional image annotation parameters of the target area image; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise.
The image labeling parameter may be any parameter that can represent image quality, for example, image sharpness, image symmetry, or the like, which is not limited in this embodiment.
In the embodiment of the invention, after the target area image of the image to be processed is acquired, the multi-dimensional image annotation parameter of the target area image can be further determined, so as to determine the area annotation result of the target area image according to the image annotation parameter, thereby realizing the processing of the image to be processed. Optionally, the multidimensional image labeling parameter may include at least one of image sharpness, image symmetry, image brightness, and image noise.
And S130, determining a region labeling result of the target region image according to the image labeling parameters.
The region labeling result may be a labeling result determined according to the image labeling parameter.
In the embodiment of the present invention, after determining the multidimensional image annotation parameter of the target area image, the area annotation result of the target area image may be further determined according to the image annotation parameter, so as to determine the annotation result of the image to be processed according to the area annotation result, thereby implementing image processing of the image to be processed. For example, when it is determined that the image annotation parameter of the target area image includes the image definition, the area annotation result of the target area image may be further determined according to the image definition. When it is determined that the image annotation parameter of the target area image includes image symmetry, the area annotation result of the target area image may be further determined according to the image symmetry. When it is determined that the image annotation parameter of the target area image includes the image brightness, the area annotation result of the target area image may be further determined according to the image brightness. When it is determined that the image annotation parameter of the target area image includes image noise, the area annotation result of the target area image may be further determined according to the image noise.
And S140, determining the weight of each image labeling parameter.
In the embodiment of the invention, after the region labeling result of the target region image is determined according to the image labeling parameters, the weight of each image labeling parameter can be further determined, so that the target image labeling result of the image to be processed is determined according to the weights of the image labeling parameters and the region labeling results.
In an optional implementation manner of the embodiment of the present invention, determining the weight of the image annotation parameter may include: acquiring a sample image for determining the weight of an image annotation parameter; obtaining an expected labeling result of the image labeling parameters of each sample image; and determining the weight of the image annotation parameter according to the expected annotation result.
Wherein the sample image may be an image determined by screening that can be used as a sample. The expected annotation result may be an annotation result obtained by image annotation of the sample image.
Specifically, a sample image used for determining the weight of the image annotation parameter is obtained, and the image annotation parameter of the obtained sample image is annotated to obtain an expected annotation result of the image annotation parameter of each sample image, so that the weight of the image annotation parameter is determined according to the expected annotation result.
Optionally, obtaining the expected annotation result of the image annotation parameters of each sample image may comprise: obtaining multiple annotation results of the image annotation of each sample image, preprocessing the multiple annotation results, and computing their weighted average, so that the expected annotation result is obtained from the weighted-average result. Illustratively, 100 annotation results of the image annotation of each sample image are obtained, the 100 annotation results are preprocessed and weighted-averaged, and the weighted-average result is multiplied by 100 to obtain the expected annotation result.
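The aggregation described above can be sketched as follows. The preprocessing step is omitted and the uniform default weights are an assumption, since the patent only specifies that the preprocessed results are weighted-averaged and the average is scaled by the number of results (100 in the example).

```python
def expected_annotation(scores, weights=None):
    """Aggregate per-annotator scores for one sample image into an
    expected annotation result: weighted average of the (already
    preprocessed) scores, scaled by the number of scores."""
    n = len(scores)
    if weights is None:
        weights = [1.0] * n  # uniform weights as a default assumption
    avg = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return avg * n

# three hypothetical annotator scores: weighted average 2.0, scaled by 3
result = expected_annotation([1.0, 2.0, 3.0])  # 6.0
```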
S150, determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
The target image labeling result may be a labeling result of the image to be processed, and may be used to determine the image quality, for example, the target image labeling result may be excellent or unqualified, and the embodiment of the present invention does not limit this.
In the embodiment of the present invention, after determining the weight of each image annotation parameter, a target image annotation result of the image to be processed may be further determined according to the weight of the image annotation parameter and the region annotation result, so as to determine the image quality, thereby implementing image processing. Illustratively, if the target image annotation result is excellent, it may be determined that the image quality of the image to be processed is good. And if the target image labeling result is unqualified, determining that the image quality of the image to be processed is poor.
In an optional implementation manner of the embodiment of the present invention, determining a target image annotation result of an image to be processed according to a weight of an image annotation parameter and a region annotation result may include: acquiring gray values of pixel points of images in each target area; calculating an image definition value, an image symmetry value, an image brightness value and an image noise value of each target area image according to the gray value of each target area image pixel point; and determining a target image labeling result of the image to be processed according to the image definition value, the image symmetry value, the image brightness value, the image noise value and the weight of each image labeling parameter of each target area image.
The image sharpness value may be a value of the image in the sharpness dimension. The image symmetry value may be a value of the image in the dimension of symmetry. The image brightness value may be the value of the image in the brightness dimension. The image noise value may be the value of the image in the noise dimension.
Specifically, by obtaining the gray values of the pixel points of each target region image, the image definition value, image symmetry value, image brightness value, and image noise value of each target region image can be further calculated from those gray values, so as to determine the target image labeling result of the image to be processed according to these values and the weight of each image labeling parameter. Illustratively, if target region image 1 has an image definition value A1, an image symmetry value B1, an image brightness value C1, and an image noise value D1, and the weight corresponding to image definition is a1, to image symmetry b1, to image brightness c1, and to image noise d1, then the labeling result of target region image 1 is a1*A1 + b1*B1 + c1*C1 + d1*D1.
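The worked example is a plain weighted sum over the four parameter values and can be sketched as below; the numeric values and weights used are illustrative, not from the patent.

```python
def region_score(values, weights):
    """Region labeling result as in the example above:
    a1*A1 + b1*B1 + c1*C1 + d1*D1 over the four image labeling
    parameters (definition, symmetry, brightness, noise)."""
    return sum(w * v for w, v in zip(weights, values))

vals = (0.8, 0.6, 0.7, 0.2)      # A1, B1, C1, D1 (illustrative values)
wts = (0.4, 0.2, 0.2, 0.2)       # a1, b1, c1, d1 (assumed weights)
score = region_score(vals, wts)  # 0.4*0.8 + 0.2*0.6 + 0.2*0.7 + 0.2*0.2 = 0.62
```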
Optionally, determining a target image labeling result of the image to be processed according to the image definition value, the image symmetry value, the image brightness value, the image noise value of each target region image and the weight of each image labeling parameter, which may include: acquiring the area image position of each target area image in the image to be processed; calculating the image definition value, the image symmetry value, the image brightness value and the normalized gray level difference value of the image noise value of each target area image according to the area image position; determining an initial region labeling result of the normalized gray difference value according to the weight of each image labeling parameter; and determining the target image labeling result of the image to be processed according to the initial region labeling result of each target region image.
The position of the area image may be a position of the target area image in the image to be processed, for example, an upper left corner position of the image to be processed, or an intermediate position of the image to be processed, and the embodiment of the present invention does not limit this. The normalized grayscale difference value may be a difference of grayscale values before and after the image normalization process. Illustratively, if the gray value before the image normalization process is X1 and the gray value after the normalization process is X2, the normalized gray difference value may be | X1-X2 |. The initial region labeling result may be a labeling result determined according to a normalized gray scale difference of the image labeling parameter.
Specifically, after the image definition value, the image symmetry value, the image brightness value, and the image noise value of each target region image are calculated according to the gray values of the pixel points of each target region image, the region image position of each target region image in the image to be processed can further be obtained, and the normalized gray difference values of the image definition value, the image symmetry value, the image brightness value, and the image noise value of each target region image are calculated according to the region image position. The initial region labeling result of the normalized gray difference values is then determined according to the weight of each image labeling parameter, so that the target image labeling result of the image to be processed is determined according to the initial region labeling result of each target region image. Illustratively, if the normalized gray difference of the image definition value of target region image 1 is AA1, that of the image symmetry value is BB1, that of the image brightness value is CC1, and that of the image noise value is DD1, and the weight corresponding to image definition is a1, to image symmetry b1, to image brightness c1, and to image noise d1, then the initial region labeling result of target region image 1 is a1*AA1 + b1*BB1 + c1*CC1 + d1*DD1.
Optionally, determining a target image annotation result of the image to be processed according to the initial region annotation result of each target region image, which may include: determining initial region labeling results and values of any second set number of target region images; and determining a target image labeling result of the image to be processed according to the initial region labeling result and the value.
The second set number may be another set number, for example, a certain value, or a value determined according to a specific application scenario, and the like, which is not limited in the embodiment of the present invention.
Specifically, after the initial region labeling result of the normalized grayscale difference is determined according to the weight of each image labeling parameter, the sum of the initial region labeling results of any second set number of target region images may further be determined, so as to determine the target image labeling result of the image to be processed according to that sum. Optionally, the second set number may be 6; that is, determining the sum of the initial region labeling results of any second set number of target region images may be determining the sum of the initial region labeling results of any 6 target region images.
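The sum over a second set number of region results can be sketched as below. Reading "any" as selecting the k highest-scoring regions is an assumption; the patent does not fix the selection rule.

```python
def subset_sum(region_results, k=6):
    """Sum the initial region labeling results of k target region
    images (here: the k best-scoring ones, an assumed selection rule)."""
    return sum(sorted(region_results, reverse=True)[:k])

# nine region results; the six largest (4..9) sum to 39
total = subset_sum([1, 2, 3, 4, 5, 6, 7, 8, 9], k=6)
```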
Optionally, determining a target image annotation result of the image to be processed according to the initial region annotation result and the value may include: under the condition that the sum of the initial region labeling results is determined to be larger than a first preset threshold value, determining that the target image labeling result is a first target image labeling result; determining that the target image labeling result is a second target image labeling result under the condition that the sum of the initial region labeling results is determined to be smaller than or equal to a first preset threshold and larger than a second preset threshold; determining the target image labeling result as a third target image labeling result under the condition that the initial region labeling result sum value is smaller than or equal to a second preset threshold value and larger than a third preset threshold value; under the condition that the sum of the initial region labeling results is determined to be less than or equal to a third preset threshold and greater than a fourth preset threshold, determining that the target image labeling result is a fourth target image labeling result; and under the condition that the sum of the initial region labeling results is determined to be less than or equal to a fourth preset threshold value, determining that the target image labeling result is a fifth target image labeling result.
The first preset threshold may be any preset threshold used to determine the first target image labeling result; optionally, it may be 550, and the first target image labeling result, determined according to this threshold, may be a competitive image. The second preset threshold may be another preset threshold; optionally 540, and the second target image labeling result, determined according to the first and second preset thresholds, may be a superior image. The third preset threshold may optionally be 480, and the third target image labeling result, determined according to the second and third preset thresholds, may be a good-quality image. The fourth preset threshold may optionally be 360, and the fourth target image labeling result, determined according to the third and fourth preset thresholds, may be a qualified image. The fifth target image labeling result, determined according to the fourth preset threshold, may be a rejected image.
Specifically, after determining the sum of the initial region labeling results of any second set number of target region images, the threshold range in which that sum falls can further be determined in order to determine the target image labeling result. If the sum is greater than the first preset threshold, the target image labeling result may be determined to be the first target image labeling result. If the sum is less than or equal to the first preset threshold and greater than the second preset threshold, it may be determined to be the second target image labeling result. If the sum is less than or equal to the second preset threshold and greater than the third preset threshold, it may be determined to be the third target image labeling result. If the sum is less than or equal to the third preset threshold and greater than the fourth preset threshold, it may be determined to be the fourth target image labeling result. If the sum is less than or equal to the fourth preset threshold, it may be determined to be the fifth target image labeling result. For example, if the sum is greater than 550, the target image labeling result may be determined to be a competitive image; if it is less than or equal to 550 and greater than 540, a superior image; if less than or equal to 540 and greater than 480, a good-quality image; if less than or equal to 480 and greater than 360, a qualified image; and if less than or equal to 360, a rejected image.
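The threshold tiers above map the sum to a grade; a direct sketch using the optional example thresholds 550/540/480/360:

```python
def grade(total):
    """Map the sum of initial region labeling results to a target
    image labeling result, using the example thresholds above."""
    if total > 550:
        return "competitive image"
    if total > 540:
        return "superior image"
    if total > 480:
        return "good-quality image"
    if total > 360:
        return "qualified image"
    return "rejected image"
```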
According to the technical scheme of this embodiment, a target region image of the image to be processed is acquired, multi-dimensional image annotation parameters of the target region image are determined, and a region labeling result of the target region image is determined according to those parameters; then, after the weight of each image annotation parameter is determined, the target image labeling result of the image to be processed is determined according to the weights and the region labeling results. This solves the problems of poor accuracy and reasonableness of image processing caused by unreasonable image labeling in existing image processing methods: image quality can be determined according to multi-dimensional image annotation parameters, improving the accuracy and reasonableness of image quality determination, and in turn the accuracy and reasonableness of image processing.
Example two
Fig. 2 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention, and as shown in fig. 2, the apparatus includes: a target area image obtaining module 210, an image annotation parameter determining module 220, an area annotation result determining module 230, an image annotation parameter weight determining module 240, and a target image annotation result determining module 250, wherein:
a target area image obtaining module 210, configured to obtain a target area image of an image to be processed;
an image annotation parameter determining module 220, configured to determine a multi-dimensional image annotation parameter of the target area image; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
a region labeling result determining module 230, configured to determine a region labeling result of the target region image according to the image labeling parameter;
an image labeling parameter weight determining module 240, configured to determine a weight of each image labeling parameter;
and a target image annotation result determining module 250, configured to determine a target image annotation result of the image to be processed according to the weight of the image annotation parameter and the region annotation result.
In the technical solution of this embodiment, a target area image of the image to be processed is obtained, multi-dimensional image annotation parameters of the target area image are determined, and a region annotation result of the target area image is determined according to those parameters. After the weight of each image annotation parameter is determined, the target image annotation result of the image to be processed is determined according to the parameter weights and the region annotation results. This solves the poor accuracy and poor reasonability caused by unreasonable image annotation in conventional image processing methods: image quality can be determined from multi-dimensional annotation parameters, improving the accuracy and reasonability of image quality determination and, in turn, of image processing.
Optionally, the image labeling parameter weight determining module 240 may be specifically configured to:
acquiring a sample image for determining the weight of an image annotation parameter; obtaining an expected labeling result of the image labeling parameters of each sample image; and determining the weight of the image annotation parameter according to the expected annotation result.
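The patent does not fix how the weights are derived from the expected labeling results of the sample images. One plausible scheme, shown here purely as an assumption, normalizes the mean expected score of each parameter across the sample set so that the weights sum to one:

```python
def parameter_weights(expected_results: dict) -> dict:
    """Derive one weight per image labeling parameter.

    expected_results maps a parameter name (e.g. "definition") to the list
    of expected labeling scores it received over the sample images. The
    derivation rule here is an assumption; the patent only states that the
    weights are determined from the expected labeling results.
    """
    means = {name: sum(scores) / len(scores)
             for name, scores in expected_results.items()}
    total = sum(means.values())
    return {name: mean / total for name, mean in means.items()}
```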
Optionally, the target area image obtaining module 210 may be specifically configured to:
acquiring an original gray image of the image to be processed; dividing the original gray image into a first set number of region images; and performing normalization processing on each region image respectively to obtain the target area images.
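A minimal sketch of this preprocessing step, assuming the "first set number" is expressed as a rows × cols grid and that the unspecified normalization is per-region min–max scaling (both assumptions, since the patent fixes neither):

```python
import numpy as np

def target_region_images(gray: np.ndarray, rows: int, cols: int) -> list:
    """Split an original gray image into rows*cols region images and
    min-max normalize each region to [0, 1]."""
    regions = []
    for band in np.array_split(gray, rows, axis=0):
        for block in np.array_split(band, cols, axis=1):
            lo, hi = block.min(), block.max()
            span = hi - lo if hi > lo else 1.0  # avoid dividing by zero
            regions.append((block - lo) / span)
    return regions
```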
Optionally, the target image annotation result determining module 250 may be specifically configured to:
acquiring gray values of the pixel points of each target area image; calculating an image definition value, an image symmetry value, an image brightness value and an image noise value of each target area image according to the gray values of its pixel points; and determining a target image labeling result of the image to be processed according to the image definition value, image symmetry value, image brightness value and image noise value of each target area image and the weight of each image labeling parameter.
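The patent does not give formulas for the four per-region values, so the sketch below uses common stand-ins: mean gradient magnitude for definition (sharpness), mirror similarity for symmetry, mean gray level for brightness, and gray-value standard deviation for noise, on a region assumed to be normalized to [0, 1]:

```python
import numpy as np

def region_values(region: np.ndarray) -> dict:
    """Illustrative image definition / symmetry / brightness / noise values
    computed from the gray values of one target area image (assumed to be
    normalized to [0, 1]). These formulas are stand-ins, not the patent's."""
    gy, gx = np.gradient(region.astype(float))
    definition = float(np.hypot(gx, gy).mean())   # edge energy as sharpness
    symmetry = float(1.0 - np.abs(region - region[:, ::-1]).mean())
    brightness = float(region.mean())
    noise = float(region.std())                   # gray-value spread
    return {"definition": definition, "symmetry": symmetry,
            "brightness": brightness, "noise": noise}
```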
Optionally, the target image annotation result determining module 250 may be further specifically configured to:
acquiring the area image position of each target area image in the image to be processed; calculating the normalized gray level difference values of the image definition value, the image symmetry value, the image brightness value and the image noise value of each target area image according to the area image position; determining an initial region labeling result from the normalized gray level difference values according to the weight of each image labeling parameter; and determining the target image labeling result of the image to be processed according to the initial region labeling result of each target region image.
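Under these assumptions, combining a region's normalized values with the parameter weights reduces to a weighted sum; the function below sketches only that combination step:

```python
def initial_region_result(normalized_values: dict, weights: dict) -> float:
    """Weighted combination of a region's normalized gray difference values,
    yielding its initial region labeling result (illustrative sketch)."""
    return sum(weights[name] * normalized_values[name] for name in weights)
```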
Optionally, the target image annotation result determining module 250 may be further configured to:
determining initial region labeling results and values of any second set number of target region images; and determining a target image labeling result of the image to be processed according to the initial region labeling result and the value.
Optionally, the target image annotation result determining module 250 may be further configured to:
under the condition that the sum of the initial region labeling results is determined to be larger than a first preset threshold value, determining that the target image labeling result is a first target image labeling result; determining that the target image labeling result is a second target image labeling result under the condition that the sum of the initial region labeling results is determined to be smaller than or equal to a first preset threshold and larger than a second preset threshold; determining the target image labeling result as a third target image labeling result under the condition that the initial region labeling result sum value is smaller than or equal to a second preset threshold value and larger than a third preset threshold value; under the condition that the sum of the initial region labeling results is determined to be less than or equal to a third preset threshold and greater than a fourth preset threshold, determining that the target image labeling result is a fourth target image labeling result; and under the condition that the sum of the initial region labeling results is determined to be less than or equal to a fourth preset threshold value, determining that the target image labeling result is a fifth target image labeling result.
The image processing apparatus can execute the image processing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in this embodiment, reference may be made to the image processing method provided in any embodiment of the present invention.
Since the image processing apparatus described above can execute the image processing method of the embodiments of the present invention, those skilled in the art can understand its specific implementation, and variations thereof, from the description of that method; how the apparatus implements the method is therefore not described in detail here. Any apparatus that a person skilled in the art uses to implement the image processing method in the embodiments of the present invention falls within the scope of the present application.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. FIG. 3 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 3 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in FIG. 3, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors 16, a memory 28, and a bus 18 that connects the various system components (including the memory 28 and the processors 16).
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 3, and commonly referred to as a "hard drive"). Although not shown in FIG. 3, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an Input/Output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 3, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Arrays of Independent Disks) systems, tape drives, and data backup storage systems, to name a few.
The processor 16 executes various functional applications and data processing by executing programs stored in the memory 28, thereby implementing the image processing method provided by the embodiment of the present invention: acquiring a target area image of an image to be processed; determining multi-dimensional image annotation parameters of the target area image; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise; determining a region labeling result of the target region image according to the image labeling parameters; determining the weight of each image labeling parameter; and determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
Example four
A fourth embodiment of the present invention further provides a computer storage medium storing a computer program, which when executed by a computer processor is configured to perform the image processing method according to any one of the above embodiments of the present invention: acquiring a target area image of an image to be processed; determining multi-dimensional image annotation parameters of the target area image; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise; determining a region labeling result of the target region image according to the image labeling parameters; determining the weight of each image labeling parameter; and determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a target area image of an image to be processed;
determining multi-dimensional image annotation parameters of the target area image; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
determining a region labeling result of the target region image according to the image labeling parameters;
determining the weight of each image labeling parameter;
and determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result.
2. The method of claim 1, wherein determining the weight of the image annotation parameter comprises:
acquiring a sample image for determining the weight of an image annotation parameter;
obtaining an expected labeling result of the image labeling parameters of each sample image;
and determining the weight of the image annotation parameter according to the expected annotation result.
3. The method according to claim 1, wherein before the acquiring the target area image of the image to be processed, the method further comprises:
acquiring an original gray image of the image to be processed;
dividing the original gray level image into a first set number of regional images;
and respectively carrying out normalization processing on the area images to obtain the target area image.
4. The method according to claim 3, wherein the determining a target image labeling result of the image to be processed according to the weight of the image labeling parameter and the region labeling result comprises:
acquiring gray values of pixel points of the images of the target areas;
calculating an image definition value, an image symmetry value, an image brightness value and an image noise value of each target area image according to the gray value of each target area image pixel point;
and determining a target image labeling result of the image to be processed according to the image definition value, the image symmetry value, the image brightness value, the image noise value and the weight of each image labeling parameter of each target area image.
5. The method of claim 4, wherein determining the target image labeling result of the image to be processed according to the image sharpness value, the image symmetry value, the image brightness value, the image noise value of each target area image and the weight of each image labeling parameter comprises:
acquiring the area image position of each target area image in the image to be processed;
calculating the normalized gray level difference value of the image definition value, the image symmetry value, the image brightness value and the image noise value of each target area image according to the area image position;
determining an initial region labeling result of the normalized gray scale difference value according to the weight of each image labeling parameter;
and determining a target image labeling result of the image to be processed according to the initial region labeling result of each target region image.
6. The method according to claim 5, wherein the determining a target image labeling result of the image to be processed according to the initial region labeling result of each target region image comprises:
determining initial region labeling results and values of any second set number of target region images;
and determining a target image labeling result of the image to be processed according to the initial region labeling result and the value.
7. The method according to claim 6, wherein the determining a target image labeling result of the image to be processed according to the initial region labeling result and the value comprises:
under the condition that the sum of the initial region labeling results is determined to be larger than a first preset threshold value, determining that the target image labeling result is a first target image labeling result;
under the condition that the sum of the initial region labeling results is determined to be smaller than or equal to a first preset threshold and larger than a second preset threshold, determining that the target image labeling result is a second target image labeling result;
under the condition that the sum of the initial region labeling results is determined to be less than or equal to a second preset threshold and greater than a third preset threshold, determining the target image labeling result as a third target image labeling result;
under the condition that the sum of the initial region labeling results is determined to be less than or equal to a third preset threshold and greater than a fourth preset threshold, determining that the target image labeling result is a fourth target image labeling result;
and under the condition that the sum of the initial region labeling results is determined to be less than or equal to a fourth preset threshold value, determining that the target image labeling result is a fifth target image labeling result.
8. An image processing apparatus characterized by comprising:
the target area image acquisition module is used for acquiring a target area image of the image to be processed;
the image annotation parameter determination module is used for determining multi-dimensional image annotation parameters of the target area image; the multi-dimensional image labeling parameters comprise at least one of image definition, image symmetry, image brightness and image noise;
the region labeling result determining module is used for determining a region labeling result of the target region image according to the image labeling parameters;
the image labeling parameter weight determining module is used for determining the weight of each image labeling parameter;
and the target image annotation result determining module is used for determining a target image annotation result of the image to be processed according to the weight of the image annotation parameter and the region annotation result.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-7.
10. A computer storage medium on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 7.
CN202111054581.3A 2021-09-09 2021-09-09 Image processing method and device, electronic equipment and storage medium Pending CN113781428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111054581.3A CN113781428A (en) 2021-09-09 2021-09-09 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113781428A true CN113781428A (en) 2021-12-10

Family

ID=78841998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111054581.3A Pending CN113781428A (en) 2021-09-09 2021-09-09 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113781428A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481238A (en) * 2017-09-20 2017-12-15 众安信息技术服务有限公司 Image quality measure method and device
CN111079740A (en) * 2019-12-02 2020-04-28 咪咕文化科技有限公司 Image quality evaluation method, electronic device, and computer-readable storage medium
CN112102309A (en) * 2020-09-27 2020-12-18 中国建设银行股份有限公司 Method, device and equipment for determining image quality evaluation result


Similar Documents

Publication Publication Date Title
CN110189336B (en) Image generation method, system, server and storage medium
CN111738316B (en) Zero sample learning image classification method and device and electronic equipment
CN111222509A (en) Target detection method and device and electronic equipment
CN110390295B (en) Image information identification method and device and storage medium
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN111753114A (en) Image pre-labeling method and device and electronic equipment
CN114860699A (en) Data quality detection method, device, equipment and storage medium
CN111382643B (en) Gesture detection method, device, equipment and storage medium
CN109842619B (en) User account intercepting method and device
CN111815748A (en) Animation processing method and device, storage medium and electronic equipment
CN112287734A (en) Screen-fragmentation detection and training method of convolutional neural network for screen-fragmentation detection
CN113781428A (en) Image processing method and device, electronic equipment and storage medium
CN111832354A (en) Target object age identification method and device and electronic equipment
CN115134677A (en) Video cover selection method and device, electronic equipment and computer storage medium
CN114821034A (en) Training method and device of target detection model, electronic equipment and medium
CN111696154B (en) Coordinate positioning method, device, equipment and storage medium
CN114708239A (en) Glue width detection method and device, electronic equipment and storage medium
CN110516024B (en) Map search result display method, device, equipment and storage medium
CN111124862B (en) Intelligent device performance testing method and device and intelligent device
CN114217790A (en) Interface scheduling method and device, electronic equipment and medium
CN112035732A (en) Method, system, equipment and storage medium for expanding search results
CN112559340A (en) Picture testing method, device, equipment and storage medium
CN111262727A (en) Service capacity expansion method, device, equipment and storage medium
CN102810042B (en) Method and system for generating image thumbnail on layout
CN111143346A (en) Method and device for determining difference of tag group, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination