CN113706439A - Image detection method and device, storage medium and computer equipment - Google Patents


Info

Publication number: CN113706439A
Application number: CN202110260048.6A
Authority: CN (China)
Prior art keywords: image, difference, target, pixel, initial
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈裕发, 龙祖苑, 谢宗兴
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110260048.6A
Publication of CN113706439A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The application discloses an image detection method, an image detection device, a storage medium and computer equipment. The method relates to computer vision in the field of artificial intelligence and can acquire an initial image and an image change parameter; process the initial image based on the image change parameter to obtain a target image; perform pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image; determine a target difference region and a region difference feature of the target difference region based on a plurality of difference pixel points in the difference image; and determine a detection result of the target image according to the region difference feature and the image change parameter. The image detection efficiency can be effectively improved.

Description

Image detection method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of images, and in particular, to an image detection method, an image detection apparatus, a storage medium, and a computer device.
Background
With the development of technology, local small-amplitude processing can be performed on an image in specific scenes, for example, the face thinning and eye enlargement that are common in photographing software. Because the processing is local and small in amplitude, the change of the processed image compared with the image before processing is slight, and the processing effect of the processed image mostly has to be checked manually.
In the course of research on and practice with the prior art, the inventors of the present application found that the conventional image detection method is inefficient.
Disclosure of Invention
The embodiment of the application provides an image detection method, an image detection device, a storage medium and computer equipment, which can effectively improve the image detection efficiency.
The embodiment of the application provides an image detection method, which comprises the following steps:
acquiring an initial image and an image change parameter;
processing the initial image based on the image change parameters to obtain a target image;
performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image;
determining a target difference region and a region difference characteristic of the target difference region in the difference image based on a plurality of difference pixel points in the difference image;
and determining the detection result of the target image according to the region difference characteristic and the image change parameter.
Accordingly, the present application provides an image detection apparatus comprising:
the acquisition module is used for acquiring an initial image and image change parameters;
the processing module is used for processing the initial image based on the image change parameters to obtain a target image;
the difference analysis module is used for carrying out pixel difference analysis on corresponding positions in the initial image and the target image so as to obtain a difference image;
the characteristic determining module is used for determining a target difference area and an area difference characteristic of the target difference area in the difference image based on a plurality of difference pixel points in the difference image;
and the result determining module is used for determining the detection result of the target image according to the region difference characteristic and the image change parameter.
In some embodiments, the image detection apparatus further comprises:
the calculation module is used for performing fusion calculation on all target difference pixel points of the target difference region to obtain pixel difference characteristics;
at this time, the result determination module is specifically configured to:
determining a detection result of the target image based on the region difference feature, the pixel difference feature, and the image variation parameter.
In some embodiments, the result determination module includes an acquisition sub-module, a quantization sub-module, and a determination sub-module, wherein,
the obtaining submodule is used for obtaining an image change measurement model;
the quantization submodule is used for performing difference quantization on the region difference characteristics and the pixel difference characteristics through the image change measurement model to obtain actual change data of the target image;
and the determining submodule is used for determining the detection result of the target image based on the actual change data and the image change parameter.
In some embodiments, the determination submodule is specifically configured to:
when the difference value between the actual change data and the image change parameter is smaller than a preset threshold value, determining that the detection result of the target image is normal;
and when the difference value between the actual change data and the image change parameter is larger than a preset threshold value, determining that the detection result of the target image is abnormal.
In some embodiments, the image change metric model comprises a region metric submodel and a pixel metric submodel, the actual change data comprises region actual change data and pixel actual change data, and the quantization submodule is specifically configured to:
performing difference quantization on the region difference characteristics through the region measurement sub-model to obtain region actual change data of the target image;
and carrying out difference quantization on the pixel difference characteristics through the pixel measurement sub-model to obtain the actual pixel change data of the target image.
In some embodiments, the determination submodule is specifically configured to:
and when the difference value between the area actual change data and the image change parameter is smaller than a preset first threshold value and the difference value between the pixel actual change data and the image change parameter is smaller than a preset second threshold value, determining that the detection result of the target image is normal.
In some embodiments, the determination submodule is specifically configured to:
when the difference value between the actual change data of the region and the image change parameter is larger than a preset first threshold value, determining that the detection result of the target image is abnormal in image area;
and when the difference value between the actual pixel change data and the image change parameter is larger than a preset second threshold value, determining that the detection result of the target image is image position abnormity.
In some embodiments, the initial image comprises a plurality of initial pixel points, the target image comprises a plurality of target pixel points, the difference analysis module comprises a determination submodule, a calculation submodule, and an integration submodule, wherein,
the determining submodule is used for determining a plurality of groups of pixel pairs with the same position information based on the position information of each initial pixel point in the initial image and the position information of each target pixel point in the target image, and the pixel pairs comprise the initial pixel points and the target pixel points;
the calculation submodule is used for carrying out difference calculation on each group of pixel pairs according to the color information of each initial pixel point and the color information of each target pixel point to obtain the color difference information of each group of pixel pairs;
and the integration submodule is used for integrating the position information and the color difference information of each group of pixel pairs to obtain a difference image.
In some embodiments, the computation submodule is specifically configured to:
determining difference information between the color information of the initial pixel point and the color information of the target pixel point in the pixel pair;
when the difference information meets a preset condition, determining the color difference information of the pixel pair as first color difference information;
and when the difference information does not meet the preset condition, determining the color difference information of the pixel pair as second color difference information.
In some embodiments, the difference pixel includes position information and color difference information, and the characteristic determining module is specifically configured to:
when the color difference information of the difference pixel points is target difference information, determining the difference pixel points as target difference pixel points;
and determining a target difference area in the difference image and the area difference characteristics of the target difference area according to the position information of all target difference pixel points in the difference image.
In some embodiments, the image detection apparatus further comprises:
the system comprises a sample acquisition module, a data acquisition module and a data processing module, wherein the sample acquisition module is used for acquiring an initial sample image and at least two target sample images corresponding to the initial sample image, and the target sample images carry sample change parameters relative to the initial sample image;
the sample difference module is used for carrying out difference processing on the initial sample image and the at least two target sample images to obtain sample difference characteristics of each target sample image;
and the model generation module is used for generating an image change measurement model based on the sample difference characteristics of each target sample image and the set sample change parameters.
Correspondingly, the embodiment of the present application further provides a storage medium, where a computer program is stored, and the computer program is suitable for being loaded by a processor to execute any one of the image detection methods provided by the embodiment of the present application.
Correspondingly, the embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements any one of the image detection methods provided by the embodiments of the present application when executing the computer program.
The method and the device can obtain the initial image and the image change parameters; processing the initial image based on the image change parameters to obtain a target image; performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image; determining a target difference area and area difference characteristics of the target difference area in the difference image based on a plurality of difference pixel points in the difference image; and determining the detection result of the target image according to the region difference characteristics and the image change parameters.
According to the method and the device, difference calculation can be carried out on the initial image and the target image to obtain a difference image of the two, the target difference region in the difference image and the region difference feature of the target difference region can be determined, and the detection result of the target image can be obtained based on the region difference feature and the image change parameter. Compared with the prior art, in which the processing effect is mostly checked manually, this process is carried out automatically by computer equipment, so the image detection efficiency can be effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic view of a scene of an image detection system provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of an image detection method provided in an embodiment of the present application;
FIG. 3 is another schematic flow chart diagram of an image detection method provided in an embodiment of the present application;
FIG. 4 is a schematic image diagram of an image detection method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of an image region of an image detection method provided in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image detection apparatus provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the embodiments described in the present application are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Computer Vision (CV) technology is a science that studies how to make a machine "see"; it uses cameras and computers in place of human eyes to perform machine vision tasks such as identification, tracking and measurement on a target, and performs further image processing so that the processed image is more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques and attempts to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
The image detection method of the present application relates to technologies in the field of artificial intelligence; for example, an initial image can be processed through an artificial intelligence model to obtain a target image, and an image change measurement model can be obtained by training based on deep learning techniques.
The image detection method can be integrated in an image detection system, the image detection system can be integrated in one or more computer devices, the computer devices can comprise terminals or servers, and the like, wherein the servers can be independent physical servers, server clusters or distributed systems formed by a plurality of physical servers, and cloud servers providing cloud computing services. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a smart home, a wearable electronic device, a vehicle-mounted computer, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
Referring to fig. 1, an image detection system may include an image detection device, wherein the image detection device may acquire an initial image and an image variation parameter; processing the initial image based on the image change parameters to obtain a target image; performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image; determining a target difference area and area difference characteristics of the target difference area in the difference image based on a plurality of difference pixel points in the difference image; and determining the detection result of the target image according to the region difference characteristics and the image change parameters.
It should be noted that the scene schematic diagram of the image detection system shown in fig. 1 is merely an example, and the image detection system and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
The following are detailed below. In this embodiment, a detailed description will be given of an image detection method, which may be integrated on a terminal or a server, as shown in fig. 2, where fig. 2 is a flowchart illustrating the image detection method provided in this embodiment of the present application. The image detection method may include:
101. an initial image and image change parameters are acquired.
The initial image may include an image before processing, and the initial image may be a picture or a video frame captured in a video, for example, the initial image may be a frame image captured from a short video uploaded by a user.
The image change parameter may include the data according to which the initial image is processed. The image change parameter may take the form of a percentage, a decimal, an integer, and the like, and its form may be flexibly determined according to the actual application scene. The image change parameter may also be a parameter directed at processing a particular element in the initial image, for example, a parameter applied to an eye element in the initial image.
Specifically, the initial image may be acquired in multiple ways: it may be obtained by shooting with an image acquisition device, or by receiving a user upload or a transmission from another computer device. If a picture is shot or received, the picture may be directly determined to be the initial image; if a video is shot or received, a video frame image may be captured from the video, and that video frame image is the initial image.
The image change parameter may be input by a user, may be preset and stored in a computer device such as a local terminal or a server, or may be calculated by the computer device according to specific features (such as area, resolution, and the like) of the acquired initial image. For example, the computer device determines a change parameter range for the initial image through an algorithm according to the resolution and color range of the initial image and displays the range on a user interface, the user inputs a target change parameter within the range, and the computer device thereby obtains the target change parameter.
For example, picture A (i.e., an initial image) is obtained by shooting on a computer device, a parameter obtaining request is sent to a server, and then an image change parameter 1 returned by the server based on the parameter obtaining request is received.
102. And processing the initial image based on the image change parameters to obtain a target image.
The target image may include a processed image obtained by processing the initial image. The target image may be obtained by processing the initial image as a whole, or may be an image obtained by performing operations such as moving, enlarging or reducing an element in the initial image; accordingly, the image change parameter may refer to the distance the element is moved, the degree to which the element is enlarged or reduced, and the like.
Specifically, the processing of the initial image according to the image change parameter may include at least one of image enhancement, image segmentation, image recognition, and the like, and the processing may also be performed by a trained machine learning model, for example, the initial image and the image change parameter are input into the trained change model to be processed, so as to obtain the target image.
For example, picture A is input into an image processing model to obtain a processed picture B.
103. And carrying out pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image.
The difference image may include an image obtained according to the difference between the initial image and the target image. Depending on the application scene, the difference between the initial image and the target image may be expressed in multiple ways, so the display elements of the difference image may differ from case to case; the display elements may be, for example, the size of the difference image, the color range of the pixels in the difference image, and the like. The display elements of the difference image are not necessarily tied to the size, pixel color range and other information of the initial image and the target image; for example, the difference image may have the same size, number of pixels and pixel color range as the initial image.
Specifically, the difference between the initial image and the target image can be obtained through pixel difference analysis, the pixel difference analysis is performed on corresponding positions in the initial image and the target image, for example, the initial image and the target image can be partitioned according to the same partitioning principle, then, pixel difference analysis is performed on corresponding areas of each group of the initial image and the target image, and a difference image is obtained according to an analysis result of the corresponding areas of each group; for another example, pixel difference analysis may be performed on pixel points belonging to the same pixel position in the initial image and the target image, so as to obtain a difference image.
Specifically, the pixel difference analysis may include analyzing at least one of position information, color information, quantity information, and the like of the pixel, for example, the number of pixel points in a preset color range in the initial image and the target image may be respectively counted, the difference quantity between the initial image and the target image is calculated, the preset pixel information is obtained, and finally, the difference image is generated based on the preset pixel information and the difference quantity.
For example, difference analysis is performed on pixels at corresponding positions in picture A and picture B to obtain a difference image C.
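To make the counting-based analysis above concrete, the following Python sketch counts, in each image, the pixels that fall inside a preset color range and returns the difference between the two counts; the function name, the color range, and the NumPy RGB-array representation are illustrative assumptions rather than part of the claimed method.

```python
import numpy as np

def count_based_difference(initial, target,
                           color_low=(200, 200, 200), color_high=(255, 255, 255)):
    """Count the pixels inside a preset color range in each image and return
    the difference in counts (one possible input for building a difference image)."""
    low = np.array(color_low)
    high = np.array(color_high)

    def count_in_range(img):
        # img is an H x W x 3 RGB array; a pixel is counted when every channel
        # lies inside the preset range (the range itself is an assumed example)
        mask = np.all((img >= low) & (img <= high), axis=-1)
        return int(mask.sum())

    return abs(count_in_range(initial) - count_in_range(target))
```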
In some embodiments, the initial image includes a plurality of initial pixel points, the target image includes a plurality of target pixel points, and the step of performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain the difference image may include:
determining a plurality of groups of pixel pairs with the same position information based on the position information of each initial pixel point in the initial image and the position information of each target pixel point in the target image, wherein the pixel pairs comprise the initial pixel points and the target pixel points; according to the color information of each initial pixel point and the color information of each target pixel point, performing difference calculation on each group of pixel pairs to obtain the color difference information of each group of pixel pairs; and integrating the position information and the color difference information of each group of pixel pairs to obtain a difference image.
The image may include a plurality of pixel points, the initial image includes a plurality of initial pixel points, the target image includes a plurality of target pixel points, each pixel point is located at a specific position in the image and has a specific color, and one way of performing pixel difference analysis may be: and carrying out difference analysis on the colors of the pixel points at the same positions in the initial image and the target image so as to obtain a difference image.
The position information may include a relative position of the pixel point in the image, and the position information may be represented by a coordinate, for example, the position information of the pixel point a may be (30, 30).
In this embodiment, the initial image is processed without changing the size, resolution, and the like of the image, and the processing mainly changes the related information of elements in the initial image, that is, the size of the initial image is the same as that of the target image, and the number of pixels included in the initial image and the number of pixels included in the target image are the same, so that a plurality of pixel pairs are determined according to the position information of each initial pixel in the initial image and the position information of each target pixel in the target image, and each pixel pair includes one initial pixel and one target pixel with the same position information.
The color information may be represented in different color modes, such as indexed color, RGB, HSB, etc. For example, if the color information is represented in RGB mode, the color information of pixel point A may be (10, 20, 5).
Specifically, the difference calculation may be performed on each group of pixel pairs in sequence, and the color difference information is obtained by performing calculation according to the color information of the initial pixel point and the color information of the target pixel point in the pixel pair, for example, the operation of obtaining the variance or the standard deviation may be performed on the color information of the two.
The color difference information may represent the difference between the pixel pair in color, and the color difference information may be similar to the color information, for example, the color difference information may be represented by a color model that is the same as the color information, and the types of the color difference information obtained by performing the difference calculation may be at least two.
Each pixel pair may correspond to position information (i.e., position information of an initial pixel point or a target pixel point included in the pixel pair) and color difference information, and these pieces of information are integrated to obtain a difference image. One pixel point comprises position information and color information, the obtained color difference information can be set as the color information, then the position information corresponding to the pixel is combined, a difference pixel point can be determined, the operation is carried out on all pixel pairs, a plurality of difference pixel points can be obtained, and the difference pixel points can form a difference image.
For example, picture A includes a plurality of initial pixel points and picture B includes a plurality of target pixel points, and a plurality of groups of pixel pairs having the same position information are determined based on the position information of each initial pixel point in picture A and the position information of each target pixel point in picture B, where each group of pixel pairs includes one initial pixel point and one target pixel point; according to the color information of the initial pixel point and the color information of the target pixel point in a group of pixel pairs, difference calculation is performed on the group of pixel pairs to obtain the color difference information of the group, and the color difference information of each group of pixel pairs is obtained in the same way; the position information and the color difference information of each group of pixel pairs are integrated to obtain a difference image C.
In some embodiments, the step of performing difference calculation on each group of pixel pairs according to the color information of each initial pixel point and the color information of each target pixel point to obtain the color difference information of each group of pixel pairs may include:
determining difference information between the color information of the initial pixel point and the color information of the target pixel point in the pixel pair; when the difference information meets a preset condition, determining the color difference information of the pixel pair as first color difference information; and when the difference information does not meet the preset condition, determining the color difference information of the pixel pair as second color difference information.
In this embodiment, the color information may be represented as an array including at least two elements; the color information of the initial pixel point is an initial color array, and the color information of the target pixel point is a target color array. The difference information may include the difference between the initial color array and the target color array; for example, the absolute differences of the array elements at the same position in the two arrays may be respectively calculated to obtain the difference information.
The preset condition may be flexibly set according to an actual situation, for example, the preset condition may be that the difference information is the same as the set data, or the difference information is located in a set value interval, and the like. And determining the color difference information as first color difference information when the difference information satisfies a preset condition, and determining the color difference information as second color difference information when the difference information does not satisfy the preset condition.
According to the color information of each initial pixel point and the color information of each target pixel point, difference calculation is performed on each group of pixel pairs to obtain the color difference information of each group of pixel pairs, and the color difference information can be obtained through algorithm calculation. For example, a pixel pair comprises an initial pixel point and a target pixel point; the color information of the initial pixel point can be expressed as (R0(x, y), G0(x, y), B0(x, y)), the color information of the target pixel point can be expressed as (R1(x, y), G1(x, y), B1(x, y)), and the color difference information may be (R(x, y), G(x, y), B(x, y)). Consistent with the per-channel absolute-difference calculation described above, the algorithm for determining the color difference information of the pixel pair may be, for each channel:

R(x, y) = |R0(x, y) − R1(x, y)|, G(x, y) = |G0(x, y) − G1(x, y)|, B(x, y) = |B0(x, y) − B1(x, y)|.
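A minimal Python sketch of the per-position difference calculation described above is shown below, assuming RGB images of equal size; the threshold value and the two color difference values (white for the first color difference information, black for the second) are illustrative assumptions consistent with the black-and-white sample difference image used later in this description.

```python
import numpy as np

FIRST_COLOR_DIFF = (255, 255, 255)   # assumed "first color difference information"
SECOND_COLOR_DIFF = (0, 0, 0)        # assumed "second color difference information"
THRESHOLD = 10                       # assumed preset condition on the per-channel difference

def build_difference_image(initial, target):
    """Build a difference image from two same-sized H x W x 3 RGB arrays.

    For each pixel pair at the same position, the per-channel absolute
    difference is computed; pairs whose difference satisfies the preset
    condition receive the first color difference information, the others
    the second."""
    assert initial.shape == target.shape
    abs_diff = np.abs(initial.astype(np.int16) - target.astype(np.int16))
    changed = np.any(abs_diff > THRESHOLD, axis=-1)   # preset condition (assumed)
    diff_image = np.empty_like(initial)
    diff_image[changed] = FIRST_COLOR_DIFF
    diff_image[~changed] = SECOND_COLOR_DIFF
    return diff_image
```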
104. and determining a target difference area and an area difference characteristic of the target difference area in the difference image based on a plurality of difference pixel points in the difference image.
The target difference region may be a partial region of the difference image, the target difference region is a region in the difference image that can represent a difference between the initial image and the target image, and the region difference feature may include a geometric feature of the target difference region, for example, the region difference feature may be an area, a position, a shape, and the like of the target difference region.
Determining the target difference region in the difference image may include various ways, such as randomly obtaining a target difference region in the difference image by a random algorithm. The manner of obtaining the region difference feature may also include multiple manners, for example, geometric data such as the side length and the radius of the target difference region may be measured, and then the region difference feature may be obtained by calculation according to the geometric data.
For example, the difference image C may be input into an intelligent model in the computer vision field, and the output target difference region 1 and the relative position information (i.e., the region difference feature) of the target difference region 1 in the difference image C are obtained.
In some embodiments, the disparity pixel point includes position information and color disparity information, and the step "determining a target disparity region and a region disparity feature of the target disparity region in the disparity image based on a plurality of disparity pixel points in the disparity image" may include:
when the color difference information of the difference pixel points is target difference information, determining the difference pixel points as target difference pixel points; and determining a target difference area in the difference image and the area difference characteristics of the target difference area according to the position information of all target difference pixel points in the difference image.
In this embodiment, the difference pixel points may be screened to obtain target difference pixel points, and a target difference region and a region difference feature thereof may be determined according to positions of all target difference pixel points in the difference image.
Specifically, difference pixel points whose color difference information is the target color difference information may be determined to be target difference pixel points by screening according to the color difference information of the difference pixel points. The target color difference information may be predetermined and may be at least one of all possible kinds of color difference information; for example, when the color difference information includes the first color difference information and the second color difference information, the target color difference information may be the first color difference information. As another example, the target color difference information may be A1 and A3, in which case a difference pixel point whose color difference information is A1 or A3 is determined to be a target difference pixel point.
The target difference region may be in the shape of a rectangle, a circle, an irregular polygon, or the like, and the target difference region may include only all the target difference pixel points, or may include all the target difference pixel points together with some non-target difference pixel points. For example, if a circular target difference region is desired but the region formed by all target difference pixel points is a polygon, the target difference region necessarily includes all target difference pixel points and some non-target difference pixel points.
For example, when the color difference information of the difference pixel is difference information 1, the difference pixel is determined as a target difference pixel; and determining a target difference region 1 in the difference image and a region difference characteristic 1 of the target difference region according to the position information of all target difference pixel points in the difference image C.
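The following Python sketch illustrates one way to obtain the target difference region and simple region difference features, by framing all target difference pixel points with a minimal rectangle as in the later embodiment; the function name and the choice of white as the target color difference information are assumptions made for illustration.

```python
import numpy as np

def target_difference_region(diff_image, target_color=(255, 255, 255)):
    """Frame all target difference pixel points with a minimal rectangle.

    Pixels whose color difference information equals `target_color` are the
    target difference pixel points. Returns the bounding box and two simple
    region difference features: the transverse and longitudinal side lengths."""
    mask = np.all(diff_image == np.array(target_color), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, (0, 0)
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    width = int(x_max - x_min + 1)    # transverse side length
    height = int(y_max - y_min + 1)   # longitudinal side length
    return (x_min, y_min, x_max, y_max), (width, height)
```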
105. And determining the detection result of the target image according to the region difference characteristics and the image change parameters.
The detection result can indicate whether the target image is the image obtained by changing the initial image according to the image change parameter. If so, the processing of the initial image is effective and correct and the desired target image is obtained; otherwise, the processing of the initial image is abnormal and the actually obtained target image is inconsistent with the desired image. Even when the difference between the initial image and the desired image is small, whether the actually obtained target image is the desired image can be detected through the scheme of the present application.
Specifically, there are various ways to determine the detection result according to the region difference feature and the image variation parameter, for example, the region difference feature and the image variation parameter may be subjected to fusion calculation to determine the detection result.
For example, the detection result 1 of the picture B is determined according to the image variation parameter 1 and the relative position information of the target difference region 1 in the difference image C.
In some embodiments, the image detection method further comprises:
performing fusion calculation on all target difference pixel points of the target difference area to obtain pixel difference characteristics;
the step of determining the detection result of the target image according to the region difference feature and the image variation parameter may include:
and determining the detection result of the target image based on the region difference characteristic, the pixel difference characteristic and the image change parameter.
In order to further extract information in the difference image and obtain a more accurate detection result, fusion operation can be performed on all target difference pixel points in the target difference region to obtain pixel difference characteristics on a pixel point layer, and then the detection result of the target image is obtained based on the pixel difference characteristics and the region difference characteristics.
The pixel difference features may include features obtained based on all target difference pixel points, position features obtained based on position information of all target difference pixel points, color features obtained based on color information of all target difference pixel points, information features obtained based on position information and color information of all target difference pixel points, and the like.
Specifically, the fusion calculation may be performed in at least one of calculation manners such as averaging, weighting, variance/standard deviation calculation, normalization/normal distribution calculation, and the like, for example, the color standard deviation may be calculated for the color information of all target pixel points, the position standard deviation may be calculated for the position information of all target pixel points, and then the color standard deviation and the position standard deviation are weighted and summed to obtain the pixel difference characteristic.
For example, there may be N target difference pixel points in the target difference region (N is a positive integer), and the coordinates of the n-th target difference pixel point may be (x_n, y_n). The pixel difference feature may include a first average position x̄ and a second average position ȳ, and the calculation formulas may be:

x̄ = (x_1 + x_2 + … + x_N) / N

ȳ = (y_1 + y_2 + … + y_N) / N
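A short Python sketch of this fusion calculation, computing the first and second average positions from the coordinates of the target difference pixel points, is shown below; the function name and the NumPy array layout are assumptions.

```python
import numpy as np

def pixel_difference_feature(coords):
    """Fuse the N target difference pixel coordinates into a pixel difference
    feature: the first average position (mean x) and the second average
    position (mean y), matching the formulas above.

    `coords` is an N x 2 array of (x_n, y_n) pairs."""
    coords = np.asarray(coords, dtype=float)
    return float(coords[:, 0].mean()), float(coords[:, 1].mean())
```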
there are various ways to determine the detection result based on the region difference feature, the pixel difference feature and the image change parameter, for example, the region difference feature, the pixel difference feature and the image change parameter may be subjected to fusion calculation to obtain the detection result of the target image.
For example, the position coordinates of all target difference pixel points in the target difference region 1 may be subjected to fusion calculation to obtain the pixel difference feature 1, the region difference feature 1 and the pixel difference feature 1 are weighted and averaged, and the obtained result is compared with the image change parameter to obtain the detection result of picture B.
In some embodiments, the step of determining the detection result of the target image based on the region difference feature, the pixel difference feature, and the image variation parameter may include:
acquiring an image change measurement model; carrying out difference quantization on the region difference characteristics and the pixel difference characteristics through an image change measurement model to obtain actual change data of the target image; and determining the detection result of the target image based on the actual change data and the image change parameter.
The image change metric model may be used to quantify the degree of change of the target image relative to the initial image. There may be at least one image change metric model; if there are multiple image change metric models, the change of the target image relative to the initial image can be quantified in multiple dimensions (the dimensions may be, for example, position, color, etc.). Obtaining the model may include directly obtaining an existing image change metric model, or constructing an initial model, training the initial model, and finally obtaining a trained image change metric model.
The actual change data may include quantized data of a portion of the target image that changes with respect to the initial image, and the image change parameter is data on which the initial image is processed, and is theoretically quantized data of a portion of the target image that changes with respect to the initial image. For example, if the actual change data is the same as the image change parameter, the detection result is determined to be that the target image is normal, and if the actual change data is different from the image change parameter, the detection result is determined to be that the target image is abnormal.
For example, the model M may be obtained, the pixel difference information 1 and the region difference feature 1 are input into the model M for difference quantization to obtain the actual change data 1 of the picture B, and detection is performed according to the actual change data 1 and the image change parameter 1 to obtain the detection result of the picture B.
In some embodiments, the step of "determining a detection result of the target image based on the actual variation data and the image variation parameter" may include:
when the difference value between the actual change data and the image change parameter is smaller than a preset threshold value, determining that the detection result of the target image is that the image is normal; and when the difference value between the actual change data and the image change parameter is larger than the preset threshold value, determining that the detection result of the target image is that the image is abnormal.
In the actual application process, the difference between the actual change data and the image change parameter can be obtained, and if the difference is within an acceptable range, the obtained target image can be considered to pass the detection, and at this time, the difference between the target image and the desired image can be understood to be small and can be ignored.
Specifically, the actual change data may be smaller than the image change parameter or larger than the image change parameter, so that an absolute value difference between the actual change data and the image change parameter may be obtained, and a degree of difference between the actual change data and the image change parameter may be determined according to a size measurement of the absolute value difference.
A detection result of image normal indicates that the target image passes the detection, i.e., the target image meets the requirement of being the image obtained by processing the initial image based on the image change parameter; a detection result of image abnormal indicates that the target image does not pass the detection, i.e., the target image does not meet that requirement.
For example, when the difference between the actual change data 1 and the image change parameter 1 is smaller than a preset threshold 1, determining that the detection result of the picture B is that the image is normal; and when the difference value between the actual change data 1 and the image change parameter 1 is greater than a preset threshold value 1, determining that the detection result of the picture B is an image anomaly.
In some embodiments, the image change metric model includes a region metric submodel and a pixel metric submodel, the actual change data includes region actual change data and pixel actual change data,
the step of performing difference quantization on the region difference feature and the pixel difference feature through the image change metric model to obtain actual change data of the target image may include:
carrying out difference quantization on the region difference characteristics through a region measurement sub-model to obtain region actual change data of the target image; and carrying out difference quantization on the pixel difference characteristics through the pixel measurement sub-model to obtain the actual pixel change data of the target image.
Specifically, the area difference feature and the pixel difference feature may be respectively quantized differentially by two sub-models to obtain area actual change data and pixel actual change data of the target image, where the area measurement sub-model is used to quantize the area change degree of the target image relative to the initial image, the pixel measurement sub-model is used to quantize the pixel change degree of the target image relative to the initial image, the area actual change data may be quantized data of an area portion where the target image changes relative to the initial image, and the pixel actual change data may be quantized data of a pixel portion where the target image changes relative to the initial image.
For example, the model M includes a submodel M1 and a submodel M2, and the pixel difference information 1 is input to the submodel M1 to obtain the pixel actual change data 1, and the area difference information 1 is input to the submodel M2 to obtain the area actual change data 1.
In some embodiments, the step of "determining a detection result of the target image based on the actual variation data and the image variation parameter" may include:
and when the difference value between the area actual change data and the image change parameter is smaller than a preset first threshold value and the difference value between the pixel actual change data and the image change parameter is smaller than a preset second threshold value, determining that the detection result of the target image is normal.
The difference between the actual area change data and the image change parameter may be an absolute difference, and the difference between the actual pixel change data and the image change parameter may be an absolute difference, which is obtained by subtracting the actual pixel change data and the image change parameter and calculating an absolute value.
Tolerance of different dimensions (regions and pixels) to the error is different, so that a preset first threshold and a preset second threshold can be set for the region actual change data and the pixel actual change data respectively, and a more accurate detection result of the target image can be obtained.
For example, if the difference between the area actual change data 1 and the image change parameter 1 is smaller than the first threshold, and the difference between the pixel actual change data 1 and the image change parameter 1 is smaller than the second threshold, it is determined that the detection result of the image B is that the image is normal.
In some embodiments, the step of "determining a detection result of the target image based on the actual variation data and the image variation parameter" may include:
when the difference value between the actual change data of the region and the image change parameter is larger than a preset first threshold value, determining that the detection result of the target image is abnormal in image area; and when the difference value between the actual pixel change data and the image change parameter is larger than a preset second threshold value, determining that the detection result of the target image is image position abnormity.
An image area anomaly indicates that the area of the difference region of the target image relative to the initial image does not accord with the image change parameter, and an image position anomaly indicates that the pixel-level difference of the target image relative to the initial image does not accord with the image change parameter.
For example, if the difference between the actual change data 1 of the region and the image change parameter 1 is greater than the first threshold, determining that the detection result of the image B is an image area anomaly; and if the difference value between the pixel actual change data 1 and the image change parameter 1 is greater than a second threshold value, determining that the detection result of the image B is the image position abnormity.
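The threshold comparisons described in these embodiments can be sketched as follows in Python; the concrete threshold values are placeholders, since the application only requires that a preset first threshold and a preset second threshold be set.

```python
def detection_result(region_actual, pixel_actual, change_parameter,
                     first_threshold=0.05, second_threshold=0.05):
    """Compare the region/pixel actual change data with the image change
    parameter and return one of the three results described above.
    Threshold values are illustrative assumptions."""
    region_ok = abs(region_actual - change_parameter) < first_threshold
    pixel_ok = abs(pixel_actual - change_parameter) < second_threshold
    if region_ok and pixel_ok:
        return "image normal"
    if not region_ok:
        return "image area anomaly"
    return "image position anomaly"
```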
In some embodiments, the image detection method further comprises:
acquiring an initial sample image and at least two target sample images corresponding to the initial sample image, wherein the target sample images carry sample change parameters relative to the initial sample image; carrying out difference processing on the initial sample image and at least two target sample images to obtain sample difference characteristics of each target sample image; and generating an image change measurement model based on the sample difference characteristics of each target sample image and the set sample change parameters.
The image change metric model of the present application may be obtained based on regression analysis or on machine learning. For example, an initial metric equation (i.e., an initial metric model) may be set; the initial metric model may include at least one unknown parameter, its independent variable is the sample difference feature, and its dependent variable is the sample change parameter. The sample difference feature and sample change parameter of each target sample image may be used in turn as the independent variable and dependent variable of the initial metric model to obtain at least two equations, and the values of all unknown parameters in the initial metric model are obtained by solving the at least two equations, so as to obtain an image change metric model in which all parameters are known.
The target sample image is obtained by changing the initial sample image based on the sample change parameter, the sample change parameter is a parameter according to which the target sample image is obtained by changing the initial sample image, and the sample difference characteristic is quantized data of the target sample image actually changed relative to the initial sample image.
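As one possible reading of the regression-based construction described above, the following Python sketch fits a linear metric model by least squares from the sample difference features and sample change parameters; the linear form, the feature layout, and the example numbers are assumptions for illustration only.

```python
import numpy as np

def fit_change_metric_model(sample_features, sample_change_params):
    """Fit a linear image change metric model by least squares.

    Each row of `sample_features` is the sample difference feature of one
    target sample image; `sample_change_params` holds the corresponding
    sample change parameters. At least two samples are used, matching the
    "at least two target sample images" above."""
    features = np.asarray(sample_features, dtype=float)
    params = np.asarray(sample_change_params, dtype=float)
    design = np.hstack([features, np.ones((features.shape[0], 1))])  # bias term
    coeffs, *_ = np.linalg.lstsq(design, params, rcond=None)
    return coeffs

# Usage sketch with made-up numbers: region features (width, height) of two
# sample difference images produced by 50% and 80% eye-enlargement operations.
model = fit_change_metric_model([[40.0, 22.0], [64.0, 35.0]], [0.5, 0.8])
```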
According to the method and the device, difference calculation can be carried out on the initial image and the target image to obtain a difference image of the two, the target difference region in the difference image and the region difference feature of the target difference region can be determined, and the detection result of the target image can be obtained based on the region difference feature and the image change parameter. Compared with the prior art, in which the processing effect is mostly checked manually, this process is carried out automatically by computer equipment, so the image detection efficiency can be effectively improved.
The method described in the above embodiments is further illustrated in detail by way of example.
The image detection method can detect the beauty effect of an image subjected to a beauty operation; the beauty operation may include large-eye processing, nose size adjustment, lip thickness adjustment, smiling lips, face thinning and the like. Referring to fig. 3, fig. 3 is a schematic flow diagram of the image detection method provided by the embodiment of the present application, and the image detection method may include:
201. the computer equipment acquires an initial sample image and at least two target sample images corresponding to the initial sample image, wherein the target sample images carry sample change parameters relative to the initial sample image.
For example, the computer device obtains a sample image 1 (i.e., an initial sample image), and a sample image 2 (a target sample image) and a sample image 3 (a target sample image) corresponding to the sample image 1, where the sample image 2 carries a variation parameter 1, and the sample image 3 carries a variation parameter 2, where the sample image 1 is a human face image, the sample image 2 is an image obtained by performing 50% (sample variation parameter 1) large-eye operation on the sample image 1, and the sample image 3 is an image obtained by performing 80% (sample variation parameter 2) large-eye operation on the sample image 1.
202. The computer device performs pixel difference analysis on corresponding positions in the initial sample image and each of the at least two target sample images, respectively, to obtain at least two sample difference images.
For example, difference analysis is performed on pixel points at corresponding positions in the sample image 1 and the sample image 2 to obtain a sample difference image 1, and difference analysis is performed on pixel points at corresponding positions in the sample image 1 and the sample image 3 to obtain a sample difference image 2. The difference analysis may be to perform difference calculation on color values (such as RGB values) of pixels at the same position, for example, to perform difference calculation on the color value of each color channel, and assign a pixel value to a pixel at the position in the difference image based on the calculation result.
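A minimal sketch of such per-pixel difference analysis, assuming same-sized RGB images stored as NumPy arrays and the black/white difference image described above (the function name and threshold parameter are illustrative assumptions):

```python
import numpy as np

def difference_image(initial, target, threshold=0):
    """Per-pixel difference analysis: a pixel whose color channels differ
    (by more than `threshold` on any channel) becomes white (255,255,255);
    an unchanged pixel becomes black (0,0,0)."""
    a = np.asarray(initial, dtype=np.int16)
    b = np.asarray(target, dtype=np.int16)
    changed = np.any(np.abs(a - b) > threshold, axis=-1)
    diff = np.zeros(a.shape, dtype=np.uint8)
    diff[changed] = 255
    return diff
```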
203. The computer device determines a target sample difference region and a sample region difference characteristic of the target sample difference region in each sample difference image based on a plurality of sample difference pixel points in each sample difference image, respectively.
For example, sample difference image 1 may be as shown in fig. 4, where fig. 4 contains pixels with color values (0,0,0) (black) and pixels with color values (255,255,255) (white). The white portion is framed by the smallest rectangular frame to obtain target sample difference region 1, which may be as shown in fig. 5; the lateral side length 1 (i.e., sample region difference feature 11) and the longitudinal side length 1 (i.e., sample region difference feature 12) of target sample difference region 1 are then measured.
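Framing the white pixels with the smallest rectangle and measuring its side lengths could look like the following sketch (assuming the binary difference image produced by the previous sketch; names are illustrative):

```python
import numpy as np

def target_difference_region(diff):
    """Frame all white (changed) pixels with the smallest axis-aligned
    rectangle and return its lateral and longitudinal side lengths."""
    ys, xs = np.nonzero(np.all(diff == 255, axis=-1))
    if xs.size == 0:
        return 0, 0  # no difference pixels at all
    lateral = int(xs.max() - xs.min() + 1)       # horizontal side length
    longitudinal = int(ys.max() - ys.min() + 1)  # vertical side length
    return lateral, longitudinal
```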
204. The computer device performs fusion calculation on all target sample difference pixel points of each target sample difference region, respectively, to obtain the sample pixel difference features of each sample difference image.
For example, in sample difference image 1 the target sample difference pixel points are the pixels with color values (255,255,255). Each target sample difference pixel point has a two-dimensional position coordinate in the sample difference image, i.e., an X value and a Y value; the X values of all target sample difference pixel points are averaged to obtain a first X average coordinate value (i.e., sample pixel difference feature 11), and the Y values of all target sample difference pixel points are averaged to obtain a first Y average coordinate value (i.e., sample pixel difference feature 12).
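The fusion calculation described here is effectively a centroid computation over the difference pixels; a possible sketch under the same assumptions as above (it assumes at least one changed pixel exists):

```python
import numpy as np

def pixel_difference_features(diff):
    """Fuse all difference (white) pixels into two features: the average
    X coordinate and the average Y coordinate of the changed pixels."""
    ys, xs = np.nonzero(np.all(diff == 255, axis=-1))
    return float(xs.mean()), float(ys.mean())
```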
205. The computer device generates a pixel variation metric model based on the sample pixel difference features and the sample variation parameters of the at least two sample difference images, and generates a region variation metric model based on the sample region difference features and the sample variation parameters of the at least two sample difference images.
For example, sample difference image 1 corresponds to lateral side length 1 (i.e., sample region difference feature 11) h1, longitudinal side length 1 (i.e., sample region difference feature 12) v1, the first X average coordinate value (i.e., sample pixel difference feature 11) x̄1 and the first Y average coordinate value (i.e., sample pixel difference feature 12) ȳ1; sample difference image 2 corresponds to lateral side length 2 (i.e., sample region difference feature 21) h2, longitudinal side length 2 (i.e., sample region difference feature 22) v2, the second X average coordinate value (i.e., sample pixel difference feature 21) x̄2 and the second Y average coordinate value (i.e., sample pixel difference feature 22) ȳ2.

An initial region variation metric model 1 ph, an initial region variation metric model 2 pv, an initial pixel variation metric model 1 px and an initial pixel variation metric model 2 py can be set, respectively, as follows:

ph = a·h + b

pv = c·v + d

px = e·x̄ + f

py = g·ȳ + q

Taking pixel variation metric model 1 px as an example, 50% (sample change parameter 1) p1 and the first X average coordinate value (i.e., sample pixel difference feature 11) x̄1 are substituted into px = e·x̄ + f to obtain formula 1:

p1 = e·x̄1 + f

Then 80% (sample change parameter 2) p2 and the second X average coordinate value (i.e., sample pixel difference feature 21) x̄2 are substituted into px = e·x̄ + f to obtain formula 2:

p2 = e·x̄2 + f

Solving formulas 1 and 2 yields e and f as follows:

e = (p2 − p1) / (x̄2 − x̄1), f = (p1·x̄2 − p2·x̄1) / (x̄2 − x̄1)

Further, pixel variation metric model 1 px is obtained:

px = [(p2 − p1) / (x̄2 − x̄1)]·x̄ + (p1·x̄2 − p2·x̄1) / (x̄2 − x̄1)

Similarly, pixel variation metric model 2 py can be obtained:

py = [(p2 − p1) / (ȳ2 − ȳ1)]·ȳ + (p1·ȳ2 − p2·ȳ1) / (ȳ2 − ȳ1)

region variation metric model 1 ph:

ph = [(p2 − p1) / (h2 − h1)]·h + (p1·h2 − p2·h1) / (h2 − h1)

and region variation metric model 2 pv:

pv = [(p2 − p1) / (v2 − v1)]·v + (p1·v2 − p2·v1) / (v2 − v1)
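A hypothetical sketch of solving this two-equation system for e and f (assuming exactly two sample images and the linear model form above; the numeric values in the usage example are invented purely for illustration):

```python
def solve_pixel_metric_model(p1, x1, p2, x2):
    """Solve p = e * x_mean + f from the two formulas
    p1 = e * x1 + f and p2 = e * x2 + f."""
    e = (p2 - p1) / (x2 - x1)
    f = p1 - e * x1
    return e, f

# Invented example values: 50% and 80% change parameters with two
# measured average X coordinates from the sample difference images.
e, f = solve_pixel_metric_model(0.5, 96.0, 0.8, 102.0)
px = lambda x_mean: e * x_mean + f   # the fitted pixel variation metric model 1
```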
206. The computer device receives an initial image and an image change parameter.
For example, an image to be beautified (i.e., the initial image) is received, together with a beautification effectiveness percentage pt (i.e., the image change parameter) set for the image to be beautified.
207. The computer device processes the initial image based on the image change parameter to obtain a target image.
For example, the image to be beautified is beautified according to the beautification effectiveness percentage, so as to obtain a beautified image (i.e., the target image).
208. The computer device performs pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image.
For example, difference analysis is performed on the pixel points at corresponding positions in the image to be beautified and the beautified image to obtain a beautification difference image (i.e., the difference image).
209. The computer device determines a target difference region and a region difference feature of the target difference region in the difference image based on a plurality of difference pixel points in the difference image, and performs fusion calculation on all target difference pixel points in the target difference region to obtain pixel difference features.
For example, according to the difference pixel points in the beautification difference image, a rectangular difference region (i.e., the target difference region) in the beautification difference image is determined, the lateral side length and the longitudinal side length of the rectangular difference region are measured, and the position coordinates of all the difference pixel points in the rectangular difference region are averaged to obtain an X average value and a Y average value.
210. The computer device inputs the region difference features into the region variation metric model to obtain region actual change data, and inputs the pixel difference features into the pixel variation metric model to obtain pixel actual change data.
For example, the lateral side length is input into region variation metric model 1 ph to obtain the lateral true variation parameter ph1, the longitudinal side length is input into region variation metric model 2 pv to obtain the longitudinal true variation parameter pv1, the X average value is input into pixel variation metric model 1 px to obtain the X true variation parameter px1, and the Y average value is input into pixel variation metric model 2 py to obtain the Y true variation parameter py1.
211. The computer device determines a detection result of the target image based on the area actual change data, the pixel actual change data, and the image change parameter.
For example, when the lateral true variation parameter ph1, the longitudinal true variation parameter pv1, the X true variation parameter px1 and the Y true variation parameter py1 satisfy the following conditions, it is determined that the change of the beautified image relative to the image to be beautified satisfies the beautification effectiveness percentage; the conditions specifically include:
|ph1-pt|≤k1
|pv1-pt|≤k2
|px1-pt|≤k3
|py1-pt|≤k4
k1, k2, k3 and k4 may be set flexibly according to requirements; they may be the same or different, and are not limited here.
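Putting the comparisons together, a possible sketch of this final decision step (assuming the four true variation parameters and the thresholds k1 to k4 have already been obtained; the returned labels mirror the normal, image area abnormality and image position abnormality results described for the apparatus below):

```python
def detect(ph1, pv1, px1, py1, pt, k1, k2, k3, k4):
    """Compare the four true variation parameters against the set image
    change parameter pt, using the tolerance thresholds k1..k4."""
    area_ok = abs(ph1 - pt) <= k1 and abs(pv1 - pt) <= k2
    position_ok = abs(px1 - pt) <= k3 and abs(py1 - pt) <= k4
    if area_ok and position_ok:
        return "normal"
    if not area_ok:
        return "image area abnormality"
    return "image position abnormality"
```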
According to this application, difference calculation can be carried out on the initial image and the target image to obtain a difference image of the two; the target difference region in the difference image and the region difference feature of the target difference region are then determined, and the detection result of the target image is obtained based on the region difference feature and the image change parameter. Because this process is implemented automatically by computer equipment, image detection efficiency can be effectively improved compared with the prior art.
In order to better implement the image detection method provided by the embodiments of the present application, an embodiment of the present application further provides an apparatus based on the image detection method. The meanings of the terms are the same as those in the image detection method above, and for implementation details reference may be made to the description in the method embodiments.
Fig. 6 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present application. As shown in fig. 6, the image detection apparatus may include an acquisition module 301, a processing module 302, a difference analysis module 303, a feature determination module 304 and a result determination module 305, where:
an acquisition module 301, configured to acquire an initial image and an image change parameter;
a processing module 302, configured to process the initial image based on the image change parameter to obtain a target image;
a difference analysis module 303, configured to perform pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image;
a feature determining module 304, configured to determine a target difference region and a region difference feature of the target difference region in the difference image based on a plurality of difference pixel points in the difference image;
and a result determining module 305, configured to determine a detection result of the target image according to the region difference feature and the image variation parameter.
In some embodiments, the image detection apparatus further comprises:
the calculation module is used for performing fusion calculation on all target difference pixel points of the target difference region to obtain pixel difference characteristics;
at this time, the result determination module is specifically configured to:
and determining the detection result of the target image based on the region difference characteristic, the pixel difference characteristic and the image change parameter.
In some embodiments, the result determination module includes an acquisition sub-module, a quantization sub-module, and a determination sub-module, wherein,
the obtaining submodule is used for obtaining an image change measurement model;
the quantization submodule is used for performing difference quantization on the region difference characteristics and the pixel difference characteristics through an image change measurement model to obtain actual change data of the target image;
and the determining submodule is used for determining the detection result of the target image based on the actual change data and the image change parameters.
In some embodiments, the determination submodule is specifically configured to:
when the difference value between the actual change data and the image change parameter is smaller than a preset threshold value, determining that the detection result of the target image is normal;
and when the difference value between the actual change data and the image change parameter is larger than the preset threshold value, determining that the detection result of the target image is an image abnormality.
In some embodiments, the image change metric model includes a region metric submodel and a pixel metric submodel, the actual change data includes region actual change data and pixel actual change data, and the quantization submodule is specifically configured to:
carrying out difference quantization on the region difference characteristics through a region measurement sub-model to obtain region actual change data of the target image;
and carrying out difference quantization on the pixel difference characteristics through the pixel measurement sub-model to obtain the actual pixel change data of the target image.
In some embodiments, the determination submodule is specifically configured to:
and when the difference value between the area actual change data and the image change parameter is smaller than a preset first threshold value and the difference value between the pixel actual change data and the image change parameter is smaller than a preset second threshold value, determining that the detection result of the target image is normal.
In some embodiments, the determination submodule is specifically configured to:
when the difference value between the actual change data of the region and the image change parameter is larger than a preset first threshold value, determining that the detection result of the target image is an image area abnormality;
and when the difference value between the actual pixel change data and the image change parameter is larger than a preset second threshold value, determining that the detection result of the target image is an image position abnormality.
In some embodiments, the initial image comprises a plurality of initial pixel points, the target image comprises a plurality of target pixel points, the difference analysis module comprises a determination submodule, a calculation submodule, and an integration submodule, wherein,
the determining submodule is used for determining a plurality of groups of pixel pairs with the same position information based on the position information of each initial pixel point in the initial image and the position information of each target pixel point in the target image, and each pixel pair comprises an initial pixel point and a target pixel point;
the calculation submodule is used for carrying out difference calculation on each group of pixel pairs according to the color information of each initial pixel point and the color information of each target pixel point to obtain the color difference information of each group of pixel pairs;
and the integration submodule is used for integrating the position information and the color difference information of each group of pixel pairs to obtain a difference image.
In some embodiments, the computation submodule is specifically configured to:
determining difference information between the color information of the initial pixel point and the color information of the target pixel point in the pixel pair;
when the difference information meets a preset condition, determining the color difference information of the pixel pair as first color difference information;
and when the difference information does not meet the preset condition, determining the color difference information of the pixel pair as second color difference information.
In some embodiments, the difference pixel includes location information and color difference information, and the characteristic determining module is specifically configured to:
when the color difference information of the difference pixel points is target difference information, determining the difference pixel points as target difference pixel points;
and determining a target difference area in the difference image and the area difference characteristics of the target difference area according to the position information of all target difference pixel points in the difference image.
In some embodiments, the image detection apparatus further comprises:
the system comprises a sample acquisition module, a data acquisition module and a data processing module, wherein the sample acquisition module is used for acquiring an initial sample image and at least two target sample images corresponding to the initial sample image, and the target sample images carry sample change parameters relative to the initial sample image;
the sample difference module is used for carrying out difference processing on the initial sample image and at least two target sample images to obtain sample difference characteristics of each target sample image;
and the model generation module is used for generating an image change measurement model based on the sample difference characteristics of each target sample image and the set sample change parameters.
In the present application, the acquisition module 301 acquires an initial image and an image change parameter; the processing module 302 processes the initial image based on the image change parameter to obtain a target image; the difference analysis module 303 performs pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image; the feature determination module 304 determines a target difference region and a region difference feature of the target difference region in the difference image based on a plurality of difference pixel points in the difference image; the result determination module 305 determines a detection result of the target image based on the region difference feature and the image change parameter.
According to this application, difference calculation can be carried out on the initial image and the target image to obtain a difference image of the two; the target difference region in the difference image and the region difference feature of the target difference region are then determined, and the detection result of the target image is obtained based on the region difference feature and the image change parameter. Because this process is implemented automatically by computer equipment, image detection efficiency can be effectively improved compared with the prior art.
In addition, an embodiment of the present application further provides a computer device, where the computer device may be a terminal or a server, as shown in fig. 7, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, and specifically:
the computer device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 7 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the computer device as a whole. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user pages, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the computer device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The computer device further comprises a power supply 403 for supplying power to the various components, and preferably, the power supply 403 is logically connected to the processor 401 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The computer device may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
acquiring an initial image and an image change parameter; processing the initial image based on the image change parameters to obtain a target image; performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image; determining a target difference area and area difference characteristics of the target difference area in the difference image based on a plurality of difference pixel points in the difference image; and determining the detection result of the target image according to the region difference characteristics and the image change parameters.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by a computer program, which may be stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the computer program.
To this end, the present application further provides a storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the image detection methods provided in the present application. For example, the computer program may perform the steps of:
acquiring an initial image and an image change parameter; processing the initial image based on the image change parameters to obtain a target image; performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image; determining a target difference area and area difference characteristics of the target difference area in the difference image based on a plurality of difference pixel points in the difference image; and determining the detection result of the target image according to the region difference characteristics and the image change parameters.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any image detection method provided in the embodiments of the present application, the beneficial effects that can be achieved by any image detection method provided in the embodiments of the present application can be achieved, and detailed descriptions are omitted here for the foregoing embodiments.
The image detection method, the image detection device, the storage medium, and the computer apparatus provided in the embodiments of the present application are described in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (14)

1. An image detection method, comprising:
acquiring an initial image and an image change parameter;
processing the initial image based on the image change parameters to obtain a target image;
performing pixel difference analysis on corresponding positions in the initial image and the target image to obtain a difference image;
determining a target difference region and a region difference characteristic of the target difference region in the difference image based on a plurality of difference pixel points in the difference image;
and determining the detection result of the target image according to the region difference characteristic and the image change parameter.
2. The method of claim 1, further comprising:
performing fusion calculation on all target difference pixel points of the target difference area to obtain pixel difference characteristics;
determining a detection result of the target image according to the region difference characteristic and the image change parameter, wherein the determination result comprises:
determining a detection result of the target image based on the region difference feature, the pixel difference feature, and the image variation parameter.
3. The method of claim 2, wherein the determining a detection result of the target image based on the region difference feature, the pixel difference feature, and the image variation parameter comprises:
acquiring an image change measurement model;
performing difference quantization on the region difference characteristics and the pixel difference characteristics through the image change measurement model to obtain actual change data of the target image;
and determining the detection result of the target image based on the actual change data and the image change parameter.
4. The method of claim 3, wherein determining the detection result of the target image based on the actual change data and the image change parameter comprises:
when the difference value between the actual change data and the image change parameter is smaller than a preset threshold value, determining that the detection result of the target image is normal;
and when the difference value between the actual change data and the image change parameter is larger than a preset threshold value, determining that the detection result of the target image is abnormal.
5. The method of claim 3, wherein the image change metric model comprises a region metric submodel and a pixel metric submodel, the actual change data comprises region actual change data and pixel actual change data,
the obtaining actual change data of the target image by performing difference quantization on the region difference feature and the pixel difference feature through the image change metric model includes:
performing difference quantization on the region difference characteristics through the region measurement sub-model to obtain region actual change data of the target image;
and carrying out difference quantization on the pixel difference characteristics through the pixel measurement sub-model to obtain the actual pixel change data of the target image.
6. The method of claim 5, wherein determining the detection result of the target image based on the actual change data and the image change parameter comprises:
and when the difference value between the area actual change data and the image change parameter is smaller than a preset first threshold value and the difference value between the pixel actual change data and the image change parameter is smaller than a preset second threshold value, determining that the detection result of the target image is normal.
7. The method of claim 5, wherein determining the detection result of the target image based on the actual change data and the image change parameter comprises:
when the difference value between the actual change data of the region and the image change parameter is larger than a preset first threshold value, determining that the detection result of the target image is an image area abnormality;
and when the difference value between the actual pixel change data and the image change parameter is larger than a preset second threshold value, determining that the detection result of the target image is an image position abnormality.
8. The method of claim 1, wherein the initial image comprises a plurality of initial pixel points, wherein the target image comprises a plurality of target pixel points,
the pixel difference analysis of the corresponding positions in the initial image and the target image to obtain a difference image includes:
determining a plurality of groups of pixel pairs with the same position information based on the position information of each initial pixel point in the initial image and the position information of each target pixel point in the target image, wherein the pixel pairs comprise the initial pixel points and the target pixel points;
according to the color information of each initial pixel point and the color information of each target pixel point, performing difference calculation on each group of pixel pairs to obtain the color difference information of each group of pixel pairs;
and integrating the position information and the color difference information of each group of pixel pairs to obtain a difference image.
9. The method of claim 8, wherein performing a difference calculation on each group of pixel pairs according to the color information of each initial pixel point and the color information of each target pixel point to obtain the color difference information of each group of pixel pairs comprises:
determining difference information between the color information of the initial pixel point and the color information of the target pixel point in the pixel pair;
when the difference information meets a preset condition, determining the color difference information of the pixel pair as first color difference information;
and when the difference information does not meet the preset condition, determining the color difference information of the pixel pair as second color difference information.
10. The method of claim 1, wherein the disparity pixel includes location information and color disparity information,
the determining, based on a plurality of difference pixel points in the difference image, a target difference region and a region difference feature of the target difference region in the difference image includes:
when the color difference information of the difference pixel points is target difference information, determining the difference pixel points as target difference pixel points;
and determining a target difference area in the difference image and the area difference characteristics of the target difference area according to the position information of all target difference pixel points in the difference image.
11. The method of claim 3, further comprising:
acquiring an initial sample image and at least two target sample images corresponding to the initial sample image, wherein the target sample images carry sample variation parameters relative to the initial sample image;
performing difference processing on the initial sample image and the at least two target sample images to obtain a sample difference characteristic of each target sample image;
and generating an image change measurement model based on the sample difference characteristics of each target sample image and the set sample change parameters.
12. An image detection apparatus, characterized by comprising:
the acquisition module is used for acquiring an initial image and image change parameters;
the processing module is used for processing the initial image based on the image change parameters to obtain a target image;
the difference analysis module is used for carrying out pixel difference analysis on corresponding positions in the initial image and the target image so as to obtain a difference image;
the characteristic determining module is used for determining a target difference area and an area difference characteristic of the target difference area in the difference image based on a plurality of difference pixel points in the difference image;
and the result determining module is used for determining the detection result of the target image according to the region difference characteristic and the image change parameter.
13. A storage medium, characterized in that it stores a plurality of computer programs adapted to be loaded by a processor for performing the steps of the method according to any one of claims 1 to 11.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method according to any of claims 1 to 11 are implemented when the computer program is executed by the processor.
CN202110260048.6A 2021-03-10 2021-03-10 Image detection method and device, storage medium and computer equipment Pending CN113706439A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110260048.6A CN113706439A (en) 2021-03-10 2021-03-10 Image detection method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110260048.6A CN113706439A (en) 2021-03-10 2021-03-10 Image detection method and device, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN113706439A true CN113706439A (en) 2021-11-26

Family

ID=78647790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110260048.6A Pending CN113706439A (en) 2021-03-10 2021-03-10 Image detection method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113706439A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842485A (en) * 2022-04-26 2022-08-02 北京百度网讯科技有限公司 Subtitle removing method and device and electronic equipment


Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
CN112733794B (en) Method, device and equipment for correcting sight of face image and storage medium
CN107679466B (en) Information output method and device
EP3674852A2 (en) Method and apparatus with gaze estimation
CN111768336B (en) Face image processing method and device, computer equipment and storage medium
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
WO2014187223A1 (en) Method and apparatus for identifying facial features
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN108694719B (en) Image output method and device
CN109711268B (en) Face image screening method and device
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN111383232B (en) Matting method, matting device, terminal equipment and computer readable storage medium
CN112329851A (en) Icon detection method and device and computer readable storage medium
CN112561879B (en) Ambiguity evaluation model training method, image ambiguity evaluation method and image ambiguity evaluation device
CN114374760A (en) Image testing method and device, computer equipment and computer readable storage medium
CN110807379A (en) Semantic recognition method and device and computer storage medium
CN113808277A (en) Image processing method and related device
CN114330565A (en) Face recognition method and device
CN111784658A (en) Quality analysis method and system for face image
CN114627244A (en) Three-dimensional reconstruction method and device, electronic equipment and computer readable medium
CN111126250A (en) Pedestrian re-identification method and device based on PTGAN
CN113706439A (en) Image detection method and device, storage medium and computer equipment
Dutta et al. Weighted low rank approximation for background estimation problems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination