CN111476780B - Image detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111476780B
Authority
CN
China
Prior art keywords: image, target image, points, point, detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010266809.4A
Other languages
Chinese (zh)
Other versions
CN111476780A (en)
Inventor
王君乐
许家誉
赵菁
张力柯
荆彦青
艾长青
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010266809.4A priority Critical patent/CN111476780B/en
Publication of CN111476780A publication Critical patent/CN111476780A/en
Application granted granted Critical
Publication of CN111476780B publication Critical patent/CN111476780B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/001: Industrial image inspection using an image reference approach
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Abstract

The embodiment of the invention discloses an image detection method and device, electronic equipment and a storage medium. The image detection method comprises the following steps: acquiring an image pair to be detected, wherein the image pair to be detected comprises a target image and a reference image; extracting a plurality of feature points for representing local information in the image from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points; selecting second feature points for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain feature points to be detected; constructing a matching region matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image; and generating a detection result of the target image according to the proportion of the matching region in the reference image. Therefore, the accuracy of image detection can be improved.

Description

Image detection method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to an image detection method, an image detection device, electronic equipment and a storage medium.
Background
With the rapid development of image processing technology, the determination of image similarity remains an open problem: even for two identical images, the similarity judged at different levels may differ greatly. There are currently many methods for detecting image similarity, such as pixel-level detection methods and neural-network-based detection methods, but these methods require that the resolutions and aspect ratios of the two input images be consistent; when the resolutions and/or aspect ratios of the two images differ, the detection results are poor.
Disclosure of Invention
The embodiment of the invention provides an image detection method, electronic equipment and a storage medium, which can improve the accuracy of image detection.
The embodiment of the invention provides an image detection method, which comprises the following steps:
acquiring a pair of images to be detected, wherein the pair of images to be detected comprises a target image and a reference image;
extracting a plurality of feature points used for representing local information in the image from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points;
selecting second feature points for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain feature points to be detected;
constructing a matching region matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image;
and generating a detection result of the target image according to the ratio of the matching area in the reference image.
Correspondingly, an embodiment of the present invention further provides an image detection apparatus, including:
the acquisition module is used for acquiring an image pair to be detected, wherein the image pair to be detected comprises a target image and a reference image;
the extraction module is used for extracting a plurality of feature points used for representing local information in the image from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points;
the selection module is used for selecting second feature points for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain feature points to be detected;
the construction module is used for constructing a matching region matched with the target image in the reference image based on the selected feature point to be detected and the image information of the reference image;
and the generating module is used for generating a detection result of the target image according to the ratio of the matching area in the reference image.
Optionally, in some embodiments of the present invention, the building module includes:
the intercepting submodule is used for intercepting images in a preset range of each feature point to be detected in the reference image according to the image information of the reference image to obtain a first matching image block;
the construction sub-module is used for constructing an image block matched with each pixel point in the target image according to all pixel points of the target image and pixel points outside the preset range of each feature point to be detected, so as to obtain a second matched image block;
and the generation sub-module is used for generating a matching area matched with the target image based on the first matching image block and the second matching image block.
Optionally, in some embodiments of the invention, the building submodule includes:
the acquiring unit is used for acquiring color values of all pixel points in the target image to obtain a plurality of first color values, and acquiring color values of pixel points outside a preset range of each feature point to be detected to obtain a plurality of second color values;
and the construction unit is used for constructing the image blocks matched with the pixel points in the target image based on the first color values and the second color values to obtain second matched image blocks.
Optionally, in some embodiments of the present invention, the building unit includes:
the calculating subunit is used for calculating the similarity between each first color value and each second color value;
the determining subunit is used for determining second pixel points corresponding to second color values with the similarity greater than a preset threshold as matching pixel points;
and the construction subunit is used for constructing an image block matched with each pixel point in the target image based on the matched pixel points to obtain a second matched image block.
Optionally, in some embodiments of the present invention, the calculating subunit is specifically configured to:
mapping each second pixel point into the target image according to the image difference information between the target image and the reference image to obtain mapped pixel points;
calculating the similarity of each mapping pixel point and the corresponding first pixel point on each color channel;
and generating the similarity between each mapping pixel point and the corresponding first pixel point based on the similarity of each mapping pixel point and the corresponding first pixel point on each color channel.
Optionally, in some embodiments of the present invention, the selecting module includes:
the first calculation unit is used for calculating Euclidean distances between the first characteristic points and the second characteristic points;
the selection unit is used for selecting at least one reference characteristic point corresponding to each first characteristic point from the plurality of second characteristic points according to the calculation result to obtain a reference characteristic point set corresponding to each first characteristic point;
a second calculation unit, configured to calculate a position error between each first feature point and a reference feature point in a corresponding reference feature point set based on image difference information between the target image and a reference image;
and the determining unit is used for determining the reference characteristic points with the position errors meeting the preset conditions as the characteristic points to be detected matched with the first characteristic points.
Optionally, in some embodiments of the present invention, the second calculating unit is specifically configured to:
mapping all the first feature points into a reference image based on image difference information between the target image and the reference image to obtain a plurality of mapping feature points;
calculating the position offset between each mapping characteristic point and the corresponding reference characteristic point to obtain the position error between each first characteristic point and the reference characteristic point in the corresponding reference characteristic point set;
the determining unit is specifically configured to: and determining the reference characteristic points with the position offset smaller than or equal to the preset offset as the characteristic points to be detected matched with the first characteristic points.
Optionally, in some embodiments of the present invention, the apparatus further includes a display module, where the display module is specifically configured to:
removing the image corresponding to the matching area from the target image to obtain an image reserved area;
and displaying the image reserved area based on a preset strategy.
In the embodiment of the invention, after an image pair to be detected is acquired (the image pair comprising a target image and a reference image), a plurality of feature points used for representing local information in the images are extracted from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points. Second feature points for image detection are then selected from the plurality of second feature points according to the image difference information between the target image and the reference image to obtain feature points to be detected, a matching region matched with the target image is constructed in the reference image based on the selected feature points to be detected and the image information of the reference image, and finally a detection result of the target image is generated according to the proportion of the matching region in the reference image. Therefore, the accuracy of image detection can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1a is a scene schematic diagram of an image detection method according to an embodiment of the present invention;
FIG. 1b is a schematic flowchart of an image detection method according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of an image retention area in an image detection method according to an embodiment of the present invention;
FIG. 1d is another schematic diagram of an image retention area in the image detection method according to the embodiment of the invention;
FIG. 2a is a schematic flow chart of an image detection method according to an embodiment of the present invention;
fig. 2b is a schematic diagram of a first feature point and a second feature point in an image detection method according to an embodiment of the present invention;
FIG. 3a is a schematic structural diagram of an image detection apparatus according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of another structure of an image detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The embodiment of the invention provides an image detection method and device, electronic equipment and a storage medium.
The image detection device may be specifically integrated in a network device, such as a terminal or a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like.
Referring to fig. 1a, an embodiment of the present invention provides an image detection apparatus (hereinafter referred to as the detection apparatus for short) integrated in a computer. In an automated testing scenario, for example, the computer receives images to be detected (i.e., target images) uploaded by a plurality of different devices, where the reference image and the images to be detected are both images of the same application scenario, such as images of the start interface of a game application. It should be noted that, before the test is performed, operation and maintenance personnel or the computer have already checked the image content of the reference image, and the reference image is an image meeting a preset standard.
Taking an image to be detected uploaded by one device as an example: the computer may extract a plurality of feature points for characterizing local information in the image from the target image to obtain a plurality of first feature points, and extract a plurality of feature points for characterizing local information in the image from the reference image to obtain a plurality of second feature points. It then selects second feature points for image detection from the plurality of second feature points according to the image difference information between the target image and the reference image to obtain feature points to be detected, constructs a matching region matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image, and finally generates a detection result of the target image according to the proportion of the matching region in the reference image.
Compared with existing image detection schemes, this scheme determines the feature points to be detected for image detection according to the image difference information between the target image and the reference image, and constructs a matching region matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image to generate the detection result of the target image. That is, when image detection is performed, the image difference information between the target image and the reference image is taken into consideration, which avoids inaccurate detection results caused by the target image and the reference image differing in resolution and/or aspect ratio. The accuracy of image detection can therefore be improved.
The following are detailed below. It should be noted that the description sequence of the following embodiments is not intended to limit the priority sequence of the embodiments.
An image detection method, comprising: the method comprises the steps of collecting an image pair to be detected, extracting a plurality of feature points used for representing local information in the image from a target image and a reference image respectively to obtain a plurality of first feature points and a plurality of second feature points, selecting second feature points used for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain the feature points to be detected, constructing a matching area matched with the target image in the reference image based on the selected feature points to be detected and image information of the reference image, and generating a detection result of the target image according to the proportion of the matching area in the reference image.
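As a minimal sketch of the final step of the method above, the detection result derived from the proportion of the matching region in the reference image could look as follows. The pass/fail threshold of 0.9 is a hypothetical value chosen for illustration; the embodiment does not specify one.

```python
def detection_result(matching_area, reference_area, threshold=0.9):
    # Proportion of the reference image covered by the matching region.
    ratio = matching_area / reference_area
    # The images are considered matching when the proportion reaches the
    # (hypothetical) threshold.
    return {"ratio": ratio, "match": ratio >= threshold}
```

For example, a matching region covering 95 of 100 area units would be reported as a match, while one covering only half the reference image would not.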
Referring to fig. 1b, fig. 1b is a schematic flow chart of an image detection method according to an embodiment of the invention. The specific flow of the image detection method can be as follows:
101. and acquiring an image pair to be detected.
The image pair to be detected comprises a target image and a reference image, where the reference image and the image to be detected are both images of the same application scenario, such as images of the start interface of a game application. The reference image and the target image may be pre-stored locally, pulled by accessing a network interface, or obtained by real-time shooting with a camera, as determined by the actual situation.
For example, in an automated testing task, the target image may be pre-stored locally, and the reference image may be obtained by pulling through an access network interface; in the picture duplication elimination task of the mobile phone album, the reference image and the target image can be obtained by real-time shooting through a camera.
102. A plurality of characteristic points used for representing local information in the image are extracted from the target image and the reference image respectively, and a plurality of first characteristic points and a plurality of second characteristic points are obtained.
Image feature extraction is a precondition for image analysis and image recognition, and is the most effective way to simplify and express high-dimensional image data. For the image data of a single image, a computer cannot directly acquire the key information of the image, so to facilitate subsequent image detection, a plurality of feature points for representing local information in the image may be extracted from the target image and the reference image respectively. Specifically, a plurality of first feature points are obtained from the target image and used for representing local information in the target image, and a plurality of second feature points are obtained from the reference image and used for representing local information in the reference image.
It should be noted that a local feature point is a local expression of an image feature and can only reflect local characteristics of the image, so it is well suited to the task of image detection. The task of image understanding, in contrast, focuses more on the global features of the image, such as color distribution, texture features, and the shape of the main object. In addition, global features are susceptible to environmental disturbances: illumination, rotation, and noise all affect them. Local feature points, by contrast, tend to correspond to structures with line intersections and/or shading in the image, and are therefore less affected by the environment.
Optionally, in some embodiments, a Scale-Invariant Feature Transform (SIFT) algorithm may be used to extract a plurality of feature points for characterizing local information in an image from the target image and the reference image. The SIFT algorithm is a computer vision algorithm for detecting and describing local features in an image; it finds extreme points in scale space and extracts their positions, scales, and rotation invariants. The SIFT algorithm has the following advantages:
(a) Stability and invariance: it adapts to rotation, scale scaling, and brightness changes, and is to a certain extent robust to viewing angle changes, affine transformation, and noise;
(b) Distinctiveness: feature information can be matched quickly and accurately in a massive feature database;
(c) Abundance: a large number of feature vectors can be generated even if only a single object is present;
(d) Speed: feature vector matching can be performed quickly;
(e) Scalability: it can be combined with other forms of feature vectors.
Specifically, a Gaussian function may first be used to blur and downsample the image, and Gaussian convolution is used to construct an image pyramid (a Gaussian difference pyramid). For a pixel point in the image, the point needs to be compared with its 8 surrounding neighbors in the same layer and with the 9 adjacent points in each of the upper and lower layers of the Gaussian difference pyramid (18 points in total) to determine whether it is a key point. Then, a direction parameter may be assigned to each key point, the neighborhood of each key point is obtained, and the gradient magnitude and direction in the neighborhood are calculated to obtain feature points for representing local information of the image.
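The 26-neighbor comparison described above (8 same-layer neighbors plus 9 points in each adjacent layer of the difference-of-Gaussians pyramid) can be sketched as follows. This is an illustrative check for interior pixels of a pre-built pyramid only, not a full SIFT implementation; the function name and array layout are assumptions.

```python
import numpy as np

def is_dog_extremum(dog, layer, y, x):
    # dog: difference-of-Gaussians pyramid as an array of shape
    # (layers, height, width). (y, x) must be an interior pixel of an
    # interior layer so the full 3x3x3 neighbourhood exists.
    cube = dog[layer - 1:layer + 2, y - 1:y + 2, x - 1:x + 2]
    v = dog[layer, y, x]
    # A key-point candidate is a maximum or minimum over all 26
    # neighbours (the cube includes the centre value itself).
    return bool(v == cube.max() or v == cube.min())
```

In a real SIFT pipeline this test would be followed by sub-pixel refinement and edge/contrast rejection before a point is kept as a key point.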
Of course, the Speeded-Up Robust Features (SURF) algorithm may also be used to extract a plurality of feature points for characterizing local information in an image from the target image and the reference image. The SURF algorithm is an improvement on the SIFT algorithm, and the main difference lies in the execution efficiency of the algorithm.
103. And selecting a second feature point for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain a feature point to be detected.
Because the target image and the reference image may have inconsistent resolutions and/or aspect ratios, directly mapping feature points can produce errors. For example, suppose the local image information expressed by a first feature point a and a second feature point b is the same, that is, the first feature point a matches the second feature point b. When the resolutions of the target image and the reference image are inconsistent and the first feature point a is directly mapped into the reference image, the mapped feature point may not land on the second feature point b, so the first feature point a would wrongly be judged as having no matching feature point in the reference image, leading to an inaccurate result in subsequent image recognition. For another example, suppose the aspect ratio of the target image is 100:60 and the aspect ratio of the reference image is 50:20, that is, the aspect ratios of the two images are inconsistent, and the coordinate of the first feature point a in the target image is (3, 50). When the first feature point a is directly mapped into the reference image, because the maximum width value of the reference image is 20, no feature point corresponding to the first feature point a exists in the reference image under this approach.
Therefore, a second feature point for image detection needs to be selected from the plurality of second feature points according to the image difference information between the target image and the reference image. Specifically, the Euclidean distance between each first feature point and each second feature point may be calculated, and then second feature points for image detection may be selected from the plurality of second feature points based on the calculation result and the image difference information between the target image and the reference image. That is, optionally, in some embodiments, the step of "selecting a second feature point for image detection from the plurality of second feature points according to the image difference information between the target image and the reference image to obtain a feature point to be detected" may specifically include:
(11) Calculating Euclidean distances between the first characteristic points and the second characteristic points;
(12) Selecting at least one reference characteristic point corresponding to each first characteristic point from the plurality of second characteristic points according to the calculation result to obtain a reference characteristic point set corresponding to each first characteristic point;
(13) Calculating a position error between each first characteristic point and a reference characteristic point in a corresponding reference characteristic point set based on image difference information between the target image and the reference image;
(14) And determining the reference characteristic points with the position errors meeting the preset conditions as the characteristic points to be detected for image detection.
Wherein, in Euclidean space, the Euclidean distance between two points x = (x₁, …, xₙ) and y = (y₁, …, yₙ) can be computed by the following equation:

d(x, y) = √((x₁ - y₁)² + (x₂ - y₂)² + … + (xₙ - yₙ)²)
After the Euclidean distance between each first feature point and each second feature point is calculated, at least one reference feature point corresponding to each first feature point may be selected from the plurality of second feature points according to the calculation result, so as to obtain a reference feature point set corresponding to each first feature point. For convenience of description, taking a first feature point q as an example: after the Euclidean distance between the first feature point q and each second feature point is calculated, the second feature points whose Euclidean distance is smaller than or equal to a preset distance may be selected as the reference feature points corresponding to the first feature point q. For example, if the preset distance is 3 and there are 5 second feature points whose Euclidean distance to the first feature point q is smaller than or equal to 3, these 5 second feature points may be determined as the reference feature points corresponding to the first feature point q. Alternatively, the second feature point with the minimum Euclidean distance to the first feature point q may be selected as its reference feature point; the selection is made according to the actual situation, and is not described in detail herein.
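The Euclidean-distance computation and the preset-distance selection described above can be sketched as follows. The helper names are illustrative; the preset distance of 3 mirrors the example in the text.

```python
import math

def euclidean(p, q):
    # Euclidean distance between two coordinate (or descriptor) vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def reference_points(first_pt, second_pts, max_dist=3.0):
    # For one first feature point, keep the second feature points whose
    # Euclidean distance is at most the preset distance; these form the
    # reference feature point set for that first feature point.
    return [q for q in second_pts if euclidean(first_pt, q) <= max_dist]
```

With a preset distance of 3, a second feature point at (10, 10) would be excluded from the reference set of a first feature point at (0, 0), while points at (1, 1) and (2, 0) would be kept.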
After the reference feature point set is obtained, all the first feature points may be mapped into the reference image based on the image difference information between the target image and the reference image, and then the position offset between each mapped feature point and the corresponding reference feature point is calculated to obtain the position error between each first feature point and the reference feature points in its corresponding reference feature point set. That is, optionally, in some embodiments, the step of "calculating a position error between each first feature point and a reference feature point in a corresponding reference feature point set based on image difference information between the target image and the reference image" may specifically include:
(21) Mapping all the first feature points to a reference image based on image difference information between the target image and the reference image to obtain a plurality of mapping feature points;
(22) And calculating the position offset between each mapping characteristic point and the corresponding reference characteristic point to obtain the position error between each first characteristic point and the reference characteristic point in the corresponding reference characteristic point set.
The relative coordinates are calculated by normalizing the absolute coordinate position according to the length and width of the image, and assuming that the absolute position coordinates of the first feature point s in the target image are (Xs, ys), the width of the target image is W, and the height is H, the relative position coordinates of the first feature point s are (Xr, yr), where Xr = Xs/W and Yr = Ys/H.
Then, the first feature point s is mapped into the reference image according to its relative position coordinates to obtain a mapped feature point s', whose coordinates in the reference image are the relative position coordinates of the first feature point s. Assuming that the relative coordinates of the reference feature point e corresponding to the first feature point s are (Xe, Ye), the position offset between the mapped feature point s' and the reference feature point e can be expressed as (Xe-Xr, Ye-Yr).
Since the target image and the reference image may differ in resolution and/or aspect ratio, the reference feature points whose position offset is less than or equal to a preset offset may be determined as the feature points to be detected for image detection; for example, the preset offset may be 0.1 or 0.05. If the position offset between a mapped feature point and the corresponding reference feature point is larger than the preset offset, that reference feature point is not determined as a feature point to be detected for image detection.
It should be noted that the image resolution is the number of pixels per unit inch. For two images with different resolutions, since the number of pixels per unit inch differs, the image area represented by a feature point also differs. In this embodiment, the position offset between a mapped feature point and the corresponding reference feature point is therefore allowed to be within a specified threshold; if the position offset is greater than the preset offset, the error between the mapped feature point and the corresponding reference feature point is too large for subsequent image detection, and so the reference feature point whose position offset is greater than the preset offset is not determined as a feature point to be detected for image detection.
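Assuming feature point positions are plain (x, y) tuples, the relative-coordinate normalization, mapping, and offset filtering of steps (21)-(22) could be sketched roughly as follows; the function names and the per-axis offset check are illustrative choices, not the patent's actual implementation:

```python
def to_relative(x, y, w, h):
    """Normalize absolute coordinates by the image width and height."""
    return x / w, y / h

def select_points_to_detect(first_pts, refs, target_size, preset_offset=0.1):
    """Keep the reference feature points whose position offset from the
    mapped first feature point stays within the preset offset.

    first_pts   : (x, y) absolute coordinates in the target image
    refs        : (Xe, Ye) relative coordinates of the matched reference
                  feature point for each first feature point
    target_size : (W, H) of the target image
    """
    w, h = target_size
    kept = []
    for (x, y), (xe, ye) in zip(first_pts, refs):
        xr, yr = to_relative(x, y, w, h)
        # position offset (Xe-Xr, Ye-Yr) between mapped and reference point
        if abs(xe - xr) <= preset_offset and abs(ye - yr) <= preset_offset:
            kept.append((xe, ye))
    return kept
```

The default preset offset of 0.1 matches the example value given above.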
104. And constructing a matching region matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image.
Since a feature point itself is a description of a small local area of the image, whether a region can be matched needs to be determined based on the feature points. In this embodiment, the pixels around each feature point are divided into two types according to their distance from the feature point, and different matching strategies are adopted for the two types.
For pixels within a preset range around each feature point, if the feature point matching is successful, which means that the small region has a highly similar pattern, it may be determined that the pixels in the region are also successfully matched, that is, optionally, in some embodiments, the step "constructing a matching region matching the target image in the reference image based on the selected image information of the feature point to be detected and the reference image" may specifically include:
(31) Intercepting images in a preset range of each feature point to be detected in the reference image according to the image information of the reference image to obtain a first matching image block;
(32) Constructing an image block matched with each pixel point in the target image according to all pixel points of the target image and pixel points outside the preset range of each feature point to be detected, and obtaining a second matched image block;
(33) And generating a matching area matched with the target image based on the first matching image block and the second matching image block.
In this scheme, the feature points represent local information of the image, and if the feature points can be matched, the small area has a highly similar pattern. Therefore, when a first feature point in the target image has a corresponding feature point to be detected in the reference image, the image within the preset range of the feature point to be detected is considered to match the image within the preset range of the first feature point.
Since the feature point itself is a description of a small image local area, for the pixel points outside the preset range of the feature point, further matching determination needs to be performed according to color values thereof, that is, optionally, in some embodiments, the step "construct an image block matched with each pixel point in the target image according to all pixel points of the target image and the pixel points outside the preset range of each feature point to be detected, to obtain a second matched image block" may specifically include:
(41) Obtaining color values of all pixel points in a target image to obtain a plurality of first color values, and obtaining color values of pixel points outside a preset range of each feature point to be detected to obtain a plurality of second color values;
(42) And constructing an image block matched with each pixel point in the target image based on each first color value and each second color value to obtain a second matched image block.
For example, specifically, the similarity between each first color value and each second color value may be calculated, the second pixel points whose similarity is greater than a preset value may be determined as matching pixel points, and an image block matched with each pixel point in the target image may then be constructed based on the matching pixel points to obtain a second matching image block. That is, optionally, in some embodiments, the step "constructing an image block matched with each pixel point in the target image based on each first color value and each second color value to obtain a second matched image block" may specifically include:
(51) Calculating the similarity between each first color value and each second color value;
(52) Determining second pixel points corresponding to second color values with the similarity larger than a preset threshold as matching pixel points;
(53) And constructing an image block matched with each pixel point in the target image based on the matched pixel points to obtain a second matched image block.
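A minimal sketch of steps (51)-(53), assuming RGB color tuples and a normalized-Euclidean-distance similarity — one possible measure, since the patent does not fix a specific formula:

```python
def color_similarity(c1, c2):
    """Similarity in [0, 1] between two RGB color values, based on the
    normalized Euclidean distance between them (an illustrative stand-in
    for whatever similarity measure is actually used)."""
    d = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    max_d = (3 * 255 ** 2) ** 0.5
    return 1.0 - d / max_d

def matching_pixels(first_colors, second_colors, threshold=0.9):
    """Steps (51)-(53): keep the indices of the second pixel points whose
    color value is similar enough to the corresponding first color value."""
    return [i for i, (c1, c2) in enumerate(zip(first_colors, second_colors))
            if color_similarity(c1, c2) > threshold]
```

The matching indices would then delimit the second matched image block.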
Color values are represented differently in different color spaces. For example, in the RGB color space, various colors are obtained by varying and superimposing the three color channels red (R), green (G), and blue (B). As another example, the HSV (and HSI) color space describes colors in terms of hue (Hue), saturation (Saturation or Chroma), and brightness (Value, Intensity, or Brightness), following the human visual system. The HSV color space model corresponds to a cone-shaped subset of the cylindrical RGB coordinate system; the top surface of the cone corresponds to V = 1 and contains the three surfaces R = 1, G = 1, and B = 1 of the RGB model. That is, the HSV color space and the RGB color space are convertible to each other.
In the RGB color space, a change in a single channel usually causes a great change in the finally fused color, whereas if the three channels change simultaneously, only the brightness changes and the hue does not change greatly. Therefore, in this embodiment, a color space conversion method may be adopted to calculate the similarity between each first color value and each second color value: each first color value and each second color value may be converted to another color space, and the similarity may then be calculated from the converted color values. The weighting of the similarity may be adjusted according to the actual situation. For example, a game application test is mainly concerned with the similarity in hue between the target image and the reference image, so the hue similarity may be weighted more heavily; an actually shot photograph is concerned with chromaticity (the general term for hue and saturation), so the chromaticity similarity may be weighted more heavily. The weights are specifically set according to the actual situation.
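A hedged sketch of such a color-space conversion using Python's standard `colorsys` module; the weighted HSV similarity below and its default weights are illustrative assumptions (a game test might raise the hue weight, a photograph test the hue and saturation weights together):

```python
import colorsys

def weighted_hsv_similarity(rgb1, rgb2, w_h=0.6, w_s=0.2, w_v=0.2):
    """Convert two RGB values to HSV and compute a weighted similarity.

    The weights w_h, w_s, w_v are assumptions for illustration; they can
    be tuned per scenario as discussed in the text.
    """
    h1, s1, v1 = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb1))
    h2, s2, v2 = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb2))
    # hue is circular: the distance between 0.95 and 0.05 is 0.1, not 0.9
    dh = min(abs(h1 - h2), 1.0 - abs(h1 - h2))
    return 1.0 - (w_h * 2 * dh + w_s * abs(s1 - s2) + w_v * abs(v1 - v2))
```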
It should be noted that, because the resolution and/or aspect ratio of the target image and the reference image may be inconsistent, directly mapping a pixel point of the target image onto the reference image may place the mapped pixel point outside the reference image. Therefore, the similarity between each first color value and each second color value needs to be calculated according to the image difference information between the target image and the reference image. That is, optionally, in some embodiments, the step "calculating the similarity between each first color value and each second color value" may specifically include:
(61) Mapping each second pixel point into the target image according to image difference information between the target image and the reference image to obtain mapped pixel points;
(62) Calculating the similarity of each mapping pixel point and the corresponding first pixel point on each color channel;
(63) And generating the similarity between each mapping pixel point and the corresponding first pixel point based on the similarity of each mapping pixel point and the corresponding first pixel point on each color channel.
For example, specifically, the coordinates of each second pixel point in the target image may be calculated according to the image difference information between the target image and the reference image; the specific method may refer to the foregoing calculation of relative coordinates, which is not described herein again. Then, the similarity of each mapped pixel point and the corresponding first pixel point on each color channel is calculated, and finally the similarity between each mapped pixel point and the corresponding first pixel point is generated from the per-channel similarities.
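A rough sketch of steps (61)-(63) under the same plain-tuple assumptions: a reference-image pixel is mapped into target-image coordinates via relative coordinates, and the per-channel similarities are combined by simple averaging (the combination rule is an assumption; the patent leaves it open):

```python
def map_to_target(x_ref, y_ref, ref_size, target_size):
    """Map a pixel of the reference image to target-image coordinates via
    relative coordinates, compensating for resolution/aspect differences."""
    wr, hr = ref_size
    wt, ht = target_size
    return int(x_ref / wr * wt), int(y_ref / hr * ht)

def per_channel_similarity(c1, c2):
    """Similarity on each color channel, combined here by averaging."""
    per_channel = [1.0 - abs(a - b) / 255.0 for a, b in zip(c1, c2)]
    return sum(per_channel) / len(per_channel)
```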
105. And generating a detection result of the target image according to the ratio of the matching area in the reference image.
For example, specifically, when the ratio of the matching area in the reference image is greater than a preset ratio, it is determined that the target image matches the reference image; when the ratio is less than or equal to the preset ratio, it is determined that the target image does not match the reference image. The preset ratio may be set according to the actual situation, for example, 80%. In addition, the unmatched area may be displayed according to a preset policy. That is, optionally, in some embodiments, the step "generating the detection result of the target image according to the ratio of the matching area in the reference image" may further include:
(71) Removing the image corresponding to the matching area from the target image to obtain an image reserved area;
(72) And displaying the image reserved area based on a preset strategy.
For example, the image reserved area is displayed in a highlighted color and the transparency of the matching area S is adjusted to 0%, as shown in fig. 1c; for another example, the image reserved area U is labeled with the content "unmatched area" and the labeled image reserved area U is then displayed, as shown in fig. 1d.
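As a toy illustration of steps (71)-(72), with images represented as nested lists of RGB tuples and `None` standing in for whatever display strategy (transparency, highlighting, labeling) is actually used:

```python
def retained_region(target_pixels, matched_mask):
    """Step (71): remove the matched pixels from the target image so the
    unmatched (reserved) area remains; step (72) would then render it
    under the chosen display policy. Here matched pixels are replaced
    with None purely for illustration."""
    return [[None if matched_mask[i][j] else px
             for j, px in enumerate(row)]
            for i, row in enumerate(target_pixels)]
```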
After an image pair to be detected is collected, where the image pair includes a target image and a reference image, a plurality of feature points used for representing local information in the images are extracted from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points. Then, according to image difference information between the target image and the reference image, second feature points used for image detection are selected from the plurality of second feature points to obtain feature points to be detected. Next, a matching area matched with the target image is constructed in the reference image based on the selected feature points to be detected and the image information of the reference image. Finally, a detection result of the target image is generated according to the proportion of the matching area in the reference image. Compared with the existing image detection scheme, this scheme determines the feature points to be detected according to the image difference information between the target image and the reference image, and constructs the matching area matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image to generate the detection result of the target image. That is, when image detection is performed, the image difference information between the target image and the reference image is taken into account, which avoids inaccurate detection results caused by differences in resolution and/or aspect ratio between the target image and the reference image; therefore, the accuracy of image detection can be improved.
The method according to the embodiment is further described in detail by way of example.
In the present embodiment, the image detection apparatus will be described by taking an example in which it is specifically integrated in a terminal.
Referring to fig. 2a, a specific process of the image detection method may be as follows:
201. and the terminal collects the image pair to be detected.
The image pair to be detected comprises a target image and a reference image, both of which are images of the same application scene, such as images of the start-up interface of a game application. The reference image and the target image may be pre-stored locally, pulled by accessing a network interface, or captured in real time by a camera, as specifically determined by the actual situation.
202. The terminal extracts a plurality of feature points used for representing local information in the image from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points.
Optionally, in some embodiments, the terminal may extract, from the target image A and the reference image B, a plurality of feature points for characterizing local information in the images by using the Scale-Invariant Feature Transform (SIFT) algorithm, so as to obtain a plurality of first feature points n1 and a plurality of second feature points n2, as shown in fig. 2b. As can be seen, the room in the target image A has neither a window nor a door, so no window or door feature points can be extracted from the target image A, whereas the room in the reference image B has a window and a door, so window and door feature points can be extracted from the reference image B.
203. And the terminal selects a second feature point for image detection from the plurality of second feature points according to the image difference information between the target image and the reference image to obtain the feature point to be detected.
Since the resolution and/or aspect ratio of the target image and the reference image may be inconsistent, a second feature point for image detection needs to be selected from the plurality of second feature points according to the image difference information between the target image and the reference image; if absolute positions were used (i.e., the position of a feature point in the image to which it belongs), a feature point of the target image might fall outside the image area of the reference image.
204. And the terminal constructs a matching region matched with the target image in the reference image based on the selected image information of the feature point to be detected and the reference image.
For example, specifically, the terminal may calculate the similarity between each first color value and each second color value, determine the second pixel points whose similarity is greater than a preset value as matching pixel points, and then construct, based on the matching pixel points, an image block matched with each pixel point in the target image, so as to obtain a second matching image block.
205. And the terminal generates a detection result of the target image according to the ratio of the matching area in the reference image.
For example, specifically, when the ratio of the matching area in the reference image is greater than the preset ratio, the terminal determines that the target image matches the reference image, and when the ratio of the matching area in the reference image is less than or equal to the preset ratio, the terminal determines that the target image does not match the reference image.
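The terminal's decision rule in this step can be sketched in a few lines; the 80% preset ratio is the example value mentioned above:

```python
def detection_result(matching_area, reference_area, preset_ratio=0.8):
    """Step 205: the target image matches the reference image when the
    ratio of the matching area in the reference image exceeds the
    preset ratio; otherwise (ratio <= preset ratio) it does not match."""
    return matching_area / reference_area > preset_ratio
```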
To facilitate understanding of the image detection method according to the embodiment of the present invention, a game test scene is taken as an example. In a certain game test task, the pictures of 48 mobile phones need to be confirmed one by one. A tester may first detect the picture of one of the 48 mobile phones; after that detection is completed, the picture of this mobile phone is taken as the reference image, and the pictures of the other 47 mobile phones are taken as target images.
As can be seen from the above, after the terminal acquires the image pair to be detected, where the image pair includes a target image and a reference image, the terminal extracts a plurality of feature points used for representing local information in the images from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points. Then, the terminal selects, according to image difference information between the target image and the reference image, second feature points used for image detection from the plurality of second feature points to obtain feature points to be detected. Next, the terminal constructs a matching area matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image. Finally, the terminal generates a detection result of the target image according to the ratio of the matching area in the reference image. Compared with the existing image detection scheme, the terminal determines the feature points to be detected according to the image difference information between the target image and the reference image, and constructs the matching area matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image to generate the detection result of the target image. That is, when image detection is performed, the image difference information between the target image and the reference image is taken into account, which avoids inaccurate detection results caused by differences in resolution and/or aspect ratio between the target image and the reference image; therefore, the accuracy of image detection can be improved.
In order to better implement the image detection method according to the embodiment of the present invention, an embodiment of the present invention further provides an image detection apparatus (referred to as a detection apparatus for short) based on the foregoing image detection method. The terms are the same as those in the image detection method, and details of implementation can be referred to the description in the method embodiment.
Referring to fig. 3a, fig. 3a is a schematic structural diagram of an image detection apparatus according to an embodiment of the present invention, where the image detection apparatus may include an acquisition module 301, an extraction module 302, a selection module 303, a construction module 304, and a generation module 305, which may specifically be as follows:
the acquisition module 301 is configured to acquire an image pair to be detected.
The image pair to be detected comprises a target image and a reference image, both of which are images of the same application scene, such as images of the start-up interface of a game application. The reference image and the target image may be pre-stored locally, pulled by accessing a network interface, or captured in real time by a camera, as specifically determined by the actual situation.
An extracting module 302, configured to extract a plurality of feature points used for characterizing local information in an image from a target image and a reference image, respectively, to obtain a plurality of first feature points and a plurality of second feature points.
For example, in detail, the extracting module 302 may extract a plurality of feature points for characterizing local information in the image from the target image and the reference image by using a Scale-invariant feature transform (SIFT) algorithm, so as to obtain a plurality of first feature points and a plurality of second feature points.
The selecting module 303 is configured to select a second feature point used for image detection from the multiple second feature points according to image difference information between the target image and the reference image, so as to obtain a feature point to be detected.
Optionally, in some embodiments, the selecting module 303 may specifically include:
the first calculation unit is used for calculating Euclidean distances between the first characteristic points and the second characteristic points;
the selection unit is used for selecting at least one reference characteristic point corresponding to each first characteristic point from the plurality of second characteristic points according to the calculation result to obtain a reference characteristic point set corresponding to each first characteristic point;
a second calculation unit, configured to calculate a position error between each first feature point and a reference feature point in a corresponding reference feature point set based on image difference information between the target image and the reference image;
and the determining unit is used for determining the reference characteristic points with the position errors meeting the preset conditions as the characteristic points to be detected matched with the first characteristic points.
Optionally, in some embodiments, the second computing unit may specifically be configured to: mapping all the first feature points to the reference image based on image difference information between the target image and the reference image to obtain a plurality of mapping feature points, calculating the position offset between each mapping feature point and the corresponding reference feature point, and obtaining the position error between each first feature point and the reference feature point in the corresponding reference feature point set.
Optionally, in some embodiments, the determining unit may be specifically configured to: and determining the reference characteristic points with the position offset smaller than or equal to the preset offset as the characteristic points to be detected matched with the first characteristic points.
And the constructing module 304 is used for constructing a matching region matched with the target image in the reference image based on the selected feature points to be detected and the image information of the reference image.
Optionally, in some embodiments, the building module 304 may specifically include:
the intercepting submodule is used for intercepting images within a preset range of each feature point to be detected in the reference image according to the image information of the reference image to obtain a first matching image block;
the construction sub-module is used for constructing an image block matched with each pixel point in the target image according to all pixel points of the target image and the pixel points outside the preset range of each feature point to be detected, so as to obtain a second matched image block;
and the generation sub-module is used for generating a matching area matched with the target image based on the first matching image block and the second matching image block.
Optionally, in some embodiments, the building sub-module may specifically include:
the acquiring unit is used for acquiring color values of all pixel points in the target image to obtain a plurality of first color values, and acquiring color values of pixel points outside a preset range of each feature point to be detected to obtain a plurality of second color values;
and the construction unit is used for constructing the image blocks matched with the pixel points in the target image based on the first color values and the second color values to obtain second matched image blocks.
Optionally, in some embodiments, the building unit may specifically include:
the calculating subunit is used for calculating the similarity between each first color value and each second color value;
the determining subunit is used for determining second pixel points corresponding to second color values with the similarity larger than a preset threshold as matching pixel points;
and the construction subunit is used for constructing an image block matched with each pixel point in the target image based on the matched pixel points to obtain a second matched image block.
Optionally, in some embodiments, the calculating subunit may specifically be configured to: and mapping each second pixel point to the target image according to the image difference information between the target image and the reference image to obtain mapping pixel points, calculating the similarity of each mapping pixel point and the corresponding first pixel point on each color channel, and generating the similarity between each mapping pixel point and the corresponding first pixel point on the basis of the similarity of each mapping pixel point and the corresponding first pixel point on each color channel.
A generating module 305, configured to generate a detection result of the target image according to a ratio of the matching area in the reference image.
For example, specifically, the generating module 305 determines that the target image matches the reference image when the proportion of the matching area in the reference image is greater than the preset proportion, and the generating module 305 determines that the target image does not match the reference image when the proportion of the matching area in the reference image is less than or equal to the preset proportion.
Optionally, referring to fig. 3b, in some embodiments, the detection apparatus may further include a display module 306, where the display module 306 may be specifically configured to: and removing the image corresponding to the matching area from the target image to obtain an image reserved area, and displaying the image reserved area based on a preset strategy.
It can be seen that, after the acquisition module 301 acquires an image pair to be detected, the image pair to be detected includes a target image and a reference image, the extraction module 302 extracts a plurality of feature points used for characterizing local information in the images from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points, then the selection module 303 selects a second feature point used for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain a feature point to be detected, then the construction module 304 constructs a matching region matched with the target image in the reference image based on the selected feature point to be detected and image information of the reference image, and finally the generation module 305 generates a detection result of the target image according to a ratio of the matching region in the reference image. 
Compared with the existing image detection scheme, the selection module 303 of the scheme selects the feature point to be detected for image detection from the second feature points according to the image difference information between the target image and the reference image, and the construction module 304 constructs a matching region matched with the target image in the reference image based on the selected feature point to be detected and the image information of the reference image to generate the detection result of the target image, that is, when image detection is performed, the problem that the detection result is not accurate due to the fact that the resolution ratio and/or the aspect ratio of the target image is different from that of the reference image is avoided in consideration of the image difference information between the target image and the reference image, so that the accuracy of image detection can be improved.
Accordingly, an embodiment of the present invention further provides a terminal, as shown in fig. 4, the terminal may include Radio Frequency (RF) circuits 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, and a power supply 409. Those skilled in the art will appreciate that the terminal structure shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 401 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 408 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 401 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuitry 401 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), long Term Evolution (LTE), email, short Message Service (SMS), and the like.
The memory 402 may be used to store software programs and modules, and the processor 408 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, in one embodiment, the input unit 403 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, may collect touch operations performed by the user on or near it (such as operations using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Optionally, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the orientation of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 408, and can receive and execute commands sent by the processor 408. In addition, the touch-sensitive surface may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 403 may include other input devices, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick, and the like.
The display unit 404 may be used to display information input by or provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 404 may include a display panel, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near the touch-sensitive surface, it is transmitted to the processor 408 to determine the type of touch event, and the processor 408 then provides a corresponding visual output on the display panel based on that type. Although in FIG. 4 the touch-sensitive surface and the display panel are shown as two separate components implementing input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement both.
The terminal may also include at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel according to the brightness of the ambient light, and a proximity sensor, which may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when the terminal is stationary, and can be used in applications that recognize the terminal's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection); other sensors that may be configured in the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here.
The audio circuit 406, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 406 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 406 receives and converts into audio data. The audio data is then output to the processor 408 for processing and, for example, sent to another terminal via the RF circuit 401, or written to the memory 402 for further processing. The audio circuit 406 may also include an earbud jack to allow a peripheral headset to communicate with the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 407, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 4 shows the WiFi module 407, it is understood that the module is not an essential part of the terminal and may be omitted entirely as needed without changing the essence of the invention.
The processor 408 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the terminal's functions and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the terminal as a whole. Optionally, the processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor, which mainly handles the operating system, user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 408.
The terminal also includes a power supply 409 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 408 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 409 may also include any other components, such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which will not be described herein. Specifically, in this embodiment, the processor 408 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 408 runs the application programs stored in the memory 402, thereby implementing various functions:
The method comprises: collecting an image pair to be detected; extracting, from the target image and the reference image respectively, a plurality of feature points used for representing local information in the image, to obtain a plurality of first feature points and a plurality of second feature points; selecting, from the plurality of second feature points and according to image difference information between the target image and the reference image, second feature points used for image detection, to obtain the feature points to be detected; constructing, in the reference image, a matching area matched with the target image based on the selected feature points to be detected and image information of the reference image; and generating a detection result of the target image according to the proportion of the matching area in the reference image.
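As a rough illustration of the feature-extraction step above, the sketch below uses a deliberately simple toy detector: a pixel counts as a "feature point representing local information" if it is a strict local maximum among its four direct neighbours. The patent does not prescribe any particular detector, so the rule, the function name, and the tiny test image are all illustrative assumptions, not the claimed method.

```python
def extract_feature_points(img):
    """Toy detector: a pixel is a 'feature point' if it is a strict
    local maximum among its four direct neighbours. A real system
    would use a robust corner or blob detector instead."""
    h, w = len(img), len(img[0])
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            # Compare against the 4-neighbourhood (up, down, left, right).
            if all(v > img[y + dy][x + dx]
                   for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                points.append((x, y))
    return points

# Running this on both images of the pair yields the "first" and
# "second" feature point sets described above.
target = [[0, 0, 0],
          [0, 9, 0],
          [0, 0, 0]]
first_feature_points = extract_feature_points(target)
```

The same routine would be applied to the reference image to obtain the second feature points, after which the difference-aware selection described next would filter them.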
After an image pair to be detected is collected (the image pair comprising a target image and a reference image), a plurality of feature points used for representing local information in the images are extracted from the target image and the reference image respectively, to obtain a plurality of first feature points and a plurality of second feature points. Then, according to image difference information between the target image and the reference image, second feature points used for image detection are selected from the second feature points, to obtain feature points to be detected. A matching area matched with the target image is then constructed in the reference image based on the selected feature points to be detected and the image information of the reference image, and finally a detection result of the target image is generated according to the proportion of the matching area in the reference image. Compared with existing image detection schemes, this scheme determines the feature points to be detected according to the image difference information between the target image and the reference image, and constructs, in the reference image, the matching region matched with the target image based on the selected feature points to be detected and the image information of the reference image, so as to generate the detection result of the target image. In other words, when image detection is carried out, the image difference information between the target image and the reference image is taken into account, which avoids inaccurate detection results caused by the target image and the reference image differing in resolution and/or aspect ratio; accordingly, the accuracy of image detection can be improved.
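One simple reading of "image difference information", when the two images differ only in resolution and/or aspect ratio, is a pair of per-axis scale factors. The sketch below maps a target-image point into the reference image under that assumption and measures the position error against a candidate reference feature point; the function names and the pure-scaling model are illustrative assumptions, not taken from the patent.

```python
import math

def map_point(pt, target_size, reference_size):
    """Map a point from the target image into the reference image,
    assuming the only difference between the two images is their
    resolution (independent x/y scale factors)."""
    x, y = pt
    tw, th = target_size
    rw, rh = reference_size
    return (x * rw / tw, y * rh / th)

def position_error(target_pt, reference_pt, target_size, reference_size):
    """Euclidean distance between a mapped target point and a candidate
    reference feature point; a small error indicates a plausible match."""
    mx, my = map_point(target_pt, target_size, reference_size)
    return math.hypot(mx - reference_pt[0], my - reference_pt[1])

# A point at (10, 20) in a 100x200 target maps to (5, 10) in a
# 50x100 reference, so the error against (5, 10) is zero.
err = position_error((10, 20), (5, 10), (100, 200), (50, 100))
```

A reference feature point whose error stays within a preset offset would then be kept as a feature point to be detected, in the spirit of the selection step described above.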
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be completed by instructions, or by related hardware controlled by instructions, where the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present invention provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute steps in any one of the image detection methods provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
The method comprises: collecting an image pair to be detected; extracting, from the target image and the reference image respectively, a plurality of feature points used for representing local information in the image, to obtain a plurality of first feature points and a plurality of second feature points; selecting, from the plurality of second feature points and according to image difference information between the target image and the reference image, second feature points used for image detection, to obtain the feature points to be detected; constructing, in the reference image, a matching area matched with the target image based on the selected feature points to be detected and image information of the reference image; and generating a detection result of the target image according to the proportion of the matching area in the reference image.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
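The final step of the instructions above, turning the matching area's share of the reference image into a detection result, can be sketched with a boolean match mask over the reference image. The 0.8 pass threshold and the dictionary-shaped result below are assumptions for illustration only; the patent leaves the exact form of the detection result open.

```python
def detection_result(match_mask, threshold=0.8):
    """Compute the proportion of matched pixels in the reference image
    and derive a pass/fail detection result. `match_mask` holds 1 for
    reference pixels inside the matching area, 0 otherwise; the
    threshold value is an assumption, not specified by the patent."""
    total = sum(len(row) for row in match_mask)
    matched = sum(sum(row) for row in match_mask)
    ratio = matched / total
    return {"ratio": ratio, "is_match": ratio >= threshold}

# A 2x2 mask in which 3 of the 4 reference pixels fall inside the
# matching area: the ratio is 0.75, below the assumed 0.8 threshold.
result = detection_result([[1, 1], [1, 0]])
```

In practice the mask would be the union of the first and second matching image blocks described in the embodiments, and the threshold would be tuned to the application.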
The storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Since the instructions stored in the storage medium can execute the steps in any image detection method provided in the embodiment of the present invention, the beneficial effects that can be achieved by any image detection method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image detection method, apparatus, terminal, and storage medium provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for those skilled in the art, the specific embodiments and the application scope may vary according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. An image detection method, comprising:
acquiring a pair of images to be detected, wherein the pair of images to be detected comprises a target image and a reference image;
extracting a plurality of feature points used for representing local information in the image from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points;
selecting second feature points for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain feature points to be detected;
intercepting images in a preset range of each feature point to be detected in the reference image according to the image information of the reference image to obtain a first matching image block;
constructing an image block matched with each pixel point in the target image according to all pixel points of the target image and pixel points outside the preset range of each feature point to be detected, and obtaining a second matched image block;
generating a matching area matched with the target image based on the first matching image block and the second matching image block;
and generating a detection result of the target image according to the ratio of the matching area in the reference image.
2. The method according to claim 1, wherein the step of constructing an image block matched with each pixel point in the target image according to all pixel points of the target image and pixel points outside a preset range of each feature point to be detected to obtain a second matched image block comprises the steps of:
obtaining color values of all pixel points in a target image to obtain a plurality of first color values;
obtaining color values of pixel points outside a preset range of each characteristic point to be detected to obtain a plurality of second color values;
and constructing an image block matched with each pixel point in the target image based on each first color value and each second color value to obtain a second matched image block.
3. The method of claim 2, wherein constructing an image block matched with each pixel point in the target image based on each first color value and each second color value to obtain a second matched image block comprises:
calculating the similarity between each first color value and each second color value;
determining second pixel points corresponding to second color values with the similarity larger than a preset threshold value as matching pixel points;
and constructing an image block matched with each pixel point in the target image based on the matched pixel points to obtain a second matched image block.
4. The method of claim 3, wherein calculating the similarity between each first color value and each second color value comprises:
mapping each second pixel point into the target image according to the image difference information between the target image and the reference image to obtain mapped pixel points;
calculating the similarity of each mapping pixel point and the corresponding first pixel point on each color channel;
and generating the similarity between each mapping pixel point and the corresponding first pixel point based on the similarity of each mapping pixel point and the corresponding first pixel point on each color channel.
5. The method according to any one of claims 1 to 4, wherein the selecting a second feature point for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain a feature point to be detected comprises:
calculating Euclidean distances between the first characteristic points and the second characteristic points;
selecting at least one reference characteristic point corresponding to each first characteristic point from the plurality of second characteristic points according to the calculation result to obtain a reference characteristic point set corresponding to each first characteristic point;
calculating a position error between each first characteristic point and a reference characteristic point in a corresponding reference characteristic point set based on image difference information between the target image and the reference image;
and determining the reference characteristic points with the position errors meeting the preset conditions as the characteristic points to be detected for image detection.
6. The method according to claim 5, wherein calculating a position error between each first feature point and a reference feature point in a corresponding reference feature point set based on image difference information between the target image and a reference image comprises:
mapping all the first feature points to a reference image based on image difference information between the target image and the reference image to obtain a plurality of mapping feature points;
calculating the position offset between each mapping characteristic point and the corresponding reference characteristic point to obtain the position error between each first characteristic point and the reference characteristic point in the corresponding reference characteristic point set;
the determining the reference feature points with the position errors meeting the preset conditions as the feature points to be detected for image detection comprises the following steps: and determining the reference characteristic points with the position offset smaller than or equal to the preset offset as the characteristic points to be detected for image detection.
7. The method according to any one of claims 1 to 4, wherein after the generating the detection result of the target image according to the proportion of the matching area in the reference image, the method further comprises:
removing the image corresponding to the matching area from the target image to obtain an image reserved area;
and displaying the image reserved area based on a preset strategy.
8. An image detection apparatus, characterized by comprising:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring an image pair to be detected, and the image pair to be detected comprises a target image and a reference image;
the extraction module is used for extracting a plurality of feature points used for representing local information in the image from the target image and the reference image respectively to obtain a plurality of first feature points and a plurality of second feature points;
the selection module is used for selecting second feature points for image detection from the plurality of second feature points according to image difference information between the target image and the reference image to obtain feature points to be detected;
the construction module is used for intercepting images in a preset range of each feature point to be detected in the reference image according to the image information of the reference image to obtain a first matching image block; constructing image blocks matched with all pixel points in the target image according to all pixel points of the target image and pixel points outside the preset range of all feature points to be detected to obtain second matched image blocks; generating a matching area matched with the target image based on the first matching image block and the second matching image block;
and the generating module is used for generating a detection result of the target image according to the ratio of the matching area in the reference image.
9. The apparatus of claim 8, wherein the build module comprises:
the intercepting submodule is used for intercepting images within a preset range of each feature point to be detected in the reference image according to the image information of the reference image to obtain a first matching image block;
the construction sub-module is used for constructing an image block matched with each pixel point in the target image according to all pixel points of the target image and pixel points outside the preset range of each feature point to be detected, so as to obtain a second matched image block;
and the generation sub-module is used for generating a matching area matched with the target image based on the first matching image block and the second matching image block.
10. The apparatus of claim 9, wherein the building module comprises:
the acquiring unit is used for acquiring color values of all pixel points in the target image to obtain a plurality of first color values, and acquiring color values of pixel points outside a preset range of each feature point to be detected to obtain a plurality of second color values;
and the construction unit is used for constructing the image blocks matched with the pixel points in the target image based on the first color values and the second color values to obtain second matched image blocks.
11. The apparatus of claim 10, wherein the building unit comprises:
the calculating subunit is used for calculating the similarity between each first color value and each second color value;
the determining subunit is used for determining second pixel points corresponding to second color values with the similarity greater than a preset threshold as matching pixel points;
and the construction subunit is used for constructing an image block matched with each pixel point in the target image based on the matched pixel points to obtain a second matched image block.
12. The apparatus according to claim 11, wherein the computing subunit is specifically configured to:
mapping each second pixel point into the target image according to the image difference information between the target image and the reference image to obtain mapped pixel points;
calculating the similarity of each mapping pixel point and the corresponding first pixel point on each color channel;
and generating the similarity between each mapping pixel point and the corresponding first pixel point based on the similarity of each mapping pixel point and the corresponding first pixel point on each color channel.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the image detection method according to any of claims 1 to 7 are implemented when the program is executed by the processor.
14. A storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the image detection method according to any one of claims 1 to 7.
CN202010266809.4A 2020-04-07 2020-04-07 Image detection method and device, electronic equipment and storage medium Active CN111476780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010266809.4A CN111476780B (en) 2020-04-07 2020-04-07 Image detection method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111476780A CN111476780A (en) 2020-07-31
CN111476780B true CN111476780B (en) 2023-04-07

Family

ID=71750148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010266809.4A Active CN111476780B (en) 2020-04-07 2020-04-07 Image detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111476780B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597840A (en) * 2020-12-14 2021-04-02 深圳集智数字科技有限公司 Image identification method, device and equipment
CN112686302B (en) * 2020-12-29 2024-02-13 科大讯飞股份有限公司 Image feature point matching method, computer equipment and storage device
CN112750116B (en) * 2021-01-15 2023-08-11 北京市商汤科技开发有限公司 Defect detection method, device, computer equipment and storage medium
CN112883827B (en) * 2021-01-28 2024-03-29 腾讯科技(深圳)有限公司 Method and device for identifying specified target in image, electronic equipment and storage medium
CN112818366B (en) * 2021-02-01 2023-09-26 东北大学 Image feature detection method based on ntru full homomorphic encryption
CN112990228B (en) * 2021-03-05 2024-03-29 浙江商汤科技开发有限公司 Image feature matching method, related device, equipment and storage medium
CN113111713B (en) * 2021-03-12 2024-02-27 北京达佳互联信息技术有限公司 Image detection method and device, electronic equipment and storage medium
WO2022266878A1 (en) * 2021-06-23 2022-12-29 深圳市大疆创新科技有限公司 Scene determining method and apparatus, and computer readable storage medium
CN113238972B (en) * 2021-07-12 2021-10-29 腾讯科技(深圳)有限公司 Image detection method, device, equipment and storage medium
CN114943943B (en) * 2022-05-16 2023-10-03 中国电信股份有限公司 Target track obtaining method, device, equipment and storage medium
CN114882445A (en) * 2022-07-06 2022-08-09 深圳百城精工有限公司 Elevator monitoring and early warning method, device, equipment and medium based on image vision
EP4354391A1 (en) 2022-08-26 2024-04-17 Contemporary Amperex Technology Co., Limited Method for detecting target point in image, apparatus, and computer storage medium
CN115965848B (en) * 2023-03-13 2023-05-23 腾讯科技(深圳)有限公司 Image processing method and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006139579A (en) * 2004-11-12 2006-06-01 Kitakyushu Foundation For The Advancement Of Industry Science & Technology Histogram approximation reconstitution apparatus and histogram approximation reconstitution method, and image retrieval apparatus and image retrieval method
CN105513038A (en) * 2014-10-20 2016-04-20 网易(杭州)网络有限公司 Image matching method and mobile phone application test platform
CN105701766A (en) * 2016-02-24 2016-06-22 网易(杭州)网络有限公司 Image matching method and device
CN108154526A (en) * 2016-12-06 2018-06-12 奥多比公司 The image alignment of burst mode image
CN108920580A (en) * 2018-06-25 2018-11-30 腾讯科技(深圳)有限公司 Image matching method, device, storage medium and terminal
CN110188782A (en) * 2019-06-11 2019-08-30 北京字节跳动网络技术有限公司 Image similarity determines method, apparatus, electronic equipment and readable storage medium storing program for executing
CN110941989A (en) * 2019-10-18 2020-03-31 北京达佳互联信息技术有限公司 Image verification method, image verification device, video verification method, video verification device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5139716B2 (en) * 2007-05-16 2013-02-06 キヤノン株式会社 Image search apparatus and image search method
EP2801054B1 (en) * 2012-01-02 2017-06-28 Telecom Italia S.p.A. Method and system for comparing images
US9798949B1 (en) * 2015-03-19 2017-10-24 A9.Com, Inc. Region selection for image match

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Yuanman Li et al. Fast and Effective Image Copy-Move Forgery Detection via Hierarchical Feature Point Matching. IEEE Transactions on Information Forensics and Security. 2018, Vol. 14 (05), 1307-1322. *
Sun Chang et al. A near-duplicate image retrieval method based on hybrid features. Microcomputer Applications. 2015, Vol. 31 (09), 4+27-30. *
Li Li et al. Locust slice images based on feature point extraction and matching. Transactions of the Chinese Society of Agricultural Engineering. 2015, Vol. 31 (07), 165-173. *
Luo Nan et al. Pairwise feature point matching for images with repetitive patterns. Journal of Image and Graphics. 2015, Vol. 20 (01), 117-128. *

Also Published As

Publication number Publication date
CN111476780A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN111476780B (en) Image detection method and device, electronic equipment and storage medium
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
CN106874906B (en) Image binarization method and device and terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN110458921B (en) Image processing method, device, terminal and storage medium
CN109495616B (en) Photographing method and terminal equipment
CN108259746B (en) Image color detection method and mobile terminal
JP7467667B2 (en) Detection result output method, electronic device and medium
CN109104578B (en) Image processing method and mobile terminal
CN109727212B (en) Image processing method and mobile terminal
CN109246351B (en) Composition method and terminal equipment
CN111556337B (en) Media content implantation method, model training method and related device
CN107516099B (en) Method and device for detecting marked picture and computer readable storage medium
CN112541489A (en) Image detection method and device, mobile terminal and storage medium
CN109068063B (en) Three-dimensional image data processing and displaying method and device and mobile terminal
CN110602384B (en) Exposure control method and electronic device
CN109348212B (en) Image noise determination method and terminal equipment
CN110717486B (en) Text detection method and device, electronic equipment and storage medium
CN110766606A (en) Image processing method and electronic equipment
CN108063884B (en) Image processing method and mobile terminal
CN111679737B (en) Hand segmentation method and electronic device
CN114140655A (en) Image classification method and device, storage medium and electronic equipment
CN109492451B (en) Coded image identification method and mobile terminal
CN107734049B (en) Network resource downloading method and device and mobile terminal
CN107194363B (en) Image saturation processing method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant