CN115760703A - Image detection model training method, difference detection method and related device


Publication number
CN115760703A
CN115760703A
Authority
CN
China
Prior art keywords
feature
detection
difference
feature map
graph
Prior art date
Legal status
Pending
Application number
CN202211300279.6A
Other languages
Chinese (zh)
Inventor
潘国雄
郑佳
潘柄存
潘华东
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority claimed from CN202211300279.6A
Publication of CN115760703A
Legal status: Pending

Abstract

The application discloses an image detection model training method, a difference detection method and a related device, the image detection model comprising a feature extraction network and a difference detection network. The method comprises: inputting a reference image and a detection image into the feature extraction network to obtain a reference feature map and a detection feature map; inputting the reference feature map and the detection feature map into the difference detection network to obtain a target region where the two maps differ, where the target region is determined from feature correction values at the same positions on the reference feature map and the detection feature map, each feature correction value is computed by applying a correction operation to a feature value group, a feature value group consists of the feature values at the same position on the two maps, and the correction operation depends on the numerical values in the group; and adjusting parameters of the image detection model based on the target region until a trained image detection model is obtained. The scheme improves the accuracy of difference detection performed by the image detection model.

Description

Image detection model training method, difference detection method and related device
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an image detection model training method, a difference detection method, and a related apparatus.
Background
With the rise of computer vision, difference detection has drawn increasing attention as an important branch of the field: it can detect changes in the same scene at different points in time. When a large number of images need difference detection, a trained image detection model is usually obtained through model training and then used to perform the detection, so as to improve detection efficiency; the training effect of the image detection model therefore directly determines the accuracy of the difference detection. In view of this, how to improve the accuracy of difference detection performed by an image detection model has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide an image detection model training method, a difference detection method and a related device that can improve the accuracy of difference detection performed by an image detection model.
In order to solve the above technical problem, a first aspect of the present application provides an image detection model training method, where the image detection model includes a feature extraction network and a difference detection network, and the method includes: inputting a reference image and a detection image into the feature extraction network to obtain a reference feature map and a detection feature map; inputting the reference feature map and the detection feature map into the difference detection network to obtain a target region where the reference feature map and the detection feature map differ, where the target region is determined based on feature correction values at the same positions on the reference feature map and the detection feature map, the feature correction values are determined from feature value groups after a correction operation, a feature value group includes the feature values at the same position on the two maps, and the correction operation depends on the numerical values of the feature values in the group; and adjusting parameters of the image detection model based on the target region until a preset convergence condition is met, obtaining the trained image detection model.
In order to solve the above technical problem, a second aspect of the present application provides a difference detection method, including: obtaining an image group to be detected, where the image group includes a reference image and an image to be detected; and inputting the image group into an image detection model to obtain a target region where a difference exists, where the image detection model is obtained after training by the method of the first aspect.
To solve the above technical problem, a third aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, wherein the memory stores program data, and the processor calls the program data to execute the method of the first or second aspect.
In order to solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium on which program data is stored, the program data implementing the method of the first or second aspect when executed by a processor.
According to the above scheme, the reference image and the detection image are input into the feature extraction network, so that the feature extraction network performs feature extraction on the reference image to obtain a reference feature map and on the detection image to obtain a detection feature map. The reference feature map and the detection feature map are then input into the difference detection network, so that the difference detection network determines the target region where the two maps differ based on the feature correction values at the same positions on them, where each feature correction value is determined from a feature value group after a correction operation, the feature value group includes the feature values at the same position on the two maps, and the correction operation depends on the numerical values of those feature values. Compared with directly comparing the feature values on the reference feature map and the detection feature map, correcting the feature value groups through the correction operation and performing difference detection on the resulting feature correction values improves the precision of the detection and yields a more accurate target region. The parameters of the image detection model are then adjusted based on the target region until a preset convergence condition is met, so that the trained image detection model performs difference detection with improved accuracy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic flowchart of an embodiment of the image detection model training method of the present application;
FIG. 2 is a schematic flowchart of another embodiment of the image detection model training method of the present application;
FIG. 3 is a schematic flowchart of yet another embodiment of the image detection model training method of the present application;
FIG. 4 is a schematic flowchart of yet another embodiment of the image detection model training method of the present application;
FIG. 5 is a schematic flowchart of yet another embodiment of the image detection model training method of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the image detection model of the present application;
FIG. 7 is a schematic structural diagram of an embodiment corresponding to the feature matching module in FIG. 6;
FIG. 8 is a schematic structural diagram of an embodiment corresponding to the feature comparison module in FIG. 6;
FIG. 9 is a schematic flowchart of an embodiment of the difference detection method of the present application;
FIG. 10 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
The image detection model training method of the present application is used for training an image detection model that includes at least a feature extraction network and a difference detection network and is used for detecting differences between images. The execution subject of the image detection model training method is a processor capable of invoking the image detection model.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of an image detection model training method according to the present application, the method including:
s101: and inputting the reference graph and the detection graph into a feature extraction network to obtain a reference feature graph and a detection feature graph.
Specifically, the reference graph and the detection graph form a training sample pair, the reference graph and the detection graph in the same training sample pair correspond to the same application scene, and the reference graph and the detection graph are input into the feature extraction network, so that the feature extraction network performs feature extraction on the basic graph to obtain a reference feature graph, and performs feature extraction on the detection graph to obtain a detection feature graph.
In an application mode, the feature extraction network comprises a first convolution module and a second convolution module which are identical in structure, and parameters of the first convolution module and the second convolution module are kept consistent all the time when the parameters are adjusted, namely, the first convolution module and the second convolution module in the feature extraction network are twin convolution modules, a reference image is input into the first convolution module to obtain a reference feature image, a detection image is input into the second convolution module to obtain a detection feature image, and therefore the efficiency of feature extraction is improved based on the twin network.
In another application mode, the feature extraction network comprises cascaded convolution modules, the reference graph and the detection graph are sequentially input into the cascaded convolution modules, feature extraction is carried out on the reference graph and the detection graph, and the reference feature graph corresponding to the reference graph and the detection feature graph corresponding to the detection graph are obtained respectively, so that the complexity of the feature extraction network is reduced.
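As an illustration of the twin-module variant, the following PyTorch sketch shares one backbone between the two inputs, which is equivalent to two structurally identical convolution modules whose parameters are kept consistent; the class name, layer layout and channel counts are illustrative assumptions, not taken from the patent.
```python
import torch
import torch.nn as nn

class TwinFeatureExtractor(nn.Module):
    """Twin (Siamese) feature extraction: both inputs pass through one
    shared backbone, so the reference image and the detection image are
    embedded consistently. Layer sizes are illustrative assumptions."""

    def __init__(self, in_channels: int = 3, feat_channels: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, reference: torch.Tensor, detection: torch.Tensor):
        ref_feat = self.backbone(reference)  # reference feature map
        det_feat = self.backbone(detection)  # detection feature map
        return ref_feat, det_feat
```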
S102: and inputting the reference characteristic diagram and the detection characteristic diagram into a difference detection network to obtain a target area with a difference between the reference characteristic diagram and the detection characteristic diagram, wherein the target area is determined based on characteristic correction values at the same positions on the reference characteristic diagram and the detection characteristic diagram, the characteristic correction values are determined based on a characteristic value group after correction operation, the characteristic value group comprises characteristic values at the same positions on the reference characteristic diagram and the detection characteristic diagram, and the correction operation is related to the values of the characteristic values in the characteristic value group.
Specifically, the reference feature map and the detection feature map are input to the difference detection network so that the difference detection network determines the target region where there is a difference between the reference feature map and the detection feature map based on feature correction values at the same positions on the reference feature map and the detection feature map, wherein the feature correction values are determined based on a feature value group after a correction operation, the feature value group including feature values at the same positions on the reference feature map and the detection feature map, the correction operation being associated with numerical values of the feature values in the feature value group.
In one application, the reference characteristic diagram and the detection characteristic diagram are input into a difference detection network, so that the difference detection network corrects characteristic values on the reference characteristic diagram and the detection characteristic diagram, wherein the correction operation comprises amplifying difference values between the characteristic values in the characteristic group, determining a characteristic correction value based on the amplified difference values, and further determining a target area with difference on the reference characteristic diagram and the detection characteristic diagram based on the characteristic correction value so as to improve the accuracy of difference detection.
In another application, the reference feature map and the detection feature map are input to a difference detection network, so that the difference detection network determines a detection area with a difference between feature values in the feature value group based on a difference between the feature values in the reference feature map and the detection feature map, and performs a correction operation on the feature values in the detection area corresponding to the reference feature map and the detection feature map, wherein the correction operation includes amplifying the difference between the feature values in the feature group in the detection area, determining a feature correction value in the detection area based on the amplified difference, and performing secondary verification on the detection area by using the feature correction value to determine a target area with a difference between the reference feature map and the detection feature map, so as to improve the accuracy of difference detection.
In an application scenario, the correction operation includes performing weighted summation on absolute difference and product between feature values in the feature value set, so as to obtain a feature correction value.
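A minimal sketch of this correction operation, assuming equal weights for the two terms (the patent does not fix the weight values):
```python
import torch

def feature_correction(ref_feat: torch.Tensor, det_feat: torch.Tensor,
                       w_diff: float = 0.5, w_prod: float = 0.5) -> torch.Tensor:
    """Correction operation over each feature value group (the pair of
    values at the same position on the two maps): a weighted sum of the
    absolute difference and the product. Weights are assumptions."""
    abs_diff = (ref_feat - det_feat).abs()  # amplifies disagreement
    product = ref_feat * det_feat           # cross-correlation term
    return w_diff * abs_diff + w_prod * product
```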
S103: and adjusting parameters of the image detection model based on the target area until a preset convergence condition is met, and obtaining the trained image detection model.
Specifically, the reference graph and the detection graph correspond to identification regions with differences, the target region and the identification regions are compared to obtain a loss value, parameters of the image detection model are adjusted based on the loss value until preset convergence conditions are met, and the trained image detection model is obtained.
In an application mode, the target region further comprises a confidence level, the preset convergence condition is determined based on the loss value and the confidence level of the target region, when the loss value is smaller than the loss threshold value and the confidence level exceeds the confidence level threshold value, the training process is ended, and the trained image detection model is obtained so as to obtain the image detection model with the higher confidence level.
In another application mode, the preset convergence condition is determined based on the loss value and the training frequency, and when the loss value is smaller than the loss threshold and the training frequency exceeds the frequency threshold, the training process is ended to obtain the trained image detection model, so as to obtain the image detection model with higher stability.
According to the scheme, the reference graph and the detection graph are input into the feature extraction network, so that the feature extraction network performs feature extraction on the base graph to obtain a reference feature graph, the detection graph performs feature extraction to obtain a detection feature graph, the reference feature graph and the detection feature graph are input into the difference detection network, so that the difference detection network determines the target area with difference on the reference feature graph and the detection feature graph based on the feature correction value at the same position on the reference feature graph and the detection feature graph, wherein the feature correction value is determined based on the feature value group after the correction operation, the feature value group comprises the feature values at the same position on the reference feature graph and the detection feature graph, and the correction operation is related to the value of the feature values in the feature value group. Therefore, compared with the characteristic values on the reference characteristic diagram and the detection characteristic diagram which are directly compared, the characteristics corresponding to the characteristic value set are corrected through correction operation to obtain a characteristic correction value, difference detection is carried out on the basis of the characteristic correction value, the precision of the difference detection can be improved to obtain a more accurate target area, the parameters of the image detection model are adjusted on the basis of the target area until a preset convergence condition is met, and the trained image detection model is obtained, so that the accuracy of the difference detection of the trained image detection model is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of an image detection model training method according to the present application, the method including:
s201: and inputting the reference graph and the detection graph into a feature extraction network to obtain a reference feature graph and a detection feature graph.
Specifically, the reference graph and the detection graph are input into a feature extraction network, so that the feature extraction network respectively performs feature extraction on the base graph and the detection graph to obtain a reference feature graph corresponding to the reference graph and a detection feature graph corresponding to the detection graph.
S202: and inputting the reference characteristic diagram and the detection characteristic diagram into a difference detection network, and correcting the characteristic value group based on the difference and the product between the characteristic values in the characteristic value group to obtain a characteristic correction value.
Specifically, the reference feature map and the detection feature map are input to the difference detection network, so that the difference detection network performs a correction operation on the feature value group based on the difference and the product between the feature values in the feature value group, thereby obtaining a corrected feature correction value.
In an application mode, a difference characteristic diagram and an associated characteristic diagram are obtained after a reference characteristic diagram and a detection characteristic diagram are input into a difference detection network, wherein characteristic values on the difference characteristic diagram are obtained based on differences between the characteristic values in the characteristic value group, characteristic values on the associated characteristic diagram are obtained based on a product of the characteristic values in the characteristic value group, the difference characteristic diagram and the associated characteristic diagram are spliced, dimension reduction is carried out on the spliced characteristic diagram to obtain a matched characteristic diagram, the dimensions of the matched characteristic diagram, the reference characteristic diagram and the detection characteristic diagram are the same, and corresponding feature correction values are arranged on the matched characteristic diagram.
In another application mode, the reference feature map and the detection feature map are input to a difference detection network, the difference detection network comprises a difference operation branch and a cross-correlation operation branch, the difference operation branch calculates a difference value of the features at the same positions on the reference feature map and the detection feature map and determines an absolute value of the difference value to obtain an absolute difference value, the cross-correlation operation branch calculates a product of the feature values at the same positions on the reference feature map and the detection feature map to obtain a product value, the absolute difference value and the product value are subjected to weighted summation to determine a feature correction value on the matching feature map to obtain a matching feature map, and the matching feature map has the same dimension as the reference feature map and the detection feature map.
S203: and determining a target area with a difference between the reference feature map and the detection feature map based on the feature correction value.
Specifically, the characteristic correction value is compared with the characteristic value at the same position on the reference characteristic diagram, the characteristic correction value is compared with the characteristic value at the same position on the detection characteristic diagram, the characteristic correction value with the difference value exceeding the characteristic value difference threshold value between the characteristic value and the characteristic value on the reference characteristic diagram or the detection characteristic diagram is determined, and the target area with the difference between the reference characteristic diagram and the detection characteristic diagram is determined based on the characteristic correction value with the difference value exceeding the characteristic value difference threshold value.
Further, when the difference between the feature values in the feature value group is larger, the feature correction value is more obviously distinguished from the reference feature map or the detection feature map, so that a detection area with the difference between the reference feature map and the detection feature map is determined based on the feature correction value.
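A sketch of how the feature correction values could be thresholded against both maps to flag candidate difference positions; the threshold value is an illustrative assumption:
```python
import torch

def difference_mask(corrected: torch.Tensor, ref_feat: torch.Tensor,
                    det_feat: torch.Tensor,
                    diff_threshold: float = 0.3) -> torch.Tensor:
    """Flags positions whose feature correction value deviates from the
    reference feature map or the detection feature map by more than the
    feature-value difference threshold."""
    dev_ref = (corrected - ref_feat).abs()
    dev_det = (corrected - det_feat).abs()
    return (dev_ref > diff_threshold) | (dev_det > diff_threshold)
```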
S204: and adjusting parameters of the image detection model based on the target area until a preset convergence condition is met, and obtaining the trained image detection model.
Specifically, the reference graph and the detection graph correspond to identification regions with differences, the target region and the identification regions are compared to obtain a loss value, parameters of the image detection model are adjusted based on the loss value until preset convergence conditions are met, and the trained image detection model is obtained.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a method for training an image detection model according to another embodiment of the present application, the method including:
s301: and inputting the reference graph and the detection graph into a feature extraction network to obtain a reference feature graph and a detection feature graph.
Specifically, the reference graph and the detection graph are input into a feature extraction network, so that the feature extraction network respectively performs feature extraction on the base graph and the detection graph to obtain a reference feature graph corresponding to the reference graph and a detection feature graph corresponding to the detection graph.
S302: and inputting the reference characteristic diagram and the detection characteristic diagram into a difference detection network, and determining a detection area with difference on the reference characteristic diagram and the detection characteristic diagram based on the difference between the characteristic values in the characteristic value group.
Specifically, the reference feature map and the detection feature map are input to the difference detection network, so that the difference detection network determines an area where there is a difference between the reference feature map and the detection feature map based on a difference between the feature values in the feature value group, and the area where the difference exceeds a feature value difference threshold is used as a detection area where there is a difference between the reference feature map and the detection feature map.
S303: and based on the characteristic values at the same positions on the reference characteristic subgraph and the detection characteristic subgraph, carrying out correction operation on the characteristic values in the reference characteristic subgraph and the detection characteristic subgraph to obtain a characteristic correction value.
Specifically, a region corresponding to the detection region is extracted from the reference feature map to obtain a reference feature sub-map, a region corresponding to the detection region is extracted from the detection feature map to obtain a detection feature sub-map, and based on feature values at the same position on the reference feature sub-map and the detection feature sub-map, correction operation is performed on the feature values in the reference feature sub-map and the detection feature sub-map to obtain a feature correction value.
In an application mode, the correction operation comprises the steps of amplifying the difference value between the characteristic values at the same position on the reference characteristic subgraph and the detection characteristic subgraph, and obtaining the characteristic correction value based on the difference value and the product between the characteristic values at the same position on the reference characteristic subgraph and the detection characteristic subgraph.
In another application mode, the similarity between the reference characteristic subgraph and the detection characteristic subgraph is determined based on the characteristic values of the same positions on the reference characteristic subgraph and the detection characteristic subgraph, and the correction operation comprises correcting the characteristic values on the reference characteristic subgraph and the detection characteristic subgraph based on the similarity to obtain a characteristic correction value.
In an application scene, feature values at the same position on a reference feature subgraph and a detection feature subgraph are subjected to convolution operation to obtain a reference feature vector corresponding to the reference feature subgraph and a detection feature vector corresponding to the detection feature subgraph, the similarity between the reference feature subgraph and the detection feature subgraph is determined based on the reference feature vector and the detection feature vector, a corresponding feature correction value on the reference feature subgraph is obtained based on the product between the similarity and the feature value on the reference feature subgraph, and a corresponding feature correction value on the detection feature subgraph is obtained based on the product between the square value of the similarity and the feature value on the detection feature subgraph, so that the difference between the feature correction values on the reference feature subgraph and the detection feature subgraph is expanded, detection areas are screened based on the similarity, and the precision of difference detection is improved.
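A sketch of this similarity-scaled correction; global average pooling stands in for the convolution step that produces the feature vectors, which is an assumption of this sketch:
```python
import torch
import torch.nn.functional as F

def similarity_scaled_correction(ref_sub: torch.Tensor, det_sub: torch.Tensor):
    """Corrects the two sub-maps by the similarity and its square, so the
    gap between their corrected values widens as similarity drops."""
    ref_vec = F.adaptive_avg_pool2d(ref_sub, 1).flatten(1)  # (N, C) vector
    det_vec = F.adaptive_avg_pool2d(det_sub, 1).flatten(1)
    sim = F.cosine_similarity(ref_vec, det_vec, dim=1)      # (N,)
    sim = sim.clamp(min=0.0).view(-1, 1, 1, 1)              # broadcastable
    ref_corrected = sim * ref_sub          # similarity   x reference values
    det_corrected = sim.pow(2) * det_sub   # similarity^2 x detection values
    return ref_corrected, det_corrected
```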
S304: and determining a target area with a difference between the reference feature map and the detection feature map based on the feature correction value.
Specifically, the characteristic correction value is compared with the characteristic value at the same position on the reference characteristic diagram, the characteristic correction value is compared with the characteristic value at the same position on the detection characteristic diagram, the characteristic correction value with the difference value exceeding the characteristic value difference threshold value between the characteristic correction value and the characteristic value on the reference characteristic diagram or the detection characteristic diagram is determined, and the target area with difference between the reference characteristic diagram and the detection characteristic diagram is determined based on the characteristic correction value with the difference value exceeding the characteristic value difference threshold value.
S305: and adjusting parameters of the image detection model based on the target area until a preset convergence condition is met, and obtaining the trained image detection model.
Specifically, the reference graph and the detection graph correspond to identification areas with differences, the target area and the identification areas are compared to obtain a loss value, parameters of the image detection model are adjusted based on the loss value until preset convergence conditions are met, and the trained image detection model is obtained.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a method for training an image detection model according to another embodiment of the present application, the method including:
s401: and inputting the reference graph and the detection graph into a feature extraction network to obtain a reference feature graph and a detection feature graph.
Specifically, the reference graph and the detection graph form a training sample pair, the reference graph and the detection graph in the same training sample pair correspond to the same application scene, the reference graph and the detection graph are input to the feature extraction network, so that the feature extraction network performs feature extraction on the basic graph to obtain a reference feature graph, and performs feature extraction on the detection graph to obtain a detection feature graph.
S402: and inputting the reference characteristic diagram and the detection characteristic diagram into a characteristic matching module to obtain a matching characteristic diagram, wherein the characteristic correction value on the matching characteristic diagram is determined based on the difference and the product between the characteristic values in the characteristic value group.
Specifically, the reference feature map and the detection feature map are input to the feature matching module, so that the feature matching module determines a feature correction value on the matching feature map based on a difference and a product between feature values at the same position on the reference feature map and the detection feature map, and obtains the matching feature map.
In an application mode, the feature matching module comprises a difference operation branch, a cross-correlation operation branch and a splicing module, a reference feature map and a detection feature map are input into the feature matching module, a difference feature map is obtained based on the difference operation branch, the difference operation branch obtains a difference value for the features at the same positions on the reference feature map and the detection feature map, the absolute value of the difference value is determined to obtain a feature value on the difference feature map, a correlation feature map is obtained based on the cross-correlation operation branch, the cross-correlation operation branch obtains a product of the feature values at the same positions on the reference feature map and the detection feature map to obtain a correlation feature map on the correlation feature map, the difference feature map and the correlation feature map are spliced by the splicing module, the dimension of the spliced feature map is reduced to obtain a matching feature map, and the dimension of the matching feature map is the same as that of the reference feature map and the detection feature map.
In another application mode, the feature matching module includes a difference operation branch and a cross-correlation operation branch, the reference feature map and the detected feature map are input to the feature matching module, the difference operation branch calculates a difference value for the features at the same positions on the reference feature map and the detected feature map, and determines an absolute value of the difference value to obtain an absolute difference value, the cross-correlation operation branch calculates a product of the feature values at the same positions on the reference feature map and the detected feature map to obtain a product value, the absolute difference value and the product value are subjected to weighted summation, and a feature correction value on the matched feature map is determined to obtain a matched feature map, wherein the dimensions of the matched feature map are the same as those of the reference feature map and the detected feature map.
It should be noted that, the feature correction value on the matching feature map obtained after the reference feature map and the detected feature map pass through the feature matching module is equivalent to performing amplification processing on the difference between the feature values at the same positions on the reference feature map and the detected feature map, and when the difference between the feature values corresponding to the same positions on the reference feature map and the detected feature map is larger, the feature correction value at the corresponding position on the matching feature map is more obvious to be distinguished from the reference feature map or the detected feature map, so that the detection area with the difference on the reference feature map and the detected feature map is determined based on the feature correction value on the matching feature map, and the accuracy of determining the detection area with the difference can be improved.
S403: and determining a detection area with difference on the reference characteristic diagram and the detection characteristic diagram based on the characteristic correction value on the matching characteristic diagram.
Specifically, by using the feature correction value on the matching feature map, the regions with differences on the reference feature map and the detection feature map are searched, and the detection regions with differences on the reference feature map and the detection feature map are determined.
In an application mode, the difference value between the characteristic values corresponds to a characteristic value difference threshold, the difference value between the characteristic correction value and the characteristic value at the same position on the matching characteristic diagram and the reference characteristic diagram and the difference value between the characteristic correction value and the characteristic value at the same position on the matching characteristic diagram and the detection characteristic diagram are compared with the characteristic value difference threshold, the detection coordinate corresponding to the characteristic correction value of which the difference value on the matching characteristic diagram exceeds the characteristic value difference threshold is determined, and an area surrounded by the detection coordinate is used as a detection area with difference on the reference characteristic diagram and the detection characteristic diagram, so that the efficiency of obtaining the detection area is improved.
In another application mode, the difference between the characteristic values corresponds to a characteristic value difference threshold, the difference between the characteristic correction value and the characteristic value at the same position on the matched characteristic diagram and the reference characteristic diagram and the difference between the characteristic correction value and the characteristic value at the same position on the matched characteristic diagram and the detected characteristic diagram are compared with the characteristic value difference threshold to determine the initial characteristic value of which the difference on the matched characteristic diagram exceeds the characteristic value difference threshold, whether other initial characteristic values are included in the preset radius of each initial characteristic value is determined by taking each initial characteristic value as the center, the initial characteristic value including other initial characteristic values in the preset radius is taken as a target characteristic value, and the region where the target characteristic values are gathered is taken as a detection region where the difference exists on the reference characteristic diagram and the detected characteristic diagram, so that the discrete initial characteristic values are removed, and the precision of the detection region is improved.
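A sketch of the radius-based screening in the second application mode, which keeps an initial feature value only if another one lies within the preset radius; the radius value is an illustrative assumption:
```python
import torch

def filter_discrete_points(coords: torch.Tensor, radius: float = 2.0) -> torch.Tensor:
    """coords: (N, 2) positions whose correction value exceeded the
    feature-value difference threshold. Keeps only positions with at
    least one other candidate within `radius`; isolated (discrete)
    responses are discarded, since target feature values cluster."""
    if coords.numel() == 0:
        return coords
    dists = torch.cdist(coords.float(), coords.float())  # pairwise distances
    dists.fill_diagonal_(float("inf"))                   # ignore self-distance
    keep = (dists <= radius).any(dim=1)
    return coords[keep]
```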
S404: and obtaining a reference characteristic subgraph from the reference characteristic graph based on the detection region, obtaining a detection characteristic subgraph from the detection characteristic graph, inputting the reference characteristic subgraph and the detection characteristic subgraph to a characteristic comparison module to obtain a target region, wherein the target region is determined based on characteristic values on the reference characteristic subgraph and the detection characteristic subgraph.
Specifically, a region corresponding to the detection region is extracted from the reference feature map to obtain a reference feature sub-map, a region corresponding to the detection region is extracted from the detection feature map to obtain a detection feature sub-map, and the reference feature sub-map and the detection feature sub-map are input to the feature comparison module, so that the feature comparison module determines the target region based on feature values on the reference feature sub-map and the detection feature sub-map, secondary confirmation is performed on the detection region, and the accuracy of difference detection is improved.
In an application mode, the feature comparison module comprises a convolution module with a twin structure, wherein the convolution module comprises two convolution modules with the same structure, a reference feature subgraph is input into one convolution module to obtain a reference feature vector, a detection feature subgraph is input into the other convolution module to obtain a detection feature vector, the similarity between the reference feature subgraph and the detection feature subgraph is determined based on the reference feature vector and the detection feature vector, the detection region is screened by utilizing the similarity, and the detection region with the similarity exceeding a similarity threshold value is used as a target region.
In another application mode, the feature comparison module comprises a cascade convolution module, a reference feature sub-graph and a detection feature sub-graph are sequentially input into the cascade convolution module, feature extraction is carried out on the reference feature sub-graph and the detection feature sub-graph, a reference feature vector corresponding to the reference feature sub-graph and a detection feature vector corresponding to the detection feature sub-graph are respectively obtained, the similarity between the reference feature sub-graph and the detection feature sub-graph is determined based on the reference feature vector and the detection feature vector, the detection area is screened by using the similarity, and the detection area with the similarity exceeding a similarity threshold value is used as the target area.
S405: and adjusting parameters of the image detection model based on the target area until a preset convergence condition is met, and obtaining the trained image detection model.
Specifically, the reference graph and the detection graph correspond to identification areas with differences, the target area and the identification areas are compared to obtain a loss value, parameters of the image detection model are adjusted based on the loss value until preset convergence conditions are met, and the trained image detection model is obtained.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a method for training an image detection model according to another embodiment of the present application, the method including:
s501: and inputting the reference graph and the detection graph into a feature extraction network to obtain a reference feature graph and a detection feature graph.
Specifically, please refer to fig. 6, where fig. 6 is a schematic structural diagram of an embodiment of the image detection model of the present application, the feature extraction network includes a first convolution module and a second convolution module having the same structure, and parameters of the first convolution module and the second convolution module are always kept consistent when adjusting. That is, the first convolution module and the second convolution module are twin networks, and share parameters including weight and the like.
In an application scene, a reference graph in a training sample pair is input to a first convolution module to obtain a reference characteristic graph, a detection graph in the training sample pair is input to a second convolution module to obtain a detection characteristic graph, and the reference graph and the detection graph corresponding to the same application scene form the training sample pair.
Specifically, a reference graph and a detection graph in a training sample pair correspond to the same application scene, and the reference graph and the detection graph in the training sample pair correspond to different time points, so that a contrast group with differences is formed as far as possible, the reference graph in the training sample pair is input to a first convolution module for feature extraction to obtain a reference feature graph, the detection graph in the training sample pair is input to a second convolution module for feature extraction to obtain a detection feature graph, and therefore the twin network is used for improving the efficiency of feature extraction and ensuring the consistency of the reference graph and the detection graph in feature extraction.
S502: and inputting the reference characteristic diagram and the detection characteristic diagram into a characteristic matching module to obtain a matching characteristic diagram, wherein the characteristic correction value on the matching characteristic diagram is determined based on the difference and the product between the characteristic values in the characteristic value group.
Specifically, referring to fig. 6 again, the reference feature map and the detected feature map are input to the feature matching module, so that the feature matching module determines a difference feature map by using a difference between feature values at the same positions on the reference feature map and the detected feature map, and determines an associated feature map by using a product between feature values at the same positions on the reference feature map and the detected feature map, thereby obtaining a matching feature map based on the difference feature map and the associated feature map.
In an application scene, inputting the reference characteristic diagram and the detection characteristic diagram into a characteristic matching module, obtaining a difference characteristic diagram based on absolute differences between characteristic values in a characteristic value group, and obtaining an associated characteristic diagram based on a product between the characteristic values in the characteristic value group; and obtaining a matching feature map based on the difference feature map and the associated feature map.
Specifically, please refer to fig. 7, fig. 7 is a schematic structural diagram of an embodiment corresponding to the feature matching module in fig. 6, in which the feature matching module includes a difference operation branch and a cross-correlation operation branch, the reference feature map and the detected feature map are input to the feature matching module, the difference operation branch performs a difference on feature values at the same positions on the reference feature map and the detected feature map and obtains an absolute value, so as to obtain an absolute difference value as a feature value on the difference feature map, and the cross-correlation operation branch performs a product on feature values at the same positions on the reference feature map and the detected feature map, so as to obtain a product value as a feature value on the correlation feature map.
Further, the matching feature map is obtained based on the difference feature map and the associated feature map, so that the feature values on both are integrated and a more accurate difference detection result can be obtained from the matching feature map.
In a specific application scenario, obtaining the matching feature map based on the difference feature map and the associated feature map includes: stitching the difference feature map and the associated feature map to obtain a stitched feature map; and reducing the dimension of the stitched feature map to obtain the matching feature map, which has the same dimensions as the reference feature map and the detection feature map.
Specifically, referring to fig. 7 again, the difference feature map and the associated feature map are stitched together to obtain the stitched feature map, which combines the feature values of both maps and mines the differences more deeply; the stitched feature map is then reduced in dimension to obtain the matching feature map, ensuring that the matching feature map has the same dimensions as the reference feature map and the detection feature map. The dimension reduction may be performed with a convolution layer, for example a convolution layer with a 1×1 convolution kernel, so that all feature maps keep the same dimensions and the accuracy of difference detection is improved.
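The feature matching module of fig. 7 could be sketched as follows; the channel count is an illustrative assumption, while the two branches, the stitching and the 1×1 dimension-reducing convolution follow the description above:
```python
import torch
import torch.nn as nn

class FeatureMatchingModule(nn.Module):
    """Difference branch (absolute difference) plus cross-correlation
    branch (element-wise product); the two results are stitched along
    the channel axis and a 1x1 convolution reduces the stitched map
    back to the input dimension, giving the matching feature map."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, ref_feat: torch.Tensor, det_feat: torch.Tensor):
        diff_map = (ref_feat - det_feat).abs()  # difference feature map
        corr_map = ref_feat * det_feat          # associated feature map
        stitched = torch.cat([diff_map, corr_map], dim=1)
        return self.reduce(stitched)            # matching feature map
```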
S503: and inputting the matched feature map into a regression classification module, determining at least one initial region with difference between the reference feature map and the detection feature map based on the feature correction value on the matched feature map, and outputting the confidence degree corresponding to each initial region.
Specifically, referring to fig. 6 again, the difference detection network further includes a regression classification module, which inputs the matching feature map into the regression classification module, so that the regression classification module mines at least one initial region where there is a difference between the reference feature map and the detection feature map based on the feature values on the matching feature map, and each initial region corresponds to a confidence level to indicate the reliability of the result.
Further, the feature values on the reference feature map and the associated feature map obtained after feature extraction are both between 0 and 1, so that when the difference between the feature values at the same positions on the reference feature map and the associated feature map is large, the feature correction value on the matched feature map obtained based on the difference feature map and the associated feature map is at least a value with a large difference compared with the feature values on the reference feature map and the detected feature map, so as to obtain at least one initial region with a difference between the reference feature map and the detected feature map based on the difference, and output the confidence corresponding to the initial region, so as to improve the accuracy and reliability of difference detection.
S504: and filtering all the initial regions by using the confidence coefficient to obtain a detection region.
Specifically, the confidence level corresponds to a confidence level threshold, all initial regions are filtered based on the confidence level, the initial regions with the confidence level smaller than the confidence level threshold are deleted, and the initial regions with the confidence level larger than or equal to the confidence level threshold are used as detection regions, so that results with low reliability are filtered, and the accuracy of difference detection is improved.
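The patent does not fix the architecture of the regression classification module; since the losses below use a heatmap-style Focal Loss and (x_center, y_center, w, h) detection frames, one plausible sketch is a CenterNet-like head followed by the confidence filtering of S504 (all names and the threshold value are assumptions):
```python
import torch
import torch.nn as nn

class RegressionClassificationModule(nn.Module):
    """Predicts a per-position confidence heatmap and box parameters
    from the matching feature map. Assumed CenterNet-like layout."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.heatmap = nn.Conv2d(channels, 1, kernel_size=1)  # confidence
        self.boxes = nn.Conv2d(channels, 4, kernel_size=1)    # (xc, yc, w, h)

    def forward(self, match_feat: torch.Tensor):
        conf = torch.sigmoid(self.heatmap(match_feat))
        boxes = self.boxes(match_feat)
        return boxes, conf

def filter_by_confidence(regions, confidences, conf_threshold: float = 0.5):
    """S504: keep only initial regions whose confidence reaches the
    confidence threshold."""
    return [r for r, c in zip(regions, confidences) if c >= conf_threshold]
```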
S505: and obtaining a reference characteristic subgraph from the reference characteristic graph based on the detection region, obtaining a detection characteristic subgraph from the detection characteristic graph, inputting the reference characteristic subgraph and the detection characteristic subgraph to a characteristic comparison module to obtain a target region, wherein the target region is determined based on characteristic values on the reference characteristic subgraph and the detection characteristic subgraph.
Specifically, referring to fig. 6 again, an area of interest is extracted from the reference feature map based on the detection region to obtain a reference feature sub-map, and an area of interest is extracted from the detection feature map based on the detection region to obtain a detection feature sub-map; inputting the reference characteristic subgraph and the detection characteristic subgraph into a characteristic comparison module to obtain the similarity between the reference characteristic subgraph and the detection characteristic subgraph; and determining the probability of difference between the reference characteristic subgraph and the detection characteristic subgraph based on the similarity, and filtering the detection region by using the probability to obtain a target region.
In an application scene, extracting interesting regions from a reference feature map and a detection feature map respectively by using detection regions, thereby obtaining a reference feature sub-map corresponding to the reference feature map and a detection feature sub-map corresponding to the detection feature map, inputting the reference feature sub-map and the detection feature sub-map as input images into a feature comparison module for carrying out recheck, determining the similarity between the reference feature sub-map and the detection feature sub-map, judging whether a difference exists between the reference feature sub-map and the detection feature sub-map based on the similarity, obtaining the probability of the difference between the reference feature sub-map and the detection feature sub-map, filtering the detection regions by using the probability, rejecting the detection regions with the probability greater than a probability threshold, and taking the detection regions with the probability less than or equal to the probability threshold as target regions, namely taking the detection regions with lower similarity probability as the target regions with the difference, thereby improving the accuracy and reliability of difference detection results through the second verification of the reference feature sub-map and the detection feature sub-map.
In a specific application scenario, please refer to fig. 8, where fig. 8 is a schematic structural diagram of an embodiment corresponding to the feature comparison module in fig. 6, where the feature comparison module includes convolution modules with the same structure to form a twin network structure, a reference feature sub-graph and a detection feature sub-graph are input to the feature comparison module, so that the reference feature sub-graph and the detection feature sub-graph respectively pass through the convolution modules to obtain feature vectors, the similarity between the reference feature sub-graph and the detection feature sub-graph is determined based on the feature vectors, and the similarity is converted into a similarity probability, where the similarity probability is positively correlated with the similarity.
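A sketch of the twin comparison module of fig. 8 and the probability-based recheck; the encoder layout, the use of cosine similarity and the sigmoid conversion are illustrative assumptions consistent with the description above:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureComparisonModule(nn.Module):
    """A shared convolutional encoder turns each sub-map into a feature
    vector; their similarity is converted into a similarity probability
    that is positively correlated with the similarity."""

    def __init__(self, channels: int = 64, embed_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(       # shared weights: twin structure
            nn.Conv2d(channels, embed_dim, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, ref_sub: torch.Tensor, det_sub: torch.Tensor):
        sim = F.cosine_similarity(self.encoder(ref_sub),
                                  self.encoder(det_sub), dim=1)
        return torch.sigmoid(sim)           # similarity probability

def recheck(regions, sim_probs, prob_threshold: float = 0.5):
    """Keep as target regions only the detection regions whose
    similarity probability stays at or below the threshold."""
    return [r for r, p in zip(regions, sim_probs) if p <= prob_threshold]
```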
S506: and adjusting parameters of the image detection model based on the target area until a preset convergence condition is met, and obtaining the trained image detection model.
Specifically, the reference map and the detection map correspond to an identification region with a difference, a loss value of the image detection model is determined based on the target region and the identification region, parameters of the image detection model are adjusted based on the loss value until a preset convergence condition is met, and the trained image detection model is obtained.
In an application mode, based on a target region and an identification region, determining the confidence coefficient loss and the position prediction loss of a regression classification module and the similarity loss of a feature comparison module; carrying out weighted summation on the reliability loss, the position prediction loss and the similarity loss to obtain the detection loss of the difference detection network; determining a total loss of the image detection model based on the detection loss, and adjusting parameters of the image detection model based on the total loss.
Specifically, the target area is compared with the identification area to determine the confidence loss and the position prediction loss of the regression classification module and the similarity loss of the feature comparison module. The three losses are then weighted and summed to obtain the detection loss of the difference detection network, where the weights of the confidence loss, the position prediction loss and the similarity loss sum to 1; weighted summation improves the rationality of the detection loss. The total loss of the image detection model is determined on the basis of the detection loss (the total loss may also include a loss value of the feature extraction network), and the parameters of the image detection model are then adjusted on the basis of the total loss to improve the training effect of the image detection model.
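The weighted summation can be sketched as follows; the individual weight values are assumed placeholders, chosen only so that they sum to 1 as required:

```python
def detection_loss(l_cls, l_bbox, l_cl, w_cls=0.4, w_bbox=0.4, w_cl=0.2):
    # Weights are illustrative, not the patent's tuned settings; they must sum to 1.
    assert abs(w_cls + w_bbox + w_cl - 1.0) < 1e-6
    return w_cls * l_cls + w_bbox * l_bbox + w_cl * l_cl

def total_loss(l_det, l_feat=None):
    # The feature-extraction loss term is optional, matching "the total loss
    # may also include a loss value of the feature extraction network".
    return l_det if l_feat is None else l_det + l_feat
```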
In an application scenario, determining the confidence loss and the position prediction loss of the regression classification module and the similarity loss of the feature comparison module based on the target region and the identification region includes: determining the confidence loss of the regression classification module based on the confidence corresponding to each initial region and the identification region by using a first loss function; determining the position prediction loss of the regression classification module based on each target region and the identification region by using a second loss function; determining the similarity loss of the feature comparison module based on the difference between the feature values of the reference feature sub-map and the detection feature sub-map by using a third loss function; wherein the first loss function, the second loss function and the third loss function are distinct from one another.
Specifically, the confidence loss, the position prediction loss and the similarity loss each correspond to one loss function. The first loss function is a Focal Loss function, and the confidence loss of the regression classification module is determined by using the Focal Loss function based on the confidence corresponding to each initial region and the identification region. The above process is formulated as follows:

$$l_{cls}=-\frac{1}{N}\sum_{x,y}\begin{cases}\left(1-\hat{Y}_{x,y}\right)^{\alpha}\log\hat{Y}_{x,y}, & Y_{x,y}=1\\\left(1-Y_{x,y}\right)^{\beta}\,\hat{Y}_{x,y}^{\alpha}\log\left(1-\hat{Y}_{x,y}\right), & \text{otherwise}\end{cases}$$

where $l_{cls}$ is the confidence loss, $\alpha$ and $\beta$ are two hyper-parameters set to 2 and 4 respectively, $\hat{Y}_{x,y}$ denotes the class prediction probability at coordinates $(x, y)$, and $Y_{x,y}$ represents the supervisory signal of the real category at coordinates $(x, y)$; $N$ normalizes the loss over the positive locations.
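A hedged implementation of this confidence loss is sketched below; it assumes the standard heatmap focal-loss form that the stated hyper-parameters (alpha = 2, beta = 4) suggest:

```python
import torch

def focal_confidence_loss(pred, target, alpha=2.0, beta=4.0, eps=1e-6):
    # pred, target: heatmaps of class probabilities / supervisory signals in [0, 1].
    pred = pred.clamp(eps, 1 - eps)               # avoid log(0)
    pos = target.eq(1).float()                    # positive locations (Y = 1)
    neg = 1.0 - pos
    pos_loss = pos * (1 - pred).pow(alpha) * pred.log()
    neg_loss = neg * (1 - target).pow(beta) * pred.pow(alpha) * (1 - pred).log()
    num_pos = pos.sum().clamp(min=1)              # normalize by the positive count
    return -(pos_loss + neg_loss).sum() / num_pos
```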
Further, the detection area is represented in the form of a detection frame, which may be a rectangular frame or another shape. The second loss function is an L1 loss function, and the position prediction loss of the regression classification module is determined based on each target area and the identification area by using the L1 loss function. The above process is formulated as follows:

$$l_{bbox}=\sum_{k}\left\lvert \hat{b}_{k}-b_{k}\right\rvert$$

where $l_{bbox}$ is the position prediction loss, $\hat{b}_{k}=(\hat{x}_{center},\hat{y}_{center},\hat{w},\hat{h})$ is the predicted detection box of the detection map, and $b_{k}=(x_{center},y_{center},w,h)$ is the supervisory information of the real detection box of the detection map, in which $(x_{center},y_{center})$ is the center point of the detection frame and $w$ and $h$ are its width and height.
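A minimal sketch of this position prediction loss, assuming mean reduction over the box coordinates (the text only names the L1 form):

```python
import torch
import torch.nn.functional as F

def bbox_l1_loss(pred_boxes, gt_boxes):
    # Boxes are (x_center, y_center, w, h) rows of shape (K, 4).
    return F.l1_loss(pred_boxes, gt_boxes, reduction='mean')

# Example: one predicted box versus its ground truth.
pred = torch.tensor([[10.0, 12.0, 6.0, 4.0]])
gt = torch.tensor([[11.0, 12.0, 5.0, 4.0]])
print(bbox_l1_loss(pred, gt))  # tensor(0.5000)
```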
Further, the third loss function is a contrastive loss function, and the similarity loss of the feature comparison module is determined by using the contrastive loss function based on the difference between the feature values of the reference feature sub-map and the detection feature sub-map. The above process is formulated as follows:

$$l_{cl}=\frac{1}{2N}\sum_{n=1}^{N}\left[\,Y D^{2}+(1-Y)\max(m-D,\,0)^{2}\,\right],\qquad D=\lVert X_{1}-X_{2}\rVert_{2}=\sqrt{\sum_{i=1}^{P}\left(X_{1}^{i}-X_{2}^{i}\right)^{2}}$$

where $l_{cl}$ is the similarity loss, $D$ is the Euclidean distance (two-norm) between the features of the two samples $X_{1}$ and $X_{2}$, $P$ is the feature dimension of the samples, $Y$ is the label indicating whether the two samples match ($Y=1$ denotes similar or matched samples), $m$ is a set threshold, and $N$ is the number of samples.
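The similarity loss can be sketched directly from the formula; the margin value is an assumed placeholder:

```python
import torch

def contrastive_loss(x1, x2, y, margin=1.0):
    # x1, x2: (N, P) feature vectors; y: (N,) float labels with 1 = matched pair.
    d = torch.norm(x1 - x2, p=2, dim=1)   # Euclidean distance D per pair
    loss = y * d.pow(2) + (1 - y) * torch.clamp(margin - d, min=0).pow(2)
    return loss.mean() / 2                # the 1/(2N) factor in the formula
```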
In this embodiment, the image detection model is an end-to-end framework: a target region with a difference is obtained directly after the images are input. The feature matching module determines a difference feature map from the differences between feature values at the same positions on the reference feature map and the detection feature map, and determines a correlation feature map from the products between feature values at those positions; the feature correction values on the resulting matching feature map are obtained from the difference feature map and the correlation feature map, and take larger values wherever the reference and detection features diverge. Based on these values, at least one initial region with a difference between the reference feature map and the detection feature map is obtained, together with a confidence for each initial region, and confidence filtering yields the detection regions. Regions of interest are then extracted from the reference feature map and the detection feature map using the detection regions, giving a reference feature sub-map and a detection feature sub-map, which are input as input images to the feature comparison module for a second verification; detection regions whose match probability is greater than the probability threshold are removed, and the remaining detection regions are taken as the target regions. This second verification of the reference feature sub-map and the detection feature sub-map improves the accuracy and reliability of the difference detection result.
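Putting the pieces together, a schematic of the end-to-end forward pass might look as follows; every attribute of model here (backbone, fuse, regress_classify, compare) is a hypothetical name for the corresponding module described above, and recheck_regions is the sketch given earlier in this section:

```python
import torch

def detect_differences(model, ref_img, det_img, conf_threshold=0.3):
    # Feature extraction: twin backbones with shared weights.
    ref_feat = model.backbone(ref_img)
    det_feat = model.backbone(det_img)
    # Feature matching: difference map (absolute differences) and correlation
    # map (products) at the same positions, fused into a matching feature map.
    diff_map = torch.abs(ref_feat - det_feat)
    corr_map = ref_feat * det_feat
    matched = model.fuse(torch.cat([diff_map, corr_map], dim=1))
    # Regression classification: initial regions plus confidences, then filter.
    boxes, conf = model.regress_classify(matched)
    boxes = boxes[conf > conf_threshold]
    # Second verification with the feature comparison module.
    return recheck_regions(ref_feat, det_feat, boxes, model.compare)
```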
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating an embodiment of a method for detecting differences according to the present application, the method including:
S901: acquiring an image group to be detected, wherein the image group to be detected comprises a reference image and an image to be detected.
Specifically, the reference image and the image to be detected correspond to the same application scene; the image group to be detected is obtained, and the reference image and the image to be detected are extracted from it.
S902: inputting the image group to be detected into the image detection model to obtain a target area with a difference.
Specifically, the group of images to be detected is input to the image detection model, so that the image detection model performs difference detection on the group of images to be detected, and a target region with a difference is obtained, wherein the image detection model is obtained based on the image detection model training method in any of the above embodiments, and for the description of related contents, reference is made to the detailed description of the above method embodiments, which is not repeated herein.
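As a usage illustration of S901-S902, assuming a saved end-to-end model that takes the two images directly (the checkpoint name and input sizes are placeholders, not artifacts of this disclosure):

```python
import torch

model = torch.load('image_detection_model.pt', map_location='cpu')
model.eval()
with torch.no_grad():
    ref = torch.rand(1, 3, 512, 512)   # reference image of the scene
    det = torch.rand(1, 3, 512, 512)   # image to be detected, same scene
    target_regions = model(ref, det)   # regions where a difference exists
print(target_regions)
```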
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of an electronic device of the present application, the electronic device 100 includes a memory 1001 and a processor 1002 coupled to each other, where the memory 1001 stores program data (not shown), and the processor 1002 calls the program data to implement the method in any of the embodiments described above, and for a description of related contents, reference is made to the detailed description of the method embodiment described above, which is not repeated here.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium 110 of the present application, the computer-readable storage medium 110 stores program data 1100, and the program data 1100 implements the method of any of the above embodiments when executed by a processor, and the related contents are described in detail with reference to the above method embodiments, which are not repeated herein.
It should be noted that, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (14)

1. An image detection model training method, wherein the image detection model comprises a feature extraction network and a difference detection network, the method comprising:
inputting the reference graph and the detection graph into the feature extraction network to obtain a reference feature graph and a detection feature graph;
inputting the reference characteristic diagram and the detection characteristic diagram into the difference detection network to obtain a target area with difference on the reference characteristic diagram and the detection characteristic diagram; wherein the target region is determined based on feature correction values at the same positions on the reference feature map and the detection feature map, the feature correction values are determined based on a feature value group after a correction operation, the feature value group includes feature values at the same positions on the reference feature map and the detection feature map, and the correction operation is associated with numerical values of the feature values in the feature value group;
and adjusting parameters of the image detection model based on the target area until a preset convergence condition is met, and obtaining the trained image detection model.
2. The method for training an image detection model according to claim 1, wherein the inputting the reference feature map and the detection feature map into the difference detection network to obtain a target area with a difference between the reference feature map and the detection feature map comprises:
inputting the reference feature map and the detection feature map into the difference detection network, and performing the correction operation on the feature value group based on the difference and the product between the feature values in the feature value group to obtain the feature correction value;
and determining a target area with a difference on the reference feature map and the detection feature map based on the feature correction value.
3. The method for training an image detection model according to claim 1, wherein the inputting the reference feature map and the detection feature map into the difference detection network to obtain a target area with a difference between the reference feature map and the detection feature map comprises:
inputting the reference feature map and the detection feature map into the difference detection network, and determining a detection area with a difference on the reference feature map and the detection feature map based on a difference value between feature values in the feature value group;
obtaining a reference feature subgraph from the reference feature graph based on the detection area, obtaining a detection feature subgraph from the detection feature graph, and performing correction operation on feature values in the reference feature subgraph and the detection feature subgraph based on feature values at the same positions on the reference feature subgraph and the detection feature subgraph to obtain a feature correction value;
and determining a target area with a difference on the reference feature map and the detection feature map based on the feature correction value.
4. The method according to claim 1, wherein the difference detection network includes a feature matching module and a feature comparison module, and the inputting the reference feature map and the detection feature map into the difference detection network to obtain the target region with a difference between the reference feature map and the detection feature map includes:
inputting the reference feature map and the detection feature map into the feature matching module to obtain a matching feature map; wherein the feature modification value on the matching feature map is determined based on a difference and a product between feature values in the set of feature values;
determining a detection area with difference on the reference feature map and the detection feature map based on the feature correction value on the matching feature map;
obtaining a reference characteristic subgraph from the reference characteristic graph based on the detection region, obtaining a detection characteristic subgraph from the detection characteristic graph, and inputting the reference characteristic subgraph and the detection characteristic subgraph to the characteristic comparison module to obtain a target region; wherein the target region is determined based on feature values on the reference feature subgraph and the detection feature subgraph.
5. The image detection model training method according to claim 4, wherein the inputting the reference feature map and the detection feature map into the feature matching module to obtain a matching feature map comprises:
inputting the reference feature map and the detection feature map into the feature matching module, obtaining a difference feature map based on absolute differences between feature values in the feature value set, and obtaining a correlation feature map based on a product between the feature values in the feature value set;
and obtaining the matching feature map based on the difference feature map and the associated feature map.
6. The method for training the image detection model according to claim 5, wherein the obtaining the matching feature map based on the difference feature map and the associated feature map comprises:
splicing the difference characteristic diagram and the associated characteristic diagram to obtain a spliced characteristic diagram;
reducing the dimension of the spliced feature map to obtain the matched feature map; wherein the matching feature map has the same dimension as the reference feature map and the detection feature map.
7. The method according to claim 4, wherein the difference detection network further includes a regression classification module, and the determining the detection region where the difference exists between the reference feature map and the detection feature map based on the feature correction value on the matching feature map includes:
inputting the matching feature map into the regression classification module, determining at least one initial region with difference between the reference feature map and the detection feature map based on the feature correction value on the matching feature map, and outputting a confidence degree corresponding to each initial region;
and filtering all the initial regions by using the confidence degrees to obtain the detection regions.
8. The method for training the image detection model according to claim 7, wherein the reference map and the detection map correspond to an identification region having a difference, and the adjusting the parameters of the image detection model based on the target region includes:
determining confidence loss and position prediction loss of the regression classification module and similarity loss of the feature comparison module based on the target region and the identification region;
carrying out weighted summation on the confidence coefficient loss, the position prediction loss and the similarity loss to obtain the detection loss of the difference detection network;
determining a total loss of the image detection model based on the detection loss, and adjusting parameters of the image detection model based on the total loss.
9. The method for training an image detection model according to claim 8, wherein the determining confidence loss and position prediction loss of the regression classification module and similarity loss of the feature comparison module based on the target region and the identification region comprises:
determining confidence coefficient loss of the regression classification module based on the confidence coefficient corresponding to each initial region and the identification region by using a first loss function;
determining a position prediction loss of the regression classification module based on each of the target region and the identification region using a second loss function;
determining similarity loss of the feature comparison module based on a difference value between feature values of the reference feature subgraph and the detection feature subgraph by using a third loss function;
wherein the first loss function, the second loss function, and the third loss function are distinct from one another.
10. The method for training an image detection model according to claim 4, wherein the obtaining a reference feature sub-graph from the reference feature graph based on the detection region, obtaining a detection feature sub-graph from the detection feature graph, and inputting the reference feature sub-graph and the detection feature sub-graph to the feature comparison module to obtain a target region comprises:
extracting a region of interest from the reference feature map based on the detection region to obtain the reference feature subgraph, and extracting a region of interest from the detection feature map based on the detection region to obtain the detection feature subgraph;
inputting the reference feature subgraph and the detection feature subgraph into the feature comparison module to obtain the similarity between the reference feature subgraph and the detection feature subgraph;
and determining the probability of a difference between the reference feature subgraph and the detection feature subgraph based on the similarity, and filtering the detection region by using the probability to obtain the target region.
11. The image detection model training method according to claim 1, wherein the feature extraction network comprises a first convolution module and a second convolution module having the same structure, and the parameters of the first convolution module and the second convolution module remain consistent throughout parameter adjustment;
inputting the reference graph and the detection graph into the feature extraction network to obtain a reference feature graph and a detection feature graph, wherein the method comprises the following steps:
inputting the reference graph in the training sample pair to the first convolution module to obtain the reference feature graph, and inputting the detection graph in the training sample pair to the second convolution module to obtain the detection feature graph; and the reference graph and the detection graph corresponding to the same application scene form the training sample pair.
12. A method of discrepancy detection, the method comprising:
acquiring an image group to be detected, wherein the image group to be detected comprises a reference image and an image to be detected;
inputting the image group to be detected into an image detection model to obtain a target area with difference; wherein the image detection model is obtained after training based on the method of any one of claims 1-11.
13. An electronic device, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor calls to perform the method of any of claims 1-11 or 12.
14. A computer-readable storage medium, on which program data are stored, which program data, when being executed by a processor, carry out the method of any one of claims 1-11 or 12.