CN113850773A — Detection method, device, equipment and computer-readable storage medium

Info

Publication number
CN113850773A
Authority
CN
China
Prior art keywords
detection
image
trained
result
detection model
Prior art date
Legal status
Pending
Application number
CN202111101814.0A
Other languages
Chinese (zh)
Inventor
徐霄 (Xu Xiao)
王建勋 (Wang Jianxun)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority claimed from CN202111101814.0A
Publication of CN113850773A
Legal status: Pending

Classifications

    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/25 — Pattern recognition; fusion techniques
    • G06N 3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 — Neural networks; learning methods
    • G06T 3/4038 — Scaling the whole image or part thereof for image mosaicing
    • G06T 2207/20081 — Special algorithmic details; training, learning
    • G06T 2207/20084 — Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]
    • G06T 2207/30108 — Industrial image inspection
    • G06T 2207/30164 — Workpiece; machine component

Abstract

The application discloses a detection method, a detection device, equipment and a computer-readable storage medium. The detection method includes: acquiring an image to be detected, a trained first detection model and a trained second detection model, where the first detection model performs overall detection on the image to be detected and the second detection model performs local detection on it; detecting the image to be detected with the trained first detection model to obtain a first detection result; determining a target area in the image to be detected and detecting that area with the trained second detection model to obtain a second detection result; and fusing the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.

Description

Detection method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of information processing technology, and relates to, but is not limited to, a detection method, apparatus, device, and computer-readable storage medium.
Background
With the continuous development of science and technology, high-speed rail offers high passenger capacity, short travel times, good safety, high punctuality, energy savings and environmental friendliness. It has accordingly developed rapidly, greatly promoting economic development and advancing scientific research.
However, high-speed rail involves high costs, demanding technical requirements and strict construction standards, which make management and maintenance complex and difficult. This is especially true for the high-speed rail overhead contact system (catenary), a dedicated power transmission line erected overhead along the railway line to supply power to electric locomotives.
In practice, monitoring the state of a high-speed rail catenary requires high-precision imaging inspection of system components. Detection is usually performed manually or with a single whole-image detection method, and the volume of data collected in one pass of an inspection vehicle is large, so analysis takes a long time and real-time performance is poor. Moreover, images are not captured in a controlled environment such as a laboratory or production line: they are often shot at night, and shooting distance, angle and occlusions make image quality hard to guarantee, increasing the difficulty of defect analysis. As a result, analyzing the pictures from one inspection pass often takes several months, with unstable analysis quality and low analysis efficiency.
Disclosure of Invention
In view of this, embodiments of the present application provide a detection method, an apparatus, a device, and a computer-readable storage medium.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a detection method, which comprises the following steps:
acquiring an image to be detected, a trained first detection model and a trained second detection model; the first detection model is used for carrying out overall detection on the image to be detected; the second detection model is used for carrying out local detection on the image to be detected;
detecting the image to be detected by using the trained first detection model to obtain a first detection result;
determining a target area in the image to be detected, and detecting the target area by using the trained second detection model to obtain a second detection result;
and carrying out fusion processing on the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.
An embodiment of the present application provides a detection apparatus, including:
the acquisition module is used for acquiring an image to be detected, a trained first detection model and a trained second detection model; the first detection model is used for carrying out overall detection on the image to be detected; the second detection model is used for carrying out local detection on the image to be detected;
the first detection module is used for detecting the image to be detected by utilizing the trained first detection model to obtain a first detection result;
the second detection module is used for determining a target area in the image to be detected and detecting the target area by using the trained second detection model to obtain a second detection result;
and the fusion module is used for carrying out fusion processing on the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.
An embodiment of the present application provides a detection device, the device at least including:
a processor; and
a memory for storing a computer program operable on the processor;
wherein the computer program, when executed by the processor, implements the above detection method.
An embodiment of the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and the computer-executable instructions are configured to execute the detection method.
Embodiments of the present application provide a detection method, apparatus, device and computer-readable storage medium. In the detection method, an image to be detected, a trained first detection model and a trained second detection model are first acquired; the first detection model performs overall, global detection on the image to be detected, while the second detection model performs partial, local detection. The image to be detected is then detected with the trained first detection model, yielding a first detection result representing the overall detection result. A target area of the image to be detected is also determined and detected with the trained second detection model, yielding a second detection result representing the local detection result for that area. Finally, the first and second detection results are fused into a target detection result that covers both the whole image and its key regions. Using two detection models, the image can be detected both globally and locally, making detection comprehensive and improving accuracy. In addition, compared with a single overall detection model, each of the two models is simpler, which streamlines the detection process and improves real-time performance.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1 is a schematic flow chart of an implementation of a detection method provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an implementation of the rapid detection method provided in the embodiment of the present application;
fig. 3 is a schematic flow chart of an implementation of the hierarchical warning method according to the embodiment of the present application;
FIG. 4 is a schematic diagram of an implementation process of training a detection model according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart of an implementation of determining a target detection result according to an embodiment of the present application;
fig. 6 is a schematic flow chart illustrating an implementation of classifying and displaying target detection results according to an embodiment of the present application;
fig. 7 is a schematic diagram of a flow chart of an implementation of a detection method according to an embodiment of the present application;
FIG. 8 is a schematic diagram showing locations of different types of defects provided by an embodiment of the present application;
fig. 9 is a schematic flow chart of another implementation of displaying target detection results in a classified manner according to an embodiment of the present application;
FIG. 10A is a schematic diagram of an image to be detected according to an embodiment of the present disclosure;
fig. 10B is a schematic diagram of an image to be detected after being brightened according to an embodiment of the present application;
FIG. 10C is another schematic diagram of an image to be detected after being brightened according to an embodiment of the present disclosure;
fig. 11 is a block diagram schematically illustrating a module composition in the hierarchical warning provided in the embodiment of the present application.
Fig. 12 is a schematic structural diagram of a detecting device provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of another component of the detection apparatus according to the embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order. Where permissible, a specific order or sequence may be interchanged so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Based on the problems in the related art, the embodiments of the present application provide a detection method applicable to a detection device. The method can be implemented by a computer program which, when executed, completes each step of the detection method provided herein. In some embodiments, the computer program may be executed by a processor in the detection device. Fig. 1 is a schematic flow chart of an implementation of the detection method provided in an embodiment of the present application; as shown in fig. 1, the detection method includes:
step S101, obtaining an image to be detected, a trained first detection model and a trained second detection model.
Here, the image to be detected may be an image of an object to be detected acquired by an image acquisition device. In the embodiments of the present application it may be an image of a high-speed rail catenary, but it may also be an image of a machine tool, manufacturing equipment or the like; by detecting and analyzing the image to be detected, it can be determined whether the object has defects.
In the embodiments of the present application, the first detection model performs global detection on the image to be detected, and the second detection model performs local detection. That is, detection with the first detection model targets the image as a whole, while detection with the second detection model targets key areas of the image; taking a catenary as an example, the key areas may be the image regions corresponding to parts such as the chicken-heart ring, the dropper clamps and the current-carrying rings. The trained first detection model is obtained by training a preset first detection model on samples, and the trained second detection model is obtained likewise from a preset second detection model.
In practice, the trained first detection model may be an artificial intelligence algorithm model such as a trained neural network model, a trained bayesian network model, a trained genetic algorithm model, and the like, and the trained first detection model may determine a corresponding first detection result based on the image to be detected, so as to achieve the purpose of automatic and intelligent global detection. The trained second detection model can also be an artificial intelligent algorithm model such as a trained neural network model, a trained Bayesian network model and a trained genetic algorithm model, and can determine a corresponding second detection result based on a target area of an image to be detected, so that the aim of automatic and intelligent area detection is fulfilled.
And S102, detecting the image to be detected by using the trained first detection model to obtain a first detection result.
Here, the image to be detected is input to the trained first detection model, global detection is performed on the image to be detected, and therefore a first detection result is output. Exemplarily, taking the trained first detection model as a neural network model as an example, the image to be detected may be processed layer by layer through an input layer neuron, a hidden layer neuron and an output layer neuron, and finally the first detection result may be output through an output layer.
For example, if the catenary carries a large foreign object such as a bird nest, or a conduit is broken, then inputting the catenary image into the trained first detection model allows the model to output a result indicating such defects.
In some embodiments, the defect type of a large foreign object such as a bird nest is an unknown defect type, while a broken conduit is a known defect type. Accordingly, the trained first detection model may include a trained first sub-detection model and a trained second sub-detection model, where the trained first sub-detection model detects defects of unknown type and produces a detection result for them, and the trained second sub-detection model detects defects of known type and produces a detection result for them. The detection result for unknown-type defects and the detection result for known-type defects are together determined as the first detection result.
And S103, determining a target area in the image to be detected, and detecting the target area by using the trained second detection model to obtain a second detection result.
Here, the target region of the image to be detected may correspond to a key region or a region needing attention in the object to be detected; taking a catenary as an example, the target region may be the image region of a chicken-heart ring, a dropper clamp, a current-carrying ring, or the like.
In some embodiments, the target region may be determined from the image to be detected by a feature extraction and comparison method. In practical implementation, the image to be detected may be segmented according to a set shape to obtain each segmented sub-image to be detected, where the set shape may be a circle, a rectangle, an ellipse, or the like; then, extracting the characteristics of each sub-image to be detected to obtain each characteristic vector; and then, determining difference information between each feature vector and the reference feature vector, and if target difference information with the difference information smaller than a difference threshold exists, determining the area of the sub-image to be detected corresponding to the target difference information as a target area.
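For illustration only, the following Python sketch shows one way the tiling-and-comparison step described above could be realized. All names, the tile size, the embedding and the difference threshold are hypothetical assumptions, not taken from the patent, which does not prescribe a concrete feature extractor.

```python
import numpy as np

def embed_tile(tile):
    # Placeholder embedding (assumption): a normalized 32-bin intensity
    # histogram stands in for any learned feature extractor.
    hist, _ = np.histogram(tile, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def find_target_regions(image, ref_vectors, tile=128, diff_threshold=0.35):
    """Segment the image into tiles, extract a feature vector per tile,
    and keep tiles whose distance to any reference (key-part) vector
    falls below the difference threshold."""
    h, w = image.shape[:2]
    regions = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            vec = embed_tile(image[y:y + tile, x:x + tile])
            # difference information between the tile and each reference
            if min(np.linalg.norm(vec - r) for r in ref_vectors) < diff_threshold:
                regions.append((x, y, tile, tile))  # candidate target area
    return regions
```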
Based on the above, the related information of the target area is input into the trained second detection model, so as to obtain a second detection result representing the local defect condition of the image to be detected.
And step S104, carrying out fusion processing on the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.
In the embodiment of the application, the first detection result and the second detection result may be spliced to obtain a spliced detection result, and then the spliced detection result is subjected to redundancy elimination, which is equivalent to eliminating repeated detection results to obtain a redundancy elimination detection result; and finally, determining the redundancy-removed detection result as a target detection result corresponding to the image to be detected.
In the embodiments of the present application, an image to be detected, a trained first detection model and a trained second detection model are acquired, where the first detection model performs overall, global detection on the image and the second detection model performs partial, local detection. The image is detected with the trained first detection model to obtain a first detection result representing the overall detection result; a target area of the image is determined and detected with the trained second detection model to obtain a second detection result representing the local detection result for that area; finally, the two results are fused into a target detection result covering both the whole image and its key regions. Detecting the image both globally and locally through two models makes detection comprehensive and improves accuracy. Moreover, compared with a single overall detection model, each of the two models is simpler, which streamlines the detection process and improves real-time performance.
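The overall two-model flow of steps S101 to S104 can be summarized as a minimal sketch, assuming each detector is a callable returning a list of (box, label, score) detections; the patent does not fix these interfaces.

```python
def detect(image, model_global, model_local, find_rois, fuse):
    """Two-model pipeline: whole-image detection, per-ROI local
    detection, then fusion of both result lists into the target
    detection result."""
    first = model_global(image)                 # first detection result
    second = []
    for roi in find_rois(image):                # target areas (key parts)
        second.extend(model_local(image, roi))  # second detection result
    return fuse(first, second)                  # target detection result
```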
In some embodiments, to reduce the serious consequences of defects such as conduit fracture or breakage, a lightweight detection model is used to quickly identify whether such defects exist in the image to be detected; if they do, an alarm message is issued immediately. As shown in fig. 2, the detection method further includes steps S201 to S205:
step S201, a trained third detection model is obtained.
Here, the third detection model is used to rapidly detect the image to be detected; it may be a lightweight neural network model, whose simple structure allows a detection result to be obtained quickly.
In the embodiment of the present application, the third detection model is used for detecting defects with larger influence and higher defect levels.
And S202, detecting the image to be detected by using the trained third detection model to obtain a third detection result.
Taking the trained third detection model as a lightweight neural network model as an example, the image to be detected is input into the model and processed layer by layer through input-layer, hidden-layer and output-layer neurons; the output layer finally produces a third detection result, which indicates whether the image contains high-impact, high-grade defects.
Step S203, determining whether a first defect result matching the target level defect exists in the third detection result.
Here, a target grade defect refers to a high-grade defect, such as a fractured or broken conduit, which can cause serious consequences such as sudden speed changes or loss of control of a high-speed train.
If a first defect result matching a target grade defect exists in the third detection result, the result indicates a high-grade defect requiring immediate attention, and step S204 is executed. If no such result exists, the third detection result contains no high-grade defect, no immediate attention is needed, and the process returns to step S202 to continue detecting images.
Step S204, generating a first warning message based on the first defect result.
Here, the form of the first warning message may be at least one of a character form, a voice form, or a video form, and the form of the first warning message is not limited in the embodiment of the present application.
For example, taking the first defect result as a broken conduit, the generated first warning message may be: "Picture X contains a broken conduit; the risk is high; please arrange maintenance as soon as possible."
And step S205, outputting a first warning message.
Here, the first warning message may be output in the form of a character through a pop-up box, may be output in the form of voice through a sound output device, and may be output in the form of light through a light emitting device.
In some embodiments, the detection device may further establish a communication connection with the terminal through a communication link, and based on this, the first warning message may further be output through an output device of the terminal, so as to achieve the purpose of warning.
It should be noted that steps S201 to S205 may be executed synchronously with steps S101 to S104, and there is no precedence order, that is, the image to be detected is detected globally and locally, and the image to be detected is also detected rapidly.
Through steps S201 to S205, a trained third detection model — a lightweight model capable of rapid detection — is first acquired and used to detect the image to be detected, producing a third detection result. If a first defect result matching a target grade defect is found, indicating a high-grade defect, an alarm message is generated from it and issued. Target grade defects are thus detected quickly and reported in time, improving detection efficiency and reducing loss of life and property.
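A minimal sketch of this fast screening pass (steps S201 to S205) follows; the label names in HIGH_RISK and the detection-dict format are assumptions for illustration.

```python
HIGH_RISK = {"conduit fracture", "conduit breakage"}  # assumed label names

def fast_screen(image, model_fast, alarm):
    """Run the lightweight third detection model and raise a first
    warning message as soon as any detection matches a target grade
    (high-grade) defect."""
    for det in model_fast(image):          # third detection result
        if det["label"] in HIGH_RISK:      # first defect result
            alarm(f"High-risk defect '{det['label']}' detected "
                  f"(score {det['score']:.2f}); arrange maintenance now.")
```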
In some embodiments, after obtaining the target detection result, the method may further perform a warning in a hierarchical manner, and referring to fig. 3, after step S104, the detection method further includes steps S105 to S109:
and S105, judging whether a second defect result matched with the target grade defect exists in the target detection result.
Here, the target grade defect is still a high-grade defect. Because its impact is large, the target detection result is also checked for a second defect result matching the target grade. If such a result exists, the target detection result contains a high-grade defect requiring immediate attention, and step S106 is executed; if not, no immediate attention is needed, and the process proceeds to step S108 to determine whether the target detection result contains a third defect result other than target grade defects.
In this embodiment, the second defect result differs from the first defect result. That is, if a high-grade first defect has already been detected by the third detection model, then after high-grade defects are determined from the target detection result, a comparison checks whether they include the first defect. If they do not, the high-grade defects in the target detection result are determined as the second defect; if they do, the first defect is removed to avoid duplicate alarms for the same defect, and the remaining defects are determined as the second defect.
And step S106, generating second alarm information based on the second defect result.
At this point it has been determined that the target detection result contains a second defect result matching a target grade defect. In practice, step S106 is implemented similarly to step S204, which may be referred to.
And step S107, outputting a second alarm message.
In actual implementation, the implementation of step S107 is similar to the implementation of step S205, and therefore, the implementation of step S107 can refer to the implementation of step S205.
And step S108, judging whether a third defect result except the target grade defect exists in the target detection result.
At this time, it is determined that there is no second defect result matching the target level defect in the target detection result, and then, it is continuously determined whether there is a third defect result other than the target level defect in the target detection result.
In the embodiments of the present application, an object to be detected can have many kinds of defects, some with serious consequences and some with slight ones, so defects are classified by severity, for example into high-grade, medium-grade and low-grade defects. As described in the above embodiments, high-grade defects may be determined as target grade defects. Taking these three grades as an example, step S108 continues by determining whether the target detection result contains medium-grade or low-grade defects; if it does, step S109 is executed, and if not, the process returns to step S108.
Step S109, determining a defect level corresponding to the third defect result.
Continuing the example, medium-grade defects may include bulging, bending and loss of stress, while low-grade defects may include contamination and electrical connection clamp failures. Based on defect matching, each defect in the third defect result is matched against the reference defects of the medium and low grades; if a match succeeds, the grade of the matched reference defect is determined as the grade of the third defect result.
And step S110, storing the image to be detected, the third defect result and the defect grade into a defect result database.
The image to be detected, the third defect result, the defect grade and the correspondence among them are stored in the defect result database without immediate alarm, which facilitates later review and allows alarms to be raised in order of defect grade from high to low.
Through steps S105 to S110, the target detection result is checked for a second defect result matching a target grade defect. If one is found, the target detection result contains a high-grade defect requiring immediate attention, so a second alarm message is generated from the second defect result and output. If not, the target detection result is checked for a third defect result other than target grade defects; if one exists, its defect grade is determined, and the image to be detected, the third defect result and the defect grade are stored in a database for subsequent review and for alarms after detection is complete.
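Steps S105 to S110 might be sketched as below; the grade table and result format are illustrative assumptions, and `alarm` / `database` stand for any messaging and storage backends.

```python
LEVELS = {  # assumed mapping from defect label to grade
    "conduit fracture": "high", "conduit breakage": "high",
    "bulge": "medium", "bend": "medium", "no stress": "medium",
    "contamination": "low", "electrical clamp failure": "low",
}

def grade_and_dispatch(target_results, first_defects, alarm, database):
    """Alarm on high-grade defects not already reported by the fast
    pass (avoiding duplicate alarms); store lower-grade defects with
    their matched grade for later review."""
    for det in target_results:
        grade = LEVELS.get(det["label"])
        if grade == "high":
            if det not in first_defects:        # second defect result
                alarm(f"High-grade defect: {det['label']}")
        elif grade is not None:                 # third defect result
            database.append({**det, "grade": grade})
```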
In practice, defects can be divided by type into unknown-type defects and known-type defects. To improve detection efficiency, the different types are detected by different models: the trained first detection model may include a trained first sub-detection model for detecting unknown-type defects and a trained second sub-detection model for detecting known-type defects. Using dedicated models for the different defect types improves detection efficiency and real-time performance.
In the embodiment of the present application, step S102 may be implemented by the following steps:
Step one, detecting the image to be detected by using the trained first sub-detection model to obtain a fourth detection result, and detecting the image to be detected by using the trained second sub-detection model to obtain a fifth detection result.
In this step, the image to be detected is input into the trained first sub-detection model and the trained second sub-detection model respectively. The trained first sub-detection model detects unknown-type defects, yielding a fourth detection result indicating whether the image contains such defects; the trained second sub-detection model detects known-type defects, yielding a fifth detection result indicating whether the image contains known-type defects.
And step two, determining the fourth detection result and the fifth detection result as the first detection result.
In this step, the fourth detection result and the fifth detection result are spliced to obtain the first detection result; alternatively, the union of the fourth and fifth detection results is determined and taken as the first detection result.
In some embodiments, known-type defects are generally high-grade defects; in that case the target grade defects are high-grade defects such as conduit fracture and breakage, which are both known-type and target grade defects. The trained third detection model and the trained second sub-detection model in the above embodiments may then be the same model, i.e. one model that both rapidly detects target grade defects and detects known-type defects, which extends the model's utility, reduces the number of detection models and improves detection timeliness.
In the embodiments of the present application, through the above two steps, the global detection of the image to be detected is divided by defect type into two detections: the trained first sub-detection model detects unknown-type defects, and the trained second sub-detection model detects known-type defects. Using two sub-detection models improves detection efficiency, accuracy and comprehensiveness.
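As a sketch under the same assumed interfaces, the composite first detection model is simply the union of its two sub-detectors' outputs:

```python
def global_detect(image, anomaly_model, supervised_model):
    """First detection model as two sub-detectors: an anomaly detector
    (trained only on normal samples) flags unknown-type defects, and a
    supervised detector (trained on labeled defects) finds known-type
    defects; their union is the first detection result."""
    fourth = anomaly_model(image)      # unknown-type defects
    fifth = supervised_model(image)    # known-type defects
    return fourth + fifth              # spliced first detection result
```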
In some embodiments, each trained detection model is obtained by training the corresponding preset detection model. In addition, during detection, if a detection result does not match the actual detection result, the trained detection models may be iteratively updated based on the actual result to obtain updated models. As shown in fig. 4, the detection method further includes:
step S401, respectively obtaining a preset first sub-detection model, a preset second detection model, a preset third detection model, a positive sample image, and a negative sample image.
Here, the preset first sub-detection model, the preset second detection model, the preset third detection model, the positive sample image, and the negative sample image may be acquired from a dedicated server or a general-purpose server. The preset first sub-detection model, the preset second detection model and the preset third detection model may be artificial intelligence algorithm models such as a neural network model, a bayesian network model and a genetic algorithm model, and the different detection models may be the same type of algorithm model or different types of algorithm models, which is not limited in the embodiments of the present application. The positive sample image represents an image without defects, namely the positive sample image is a normal image; the negative sample image represents an image with defects, that is, the negative sample image is an abnormal sample image.
Step S402, training a preset first sub-detection model at least based on the positive sample image to obtain a trained first sub-detection model.
Here, the preset first sub-detection model is used to detect unknown-type defects, for example to judge whether an image contains a bird nest, large foreign matter or another unknown-type defect. Since such defects are of unknown type, what they are cannot be specified in advance; therefore the preset first sub-detection model may be trained on positive sample images to obtain the trained first sub-detection model, which judges whether an image to be detected is defect-free. If it is, the image contains no unknown-type defect; if it is not, the image contains an unknown-type defect.
Step S403, respectively training a preset second sub-detection model, a preset second detection model, and a preset third detection model based on at least the negative sample image, to obtain a trained second sub-detection model, a trained second detection model, and a trained third detection model.
Here, the preset second sub-detection model is used to detect known-type defects, such as conduit fracture or breakage; the preset second detection model is used to detect local defects, such as faults of the chicken-heart ring, the dropper clamps or the current-carrying rings; and the preset third detection model is used to rapidly detect whether the image to be detected contains a target grade defect, such as conduit fracture or breakage.
Based on this, the preset second sub-detection model may be trained on negative samples containing known-type defects, or on both positive samples and such negative samples, to obtain the trained second sub-detection model for detecting whether known-type defects exist in the image. Similarly, the preset second detection model may be trained on negative samples containing local defects (optionally together with positive samples) to detect local defects, and the preset third detection model may be trained on negative samples containing target grade defects (optionally together with positive samples) to detect target grade defects.
Step S404, acquiring an actual detection result aiming at the image to be detected.
Here, the actual detection result refers to the actual, field-confirmed condition of the image to be detected, and can be obtained through an input operation.
Step S405, it is determined whether the target detection result is consistent with the actual detection result.
Here, if the target detection result is consistent with the actual detection result, the detection results from the trained models are accurate and the models were chosen appropriately, and the process returns to step S404. If they are inconsistent, the detection is inaccurate, the trained models are not yet suitable, and training must continue to obtain suitable models; the process proceeds to step S406.
And S406, continuing to train the trained first detection model, the trained second detection model and the trained third detection model based on the image to be detected and the actual detection result to obtain the updated first detection model, the updated second detection model and the updated third detection model.
At this point the target detection result is inconsistent with the actual detection result, so the trained detection models — the first, second and third detection models — are further trained on the image to be detected and its actual detection result. This optimizes the models, yields updated models, and resolves the inaccuracy, so that accurate detection results can be obtained and detection accuracy improved.
Step S407, performing defect detection using the updated first detection model, the updated second detection model, and the updated third detection model.
Here, the updated first detection model, the updated second detection model, and the updated third detection model are obtained in step S406, and the updated models can obtain detection results with better accuracy, so that defect detection is performed using the updated detection models in subsequent detection.
Through steps S401 to S407, the preset detection models are first acquired and trained on samples to obtain the trained models used for defect detection. During detection, if the actual detection result of an image is inconsistent with the target detection result, the trained models are further trained on the image and its actual result to obtain updated models, which produce more accurate detections and eliminate the inconsistency between actual and target detection results.
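The closed-loop update of steps S404 to S407 could look like the following sketch, where `retrain` is any fine-tuning routine — the patent does not fix its details.

```python
def verify_and_update(image, target_result, actual_result, models, retrain):
    """When the field-confirmed result disagrees with the model output,
    fold the image and its ground truth back into training and return
    the updated models; otherwise keep the current ones."""
    if target_result == actual_result:
        return models                              # detection was accurate
    return {name: retrain(model, image, actual_result)
            for name, model in models.items()}     # updated detection models
```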
In some embodiments, as shown in fig. 5, the step S104 "performing fusion processing on the first detection result and the second detection result to obtain the target detection result corresponding to the image to be detected" may be implemented by the following steps S1041 to S1043:
and S1041, splicing the first detection result and the second detection result to obtain a spliced detection result.
Here, taking the union of the first detection result and the second detection result may serve as the stitching process, the resulting union being the stitched detection result.
And step S1042, performing redundancy removal processing on the spliced detection result to obtain a redundancy removal detection result.
Here, the trained first detection model and the trained second detection model may detect the same defect, and therefore, a duplicate detection result may occur, and in order to avoid the duplicate detection result, the spliced detection result is subjected to a de-redundancy process to obtain a de-redundancy detection result, so that the de-redundancy detection result does not have the duplicate detection result.
And step S1043, determining the redundancy-removed detection result as a target detection result.
Here, the redundancy-removed detection result can be directly used as the target detection result corresponding to the image to be detected.
Through steps S1041 to S1043, the first detection result from overall detection and the second detection result from local detection are stitched and then de-redundated, yielding the target detection result of the image to be detected and improving its completeness and concision.
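One concrete way to realize steps S1041 to S1043 is an NMS-style rule, shown below as a hedged sketch; the patent only requires that duplicate detections be removed, not this particular criterion.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def fuse(first, second, iou_thresh=0.5):
    """Splice both result lists, then drop a detection as redundant when
    a same-label, higher-score box already overlaps it beyond the IoU
    threshold; what remains is the target detection result."""
    merged = sorted(first + second, key=lambda d: -d["score"])
    kept = []
    for det in merged:
        if all(det["label"] != k["label"] or iou(det["box"], k["box"]) < iou_thresh
               for k in kept):
            kept.append(det)
    return kept
```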
In some embodiments, to facilitate the overall recognition of the detection result, as shown in fig. 6, the detection method further includes:
step S601, a detection result set in the target time length is obtained.
Here, the target duration may be 1 day, 2 days, 5 days, etc., and the set of detection results within it may be read from the detection device via a read instruction.
Step S602, determining a defect type corresponding to each detection result in the detection result set.
Here, each reference defect matching each detection result is determined, and a reference defect type corresponding to each reference defect is determined as a defect type corresponding to each detection result.
Step S603, determining the statistical count corresponding to each defect type based on the defect type corresponding to each detection result.
Here, the statistical count of each defect type may be obtained by counting.
Step S604, displaying each defect type and the statistical count corresponding to it.
Here, each defect type and its statistical count can be displayed in the form of a bar chart, pie chart, line chart or the like, so that the detection results are displayed grouped by defect type.
In the embodiments of the present application, through steps S601 to S604, the detection results within the target duration are first acquired and the defect type of each is determined; the statistical count of each defect type is then determined; finally, each defect type and its count are displayed visually, so that the detection results are displayed grouped by defect type.
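The grouped statistics of steps S601 to S604 reduce to a counting pass; the sketch below uses a text bar chart as a stand-in for the bar/pie/line chart display, with an assumed result format.

```python
from collections import Counter

def defect_statistics(results_in_window):
    """Count detections per defect type within the target duration and
    print a simple text bar chart grouped by type."""
    counts = Counter(det["label"] for det in results_in_window)
    for label, n in counts.most_common():
        print(f"{label:<32} {'#' * min(n, 40)} ({n})")
```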
Based on the foregoing embodiments, the present application further provides a detection method applied to detecting a high-speed rail catenary from the full view down to local parts. Fig. 7 is a schematic flow chart of another implementation of the detection method provided in the embodiments of the present application; referring to fig. 7, the detection includes the following steps one to five:
step one, detecting unknown abnormality of the whole image
Firstly, whole-image anomaly detection is performed with an anomaly detection algorithm. Referring to fig. 8, this amounts to using a detection model to detect, over the whole image, defects such as defect 81 and to judge whether very obvious unknown anomalies exist, such as bird nests, large foreign matter or large-area damage; unknown anomalies are characterized by being very obvious or covering a large area. The anomaly detection algorithm here corresponds to the trained first sub-detection model in the above embodiments.
Secondly, the anomaly detection method employed may be trained on normal samples, which correspond to the positive sample images in the above embodiments.
And step two, detecting the known defects of the whole image.
Firstly, a target detection algorithm performs whole-image target detection to judge, referring to fig. 8, whether specified obvious defects 82 exist. These defects are known, occur frequently, have obvious characteristics and high grades, and require key attention. From high to low, the grades of known defects include: fracture and breakage of various conduits; bulging, bending and loss of stress; contamination; and so on.
Secondly, the detection algorithm employed may be trained on defect-labeled samples, which correspond to the negative sample images in the above embodiments.
And step three, detecting the defects of the key parts.
First, the Region Of Interest (ROI) of each part is located, and anomaly detection is then performed on the ROI. This step focuses locally on defects that cannot be detected in step two: step two targets the higher-grade defects, while the defects here are of relatively lower grade.
Referring to fig. 8, the key component regions 83 include: assembly areas whose boundaries are not sharply defined, such as the areas of the chicken-heart ring, the dropper clamps, the current-carrying rings, and the nuts and crimping tubes at the upper and lower ends of a dropper; small component areas, such as the various wire-clamp areas (center anchor clamps, electrical connection clamps, etc.); and other areas of the dropper system.
Secondly, the techniques adopted are: first, accurately locate the component and output an ROI; then run the anomaly detection algorithm on that ROI.
And step four, outputting the final result.
Here, the results of steps one to three are integrated — i.e. fused and de-redundated — and the final result is output.
And step five, self-adaptive interactive display.
Here, as shown in fig. 9, detection results are displayed grouped by defect type, such as fracture, bulging and breakage, and the web page groups them accordingly. In addition, some pictures are dark; as shown in figs. 10A to 10C, a user can manually adjust picture brightness to see defects better, where fig. 10A is the captured picture, fig. 10B is a brightened version, and fig. 10C is brightened again on the basis of fig. 10B. Comparing figs. 10A, 10B and 10C shows that increasing image brightness reveals the photographed object more clearly, which facilitates viewing and detection.
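Gamma correction is one plausible way to realize the manual brightening shown in figs. 10A to 10C — the patent does not specify the enhancement method — as in this sketch:

```python
import numpy as np

def brighten(image, gamma=0.6):
    """Lift dark regions while preserving highlights: normalize to
    [0, 1], apply gamma < 1, and rescale to 8-bit. Repeated calls
    brighten further, as in figs. 10B and 10C."""
    norm = image.astype(np.float32) / 255.0
    return np.clip(norm ** gamma * 255.0, 0, 255).astype(np.uint8)
```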
Through steps one to five, this progressive detection scheme moves from the whole image to local parts and from large defects to small ones, so all defects can be detected comprehensively and efficiently and missed detections are reduced. Steps one to three do not interfere with one another and can run simultaneously, so processing is fast, greatly saving labor and time costs. Moreover, the anomaly detection algorithm is trained only on normal samples; since real normal samples far outnumber abnormal ones, training data are plentiful, which stabilizes the model and thus guarantees stable algorithm performance.
In some embodiments, computer vision replaces manual inspection of the high-speed rail catenary to increase detection speed and accuracy. Even so, with such large data volumes the per-cycle detection results cannot be fully real-time, so in the embodiments of the present application, as shown in fig. 11, graded early warning can be implemented with the following modules:
first, a risk database 1101 is constructed for storing various defects detected by the background real-time detection algorithm.
Next, a fast detection algorithm 1102 is used for detecting high-risk-grade defects such as fracture and breakage; the detection results are stored in the risk database 1101 and transmitted in real time to the risk processing platform 1103, which issues an early-warning notification and dispatches maintenance workers for field maintenance 1104.
Then, detailed detection of defects below the high-risk grade is performed, including full-image detection 1105 and local detailed detection 1107 based on key components 1106; a risk grade is matched to each detection result, and the result is stored in the risk database 1101. This detailed detection may use multiple detection algorithms running in series or in parallel, so it is slower and its result times may not be synchronized.
With graded early warning, high-risk defects can be repaired as soon as possible while low-risk defects are handled in batches, improving maintenance efficiency and safety. Results can be obtained simply by monitoring the risk database in real time, without constantly watching the background detection algorithms, and the contents of the risk database can be traced back.
Finally, data backtracking 1108 means that, after the whole detection period is over, a closed-loop mechanism is constructed to perform defect verification and to adjust manual maintenance tasks. Defect verification compares the actual field condition found by the maintenance worker with the detection result, and the comparison result is stored in the database for algorithm engineers to optimize the corresponding detection algorithm. Manual maintenance tasks are adjusted according to the frequency of defects: for parts with a high failure rate, whether the parts themselves are problematic is investigated, and the maintenance frequency is increased on sections with high failure rates.
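As a hedged sketch of how modules 1101 to 1104 could cooperate (SQLite stands in for the risk database; the table schema, defect names, and notification channel are all assumptions):

```python
import sqlite3

HIGH_RISK = {"fracture", "breakage"}      # assumed high-risk defect types

conn = sqlite3.connect("risk.db")         # stand-in for risk database 1101
conn.execute("""CREATE TABLE IF NOT EXISTS risks
                (image_id TEXT, defect TEXT, level TEXT, verified INTEGER DEFAULT 0)""")

def notify_maintenance(image_id, defect):
    # Placeholder for the real-time push to risk processing platform 1103
    # and the dispatch to field maintenance 1104.
    print(f"[WARNING] {defect} in {image_id}: dispatch field maintenance")

def store_and_route(image_id, defect):
    # Every defect is stored; only high-risk defects trigger a real-time alert.
    level = "high" if defect in HIGH_RISK else "low"
    conn.execute("INSERT INTO risks (image_id, defect, level) VALUES (?, ?, ?)",
                 (image_id, defect, level))
    conn.commit()
    if level == "high":
        notify_maintenance(image_id, defect)
```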
Through the hierarchical early warning of the embodiment of the present application, high-risk defects receive a quick response, guaranteeing prompt repair, and after fast detection the data is examined in detail so that remaining defects are not missed. All data and detection results are stored in a database, so the whole process can be traced back and forms a closed loop.
Based on the foregoing embodiments, the embodiments of the present application provide a detection apparatus, where each module included in the apparatus and each unit included in each module may be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in the implementation process, the processor may be a CPU, a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 12 is a schematic structural diagram of a detection apparatus provided in an embodiment of the present application, and as shown in fig. 12, the detection apparatus 1200 includes:
an obtaining module 1201, configured to obtain an image to be detected, a trained first detection model, and a trained second detection model; the first detection model is used for carrying out overall detection on the image to be detected; the second detection model is used for carrying out local detection on the image to be detected;
a first detection module 1202, configured to detect the image to be detected by using the trained first detection model to obtain a first detection result;
a second detection module 1203, configured to determine a target region in the image to be detected, and detect the target region by using the trained second detection model to obtain a second detection result;
and a fusion module 1204, configured to perform fusion processing on the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.
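Purely as an illustration of how these four modules could chain together (the model and helper callables are assumptions, and the image is taken to be a NumPy-style array):

```python
def run_detection(image, first_model, second_model, locate_region, fuse):
    # Obtaining module 1201 has already loaded the two trained models.
    # First detection module 1202: whole-image detection.
    first_result = first_model(image)
    # Second detection module 1203: crop the target region, detect locally.
    x1, y1, x2, y2 = locate_region(image)
    second_result = second_model(image[y1:y2, x1:x2])
    # Fusion module 1204: combine both result sets; the offset lets the
    # fusion step map local boxes back into full-image coordinates.
    return fuse(first_result, second_result, offset=(x1, y1))
```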
In some embodiments, the obtaining module 1201 is further configured to obtain a trained third detection model, where the third detection model is used to perform fast detection on the image to be detected; the detection apparatus 1200 further includes:
the third detection module is used for detecting the image to be detected by using the trained third detection model to obtain a third detection result;
the first generation module is used for determining that a first defect result matched with the target grade defect exists in the third detection result and generating a first alarm message based on the first defect result;
and the first output module is used for outputting the first alarm message.
In some embodiments, the detection apparatus 1200 further comprises:
a second generating module, configured to determine that a second defect result matching the target level defect exists in the target detection result, and generate a second alarm message based on the second defect result, where the second defect result is different from the first defect result;
and the second output module is used for outputting the second alarm message.
In some embodiments, the detection apparatus 1200 further comprises:
the first determining module is used for determining a third defect result except the target grade defect in the target detection result and determining a defect grade corresponding to the third defect result;
and the storage module is used for storing the image to be detected, the third defect result and the defect grade into a defect result database.
In some embodiments, the trained first detection model comprises a trained first sub-detection model and a trained second sub-detection model, the trained first sub-detection model being used for detecting defects of unknown types and the trained second sub-detection model for detecting defects of known types; the first detection module 1202 includes:
the first detection submodule is used for detecting the image to be detected by utilizing the trained first sub-detection model to obtain a fourth detection result; detecting the image to be detected by using the trained second sub-detection model to obtain a fifth detection result;
a first determining submodule, configured to determine the fourth detection result and the fifth detection result as the first detection result.
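A minimal sketch of this split, assuming the first sub-detection model behaves as an anomaly scorer and the second as a supervised detector (the interfaces and threshold are assumptions):

```python
def first_detection(image, anomaly_model, known_detector, score_thresh=0.8):
    # Fourth detection result: regions whose anomaly score exceeds the
    # threshold are treated as defects of unknown type.
    fourth = [r for r in anomaly_model(image) if r["score"] > score_thresh]
    # Fifth detection result: defects of known types.
    fifth = known_detector(image)
    # The first detection result is the union of both.
    return fourth + fifth
```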
In some embodiments, the obtaining module 1201 is further configured to respectively obtain a preset first sub-detection model, a preset second sub-detection model, a preset second detection model, a preset third detection model, a positive sample image, and a negative sample image, where the positive sample image represents an image without defects and the negative sample image represents an image with defects; the detection apparatus 1200 further includes:
the first obtaining module is used for training the preset first sub-detection model at least based on the positive sample image to obtain the trained first sub-detection model;
and a second obtaining module, configured to train the preset second sub-detection model, the preset second detection model, and the preset third detection model respectively based on at least the negative sample image, so as to obtain the trained second sub-detection model, the trained second detection model, and the trained third detection model.
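One common way to train a model on positive (defect-free) samples only is reconstruction-based anomaly detection; the minimal PyTorch autoencoder below is a sketch under that assumption, not the disclosed architecture:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    # Trained on defect-free images only; at inference time, a high
    # reconstruction error suggests a defect of unknown type.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

positive_batch = torch.rand(8, 3, 64, 64)   # placeholder for defect-free images
for _ in range(10):                         # abbreviated training loop
    recon = model(positive_batch)
    loss = loss_fn(recon, positive_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The second sub-detection model, second detection model, and third detection model would instead be trained as ordinary supervised detectors on the negative (defective) sample images.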
In some embodiments, the obtaining module 1201 is further configured to obtain an actual detection result for the image to be detected; the detection apparatus 1200 further includes:
a training module, configured to continue training the trained first detection model, the trained second detection model, and the trained third detection model based on the to-be-detected image and the actual detection result if the target detection result is inconsistent with the actual detection result, so as to obtain an updated first detection model, an updated second detection model, and an updated third detection model;
and the fourth detection module is used for detecting defects by using the updated first detection model, the updated second detection model and the updated third detection model.
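The feedback loop described above reduces to a simple rule; the sketch below assumes a `train_step` helper and an exact-match comparison, both of which are simplifications:

```python
def feedback_update(models, image, target_result, actual_result, train_step):
    # If the fused prediction disagrees with the field-verified result,
    # continue training every model on the newly labelled sample.
    if target_result == actual_result:
        return models                   # nothing to learn from this sample
    return [train_step(model, image, actual_result) for model in models]
```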
In some embodiments, the fusion module 1204 comprises:
the splicing submodule is used for splicing the first detection result and the second detection result to obtain a spliced detection result;
the redundancy removing submodule is used for performing redundancy removing processing on the spliced detection result to obtain a redundancy removing detection result;
and the second determining submodule is used for determining the redundancy-removed detection result as the target detection result.
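Before redundancy can be removed, local detections presumably have to be translated into the full-image coordinate frame; the following sketch assumes the same detection-dictionary format used above:

```python
def splice_results(first_result, second_result, region_offset):
    # Shift target-region boxes by the region's top-left corner so that
    # both result sets share the full-image coordinate frame.
    ox, oy = region_offset
    shifted = []
    for det in second_result:
        x1, y1, x2, y2 = det["box"]
        shifted.append({**det, "box": (x1 + ox, y1 + oy, x2 + ox, y2 + oy)})
    return first_result + shifted
```

The redundancy-removal submodule can then apply an overlap-based pass such as the `integrate_results` sketch shown earlier.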
In some embodiments, the obtaining module 1201 is further configured to obtain a detection result set within the target duration; the detection apparatus 1200 further includes:
a second determining module, configured to determine a defect type corresponding to each detection result in the detection result set;
a third determining module, configured to determine, based on the defect type corresponding to each detection result, a statistical number corresponding to each defect type;
and the display module is used for displaying each defect type and the corresponding statistical times of each defect type.
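A minimal sketch of the per-type statistics over a target duration (the field names and the 24-hour window are assumptions):

```python
from collections import Counter
from datetime import datetime, timedelta

def defect_statistics(detections, hours=24):
    # Keep only detections inside the target duration, then count per type.
    cutoff = datetime.now() - timedelta(hours=hours)
    recent = [d for d in detections if d["time"] >= cutoff]
    counts = Counter(d["defect_type"] for d in recent)
    for defect_type, n in counts.most_common():
        print(f"{defect_type}: {n}")    # grouped display, one line per type
    return counts
```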
It should be noted that the description of the detection apparatus in the embodiment of the present application is similar to the description of the method embodiment described above, and has similar beneficial effects to the method embodiment. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the detection method described above is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product stored in a storage medium, including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the detection method provided in the above embodiments.
An embodiment of the present application provides a detection device. Fig. 13 is a schematic structural diagram of the detection device provided in the embodiment of the present application; as shown in fig. 13, the detection device 1300 includes: a processor 1301, at least one communication bus 1302, a user interface 1303, at least one external communication interface 1304, and a memory 1305. The communication bus 1302 is configured to enable connection and communication between these components. The user interface 1303 may include a display screen, and the external communication interface 1304 may include standard wired and wireless interfaces. The processor 1301 is configured to execute a program of the detection method stored in the memory, so as to implement the detection method provided in the above embodiments.
The above description of the detection device and storage medium embodiments is similar to the description of the method embodiments described above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the detection device and the storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of detection, the method comprising:
acquiring an image to be detected, a trained first detection model and a trained second detection model; the first detection model is used for carrying out overall detection on the image to be detected; the second detection model is used for carrying out local detection on the image to be detected;
detecting the image to be detected by using the trained first detection model to obtain a first detection result;
determining a target area in the image to be detected, and detecting the target area by using the trained second detection model to obtain a second detection result;
and carrying out fusion processing on the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.
2. The method as recited in claim 1, the method further comprising:
acquiring a trained third detection model, wherein the third detection model is used for rapidly detecting the image to be detected;
detecting the image to be detected by using the trained third detection model to obtain a third detection result;
determining that a first defect result matched with the target grade defect exists in the third detection result, and generating a first alarm message based on the first defect result;
and outputting the first alarm message.
3. The method as recited in claim 2, the method further comprising:
determining that a second defect result matched with the target grade defect exists in the target detection result, and generating a second alarm message based on the second defect result, wherein the second defect result is different from the first defect result;
and outputting the second alarm message.
4. The method as recited in claim 3, the method further comprising:
determining a third defect result except the target grade defect in the target detection result, and determining a defect grade corresponding to the third defect result;
and storing the image to be detected, the third defect result and the defect grade into a defect result database.
5. The method according to claim 2, wherein the trained first detection model comprises a trained first sub-detection model and a trained second sub-detection model, the trained first sub-detection model is used for detecting defects of unknown types, and the trained second sub-detection model is used for detecting defects of known types; and the detecting the image to be detected by using the trained first detection model to obtain a first detection result comprises:
detecting the image to be detected by using the trained first sub-detection model to obtain a fourth detection result; detecting the image to be detected by using the trained second sub-detection model to obtain a fifth detection result;
determining the fourth detection result and the fifth detection result as the first detection result.
6. The method as recited in claim 5, the method further comprising:
respectively obtaining a preset first sub-detection model, a preset second sub-detection model, a preset second detection model, a preset third detection model, a positive sample image and a negative sample image, wherein the positive sample image represents an image without defects, and the negative sample image represents an image with defects;
training the preset first sub-detection model at least based on the positive sample image to obtain the trained first sub-detection model;
and training the preset second sub-detection model, the preset second detection model and the preset third detection model respectively at least based on the negative sample image to obtain the trained second sub-detection model, the trained second detection model and the trained third detection model.
7. The method as recited in claim 6, the method further comprising:
acquiring an actual detection result aiming at the image to be detected;
if the target detection result is inconsistent with the actual detection result, continuing to train the trained first detection model, the trained second detection model and the trained third detection model based on the image to be detected and the actual detection result to obtain an updated first detection model, an updated second detection model and an updated third detection model;
and utilizing the updated first detection model, the updated second detection model and the updated third detection model to detect defects.
8. The method according to claim 1, wherein the fusing the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected comprises:
splicing the first detection result and the second detection result to obtain a spliced detection result;
performing redundancy removal processing on the spliced detection result to obtain a redundancy removal detection result;
and determining the redundancy-removed detection result as the target detection result.
9. The method of any of claims 1-8, further comprising:
acquiring a detection result set within a target time length;
determining the defect type corresponding to each detection result in the detection result set;
determining the statistical times corresponding to each defect type based on the defect type corresponding to each detection result;
and displaying each defect type and the corresponding statistical times of each defect type.
10. A detection device, the detection device comprising:
the acquisition module is used for acquiring an image to be detected, a trained first detection model and a trained second detection model; the first detection model is used for carrying out overall detection on the image to be detected; the second detection model is used for carrying out local detection on the image to be detected;
the first detection module is used for detecting the image to be detected by utilizing the trained first detection model to obtain a first detection result;
the second detection module is used for determining a target area in the image to be detected and detecting the target area by using the trained second detection model to obtain a second detection result;
and the fusion module is used for carrying out fusion processing on the first detection result and the second detection result to obtain a target detection result corresponding to the image to be detected.
CN202111101814.0A 2021-09-18 2021-09-18 Detection method, device, equipment and computer readable storage medium Pending CN113850773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111101814.0A CN113850773A (en) 2021-09-18 2021-09-18 Detection method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111101814.0A CN113850773A (en) 2021-09-18 2021-09-18 Detection method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113850773A true CN113850773A (en) 2021-12-28

Family

ID=78974687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111101814.0A Pending CN113850773A (en) 2021-09-18 2021-09-18 Detection method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113850773A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820543A (en) * 2022-05-07 2022-07-29 苏州方石科技有限公司 Defect detection method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination