CN113160217A - Method, device and equipment for detecting foreign matters in circuit and storage medium - Google Patents


Info

Publication number
CN113160217A
Authority
CN
China
Prior art keywords
image
line
training
segmentation
original video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110516439.XA
Other languages
Chinese (zh)
Inventor
姚秀军
桂晨光
董林
王超
唐亚哲
蔡禹丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202110516439.XA
Publication of CN113160217A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, an apparatus, a device and a storage medium for detecting foreign objects on a line. The method comprises the following steps: acquiring an original video, wherein the original video comprises at least one frame of line image obtained by shooting a traffic line; inputting the original video into a segmentation model to obtain a segmented image of each frame of line image in the original video, wherein the segmentation model is obtained by training a convolutional neural network on line training images and labels and is used to perform image segmentation on the at least one frame of line image; and determining, from the segmented image of each frame of line image, whether a foreign object exists at the line position corresponding to that line image. The method improves the accuracy of foreign object detection on traffic lines.

Description

Method, device and equipment for detecting foreign matters in circuit and storage medium
Technical Field
The present disclosure relates to deep learning technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting foreign objects on a line.
Background
The presence of foreign objects on the traffic route may cause traffic accidents, so the detection of foreign objects on the traffic route is very important for traffic safety.
At present, foreign objects on a traffic line are detected by installing cameras at the front of vehicles or at different sections of the line to capture video of the line, and then performing a difference operation on two adjacent frames of the video to determine whether a foreign object is present on the line.
However, this detection method, which performs a difference operation on video images, does not detect foreign objects accurately.
Disclosure of Invention
The application provides a method, an apparatus, a device and a storage medium for detecting foreign objects on a line, which are used to solve the problem of low accuracy in detecting foreign objects on a line.
In a first aspect, the present application provides a method for detecting a foreign object in a line, including: acquiring an original video, wherein the original video comprises at least one frame of line image obtained by shooting a traffic line; inputting the original video into a segmentation model to obtain a segmentation image of each frame of line image in the original video, wherein the segmentation model is obtained by training a convolutional neural network according to a line training image and a label and is used for performing image segmentation on at least one frame of line image; and determining whether foreign matters exist at the line position corresponding to the line image according to the segmented image of each frame of line image.
In a second aspect, the present application provides a training method for a segmentation model, including: acquiring a line training image, wherein the line training image corresponds to a label, and the label is a pixel point coordinate of a line in the line training image; and performing iterative training on the convolutional neural network according to the line training image and the label to obtain the segmentation model.
In a third aspect, the present application provides a line foreign object detection apparatus, including: a first acquisition module, configured to acquire an original video, wherein the original video comprises at least one frame of line image obtained by shooting a traffic line; a segmentation module, configured to input the original video into a segmentation model to obtain a segmented image of each frame of line image in the original video, wherein the segmentation model is obtained by training a convolutional neural network on line training images and labels and is used to perform image segmentation on the at least one frame of line image; and a determining module, configured to determine, from the segmented image of each frame of line image, whether a foreign object exists at the line position corresponding to the line image.
In a fourth aspect, the present application provides a training apparatus for segmentation models, including: the second acquisition module is used for acquiring a line training image, wherein the line training image corresponds to a label, and the label is a pixel point coordinate of a line in the line training image; and the training module is used for carrying out iterative training on the convolutional neural network according to the line training image and the label to obtain the segmentation model.
In a fifth aspect, the present application provides a computer device, comprising: a memory and a processor;
the memory is configured to store instructions executable by the processor;
wherein the processor is configured to perform the method of the first aspect, and/or the processor is configured to perform the method of the second aspect.
In a sixth aspect, the present application provides a computer-readable storage medium having computer-executable instructions stored therein which, when executed by a processor, implement the method of the first aspect, and/or, when executed by a processor, implement the method of the second aspect.
In a seventh aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect, and/or, when executed by a processor, implements the method of the second aspect.
According to the method, apparatus, device and storage medium for detecting foreign objects on a line, an original video comprising at least one frame of line image obtained by shooting a traffic line is acquired, each frame of line image is segmented by a pre-trained segmentation model, and whether each frame of line image contains a foreign object is then determined from the segmentation result. Because the segmentation model classifies every pixel point in the line image, objects of different types can be distinguished in the segmentation result. In the line foreign object detection scene, the segmented image either contains only the traffic line or contains both the traffic line and a foreign object, so the presence of a foreign object on the line can be detected reliably, improving the accuracy of foreign object detection on traffic lines.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of a scene of a method for detecting a foreign object on a line according to an embodiment of the present disclosure;
fig. 2 is a first flowchart of a method for detecting a foreign object in a circuit according to an embodiment of the present disclosure;
fig. 3 is a second flowchart of a method for detecting a foreign object in a circuit according to an embodiment of the present disclosure;
FIG. 4 is an architecture diagram of a convolutional neural network provided in an embodiment of the present application;
fig. 5 is a third flowchart of a method for detecting a foreign object in a line according to an embodiment of the present disclosure;
fig. 6 is an exemplary diagram of a binary image of a track line without foreign matter according to an embodiment of the present application;
fig. 7 is an exemplary diagram of a binary image of a track line with a foreign object according to an embodiment of the present disclosure;
fig. 8 is a fourth flowchart of a training method of a segmentation model according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a device for detecting a foreign object on a line according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a training apparatus for a segmentation model according to an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
A transportation line (transport line) is a route built to a certain technical standard and scale, equipped with the necessary transport facilities and technical equipment, and used for carrying passenger and freight traffic. Transport lines include railways, highways, inland waterways, sea routes, air routes, pipelines, ropeways and the like, and in the broad sense also the various roads within a city. For lines such as railways, highways, pipelines, ropeways and urban roads, a foreign object on the line can obstruct the passage of vehicles and, in serious cases, cause a traffic accident. Detecting foreign objects on traffic lines is therefore an important problem in the field of transportation.
At present, foreign object detection on traffic lines mostly uses the inter-frame difference method. When an abnormal object appears on a traffic line, a relatively obvious difference appears between two adjacent video frames; subtracting the two frames yields their grey-level difference, and whether an abnormal object exists on the line can be judged by comparing this difference with a preset threshold.
The inter-frame difference method essentially detects abnormal objects by computing on the pixel points of two frames, which cannot accurately express the features of the abnormal object in the image; in addition, it requires some thresholds to be set manually, and these thresholds depend on human experience. The accuracy of this existing detection method for traffic lines is therefore not high.
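For illustration, the inter-frame difference baseline described above can be sketched in a few lines. The frames are synthetic and the two thresholds (30 grey levels, 1% of pixels) are exactly the kind of manually set values the text objects to:

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=30):
    """Baseline inter-frame difference: flag a change when more than 1% of
    pixels differ by more than a hand-tuned grey-level threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > threshold          # per-pixel change mask
    return bool(changed.mean() > 0.01)  # "abnormal object" decision

# Two synthetic 8-bit grey frames: identical except for a bright patch.
prev = np.full((64, 64), 100, dtype=np.uint8)
curr = prev.copy()
curr[20:30, 20:30] = 250                # simulated foreign object

print(frame_difference(prev, prev))     # False: no change between frames
print(frame_difference(prev, curr))     # True: the patch exceeds both thresholds
```

Both thresholds must be retuned per scene and per lighting condition, which is the dependence on human experience criticized above.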
In addition, in schemes that install a camera at a fixed section of the line, the captured background is essentially static, the images are strongly affected by the scene and the lighting, and detection performance is unstable far from the camera, so the detection result is inaccurate.
Furthermore, the above method runs on a Central Processing Unit (CPU), whose processing speed cannot meet the requirement of real-time foreign object detection on a traffic line.
In view of at least one of the above technical problems, the inventors of the present application propose the following technical idea: perform image segmentation on each frame of the traffic-line video sequence by deep learning, where image segmentation can be understood as classifying every pixel point in each frame. By classifying every pixel point in each video frame, the pixel points of different objects are separated, so that different objects are obtained by segmentation. If there is no foreign object on the line, the final segmented image contains only one type of object, the line, which appears as one continuous, untruncated independent region; if there is a foreign object, the segmented image contains two types of objects, the line and the foreign object, and the line appears as several truncated independent regions. By counting the number of independent regions in the segmented image, it can then be determined whether a foreign object is present on the line, improving the detection accuracy of foreign objects on traffic lines.
The line foreign object detection method provided by the application is described in detail in the following specific embodiments with reference to the accompanying drawings. The following embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic view of a scene of a method for detecting a line foreign object according to an embodiment of the present disclosure. As shown in fig. 1, the scenario of the present embodiment includes: image acquisition device 11, database server 12, computing processing device 13, and user device 14. The image acquisition equipment 11 is in communication connection with the database server 12; the database server 12 is in communication connection with the computing processing device 13; the database server 12 is also communicatively coupled to the user device 14.
The image capturing device 11 may be a camera, which may be installed on a vehicle or on a robot that patrols a traffic line, and is used to shoot the traffic line to obtain an original video. The original video is uploaded to the database server 12 to be stored, the computing processing device 13 obtains the original video from the database server 12, foreign matter detection is carried out according to the original video, then foreign matter detection results are stored in the database server 12, and the foreign matter detection results in the database server 12 are bound with the original video to be stored. The user device 14 may view the foreign object detection result of the transportation route through the database server 12.
In some scenarios, the image capturing device 11 may directly send the captured video to the computing processing device 13; after the computing processing device 13 performs foreign object detection on the original video, the original video and the detection result are uploaded together to the database server 12 for storage.
It should be understood that the devices in the above scenarios are exemplary and are not necessarily the several devices described above. For example, in some scenarios, the present embodiment may also integrate the functions of the computing processing device 13 and the user device 14 into one device, and implement them by one device.
The method for detecting a line foreign object according to the embodiment of the present application is described in detail below based on the scenario shown in fig. 1.
Fig. 2 is a first flowchart of a method for detecting a line foreign object according to an embodiment of the present disclosure, and as shown in fig. 2, the method for detecting a line foreign object according to the present embodiment includes the following steps:
step S201, obtaining an original video, wherein the original video comprises at least one frame of line image obtained by shooting a traffic line.
The execution subject of the method of the present embodiment may be the calculation processing device in fig. 1. The computing processing device retrieves the stored raw video from the database server.
Step S202, inputting the original video into a segmentation model to obtain a segmentation image of each frame of line image in the original video, wherein the segmentation model is obtained by training a convolutional neural network according to a line training image and a label and is used for performing image segmentation on at least one frame of line image.
In this embodiment, the segmentation model is a model that is trained in advance and stored in the computing device. After the original video is obtained, the computing and processing equipment calls a segmentation model to perform image segmentation on each frame of line image in the original video to obtain a segmented image of each frame of line image.
Step S203, determining, from the segmented image of each frame of line image, whether a foreign object exists at the line position corresponding to the line image.
As described above, the traffic line and a foreign object on it can be regarded as different objects. The segmentation model classifies every pixel point in the line image, so the pixel points of different objects are separated. If there is no foreign object on the line, the final segmented image contains only one type of object, the line, which appears as one continuous, untruncated independent region; if there is a foreign object, the segmented image contains two types of objects, the line and the foreign object, and the line appears as several truncated independent regions. By counting the number of independent regions in the segmented image, it can then be determined whether a foreign object is present on the line.
In this embodiment, an original video comprising at least one frame of line image obtained by shooting a traffic line is acquired, each frame of line image is segmented with a pre-trained segmentation model, and whether each frame contains a foreign object is then determined from the segmentation result. Because the segmentation model classifies every pixel point in the line image, objects of different types can be distinguished in the segmentation result; in the line foreign object detection scene, the segmented image either contains only the traffic line or contains both the line and a foreign object, so the presence of a foreign object can be detected reliably, improving the accuracy of foreign object detection on traffic lines.
On the basis of the foregoing embodiment, fig. 3 is a second flowchart of a method for detecting a line foreign object according to an embodiment of the present application, and as shown in fig. 3, the method for detecting a line foreign object according to the present embodiment includes the following steps:
step S301, for each frame of line image in the original video, down-sampling is carried out for multiple times to obtain a down-sampled image.
Each of the multiple downsampling operations comprises a plurality of convolution operations, which may be at least three; the first, second or third of the at least three convolution operations is an M × N convolution operation or an N × M convolution operation, where M and N are positive integers and M ≠ N.
In some embodiments, M may take the value 1 and N may take the value 3, 5, 7 or 9; that is, the first, second or third of the at least three convolution operations is a 1 × N or N × 1 convolution operation.
Specifically, in this embodiment, for each frame of line image in the original video, downsampling is sequentially performed multiple times to obtain a downsampled image.
For example, the multiple downsampling includes 4 downsampling, the steps of this embodiment include:
downsampling a frame of line image a in the original video to obtain a feature map a1;
downsampling the feature map a1 to obtain a feature map a2;
downsampling the feature map a2 to obtain a feature map a3;
downsampling the feature map a3 to obtain a feature map a4.
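As a shape-only sketch of these four downsampling stages (assuming, as in a typical u-net encoder, that each stage halves the spatial resolution with 2 × 2 pooling; the convolution operations inside each stage and the 256 × 256 input size are illustrative assumptions):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 (H and W assumed even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.random.rand(256, 256)   # one line image (single channel, toy size)
a1 = max_pool_2x2(a)           # (128, 128)
a2 = max_pool_2x2(a1)          # (64, 64)
a3 = max_pool_2x2(a2)          # (32, 32)
a4 = max_pool_2x2(a3)          # (16, 16)
print([f.shape for f in (a1, a2, a3, a4)])
```

After four stages the feature map a4 is at 1/16 of the input resolution, which is why the decoder below needs the same number of upsampling steps.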
And step S302, performing up-sampling on the down-sampled image for multiple times to obtain a segmented image of the line image.
The number of upsampling operations is the same as the number of downsampling operations.
For example, if the multiple upsampling includes 4 upsampling, the steps of this embodiment include:
upsampling the feature map a4, and splicing the upsampled result with the feature map a4 to obtain a feature map a5;
upsampling the feature map a5, and splicing the upsampled result with the feature map a3 to obtain a feature map a6;
upsampling the feature map a6, and splicing the upsampled result with the feature map a2 to obtain a feature map a7;
upsampling the feature map a7, and splicing the upsampled result with the feature map a1 to obtain a feature map a8.
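A single upsample-and-splice step can be sketched as shape bookkeeping. The channel widths (256 and 512), the nearest-neighbour upsampling, and the pairing of maps with equal spatial size (the standard u-net skip connection) are illustrative assumptions, not the patent's exact values:

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour 2x upsampling over (C, H, W); a stand-in for a
    learned up-convolution."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def splice(up, skip):
    """Concatenate two feature maps along the channel axis; their spatial
    sizes must already agree."""
    assert up.shape[1:] == skip.shape[1:], "spatial sizes must match"
    return np.concatenate([up, skip], axis=0)

a3 = np.zeros((256, 32, 32))   # encoder output at 1/8 resolution (assumed)
a4 = np.zeros((512, 16, 16))   # encoder output at 1/16 resolution (assumed)

up = upsample_2x(a4)           # (512, 32, 32): now matches a3 spatially
a5 = splice(up, a3)            # (768, 32, 32)
print(a5.shape)
```

In a full decoder, convolutions after each splice reduce the channel count again; this sketch only tracks shapes to show why splicing requires matching spatial sizes.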
The present embodiment operates on a convolutional neural network, which may be a u-net network or an FCN network. The u-net network is an image segmentation network that was first applied to medical images. It is an encoder-decoder network shaped like the capital letter U, with 4 downsampling layers in the encoding stage; the feature map of each decoder layer is upsampled and then fused with the corresponding downsampling layer, so that low-level information is fused with high-level semantic information, which effectively improves segmentation precision at target edges.
The line in this embodiment is an elongated target, and applying the u-net network directly to line foreign object detection gives a poor segmentation result. To improve the segmentation of the line, i.e. an elongated target, this embodiment improves the u-net network so that it attends more to the continuity of the elongated target in a given direction, such as continuity along the direction of travel of the line. The architecture of the convolutional neural network, and how it performs image segmentation, are described below taking the improved u-net network as an example.
Fig. 4 is an architecture diagram of a convolutional neural network according to an embodiment of the present application. As shown in fig. 4, the convolutional neural network includes:
a first up-sampling layer 41, a second up-sampling layer 42, a third up-sampling layer 43, and a fourth up-sampling layer 44 in cascade;
a first downsampling layer 45, a second downsampling layer 46, a third downsampling layer 47, and a fourth downsampling layer 48 in cascade;
the first downsampling layer 45 is connected to the first upsampling layer 41, the second downsampling layer 46 is connected to the second upsampling layer 42, the third downsampling layer 47 is connected to the third upsampling layer 43, and the fourth downsampling layer 48 is connected to the fourth upsampling layer 44.
In addition, each of the first downsampling layer 45, the second downsampling layer 46, the third downsampling layer 47 and the fourth downsampling layer 48 includes 3 concatenated convolutional layers; the 2nd of the 3 convolutional layers adopts a 1 × N or N × 1 convolution operation, where N is a positive integer that may take the value 3, 5, 7 or 9.
A 1 × N or N × 1 convolution can extract local features along the main direction of an elongated target, improving the relevance of feature extraction; at the same time, a 1 × N or N × 1 convolution has fewer parameters than a square convolution of the same extent, making the improved convolutional neural network more lightweight.
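To make the parameter claim concrete, a small count comparing a square N × N kernel with a 1 × N kernel of the same reach in one direction. The channel widths (64 in, 64 out) and N = 9 are illustrative assumptions, not values taken from the patent:

```python
def conv_params(kh, kw, c_in, c_out):
    """Learnable parameters of a kh x kw convolutional layer (with biases)."""
    return kh * kw * c_in * c_out + c_out

c_in = c_out = 64
n = 9
square   = conv_params(n, n, c_in, c_out)   # 9 x 9 kernel
one_by_n = conv_params(1, n, c_in, c_out)   # 1 x 9 kernel: same span along one axis
print(square, one_by_n)                     # the 1 x N layer is roughly N times smaller
```

The 1 × N kernel keeps the N-pixel receptive field along the direction of the line while dropping the other spatial dimension, which is where both the parameter saving and the directional bias come from.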
Based on the convolutional neural network, the method for processing each frame of line image comprises the following steps:
step a1, each frame of line image is input to the first downsampling layer 45 and downsampled to obtain a first feature map.
The first downsampling layer 45 includes a first convolutional layer 451, a second convolutional layer 452 and a third convolutional layer 453, wherein the second convolutional layer 452 is a 1 × N or N × 1 convolutional layer.
The step a1 includes:
step a11, inputting each frame of line image into the first convolutional layer 451 for a convolution operation to obtain a first result image;
step a12, inputting the first result image into the second convolutional layer 452 for a 1 × N or N × 1 convolution operation to obtain a second result image;
step a13, inputting the second result image into the third convolutional layer 453 for a convolution operation to obtain the first feature map.
The convolution kernels of the first convolution layer and the third convolution layer can be referred to in the description of the related art of the u-net network, and the embodiment is not described in detail here.
Step a2, the first feature map is input to the second downsampling layer 46 and downsampled to obtain a second feature map.
The specific implementation of step a2 is similar to that of step a1, and reference may be made to the specific implementation of step a1, which is not described herein again.
Step a3, the second feature map is input to the third downsampling layer 47 and downsampled to obtain a third feature map.
The specific implementation of step a3 is similar to that of step a1, and reference may be made to the specific implementation of step a1, which is not described herein again.
Step a4, the third feature map is input to the fourth down-sampling layer 48 and down-sampled to obtain a fourth feature map.
The specific implementation of step a4 is similar to that of step a1, and reference may be made to the specific implementation of step a1, which is not described herein again.
Step a5, inputting the fourth feature map into the first upsampling layer 41 for upsampling, and splicing the upsampled image with the fourth feature map to obtain a fifth feature map.
Step a6, inputting the fifth feature map into the second upsampling layer 42 for upsampling, and splicing the upsampled image with the third feature map to obtain a sixth feature map.
Step a7, inputting the sixth feature map into the third upsampling layer 43 for upsampling, and splicing the upsampled image with the second feature map to obtain a seventh feature map.
Step a8, inputting the seventh feature map into the fourth upsampling layer 44 for upsampling, and splicing the upsampled image with the first feature map to obtain an eighth feature map.
In steps a5 to a8, the number of convolutional layers in each upsampling layer and the convolution kernel parameters of each convolutional layer may refer to the related description of the U-Net network, and details are not repeated in this embodiment.
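The upsample-and-splice pattern of steps a5 to a8 can be sketched with NumPy. The shapes and the nearest-neighbour upsampling below are assumptions for illustration; the patent does not fix either:

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (channels, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def splice(up, skip):
    """Concatenate an upsampled map with an encoder feature map along channels."""
    return np.concatenate([up, skip], axis=0)

decoder_in = np.random.rand(8, 4, 4)   # deep feature map entering an upsampling layer
skip_feat  = np.random.rand(4, 8, 8)   # encoder feature map of matching spatial size
out = splice(upsample2x(decoder_in), skip_feat)
assert out.shape == (12, 8, 8)         # channels add, spatial size matches the skip
```

In a trained network the upsampling is typically learned (transposed convolution) rather than nearest-neighbour, and convolutions follow each splice, as the U-Net literature describes.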
It should be understood that for other convolutional neural networks such as FCN, each downsampling layer of the network needs to include at least 3 convolutional layers, and in general the convolutional layer with the 1×N or N×1 convolution kernel may be any one of the first 3 convolutional layers of each downsampling layer.
For example, if a downsampling layer includes 4 or more convolutional layers, the convolutional layer with the 1×N or N×1 convolution kernel may be set as the 1st, 2nd, or 3rd convolutional layer of that downsampling layer.
On the basis of the foregoing embodiment, fig. 5 is a flowchart of a method for detecting a line foreign object according to an embodiment of the present application, and as shown in fig. 5, the method for detecting a line foreign object according to the present embodiment includes the following steps:
step S501, determining the number of independent areas in a segmented image, wherein the segmented image comprises at least one line.
In this embodiment, the number of independent areas in the segmented image is determined according to a minimum bounding rectangle method.
Specifically, determining the number of independent areas in the segmented image includes:
and S5011, aiming at one line in the segmentation image, determining the circumscribed rectangle of the object segmented from the segmentation image according to a minimum circumscribed rectangle method.
For how to determine the specific implementation process of the bounding rectangle of the object segmented from the segmented image according to the minimum bounding rectangle method, reference may be made to the related technical description of the minimum bounding rectangle method, and this embodiment is not described herein again.
And step S5012, taking the number of the circumscribed rectangles in the segmentation image as the number of the independent areas in the segmentation image.
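To make the counting step concrete, independent areas can also be counted with connected-component labelling; the flood-fill sketch below is a hypothetical alternative to the minimum bounding rectangle method, not the patented procedure:

```python
def count_regions(mask):
    """Count 4-connected regions of 1-pixels in a binary mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and not seen[y][x]:
                regions += 1                       # new independent area found
                stack = [(y, x)]
                while stack:                       # flood-fill the whole area
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] == 1 and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return regions

# An unbroken track line is one region; a line cut by a foreign object is two,
# matching the decision rule of steps S502 and S503.
assert count_regions([[1, 1, 1, 1]]) == 1
assert count_regions([[1, 1, 0, 1]]) == 2
```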
Step S502, for one line in the segmented image, if the number of the independent areas is greater than or equal to the preset number, it is determined that a foreign object exists at a line position corresponding to the line image.
Step S503, if the number of the independent areas is smaller than the preset number, determining that no foreign object exists at the line position corresponding to the line image.
Taking a track line as an example, if there is no foreign object on the track line, the track line appears in the segmented image as one continuous, complete area. If a foreign object is present on the track line, the track line appears in the segmented image as several separated areas. By counting the mutually independent areas in the segmented image, for a single track line, a count greater than or equal to 2 indicates that a foreign object is present, while a count of 1 indicates that no foreign object is present.
The output of the convolutional neural network is a Red-Green-Blue (RGB) image, and judging directly from the RGB image whether a foreign object exists on the track line makes the judgment process complex and computationally heavy. To simplify the judgment and reduce the computational load, the segmented image may first be converted into a binary image, and the number of independent areas then determined from the binary image.
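The RGB-to-binary conversion can be sketched as a simple threshold; the threshold value 128 and the any-channel rule below are assumptions for illustration, as the patent does not specify the conversion:

```python
import numpy as np

def to_binary(rgb, threshold=128):
    """Map an RGB segmentation image to a 0/1 mask: a pixel whose brightest
    channel exceeds the threshold is marked as line (1), else background (0)."""
    return (rgb.max(axis=2) > threshold).astype(np.uint8)

seg = np.zeros((4, 4, 3), dtype=np.uint8)
seg[1:3, :, :] = 255          # a horizontal white band standing in for the line
mask = to_binary(seg)
assert mask.sum() == 8        # the 2x4 band survives the threshold
```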
Fig. 6 is an exemplary diagram of a binary image of a track line without a foreign object according to an embodiment of the present application. As shown in fig. 6, the white strip-shaped area is the track line. The number of white strip-shaped areas in the binary image is 1, indicating that no foreign object exists on the track line.
Fig. 7 is an exemplary diagram of a binary image of a track line with a foreign object according to an embodiment of the present application. As shown in fig. 7, the white strip-shaped area is the track line. It can be seen that the number of white strip-shaped areas in the binary image is 2, which indicates that a foreign object exists on the track line.
In some optional embodiments, the foreign object detection result of the original video may also be uploaded to a database, and a user may invoke and view the foreign object detection result of the original video through a user device.
On the basis of the foregoing embodiment, fig. 8 is a fourth flowchart of a training method for a segmentation model provided in the embodiment of the present application, and as shown in fig. 8, the training method for a segmentation model of the present embodiment includes the following steps:
step S801, a line training image is acquired.
The line training image corresponds to a label, and the label is a pixel point coordinate of the line in the line training image.
The execution subject of the method of this embodiment may be the computing processing device shown in fig. 1. It may also be another computer device independent of the device shown in fig. 1. In the latter scenario, the image capture device captures a large number of line images, which may be used as line training images; the computer device performs iterative training on these line training images to obtain a segmentation model and stores the segmentation model in the computing processing device shown in fig. 1 for foreign-object detection on the traffic line.
In this embodiment, the image acquisition device photographs the track in a tunnel to obtain raw image data, which is then cleaned; a number of samples, for example 1500, are selected as line training images. The single track line in each line training image is then annotated to obtain a label. The label is the pixel-point coordinates of the track line, with category 1, and the data set is divided into a training set and a test set in a ratio of 7:3. The training set contains line training images used to train the segmentation model, and the test set contains line test images used to evaluate the trained segmentation model. Annotation may be performed manually or with automatic labelling software.
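The 7:3 split described above can be sketched as follows; the file names and the fixed shuffle seed are hypothetical:

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=0):
    """Shuffle labelled samples and split them into a training and a test set."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

samples = [f"frame_{i:04d}.png" for i in range(1500)]
train_set, test_set = split_dataset(samples)
assert len(train_set) == 1050 and len(test_set) == 450   # the 7:3 ratio on 1500 samples
```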
And S802, performing iterative training on the convolutional neural network according to the line training image and the label to obtain a segmentation model.
Performing iterative training on the convolutional neural network according to the line training image and the label to obtain the segmentation model includes:
and b1, performing down-sampling on the line training image for multiple times to obtain a down-sampled training image.
The specific implementation of step b1 is similar to step S301, and reference may be made to the description of the specific implementation of step S301, which is not described herein again.
Step b2, performing up-sampling on the down-sampled training image multiple times to obtain a segmented image of the line training image.
The specific implementation of step b2 is similar to step S302, and reference may be made to the description of the specific implementation of step S302, which is not described herein again.
Wherein each of the multiple down-sampling operations includes multiple convolution operations, the multiple convolution operations including at least three convolution operations; the second or third of the at least three convolution operations is an M×N convolution operation, or the second or third of the at least three convolution operations is an N×M convolution operation, where M and N are positive integers and M ≠ N.
Step b3, repeating step b1 and step b2 until the training effect reaches the expected level, at which point training ends.
Specifically, the training effect reaching the expected level may mean that a certain training index reaches a preset value, or that the number of training iterations reaches a preset count.
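This stopping rule can be sketched as a loop that ends on whichever condition is met first; the target value, epoch budget, and dummy step function below are placeholders:

```python
def train(step_fn, target_metric=0.95, max_epochs=100):
    """Iterate training until the metric reaches its target or the epoch
    budget is exhausted; returns (epochs_run, final_metric)."""
    metric = 0.0
    for epoch in range(1, max_epochs + 1):
        metric = step_fn(epoch)          # one training iteration
        if metric >= target_metric:      # training index reached its preset value
            return epoch, metric
    return max_epochs, metric            # training count reached its preset limit

# Dummy step function whose metric improves linearly each epoch.
epochs, metric = train(lambda e: e / 20.0)
assert epochs == 19 and metric >= 0.95
```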
In addition, the processing of the segmentation model is performed on a Graphics Processing Unit (GPU), whose computing power allows line images in a video to be processed dynamically, realizing dynamic real-time detection of line foreign objects. Processing a single-frame line image occupies about 1.3 GB of video memory and takes about 7.6 ms, which meets the real-time detection requirement for video-stream images.
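As a quick sanity check on the real-time claim, the stated per-frame latency implies a processing capacity well above common video frame rates:

```python
ms_per_frame = 7.6                       # per-frame latency stated above
fps_capacity = 1000.0 / ms_per_frame     # frames the model can process per second
video_fps = 25                           # a typical surveillance-stream rate (assumption)
assert fps_capacity > video_fps          # about 132 frames/s vs. 25 frames/s
```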
On the basis of the above-mentioned method for detecting a foreign object on a line, fig. 9 is a schematic structural diagram of a device for detecting a foreign object on a line according to an embodiment of the present application. As shown in fig. 9, the line foreign matter detection apparatus includes: a first obtaining module 91, a dividing module 92 and a determining module 93;
the first obtaining module 91 is configured to obtain an original video, where the original video includes at least one frame of line image obtained by shooting a traffic line; a segmentation module 92, configured to input the original video into a segmentation model to obtain a segmented image of each frame of line image in the original video, where the segmentation model is a model obtained by training a convolutional neural network according to a line training image and a label, and is used to perform image segmentation on the at least one frame of line image; and the determining module 93 is configured to determine whether a foreign object exists at a line position corresponding to each line image according to the segmented image of each frame of line image.
In one possible design, the segmentation module 92 is specifically configured to: perform down-sampling on each frame of line image in the original video multiple times to obtain a down-sampled image; perform up-sampling on the down-sampled image multiple times to obtain a segmented image of the line image; wherein each of the multiple down-sampling operations includes multiple convolution operations; the multiple convolution operations include at least three convolution operations, and the first, second, or third of the at least three convolution operations is an M×N convolution operation, or the first, second, or third of the at least three convolution operations is an N×M convolution operation, where M and N are positive integers and M ≠ N.
In one possible design, the determining module 93 is specifically configured to: determining a number of independent regions in the segmented image, the segmented image comprising at least one line; for one line in the segmented image, if the number of the independent areas is greater than or equal to a preset number, determining that foreign matters exist at a line position corresponding to the line image; and if the number of the independent areas is smaller than the preset number, determining that no foreign matters exist in the line position corresponding to the line image.
In one possible design, the determining module 93 is specifically configured to: determine, for one line in the segmented image, the circumscribed rectangle of each object segmented from the segmented image according to the minimum bounding rectangle method; and take the number of circumscribed rectangles in the segmented image as the number of independent areas in the segmented image.
In one possible design, the apparatus further includes: a conversion module 94; the conversion module 94 is configured to convert the segmented image of each frame of line image into a binary image; the determining module 93 is further configured to determine the number of independent areas in the binary image according to a minimum bounding rectangle method.
In one possible design, the apparatus further includes: an upload module 95; and an uploading module 95, configured to upload the foreign object detection result of the original video to a database.
The detection device for the line foreign matter provided by the embodiment of the application can be used for executing the technical scheme of the detection method for the line foreign matter in the embodiment, the implementation principle and the technical effect are similar, and the description is omitted here.
In this embodiment, an original video is obtained, the original video including at least one frame of line image captured of a traffic line; each frame of line image is segmented with a pre-trained segmentation model, and whether each frame contains a foreign object is then determined from the segmentation result. Because the segmentation model classifies every pixel in the line image, objects of different categories can be distinguished in the segmentation result. In a traffic-line foreign-object detection scenario, an image contains either only the traffic line, or the traffic line together with a foreign object, so this approach detects foreign objects on the line well and improves the accuracy of traffic-line foreign-object detection.
On the basis of the above embodiment of the training method of the segmentation model, fig. 10 is a schematic structural diagram of a training apparatus of the segmentation model according to the embodiment of the present application. As shown in fig. 10, the training device for the segmentation model includes: a second obtaining module 1001 and a training module 1002;
a second obtaining module 1001, configured to obtain a line training image, where the line training image corresponds to a label, and the label is a coordinate of a pixel point of a line in the line training image; and the training module 1002 is configured to perform iterative training on the convolutional neural network according to the line training image and the label to obtain the segmentation model.
In some possible designs, training module 1002 is specifically configured to: performing down-sampling on the line training image for multiple times to obtain a down-sampling training image; performing up-sampling on the down-sampling training image for multiple times to obtain a segmentation image of the line training image; wherein each of the plurality of downsampling comprises a plurality of convolution operations comprising at least three convolution operations; and the first convolution operation, the second convolution operation or the third convolution operation in the at least three convolution operations is an M multiplied by N or an N multiplied by M convolution operation, wherein M and N are positive integers, and M is not equal to N.
The training device for the segmentation model provided by the embodiment of the application can be used for executing the technical scheme of the training method for the segmentation model in the embodiment, the implementation principle and the technical effect are similar, and the details are not repeated herein.
It should be noted that the division of the modules of the above apparatus is only a logical division; in actual implementation the modules may be wholly or partially integrated into one physical entity, or may be physically separate. These modules may all be implemented in the form of software invoked by a processing element, all in hardware, or partly in software invoked by a processing element and partly in hardware. For example, the segmentation module 92 may be a separately disposed processing element, may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code that a processing element of the apparatus calls to execute the functions of the segmentation module 92. The other modules are implemented similarly. In addition, all or some of these modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
Fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 11, the computer apparatus may include: a processor 111, a memory 112, and a transceiver 113.
The processor 111 executes the computer-executable instructions stored in the memory, causing the processor 111 to perform the technical solutions of the embodiments described above. The processor 111 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 112 is coupled to the processor 111 via the system bus, through which the two communicate; the memory 112 is used to store computer program instructions.
The transceiver 113 may be used to obtain raw video.
The system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The transceiver is used to enable communication between the database access device and other computers (e.g., clients, read-write libraries, and read-only libraries). The memory may include Random Access Memory (RAM) and may also include non-volatile memory (non-volatile memory).
The computer device provided in the embodiment of the present application may be used to implement the technical solution of the method for detecting a line foreign object and/or the method for training a segmentation model in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be noted that the computer device shown in fig. 11 may be the computing processing device in fig. 1.
Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 12, the computer apparatus may include: a processor 121, a memory 122, and a transceiver 123.
The processor 121 executes the computer-executable instructions stored in the memory, causing the processor 121 to perform the technical solutions of the embodiments described above. The processor 121 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 122 is coupled to the processor 121 via the system bus, through which the two communicate; the memory 122 is used to store computer program instructions.
The transceiver 123 may be used to acquire line training images.
The system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The transceiver is used to enable communication between the database access device and other computers (e.g., clients, read-write libraries, and read-only libraries). The memory may include Random Access Memory (RAM) and may also include non-volatile memory (non-volatile memory).
The computer device provided in the embodiment of the present application may be used to implement the technical solution of the method for detecting a line foreign object and/or the method for training a segmentation model in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be noted that the computer device shown in fig. 12 may likewise be the computing processing device in fig. 1. The computer device shown in fig. 11 and the computer device shown in fig. 12 may be the same computer device or different computer devices. If they are the same device, the training method of the segmentation model and the detection method of the line foreign object are implemented on one computer device; if they are different devices, the two methods are implemented on different computer devices.
The embodiment of the application further provides a chip for running the instructions, and the chip is used for executing the technical scheme of the detection method for the foreign matters in the circuit in the embodiment.
The embodiment of the application further provides a chip for running the instructions, and the chip is used for executing the technical scheme of the training method of the segmentation model in the embodiment.
Similarly, the chip for executing the instructions may be a chip in the same computer device, or may be a chip in a different computer device.
The embodiment of the present application further provides a computer-readable storage medium, where a computer instruction is stored in the computer-readable storage medium, and when the computer instruction runs on a computer, the computer is enabled to execute the technical solution of the method for detecting a line foreign object in the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where a computer instruction is stored in the computer-readable storage medium, and when the computer instruction runs on a computer, the computer is enabled to execute the technical solution of the training method for a segmentation model according to the above embodiment.
Likewise, the computer readable storage media may be computer readable storage media in the same computer device or in different computer devices.
The embodiment of the present application further provides a computer program product, where the computer program product includes a computer program, the computer program is stored in a computer-readable storage medium, at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the technical solution of the method for detecting a line foreign object in the foregoing embodiment can be implemented.
The embodiment of the present application further provides a computer program product, where the computer program product includes a computer program, which is stored in a computer-readable storage medium, and the computer program can be read by at least one processor from the computer-readable storage medium, and when the computer program is executed by the at least one processor, the technical solution of the training method for the segmentation model in the foregoing embodiment can be implemented.
Likewise, the computer program products may be computer program products in the same computer device or computer program products in different computer devices.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method for detecting a foreign object in a line, comprising:
acquiring an original video, wherein the original video comprises at least one frame of line image obtained by shooting a traffic line;
inputting the original video into a segmentation model to obtain a segmentation image of each frame of line image in the original video, wherein the segmentation model is obtained by training a convolutional neural network according to a line training image and a label and is used for performing image segmentation on at least one frame of line image;
and determining whether foreign matters exist at the line position corresponding to the line image according to the segmented image of each frame of line image.
2. The method of claim 1, wherein inputting the original video into a segmentation model to obtain a segmented image of each frame of line image in the original video comprises:
performing down-sampling for each frame of line image in the original video for multiple times to obtain a down-sampled image;
performing up-sampling on the down-sampled image for multiple times to obtain a segmented image of the line image;
wherein each of the plurality of downsampling comprises a plurality of convolution operations;
the plurality of convolution operations include at least three convolution operations, and a first convolution operation, a second convolution operation, or a third convolution operation of the at least three convolution operations is an mxn convolution operation, or a first convolution operation, a second convolution operation, or a third convolution operation of the at least three convolution operations is an nxm convolution operation, where M and N are positive integers, and M ≠ N.
3. The method according to claim 1, wherein the determining whether a foreign object exists at the line position corresponding to the line image according to the segmented image of each frame of line image comprises:
determining a number of independent regions in the segmented image, the segmented image comprising at least one line;
for one line in the segmented image, if the number of the independent areas is greater than or equal to a preset number, determining that foreign matters exist at a line position corresponding to the line image;
and if the number of the independent areas is smaller than the preset number, determining that no foreign matters exist in the line position corresponding to the line image.
4. The method of claim 3, wherein the determining the number of independent regions in the segmented image comprises:
aiming at one line in the segmentation image, determining the minimum circumscribed rectangle of the object segmented from the segmentation image according to a minimum circumscribed rectangle method;
and taking the number of the minimum circumscribed rectangles in the segmentation image as the number of independent areas in the segmentation image.
5. The method of claim 3, further comprising:
converting the segmented image of each frame of line image into a binary image;
correspondingly, the determining the number of independent areas in the segmented image comprises:
and determining the number of independent areas in the binary image according to a minimum bounding rectangle method.
6. The method according to any one of claims 1-5, further comprising:
and uploading the foreign matter detection result of the original video to a database.
7. A method for training a segmentation model, comprising:
acquiring a line training image, wherein the line training image corresponds to a label, and the label is a pixel point coordinate of a line in the line training image;
and performing iterative training on the convolutional neural network according to the line training image and the label to obtain the segmentation model.
8. The method of claim 7, wherein iteratively training a convolutional neural network based on the line training images and labels to obtain the segmentation model comprises:
performing down-sampling on the line training image for multiple times to obtain a down-sampling training image;
performing up-sampling on the down-sampling training image for multiple times to obtain a segmentation image of the line training image;
wherein each of the plurality of downsampling comprises a plurality of convolution operations comprising at least three convolution operations;
and the first convolution operation, the second convolution operation or the third convolution operation in the at least three convolution operations is an M multiplied by N convolution operation, or the first convolution operation, the second convolution operation or the third convolution operation in the at least three convolution operations is an N multiplied by M convolution operation, wherein M and N are positive integers, and M is not equal to N.
9. A device for detecting a foreign object on a line, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an original video, and the original video comprises at least one frame of line image obtained by shooting a traffic line;
the segmentation module is used for inputting the original video into a segmentation model to obtain a segmentation image of each frame of line image in the original video, wherein the segmentation model is obtained by training a convolutional neural network according to a line training image and a label and is used for carrying out image segmentation on at least one frame of line image;
and the determining module is used for determining whether foreign matters exist at the line position corresponding to the line image according to the segmented image of each frame of line image.
10. A training apparatus for a segmentation model, comprising:
the second acquisition module is used for acquiring a line training image, wherein the line training image corresponds to a label, and the label is a pixel point coordinate of a line in the line training image;
and the training module is used for carrying out iterative training on the convolutional neural network according to the line training image and the label to obtain the segmentation model.
11. A computer device, comprising: a memory and a processor;
the memory being configured to store instructions executable by the processor;
wherein the processor is configured to implement the method of any one of claims 1-8.
12. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-8.
13. A computer program product, characterized in that it comprises a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 8.
CN202110516439.XA 2021-05-12 2021-05-12 Method, device and equipment for detecting foreign matters in circuit and storage medium Pending CN113160217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110516439.XA CN113160217A (en) 2021-05-12 2021-05-12 Method, device and equipment for detecting foreign matters in circuit and storage medium

Publications (1)

Publication Number Publication Date
CN113160217A true CN113160217A (en) 2021-07-23

Family

ID=76874684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110516439.XA Pending CN113160217A (en) 2021-05-12 2021-05-12 Method, device and equipment for detecting foreign matters in circuit and storage medium

Country Status (1)

Country Link
CN (1) CN113160217A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006140636A (en) * 2004-11-10 2006-06-01 Toyota Motor Corp Obstacle detecting device and method
KR101391668B1 (en) * 2013-02-06 2014-05-07 한국과학기술연구원 System of recognizing obstacle and, method of recognizing obstacle on 4 bundle power transmisson line
CN106960438A (en) * 2017-03-25 2017-07-18 安徽继远软件有限公司 Method for recognizing impurities to transmission line of electricity is converted based on Hough straight line
CN110992349A (en) * 2019-12-11 2020-04-10 南京航空航天大学 Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN111626204A (en) * 2020-05-27 2020-09-04 北京伟杰东博信息科技有限公司 Railway foreign matter invasion monitoring method and system
CN111814720A (en) * 2020-07-17 2020-10-23 电子科技大学 Airport runway foreign matter detection and classification method based on unmanned aerial vehicle vision
WO2020237693A1 (en) * 2019-05-31 2020-12-03 华南理工大学 Multi-source sensing method and system for water surface unmanned equipment
CN112424793A (en) * 2020-10-14 2021-02-26 深圳市锐明技术股份有限公司 Object identification method, object identification device and electronic equipment
CN112528878A (en) * 2020-12-15 2021-03-19 中国科学院深圳先进技术研究院 Method and device for detecting lane line, terminal device and readable storage medium
CN112766137A (en) * 2021-01-14 2021-05-07 华南理工大学 Dynamic scene foreign matter intrusion detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Zhichao: "Research on License Plate Recognition Based on Pixel Accumulation Value Comparison and Minimum Enclosing Rectangle", China Masters' Theses Full-text Database, Information Science and Technology Series *
Liu Yufei; He Yong; Noguchi Noboru: "Development of a Laser-Sensor-Based Anti-collision System for an Agricultural Air-propelled Boat", Journal of Zhejiang University (Agriculture and Life Sciences), No. 04 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063402A (en) * 2022-07-26 2022-09-16 国网山东省电力公司东营供电公司 Cable foreign matter detection method, system, terminal and medium based on sliding analysis
CN115063402B (en) * 2022-07-26 2022-10-25 国网山东省电力公司东营供电公司 Cable foreign matter detection method, system, terminal and medium based on sliding analysis

Similar Documents

Publication Publication Date Title
US11176381B2 (en) Video object segmentation by reference-guided mask propagation
US10614574B2 (en) Generating image segmentation data using a multi-branch neural network
CN112560999B (en) Target detection model training method and device, electronic equipment and storage medium
CN111681273B (en) Image segmentation method and device, electronic equipment and readable storage medium
CN110008956B (en) Invoice key information positioning method, invoice key information positioning device, computer equipment and storage medium
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN113723377B (en) Traffic sign detection method based on LD-SSD network
CN110689012A (en) End-to-end natural scene text recognition method and system
CN110781980B (en) Training method of target detection model, target detection method and device
CN111523439B (en) Method, system, device and medium for target detection based on deep learning
CN111008632A (en) License plate character segmentation method based on deep learning
CN111914654A (en) Text layout analysis method, device, equipment and medium
CN116129291A (en) Unmanned aerial vehicle animal husbandry-oriented image target recognition method and device
CN111881984A (en) Target detection method and device based on deep learning
CN114820679A (en) Image annotation method and device, electronic equipment and storage medium
CN114708426A (en) Target detection method, model training method, device, equipment and storage medium
CN111881914B (en) License plate character segmentation method and system based on self-learning threshold
CN112597996B (en) Method for detecting traffic sign significance in natural scene based on task driving
CN113160217A (en) Method, device and equipment for detecting foreign matters in circuit and storage medium
CN111079634B (en) Method, device and system for detecting obstacle in running process of vehicle and vehicle
CN112686107A (en) Tunnel invading object detection method and device
CN115984378A (en) Track foreign matter detection method, device, equipment and medium
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN112446292B (en) 2D image salient object detection method and system
CN114973271A (en) Text information extraction method, extraction system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination