CN111814787B - Lock hole detection method for visible light image - Google Patents


Info

Publication number
CN111814787B
CN111814787B (application CN202010547236.2A)
Authority
CN
China
Prior art keywords
keyhole
circular
connected domain
layer
visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010547236.2A
Other languages
Chinese (zh)
Other versions
CN111814787A (en)
Inventor
高丙团
叶俊杰
徐伟伦
陈昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Liyang Research Institute of Southeast University
Original Assignee
Southeast University
Liyang Research Institute of Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University, Liyang Research Institute of Southeast University filed Critical Southeast University
Priority to CN202010547236.2A priority Critical patent/CN111814787B/en
Publication of CN111814787A publication Critical patent/CN111814787A/en
Application granted granted Critical
Publication of CN111814787B publication Critical patent/CN111814787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine-vision-based keyhole detection method for visible light images, comprising: a fast circle detection algorithm based on edge features, and CNN (convolutional neural network) based classification of circular regions into keyhole positive and negative samples. Starting from the input image, the fast circle detection algorithm performs graying, local-threshold binarization, and Two-Pass connected-domain boundary calibration, then rapidly locates circular connected domains using screening conditions constructed from circle characteristics, thereby identifying candidate keyhole positions. The circular-region keyhole classifier is trained on a large set of positive and negative keyhole samples with a self-built CNN, so that the exact position of the keyhole in the image is determined. The keyhole detection algorithm features high accuracy, wide applicability and fast response, and can well meet the requirements of practical applications.

Description

Lock hole detection method for visible light image
Technical Field
The invention relates to the field of machine vision, and in particular to a machine-vision-based keyhole detection algorithm for visible light images.
Background
With the continuous development of modern control technology, robots are gradually taking over repetitive, tedious and dangerous manual work. In scenarios such as factory inspection, indoor emergency repair and home service, robots often need to open doors and unlock them autonomously, so the demand for robots with door-opening and unlocking capability keeps growing. Take substation inspection robots as an example: although they are now widely deployed, they can only perform basic external monitoring of instruments and cannot operate on on-site substation devices. They can hardly monitor the equipment state inside power boxes such as terminal boxes, control boxes, mechanism boxes and fire emergency boxes of a substation, so their monitoring capability for power devices is insufficient and their accident-handling capability is poor. Specific maintenance and inspection still require manual work, the workload of personnel is not reduced, and personnel may be injured when handling emergencies under dangerous substation conditions.
Disclosure of Invention
Object of the invention: aiming at the complex environments and diverse keyhole types encountered when a robot unlocks a door, the invention provides a visible-light-image keyhole detection algorithm based on several machine vision techniques, which enables a robot to quickly complete the keyhole detection task required before unlocking and helps endow robots with unlocking capability.
The technical scheme is as follows:
a lock hole detection method facing visible light images comprises the following specific steps:
step 1, constructing a circular area image keyhole positive and negative sample classifier based on a convolutional neural network CNN, wherein positive samples are circular keyhole area images, and negative samples comprise circular non-keyhole images and random images;
step 2, sequentially carrying out graying and local threshold binarization processing on the visible light image to be detected to obtain a binarized image;
step 3, calibrating the connected domain of the binarized image, and acquiring a connected domain boundary coordinate matrix;
step 4, traversing all the calibrated connected domains and filtering out those that cannot simultaneously satisfy the following three screening conditions, obtaining the connected domains that meet the circularity requirement: μ > 0.001, δ < 0.01 and f < 0.15, where μ denotes the ratio of the pixel area of the connected domain to the total pixel area of the visible light image under test, δ denotes the normalized x/y-direction span difference of the connected domain, and f denotes the standard deviation of the set of modular lengths of the vectors from the boundary points of the connected domain to its center;
and step 5, feeding the connected domains that meet the circularity requirement in step 4 to the circular-region keyhole positive/negative sample classifier, thereby determining whether a keyhole exists in the visible light image under test and, if so, its position.
Further, in step 2, a weighted average method is adopted for graying, and the Niblack method is adopted for local-threshold binarization.
And in the step 3, a Two-Pass method is adopted to calibrate the connected domain of the binarized image and obtain a connected domain boundary coordinate matrix.
Further, in step 4, μ = S(i)/S_graph, where S(i) denotes the pixel area of the i-th connected domain and S_graph denotes the total pixel area of the visible light image under test.
Further, in step 4, δ is computed from the extrema of the connected-domain boundary coordinates, where x_max and x_min denote the maximum and minimum of the boundary coordinate set in the x direction, and y_max and y_min the maximum and minimum in the y direction.
Further, in step 4, f = std(r), where std(·) denotes the standard deviation and r denotes the set of modular lengths of the vectors from the boundary points to the center.
Further, the built CNN is trained with a number of circular keyhole-region images, circular non-keyhole images and random images as the training set, constructing the circular-region keyhole positive/negative sample classifier;
The built CNN is a neural network comprising the following 15 layers:
Image input layer: input image size 227×227×3;
First convolution layer: 5×5 convolution kernel, stride 3, 3 channels, 32 kernels;
First activation layer: ReLU activation;
First pooling layer: max pooling, pool size 3×3, stride 2;
Second convolution layer: 5×5 convolution kernel, stride 4, 32 channels, 64 kernels;
Second activation layer: ReLU activation;
Second pooling layer: max pooling, pool size 3×3, stride 2;
Third convolution layer: 3×3 convolution kernel, stride 1, 64 channels, 128 kernels;
Third activation layer: ReLU activation;
Third pooling layer: max pooling, pool size 2×2, stride 2;
First fully connected layer: 128 nodes;
Fourth activation layer: ReLU activation;
Second fully connected layer: 2 nodes;
Softmax layer;
Output layer: label classes "have_keyhole" and "no_keyhole", output vector 2×1.
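As a consistency check on the layer list above, the spatial size of the feature maps can be traced layer by layer (a sketch assuming no-padding, "valid"-style convolutions and pooling, which is consistent with the listed kernel sizes and strides):

```python
def conv_out(size, kernel, stride):
    # Output spatial size of a no-padding convolution or pooling step
    return (size - kernel) // stride + 1

size = 227                   # input image: 227x227x3
size = conv_out(size, 5, 3)  # conv1, 5x5, stride 3 -> 75
size = conv_out(size, 3, 2)  # pool1, 3x3, stride 2 -> 37
size = conv_out(size, 5, 4)  # conv2, 5x5, stride 4 -> 9
size = conv_out(size, 3, 2)  # pool2, 3x3, stride 2 -> 4
size = conv_out(size, 3, 1)  # conv3, 3x3, stride 1 -> 2
size = conv_out(size, 2, 2)  # pool3, 2x2, stride 2 -> 1
print(size)  # -> 1
```

The last pooling layer thus leaves a 1×1×128 tensor, i.e. a 128-dimensional feature vector, which matches the 128-node first fully connected layer.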
Beneficial effects: compared with the prior art, the invention has the following advantages:
1) The invention realizes keyhole detection in visible light images based on machine vision, enabling a robot to locate keyholes, which is an essential step for a robot to perform unlocking operations;
2) Fast screening of circular connected domains is achieved with edge-feature-based screening conditions, which reduces the computational cost of circle detection, requires no preset range for the circle radius in pixels, and improves detection capability;
3) A CNN is trained on a training set composed of many keyhole images and random contrast images; thanks to the size of the sample set and the diversity of the keyhole images, the keyhole recognition algorithm is not limited to one or two keyhole types and can effectively cope with the diverse keyhole situations encountered by emergency-repair or inspection robots;
4) Instead of transfer learning on an existing off-the-shelf network, a neural network for keyhole image classification is built from scratch, matched to the computational budget of this task; the network is small in data size, simple in structure and fast to run, and can well meet the effectiveness and speed requirements of on-site unlocking by a robot.
Drawings
FIG. 1 is a detailed flowchart of the visible-light keyhole detection algorithm of the present invention;
FIG. 2 shows the network structure of the self-built CNN model of this patent;
FIG. 3 shows the run time of the trained CNN keyhole classifier.
Detailed Description
The visible-light-image keyhole detection algorithm comprises a fast circle detection algorithm based on edge features and a CNN-based circular-region keyhole positive/negative sample classification algorithm; the overall flow is shown in FIG. 1.
The fast circle detection algorithm based on edge features (hereinafter the fast circle detection algorithm) and the CNN-based circular-region keyhole positive/negative sample classification algorithm (the keyhole recognition algorithm) are realized as follows:
1) Round rapid detection algorithm:
after reading visible light images, carrying out gray processing by adopting a weighted average method, carrying out local threshold binarization processing by using a Niblack method, carrying out image connected domain boundary calibration by using a Two-Pass method, traversing connected domains, filtering irrelevant connected domains according to the following screening conditions, and screening out connected domains meeting the circular requirements according to edge characteristics so as to realize rapid detection of circular connected domains;
1) Connected domain area screening conditions:
the connected domain with the too small pixel area is meaningless to be identified, and the connected domain with the too small pixel area is screened out according to the following formula:
wherein μ is the ratio of the pixel area of the connected domain to the total image area, S (i) represents the pixel area of the ith connected domain, S graph Representing the total pixel area of the image.
2) Connected domain xy span difference screening conditions:
according to the fact that the diameters of the connected domains are equal in any direction of the circle, the boundary of the connected domains does not accord with the circular characteristic when the span difference in the xy direction is too large, the following formula can be adopted for screening:
wherein delta represents the xy-direction span difference of the normalized connected domain, and x max And x min Representing the maximum value and the minimum value of the boundary coordinates of the connected domain in the x direction, y max And y min Representing the maximum and minimum values in the y-direction of the set of boundary coordinates.
3) Circular feature screening conditions:
according to the circular characteristics, the following screening conditions are constructed to realize rapid screening of the circular connected domain, and firstly, the central coordinates of the connected domain are calculated according to the use formula (3):
wherein B is the boundary coordinate set of the connected domain, (x) c ,y c ) Is the center coordinates of the connected domain.
When the connected domain is circular, (x_c, y_c) serves as the circle center, and the normalized modular lengths of the vectors from the boundary points to the center are computed according to formula (4):
r_j = |(B(j) − (x_c, y_c)) / (max(B) − min(B))| (4)
where B(j) denotes the coordinates of the j-th boundary point, max(B) and min(B) denote the maximum and minimum of the boundary coordinate set in the x and y directions, r is the set of normalized modular lengths of the vectors from the boundary points to the center of the connected domain, and r_j is the modular length for the j-th boundary point.
Finally, a criterion for how closely the connected-domain shape resembles a circle is constructed according to formula (5):
f = std(r) < 0.15 (5)
where r denotes the set of modular lengths computed in formula (4) and f is the standard deviation of the normalized boundary-to-center modular lengths.
By the definition of a circle, every boundary point is equidistant from the center. Accordingly, f serves as a roundness criterion for the connected-domain boundary: the smaller f is, the smaller the fluctuation of the boundary-to-center distances, and the closer the connected-domain shape is to a circle.
2) Lock hole identification algorithm
After circular regions are detected in the image, it must be recognized whether each circular-region image is a keyhole. The invention uses a self-built CNN (convolutional neural network) to classify the detected circular-region images into keyhole positive and negative samples, thereby determining the exact position of the keyhole in the image.
Training material preparation: various keyhole-region images were collected and cropped from the Internet and given appropriate morphological transformations, yielding 1040 keyhole-region images as positive sample material; 1089 non-keyhole images, such as circular regions detected as non-keyholes, random images collected from the web and circular geometric figures, serve as negative sample material. Together they form the training set for training the keyhole classifier.
Training network preparation: a CNN is built from scratch for keyhole classification; its network structure is shown in FIG. 2 and Table 1, and it is trained on the prepared training material.
Table 1 network architecture
The scheme of the invention is further explained below with a specific embodiment.
The functions and toolboxes used in this embodiment come from the MATLAB R2017b platform and its machine vision and deep learning toolboxes (on other platforms, equivalent toolboxes of that platform, or self-written ones, may be substituted):
1) Constructing positive and negative sample training sets of lockhole
Various pictures containing keyholes were collected from the web, the keyhole regions were cropped and saved as jpg images, and appropriate cropping and rotation produced 1040 keyhole images covering various keyhole shapes and poses. These were uniformly scaled to 227×227×3 jpg images and placed in the have_keyhole folder as the keyhole positive sample set.
For the negative set, partial views of circular non-keyhole devices such as buttons and indicator lamps were first cropped from substation screen-cabinet images; substation screen-cabinet images were then cropped at random; a number of random images were obtained from the web; and jpg images containing circular non-keyhole objects collected from the web were added, accumulating 1089 non-keyhole contrast images of wide variety. These were uniformly scaled to 227×227×3 jpg images and placed in the no_keyhole folder, completing the keyhole negative sample set.
The have_keyhole and no_keyhole folders are placed together under the keyhole_TF folder, and the related MATLAB code is placed in the same folder.
2) Building the CNN for the classification targets (keyhole positive and negative samples)
This example uses the MATLAB platform. After the deep learning and machine learning toolboxes are installed in MATLAB, the layer-generating functions are called to create the layers one by one as required by FIG. 2, and the relevant parameters are set, completing the whole network.
3) Training the CNN on the keyhole positive/negative sample set
Because the input layer of the self-built CNN keyhole classifier requires 227×227×3 images, the have_keyhole and no_keyhole images are scaled to the required size; 70% of the images are randomly assigned to the training set and 30% to the test set, and the training parameters are set as 'InitialLearnRate' 0.01, 'MiniBatchSize' 32, 'MaxEpochs' 15 before training the network. Since the training-set split is random, when testing the obtained CNN-based keyhole classifier all images are used as a test set; the results are shown in Table 2. The CNN classification accuracy exceeds 98%, so the network can serve as the keyhole discriminator for circular regions. The trained network is 999 KB in size, and no data-size reduction or network pruning is needed. A speed test of the classifier on a Dell Inspiron 3559 notebook is shown in FIG. 3 and Table 3: classifying one keyhole image takes less than 0.04 s, which guarantees fast keyhole recognition and positioning.
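The random 70/30 partition described above can be sketched generically as follows (the folder names follow the embodiment; the file-name pattern is illustrative):

```python
import random

def split_dataset(files, train_frac=0.7, seed=0):
    """Randomly partition file names into training and test subsets."""
    rng = random.Random(seed)
    shuffled = files[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# e.g. 1040 positive + 1089 negative sample file names, as in the embodiment
names = [f"have_keyhole/{i}.jpg" for i in range(1040)] + \
        [f"no_keyhole/{i}.jpg" for i in range(1089)]
train, test = split_dataset(names)
```

With 2129 images in total, this yields 1490 training and 639 test images; fixing the seed makes the otherwise random split reproducible.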
Table 2 test results
TABLE 3 speed test results
Number of runs: 2129 · Total run time: 71.2642 s · Single run time: 0.0335 s
4) Acquiring an image, preprocessing and boundary calibration
After a visible light image is read from the notebook camera (capture set to MJPG, 960×540, 30 fps), it is grayed and binarized with a local threshold using the MATLAB machine vision toolbox functions, and the image's connected-domain boundaries are then calibrated with the Two-Pass method.
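The Two-Pass boundary calibration step can be sketched as the classic two-pass connected-component labeling with a union-find equivalence table (4-connectivity assumed; the patent does not state which connectivity it uses):

```python
def two_pass_label(binary):
    """Two-pass connected-component labeling of a 0/1 image (4-connectivity)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # union-find parent table; label 0 is background

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    # Pass 1: assign provisional labels and record label equivalences
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1
            elif up and left:
                a, b = find(up), find(left)
                labels[y][x] = min(a, b)
                parent[max(a, b)] = min(a, b)  # merge the two equivalence classes
            else:
                labels[y][x] = up or left

    # Pass 2: resolve every provisional label to its class representative
    for y in range(h):
        for x in range(w):
            labels[y][x] = find(labels[y][x])
    return labels
```

The boundary coordinate set of each labeled domain can then be extracted by collecting, for each label, the pixels that touch a background or differently labeled neighbor.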
5) Circular rapid detection
The connected domains are traversed and irrelevant ones are filtered out according to the screening conditions below; connected domains meeting the requirement are selected by their edge features, realizing fast detection of circular connected domains, and the numbers of the qualifying connected domains are recorded:
μ = S(i)/S_graph > 0.001 (1)
δ < 0.01 (2)
f = std(r) < 0.15 (3)
6) Invoking a keyhole classifier to determine
After a circular region is detected, the circular-region image is cropped and scaled to 227×227×3 and fed into the trained CNN keyhole classifier, completing the positive/negative keyhole classification of the circular-region image; the numbers of all connected domains classified as keyholes are recorded.
7) Completion of keyhole detection and result display
When all connected domains from step 5) have been traversed, the program ends. The connected domains that both meet the screening conditions of step 5) and are classified as keyholes by the classifier of step 6) are output by number and highlighted in the image.

Claims (7)

1. A lock hole detection method facing visible light images is characterized in that: the method comprises the following specific steps:
step 1, constructing a circular area image keyhole positive and negative sample classifier based on a convolutional neural network CNN, wherein positive samples are circular keyhole area images, and negative samples comprise circular non-keyhole images and random images;
step 2, sequentially carrying out graying and local threshold binarization processing on the visible light image to be detected to obtain a binarized image;
step 3, calibrating the connected domain of the binarized image, and acquiring a connected domain boundary coordinate matrix;
step 4, traversing all the calibrated connected domains and filtering out those that cannot simultaneously satisfy the following three screening conditions, obtaining the connected domains that meet the circularity requirement: μ > 0.001, δ < 0.01 and f < 0.15, where μ denotes the ratio of the pixel area of the connected domain to the total pixel area of the visible light image under test, δ denotes the normalized x/y-direction span difference of the connected domain, and f denotes the standard deviation of the set of modular lengths of the vectors from the boundary points of the connected domain to its center;
and step 5, feeding the connected domains that meet the circularity requirement in step 4 to the circular-region keyhole positive/negative sample classifier, thereby determining whether a keyhole exists in the visible light image under test and, if so, its position.
2. The keyhole detection method for visible light images according to claim 1, wherein in step 2 a weighted average method is adopted for graying and the Niblack method is adopted for local-threshold binarization.
3. The method for detecting the lock hole facing the visible light image according to claim 1, wherein in the step 3, a Two-Pass method is adopted to calibrate the connected domain of the binarized image and obtain a connected domain boundary coordinate matrix.
4. The keyhole detection method for visible light images according to claim 1, wherein in step 4, μ = S(i)/S_graph, where S(i) denotes the pixel area of the i-th connected domain and S_graph denotes the total pixel area of the visible light image under test.
5. The keyhole detection method for visible light images according to claim 1, wherein in step 4, δ is computed from the extrema of the connected-domain boundary coordinates, where x_max and x_min denote the maximum and minimum of the boundary coordinate set in the x direction, and y_max and y_min the maximum and minimum in the y direction.
6. The keyhole detection method for visible light images according to claim 1, wherein in step 4, f = std(r), where std(·) denotes the standard deviation and r denotes the set of modular lengths of the vectors from the boundary points to the center.
7. The keyhole detection method for the visible light image according to claim 1, wherein the constructed CNN is trained by taking a plurality of circular keyhole area images, a plurality of circular non-keyhole images and a random image as training sets, and a circular area image keyhole positive and negative sample classifier is constructed;
the built CNN is a neural network structure comprising the following 15 layers:
Image input layer: input image size 227×227×3;
First convolution layer: 5×5 convolution kernel, stride 3, 3 channels, 32 kernels;
First activation layer: ReLU activation;
First pooling layer: max pooling, pool size 3×3, stride 2;
Second convolution layer: 5×5 convolution kernel, stride 4, 32 channels, 64 kernels;
Second activation layer: ReLU activation;
Second pooling layer: max pooling, pool size 3×3, stride 2;
Third convolution layer: 3×3 convolution kernel, stride 1, 64 channels, 128 kernels;
Third activation layer: ReLU activation;
Third pooling layer: max pooling, pool size 2×2, stride 2;
First fully connected layer: 128 nodes;
Fourth activation layer: ReLU activation;
Second fully connected layer: 2 nodes;
Softmax layer;
Output layer: label classes "have_keyhole" and "no_keyhole", output vector 2×1.
CN202010547236.2A 2020-06-16 2020-06-16 Lock hole detection method for visible light image Active CN111814787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010547236.2A CN111814787B (en) 2020-06-16 2020-06-16 Lock hole detection method for visible light image


Publications (2)

Publication Number Publication Date
CN111814787A CN111814787A (en) 2020-10-23
CN111814787B true CN111814787B (en) 2024-04-12

Family

ID=72846186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010547236.2A Active CN111814787B (en) 2020-06-16 2020-06-16 Lock hole detection method for visible light image

Country Status (1)

Country Link
CN (1) CN111814787B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106940816A (en) * 2017-03-22 2017-07-11 杭州健培科技有限公司 Connect the CT image Lung neoplasm detecting systems of convolutional neural networks entirely based on 3D
CN107230205A (en) * 2017-05-27 2017-10-03 国网上海市电力公司 A kind of transmission line of electricity bolt detection method based on convolutional neural networks
CN107247930A (en) * 2017-05-26 2017-10-13 西安电子科技大学 SAR image object detection method based on CNN and Selective Attention Mechanism
CN109461141A (en) * 2018-10-10 2019-03-12 重庆大学 A kind of workpiece starved detection method


Also Published As

Publication number Publication date
CN111814787A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
Zhao et al. Bolt loosening angle detection technology using deep learning
CN108009515B (en) Power transmission line positioning and identifying method of unmanned aerial vehicle aerial image based on FCN
CN107527009B (en) Remnant detection method based on YOLO target detection
CN108765412B (en) Strip steel surface defect classification method
CN110826514A (en) Construction site violation intelligent identification method based on deep learning
CN112766103B (en) Machine room inspection method and device
CN108564065B (en) Cable tunnel open fire identification method based on SSD
CN108010025B (en) Switch and indicator lamp positioning and state identification method of screen cabinet based on RCNN
CN108831161A (en) A kind of traffic flow monitoring method, intelligence system and data set based on unmanned plane
CN113283344A (en) Mining conveying belt deviation detection method based on semantic segmentation network
CN112070134A (en) Power equipment image classification method and device, power equipment and storage medium
CN112070135A (en) Power equipment image detection method and device, power equipment and storage medium
CN110688980A (en) Human body posture classification method based on computer vision
CN114022715A (en) Method and device for detecting lead sealing defect of high-voltage cable terminal
CN109615610B (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN117114420B (en) Image recognition-based industrial and trade safety accident risk management and control system and method
CN111814787B (en) Lock hole detection method for visible light image
CN117079082A (en) Intelligent visual image target object detection method and device and DMC (digital media control) equipment
CN117274827A (en) Intelligent environment-friendly remote real-time monitoring and early warning method and system
CN114241189B (en) Ship black smoke recognition method based on deep learning
CN115187880A (en) Communication optical cable defect detection method and system based on image recognition and storage medium
CN109447446A (en) A kind of valve products assembling quality Detection task analysis method based on entropy weight TOPSIS
CN113989632A (en) Bridge detection method and device for remote sensing image, electronic equipment and storage medium
CN111582333A (en) Lightning arrester picture carrying state identification method combining Yolo-v3 and Open-pos
CN107590824B (en) Rock particle identification and displacement tracking method based on three-dimensional image processing technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201207

Address after: 213300 room 428, building a, 218 Hongkou Road, Kunlun Street, Liyang City, Changzhou City, Jiangsu Province (in Zhongguancun Science and Technology Industrial Park, Jiangsu Province)

Applicant after: Liyang Research Institute of Southeast University

Applicant after: SOUTHEAST University

Address before: 210096 Jiangsu city Nanjing Province four pailou No. 2

Applicant before: SOUTHEAST University

GR01 Patent grant