CN110598665A - Pole number identification method based on vehicle-mounted mobile deep learning platform - Google Patents
- Publication number
- CN110598665A CN110598665A CN201910886389.7A CN201910886389A CN110598665A CN 110598665 A CN110598665 A CN 110598665A CN 201910886389 A CN201910886389 A CN 201910886389A CN 110598665 A CN110598665 A CN 110598665A
- Authority
- CN
- China
- Prior art keywords
- positioning
- positioning frame
- deep learning
- pole number
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a pole number identification method based on a vehicle-mounted mobile deep learning platform, comprising the following steps: 1. acquire and label images of high-speed rail contact network parts, and screen them to obtain a contact network pole number image data set; 2. train on the contact network pole number image data set from step 1 with the accelerated region convolutional neural network model Faster R-CNN, and locate the pole number plate within the global contact network pole number image to obtain a pole number plate image data set; 3. train on the pole number plate image data set from step 2 with the fast localization model SSD, and locate the digits within the pole number plate images to obtain a pole number digit image data set; 4. deploy the deep learning models on an embedded deep learning mobile platform; 5. screen the recognition targets obtained in step 4 with a multi-class non-maximum suppression algorithm to obtain the pole number as digital text. The method targets pole number identification for the high-speed rail contact network, and offers high positioning accuracy and short detection time.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a pole number recognition method based on a vehicle-mounted mobile deep learning platform.
Background
The contact network 2C inspection standard places clear requirements on detecting the pole numbers on the support pillars along both sides of a high-speed rail line; pole number detection is one of the important 2C inspection items and serves functions such as fault location and position detection. At present, the pole numbers on both sides of the contact network are still detected offline with traditional image processing, which remains difficult in practical use. The invention therefore uses a vehicle-mounted mobile deep learning platform and deep learning localization and detection methods to recognize the pole numbers on the pillars on both sides of the high-speed rail and output them as digital text.
Disclosure of Invention
To this end, the invention provides a pole number identification method based on a vehicle-mounted mobile deep learning platform.
The pole number identification method based on a vehicle-mounted mobile deep learning platform comprises the following steps:
step 1: and acquiring and marking images of the parts of the high-speed rail contact network, and screening to obtain a data set of the contact network rod number image.
Step 2: and (3) training the contact net pole number image data set in the step (1) by using an acceleration region convolutional neural network model Faster R-CNN, and positioning the pole number plate in the global contact net pole number image to obtain a pole number plate image data set.
And step 3: and (3) training the rod number plate picture data set in the step (2) by using a rapid positioning model (SSD), and positioning the numbers in the rod number plate picture to obtain a rod number plate digital picture data set.
And 4, step 4: and deploying a deep learning model on the embedded deep learning mobile platform.
And 5: and (4) screening the recognition target obtained in the step (4) by using a multi-class non-maximum inhibition algorithm to obtain a rod number digital text.
Further, the high-speed rail contact network part images in step 1 are captured by the high-speed rail inspection vehicle during the skylight (maintenance window) period, and an xml file containing part position and class information is generated by manual annotation, where the digit 1 is labeled '1', the digit 2 is labeled '2', and so on.
Further, the procedure of the accelerated region convolutional neural network model (Faster R-CNN) in step 2 is as follows:
S21: scale the input images and their annotation information to a uniform size;
S22: feed the input image into the feature extraction network, which extracts features through multiple layers of convolution;
S23: feed the final feature map output by the feature extraction network into the Region Proposal Network (RPN) to generate proposal regions that may contain parts;
S24: feed the proposal regions and the final feature map into the Region of Interest (RoI) pooling layer, and pass its output to the fully connected layers;
S25: pass the output of S24 through a Softmax classifier and a Smooth L1 regressor to obtain the category and coordinates of the pole number plate;
S26: finally, compute the size of the pole number plate bounding box and output the plate category together with the coordinates and size of the box located in S25.
Further, the specific structure and flow of the SSD deep learning localization model in step 3 are as follows:
S31: scale the input images and their annotation information to a uniform size;
S32: feed the input image into the feature extraction network, which extracts features through multiple layers of convolution;
S33: generate 5 default boxes for each pixel of the feature maps of the last 5 network layers, with aspect ratios of 1, 2, 3, 1/2 and 1/3 respectively;
S34: apply two 3x3 convolutions to the feature maps of the last 5 layers; one outputs the four position values (x, y, w, h) of each default box, and the other outputs the probability that each default box contains an object of each class.
Further, the embedded deep learning mobile platform in step 4 is deployed as follows:
S41: the embedded mobile deep learning platform used is the Nvidia Jetson TX2, which targets GPU-accelerated parallel processing in the mobile embedded market and delivers high-performance, low-power computation for deep learning and computer vision, making it an ideal platform for compute-intensive embedded projects;
S42: flash the deep learning platform, install JetPack 3.1 and CTI-L4T, enable the platform's multi-core mode, and compile and install TensorFlow 1.3.0.
Further, the process of the multi-class non-maximum suppression algorithm in step 5 is as follows:
S51: input a new picture into the embedded deep learning platform of step 4 and obtain, through target localization and classification, the position (x, y, w, h), probability p and category c of each digit bounding box of the pole number; at this stage some boxes may be misrecognized, or one digit may be located by several similar boxes;
S52: arrange the bounding boxes obtained in S51, with their coordinates (x, y, w, h), probability p and category c, in descending order of probability to obtain a sequence L;
S53: starting from the first bounding box L(0) of the sequence obtained in S52, compute the intersection over union (IOU) with each remaining bounding box in turn, and delete a box L(i) if its IOU with the maximum-probability box L(0) is greater than 0.5;
the intersection over union is computed as:
IOU = area(A ∩ B) / area(A ∪ B)
where A and B are the regions of the bounding box L(i) and the maximum-probability bounding box L(0);
S54: keep the first bounding box L(0), remove it from the descending sequence L, and return to S53 until the sequence L is empty;
S55: traverse the kept bounding boxes, compute the center position (xm, ym) of each box, and arrange the boxes from top to bottom by position;
the center of a bounding box is computed as:
xm = x + w/2, ym = y + h/2
where xm and ym are the coordinates of the center point of the bounding box, and x, y, w and h are the horizontal and vertical coordinates of its upper-left corner and its width and height respectively;
S56: traverse the kept bounding boxes in the order of S55, compute the center distance of each pair of adjacent boxes, and if that distance is less than one ninth of the pixel size of the pole number plate, delete the box with the lower confidence.
Compared with the prior art, the invention has the following beneficial technical effects:
(1) the method can be used for pole number identification on the high-speed rail contact network;
(2) the method exploits the fact that the digits of a contact network pole number are limited in number and arranged in a line, and combines two localization neural networks, improving pole number identification accuracy;
(3) the method shortens detection time, reduces the difficulty of fault detection, and addresses the safe operation of the high-speed rail contact network in a targeted way.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic diagram of a multi-class non-maximum suppression algorithm according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
A pole number identification method based on a vehicle-mounted mobile deep learning platform, whose flow chart is shown in FIG. 1, comprises the following steps:
step 1: and acquiring and marking images of the parts of the high-speed rail contact network, and screening to obtain a data set of the contact network rod number image.
The high-speed rail contact net part image is an image collected by skylight shooting of a high-speed rail detection vehicle, and an xml file containing part position and type information is generated in a manual marking mode, wherein the number 1 is marked as '1', the number 2 is marked as '2', and the like.
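A manually labeled xml annotation of this kind can be read back with the Python standard library. The sketch below assumes a Pascal-VOC-style layout with hypothetical tag names (`<object>`, `<name>`, `<bndbox>`); the patent only states that the file holds part positions and classes:

```python
import xml.etree.ElementTree as ET

def parse_annotation(xml_text):
    """Parse one VOC-style annotation into (label, box) pairs.

    The tag names used here are an assumption; the patent only says the
    xml file contains the part position and class information.
    """
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        label = obj.findtext("name")           # e.g. "1" for the digit 1
        box = obj.find("bndbox")
        coords = tuple(int(box.findtext(k))
                       for k in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((label, coords))
    return objects

sample = """<annotation>
  <object><name>1</name>
    <bndbox><xmin>40</xmin><ymin>30</ymin><xmax>80</xmax><ymax>90</ymax></bndbox>
  </object>
</annotation>"""
print(parse_annotation(sample))  # [('1', (40, 30, 80, 90))]
```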
Step 2: and (3) training the contact net pole number image data set in the step (1) by using an acceleration region convolutional neural network model Faster R-CNN, and positioning the pole number plate in the global contact net pole number image to obtain a pole number plate image data set.
The procedure of the acceleration region convolutional neural network model (Faster R-CNN) is as follows:
s21: zooming the input image and the annotation information to a uniform size;
s22: the input image is transmitted into a feature extraction network, and feature extraction is carried out by carrying out multilayer convolution calculation on the image;
s23: inputting the last feature map output by the feature extraction Network into a Region suggestion Network (RPN) to generate a suggestion Region in which parts may exist;
s24: inputting the suggested Region and the last feature map into a Region of interest (RoI) pooling layer, and then transmitting the Region of interest (RoI) pooling layer into a full-link layer;
s25: obtaining the category and the coordinates of the pole number plate by the output of the S24 through a Softmax classifier and a Smooth L1 regressor;
s26: and finally, calculating the size of the rod number plate positioning frame and outputting the rod number plate category, the coordinates and the size of the positioning frame positioned by S25.
And step 3: and (3) training the rod number plate picture data set in the step (2) by using a rapid positioning model (SSD), and positioning the numbers in the rod number plate picture to obtain a rod number plate digital picture data set.
The SSD deep learning positioning model comprises the following specific structural processes:
s31: zooming the input image and the annotation information to a uniform size;
s32: the input image is transmitted into a feature extraction network, and feature extraction is carried out by carrying out multilayer convolution calculation on the image;
s33: generating 5 default frames for each pixel point of the back 5-layer network characteristic diagram, wherein the length-width ratios of the default frames are 1, 2, 3, 1/2 and 1/3 respectively;
s34: the next 5-tier network profile is convolved by two 3x3, one of which outputs four values for the position (x, y, w, h) of each default box, and the other of which outputs the probability that each default box detects a different class of object.
Step 4: deploy the deep learning models on the embedded deep learning mobile platform.
The embedded deep learning mobile platform is deployed as follows:
S41: the embedded mobile deep learning platform used is the Nvidia Jetson TX2, which targets GPU-accelerated parallel processing in the mobile embedded market and delivers high-performance, low-power computation for deep learning and computer vision, making it an ideal platform for compute-intensive embedded projects;
S42: flash the deep learning platform, install JetPack 3.1 and CTI-L4T, enable the platform's multi-core mode, and compile and install TensorFlow 1.3.0.
And 5: and (4) screening the recognition target obtained in the step (4) by using a multi-class non-maximum inhibition algorithm to obtain a rod number digital text.
The process of the multi-class non-maximum suppression algorithm is as follows:
s51: inputting a new picture into the embedded deep learning platform in the step 4, and obtaining the position (x, y, w, h), the probability p and the category c of the number positioning frame of each rod number through target positioning and classification, wherein a plurality of misrecognized frames exist or one number is positioned by a plurality of similar frames;
s52: according to the coordinates (x, y, w, h) of the positioning frame, the probability p and the category c obtained in the step S51, the positioning frame, the probability p and the category c are arranged in a descending order according to the probability to obtain a sequence L;
s53: obtaining a permutation sequence according to S52, sequentially intersecting and comparing other positioning frames from a first positioning frame L (0) in the sequence L, and deleting the positioning frame L (i) if the IOU of the other positioning frame L (i) and the positioning frame L (0) with the maximum probability is more than 0.5;
the intersection ratio IOU calculation formula is as follows:
in the formula, A and B are two positioning frame areas of a positioning frame L (i) and a probability maximum positioning frame L (0);
s54: reserving the first positioning frame L (0) and removing the positioning frame from the descending sequence L, and returning to S53 until the descending sequence L is empty;
s55: traverse the stored location boxCalculating the center position (x) of each positioning framem,ym) Arranged from top to bottom by location;
the calculation formula of the center position of the positioning frame is as follows:
in the formula, xmAnd ymIs the coordinate of the center point of the positioning frame, and x, y, w and h are respectively the horizontal and vertical coordinates of the upper left corner of the positioning frame and the width and height of the positioning frame;
s56: and traversing the stored positioning frames according to the arrangement sequence of S55, calculating the center distance of two adjacent positioning frames, and deleting the positioning frames with lower confidence degrees if the center distance is less than one ninth of the pixel size of the pole number plate.
By applying deep learning to pole number identification on the high-speed rail contact network, the method exploits the fact that a pole number consists of a limited set of digits arranged vertically and combines two localization neural networks; this improves pole number identification accuracy, effectively shortens detection time, reduces the difficulty of fault detection, and addresses the safe operation of the high-speed rail contact network in a more targeted way.
Claims (4)
1. A pole number identification method based on a vehicle-mounted mobile deep learning platform, characterized by comprising the following steps:
step 1: acquiring and labeling images of high-speed rail contact network parts, and screening them to obtain a contact network pole number image data set;
step 2: training on the contact network pole number image data set from step 1 with the accelerated region convolutional neural network model Faster R-CNN, and locating the pole number plate within the global contact network pole number image to obtain a pole number plate image data set;
step 3: training on the pole number plate image data set from step 2 with the fast localization model SSD, and locating the digits within the pole number plate images to obtain a pole number digit image data set;
step 4: deploying the deep learning models on an embedded deep learning mobile platform;
step 5: screening the recognition targets obtained in step 4 with a multi-class non-maximum suppression algorithm to obtain the pole number as digital text.
2. The pole number identification method based on the vehicle-mounted mobile deep learning platform according to claim 1, wherein the high-speed rail contact network part images in step 1 are captured by the high-speed rail inspection vehicle during the skylight (maintenance window) period, and an xml file containing part position and class information is generated by manual annotation, where the digit 1 is labeled '1', the digit 2 is labeled '2', and so on.
3. The pole number identification method based on the vehicle-mounted mobile deep learning platform according to claim 1, wherein the embedded deep learning mobile platform in step 4 is deployed as follows:
S41: the embedded mobile deep learning platform used is the Nvidia Jetson TX2, which targets GPU-accelerated parallel processing in the mobile embedded market;
S42: flash the deep learning platform, install JetPack 3.1 and CTI-L4T, enable the platform's multi-core mode, and compile and install TensorFlow 1.3.0.
4. The pole number identification method based on the vehicle-mounted mobile deep learning platform according to claim 1, wherein the multi-class non-maximum suppression algorithm in step 5 proceeds as follows:
S51: input a new picture into the embedded deep learning platform of step 4 and obtain, through target localization and classification, the position (x, y, w, h), probability p and category c of each digit bounding box of the pole number; at this stage some boxes may be misrecognized, or one digit may be located by several similar boxes;
S52: arrange the bounding boxes obtained in S51, with their coordinates (x, y, w, h), probability p and category c, in descending order of probability to obtain a sequence L;
S53: starting from the first bounding box L(0) of the sequence obtained in S52, compute the intersection over union (IOU) with each remaining bounding box in turn, and delete a box L(i) if its IOU with the maximum-probability box L(0) is greater than 0.5;
the intersection over union is computed as:
IOU = area(A ∩ B) / area(A ∪ B)
where A and B are the regions of the bounding box L(i) and the maximum-probability bounding box L(0);
S54: keep the first bounding box L(0), remove it from the descending sequence L, and return to S53 until the sequence L is empty;
S55: traverse the kept bounding boxes, compute the center position (xm, ym) of each box, and arrange the boxes from top to bottom by position;
the center of a bounding box is computed as:
xm = x + w/2, ym = y + h/2
where xm and ym are the coordinates of the center point of the bounding box, and x, y, w and h are the horizontal and vertical coordinates of its upper-left corner and its width and height respectively;
S56: traverse the kept bounding boxes in the order of S55, compute the center distance of each pair of adjacent boxes, and if that distance is less than one ninth of the pixel size of the pole number plate, delete the box with the lower confidence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910886389.7A CN110598665B (en) | 2019-09-19 | 2019-09-19 | Pole number identification method based on vehicle-mounted mobile deep learning platform |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910886389.7A CN110598665B (en) | 2019-09-19 | 2019-09-19 | Pole number identification method based on vehicle-mounted mobile deep learning platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110598665A true CN110598665A (en) | 2019-12-20 |
CN110598665B CN110598665B (en) | 2022-09-09 |
Family
ID=68861153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910886389.7A Expired - Fee Related CN110598665B (en) | 2019-09-19 | 2019-09-19 | Pole number identification method based on vehicle-mounted mobile deep learning platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110598665B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008619A (en) * | 2020-01-19 | 2020-04-14 | 南京智莲森信息技术有限公司 | High-speed rail contact net support number plate detection and identification method based on deep semantic extraction |
CN111898481A (en) * | 2020-07-14 | 2020-11-06 | 济南信通达电气科技有限公司 | State identification method and device for pointer type opening and closing indicator |
CN114462646A (en) * | 2022-03-15 | 2022-05-10 | 成都中轨轨道设备有限公司 | Pole number plate identification method and system based on contact network safety inspection |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358258A (en) * | 2017-07-07 | 2017-11-17 | 西安电子科技大学 | SAR image target classification based on the double CNN passages of NSCT and Selective Attention Mechanism |
CN107844750A (en) * | 2017-10-19 | 2018-03-27 | 华中科技大学 | A kind of water surface panoramic picture target detection recognition methods |
CN109635666A (en) * | 2018-11-16 | 2019-04-16 | 南京航空航天大学 | A kind of image object rapid detection method based on deep learning |
CN109840904A (en) * | 2019-01-24 | 2019-06-04 | 西南交通大学 | A kind of high iron catenary large scale difference parts testing method |
CN109886128A (en) * | 2019-01-24 | 2019-06-14 | 南京航空航天大学 | A kind of method for detecting human face under low resolution |
CN109977780A (en) * | 2019-02-26 | 2019-07-05 | 广东工业大学 | A kind of detection and recognition methods of the diatom based on deep learning algorithm |
US20190253520A1 (en) * | 2018-02-12 | 2019-08-15 | Micron Technology, Inc. | Optimization of Data Access and Communication in Memory Systems |
CN110210463A (en) * | 2019-07-03 | 2019-09-06 | 中国人民解放军海军航空大学 | Radar target image detecting method based on Precise ROI-Faster R-CNN |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358258A (en) * | 2017-07-07 | 2017-11-17 | 西安电子科技大学 | SAR image target classification based on the double CNN passages of NSCT and Selective Attention Mechanism |
CN107844750A (en) * | 2017-10-19 | 2018-03-27 | 华中科技大学 | A kind of water surface panoramic picture target detection recognition methods |
US20190253520A1 (en) * | 2018-02-12 | 2019-08-15 | Micron Technology, Inc. | Optimization of Data Access and Communication in Memory Systems |
CN109635666A (en) * | 2018-11-16 | 2019-04-16 | 南京航空航天大学 | A kind of image object rapid detection method based on deep learning |
CN109840904A (en) * | 2019-01-24 | 2019-06-04 | 西南交通大学 | A kind of high iron catenary large scale difference parts testing method |
CN109886128A (en) * | 2019-01-24 | 2019-06-14 | 南京航空航天大学 | A kind of method for detecting human face under low resolution |
CN109977780A (en) * | 2019-02-26 | 2019-07-05 | 广东工业大学 | A kind of detection and recognition methods of the diatom based on deep learning algorithm |
CN110210463A (en) * | 2019-07-03 | 2019-09-06 | 中国人民解放军海军航空大学 | Radar target image detecting method based on Precise ROI-Faster R-CNN |
Non-Patent Citations (9)
Title |
---|
P S SOUMYA: "Optimized Tank Detector Based on Modern Convolutional Neural Networks", 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS) *
ZHIGANG LIU: "A High-Precision Positioning Approach for Catenary Support Components With Multiscale Difference", IEEE Transactions on Instrumentation and Measurement (Volume 69, Issue 3, March 2020) *
SUN Yue et al.: "Natural scene text detection based on an improved SSD algorithm", Video Engineering *
智能血压计: "Detailed explanation of the non-maximum suppression (NMS) algorithm [python]", https://blog.csdn.net/lz867422770/article/details/100019587 *
YANG Hongmei: "Detection of defective states of the rotating double-lug catenary support device of electrified railways based on SURF feature matching", Journal of the China Railway Society *
YANG Hongmei: "Detection of foreign matter inclusions between insulator sheds of electrified railways based on affine invariant moments", Journal of the China Railway Society *
WANG Tingting: "A deep learning object detection system based on airborne downward-looking images", Information Science and Technology *
HU Congkun et al.: "License plate recognition using multi-task cascaded convolutional neural networks", Technological Development of Enterprise *
DONG Amei: "Research on defect detection and classification algorithms for yarn-dyed fabric based on convolutional neural networks", China Master's Theses Full-text Database (Information Science and Technology) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008619A (en) * | 2020-01-19 | 2020-04-14 | 南京智莲森信息技术有限公司 | High-speed rail contact net support number plate detection and identification method based on deep semantic extraction |
CN111898481A (en) * | 2020-07-14 | 2020-11-06 | 济南信通达电气科技有限公司 | State identification method and device for pointer type opening and closing indicator |
CN114462646A (en) * | 2022-03-15 | 2022-05-10 | 成都中轨轨道设备有限公司 | Pole number plate identification method and system based on contact network safety inspection |
CN114462646B (en) * | 2022-03-15 | 2022-11-15 | 成都中轨轨道设备有限公司 | Pole number plate identification method and system based on contact network safety inspection |
Also Published As
Publication number | Publication date |
---|---|
CN110598665B (en) | 2022-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6360802B2 (en) | Neural network processing device, neural network processing method, detection device, detection method, and vehicle | |
KR102516360B1 (en) | A method and apparatus for detecting a target | |
CN110598665B (en) | Pole number identification method based on vehicle-mounted mobile deep learning platform | |
CN106650740B (en) | A kind of licence plate recognition method and terminal | |
CN108830196A (en) | Pedestrian detection method based on feature pyramid network | |
CN110991444B (en) | License plate recognition method and device for complex scene | |
CN106845487A (en) | A kind of licence plate recognition method end to end | |
CN108108746A (en) | License plate character recognition method based on Caffe deep learning frames | |
CN107871102A (en) | A kind of method for detecting human face and device | |
JP2014089626A (en) | Image detection device and control program and image detection method | |
CN112381061B (en) | Facial expression recognition method and system | |
CN110929795B (en) | Method for quickly identifying and positioning welding spot of high-speed wire welding machine | |
CN109886159B (en) | Face detection method under non-limited condition | |
CN110781962B (en) | Target detection method based on lightweight convolutional neural network | |
CN106575362A (en) | Object selection based on region of interest fusion | |
CN112766184A (en) | Remote sensing target detection method based on multi-level feature selection convolutional neural network | |
KR102217020B1 (en) | Object detection device in very high-resolution aerial images baseo om single-stage digh-density pyramid feature network | |
CN105279770A (en) | Target tracking control method and device | |
CN114332942A (en) | Night infrared pedestrian detection method and system based on improved YOLOv3 | |
CN114022837A (en) | Station left article detection method and device, electronic equipment and storage medium | |
CN113902965A (en) | Multi-spectral pedestrian detection method based on multi-layer feature fusion | |
CN113963333B (en) | Traffic sign board detection method based on improved YOLOF model | |
CN111507353A (en) | Chinese field detection method and system based on character recognition | |
CN113902044B (en) | Image target extraction method based on lightweight YOLOV3 | |
CN111738088B (en) | Pedestrian distance prediction method based on monocular camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20220909 |