CN111091565B - Self-adaptive motion characteristic matching and recognition bow net contact point detection method - Google Patents

Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Info

Publication number
CN111091565B
CN111091565B (application CN202010000633.8A)
Authority
CN
China
Prior art keywords
contact point
bow net
pantograph
net contact
catenary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010000633.8A
Other languages
Chinese (zh)
Other versions
CN111091565A (en)
Inventor
权伟
刘跃平
邹栋
周宁
张卫华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202010000633.8A priority Critical patent/CN111091565B/en
Publication of CN111091565A publication Critical patent/CN111091565A/en
Application granted granted Critical
Publication of CN111091565B publication Critical patent/CN111091565B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Current-Collector Devices For Electrically Propelled Vehicles (AREA)

Abstract

The invention provides a pantograph-catenary contact point detection method based on adaptive motion feature matching and recognition, and relates to the technical field of railway pantograph-catenary detection and intelligent monitoring. Pantograph-catenary images captured by a roof monitoring camera form an image library; a bounding box centered on the pantograph-catenary contact point is annotated in each image; the images and their annotation results form a contact point data set; a contact point detection network is constructed and trained on this data set. During detection, contact points are matched across frames by minimum-distance judgment and a contact point information set is maintained. The motion feature sequence of each contact point is then analyzed to judge whether the number of changes in the lateral velocity direction of its motion and the variation ranges of its horizontal and vertical coordinates satisfy the required conditions, thereby identifying the real contact points. At the same time the detection network is learned online from the identified real contact point data, so that the whole process remains adaptive to the environment.

Description

Self-adaptive motion characteristic matching and recognition bow net contact point detection method
Technical Field
The invention relates to the technical fields of railway pantograph-catenary detection and monitoring, pattern recognition, and intelligent systems.
Background Art
The pantograph-catenary system is an important component of the traction power supply system of electric locomotives, and the pantograph-catenary contact point is a key monitoring object that reflects the operating state of the system. Research on computer-vision-based contact point detection, which enables real-time and accurate monitoring, is therefore of great significance for raising the automation and intelligence level of pantograph-catenary inspection and for ensuring the safe and stable operation of the pantograph-catenary system.
At present, much research on non-contact pantograph-catenary detection has been carried out at home and abroad. In 2010, the China Academy of Railway Sciences proposed an image-processing-based method in which a high-definition camera mounted on the vehicle roof acquires pantograph-catenary images and computes the relevant parameters; only one camera is used, so the detection structure is simple, but the accuracy still needs to be improved. In 2012, Liu et al. installed an array camera and structured light on the roof of an inspection vehicle and performed on-board dynamic measurement based on line-structured-light vision measurement. This method is accurate, reliable and stable and is widely applied on lines with modest speed requirements, but a single measurement requires a large amount of image data and places extremely high demands on image acquisition and processing, so it has certain limitations for high-speed dynamic measurement. In 2014, Aydin et al. detected image edges with the Canny algorithm and extracted the contact point position with the Hough transform, but the algorithm cannot recognize small cracks under high-speed operating conditions and is easily limited by the shooting angle, partial occlusion and so on. In 2017, Karakose obtained the edge information of the pantograph and the contact wire separately by Canny edge detection, located the contact point as the intersection of straight lines obtained by Hough line detection, divided the contact surface into three regions, namely fault, danger and safety regions, and diagnosed the possible fault type from the region in which the contact point lies. In 2018, Shen et al. combined template matching with target tracking: template matching narrows the detection range of the pantograph-catenary action region, the classical KCF tracker then tracks the target rectangular region accurately, the contact point position is obtained by calculation, and three-dimensional coordinate reconstruction and analysis are achieved with binocular camera parameters. In 2019, Huang et al. studied contact point detection in infrared images, detecting the pantograph and the catenary with two directional enhancement operators and locating the contact point with an improved RANSAC strategy. In 2019, Luo et al. proposed an improved Faster R-CNN for pantograph fault detection and guaranteed the localization accuracy of the candidate boxes and the accuracy of the algorithm by tuning its parameters. For contact point detection, these methods still need to improve detection accuracy, real-time performance, and adaptability to changes in the pantograph-catenary operating environment.
In view of this, a new pantograph-catenary contact point detection method is needed.
Disclosure of Invention
The invention aims to provide a pantograph-catenary contact point detection method based on adaptive motion feature matching and recognition, which effectively solves the technical problem of stable, long-term, real-time detection and localization of pantograph-catenary contact points.
The purpose of the invention is realized by the following technical scheme. A pantograph-catenary contact point detection method based on adaptive motion feature matching and recognition comprises the following steps:
Step one, constructing a pantograph-catenary contact point data set:
Pantograph-catenary images captured by the roof monitoring camera of a railway locomotive form a pantograph-catenary image library; a bounding box centered on the contact point is marked in each image by manual annotation; all pantograph-catenary images and their annotation results form the pantograph-catenary contact point data set;
Step two, constructing and training the pantograph-catenary contact point detection network:
A contact point detection network is constructed on the widely used yolov3 object detection network architecture and is then trained with the contact point data set constructed in step one, so that the network acquires an initial capability to detect pantograph-catenary contact points;
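As an illustration of steps one and two, the Python sketch below converts manually marked, contact-point-centred bounding boxes into the one-class, YOLO-style label files commonly used to train a yolov3-type detector. The JSON record layout (fields "image", "width", "height", "cx", "cy", "w", "h") and all names are assumptions made for the example, not part of the patent.

```python
import json
import pathlib

def write_yolo_labels(annotations_json: str, out_dir: str) -> None:
    """Convert manually marked, contact-point-centred boxes into one-class,
    YOLO-style label files (class id 0 = contact point)."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    records = json.loads(pathlib.Path(annotations_json).read_text())
    for rec in records:
        # normalise the centre-based bounding box to the [0, 1] range
        line = "0 {:.6f} {:.6f} {:.6f} {:.6f}\n".format(
            rec["cx"] / rec["width"], rec["cy"] / rec["height"],
            rec["w"] / rec["width"], rec["h"] / rec["height"])
        (out / (pathlib.Path(rec["image"]).stem + ".txt")).write_text(line)
```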
Step three, initialization for pantograph-catenary contact point detection:
Let M be the number of contact points detected at the previous moment, initially M = 0; let N be the number of currently detected contact points, initially N = 0;
Let Q = {L_i} denote the pantograph-catenary contact point information set, where i ≤ 30; L_i = {f_j^i} denotes the motion feature sequence of the i-th contact point, where j ≤ 300; f_j^i = (d_i, x_j^i, y_j^i, vx_j^i, vy_j^i, h_i) denotes the motion feature of the i-th contact point at time j, where d_i is the index of the contact point; x_j^i, y_j^i, vx_j^i and vy_j^i are respectively the abscissa, ordinate, lateral velocity and longitudinal velocity of the contact point at time j; h_i indicates the authenticity of the contact point: when h_i = 1 the contact point is a real contact point, and when h_i = 0 it is a false contact point; initially h_i = 0 and Q is empty;
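A minimal Python sketch of the bookkeeping structures Q, L_i and f_j^i used from step three onward is given below; the class and field names are illustrative and chosen only for readability.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MotionFeature:
    """f_j^i: motion feature of the i-th contact point at time j."""
    d: int      # contact point index d_i (1..30)
    x: float    # abscissa x_j^i
    y: float    # ordinate y_j^i
    vx: float   # lateral velocity vx_j^i
    vy: float   # longitudinal velocity vy_j^i
    h: int      # authenticity flag h_i: 1 = real, 0 = false (initially 0)

@dataclass
class ContactPointTrack:
    """L_i: motion feature sequence of the i-th contact point (at most 300 entries)."""
    d: int
    features: List[MotionFeature] = field(default_factory=list)

Q: List[ContactPointTrack] = []   # contact point information set, initially empty
M: int = 0                        # contact points detected at the previous moment
N: int = 0                        # currently detected contact points
```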
Step four, image input:
In the real-time case, a video image acquired by the roof monitoring camera of the railway locomotive and stored in the pantograph-catenary image library is extracted as the input image for contact point detection; in the offline case, the acquired pantograph-catenary video file is decomposed into an image sequence consisting of a number of frames, and the frame images are extracted one by one in time order as input images; if the input image is empty, the whole procedure stops;
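For the offline case, the following sketch decomposes a pantograph-catenary video file into an ordered frame sequence with OpenCV; the function name is an assumption of the example.

```python
import cv2  # opencv-python

def frame_iterator(video_path: str):
    """Offline case: decompose a pantograph-catenary video file into frames in time order."""
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:        # empty input image: the whole procedure stops
                return
            yield frame
    finally:
        cap.release()
```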
Step five, pantograph-catenary contact point detection:
Contact point detection is performed on the input image with the contact point detection network obtained in step three, and N is set to the number of currently detected contact points;
If M = 0 and N = 0, Q is set to empty and the procedure jumps to step four;
If M = 0 and N > 0, the coordinates (x_1^i, y_1^i) are set according to the currently detected contact points, the indices d_i are assigned in order from 1 to 30, vx_1^i = vy_1^i = 0, j = 1 and h_i = 0 are set, f_1^i = (d_i, x_1^i, y_1^i, vx_1^i, vy_1^i, h_i) is generated accordingly and added to L_i, L_i is then added to Q, M is set to N, and the procedure jumps to step four;
If M > 0 and N = 0, M is set to N and the procedure jumps to step four;
If M > 0 and N > 0, the procedure jumps to step six;
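A minimal sketch of the step-five case analysis follows, using the structures sketched after step three; `detections` stands for the (x, y) contact point centres returned by the detection network, and the helper name and return convention are assumptions of the example.

```python
from typing import List, Tuple

def update_on_detection(detections: List[Tuple[float, float]],
                        Q: List[ContactPointTrack], M: int) -> Tuple[int, bool]:
    """Step-five case analysis: returns the new M and whether step six (matching) should run."""
    N = len(detections)
    if M == 0 and N == 0:
        Q.clear()                                  # Q is set to empty, back to step four
        return 0, False
    if M == 0 and N > 0:
        for i, (x, y) in enumerate(detections[:30], start=1):
            track = ContactPointTrack(d=i)         # indices d_i assigned in order 1..30
            track.features.append(MotionFeature(d=i, x=x, y=y, vx=0.0, vy=0.0, h=0))
            Q.append(track)
        return N, False                            # back to step four
    if M > 0 and N == 0:
        return N, False                            # M set to N (= 0), back to step four
    return N, True                                 # M > 0 and N > 0: go to step six
```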
Step six, pantograph-catenary contact point matching:
Let A_h = (x_h, y_h) denote the h-th currently detected contact point, where h ≤ N and x_h, y_h are respectively its abscissa and ordinate;
The distance between A_h and the coordinates at the last moment of every contact point L_i in Q is calculated, the feature f_j^i with the minimum distance to A_h is taken as the match of A_h, and from the matched f_j^i and A_h a new feature f_{j+1}^i = (d_i, x_{j+1}^i, y_{j+1}^i, vx_{j+1}^i, vy_{j+1}^i, h_i) is generated whose d_i and h_i are the same as those of f_j^i; f_{j+1}^i is added to L_i; if j + 1 > 300, the first motion feature of L_i is deleted, otherwise the procedure jumps to step four;
If N = M, the procedure jumps to step seven;
If N < M, the unmatched L_i in Q are deleted, and M is then set to N;
If N > M, the indices d_i of the currently unmatched N − M contact points are set, their coordinates (x_1^i, y_1^i) are set, vx_1^i = vy_1^i = 0, j = 1 and h_i = 0 are set, f_1^i is generated accordingly and added to L_i, L_i is then added to Q, and M is set to N;
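A minimal sketch of the step-six minimum-distance matching follows, again using the structures sketched after step three. The per-frame velocity update (coordinate difference to the previous feature) is an assumption, since the patent does not state how vx and vy are computed, and the deletion of unmatched tracks and creation of new ones are elided.

```python
import math
from typing import List, Tuple

def match_contact_points(detections: List[Tuple[float, float]],
                         Q: List[ContactPointTrack]) -> None:
    """Step-six sketch: match each detected point A_h = (x_h, y_h) to the track whose latest
    coordinates are nearest, append the new motion feature, and cap the sequence at 300."""
    for (xh, yh) in detections:
        # track whose last recorded coordinates are nearest to A_h
        best = min(Q, key=lambda t: math.hypot(xh - t.features[-1].x,
                                               yh - t.features[-1].y))
        prev = best.features[-1]
        best.features.append(MotionFeature(d=prev.d, x=xh, y=yh,
                                           vx=xh - prev.x,   # assumed per-frame velocity
                                           vy=yh - prev.y,
                                           h=prev.h))
        if len(best.features) > 300:               # if j + 1 > 300, drop the first feature
            best.features.pop(0)
```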
Step seven, pantograph-catenary contact point identification:
The motion feature sequences of the contact points in Q are traversed; if the length of a sequence is not less than 300, the number of times the lateral velocity changes direction is counted: if it is three or more, the contact point corresponding to the sequence is regarded as a candidate contact point and its corresponding h_i is set to 1, otherwise the contact point is regarded as a false contact point and its corresponding h_i is set to 0; if the length of the sequence is less than 300, no processing is performed;
For all candidate contact points, the maximum and minimum values of the abscissa and the ordinate in the motion feature sequence corresponding to each candidate are calculated; if the difference between the maximum and minimum abscissa is greater than 3/4 of the input image width, or the difference between the maximum and minimum ordinate is greater than 1/4 of the input image height, the candidate is regarded as a false contact point and its corresponding h_i is set to 0, otherwise it is regarded as a real contact point and its corresponding h_i is set to 1;
If a real contact point exists, the procedure jumps to step eight, otherwise it jumps to step four;
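A minimal sketch of the step-seven authenticity test on each motion feature sequence follows (direction-change count of the lateral velocity, then the coordinate-range checks against 3/4 of the image width and 1/4 of the image height); skipping zero lateral velocities when counting sign changes is an assumption of the example.

```python
from typing import List

def identify_real_contact_points(Q: List[ContactPointTrack],
                                 image_w: int, image_h: int) -> None:
    """Step-seven sketch: set h = 1 for real contact points and h = 0 for false ones."""
    for track in Q:
        seq = track.features
        if len(seq) < 300:
            continue                               # sequence too short: no processing
        vxs = [f.vx for f in seq if f.vx != 0.0]   # ignore zero lateral velocities (assumption)
        changes = sum(1 for a, b in zip(vxs, vxs[1:]) if a * b < 0)
        if changes < 3:
            for f in seq:
                f.h = 0                            # false contact point
            continue
        xs = [f.x for f in seq]
        ys = [f.y for f in seq]
        too_wide = (max(xs) - min(xs)) > 0.75 * image_w   # 3/4 of the image width
        too_tall = (max(ys) - min(ys)) > 0.25 * image_h   # 1/4 of the image height
        real = 0 if (too_wide or too_tall) else 1
        for f in seq:
            f.h = real
```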
Step eight, online learning of the detection network:
An online training set is generated from the data of the real contact points, online learning is performed on the detection network, and the procedure jumps to step four.
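A minimal sketch of assembling the step-eight online training set from the contact points identified as real follows; the square crop size is an assumed parameter, and the actual fine-tuning of the detection network is only indicated by a comment.

```python
from typing import List

def collect_online_samples(Q: List[ContactPointTrack], frame, box_size: int = 64) -> list:
    """Step-eight sketch: build an online training set from the contact points
    identified as real (h = 1)."""
    samples = []
    half = box_size // 2
    for track in Q:
        if track.features and track.features[-1].h == 1:
            f = track.features[-1]
            # pair the current frame with a square box centred on the latest contact point
            samples.append((frame, (f.x - half, f.y - half, f.x + half, f.y + half)))
    # The (image, bounding box) pairs would then be used to fine-tune the detection
    # network for a few iterations before the procedure returns to step four.
    return samples
```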
Compared with the prior art, the invention has the following advantages and positive effects. A pantograph-catenary contact point detection network is first constructed on the yolov3 object detection network structure and trained with a data set of manually annotated pantograph-catenary images, giving it an initial contact point detection capability. During real-time detection, this network detects contact points in the pantograph-catenary monitoring video images, the contact points are matched across frames by minimum-distance judgment, and a contact point information set containing the motion feature sequence of each contact point is maintained; the authenticity of each contact point is then identified by analyzing its motion feature sequence, which completes the final detection task. By using the motion features of the contact points for motion matching and for identifying the real contact points, the method improves detection accuracy; by using a deep-learning object detection network to locate possible contact points and learning online from the real contact point data, it improves the robustness of real-time detection. In addition, the method can handle different railway lines and locomotive conditions: in practical applications, accurate contact point detection can be achieved simply by modifying and augmenting the contact point data set for the specific situation and configuring the relevant parameters appropriately, so the method has strong scene adaptability.
Drawings
FIG. 1 is a technical flow chart of the present invention
Detailed Description
The technical flow of the method of the invention is shown in FIG. 1. A pantograph-catenary contact point data set and a contact point detection network are first constructed, and the network is trained with the data set so that it acquires an initial contact point detection capability. During real-time detection, the network detects contact points in the pantograph-catenary monitoring video images, the contact points are matched by minimum-distance judgment, and a contact point information set containing the motion feature sequence of each contact point is established. The authenticity of each contact point is then identified by analyzing its motion feature sequence, namely by judging whether the number of changes in the lateral velocity direction of its motion and the variation ranges of its horizontal and vertical coordinates satisfy the conditions, which achieves accurate contact point detection. At the same time the detection network is learned online from the identified real contact point data, so the method has high robustness.
Example:
the method can be used for different railway lines and locomotive conditions, and can accurately detect bow net contact points of various bow net monitoring video images.
Specifically, when the method is used to detect contact points, pantograph-catenary images captured by the roof monitoring camera of a railway locomotive first form a pantograph-catenary image library; a bounding box centered on the contact point is marked in each image by manual annotation, and all images with their annotation results form the contact point data set. A contact point detection network is then constructed on the yolov3 object detection network structure and trained with this data set so that it can initially detect contact points. During real-time detection, the network detects contact points in the monitoring video images, the contact points are matched by minimum-distance judgment, and a contact point information set containing the motion feature sequence of each contact point is established. By analyzing each motion feature sequence, that is, by judging the number of changes in the lateral velocity direction and whether the variation ranges of the horizontal and vertical coordinates satisfy the conditions, the authenticity of each contact point is identified, which achieves accurate detection; at the same time the detection network is learned online from the identified real contact point data, so the whole detection process remains adaptive to the environment. The method can handle different railway lines and locomotive conditions: in practice, accurate detection only requires modifying and augmenting the contact point data set for the specific situation and configuring the relevant parameters appropriately, so the method has strong scene adaptability.
The method can be implemented in any programming language (such as C), and detection system software based on it can run real-time pantograph-catenary contact point detection on any PC or embedded system.

Claims (1)

1. A pantograph-catenary contact point detection method based on adaptive motion feature matching and recognition, comprising the following steps:
Step one, constructing a pantograph-catenary contact point data set:
Pantograph-catenary images captured by the roof monitoring camera of a railway locomotive form a pantograph-catenary image library; a bounding box centered on the contact point is marked in each image by manual annotation; all pantograph-catenary images and their annotation results form the pantograph-catenary contact point data set;
Step two, constructing and training the pantograph-catenary contact point detection network:
A contact point detection network is constructed on the widely used yolov3 object detection network architecture and is then trained with the contact point data set constructed in step one, so that the network acquires an initial capability to detect pantograph-catenary contact points;
Step three, initialization for pantograph-catenary contact point detection:
Let M be the number of contact points detected at the previous moment, initially M = 0; let N be the number of currently detected contact points, initially N = 0;
Let Q = {L_i} denote the pantograph-catenary contact point information set, where i ≤ 30; L_i = {f_j^i} denotes the motion feature sequence of the i-th contact point, where j ≤ 300; f_j^i = (d_i, x_j^i, y_j^i, vx_j^i, vy_j^i, h_i) denotes the motion feature of the i-th contact point at time j, where d_i is the index of the contact point; x_j^i, y_j^i, vx_j^i and vy_j^i are respectively the abscissa, ordinate, lateral velocity and longitudinal velocity of the contact point at time j; h_i indicates the authenticity of the contact point: when h_i = 1 the contact point is a real contact point, and when h_i = 0 it is a false contact point; initially h_i = 0 and Q is empty;
Step four, image input:
In the real-time case, a video image acquired by the roof monitoring camera of the railway locomotive and stored in the pantograph-catenary image library is extracted as the input image for contact point detection; in the offline case, the acquired pantograph-catenary video file is decomposed into an image sequence consisting of a number of frames, and the frame images are extracted one by one in time order as input images; if the input image is empty, the whole procedure stops;
Step five, pantograph-catenary contact point detection:
Contact point detection is performed on the input image with the contact point detection network obtained in step three, and N is set to the number of currently detected contact points;
If M = 0 and N = 0, Q is set to empty and the procedure jumps to step four;
If M = 0 and N > 0, the coordinates (x_1^i, y_1^i) are set according to the currently detected contact points, the indices d_i are assigned in order from 1 to 30, vx_1^i = vy_1^i = 0, j = 1 and h_i = 0 are set, f_1^i = (d_i, x_1^i, y_1^i, vx_1^i, vy_1^i, h_i) is generated accordingly and added to L_i, L_i is then added to Q, M is set to N, and the procedure jumps to step four;
If M > 0 and N = 0, M is set to N and the procedure jumps to step four;
If M > 0 and N > 0, the procedure jumps to step six;
Step six, pantograph-catenary contact point matching:
Let A_h = (x_h, y_h) denote the h-th currently detected contact point, where h ≤ N and x_h, y_h are respectively its abscissa and ordinate;
The distance between A_h and the coordinates at the last moment of every contact point L_i in Q is calculated, the feature f_j^i with the minimum distance to A_h is taken as the match of A_h, and from the matched f_j^i and A_h a new feature f_{j+1}^i = (d_i, x_{j+1}^i, y_{j+1}^i, vx_{j+1}^i, vy_{j+1}^i, h_i) is generated whose d_i and h_i are the same as those of f_j^i; f_{j+1}^i is added to L_i; if j + 1 > 300, the first motion feature of L_i is deleted, otherwise the procedure jumps to step four;
If N = M, the procedure jumps to step seven;
If N < M, the unmatched L_i in Q are deleted, and M is then set to N;
If N > M, the indices d_i of the currently unmatched N − M contact points are set, their coordinates (x_1^i, y_1^i) are set, vx_1^i = vy_1^i = 0, j = 1 and h_i = 0 are set, f_1^i is generated accordingly and added to L_i, L_i is then added to Q, and M is set to N;
Step seven, pantograph-catenary contact point identification:
The motion feature sequences of the contact points in Q are traversed; if the length of a sequence is not less than 300, the number of times the lateral velocity changes direction is counted: if it is three or more, the contact point corresponding to the sequence is regarded as a candidate contact point and its corresponding h_i is set to 1, otherwise the contact point is regarded as a false contact point and its corresponding h_i is set to 0; if the length of the sequence is less than 300, no processing is performed;
For all candidate contact points, the maximum and minimum values of the abscissa and the ordinate in the motion feature sequence corresponding to each candidate are calculated; if the difference between the maximum and minimum abscissa is greater than 3/4 of the input image width, or the difference between the maximum and minimum ordinate is greater than 1/4 of the input image height, the candidate is regarded as a false contact point and its corresponding h_i is set to 0, otherwise it is regarded as a real contact point and its corresponding h_i is set to 1;
If a real contact point exists, the procedure jumps to step eight, otherwise it jumps to step four;
Step eight, online learning of the detection network:
An online training set is generated from the data of the real contact points, online learning is performed on the detection network, and the procedure jumps to step four.
CN202010000633.8A 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method Active CN111091565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010000633.8A CN111091565B (en) 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Publications (2)

Publication Number Publication Date
CN111091565A CN111091565A (en) 2020-05-01
CN111091565B true CN111091565B (en) 2022-02-08

Family

ID=70399672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010000633.8A Active CN111091565B (en) 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Country Status (1)

Country Link
CN (1) CN111091565B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112985263B (en) * 2021-02-09 2022-09-23 中国科学院上海微系统与信息技术研究所 Method, device and equipment for detecting geometrical parameters of bow net

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379921B1 (en) * 2010-01-26 2013-02-19 Verint Systems Ltd. Method and apparatus to determine a region of interest in a video sequence based on floor detection
CN108288055A (en) * 2018-03-14 2018-07-17 台州智必安科技有限责任公司 Block of bow collector of electric locomotive based on depth network and placement test and arc method for measuring
CN108573223A (en) * 2018-04-03 2018-09-25 同济大学 A kind of EMU operating environment cognitive method based on bow net video
CN207902448U (en) * 2017-05-10 2018-09-25 中国科学院深圳先进技术研究院 Bow net arcing Systems for optical inspection
CN110378897A (en) * 2019-07-25 2019-10-25 中车青岛四方机车车辆股份有限公司 A kind of pantograph running state real-time monitoring method and device based on video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11562287B2 (en) * 2017-10-27 2023-01-24 Salesforce.Com, Inc. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning

Also Published As

Publication number Publication date
CN111091565A (en) 2020-05-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant