CN111091565A - Self-adaptive motion characteristic matching and recognition bow net contact point detection method - Google Patents

Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Info

Publication number
CN111091565A
Authority
CN
China
Prior art keywords
contact point
bow net
pantograph
net contact
catenary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010000633.8A
Other languages
Chinese (zh)
Other versions
CN111091565B (en)
Inventor
权伟
刘跃平
邹栋
周宁
张卫华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202010000633.8A
Publication of CN111091565A
Application granted
Publication of CN111091565B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/0008: Industrial image inspection checking presence/absence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection

Abstract

The invention provides a pantograph-catenary contact point detection method based on adaptive motion feature matching and recognition, and relates to the technical field of railway pantograph-catenary detection and intelligent monitoring. Pantograph-catenary images captured by a roof-mounted monitoring camera form an image library; a bounding box centered on the contact point is marked in each image, and the images together with their annotations form a contact point data set. A contact point detection network is constructed and trained with this data set. Contact point matching is completed by a minimum-distance criterion, and a contact point information set is built. The motion feature sequence of each contact point is then analyzed to judge whether the number of lateral velocity direction changes and the ranges of the lateral and longitudinal coordinates satisfy the identification conditions, and the detection network is learned online from the recognized real contact point data, so that the whole process remains adaptive to the environment.

Description

Self-adaptive motion characteristic matching and recognition bow net contact point detection method
Technical Field
The invention relates to the technical fields of railway pantograph-catenary detection and monitoring, pattern recognition, and intelligent systems.
Background Art
The pantograph-catenary system is a key component of the traction power supply system of electric locomotives, and the pantograph-catenary contact point is an important monitoring object that reflects the operating state of the system. Studying a computer-vision-based contact point detection technique that achieves real-time and accurate monitoring is therefore of great significance for raising the automation and intelligence level of pantograph-catenary inspection and for ensuring safe and stable operation.
At present, much research on non-contact pantograph-catenary detection has been carried out at home and abroad. In 2010, the China Academy of Railway Sciences proposed an image-processing-based method in which a high-definition camera mounted on the vehicle roof acquires pantograph-catenary images and computes the related parameters; only one camera is used, so the detection setup is simple, but the accuracy still needs to be improved. In 2012, Liu et al. installed an array camera and structured light on the roof of an inspection vehicle and performed on-board dynamic measurement based on line-structured-light vision measurement; this method is accurate, stable, and reliable and is widely used on lines where the detection speed requirement is not high, but a single measurement requires a large amount of image data and places extremely high demands on image acquisition and processing, so it has limitations in high-speed dynamic measurement. In 2014, Aydin et al. detected image edges with the Canny algorithm and extracted the contact point position with the Hough transform, but the algorithm cannot identify fine cracks under high-speed operating conditions and is easily limited by the shooting angle, partial occlusion, and so on. In 2017, Karakose obtained the edge information of the pantograph and the contact wire separately through Canny edge detection, located the contact point by computing the intersection of straight lines found by Hough line detection, divided the pantograph contact surface into three areas (fault, danger, and safety), and judged the area containing the contact point to diagnose the possible fault type. In 2018, Shen et al. combined template matching with a target tracking algorithm: template matching narrows the detection range of the pantograph-catenary action region, the classical KCF tracker then tracks the target rectangle accurately, the contact point position is finally computed, and three-dimensional coordinate reconstruction and analysis are achieved with binocular parameters. In 2019, Huang et al. studied contact point detection based on infrared images, detecting the pantograph and the catenary with two directional enhancement operators and locating the contact point with an improved RANSAC strategy. Also in 2019, Luo et al. proposed an improved Fast R-CNN for pantograph fault detection, adjusting its parameters to guarantee the localization accuracy of the candidate boxes and the accuracy of the algorithm. For contact point detection, these methods still need improvement in detection accuracy, real-time performance, and adaptability to changes in the pantograph-catenary operating environment.
In view of this, it is necessary to develop a new pantograph-catenary contact point detection method.
Disclosure of Invention
The invention aims to provide a pantograph-catenary contact point detection method with adaptive motion feature matching and recognition, which effectively solves the technical problem of stable, long-term, real-time detection and positioning of pantograph-catenary contact points.
The purpose of the invention is achieved by the following technical scheme: a pantograph-catenary contact point detection method with adaptive motion feature matching and recognition, comprising the following steps.
step one, constructing a pantograph-catenary contact point data set:
Pantograph-catenary images captured by a railway locomotive roof monitoring camera form an image library. A bounding box centered on the contact point is marked in each image by manual annotation, and all pantograph-catenary images together with their annotation results form the contact point data set.
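As a purely illustrative aid (not part of the claimed method), the sketch below shows one way such center-point annotations could be converted into the normalized label files commonly used to train YOLO-style detectors. The CSV layout, column names, and fixed 64x64 box size are assumptions introduced only for this example.

```python
# Hypothetical sketch: convert manual center-point annotations into YOLO-format
# label files. Assumed input: a CSV with columns image_path,cx,cy giving the
# pixel coordinates of the contact point, plus a fixed box size around it.
import csv
from pathlib import Path

from PIL import Image

BOX_W, BOX_H = 64, 64      # assumed bounding-box size in pixels
CLASS_ID = 0               # single class: pantograph-catenary contact point

def convert(csv_path: str, label_dir: str) -> None:
    Path(label_dir).mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            with Image.open(row["image_path"]) as img:
                w, h = img.size
            # YOLO label format: class cx cy bw bh, all normalized to [0, 1]
            cx, cy = float(row["cx"]) / w, float(row["cy"]) / h
            bw, bh = BOX_W / w, BOX_H / h
            out = Path(label_dir) / (Path(row["image_path"]).stem + ".txt")
            out.write_text(f"{CLASS_ID} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}\n")

if __name__ == "__main__":
    convert("annotations.csv", "labels")
```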
step two, constructing and training a pantograph-catenary contact point detection network:
A contact point detection network is constructed on the widely used yolov3 object detection architecture and then trained with the contact point data set constructed in step one, so that it has an initial capability of detecting pantograph-catenary contact points.
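By way of example only, the following sketch trains a YOLOv3-style single-class detector with the ultralytics Python package. The package, the pretrained weight name yolov3u.pt, the dataset YAML file, and the hyperparameters are assumptions; the patent only specifies that a yolov3 object detection architecture is used.

```python
# Hypothetical sketch: train a YOLOv3-style contact point detector with the
# ultralytics package (an assumed tooling choice, not specified by the patent).
from ultralytics import YOLO

def train_detector() -> YOLO:
    # "yolov3u.pt" and "pantograph_contact.yaml" are assumed names; the YAML
    # would point at the image/label folders of the contact point data set.
    model = YOLO("yolov3u.pt")
    model.train(data="pantograph_contact.yaml", epochs=100, imgsz=640, batch=16)
    return model

if __name__ == "__main__":
    detector = train_detector()
    # detector.predict(frame) would then yield candidate contact point boxes.
```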
step three, initializing the pantograph-catenary contact point detection network:
Let M denote the number of contact points detected last time, initially M = 0, and let N denote the number of currently detected contact points, initially N = 0.
Let Q = {L_i} denote the pantograph-catenary contact point information set, where i ≤ 30. L_i = {f_j^i} denotes the motion feature sequence of the i-th contact point, where j ≤ 300, and f_j^i = (d_i, x_j^i, y_j^i, vx_j^i, vy_j^i, h_i) denotes the motion feature of the i-th contact point at time j. Here d_i is the number of the contact point; x_j^i, y_j^i, vx_j^i, and vy_j^i are its abscissa, ordinate, lateral velocity, and longitudinal velocity at time j; and h_i indicates whether the contact point is real: h_i = 1 means a real contact point and h_i = 0 a false one. Initially h_i = 0 and Q is empty.
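To make the bookkeeping concrete, a minimal Python sketch of Q, L_i, and f_j^i follows; all names are illustrative and chosen only to mirror the symbols defined above.

```python
# Hypothetical sketch of the step-three bookkeeping (illustrative names only).
from dataclasses import dataclass, field
from typing import Dict, List

MAX_POINTS = 30      # i <= 30
MAX_FEATURES = 300   # j <= 300

@dataclass
class MotionFeature:         # f_j^i
    d: int                   # contact point number d_i
    x: float                 # abscissa at time j
    y: float                 # ordinate at time j
    vx: float                # lateral velocity
    vy: float                # longitudinal velocity
    h: int = 0               # authenticity flag: 1 real, 0 false (initially 0)

@dataclass
class ContactTrack:          # L_i: motion feature sequence of the i-th contact point
    features: List[MotionFeature] = field(default_factory=list)

# Q: contact point information set, keyed here by the contact point number d_i
Q: Dict[int, ContactTrack] = {}
M = 0   # number of contact points detected last time
N = 0   # number of currently detected contact points
```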
step four, image input:
For real-time processing, a video image acquired by the railway locomotive roof monitoring camera and stored in the pantograph-catenary image library is taken as the input image for contact point detection. For offline processing, the acquired pantograph-catenary video file is decomposed into an image sequence of frames, and the frame images are taken one by one in time order as input images. If the input image is empty, the whole process stops.
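For the offline branch, one possible way to decompose a recorded pantograph-catenary video into frame images is sketched below using OpenCV; the file and directory names are placeholders.

```python
# Hypothetical sketch: decompose a pantograph-catenary video into frame images
# for offline processing. File and directory names are placeholders.
import os

import cv2

def frames(video_path: str):
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                 # empty input image: the process stops
                break
            yield frame
    finally:
        cap.release()

if __name__ == "__main__":
    os.makedirs("frames", exist_ok=True)
    for idx, frame in enumerate(frames("pantograph_video.mp4")):
        cv2.imwrite(f"frames/{idx:06d}.png", frame)
```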
step five, pantograph-catenary contact point detection:
The contact point detection network obtained in step three is applied to the input image for contact point detection, and N is set to the number of currently detected contact points.
If M = 0 and N = 0, Q is set to empty and the process jumps to step four.
If M = 0 and N > 0, the numbers d_i are assigned in order from 1 to 30 according to the coordinates of the currently detected contact points, the coordinates (x_1^i, y_1^i) are set from the detected contact points, j = 1 and h_i = 0 are set, the corresponding motion feature f_1^i is generated and added to L_i, L_i is added to Q, M is set to N, and the process jumps to step four.
If M > 0 and N = 0, M is set to N and the process jumps to step four.
If M > 0 and N > 0, the process jumps to step six.
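A minimal sketch of the step-five branching on M and N follows, continuing the data structures sketched after step three; the format of the detection results (a list of (x, y) centers) and the zero initial velocities are assumptions.

```python
# Hypothetical sketch of the step-five branching, continuing the structures
# sketched after step three. `detections` is assumed to be a list of (x, y)
# contact point centers returned by the detection network.
def step_five(detections, Q, M):
    N = len(detections)
    if M == 0 and N == 0:
        Q.clear()                          # Q is set to empty
        return N, "step_four"
    if M == 0 and N > 0:
        for d, (x, y) in enumerate(detections[:MAX_POINTS], start=1):
            track = ContactTrack()         # new L_i, numbered 1..30
            # initial velocities set to 0 (an assumption)
            track.features.append(MotionFeature(d=d, x=x, y=y, vx=0.0, vy=0.0, h=0))
            Q[d] = track                   # add L_i to Q
        return N, "step_four"              # M := N, back to image input
    if M > 0 and N == 0:
        return N, "step_four"              # M := N, back to image input
    return N, "step_six"                   # M > 0 and N > 0: go to matching
```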
step six, pantograph-catenary contact point matching:
Let A_h = (x_h, y_h) denote the h-th currently detected contact point, where h ≤ N and x_h, y_h are its abscissa and ordinate.
The distances between A_h and the most recent coordinates of all contact points in every L_i of Q are computed, and the motion feature f_j^i with the minimum distance to A_h is taken as the matching result of A_h. From f_j^i and A_h, the new motion feature f_{j+1}^i is generated, whose d_i and h_i are the same as those of f_j^i, and f_{j+1}^i is added to L_i. If j + 1 > 300, the first motion feature of L_i is deleted; otherwise the process jumps to step four.
If N = M, the process jumps to step seven.
If N < M, the unmatched L_i are deleted from Q, and M is then set to N.
If N > M, numbers d_i are assigned to the currently unmatched N - M contact points, their coordinates are set, j = 1 and h_i = 0 are set, the corresponding f_1^i is generated and added to L_i, L_i is added to Q, and M is set to N.
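The matching rule of step six can be sketched as a nearest-neighbour search over the latest coordinates of each track. The sketch below continues the earlier data structures; computing the velocities as the coordinate difference between matched features is an assumption, since the patent does not spell out the velocity update.

```python
# Hypothetical sketch of the step-six minimum-distance matching, continuing the
# earlier structures. Velocities are approximated as the coordinate change
# between matched features (an assumption).
import math

def step_six(detections, Q, M):
    matched = set()
    for (xh, yh) in detections:                        # A_h = (x_h, y_h)
        best_d, best_dist = None, float("inf")
        for d, track in Q.items():
            last = track.features[-1]                  # latest feature of L_i
            dist = math.hypot(xh - last.x, yh - last.y)
            if dist < best_dist:
                best_d, best_dist = d, dist
        if best_d is None:
            continue                                   # Q empty: handled in step five
        last = Q[best_d].features[-1]
        # d_i and h_i are carried over from the matched feature f_j^i
        Q[best_d].features.append(MotionFeature(d=last.d, x=xh, y=yh,
                                                vx=xh - last.x, vy=yh - last.y,
                                                h=last.h))
        if len(Q[best_d].features) > MAX_FEATURES:     # j + 1 > 300
            Q[best_d].features.pop(0)                  # delete the first motion feature
        matched.add(best_d)
    N = len(detections)
    if N < M:                                          # tracks with no match are deleted
        for d in [d for d in Q if d not in matched]:
            del Q[d]
    # when N > M, the unmatched detections would be initialized as new tracks,
    # exactly as in the step-five sketch (omitted here for brevity)
    return N
```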
step seven, pantograph-catenary contact point identification:
The motion feature sequences of the contact points in Q are traversed. If the length of a sequence is not less than 300, the number of times the lateral velocity changes direction is counted; if it is three or more, the contact point corresponding to the sequence is regarded as a candidate contact point and its h_i is set to 1, otherwise the contact point is regarded as a false contact point and its h_i is set to 0. If the length of the sequence is less than 300, no processing is carried out.
For all candidate contact points, the maximum and minimum abscissa and ordinate in the corresponding motion feature sequence are computed. If the difference between the maximum and minimum abscissa is greater than 3/4 of the input image width, or the difference between the maximum and minimum ordinate is greater than 1/4 of the input image height, the candidate is also regarded as a false contact point and its h_i is set to 0; otherwise the candidate is regarded as a real contact point and its h_i is set to 1.
If a real contact point exists, the process jumps to step eight; otherwise it jumps to step four.
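The two identification tests of step seven, the lateral-velocity direction-change count and the coordinate-range check, can be sketched as follows, again continuing the earlier data structures.

```python
# Hypothetical sketch of the step-seven identification rules, continuing the
# earlier structures: a full-length sequence, at least three lateral-velocity
# direction changes, and coordinate ranges within 3/4 of the image width and
# 1/4 of the image height.
def identify_real_points(Q, img_w: int, img_h: int) -> bool:
    any_real = False
    for track in Q.values():
        seq = track.features
        if len(seq) < MAX_FEATURES:
            continue                               # sequence too short: no decision
        vxs = [f.vx for f in seq if f.vx != 0.0]
        changes = sum(1 for a, b in zip(vxs, vxs[1:]) if a * b < 0)
        candidate = changes >= 3                   # lateral velocity direction changes
        xs, ys = [f.x for f in seq], [f.y for f in seq]
        in_range = (max(xs) - min(xs) <= 0.75 * img_w and
                    max(ys) - min(ys) <= 0.25 * img_h)
        real = 1 if (candidate and in_range) else 0
        for f in seq:
            f.h = real                             # mark the whole track real/false
        any_real = any_real or bool(real)
    return any_real                                # True -> proceed to step eight
```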
step eight, online learning of the detection network:
An online training set is generated from the data of the real contact points, online learning is performed on the detection network, and the process jumps to step four.
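A loose sketch of assembling the online training set from the tracks recognized as real is given below; how frames are looked up and how the detector is actually fine-tuned depends on the training framework and is left abstract here.

```python
# Loose sketch of assembling the step-eight online training set from tracks
# recognized as real; frame_lookup and the fine-tuning call itself are assumed
# to be provided by the surrounding system.
def build_online_set(Q, frame_lookup, box_w=64, box_h=64):
    """frame_lookup(j) is assumed to return the image frame for time index j."""
    samples = []
    for track in Q.values():
        if not track.features or track.features[-1].h != 1:
            continue                               # keep only real contact points
        for j, f in enumerate(track.features):
            samples.append((frame_lookup(j), (f.x, f.y, box_w, box_h)))
    return samples                                 # fed to the detector for online learning
```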
Compared with the prior art, the invention has the following advantages and positive effects. The invention provides a pantograph-catenary contact point detection method with adaptive motion feature matching and recognition. A contact point detection network is first constructed on the yolov3 object detection architecture and trained with a data set of manually annotated pantograph-catenary images, giving it an initial contact point detection capability. During real-time detection the network detects contact points in the pantograph-catenary monitoring video images, contact point matching is completed by minimum-distance judgment, and a contact point information set containing the motion feature sequence of each contact point is built; by analyzing these motion feature sequences, the authenticity of each contact point is recognized and the final detection task is completed. The method uses the motion characteristics of the contact points for motion matching and for recognizing real contact points, which improves detection accuracy; it locates possible contact points with a deep-learning object detection network and improves the robustness of real-time detection through online learning on real contact point data. In addition, the method can handle different railway lines and locomotive conditions: in practical application, accurate contact point detection only requires adapting and augmenting the contact point data set to the specific situation and configuring the related parameters appropriately, so the method has strong scene adaptability.
Drawings
FIG. 1 is a technical flow chart of the present invention
Detailed Description
The technical flow chart of the method is shown in FIG. 1. A pantograph-catenary contact point data set and a contact point detection network are first constructed, and the network is trained with the data set so that it has an initial contact point detection capability. During real-time detection the network detects contact points in the pantograph-catenary monitoring video images, contact point matching is completed by minimum-distance judgment, and a contact point information set containing the motion feature sequence of each contact point is built. The authenticity of each contact point is then recognized by analyzing its motion feature sequence, that is, by judging the number of times its lateral velocity changes direction and whether the ranges of its lateral and longitudinal coordinates satisfy the conditions, which achieves accurate contact point detection. At the same time, the detection network is learned online from the recognized real contact point data, so the whole detection process remains adaptive to the environment and the method has high robustness.
Example:
The method can be used for different railway lines and locomotive conditions and can accurately detect pantograph-catenary contact points in various pantograph-catenary monitoring video images.
Specifically, when the method is used to detect contact points, pantograph-catenary images captured by a railway locomotive roof monitoring camera first form an image library; bounding boxes centered on the contact points are then marked by manual annotation, and all images together with their annotation results form the contact point data set. A contact point detection network is constructed on the yolov3 object detection architecture and trained with this data set so that it has an initial contact point detection capability. During real-time detection the network detects contact points in the pantograph-catenary monitoring video images, matching of the contact points is completed by minimum-distance judgment, and a contact point information set containing the motion feature sequence of each contact point is built. By analyzing each motion feature sequence, the number of lateral velocity direction changes and the ranges of the lateral and longitudinal coordinates are judged, the authenticity of the contact points is recognized, and accurate detection of the pantograph-catenary contact points is achieved; at the same time, the detection network is learned online from the recognized real contact point data, so the whole detection process remains adaptive to the environment. The method can handle different railway lines and locomotive conditions: in practical application, accurate detection only requires adapting and augmenting the contact point data set to the specific situation and configuring the related parameters appropriately, so the method has strong scene adaptability.
The method can be implemented by programming in any computer programming language (such as C), and detection system software based on the method can realize real-time pantograph-catenary contact point detection on any PC or embedded system.

Claims (1)

1. A pantograph-catenary contact point detection method with adaptive motion feature matching and recognition, comprising the following steps:
step one, constructing a pantograph-catenary contact point data set:
pantograph-catenary images captured by a railway locomotive roof monitoring camera form an image library, a bounding box centered on the contact point is marked in each image by manual annotation, and all pantograph-catenary images together with their annotation results form the contact point data set;
step two, constructing and training a pantograph-catenary contact point detection network:
a contact point detection network is constructed on the widely used yolov3 object detection architecture and then trained with the contact point data set constructed in step one, so that it has an initial capability of detecting pantograph-catenary contact points;
step three, initializing the pantograph-catenary contact point detection network:
let M be the number of contact points detected last time, initially M = 0, and let N be the number of currently detected contact points, initially N = 0;
let Q = {L_i} denote the pantograph-catenary contact point information set, where i ≤ 30; L_i = {f_j^i} denotes the motion feature sequence of the i-th contact point, where j ≤ 300, and f_j^i = (d_i, x_j^i, y_j^i, vx_j^i, vy_j^i, h_i) denotes the motion feature of the i-th contact point at time j, where d_i is the number of the contact point, x_j^i, y_j^i, vx_j^i, and vy_j^i are its abscissa, ordinate, lateral velocity, and longitudinal velocity at time j, and h_i indicates whether the contact point is real, h_i = 1 meaning a real contact point and h_i = 0 a false one; initially h_i = 0 and Q is empty;
step four, image input:
for real-time processing, a video image acquired by the railway locomotive roof monitoring camera and stored in the pantograph-catenary image library is taken as the input image for contact point detection; for offline processing, the acquired pantograph-catenary video file is decomposed into an image sequence of frames, and the frame images are taken one by one in time order as input images; if the input image is empty, the whole process stops;
step five, pantograph-catenary contact point detection:
the contact point detection network obtained in step three is applied to the input image for contact point detection, and N is set to the number of currently detected contact points;
if M = 0 and N = 0, Q is set to empty and the process jumps to step four;
if M = 0 and N > 0, the numbers d_i are assigned in order from 1 to 30 according to the coordinates of the currently detected contact points, the coordinates (x_1^i, y_1^i) are set from the detected contact points, j = 1 and h_i = 0 are set, the corresponding motion feature f_1^i is generated and added to L_i, L_i is added to Q, M is set to N, and the process jumps to step four;
if M > 0 and N = 0, M is set to N and the process jumps to step four;
if M > 0 and N > 0, the process jumps to step six;
step six, pantograph-catenary contact point matching:
let A_h = (x_h, y_h) denote the h-th currently detected contact point, where h ≤ N and x_h, y_h are its abscissa and ordinate;
the distances between A_h and the most recent coordinates of all contact points in every L_i of Q are computed, and the motion feature f_j^i with the minimum distance to A_h is taken as the matching result of A_h; from f_j^i and A_h the new motion feature f_{j+1}^i is generated, whose d_i and h_i are the same as those of f_j^i, and f_{j+1}^i is added to L_i; if j + 1 > 300, the first motion feature of L_i is deleted, otherwise the process jumps to step four;
if N = M, the process jumps to step seven;
if N < M, the unmatched L_i are deleted from Q and M is then set to N;
if N > M, numbers d_i are assigned to the currently unmatched N - M contact points, their coordinates are set, j = 1 and h_i = 0 are set, the corresponding f_1^i is generated and added to L_i, L_i is added to Q, and M is set to N;
step seven, pantograph-catenary contact point identification:
the motion feature sequences of the contact points in Q are traversed; if the length of a sequence is not less than 300, the number of times the lateral velocity changes direction is counted, and if it is three or more, the contact point corresponding to the sequence is regarded as a candidate contact point and its h_i is set to 1, otherwise the contact point is regarded as a false contact point and its h_i is set to 0; if the length of the sequence is less than 300, no processing is carried out;
for all candidate contact points, the maximum and minimum abscissa and ordinate in the corresponding motion feature sequence are computed; if the difference between the maximum and minimum abscissa is greater than 3/4 of the input image width, or the difference between the maximum and minimum ordinate is greater than 1/4 of the input image height, the candidate is also regarded as a false contact point and its h_i is set to 0, otherwise the candidate is regarded as a real contact point and its h_i is set to 1;
if a real contact point exists, the process jumps to step eight, otherwise it jumps to step four;
step eight, online learning of the detection network:
an online training set is generated from the data of the real contact points, online learning is performed on the detection network, and the process jumps to step four.
CN202010000633.8A 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method Active CN111091565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010000633.8A CN111091565B (en) 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010000633.8A CN111091565B (en) 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Publications (2)

Publication Number Publication Date
CN111091565A true CN111091565A (en) 2020-05-01
CN111091565B CN111091565B (en) 2022-02-08

Family

ID=70399672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010000633.8A Active CN111091565B (en) 2020-01-02 2020-01-02 Self-adaptive motion characteristic matching and recognition bow net contact point detection method

Country Status (1)

Country Link
CN (1) CN111091565B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379921B1 (en) * 2010-01-26 2013-02-19 Verint Systems Ltd. Method and apparatus to determine a region of interest in a video sequence based on floor detection
CN207902448U (en) * 2017-05-10 2018-09-25 中国科学院深圳先进技术研究院 Bow net arcing Systems for optical inspection
US20190130312A1 (en) * 2017-10-27 2019-05-02 Salesforce.Com, Inc. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning
CN108288055A (en) * 2018-03-14 2018-07-17 台州智必安科技有限责任公司 Block of bow collector of electric locomotive based on depth network and placement test and arc method for measuring
CN108573223A (en) * 2018-04-03 2018-09-25 同济大学 A kind of EMU operating environment cognitive method based on bow net video
CN110378897A (en) * 2019-07-25 2019-10-25 中车青岛四方机车车辆股份有限公司 A kind of pantograph running state real-time monitoring method and device based on video

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112985263A (en) * 2021-02-09 2021-06-18 中国科学院上海微系统与信息技术研究所 Method, device and equipment for detecting geometrical parameters of bow net

Also Published As

Publication number Publication date
CN111091565B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN109784333B (en) Three-dimensional target detection method and system based on point cloud weighted channel characteristics
CN102222341A (en) Method and device for detecting motion characteristic point and method and device for detecting motion target
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
US20240013505A1 (en) Method, system, medium, equipment and terminal for inland vessel identification and depth estimation for smart maritime
CN111797684A (en) Binocular vision distance measuring method for moving vehicle
Fischer et al. A feature descriptor for texture-less object representation using 2D and 3D cues from RGB-D data
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
CN111091565B (en) Self-adaptive motion characteristic matching and recognition bow net contact point detection method
Zhao et al. Visual odometry-A review of approaches
CN106408589A (en) Vehicle-mounted overlooking camera based vehicle movement measurement method
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
Zhou et al. BV-Net: Bin-based Vector-predicted Network for tubular solder joint detection
CN116665097A (en) Self-adaptive target tracking method combining context awareness
Wu et al. Adaptive ORB feature detection with a variable extraction radius in RoI for complex illumination scenes
CN113269118B (en) Monocular vision forward vehicle distance detection method based on depth estimation
CN106056599B (en) A kind of object recognition algorithm and device based on Object Depth data
Gajdošech et al. Towards Deep Learning-based 6D Bin Pose Estimation in 3D Scans
CN109063543B (en) Video vehicle weight recognition method, system and device considering local deformation
Yang et al. Locator slope calculation via deep representations based on monocular vision
Sun et al. The study on intelligent vehicle collision-avoidance system with vision perception and fuzzy decision making
Fu et al. Vision based navigation for power transmission line inspection robot
Shafique et al. Computer Vision based Autonomous Navigation in Controlled Environment
Gao et al. A new method for repeated localization and matching of tunnel lining defects
Pan et al. Fast vanishing point estimation based on particle swarm optimization
WO2024044887A1 (en) Vision-based perception system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant