CN117314849A - Contact net abrasion detection method based on deep learning - Google Patents

Contact net abrasion detection method based on deep learning

Info

Publication number
CN117314849A
CN117314849A
Authority
CN
China
Prior art keywords
module
contact net
image
map
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311233941.5A
Other languages
Chinese (zh)
Inventor
曾晓红
李向东
郑殷
谢生波
李奇
钟建
柴正均
席浩洲
钱云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Xinlyuneng Science & Technology Co ltd
Original Assignee
Jiangsu Xinlyuneng Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Xinlyuneng Science & Technology Co ltd filed Critical Jiangsu Xinlyuneng Science & Technology Co ltd
Priority to CN202311233941.5A priority Critical patent/CN117314849A/en
Publication of CN117314849A publication Critical patent/CN117314849A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a contact net abrasion detection method based on deep learning. The method is carried out by a contact net image acquisition module, a vehicle-bottom vibration compensation module and a vehicle-mounted detection host that are installed on rail vehicles such as detection vehicles and maintenance operation vehicles, and it detects the degree of surface abrasion of the contact net in real time while the vehicle is running. The contact net image acquisition module comprises two high-speed industrial cameras, a high-definition imaging trigger module and an auxiliary light source module. The vehicle-mounted detection host comprises an image processing module, a stereo matching module and a point cloud computing module. The host is connected to the high-definition imaging trigger module through a bus, controls the auxiliary light source module and the two industrial cameras to photograph the contact net synchronously, and receives the image data through the image processing module, which performs filtering and noise reduction. The method uses a deep learning algorithm to reconstruct the contact net in three dimensions, so that the degree of abrasion can be observed more intuitively and clearly.

Description

Contact net abrasion detection method based on deep learning
Technical Field
The invention relates to a contact net abrasion detection method based on deep learning, and belongs to the technical field of contact net state detection.
Background
As of December 31, 2020, the operating mileage of high-speed railways in China had reached 38,000 km, and the annual passenger volume was about 2.36 billion, accounting for 64.4% of total railway passenger traffic. The focus of high-speed rail development in China is gradually shifting from construction to operation and maintenance, and the performance and maintenance quality of high-speed railways are continuously improving. This places higher demands on the safe and reliable power supply of the overhead contact system in the traction power supply system, and drives railway inspection toward rapid, automated and intelligent methods. Because the pantograph and the contact net remain in high-speed sliding contact while the train runs, long-term operation is likely to wear the surface of the contact line. If the degree of abrasion is not detected in time and the contact line of a severely worn section is not replaced, the power supply between the pantograph and the contact net may become unstable; in serious cases an electric arc can be generated, burning out the contact net and even paralyzing the whole railway transportation line. Real-time detection of contact net abrasion is therefore necessary to ensure the safety and reliability of the contact net power supply.
Contact net abrasion detection methods fall mainly into three categories: manual detection, non-contact detection and non-contact image vision detection. Because the detection workload is huge and the error range of manually measured data cannot be guaranteed, non-contact detection has a clear advantage for abrasion detection. Compared with the other two schemes, machine-vision-based contact net detection uses industrial cameras to emulate binocular vision, performing edge detection and three-dimensional reconstruction of the contact net to achieve a 3D visualization of its surface. It offers high detection flexibility, a high degree of equipment intelligence and little interference with normal traffic, and has therefore been widely applied.
For the above reasons, in order to meet the requirements of contact net abrasion detection and maintenance, it is very necessary to design a contact net abrasion detection method based on deep learning.
Disclosure of Invention
In order to achieve the above purpose, the invention adopts the following technical scheme: a contact net abrasion detection method based on deep learning comprises the following steps:
s100, initializing parameters of each module of a contact net abrasion detection system;
s200, a vehicle-mounted detection host is connected with a high-definition imaging trigger module through a bus, and an auxiliary light source module and two high-speed industrial cameras are controlled to synchronously shoot image data of a contact net;
s300, the vehicle-mounted detection host receives data through the image processing module, image smoothing processing and filtering noise reduction are achieved, and three-dimensional reconstruction of the overhead line system is achieved through the three-dimensional matching module and the point cloud computing module based on the STransMNet algorithm;
s400, the vehicle-mounted detection host machine corrects the model through the vibration compensation module, compares the parameters of the database catenary, and realizes abrasion degree assessment;
the contact net abrasion detection system comprises a contact net image acquisition module, a vehicle bottom vibration compensation module and a vehicle-mounted detection host;
the overhead line system image acquisition module comprises two high-speed linear array CCD cameras, an auxiliary light source and high-definition imaging trigger equipment;
the vibration compensation module consists of a laser radar, and the measurement of the offset angle is realized through polygon prism scattering scanning. The vehicle-mounted detection host comprises an image processing module, a stereo matching module and a point cloud computing module, and is connected with the modules through buses.
The Gaussian filtering noise reduction of the image is realized by the image processing module.

The Gaussian filter can be expressed as:

G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left( -\frac{x^{2} + y^{2}}{2\sigma^{2}} \right)

The first directional derivative of G(x, y) is:

G_{n} = \frac{\partial G}{\partial \mathbf{n}} = \mathbf{n} \cdot \nabla G

where ∇G denotes the gradient vector and n denotes the direction vector.

G_{n} is convolved with the image f(x, y) and the direction of n is adjusted; n is obtained when it is orthogonal to the edge direction, i.e. when the response of G_{n} * f(x, y) is maximal;
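For illustration only (this sketch is not part of the claimed method), the Gaussian smoothing and directional-derivative step described above can be written in Python with OpenCV and NumPy; the function name and the file name in the usage comment are assumptions:

```python
import cv2
import numpy as np

def gaussian_denoise_and_edges(image: np.ndarray, sigma: float = 1.4):
    """Illustrative sketch of the Gaussian filtering step described above.

    The image f(x, y) is smoothed with a Gaussian kernel G(x, y); the first
    derivatives of the smoothed image then give the gradient, whose direction
    n is orthogonal to the local edge direction.
    """
    # Gaussian smoothing: convolution of f(x, y) with G(x, y).
    smoothed = cv2.GaussianBlur(image, ksize=(0, 0), sigmaX=sigma)

    # First derivatives of the smoothed image (equivalent to convolving the
    # image with the directional derivative of the Gaussian).
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)

    magnitude = np.hypot(gx, gy)    # |grad(G * f)|: edge response
    direction = np.arctan2(gy, gx)  # n: direction normal to the edge
    return smoothed, magnitude, direction

# Example usage on a grayscale catenary image (file name is hypothetical):
# img = cv2.imread("catenary_frame.png", cv2.IMREAD_GRAYSCALE)
# smoothed, mag, ang = gaussian_denoise_and_edges(img)
```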
the stereo matching module based on the STransMNet algorithm comprises: the device comprises a swing transformation module for extracting left and right image characteristics, an optimal transmission module for matching cost constraint, a modification-WTA module for parallax map regression, and a layer adjustment module for optimizing a regression parallax map;
the stereo matching based on the STransMNet algorithm obtains a disparity map, and the method comprises the following steps:
s301, feature extraction of left and right images is achieved through a swing transformation module, shallow features can contain locally rich detail information, and deep features contain global associated information;
s302, calculating feature similarity through correlation calculation to obtain a matching cost body;
s303, through characteristic differentiation loss, the capability of model training on detailed attention is improved, and specific loss L is reduced diff It can be calculated as:
wherein: h represents feature height; l (L) ploe,j Represents a j-th line loss; z p,i Representing the ith feature on the pole line; y is i Is z p,i A corresponding tag; sigma is calculated for softmax;
then construct a loss function:
L=w 1 L d1,r +w 2 L d1,f +w 3 L be,f +w 4 L rr +w 5 L diff
wherein: w (w) 1 -w 5 Representing loss function weights; l (L) d1,r And L is equal to d1,f Average SmoothL1 loss of the sub-resolution to original resolution disparity map; l (L) be,f Is the cross entropy loss of the occlusion map, which characterizes the error between the predicted occlusion map and the real occlusion map:
wherein z is occ Representing the occlusion part, z noc Represents the non-shielding part, M and N respectively represent the number of pixels of the corresponding region of the region, L rr Representing the relative response loss:
wherein t is i Representing elements in the set of matching pixels in the matching matrix T,representing elements in the set of pixels that are occluded in the matching matrix T resulting in a mismatch, N T And M is as follows T Respectively represent the two setsIs the total number of (3);
s304, carrying out unique constraint on the matching cost through an optimal transmission module;
s305, generating a sub-resolution parallax map by utilizing modified-WTA regression, and recovering the sub-resolution parallax map to the parallax map of the original image resolution through up-sampling;
s306, fusing the left image with the restored parallax map and the shielding map information through the map layer adjusting module to generate an optimized parallax map and an optimized shielding map;
the point cloud computing module is combined with the parallax map generated by the three-dimensional matching module to realize three-dimensional reconstruction of the overhead line system:
the point cloud module converts the parallax image generated by the stereo matching module into a depth image:
wherein D and D represent depth and time difference, respectively, B represents a base line length, f (x, y) represents pixel units, x 0l And x 0r Representing the column coordinates of the main points of the left and right views, respectively.
Calculating a point cloud under a camera coordinate system through a depth map to realize three-dimensional reconstruction:
Z=D
where x, y are the column and row coordinates of the pixel.
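As a minimal sketch of the disparity-to-depth conversion and back-projection formulas above (the calibration values in the usage comment are made-up examples, not from the patent):

```python
import numpy as np

def disparity_to_point_cloud(disp, f, B, x0l, x0r, y0):
    """Convert a disparity map to a point cloud in the left-camera frame.

    disp : HxW disparity map in pixels
    f    : focal length in pixels
    B    : baseline length (e.g. in millimetres)
    x0l, x0r : column coordinates of the left/right principal points
    y0   : row coordinate of the left principal point
    """
    h, w = disp.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))

    # Depth from disparity, correcting for the principal-point offset.
    d = disp - (x0l - x0r)
    valid = d > 0                                   # drop invalid disparities
    D = np.where(valid, B * f / np.where(valid, d, 1.0), 0.0)

    # Back-projection: X = (x - x0l)*D/f, Y = (y - y0)*D/f, Z = D.
    X = (x - x0l) * D / f
    Y = (y - y0) * D / f
    Z = D
    return np.stack([X, Y, Z], axis=-1)             # HxWx3 point cloud

# Example with made-up calibration values:
# cloud = disparity_to_point_cloud(disp, f=2000.0, B=120.0,
#                                  x0l=1024.0, x0r=1020.0, y0=512.0)
```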
The resolution of the high-speed linear array CCD camera reaches 4K, the line-scan rate reaches 26 kHz, and the auxiliary light source adopts LEDs to provide high-quality illumination.
Compared with the prior art, the invention has the following beneficial effects: 1. Compared with traditional manual contact net abrasion detection, the method has higher detection efficiency and avoids the instability of manual measurement errors. 2. Compared with a laser radar scheme, the method is simpler and more economical. 3. Compared with a monocular detection method, the recognition rate is not a limitation, because in principle the method does not need to recognize first and then calculate; it measures all image objects directly, reducing the dependence on database samples. 4. The method realizes real-time three-dimensional reconstruction of the contact net based on deep learning, with higher precision and a wider range of applicable scenes.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and other drawings can be derived from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic diagram of a contact net abrasion detection flow chart of the present invention.
Fig. 2 is a layout diagram of the contact net abrasion detection devices on the vehicle.
Fig. 3 is a diagram of the contact net image acquisition module according to the present invention.
Fig. 4 is a stereo matching flow chart based on STransMNet according to the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific embodiments. Although the application is described in further detail in this way, the invention is not limited to these embodiments; all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present disclosure.
In one embodiment, as shown in fig. 1, the present invention proposes a contact net abrasion detection method based on deep learning.
The specific implementation comprises the following steps:
s100, initializing parameters of each module of a contact net abrasion detection system;
s200, a vehicle-mounted detection host is connected with a high-definition imaging trigger module through a bus, and an auxiliary light source module and two high-speed industrial cameras are controlled to synchronously shoot image data of a contact net;
s300, the vehicle-mounted detection host receives data through the image processing module, image smoothing processing and filtering noise reduction are achieved, and three-dimensional reconstruction of the overhead line system is achieved through the three-dimensional matching module and the point cloud computing module based on the STransMNet algorithm;
s400, the vehicle-mounted detection host machine corrects the model through the vibration compensation module, compares the parameters of the database catenary, and realizes abrasion degree assessment;
the contact net abrasion detection system comprises a contact net image acquisition module, a vehicle bottom vibration compensation module and a vehicle-mounted detection host;
the overhead line system image acquisition module comprises two high-speed linear array CCD cameras, an auxiliary light source and high-definition imaging trigger equipment;
the vibration compensation module consists of a laser radar, and the measurement of the offset angle is realized through polygon prism scattering scanning. The vehicle-mounted detection host comprises an image processing module, a stereo matching module and a point cloud computing module, and is connected with the modules through buses.
The Gaussian filtering noise reduction of the image is realized by the image processing module.

The Gaussian filter can be expressed as:

G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left( -\frac{x^{2} + y^{2}}{2\sigma^{2}} \right)

The first directional derivative of G(x, y) is:

G_{n} = \frac{\partial G}{\partial \mathbf{n}} = \mathbf{n} \cdot \nabla G

where ∇G denotes the gradient vector and n denotes the direction vector.

G_{n} is convolved with the image f(x, y) and the direction of n is adjusted; n is obtained when it is orthogonal to the edge direction, i.e. when the response of G_{n} * f(x, y) is maximal;
the stereo matching module based on the STransMNet algorithm comprises: the device comprises a swing transformation module for extracting left and right image characteristics, an optimal transmission module for matching cost constraint, a modification-WTA module for parallax map regression, and a layer adjustment module for optimizing a regression parallax map;
the stereo matching based on the STransMNet algorithm obtains a disparity map, and the method comprises the following steps:
s301, feature extraction of left and right images is achieved through a swing transformation module, shallow features can contain locally rich detail information, and deep features contain global associated information;
s302, calculating feature similarity through correlation calculation to obtain a matching cost body;
s303, through characteristic differentiation loss, the capability of model training on detailed attention is improved, and specific loss L is reduced diff It can be calculated as:
wherein: h represents feature height; l (L) ploe,j Represents a j-th line loss; z p,i Representing the ith feature on the pole line; y is i Is z p,i A corresponding tag; sigma is calculated for softmax;
then construct a loss function:
L=w 1 L d1,r +w 2 L d1,f +w 3 L be,f +w 4 L rr +w 5 L diff
wherein: w (w) 1 -w 5 Representing loss function weights; l (L) d1,r And L is equal to d1,f Sub-divisionAverage SmoothL1 loss of resolution versus original resolution disparity map; l (L) be,f Is the cross entropy loss of the occlusion map, which characterizes the error between the predicted occlusion map and the real occlusion map:
wherein z is occ Representing the occlusion part, z noc Represents the non-shielding part, M and N respectively represent the number of pixels of the corresponding region of the region, L rr Representing the relative response loss:
wherein t is i Representing elements in the set of matching pixels in the matching matrix T,representing elements in the set of pixels that are occluded in the matching matrix T resulting in a mismatch, N T And M is as follows T Respectively representing the total number of the two sets;
s304, carrying out unique constraint on the matching cost through an optimal transmission module;
s305, generating a sub-resolution parallax map by utilizing modified-WTA regression, and recovering the sub-resolution parallax map to the parallax map of the original image resolution through up-sampling;
s306, fusing the left image with the restored parallax map and the shielding map information through the map layer adjusting module to generate an optimized parallax map and an optimized shielding map;
the point cloud computing module is combined with the parallax map generated by the three-dimensional matching module to realize three-dimensional reconstruction of the overhead line system:
the point cloud module converts the parallax image generated by the stereo matching module into a depth image:
wherein D and D represent depth and time difference, respectively, B represents a base line length, f (x, y) represents pixel units, x 0l And x 0r Representing the column coordinates of the main points of the left and right views, respectively.
Calculating a point cloud under a camera coordinate system through a depth map to realize three-dimensional reconstruction:
Z=D
where x, y are the column and row coordinates of the pixel.
The resolution of the high-speed linear array CCD camera reaches 4K, the line-scan rate reaches 26 kHz, and the auxiliary light source adopts LEDs to provide high-quality illumination.
The foregoing has shown and described the basic principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions above merely illustrate the principles of the invention, and various changes and improvements may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.

Claims (2)

1. A contact net abrasion detection method based on deep learning, characterized by comprising the following steps:
s100, initializing parameters of each module of a contact net abrasion detection system;
s200, a vehicle-mounted detection host is connected with a high-definition imaging trigger module through a bus, and an auxiliary light source module and two high-speed industrial cameras are controlled to synchronously shoot image data of a contact net;
s300, the vehicle-mounted detection host receives data through the image processing module, image smoothing processing and filtering noise reduction are achieved, and three-dimensional reconstruction of the overhead line system is achieved through the three-dimensional matching module and the point cloud computing module based on the STransMNet algorithm;
s400, the vehicle-mounted detection host machine corrects the model through the vibration compensation module, compares the parameters of the database catenary, and realizes abrasion degree assessment;
the contact net abrasion detection system comprises a contact net image acquisition module, a vehicle bottom vibration compensation module and a vehicle-mounted detection host;
the overhead line system image acquisition module comprises two high-speed linear array CCD cameras, an auxiliary light source and high-definition imaging trigger equipment;
the vibration compensation module consists of a laser radar, and the measurement of the offset angle is realized through polygon prism scattering scanning;
the vehicle-mounted detection host comprises an image processing module, a stereo matching module and a point cloud computing module, and is connected with each module through a bus;
the Gaussian filtering noise reduction of the image is realized by utilizing an image processing module:
the gaussian filter can be expressed as:
the first directional derivative of G (x, y) is:
wherein:representing a gradient vector, n representing a direction vector;
will G n Convolving with the image f (x, y), adjusting the direction of n, and obtaining n when the edge direction is orthogonal;
the stereo matching module based on the STransMNet algorithm comprises: the device comprises a swing transformation module for extracting left and right image characteristics, an optimal transmission module for matching cost constraint, a modification-WTA module for parallax map regression, and a layer adjustment module for optimizing a regression parallax map;
the stereo matching based on the STransMNet algorithm obtains a disparity map, and the method comprises the following steps:
s301, feature extraction of left and right images is achieved through a swing transformation module, shallow features can contain locally rich detail information, and deep features contain global associated information;
s302, calculating feature similarity through correlation calculation to obtain a matching cost body;
s303, through characteristic differentiation loss, the capability of model training on detailed attention is improved, and specific loss L is reduced diff It can be calculated as:
wherein: h represents feature height; l (L) ploe,j Represents a j-th line loss; z p,i Representing the ith feature on the pole line; y is i Is z p,i A corresponding tag; sigma is calculated for softmax;
then construct a loss function:
L=w 1 L d1,r +w 2 L d1,f +w 3 L be,f +w 4 L rr +w 5 L diff
wherein: w (w) 1 -w 5 Representing loss function weights; l (L) d1,r And L is equal to d1,f Average SmoothL1 loss of the sub-resolution to original resolution disparity map; l (L) be,f Is the cross entropy loss of the occlusion map, which characterizes the error between the predicted occlusion map and the real occlusion map:
wherein z is occ Representing the occlusion part, z noc Represents the non-shielding part, M and N respectively represent the number of pixels of the corresponding region of the region, L rr Representing the relative response loss:
wherein t is i Representing elements in the set of matching pixels in the matching matrix T,representing elements in the set of pixels that are occluded in the matching matrix T resulting in a mismatch, N T And M is as follows T Respectively representing the total number of the two sets;
s304, carrying out unique constraint on the matching cost through an optimal transmission module;
s305, generating a sub-resolution parallax map by utilizing modified-WTA regression, and recovering the sub-resolution parallax map to the parallax map of the original image resolution through up-sampling;
s306, fusing the left image with the restored parallax map and the shielding map information through the map layer adjusting module to generate an optimized parallax map and an optimized shielding map;
the point cloud computing module is combined with the parallax map generated by the three-dimensional matching module to realize three-dimensional reconstruction of the overhead line system:
the point cloud module converts the parallax image generated by the stereo matching module into a depth image:
wherein D is equal tod represents depth and time difference, respectively, B represents a base line length, f (x, y) represents a pixel unit, x 0l And x 0r Column coordinates respectively representing main points of the left view and the right view;
calculating a point cloud under a camera coordinate system through a depth map to realize three-dimensional reconstruction:
Z=D
where x, y are the column and row coordinates of the pixel.
2. The contact net abrasion detection method based on deep learning according to claim 1, wherein the resolution of the high-speed linear array CCD camera reaches 4K, the line-scan rate reaches 26 kHz, and the auxiliary light source adopts LEDs to provide high-quality illumination.
CN202311233941.5A 2023-09-23 2023-09-23 Contact net abrasion detection method based on deep learning Pending CN117314849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311233941.5A CN117314849A (en) 2023-09-23 2023-09-23 Contact net abrasion detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311233941.5A CN117314849A (en) 2023-09-23 2023-09-23 Contact net abrasion detection method based on deep learning

Publications (1)

Publication Number Publication Date
CN117314849A 2023-12-29

Family

ID=89254619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311233941.5A Pending CN117314849A (en) 2023-09-23 2023-09-23 Contact net abrasion detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN117314849A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593592A (en) * 2024-01-18 2024-02-23 山东华时数字技术有限公司 Intelligent scanning and identifying system and method for foreign matters at bottom of vehicle
CN117593592B (en) * 2024-01-18 2024-04-16 山东华时数字技术有限公司 Intelligent scanning and identifying system and method for foreign matters at bottom of vehicle

Similar Documents

Publication Publication Date Title
CN107590438A (en) A kind of intelligent auxiliary driving method and system
CN102390370B (en) Stereoscopic vision based emergency treatment device and method for running vehicles
CN107577996A (en) A kind of recognition methods of vehicle drive path offset and system
CN107609486A (en) To anti-collision early warning method and system before a kind of vehicle
CN106503636B (en) A kind of road sighting distance detection method and device of view-based access control model image
US20200041284A1 (en) Map road marking and road quality collecting apparatus and method based on adas system
CN110097591B (en) Bow net state detection method
CN111829549B (en) Snow pavement virtual lane line projection method based on high-precision map
CN110126824A (en) A kind of commercial vehicle AEBS system of integrated binocular camera and millimetre-wave radar
CN103630122B (en) Monocular vision lane line detection method and distance measurement method thereof
CN106970581B (en) A kind of train pantograph real-time intelligent monitoring method and system based on the three-dimensional full visual angle of unmanned aerial vehicle group
CN106871805A (en) vehicle-mounted rail gauge measuring system and measuring method
CN117314849A (en) Contact net abrasion detection method based on deep learning
WO2020103532A1 (en) Multi-axis electric bus self-guiding method
CN102069770A (en) Automobile active safety control system based on binocular stereo vision and control method thereof
CN109917359B (en) Robust vehicle distance estimation method based on vehicle-mounted monocular vision
CN111681283A (en) Monocular stereoscopic vision-based relative pose calculation method applied to wireless charging alignment
CN108189757A (en) A kind of driving safety prompt system
Hautière et al. Road scene analysis by stereovision: a robust and quasi-dense approach
CN107168327A (en) A kind of method of the pilotless automobile greasy weather active hedging based on binocular vision
CN107792052B (en) Someone or unmanned bimodulus steering electric machineshop car
CN110888441B (en) Gyroscope-based wheelchair control system
CN112508893A (en) Machine vision-based method and system for detecting tiny foreign matters between two railway tracks
CN201901101U (en) Automobile active safety control system based on binocular stereo vision
CN115953447A (en) Point cloud consistency constraint monocular depth estimation method for 3D target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination