CN113989775A - Vision-guided auxiliary driving system - Google Patents

Vision-guided auxiliary driving system

Info

Publication number
CN113989775A
Authority
CN
China
Prior art keywords
scene
obstacle
image
visual
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111261161.2A
Other languages
Chinese (zh)
Other versions
CN113989775B (en)
Inventor
李学钧
戴相龙
蒋勇
王晓鹏
何成虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Haohan Information Technology Co ltd
Original Assignee
Jiangsu Haohan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Haohan Information Technology Co ltd filed Critical Jiangsu Haohan Information Technology Co ltd
Priority to CN202111261161.2A priority Critical patent/CN113989775B/en
Publication of CN113989775A publication Critical patent/CN113989775A/en
Application granted granted Critical
Publication of CN113989775B publication Critical patent/CN113989775B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/14 - Digital output to display device; cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a vision-guided driving assistance system. A vision measurement module acquires running images of a rail vehicle through binocular vision acquisition equipment and establishes a binocular vision measurement space based on different two-dimensional coordinate systems. A three-dimensional reconstruction module constructs a visual guidance model through AR technology and projects the image content of the measurement space into the visual guidance model to generate a visual guidance scene. A guiding module imports the visual guidance scene into AR glasses and judges whether an obstacle exists in the scene: when an obstacle exists, the driver's visual direction is guided to the scene video corresponding to the obstacle; when no obstacle exists, the driver's visual direction is guided toward the scene video whose image elements are of high importance, according to the importance of the image elements in the running image.

Description

Vision-guided auxiliary driving system
Technical Field
The invention relates to the technical field of rail transit, in particular to a vision-guided auxiliary driving system.
Background
At present, in the process of driving a rail vehicle, the driver cannot observe the running condition of the vehicle at every moment. Fatigue or inattention may also occur, in which case the view of the external driving environment may be blurred or incomplete, and accidents are then easily encountered, for example: pedestrians appearing on the track, the track directly ahead being covered by obstacles (falling rocks, landslides burying the line), or conditions at the side of the road not being seen clearly.
In the prior art, the video devices of rail vehicles can display the driving state during the journey, but they require the driver to actively observe and analyse the data or videos to judge whether obstacles or abnormal events exist on the track, so the driver cannot effectively observe the external driving environment of the train.
Therefore, a system capable of guiding the driver's vision is needed, one that alerts the driver to potential accidents by guiding the driver's visual direction and thus prevents the consequences of distraction.
Disclosure of Invention
The invention provides a vision-guided auxiliary driving system, which is used to address driver inattention when a driver of a rail vehicle is operating the vehicle.
A visually guided driver assistance system, comprising:
a vision measurement module: used for acquiring running images of a rail vehicle through binocular vision acquisition equipment and establishing a binocular vision measurement space based on different two-dimensional coordinate systems;
a three-dimensional reconstruction module: used for constructing a visual guidance model through AR technology and projecting the image content of the measurement space into the visual guidance model to generate a visual guidance scene;
a guiding module: used for importing the visual guidance scene into AR glasses and judging whether an obstacle exists in the visual guidance scene; when an obstacle exists, guiding the driver's visual direction to the scene video corresponding to the obstacle; when no obstacle exists, guiding the driver's visual direction toward the scene video whose image elements are of high importance, according to the importance of the image elements in the running image.
As an embodiment of the present invention: the vision measurement module includes:
a collecting unit: used for installing a dual-camera recognition device at the head of the rail vehicle, collecting a forward-looking scene image of the rail vehicle during running, and calibrating the pixel points of the forward-looking scene image; wherein,
the dual-camera recognition device comprises a first camera and a second camera;
the forward-looking scene image comprises a scene image acquired by the first camera and a pixel point image acquired by the second camera of the dual-camera recognition device;
a two-dimensional coordinate system establishing unit: used for establishing a two-dimensional coordinate system for each forward-looking scene image to generate a set of two-dimensional coordinate systems; wherein,
the horizontal axis of the two-dimensional coordinate system is the scene image feature;
the vertical axis of the two-dimensional coordinate system is the position feature of the pixel points;
a measurement space construction unit: used for generating the measurement space by projecting the two-dimensional coordinate systems in the set into a space model with measurement marks.
As an embodiment of the present invention: the measurement space creation unit generating the measurement space further includes the steps of:
the method comprises the following steps: acquiring an initial forward-looking image of the rail vehicle in an initial motion state by using binocular vision acquisition equipment at an initial moment, distinguishing a scene image and a pixel point image, verifying the scene image through the pixel point image, and determining a first error coefficient;
step two: sequentially marking any scene characteristics and corresponding pixel points in the scene images and the pixel point images as (v1, p1), and mapping the scene characteristics and the corresponding pixel points to a space model (y1, D1);
step three: determining a mapped second error coefficient according to a conversion coefficient between the coordinates (y1, D1) of the spatial model and the original coordinates (v1, p 1);
step four: introducing error parameters into the spatial model, generating three-dimensional coordinates (y1, D1, W1) of a three-dimensional spatial model;
step five: for optimizing the measurement space on the basis of a correlation coefficient between the first error coefficient and the second error coefficient.
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
an AR construction unit: used for presetting an obstacle database of the rail vehicle and interfacing a visual model constructed through AR technology with the obstacle database to form the visual guidance model;
a projection unit: used for acquiring optimized running images in the measurement space, projecting the optimized running images into the visual guidance model, and establishing a comparison relationship between the visual guidance model and the obstacle database through the plurality of optimized running images to generate a secondary comparison result;
a scene construction unit: used for respectively calculating the correlation values between different running images and connecting the different running images in descending order of correlation value to generate the visual guidance scene.
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
a position determination unit: used for acquiring the position information of the rail vehicle;
an area determination unit: used for acquiring the space area corresponding to the position information;
acquiring a first virtual scene corresponding to the scene image in the space area;
and superimposing the first virtual scene onto the visual guidance model to obtain a virtual visual guidance scene.
As an embodiment of the present invention: the three-dimensional reconstruction module further comprises:
a word segmentation unit: used for obtaining a textual description of the running image and segmenting the description into words with a word segmentation tool;
a word segmentation and collection unit: used for performing part-of-speech tagging on the segmented words of each word segmentation unit with a part-of-speech tagging tool, taking the words tagged as nouns as feature words, and forming a feature word set for each word segmentation unit;
a model building unit: used for taking the feature word sets of all word segmentation units as a group of input data, with the subject detail information of the corresponding word segmentation units as labels, to form a genre model data set as a whole;
a mapping verification unit: used for establishing a relational link between the genre model data set and the obstacle database, verifying through this link the different running images connected in descending order, and judging whether the connection order is wrong.
As an embodiment of the present invention: the guide module includes:
a judging unit: used for judging whether abnormal information exists in the visual guidance scene; wherein,
when abnormal information exists, the corresponding obstacle is generated;
when no abnormal information exists, the scene video is played;
an obstacle guidance unit: used for guiding the driver's visual direction to the scene video corresponding to the obstacle by any one of magnification, marking, or rendering;
a non-obstacle guidance unit: used for acquiring the running image and judging the importance of the constituent elements within the running image.
As an embodiment of the present invention: the guiding module judges whether an obstacle exists in the visual guidance scene through the following steps:
Step 1: calculating an element feature Z of the visual guidance scene (the formula for Z is published only as an image, Figure BDA0003325785280000051);
wherein w_{i,j} denotes the position of the i-th element at the j-th coordinate point; w_{i,0} denotes the position of the i-th element relative to the central coordinate point; s_{i,j} denotes the coordinate coefficient of the i-th element at the j-th coordinate point; L_{i,j} denotes the element feature of the i-th element at the j-th coordinate point; B_{i,j} denotes the element attribute of the i-th element at the j-th coordinate point; i = 1, 2, 3, … n; j = 1, 2, 3, … m;
Step 2: generating an obstacle identification function γ(x, y) based on the preset obstacle database (published only as an image, Figure BDA0003325785280000052);
wherein x_i denotes the identification feature of the i-th obstacle (defined by the formula image Figure BDA0003325785280000053); T_i denotes the real-time identification feature of the i-th obstacle; y_i denotes the identification deviation of the i-th obstacle (defined by the formula image Figure BDA0003325785280000054); T_δ denotes the composite feature of the obstacle; T_ε denotes the invalid feature of the obstacle;
when γ(x, y) > 0, an obstacle is present;
when γ(x, y) ≤ 0, no obstacle is present.
As an embodiment of the present invention: the guiding module, in judging whether an obstacle exists in the visual guidance scene, further comprises the following step:
constructing a judgment model according to the obstacle identification function and determining the obstacle (the judgment model is published only as an image, Figure BDA0003325785280000061);
wherein Z represents the total feature parameter of the identifiable features of the obstacle, and K represents the total obstacle type parameter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 shows a vision-guided driving assistance system according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
A visually guided driver assistance system, comprising:
a vision measurement module: used for acquiring running images of a rail vehicle through binocular vision acquisition equipment and establishing a binocular vision measurement space based on different two-dimensional coordinate systems; the binocular vision acquisition equipment can acquire images of different dimensions and judge the difference between the two images, so the binocular vision measurement space is established from the two-dimensional coordinate systems of the binocularly acquired images; this provides contrast, and the image parameters can be analysed accurately on the basis of the binocular vision.
a three-dimensional reconstruction module: used for constructing a visual guidance model through AR technology and projecting the image content of the measurement space into the visual guidance model to generate a visual guidance scene; by means of projection, the image content in the measurement space is projected onto the visual guidance model, so that the real scene is reproduced within the model.
a guiding module: used for importing the visual guidance scene into AR glasses and judging whether an obstacle exists in the visual guidance scene; when an obstacle exists, the driver's visual direction is guided to the scene video corresponding to the obstacle, and when no obstacle exists, the driver's visual direction is guided toward the scene video whose image elements are of high importance, according to the importance of the image elements in the running image. In this way the driver is automatically guided to look in the direction where an accident may occur.
The principle and beneficial effects of this technical solution are as follows: the invention collects images of the rail vehicle through binocular vision acquisition equipment, establishes a binocular vision measurement space from the two-dimensional coordinate systems, and stores the images collected by the binocular cameras in this space; the three-dimensional reconstruction module constructs a visual guidance model through AR technology; when visual guidance is needed, the images are substituted into the visual guidance model and transmitted to the driver's smart glasses, the obstacle is displayed through the glasses, and the driver's visual direction is guided to the display direction of the obstacle.
The beneficial effects of the above technical solution are: through visual guidance, a driver whose attention has lapsed, or whose gaze is directed elsewhere, can be quickly refocused on the display direction of the obstacle, so that the obstacle is found quickly.
As an embodiment of the present invention: the vision measurement module includes:
a collecting unit: used for installing a dual-camera recognition device at the head of the rail vehicle, collecting a forward-looking scene image (the scene visible ahead) of the rail vehicle during running, and calibrating the pixel points of the forward-looking scene image; wherein,
the dual-camera recognition device comprises a first camera and a second camera;
the forward-looking scene image comprises a scene image acquired by the first camera and a pixel point image acquired by the second camera of the dual-camera recognition device;
a two-dimensional coordinate system establishing unit: used for establishing a two-dimensional coordinate system for each forward-looking scene image to generate a set of two-dimensional coordinate systems; wherein,
the horizontal axis of the two-dimensional coordinate system is the scene image feature;
the vertical axis of the two-dimensional coordinate system is the position feature of the pixel points;
a measurement space construction unit: used for generating the measurement space by projecting the two-dimensional coordinate systems in the set into a space model with measurement marks.
The principle and beneficial effects of this technical solution are as follows: the collecting unit installs a binocular vision camera, comprising a first camera and a second camera, at the head of the rail vehicle and takes pictures with it; calibration of the scene pixel points is then achieved from the scene images acquired by the cameras; finally, the measurement space is generated based on the image features and the pixel point positions, realizing calibrated acquisition of the rail vehicle's images.
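The patent does not say how depth is recovered from the first and second cameras. Purely as an illustrative sketch of turning a binocular pair into a measurable, depth-bearing space, the following uses OpenCV's standard block-matching stereo pipeline; the file names, focal length, and baseline are assumptions, not values from the patent:

```python
# Illustrative sketch only: recover metric depth from a binocular image pair
# with OpenCV block matching. Nothing here is specified by the patent; the
# file names, focal length f, and baseline b are placeholder assumptions.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # first camera (scene image)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # second camera (pixel point image)

# Block matcher over 15x15 windows; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# With focal length f (pixels) and baseline b (metres), depth Z = f * b / d.
f, b = 700.0, 0.12   # assumed calibration values
depth = np.where(disparity > 0, f * b / np.maximum(disparity, 1e-6), 0.0)
print("valid depth range (m):", depth[depth > 0].min(), depth[depth > 0].max())
```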
As an embodiment of the present invention: the measurement space creation unit generating the measurement space further includes the steps of:
the method comprises the following steps: acquiring an initial forward-looking image of the rail vehicle in an initial motion state by using binocular vision acquisition equipment at an initial moment, distinguishing a scene image and a pixel point image, verifying the scene image through the pixel point image, and determining a first error coefficient;
step two: sequentially marking any scene characteristics and corresponding pixel points in the scene images and the pixel point images as (v1, p1), and mapping the scene characteristics and the corresponding pixel points to a space model (y1, D1);
step three: determining a mapped second error coefficient according to a conversion coefficient between the coordinates (y1, D1) of the spatial model and the original coordinates (v1, p 1);
step four: introducing error parameters into the spatial model, generating three-dimensional coordinates (y1, D1, W1) of a three-dimensional spatial model;
step five: for optimizing the measurement space on the basis of a correlation coefficient between the first error coefficient and the second error coefficient.
The principle and the beneficial effects of the technical scheme are as follows: in the step of reconstructing the measurement space, the error coefficient is generated by distinguishing the scene image and the pixel point image in the moving state through the image of the rail vehicle in the moving state. The method comprises the steps of calibrating based on the characteristics of a scene image and the positions of pixel points, introducing error parameters based on the transformation relation of a space model to generate a three-dimensional space model, and then optimizing a measurement space based on the relation of correlation coefficients between a first error and a second error.
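The error model behind steps one to five is not disclosed (its formulas exist only as images in the publication). The sketch below is one plausible numerical reading, in which the two error coefficients are simple residual statistics and the third coordinate W1 carries the combined error parameter; the function name, the conversion coefficients, and the residual definitions are all invented for illustration:

```python
# Hypothetical reading of steps one to five; all formulas are assumptions.
import numpy as np

def build_measurement_space(scene_feats, pixel_feats,
                            scale=np.array([1.2, 0.8])):  # assumed conversion coefficients
    """Map 2D marks (v1, p1) into three-dimensional coordinates (y1, D1, W1)."""
    # Step one: first error coefficient, the residual between scene features
    # and their pixel-point verification.
    e1 = float(np.mean(np.abs(scene_feats - pixel_feats)))

    # Step two: mark (v1, p1) pairs and map them into the space model (y1, D1).
    marks = np.stack([scene_feats, pixel_feats], axis=1)   # columns: (v1, p1)
    mapped = marks * scale                                 # columns: (y1, D1)

    # Step three: second error coefficient from the conversion residual.
    e2 = float(np.mean(np.abs(mapped - marks)))

    # Step four: introduce the error parameter as a third coordinate W1.
    w = np.full((len(marks), 1), e1 + e2)
    coords3d = np.hstack([mapped, w])                      # (y1, D1, W1)

    # Step five: correlation between the two error series, used to tune
    # (here: simply report) the measurement space.
    corr = float(np.corrcoef(np.abs(scene_feats - pixel_feats),
                             np.abs(mapped - marks).mean(axis=1))[0, 1])
    return coords3d, e1, e2, corr
```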
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
an AR construction unit: used for presetting an obstacle database of the rail vehicle and interfacing a visual model constructed through AR technology with the obstacle database to form the visual guidance model;
a projection unit: used for acquiring optimized running images in the measurement space, projecting the optimized running images into the visual guidance model, and establishing a comparison relationship between the visual guidance model and the obstacle database through the plurality of optimized running images to generate a secondary comparison result;
a scene construction unit: used for respectively calculating the correlation values between different running images and connecting the different running images in descending order of correlation value to generate the visual guidance scene.
The principle and beneficial effects of this technical solution are as follows: the visual guidance model is constructed through the preset obstacle database of the rail vehicle and AR technology; the projection unit determines the optimized running images in the measurement space, the optimized running images being the clearer ones, and projects them into the visual guidance model, where a comparison relationship is established and a comparison result is generated from it; when the scene is constructed, the images are connected through the correlation values between different images, generating the visual guidance scene.
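A minimal sketch of the scene construction unit's ordering rule, assuming plain Pearson correlation between flattened frames, measured against the first frame as reference; the patent defines neither the correlation measure nor the reference:

```python
# Sketch of connecting running images in descending order of correlation.
# Pearson correlation against the first frame is an assumption; the patent
# does not define the correlation measure.
import numpy as np

def order_by_correlation(images):
    ref = images[0].astype(np.float32).ravel()
    scores = [float(np.corrcoef(ref, img.astype(np.float32).ravel())[0, 1])
              for img in images]
    # Largest correlation first, per the scene construction unit.
    order = sorted(range(len(images)), key=lambda i: scores[i], reverse=True)
    return [images[i] for i in order], [scores[i] for i in order]
```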
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
a position determination unit: used for acquiring the position information of the rail vehicle;
an area determination unit: used for acquiring the space area corresponding to the position information;
acquiring a first virtual scene corresponding to the scene image in the space area;
and superimposing the first virtual scene onto the visual guidance model to obtain a virtual visual guidance scene.
The principle and beneficial effects of this technical solution are as follows: the position of the rail vehicle is determined, the storage space area is determined according to that position, and the visual guidance scene is generated from the scene image, the virtual scene, and the visual guidance model of the space area, which makes visual guidance convenient.
As an embodiment of the present invention: the three-dimensional reconstruction module further comprises:
a word segmentation unit: used for obtaining a textual description of the running image and segmenting the description into words with a word segmentation tool;
a word segmentation and collection unit: used for performing part-of-speech tagging on the segmented words of each word segmentation unit with a part-of-speech tagging tool, taking the words tagged as nouns as feature words, and forming a feature word set for each word segmentation unit;
a model building unit: used for taking the feature word sets of all word segmentation units as a group of input data, with the subject detail information of the corresponding word segmentation units as labels, to form a genre model data set as a whole;
a mapping verification unit: used for establishing a relational link between the genre model data set and the obstacle database, verifying through this link the different running images connected in descending order, and judging whether the connection order is wrong.
The principle and beneficial effects of this technical solution are as follows: during three-dimensional reconstruction, the image is converted into text, which makes it convenient to distinguish the various elements in the image, since text is easier to identify and process; after word segmentation the feature words of the content are computed more easily, and these feature words form a feature word set; from this set the genre model data set, a model that represents the whole event through its segmented structure, is calculated, and the relational link to the obstacle database is then computed, so that the correctness of the connected images, i.e. whether the constructed model is right or wrong, can be judged.
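As a sketch of this image-to-text pipeline, the following assumes jieba as both the word segmentation and the part-of-speech tagging tool (the patent names neither), with a made-up textual description standing in for a running image:

```python
# Hypothetical pipeline: description text of a running image -> segmented
# words -> noun feature words. Requires `pip install jieba`; the patent does
# not name the segmentation or tagging tool.
import jieba.posseg as pseg

def feature_words(description):
    """Keep only words whose part-of-speech tag marks a noun
    (jieba noun tags start with 'n')."""
    return [word for word, flag in pseg.cut(description) if flag.startswith("n")]

# Made-up description of one running image: "pedestrians and fallen rocks
# ahead on the track, the signal shows red".
text = "轨道前方有行人和落石，信号灯显示红色"
print(feature_words(text))  # noun feature words, e.g. 轨道, 行人, 落石
```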
As an embodiment of the present invention: the guide module includes:
a judging unit: used for judging whether abnormal information exists in the visual guidance scene; wherein,
when abnormal information exists, the corresponding obstacle is generated;
when no abnormal information exists, the scene video is played;
an obstacle guidance unit: used for guiding the driver's visual direction to the scene video corresponding to the obstacle by any one of magnification, marking, or rendering;
a non-obstacle guidance unit: used for acquiring the running image and judging the importance of the constituent elements within the running image.
The principle and beneficial effects of this technical solution are as follows: when the judgment shows that abnormal information exists, the corresponding obstacle file is generated; when no obstacle exists, the scene video is played, so that the classification of obstacles can be clearly determined. When an obstacle exists, it is emphasised by any one of magnification, marking, or rendering, which conveniently guides the driver's sight to observe the obstacle; even when there is no obstacle, the driver is led to observe, as far as possible, the places where abnormality is likely to occur, based on the constituent elements of the image.
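One way the obstacle guidance unit's magnification, marking, or rendering could be realised is sketched below with OpenCV drawing primitives; the box coordinates, colours, opacity, and scale factor are placeholders rather than anything specified in the patent:

```python
# Sketch of emphasising an obstacle region; all visual parameters are placeholders.
import cv2

def highlight_obstacle(frame, box, mode="mark"):
    """Emphasise the obstacle at box = (x, y, w, h) so the driver's gaze is drawn to it."""
    x, y, w, h = box
    if mode == "mark":
        # Thick red rectangle around the obstacle.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)
    elif mode == "magnify":
        # Paste a 2x enlargement of the obstacle patch into the top-left
        # corner (assumes the enlarged patch fits inside the frame).
        patch = cv2.resize(frame[y:y + h, x:x + w], (2 * w, 2 * h))
        frame[0:2 * h, 0:2 * w] = patch
    elif mode == "render":
        # Tint the obstacle region red at 40% opacity.
        overlay = frame.copy()
        cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 0, 255), -1)
        frame = cv2.addWeighted(overlay, 0.4, frame, 0.6, 0)
    return frame
```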
As an embodiment of the present invention: the guiding module judges whether an obstacle exists in the visual guidance scene through the following steps:
Step 1: calculating an element feature Z of the visual guidance scene (the formula for Z is published only as an image, Figure BDA0003325785280000121);
wherein w_{i,j} denotes the position of the i-th element at the j-th coordinate point; w_{i,0} denotes the position of the i-th element relative to the central coordinate point; s_{i,j} denotes the coordinate coefficient of the i-th element at the j-th coordinate point; L_{i,j} denotes the element feature of the i-th element at the j-th coordinate point; B_{i,j} denotes the element attribute of the i-th element at the j-th coordinate point; i = 1, 2, 3, … n; j = 1, 2, 3, … m;
Step 2: generating an obstacle identification function γ(x, y) based on the preset obstacle database (published only as an image, Figure BDA0003325785280000122);
wherein x_i denotes the identification feature of the i-th obstacle (defined by the formula image Figure BDA0003325785280000123); T_i denotes the real-time identification feature of the i-th obstacle, i.e. the feature currently being measured; y_i denotes the identification deviation of the i-th obstacle (defined by the formula image Figure BDA0003325785280000124); T_δ denotes the composite feature of the obstacle, i.e. all the features the obstacle possesses; T_ε denotes the invalid feature of the obstacle; the invalid feature is a feature shared by many obstacles and therefore carries no characteristic unique to a particular obstacle in the calculation;
when γ(x, y) > 0, an obstacle is present;
when γ(x, y) ≤ 0, no obstacle is present.
The principle and beneficial effects of this technical solution are as follows: in order to determine whether an abnormal accident exists, the invention first judges the elements in the scene through element calculation, the elements being those related to obstacles and including the obstacle features; the identification function of the obstacle is then determined by processing the obstacle, and once the function is determined, whether an obstacle exists can be decided; when an obstacle exists, the file of the corresponding obstacle is called through rules preset for the identification function, and from the obstacle file the type of the obstacle can be clearly known.
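Because the published formulas for γ(x, y) survive only as images, the sketch below can only gesture at their spirit: identification features are treated as vectors, the identification deviation y_i as a distance, and the sign of the score decides obstacle presence. The scoring rule and every quantity in it are assumptions:

```python
# Toy score in the spirit of the obstacle identification function γ(x, y);
# the real formulas are published only as images, so this is a guess.
import numpy as np

def gamma(real_time_feats, db_feats, composite, invalid):
    """Positive when the real-time features T_i sit closer to the database
    features x_i than the spread between the composite feature T_delta
    and the invalid feature T_epsilon."""
    baseline = np.linalg.norm(np.asarray(composite) - np.asarray(invalid))
    score = 0.0
    for T_i, x_i in zip(real_time_feats, db_feats):
        y_i = np.linalg.norm(np.asarray(T_i) - np.asarray(x_i))  # identification deviation
        score += baseline - y_i
    return score

# Per the patent's rule: gamma(...) > 0 means an obstacle is present,
# gamma(...) <= 0 means no obstacle.
```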
As an embodiment of the present invention: the guiding module, in judging whether an obstacle exists in the visual guidance scene, further comprises the following step:
constructing a judgment model according to the obstacle identification function and determining the obstacle (the judgment model is published only as an image, Figure BDA0003325785280000131);
wherein Z represents the total feature parameter of the identifiable features of the obstacle, and K represents the total obstacle type parameter.
After feature identification, it is still necessary to judge whether an obstacle is actually present, so Z and K are introduced; through the obstacle type parameter and the total feature parameter, intelligent obstacle identification is realized.
The principle and beneficial effects of this technical solution are as follows: to determine whether an abnormal accident exists, the features of the obstacle are calculated; at this stage the obstacle features are determined by constructing a large number of coordinate systems and fusing coordinates on the basis of the coordinates of the event point under actual conditions; the identification function of the obstacle is then determined by processing the obstacle, and once the function is determined, whether an obstacle exists can be decided; when an obstacle exists, the file of the corresponding obstacle is called through rules preset for the identification function, and from the obstacle file the type of the obstacle can be clearly known.
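Likewise for the judgment model: with only Z (the total feature parameter) and K (the total obstacle type parameter) recoverable from the text, one hypothetical reading is a Z-normalised vote over the K obstacle types:

```python
# Hypothetical judgment model; only Z and K are recoverable from the text,
# so the normalisation and decision rule are assumptions.
import numpy as np

def judge_obstacle(scores, Z, K):
    """scores: one identification score per obstacle type (length K).
    Returns the winning obstacle type index, or None if no type clears zero."""
    assert len(scores) == K
    normalised = np.asarray(scores, dtype=float) / max(Z, 1e-9)
    best = int(np.argmax(normalised))
    return best if normalised[best] > 0 else None
```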
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A visually guided driver assistance system, comprising:
a vision measurement module: used for acquiring running images of a rail vehicle through binocular vision acquisition equipment and establishing a binocular vision measurement space based on different two-dimensional coordinate systems;
a three-dimensional reconstruction module: used for constructing a visual guidance model through AR technology and projecting the image content of the measurement space into the visual guidance model to generate a visual guidance scene;
a guiding module: used for importing the visual guidance scene into AR glasses and judging whether an obstacle exists in the visual guidance scene; when an obstacle exists, guiding the driver's visual direction to the scene video corresponding to the obstacle; when no obstacle exists, guiding the driver's visual direction toward the scene video whose image elements are of high importance, according to the importance of the image elements in the running image.
2. A visually-guided driver assistance system as set forth in claim 1 wherein said vision measuring module comprises:
a collecting unit: used for installing a dual-camera recognition device at the head of the rail vehicle, collecting a forward-looking scene image of the rail vehicle during running, and calibrating the pixel points of the forward-looking scene image; wherein,
the dual-camera recognition device comprises a first camera and a second camera;
the forward-looking scene image comprises a scene image acquired by the first camera and a pixel point image acquired by the second camera of the dual-camera recognition device;
a two-dimensional coordinate system establishing unit: used for establishing a two-dimensional coordinate system for each forward-looking scene image to generate a set of two-dimensional coordinate systems; wherein,
the horizontal axis of the two-dimensional coordinate system is the scene image feature;
the vertical axis of the two-dimensional coordinate system is the position feature of the pixel points;
a measurement space construction unit: used for generating the measurement space by projecting the two-dimensional coordinate systems in the set into a space model with measurement marks.
3. A visually guided assisted driving system according to claim 1, wherein the generation of the measurement space by the measurement space construction unit further comprises the following steps:
Step one: acquiring, at an initial moment, an initial forward-looking image of the rail vehicle in its initial motion state with the binocular vision acquisition equipment, distinguishing the scene image from the pixel point image, verifying the scene image against the pixel point image, and determining a first error coefficient;
Step two: sequentially marking any scene feature and its corresponding pixel point in the scene image and pixel point image as (v1, p1), and mapping them to the space model coordinates (y1, D1);
Step three: determining a second error coefficient of the mapping according to the conversion coefficient between the space model coordinates (y1, D1) and the original coordinates (v1, p1);
Step four: introducing error parameters into the space model to generate the three-dimensional coordinates (y1, D1, W1) of a three-dimensional space model;
Step five: optimizing the measurement space based on the correlation coefficient between the first error coefficient and the second error coefficient.
4. A visually-guided assisted driving system according to claim 1, wherein the three-dimensional reconstruction module comprises:
an AR construction unit: used for presetting an obstacle database of the rail vehicle and interfacing a visual model constructed through AR technology with the obstacle database to form the visual guidance model;
a projection unit: used for acquiring optimized running images in the measurement space, projecting the optimized running images into the visual guidance model, and establishing a comparison relationship between the visual guidance model and the obstacle database through the plurality of optimized running images to generate a secondary comparison result;
a scene construction unit: used for respectively calculating the correlation values between different running images and connecting the different running images in descending order of correlation value to generate the visual guidance scene.
5. A visually-guided assisted driving system according to claim 1, wherein the three-dimensional reconstruction module comprises:
a position determination unit: used for acquiring the position information of the rail vehicle;
an area determination unit: used for acquiring the space area corresponding to the position information;
acquiring a first virtual scene corresponding to the scene image in the space area;
and superimposing the first virtual scene onto the visual guidance model to obtain a virtual visual guidance scene.
6. A visually-guided assisted driving system according to claim 1, wherein the three-dimensional reconstruction module further comprises:
a word segmentation unit: used for obtaining a textual description of the running image and segmenting the description into words with a word segmentation tool;
a word segmentation and collection unit: used for performing part-of-speech tagging on the segmented words of each word segmentation unit with a part-of-speech tagging tool, taking the words tagged as nouns as feature words, and forming a feature word set for each word segmentation unit;
a model building unit: used for taking the feature word sets of all word segmentation units as a group of input data, with the subject detail information of the corresponding word segmentation units as labels, to form a genre model data set as a whole;
a mapping verification unit: used for establishing a relational link between the genre model data set and the obstacle database, verifying through this link the different running images connected in descending order, and judging whether the connection order is wrong.
7. A visually guided assisted driving system according to claim 1, wherein the guiding module comprises:
a judging unit: used for judging whether abnormal information exists in the visual guidance scene; wherein,
when abnormal information exists, the corresponding obstacle is generated;
when no abnormal information exists, the scene video is played;
an obstacle guidance unit: used for guiding the driver's visual direction to the scene video corresponding to the obstacle by any one of magnification, marking, or rendering;
a non-obstacle guidance unit: used for acquiring the running image and judging the importance of the constituent elements within the running image.
8. The visually guided driver assistance system according to claim 1, wherein the guiding module judges whether an obstacle exists in the visual guidance scene through the following steps:
Step 1: calculating an element feature Z of the visual guidance scene (the formula for Z is published only as an image, Figure FDA0003325785270000041);
wherein w_{i,j} denotes the position of the i-th element at the j-th coordinate point; w_{i,0} denotes the position of the i-th element relative to the central coordinate point; s_{i,j} denotes the coordinate coefficient of the i-th element at the j-th coordinate point; L_{i,j} denotes the element feature of the i-th element at the j-th coordinate point; B_{i,j} denotes the element attribute of the i-th element at the j-th coordinate point; i = 1, 2, 3, … n; j = 1, 2, 3, … m;
Step 2: generating an obstacle identification function γ(x, y) based on the preset obstacle database (published only as an image, Figure FDA0003325785270000051);
wherein x_i denotes the identification feature of the i-th obstacle (defined by the formula image Figure FDA0003325785270000052); T_i denotes the real-time identification feature of the i-th obstacle; y_i denotes the identification deviation of the i-th obstacle (defined by the formula image Figure FDA0003325785270000053); T_δ denotes the composite feature of the obstacle; T_ε denotes the invalid feature of the obstacle;
when γ(x, y) > 0, an obstacle is present;
when γ(x, y) ≤ 0, no obstacle is present.
9. The visually guided driver assistance system according to claim 1, wherein the guiding module, in judging whether an obstacle exists in the visual guidance scene, further comprises:
constructing a judgment model according to the obstacle identification function and determining the obstacle (the judgment model is published only as an image, Figure FDA0003325785270000054);
wherein Z represents the total feature parameter of the identifiable features of the obstacle, and K represents the total obstacle type parameter.
CN202111261161.2A 2021-10-28 2021-10-28 Vision-guided auxiliary driving system Active CN113989775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111261161.2A CN113989775B (en) 2021-10-28 2021-10-28 Vision-guided auxiliary driving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111261161.2A CN113989775B (en) 2021-10-28 2021-10-28 Vision-guided auxiliary driving system

Publications (2)

Publication Number Publication Date
CN113989775A true CN113989775A (en) 2022-01-28
CN113989775B CN113989775B (en) 2022-08-05

Family

ID=79743183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111261161.2A Active CN113989775B (en) 2021-10-28 2021-10-28 Vision-guided auxiliary driving system

Country Status (1)

Country Link
CN (1) CN113989775B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11983932B1 (en) 2023-07-28 2024-05-14 New Automobile Co., Ltd Vision acquisition system equipped in intelligent terminal of smart logistics vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103231708A (en) * 2013-04-12 2013-08-07 安徽工业大学 Intelligent vehicle obstacle avoiding method based on binocular vision
CN106650708A (en) * 2017-01-19 2017-05-10 南京航空航天大学 Visual detection method and system for automatic driving obstacles
CN109143215A (en) * 2018-08-28 2019-01-04 重庆邮电大学 Early-warning source and method based on binocular vision in cooperation with V2X communication
CN109359409A (en) * 2018-10-31 2019-02-19 张维玲 Vehicle passability detection system based on vision and laser radar sensors
CN109918509A (en) * 2019-03-12 2019-06-21 黑龙江世纪精彩科技有限公司 Scene generation method based on information extraction and storage medium of scene generation system
CN110415282A (en) * 2019-07-31 2019-11-05 宁夏金宇智慧科技有限公司 Dairy cow weight prediction system
CN111460865A (en) * 2019-01-22 2020-07-28 阿里巴巴集团控股有限公司 Driving assistance method, driving assistance system, computing device, and storage medium
CN112180605A (en) * 2020-10-20 2021-01-05 江苏濠汉信息技术有限公司 Auxiliary driving system based on augmented reality
CN113496601A (en) * 2020-03-20 2021-10-12 郑州宇通客车股份有限公司 Vehicle driving assisting method, device and system


Also Published As

Publication number Publication date
CN113989775B (en) 2022-08-05

Similar Documents

Publication Publication Date Title
Zhe et al. Inter-vehicle distance estimation method based on monocular vision using 3D detection
EP2092270B1 (en) Method and apparatus for identification and position determination of planar objects in images
CN102737236B (en) Method for automatically acquiring vehicle training sample based on multi-modal sensor data
CN108764187A (en) Extract method, apparatus, equipment, storage medium and the acquisition entity of lane line
CN110758243A (en) Method and system for displaying surrounding environment in vehicle driving process
CN106503653A (en) Area marking method, device and electronic equipment
KR101689805B1 (en) Apparatus and method for reconstructing scene of traffic accident using OBD, GPS and image information of vehicle blackbox
CN111462249B (en) Traffic camera calibration method and device
KR102097869B1 (en) Deep Learning-based road area estimation apparatus and method using self-supervised learning
CN110119698A (en) For determining the method, apparatus, equipment and storage medium of Obj State
CN112232275B (en) Obstacle detection method, system, equipment and storage medium based on binocular recognition
CN111402632B (en) Risk prediction method for pedestrian movement track at intersection
CN108364476B (en) Method and device for acquiring Internet of vehicles information
CN113989775B (en) Vision-guided auxiliary driving system
CN115035626A (en) Intelligent scenic spot inspection system and method based on AR
CN117372991A (en) Automatic driving method and system based on multi-view multi-mode fusion
Nejadasl et al. Optical flow based vehicle tracking strengthened by statistical decisions
CN115782868A (en) Method and system for identifying obstacle in front of vehicle
EP3816938A1 (en) Region clipping method and recording medium storing region clipping program
CN113469045A (en) Unmanned card-collecting visual positioning method and system, electronic equipment and storage medium
DE102019201134B4 (en) Method, computer program with instructions and system for measuring augmented reality glasses and augmented reality glasses for use in a motor vehicle
Jiang et al. The design of a pedestrian aware contextual speed controller for autonomous driving
Boker et al. Bird's Eye View Effect on Situational Awareness in Remote Driving
KR102602319B1 (en) Traffic information collection system and method based on multi-object tracking using artificial intelligence image deep learning model
CN116403275B (en) Method and system for detecting personnel advancing posture in closed space based on multi-vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A vision guided assistant driving system

Effective date of registration: 20221207

Granted publication date: 20220805

Pledgee: Jiangsu Nantong Rural Commercial Bank Co.,Ltd. Si'an Sub branch

Pledgor: JIANGSU HAOHAN INFORMATION TECHNOLOGY Co.,Ltd.

Registration number: Y2022980025338

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20231017

Granted publication date: 20220805

Pledgee: Jiangsu Nantong Rural Commercial Bank Co.,Ltd. Si'an Sub branch

Pledgor: JIANGSU HAOHAN INFORMATION TECHNOLOGY Co.,Ltd.

Registration number: Y2022980025338

PC01 Cancellation of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Visual Guided Assisted Driving System

Effective date of registration: 20231020

Granted publication date: 20220805

Pledgee: Jiangsu Nantong Rural Commercial Bank Co.,Ltd. Si'an Sub branch

Pledgor: JIANGSU HAOHAN INFORMATION TECHNOLOGY Co.,Ltd.

Registration number: Y2023980061751

PE01 Entry into force of the registration of the contract for pledge of patent right