Disclosure of Invention
The invention provides a vision-guided auxiliary driving system, which is used for addressing driver inattention while the driver of a rail vehicle is operating the vehicle.
A visually guided driver assistance system, comprising:
a vision measurement module: used for acquiring running images of a rail vehicle through binocular vision acquisition equipment and establishing different two-dimensional coordinate systems to build a binocular vision measurement space;
a three-dimensional reconstruction module: used for constructing a visual guide model through AR technology and projecting the image content of the measurement space into the visual guide model to generate a visual guide scene;
a guiding module: used for importing the visual guide scene into AR glasses and judging whether an obstacle exists in the visual guide scene; when an obstacle exists, the driver's visual direction is guided to the scene video corresponding to the obstacle; when no obstacle exists, the driver's visual direction is guided, according to the importance of the image elements in the running images, toward the scene video whose image elements are of high importance.
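The cooperation of the three modules above can be sketched, purely illustratively, as a small Python pipeline; every class and method name below is hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class VisionMeasurementModule:
    """Collects running images and accumulates them for the measurement space."""
    frames: list = field(default_factory=list)

    def capture(self, left_img, right_img):
        # a binocular pair: one frame from each camera
        self.frames.append((left_img, right_img))

@dataclass
class ThreeDReconstructionModule:
    """Projects measurement-space content into an AR visual guide model."""
    def build_guide_scene(self, frames):
        # placeholder: a real system would run stereo reconstruction + AR overlay
        return {"frames": len(frames), "scene": "visual-guide"}

@dataclass
class GuidingModule:
    """Routes the guide scene to AR glasses and picks the guidance target."""
    def guide(self, scene, obstacle_present):
        return "obstacle-video" if obstacle_present else "importance-ranked-video"

vm = VisionMeasurementModule()
vm.capture("L0", "R0")
scene = ThreeDReconstructionModule().build_guide_scene(vm.frames)
target = GuidingModule().guide(scene, obstacle_present=True)
```

The split into three dataclasses mirrors the module boundaries stated above; nothing more is implied about the internal algorithms.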
As an embodiment of the present invention: the vision measurement module includes:
a collecting unit: used for installing a dual-camera recognition device at the head of the rail vehicle, collecting a forward-looking scene image while the rail vehicle is running, and calibrating the pixel points of the forward-looking scene image; wherein,
the dual-camera recognition device comprises a first camera and a second camera;
the forward-looking scene image comprises a scene image acquired by the first camera and a pixel point image acquired by the second camera of the dual-camera recognition device;
a two-dimensional coordinate system establishing unit: used for establishing a two-dimensional coordinate system for each forward-looking scene image to generate a two-dimensional coordinate system set; wherein,
the horizontal axis of the two-dimensional coordinate system is the scene image characteristic;
the longitudinal axis of the two-dimensional coordinate system is the position characteristic of the pixel points;
a measurement space construction unit: used for generating the measurement space by projecting the two-dimensional coordinate systems in the two-dimensional coordinate system set into a space model with measurement marks.
As an embodiment of the present invention: the generation of the measurement space by the measurement space construction unit further includes the following steps:
step one: at an initial moment, acquiring an initial forward-looking image of the rail vehicle in an initial motion state with the binocular vision acquisition equipment, distinguishing the scene image from the pixel-point image, verifying the scene image against the pixel-point image, and determining a first error coefficient;
step two: sequentially marking any scene feature and its corresponding pixel point in the scene image and the pixel-point image as (v1, p1), and mapping them to the space model as (y1, D1);
step three: determining a second error coefficient of the mapping according to the conversion coefficient between the space-model coordinates (y1, D1) and the original coordinates (v1, p1);
step four: introducing error parameters into the space model to generate the three-dimensional coordinates (y1, D1, W1) of a three-dimensional space model;
step five: optimizing the measurement space according to the correlation coefficient between the first error coefficient and the second error coefficient.
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
an AR construction unit: used for presetting an obstacle database of the rail vehicle and docking a visual model constructed through AR technology with the obstacle database to form the visual guide model;
a projection unit: used for acquiring the optimized running images in the measurement space, projecting the optimized running images into the visual guide model, and establishing a comparison relation between the visual guide model and the obstacle database through the plurality of optimized running images to generate a secondary comparison result;
a scene construction unit: used for respectively calculating the correlation values among different running images and connecting the different running images in descending order of correlation value to generate the visual guide scene.
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
a position determination unit: used for acquiring position information of the rail vehicle;
an area determination unit: used for acquiring the spatial area corresponding to the position information;
acquiring a first virtual scene corresponding to the scene image in the spatial area;
and superposing the first virtual scene onto the visual guide model to obtain a virtual visual guide scene.
As an embodiment of the present invention: the three-dimensional reconstruction module further comprises:
a word segmentation unit: used for converting the running images into text descriptions and performing word segmentation on each description with a word segmentation tool;
a word segmentation collection unit: used for performing part-of-speech tagging on the segmented words of each word segmentation unit with a part-of-speech tagging tool, taking the words tagged as nouns as feature words, and forming a feature word set for each word segmentation unit;
a model building unit: used for taking the feature word sets of all word segmentation units as a group of input data and the subject detail information of the corresponding word segmentation units as labels, which together form a genre model data set;
a mapping verification unit: used for establishing a relational link between the genre model data set and the obstacle database, verifying through this link the different running images connected in descending order, and judging whether the connection order is wrong.
As an embodiment of the present invention: the guide module includes:
a judging unit: used for judging whether abnormal information exists in the visual guide scene; wherein,
when abnormal information exists, a corresponding obstacle is generated;
when no abnormal information exists, the scene video is played;
an obstacle guiding unit: used for guiding the driver's visual direction to the scene video corresponding to the obstacle by any one of magnification, marking, or rendering;
a non-obstacle guiding unit: used for acquiring the running images and judging the importance of the constituent elements in the running images.
As an embodiment of the present invention: the guiding module judges whether the visual guide scene has an obstacle through the following steps:
step 1: calculating an element feature Z of the visual guide scene:
wherein w_{i,j} denotes the position of the i-th element at the j-th coordinate point; w_{i,0} denotes the position of the i-th element relative to the central coordinate point; s_{i,j} denotes the coordinate coefficient of the i-th element at the j-th coordinate point; L_{i,j} denotes the element characteristic of the i-th element at the j-th coordinate point; B_{i,j} denotes the element attribute of the i-th element at the j-th coordinate point; i = 1, 2, 3, …, n; j = 1, 2, 3, …, m;
step 2: generating an obstacle identification function based on a preset obstacle database:
wherein x_i denotes an identification feature of the i-th obstacle; T_i denotes a real-time identification feature of the i-th obstacle; y_i denotes the recognition deviation of the i-th obstacle; T_δ denotes a comprehensive feature of the obstacle; T_ε denotes an invalid feature of the obstacle;
when γ(x, y) > 0, it indicates that there is an obstacle;
when γ(x, y) ≤ 0, it indicates that there is no obstacle.
As an embodiment of the present invention: the guiding module judges whether the visual guiding scene has an obstacle, and further comprises the following steps:
constructing a judgment model according to the obstacle identification function, and determining an obstacle;
wherein Z represents a total characteristic parameter of the identifiable characteristic of the obstacle; k represents the total barrier type parameter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
A visually guided driver assistance system, comprising:
a vision measurement module: used for acquiring running images of a rail vehicle through binocular vision acquisition equipment and establishing different two-dimensional coordinate systems to build a binocular vision measurement space. The binocular vision acquisition equipment can acquire images in different dimensions and judge the difference between the two images; a binocular vision measurement space is therefore established, through two-dimensional coordinate systems, from the images acquired by binocular vision. This provides contrast, so that image parameters can be accurately analyzed on the basis of the binocular vision.
A three-dimensional reconstruction module: used for constructing a visual guide model through AR technology and projecting the image content of the measurement space into the visual guide model to generate a visual guide scene. By means of projection technology, the image content in the measurement space is projected onto the visual guide model, and the real scene is thereby brought into the measurement space.
A guiding module: used for importing the visual guide scene into AR glasses and judging whether an obstacle exists in the visual guide scene; when an obstacle exists, the driver's visual direction is guided to the scene video corresponding to the obstacle; when no obstacle exists, the driver's visual direction is guided, according to the importance of the image elements in the running images, toward the scene video whose image elements are of high importance. This implementation automatically guides the driver to look in the direction in which an accident may occur when an obstacle is present.
The principle and the beneficial effects of the technical scheme are as follows: according to the invention, images of the rail vehicle are collected by binocular vision acquisition equipment, a binocular vision measurement space is established from the two-dimensional coordinate systems, and the images collected by the binocular camera equipment are stored in this space. The three-dimensional reconstruction module constructs a visual guide model through AR technology; when visual guidance is needed, the images are substituted into the visual guide model and transmitted to the driver's smart glasses, the obstacle is displayed through the smart glasses, and the driver's visual direction is then guided to the display direction of the obstacle.
The beneficial effects of the above technical scheme are that: according to the invention, when the driver's attention is not focused or the driver's visual direction is elsewhere, visual guidance quickly draws the driver's attention to the display direction of the obstacle, so that the obstacle can be quickly found.
As an embodiment of the present invention: the vision measurement module includes:
a collecting unit: used for installing a dual-camera recognition device at the head of the rail vehicle, collecting a forward-looking scene image (the front visual scene) while the rail vehicle is running, and calibrating the pixel points of the forward-looking scene image; wherein,
the dual-camera recognition device comprises a first camera and a second camera;
the forward-looking scene image comprises a scene image acquired by the first camera and a pixel point image acquired by the second camera of the dual-camera recognition device;
a two-dimensional coordinate system establishing unit: used for establishing a two-dimensional coordinate system for each forward-looking scene image to generate a two-dimensional coordinate system set; wherein,
the horizontal axis of the two-dimensional coordinate system is the scene image characteristic;
the longitudinal axis of the two-dimensional coordinate system is the position characteristic of the pixel points;
a measurement space construction unit: used for generating the measurement space by projecting the two-dimensional coordinate systems in the two-dimensional coordinate system set into a space model with measurement marks.
The principle and the beneficial effects of the technical scheme are as follows: the collecting unit of the invention installs a binocular vision camera at the head of the rail vehicle and takes pictures with it, the device comprising a first camera and a second camera. The scene images acquired by the cameras are then used to calibrate the scene pixel points, and finally the measurement space is generated on the basis of the image features and the pixel-point positions, realizing calibrated acquisition of the rail vehicle's images.
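The calibration idea above (scene-image characteristic on the horizontal axis, pixel-point position on the vertical axis) can be illustrated with a toy sketch; the `mean_intensity` feature below is a stand-in for the disclosure's unspecified scene image characteristic, not the patented method.

```python
def mean_intensity(row):
    # toy "scene image characteristic": average grey value of one pixel row
    return sum(row) / len(row)

def calibrate(frame):
    """frame: 2-D list of grey values -> list of (feature, position) pairs,
    i.e. points of the per-image two-dimensional coordinate system."""
    return [(mean_intensity(row), y) for y, row in enumerate(frame)]

frame = [[10, 20, 30],
         [40, 50, 60]]
coords = calibrate(frame)
```

Each forward-looking frame thus yields one small coordinate set; collecting these sets over many frames corresponds to the two-dimensional coordinate system set described above.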
As an embodiment of the present invention: the generation of the measurement space by the measurement space construction unit further includes the following steps:
step one: at an initial moment, acquiring an initial forward-looking image of the rail vehicle in an initial motion state with the binocular vision acquisition equipment, distinguishing the scene image from the pixel-point image, verifying the scene image against the pixel-point image, and determining a first error coefficient;
step two: sequentially marking any scene feature and its corresponding pixel point in the scene image and the pixel-point image as (v1, p1), and mapping them to the space model as (y1, D1);
step three: determining a second error coefficient of the mapping according to the conversion coefficient between the space-model coordinates (y1, D1) and the original coordinates (v1, p1);
step four: introducing error parameters into the space model to generate the three-dimensional coordinates (y1, D1, W1) of a three-dimensional space model;
step five: optimizing the measurement space according to the correlation coefficient between the first error coefficient and the second error coefficient.
The principle and the beneficial effects of the technical scheme are as follows: in the steps of constructing the measurement space, the scene image and the pixel-point image of the rail vehicle in the moving state are distinguished, and the error coefficients are generated from these images. Calibration is carried out on the basis of the scene image features and the pixel-point positions, error parameters are introduced on the basis of the transformation relation of the space model to generate a three-dimensional space model, and the measurement space is then optimized on the basis of the correlation coefficient between the first error coefficient and the second error coefficient.
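Steps one to five can be sketched numerically as follows. The scale factors, the form of the conversion-coefficient error, and the way the two error coefficients combine into W1 are all assumptions made for illustration; the disclosure does not fix them.

```python
def map_to_space(v, p, scale=(2.0, 0.5)):
    # step two: map the marked pair (v1, p1) into the space model (y1, D1);
    # the scale factors stand in for the unspecified projection
    return v * scale[0], p * scale[1]

def second_error(v, p, y, D):
    # step three: a toy conversion-coefficient discrepancy between
    # original and mapped coordinates
    return abs(y / v - D / p) if v and p else 0.0

def build_point(v, p, first_error):
    # step four: attach an error term W1, here a simple product of the
    # first and second error coefficients (assumed correlation term)
    y, D = map_to_space(v, p)
    e2 = second_error(v, p, y, D)
    W = first_error * e2
    return (y, D, W)

pt = build_point(v=4.0, p=2.0, first_error=0.1)   # -> (y1, D1, W1)
```

Step five would then keep or discard such points according to how well the first and second error coefficients correlate across the frame.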
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
an AR construction unit: used for presetting an obstacle database of the rail vehicle and docking a visual model constructed through AR technology with the obstacle database to form the visual guide model;
a projection unit: used for acquiring the optimized running images in the measurement space, projecting the optimized running images into the visual guide model, and establishing a comparison relation between the visual guide model and the obstacle database through the plurality of optimized running images to generate a secondary comparison result;
a scene construction unit: used for respectively calculating the correlation values among different running images and connecting the different running images in descending order of correlation value to generate the visual guide scene.
The principle and the beneficial effects of the technical scheme are as follows: a visual guide model is constructed through the preset obstacle database of the rail vehicle and AR technology; the projection unit determines the optimized running images in the measurement space, these being the clearer images, and projects them into the visual guide model, where a comparison relation is generated for the running images and a comparison result is produced according to that relation. When the scene is constructed, the images are connected through the correlation values among the different images, and the visual guide scene is generated.
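The scene construction unit's descending-correlation ordering can be sketched as below; the normalized dot product is an assumed correlation measure, since the disclosure does not state which one is used.

```python
import math

def correlation(a, b):
    # assumed measure: cosine similarity between two flattened frames
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def order_by_correlation(reference, frames):
    # connect running images from largest to smallest correlation value
    return sorted(frames, key=lambda f: correlation(reference, f), reverse=True)

ref = [1.0, 0.0]
frames = [[0.0, 1.0], [1.0, 0.1], [0.5, 0.5]]
ordered = order_by_correlation(ref, frames)
```

The resulting order then fixes the connection sequence of the visual guide scene.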
As an embodiment of the present invention: the three-dimensional reconstruction module comprises:
a position determination unit: used for acquiring position information of the rail vehicle;
an area determination unit: used for acquiring the spatial area corresponding to the position information;
acquiring a first virtual scene corresponding to the scene image in the spatial area;
and superposing the first virtual scene onto the visual guide model to obtain a virtual visual guide scene.
The principle and the beneficial effects of the technical scheme are as follows: the invention determines the position of the rail vehicle, determines the stored spatial area of the rail vehicle according to that position, and generates the visual guide scene from the scene image of the spatial area, the virtual scene, and the visual guide model, which facilitates visual guidance.
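The position/area lookup and the superposition of the first virtual scene can be sketched as a table lookup; the region boundaries, region names, and scene contents below are invented illustration data, not values from the disclosure.

```python
REGIONS = [                     # (start_km, end_km, region_id) - assumed data
    (0.0, 5.0, "yard"),
    (5.0, 20.0, "mainline"),
]
VIRTUAL_SCENES = {"yard": "yard-scene", "mainline": "mainline-scene"}

def region_for(position_km):
    """Area determination: map the vehicle position to its spatial area."""
    for start, end, rid in REGIONS:
        if start <= position_km < end:
            return rid
    return None

def guide_scene(position_km, guide_model):
    """Superpose the area's first virtual scene onto the guide model."""
    rid = region_for(position_km)
    first_virtual = VIRTUAL_SCENES.get(rid)
    return {**guide_model, "overlay": first_virtual}

scene = guide_scene(7.5, {"model": "AR-guide"})
```

A real system would index the stored spatial areas by track geometry rather than a flat kilometre table, but the lookup-then-overlay shape is the same.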
As an embodiment of the present invention: the three-dimensional reconstruction module further comprises:
a word segmentation unit: used for converting the running images into text descriptions and performing word segmentation on each description with a word segmentation tool;
a word segmentation collection unit: used for performing part-of-speech tagging on the segmented words of each word segmentation unit with a part-of-speech tagging tool, taking the words tagged as nouns as feature words, and forming a feature word set for each word segmentation unit;
a model building unit: used for taking the feature word sets of all word segmentation units as a group of input data and the subject detail information of the corresponding word segmentation units as labels, which together form a genre model data set;
a mapping verification unit: used for establishing a relational link between the genre model data set and the obstacle database, verifying through this link the different running images connected in descending order, and judging whether the connection order is wrong.
The principle and the beneficial effects of the technical scheme are as follows: when three-dimensional reconstruction is carried out, the images are converted into text, which makes it easier to distinguish the various elements in an image, since text is easier to identify and process. After word segmentation, the feature words of the content are more easily computed, and these feature words form a feature word set. From this set, the genre model data set, a model formed by the segmentation structure that represents the whole event, is computed; the relational link to the obstacle database is then computed, so that it can be judged whether the connection of the images, i.e., the constructed model, is correct or wrong.
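The segmentation and feature-word steps can be sketched as below. The whitespace tokenizer and the tiny part-of-speech lookup table are stand-ins for a real word segmentation tool and tagger, and all words and labels are invented examples.

```python
POS = {"train": "noun", "approaches": "verb", "signal": "noun",
       "red": "adj", "platform": "noun"}            # assumed POS tags

def feature_words(description):
    """Segment a text description and keep the nouns as feature words."""
    tokens = description.lower().split()            # trivial "word segmentation"
    return [t for t in tokens if POS.get(t) == "noun"]

def genre_record(description, subject_label):
    """One record of the genre model data set: feature word set + subject label."""
    return {"features": feature_words(description), "label": subject_label}

rec = genre_record("Train approaches red signal platform", "station-approach")
```

Collecting one such record per running image yields the genre model data set that the mapping verification unit links to the obstacle database.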
As an embodiment of the present invention: the guide module includes:
a judging unit: used for judging whether abnormal information exists in the visual guide scene; wherein,
when abnormal information exists, a corresponding obstacle is generated;
when no abnormal information exists, the scene video is played;
an obstacle guiding unit: used for guiding the driver's visual direction to the scene video corresponding to the obstacle by any one of magnification, marking, or rendering;
a non-obstacle guiding unit: used for acquiring the running images and judging the importance of the constituent elements in the running images.
The principle and the beneficial effects of the technical scheme are as follows: when the judgment result shows that abnormal information exists, a corresponding obstacle file is generated; when no obstacle exists, the scene video is played, so the classification of obstacles can be clearly determined. When an obstacle exists, it is emphasized by any one of magnification, marking, or rendering, which makes it convenient to guide the driver's sight to observe the obstacle. Even when there is no obstacle, the driver is guided to observe, as far as possible, the places where an abnormality is likely to occur, on the basis of the constituent elements of the image.
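The two guidance branches, obstacle highlighting versus importance-ranked elements, can be sketched as one decision function; the scene dictionary, the `"magnify"` highlight choice, and the importance scores are illustrative assumptions.

```python
def guide(scene):
    """Pick the guidance target: obstacle video if abnormal info exists,
    otherwise the image element of highest importance."""
    if scene.get("abnormal"):
        # obstacle branch: generate an obstacle record and highlight it
        obstacle = {"type": scene["abnormal"], "highlight": "magnify"}
        return ("obstacle-video", obstacle)
    # non-obstacle branch: rank the constituent elements by importance
    best = max(scene["elements"], key=lambda e: e["importance"])
    return ("element-video", best)

mode, target = guide({"abnormal": "fallen-branch"})
mode2, target2 = guide({"abnormal": None,
                        "elements": [{"name": "signal", "importance": 0.9},
                                     {"name": "sky", "importance": 0.1}]})
```

Magnification, marking, and rendering would all plug in at the `highlight` field; only one mode is shown here.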
As an embodiment of the present invention: the guiding module judges whether the visual guide scene has an obstacle through the following steps:
step 1: calculating an element feature Z of the visual guide scene:
wherein w_{i,j} denotes the position of the i-th element at the j-th coordinate point; w_{i,0} denotes the position of the i-th element relative to the central coordinate point; s_{i,j} denotes the coordinate coefficient of the i-th element at the j-th coordinate point; L_{i,j} denotes the element characteristic of the i-th element at the j-th coordinate point; B_{i,j} denotes the element attribute of the i-th element at the j-th coordinate point; i = 1, 2, 3, …, n; j = 1, 2, 3, …, m;
step 2: generating an obstacle identification function based on a preset obstacle database:
wherein x_i denotes an identification feature of the i-th obstacle; T_i denotes a real-time identification feature of the i-th obstacle, i.e., the feature currently being detected; y_i denotes the recognition deviation of the i-th obstacle; T_δ denotes a comprehensive feature of the obstacle, i.e., all the features the obstacle possesses; T_ε denotes an invalid feature of the obstacle; an invalid feature is one that is present in many obstacles and therefore lacks, in the calculation, characteristics unique to a particular obstacle.
when γ(x, y) > 0, it indicates that there is an obstacle;
when γ(x, y) ≤ 0, it indicates that there is no obstacle.
The principle and the beneficial effects of the technical scheme are as follows: in order to determine whether an abnormal accident may occur, the invention first judges, by calculating the elements, whether relevant elements exist in the scene; these elements are related to an obstacle and contain the obstacle's features. An identification function of the obstacle is then determined by processing the obstacle, and with this function it is determined whether an obstacle exists. When an obstacle exists, the file of the corresponding obstacle is called through the rules preset for the identification function, and the type of the obstacle can then be clearly known from this file.
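Since the disclosure's formula for the identification function is not fully recoverable from the text, the sketch below only illustrates its stated behaviour: compare real-time identification features T_i against stored features x_i, discount the invalid feature T_ε, and report an obstacle when γ is positive. The scoring rule itself is an assumption.

```python
def gamma(stored, realtime, invalid=0.0):
    """Toy identification score: reward agreement between stored features x_i
    and real-time features T_i, penalize their deviation, subtract the
    invalid-feature term T_eps (all forms assumed, not from the disclosure)."""
    score = sum(t - abs(t - x) for x, t in zip(stored, realtime))
    return score - invalid

def has_obstacle(stored, realtime, invalid=0.0):
    # gamma > 0 -> obstacle present; gamma <= 0 -> no obstacle
    return gamma(stored, realtime, invalid) > 0

flag = has_obstacle(stored=[1.0, 2.0], realtime=[1.0, 1.9], invalid=0.5)
```

The sign convention matches the decision rule in step 2: a positive γ triggers the obstacle branch of the guiding module.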
As an embodiment of the present invention: the guiding module judges whether the visual guiding scene has an obstacle, and further comprises the following steps:
constructing a judgment model according to the obstacle identification function, and determining an obstacle;
wherein Z represents a total characteristic parameter of the identifiable characteristic of the obstacle; k represents the total barrier type parameter.
After feature identification is carried out, whether an obstacle is actually present still needs to be judged, so Z and K are introduced; through the obstacle types and their overall features, intelligent obstacle identification is achieved.
The principle and the beneficial effects of the technical scheme are as follows: in order to determine whether an abnormal accident exists, the features of the obstacle are calculated; at this stage, the features of the obstacle are determined by constructing a large number of coordinate systems and fusing coordinates on the basis of the coordinates of the event point in the actual situation. An identification function of the obstacle is then determined by processing the obstacle, and with this function it is determined whether an obstacle exists. When an obstacle exists, the file of the corresponding obstacle is called through the rules preset for the identification function, and the type of the obstacle can then be clearly known from this file.
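A minimal sketch of the judgment model built on Z (total characteristic parameter of the identifiable features) and K (total obstacle-type parameter): the obstacle-type table, its characteristic values, and the nearest-match rule are all assumed for illustration.

```python
OBSTACLE_TYPES = {"branch": 3.0, "rockfall": 7.0, "animal": 5.0}  # assumed Z values
K = len(OBSTACLE_TYPES)          # total obstacle-type parameter

def classify(Z):
    """Judgment model sketch: pick the obstacle type whose characteristic
    parameter lies closest to the measured total characteristic Z."""
    return min(OBSTACLE_TYPES, key=lambda t: abs(OBSTACLE_TYPES[t] - Z))

kind = classify(4.8)
```

In the described system this classification step would run only after the identification function γ has reported that an obstacle is present.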
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.