CN112526520A - Pedestrian and obstacle prompting system - Google Patents
- Publication number
- CN112526520A (application number CN201910807040.XA)
- Authority
- CN
- China
- Prior art keywords
- processing module
- pedestrian
- obstacle
- image data
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/93—Sonar systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention provides a pedestrian and obstacle prompting system for an intelligent rail electric car. By combining image perception with radar perception, the system enables the car to detect pedestrians and obstacles over a 360-degree range around the car body, expanding the detection range. Image recognition capability can be improved through a deep-learning training method, and a multi-dimensional heterogeneous sensor fusion module performs feature-level multi-dimensional heterogeneous fusion of at least two kinds of sensor information, effectively improving the accuracy of road-environment target detection and recognition. Directional sound can be played to prompt pedestrians and drivers, improving the pedestrian and obstacle prompting capability of the intelligent rail electric car and meeting its driving-safety and pedestrian-protection requirements.
Description
Technical Field
The invention relates to a prompting system, in particular to a pedestrian and obstacle detection prompting system for an intelligent rail electric car.
Background
At present, the intelligent rail electric car adopts track-following control technology and is powered by batteries. Each car section can carry 100 passengers, so the car retains the large passenger capacity of a rail train, yet it requires no dedicated steel rail and can share the road with automobiles, offering a new option for easing travel in large and medium-sized cities. In terms of cost, subway construction in China runs about 400 to 700 million yuan per kilometer and a modern tramcar line about 150 to 200 million yuan per kilometer, while the whole-line investment of the intelligent tramcar is about 1/5 that of the modern tramcar. Beyond the main and branch traffic lines of large and medium-sized cities, the intelligent rail electric car also holds advantages over traditional rail vehicles in market segments with specific functions, such as tourist sightseeing areas, airports, and ecological towns, owing to its low investment cost, short construction period, and strong adaptability to cities. However, the prior art offers no pedestrian and obstacle detection for the smart rail electric car in the field of rail transit; to improve its safety and pedestrian-protection capability, a pedestrian and obstacle prompting system and method suited to the smart rail electric car is urgently needed.
Disclosure of Invention
The technical problem the invention aims to solve, in view of the defects of the prior art, is to provide a specific application of pedestrian and obstacle detection for the intelligent tramcar in the field of rail transit: a system that can detect pedestrian behavior and obstacles within a 360-degree range around the car body while the car is driving or parked, and that can play directional sound to prompt pedestrians and drivers, improving the pedestrian and obstacle recognition and prompting capability of the intelligent tramcar and meeting driving-safety requirements.
In order to solve the above problems, according to a first aspect of the present invention, there is provided a pedestrian and obstacle prompting system arranged in an intelligent rail electric car having L marshalling cars, where L ≥ 2, the pedestrian and obstacle prompting system comprising:
the detection module comprises a camera detection module and a radar sensing module,
wherein the camera detection module comprises at least L camera detection sub-modules arranged in the L sections of marshalling carriages, each camera detection sub-module comprises at least 2 cameras, and the cameras can acquire image data outside the carriages,
the radar sensing module comprises at least 2L radar sensors, and the radar sensors are arranged in each marshalling compartment and used for acquiring point cloud data outside each marshalling compartment;
the processing module is connected with the detection module, can process the image data to detect pedestrians and obstacles to obtain an image detection result, can process the point cloud data to detect pedestrians and obstacles to obtain a point cloud detection result, and can fuse the image detection result and the point cloud detection result to obtain a classification result of the pedestrians and the obstacles;
and the directional playing module is connected with the processing module and comprises at least 2L external directional players and at least 2 in-vehicle directional players, the external directional players can emit sound waves according to the pedestrian and obstacle classification result, and the in-vehicle directional players can play alarm prompts according to the pedestrian and obstacle classification result.
Preferably, the radar sensor comprises a laser radar sensor and/or a millimeter wave radar sensor.
Preferably, the radar sensors are mounted on both outer sides of each of the marshalling cars, on the front side of the first marshalling car, and on the rear side of the L-th marshalling car.
Preferably, the cameras are disposed on both outer sides of each of the marshalling compartments, on a front side of the first section of the marshalling compartment, and on a rear side of the L-th section of the marshalling compartment.
Preferably, the processing module processes the image data collected by the cameras in the same camera detection submodule to obtain at least L groups of grouped carriage image data, processes the grouped carriage image data to obtain all-around view image data, and processes the all-around view image data to detect pedestrians and obstacles to obtain the image detection result.
Preferably, the processing module performs fusion processing on the data by using a filtering algorithm.
Preferably, the number of external directional players equals the number of cameras, and each external directional player is mounted at the same position as a camera.
Preferably, the in-vehicle directional player includes a first in-vehicle directional player disposed inside a first section of the marshalling compartment and a second in-vehicle directional player disposed inside an lth section of the marshalling compartment.
According to a second aspect of the present invention, there is provided a pedestrian and obstacle prompting method for the pedestrian and obstacle prompting system of an intelligent rail electric car having L marshalling cars, where L ≥ 2. The pedestrian and obstacle prompting system comprises a detection module, a processing module, and a directional playing module. The detection module comprises a camera detection module and a radar sensing module: the camera detection module comprises at least L camera detection sub-modules arranged in the L marshalling cars, each sub-module comprising at least 2 cameras, and the radar sensing module comprises at least 2L radar sensors arranged in the L marshalling cars. The directional playing module comprises at least 2L external directional players arranged outside the L marshalling cars and at least 2 in-vehicle directional players arranged inside the first and L-th marshalling cars.
the prompting method comprises the following steps:
s1: the camera collects image data outside each marshalling carriage and sends the image data to the processing module;
s2: the processing module detects pedestrians and obstacles in the image data to obtain an image detection result;
s3: the radar sensor collects point cloud data outside the intelligent tramcar and sends the point cloud data to the processing module;
s4: the processing module detects pedestrians and obstacles in the point cloud data to obtain a point cloud detection result;
s5: the processing module fuses the image detection result and the point cloud detection result into fused data;
s6: the processing module analyzes the fusion data to obtain a pedestrian and obstacle classification result;
s7: the processing module extracts three-dimensional coordinates of the pedestrians and the obstacles;
s8: the processing module adjusts the direction of the external directional player according to the three-dimensional coordinates and controls the external directional player to emit sound waves;
s9: the processing module controls the in-vehicle directional player to play an alarm prompt,
preferably, the steps S1 to S2 and the steps S3 to S4 may be performed simultaneously, and the steps S2 and S4 are completed before the process proceeds to the step S5.
Preferably, the radar sensor comprises a laser radar sensor and/or a millimeter wave radar sensor, and the point cloud data comprises laser radar point cloud data and/or millimeter wave radar point cloud data.
Preferably, a training step is further provided before step S1, where the training step includes the following steps:
s01: connecting the pedestrian and obstacle prompting system with a training module;
s02: the camera collects training image data and transmits the training image data to the training module;
s03: the training module analyzes the training image data and marks pedestrians and obstacles;
s04: the training module sends the labeling result to a target detection training model and trains the target detection training model;
s05: and carrying the target detection training model to the processing module.
Preferably, the step S2 further includes the steps of:
s21: the processing module processes the image data acquired by the cameras in the same camera detection submodule to obtain at least L groups of carriage image data;
s22: the processing module processes the image data of each marshalling compartment to obtain all-round-looking image data;
s23: and the processing module detects pedestrians and obstacles according to the all-around image data to obtain the image detection result.
Preferably, the step S4 further includes the steps of:
s41: the processing module analyzes the point cloud data, identifies road points and marks the road points;
s42: the processing module identifies other point cloud data points and carries out clustering;
s43: the processing module extracts features from the clustering result;
s44: the processing module detects the pedestrian and the obstacle according to the extracted features.
Preferably, the step S5 further includes the steps of:
s51: the processing module carries out modeling on the camera and the radar sensor to obtain a modeling model;
s52: the processing module calibrates the modeling model;
s53: the processing module constructs coordinate conversion between the camera and the radar sensor;
s54: and the processing module performs data fusion on the image detection result and the point cloud detection result through a filtering algorithm to obtain the fused data.
Preferably, the step S8 further includes:
s81: the processing module acquires the position information of the pedestrians and the obstacles according to the three-dimensional coordinates;
s82: the processing module generates a control signal and sound source information and sends the control signal and the sound source information to the external directional player;
S83: the external directional player adjusts its direction and emits sound waves according to the control signal and the sound source information.
Compared with the prior art, by combining image sensing with radar sensing, the intelligent tramcar can detect pedestrians and obstacles over a 360-degree range around the car body, expanding the detection range. Through a deep-learning training method, the recognition capability of the sensors can be improved. By providing a multi-dimensional heterogeneous sensor fusion module, feature-level multi-dimensional heterogeneous fusion can be performed on at least two kinds of sensor information, effectively improving the accuracy of road-environment target detection and recognition. The intelligent tramcar prompts pedestrians through directional sound playing and prompts drivers through in-car sound playing, improving its pedestrian and obstacle prompting capability and effectively meeting its driving-safety and pedestrian-protection requirements.
Drawings
The foregoing summary, as well as the following detailed description of the invention, will be better understood when read in conjunction with the appended drawings. It is to be noted that the appended drawings are intended as examples of the claimed invention. In the drawings, like reference characters designate the same or similar elements.
FIG. 1 is a schematic view of a pedestrian and obstacle alert system arrangement according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a pedestrian and obstacle prompting system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a pedestrian and obstacle alert system according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of a pedestrian and obstacle indication method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a pedestrian and obstacle prompting system training procedure according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an image detection result obtaining step according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a point cloud detection result obtaining step according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a data fusion process according to an embodiment of the invention; and
fig. 9 is a schematic diagram of the acoustic emission steps of an external directional player according to an embodiment of the present invention.
Detailed Description
The detailed features and advantages of the present invention are described in the detailed description that follows, which is sufficient to enable anyone skilled in the art to understand and implement the technical content of the invention; the related objects and advantages of the invention will be readily understood by those skilled in the art from the description, claims, and drawings disclosed in this specification.
Referring to fig. 1, as a first aspect of the present invention, a pedestrian and obstacle presenting system is provided in a smart rail electric car having L marshalling cars, L ≧ 2. According to an embodiment of the present invention, as shown in fig. 2, the pedestrian and obstacle prompting system may include a detection module 1, a processing module 2, and a directional playing module 3.
The detection module 1 comprises a camera detection module and a radar sensing module. The camera detection module comprises at least L camera detection sub-modules arranged in the L marshalling cars, and each sub-module comprises at least 2 cameras 4 that collect image data outside the car; preferably, the cameras 4 are arranged on both outer sides of each marshalling car, on the front side of the first marshalling car, and on the rear side of the L-th marshalling car. The radar sensing module comprises at least 2L radar sensors 5, which are arranged in each marshalling car and collect point cloud data outside the car to construct a 3D point cloud map; preferably, the radar sensors 5 are mounted on both outer sides of each marshalling car, on the front side of the first marshalling car, and on the rear side of the L-th marshalling car. Each radar sensor 5 may be a lidar sensor and/or a millimeter-wave radar sensor, or another type of radar sensor.
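The minimum hardware counts implied by this configuration scale linearly with the number of marshalling cars L. A minimal sketch of those counts (illustrative only; the function name and dictionary layout are our own, not from the patent):

```python
def minimum_sensor_counts(L):
    """Minimum component counts for an L-car consist per the patent's
    configuration: >= L camera sub-modules of >= 2 cameras each,
    >= 2L radar sensors, >= 2L external directional players, and
    2 in-vehicle directional players (1st and L-th cars)."""
    assert L >= 2, "the patent requires at least two marshalling cars"
    return {
        "camera_submodules": L,
        "cameras": 2 * L,
        "radar_sensors": 2 * L,
        "external_players": 2 * L,
        "in_vehicle_players": 2,
    }

# A three-car consist needs at least 6 cameras and 6 radar sensors.
counts = minimum_sensor_counts(3)
```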
The processing module 2 is connected with the detection module 1, can process the image data to detect pedestrians and obstacles to obtain an image detection result, can process the point cloud data to detect pedestrians and obstacles to obtain a point cloud detection result, and can fuse the image detection result and the point cloud detection result to obtain a pedestrian and obstacle classification result.
The directional playing module 3 is connected to the processing module 2 and comprises at least 2L external directional players 6 and at least 2 in-vehicle directional players 7. Preferably, the number of external directional players 6 equals the number of cameras 4, and each external directional player 6 is mounted at the same position as a camera 4. The in-vehicle directional players 7 may be arranged inside the 1st and L-th marshalling cars, for example near the driver's position. The external directional players 6 emit sound waves according to the pedestrian and obstacle classification result, and the in-vehicle directional players 7 play an alarm prompt according to that result.
Further, the processing module 2 processes the image data collected by the cameras 4 of the same camera detection sub-module to obtain at least L groups of marshalling car image data, then processes these groups to obtain surround-view image data, from which it detects pedestrians and obstacles to obtain the image detection result. A single processing module may handle image processing, point-cloud processing, data fusion, and control of the external and in-vehicle directional players, or the work may be divided among several sub-processing modules; for example, several modules may perform multi-stage image processing, or, when at least 2 different types of radar sensors 5 are used, separate modules may process the point cloud data of each type. When multiple modules are used, they may be installed centrally or distributed across the marshalling cars.
For example, referring to fig. 3, at least L first processing sub-modules 2-1 process the image data collected by the cameras 4 of the same camera detection sub-module to obtain marshalling car image data; one second processing sub-module 2-2 processes the marshalling car image data to obtain surround-view image data and performs pedestrian and obstacle detection; one third processing sub-module 2-3 performs pedestrian and obstacle detection on the point cloud data; one fourth processing sub-module 2-4 performs data fusion and pedestrian and obstacle classification; and one fifth processing sub-module 2-5 controls the directional playing module 3. That is, each car has one first processing sub-module 2-1 that splices the images collected by that car's cameras 4; the second processing sub-module 2-2 splices the information from all cars into a frame covering 360 degrees around the vehicle body; the third processing sub-module 2-3 detects pedestrians and obstacles in the point cloud data collected by the radar sensors 5 to obtain the point cloud detection result; the fourth processing sub-module 2-4 fuses the image detection result with the point cloud detection result and classifies the pedestrians and obstacles; and the fifth processing sub-module 2-5 obtains the three-dimensional coordinates of the pedestrians and obstacles and controls the external directional players 6 and the in-vehicle directional players 7 to play sound.
In one embodiment of the present invention, the processing module may perform fusion processing on the data by using a filtering algorithm.
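The patent names only "a filtering algorithm" for the fusion step. As one hedged illustration, a static Kalman (inverse-variance) update over a single target coordinate could look like the sketch below — the choice of Kalman-style weighting, the function name, and all numeric values are assumptions, not taken from the patent:

```python
def kalman_fuse(x_cam, var_cam, x_radar, var_radar):
    """Fuse two noisy position estimates of the same target.

    A one-step static Kalman update: the lower-variance measurement
    pulls the fused estimate toward itself, and the fused variance is
    smaller than either input variance.
    """
    k = var_cam / (var_cam + var_radar)      # Kalman gain
    x_fused = x_cam + k * (x_radar - x_cam)  # fused position estimate
    var_fused = (1.0 - k) * var_cam          # reduced uncertainty
    return x_fused, var_fused

# Camera estimates 10.0 m (variance 1.0); radar estimates 10.4 m
# (variance 0.25). The fused value lands nearer the precise radar.
x, v = kalman_fuse(10.0, 1.0, 10.4, 0.25)
```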
In a second aspect, the present invention provides a pedestrian and obstacle prompting method for the pedestrian and obstacle prompting system of the smart rail electric vehicle, which is shown in fig. 4, and the prompting method includes the following steps:
s1: the camera collects image data outside each marshalling carriage and sends the image data to the processing module;
s2: the processing module detects pedestrians and obstacles in the image data to obtain an image detection result;
s3: the radar sensor collects point cloud data outside the intelligent tramcar and sends the point cloud data to the processing module;
s4: the processing module detects pedestrians and obstacles in the point cloud data to obtain a point cloud detection result;
s5: the processing module fuses the image detection result and the point cloud detection result into fused data;
s6: the processing module analyzes the fusion data to obtain a pedestrian and obstacle classification result;
s7: the processing module extracts three-dimensional coordinates of pedestrians and obstacles;
s8: the processing module adjusts the direction of the external directional player according to the three-dimensional coordinates and controls the external directional player to emit sound waves;
s9: and the processing module controls the directional player in the vehicle to play an alarm prompt.
Further, among the above steps, steps S1 to S2 performed by the camera detection module and steps S3 to S4 performed by the radar sensing module may be performed simultaneously and independently of each other, and the process proceeds to step S5 only after both step S2 and step S4 are completed.
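The concurrency constraint just described — the camera branch (S1-S2) and the radar branch (S3-S4) running in parallel, with fusion (S5) gated on both finishing — can be sketched with standard Python threading. The branch bodies below are placeholders, not the patent's detection logic:

```python
from concurrent.futures import ThreadPoolExecutor

def camera_branch(frame):
    # Stand-in for S1-S2: image capture + pedestrian/obstacle detection.
    return {"source": "camera", "targets": ["pedestrian"]}

def radar_branch(cloud):
    # Stand-in for S3-S4: point-cloud capture + detection.
    return {"source": "radar", "targets": ["pedestrian", "obstacle"]}

def run_cycle(frame, cloud):
    # S1-S2 and S3-S4 run concurrently; S5 starts only after both finish.
    with ThreadPoolExecutor(max_workers=2) as pool:
        img_future = pool.submit(camera_branch, frame)
        pcl_future = pool.submit(radar_branch, cloud)
        img_result = img_future.result()  # blocks until S2 is done
        pcl_result = pcl_future.result()  # blocks until S4 is done
    # S5: a trivial stand-in for fusion — merge the two target lists.
    return {"fused": img_result["targets"] + pcl_result["targets"]}

result = run_cycle(frame=None, cloud=None)
```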
Further, the point cloud data includes laser radar point cloud data collected by a laser radar sensor and/or millimeter wave radar point cloud data collected by a millimeter wave radar sensor.
Further, a training step is provided before step S1, in which the pedestrian and obstacle prompting system is trained by deep learning. Training may be performed on a ground server, with the target detection training model deployed to the train afterwards, or the model may be trained directly on an on-board server; preferably, the pedestrian and obstacle prompting system is trained on a ground server. Referring to fig. 5, the training step comprises the following steps:
s01: connecting a pedestrian and obstacle prompting system with the training module;
s02: the camera collects training image data and transmits the training image data to the training module;
s03: the training module analyzes training image data and marks pedestrians and obstacles;
s04: the training module sends the labeling result to a target detection training model and trains the target detection training model;
s05: and carrying the target detection training model to a processing module.
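The dataflow of training steps S01-S05 can be sketched as below. The "annotator" and "trainer" are stand-ins — a real deployment would fit a deep detector here — and every function name and data shape is hypothetical:

```python
def label_images(raw_images):
    # S03: stand-in annotator; real systems use human-verified
    # bounding boxes for pedestrians and obstacles.
    return [{"image": img, "boxes": [("pedestrian", (10, 20, 50, 120))]}
            for img in raw_images]

def train_detector(labelled):
    # S04: placeholder "training" that only records the classes seen;
    # the patent's actual model would be a trained deep network.
    classes = {cls for sample in labelled for cls, _ in sample["boxes"]}
    return {"classes": sorted(classes)}

def deploy(model, processing_module):
    # S05: carry the trained model onto the processing module.
    processing_module["model"] = model
    return processing_module

module = deploy(train_detector(label_images(["frame_0", "frame_1"])), {})
```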
Further, referring to fig. 6, the step S2 further includes the following steps:
s21: the processing module processes image data acquired by cameras in the same camera detection submodule to obtain at least L groups of carriage image data;
s22: the processing module splices the image data of each marshalling compartment to obtain all-round-looking image data;
s23: the processing module detects pedestrians and obstacles according to the all-around image data to obtain an image detection result.
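Steps S21-S23 describe a two-level stitch: the views of each car's cameras are merged first, then the per-car results are joined into one surround view. A toy sketch using string concatenation in place of real image warping and blending (an assumption — the patent does not specify the stitching method):

```python
def stitch_car(images):
    # S21: merge the >= 2 camera views of one marshalling car into one
    # strip (real systems warp and blend; "|"-joining stands in here).
    return "|".join(images)

def surround_view(per_car_images):
    # S22: join the L per-car strips into a single 360-degree view,
    # which S23 then feeds to the pedestrian/obstacle detector.
    return "||".join(stitch_car(car) for car in per_car_images)

# Two marshalling cars (L = 2) with two cameras each:
view = surround_view([["cam_1L", "cam_1R"], ["cam_2L", "cam_2R"]])
```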
Further, referring to fig. 7, the step S4 further includes the following steps:
s41: the processing module analyzes the point cloud data, identifies road points and marks the road points;
s42: the processing module identifies other point cloud data points and carries out clustering;
s43: the processing module extracts features from the clustering result;
s44: the processing module detects pedestrians and obstacles according to the extracted features.
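Steps S41-S44 amount to ground segmentation, clustering, and feature extraction over the point cloud. A deliberately simplified sketch — height thresholding for road points and greedy one-dimensional clustering stand in for whatever algorithms the patent leaves unspecified:

```python
def split_road_points(points, road_z=0.2):
    # S41: points at or below the assumed road height are tagged road.
    road = [p for p in points if p[2] <= road_z]
    rest = [p for p in points if p[2] > road_z]
    return road, rest

def cluster(points, eps=1.0):
    # S42: greedy single-pass clustering by gap along x — a toy
    # stand-in for a DBSCAN-style method (our assumption).
    clusters = []
    for p in sorted(points):
        if clusters and p[0] - clusters[-1][-1][0] <= eps:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

def features(clusters):
    # S43: per-cluster point count and max height as toy features
    # that a classifier would consume in S44.
    return [{"n": len(c), "max_z": max(p[2] for p in c)} for c in clusters]

pts = [(0.0, 0, 0.1), (5.0, 0, 1.6), (5.3, 0, 1.7), (9.0, 0, 0.9)]
road, rest = split_road_points(pts)
feats = features(cluster(rest))
```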
Further, referring to fig. 8, the step S5 further includes the following steps:
s51: the processing module carries out modeling on the camera and the radar sensor to obtain a modeling model;
s52: the processing module calibrates the modeling model;
s53: the processing module constructs coordinate conversion between the camera and the radar sensor;
s54: and the processing module performs data fusion on the image detection result and the point cloud detection result through a filtering algorithm to obtain fusion data.
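Step S53's coordinate conversion between the camera and radar frames is a rigid transform whose parameters come from the calibration of S51-S52. A minimal sketch with an illustrative yaw-plus-translation extrinsic (the numeric values are made up, not from the patent):

```python
import math

def radar_to_camera(point, yaw_deg, translation):
    # S53: rigid transform radar -> camera: rotate about the z axis by
    # the calibrated yaw, then shift by the calibrated offset.
    yaw = math.radians(yaw_deg)
    x, y, z = point
    cx = math.cos(yaw) * x - math.sin(yaw) * y + translation[0]
    cy = math.sin(yaw) * x + math.cos(yaw) * y + translation[1]
    return (cx, cy, z + translation[2])

# Radar sees a target 10 m ahead; the camera sits 0.5 m further
# forward along x with no rotation offset (illustrative extrinsics).
p_cam = radar_to_camera((10.0, 0.0, 0.0), yaw_deg=0.0,
                        translation=(0.5, 0.0, 0.0))
```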
Further, referring to fig. 9, step S8 further includes:
s81: the processing module acquires pedestrian and obstacle position information according to the three-dimensional coordinates;
s82: the processing module generates a control signal and sound source information and sends the control signal and the sound source information to an external directional player;
s83: the external directional player adjusts the direction and emits sound waves according to the control signal and the sound source information.
Before the vehicle is put into operation, the target detection training model undergoes continuous deep-learning training so that it can identify pedestrians and obstacles; a processing module carrying this model can then analyze data to detect them. When the processing module consists of several sub-modules, the model is carried in the sub-module responsible for data fusion and pedestrian and obstacle classification. While the vehicle is driving, the cameras and radar sensors mounted outside the car body collect external images and point cloud data in real time, and the processing module analyzes the collected data in real time to detect pedestrians and obstacles. When a pedestrian or obstacle is detected outside the vehicle, the processing module obtains its three-dimensional coordinates through data fusion, controls an external directional player to emit sound waves in that direction, and controls the in-vehicle directional player to give an alarm prompt to the driver.
Those of skill would appreciate that the various illustrative logical blocks, modules, circuits, and steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative logical blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The skilled person will also readily recognise that the order or combination of components, methods or interactions described herein is merely exemplary, and that components, methods or interactions herein may be combined or performed in a manner different from those described herein.
The terms and expressions which have been employed herein are used as terms of description and not of limitation. The use of such terms and expressions is not intended to exclude any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications may be made within the scope of the claims. Other modifications, variations, and alternatives, such as the replacement of components of different specifications, may also exist. Accordingly, the claims should be looked to in order to cover all such equivalents.
It should also be noted that although the invention has been described with reference to the present specific embodiments, those skilled in the art will understand that the above embodiments merely illustrate the invention. Various equivalent changes or substitutions may be made without departing from its spirit, and all such changes and modifications to the above embodiments are intended to fall within the scope of the claims of this application.
Claims (16)
1. A pedestrian and obstacle prompting system, disposed in an intelligent rail electric car, wherein the intelligent rail electric car has L marshalling carriages, L being greater than or equal to 2, the pedestrian and obstacle prompting system comprising:
a detection module, including a camera detection module and a radar sensing module,
wherein the camera detection module includes at least L camera detection sub-modules arranged in the L marshalling carriages, each camera detection sub-module including at least 2 cameras for acquiring image data outside the carriages,
and the radar sensing module includes at least 2L radar sensors arranged in the marshalling carriages for acquiring point cloud data outside each marshalling carriage;
a processing module, connected with the detection module, which can process the image data to detect pedestrians and obstacles to obtain an image detection result, process the point cloud data to detect pedestrians and obstacles to obtain a point cloud detection result, and fuse the image detection result and the point cloud detection result to obtain a pedestrian and obstacle classification result; and
a directional playing module, connected with the processing module, including at least 2L external directional players and at least 2 in-vehicle directional players, wherein the external directional players can emit sound waves according to the pedestrian and obstacle classification result, and the in-vehicle directional players can play an alarm prompt according to the pedestrian and obstacle classification result.
2. The pedestrian and obstacle prompting system of claim 1, wherein the radar sensors include laser radar sensors and/or millimeter-wave radar sensors.
3. The pedestrian and obstacle prompting system of claim 1, wherein the radar sensors are mounted on both exterior sides of each marshalling carriage, on the front side of the first marshalling carriage, and on the rear side of the Lth marshalling carriage.
4. The pedestrian and obstacle prompting system of claim 1, wherein the cameras are disposed on both exterior sides of each marshalling carriage, on the front side of the first marshalling carriage, and on the rear side of the Lth marshalling carriage.
5. The pedestrian and obstacle prompting system of claim 1, wherein the processing module processes the image data collected by the cameras within the same camera detection sub-module to obtain at least L groups of carriage image data, stitches the groups of carriage image data into panoramic image data, and processes the panoramic image data to detect pedestrians and obstacles to obtain the image detection result.
6. The pedestrian and obstacle prompting system of claim 1, wherein the processing module employs a filtering algorithm to fuse the data.
7. The pedestrian and obstacle prompting system of claim 1, wherein the number of external directional players is the same as the number of cameras, and the external directional players are disposed at the same locations as the cameras.
8. The pedestrian and obstacle prompting system of claim 6, wherein the in-vehicle directional players include a first in-vehicle directional player disposed in the first marshalling carriage and a second in-vehicle directional player disposed in the Lth marshalling carriage.
9. A pedestrian and obstacle prompting method for a pedestrian and obstacle prompting system of an intelligent rail electric car, wherein the intelligent rail electric car has L marshalling carriages, L being greater than or equal to 2; the pedestrian and obstacle prompting system comprises a detection module, a processing module and a directional playing module; the detection module comprises a camera detection module and a radar sensing module; the camera detection module comprises at least L camera detection sub-modules arranged in the L marshalling carriages, each camera detection sub-module comprising at least 2 cameras; the radar sensing module comprises at least 2L radar sensors arranged in the L marshalling carriages; and the directional playing module comprises at least 2L external directional players arranged outside the L marshalling carriages and at least 2 in-vehicle directional players arranged inside the first and the Lth marshalling carriages,
the prompting method comprises the following steps:
S1: the cameras collect image data outside each marshalling carriage and send the image data to the processing module;
S2: the processing module detects pedestrians and obstacles in the image data to obtain an image detection result;
S3: the radar sensors collect point cloud data outside the intelligent rail electric car and send the point cloud data to the processing module;
S4: the processing module detects pedestrians and obstacles in the point cloud data to obtain a point cloud detection result;
S5: the processing module fuses the image detection result and the point cloud detection result into fused data;
S6: the processing module analyzes the fused data to obtain a pedestrian and obstacle classification result;
S7: the processing module extracts three-dimensional coordinates of the pedestrians and obstacles;
S8: the processing module adjusts the direction of the external directional players according to the three-dimensional coordinates and controls them to emit sound waves; and
S9: the processing module controls the in-vehicle directional players to play an alarm prompt.
10. The pedestrian and obstacle prompting method according to claim 9, wherein steps S1 to S2 and steps S3 to S4 are performed simultaneously, and the method proceeds to step S5 after both steps S2 and S4 are completed.
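The sequencing in claim 10 — the image branch (S1–S2) and the point-cloud branch (S3–S4) running concurrently, with fusion (S5) waiting on both — can be sketched as follows. The branch functions here are placeholders, not the patented detectors:

```python
from concurrent.futures import ThreadPoolExecutor

def image_branch(imgs):
    """Stub for S1-S2: camera acquisition + image detection."""
    return {"image_result": len(imgs)}

def cloud_branch(clouds):
    """Stub for S3-S4: radar acquisition + point-cloud detection."""
    return {"cloud_result": len(clouds)}

def prompt_pipeline(imgs, clouds):
    """Run the two branches in parallel; S5 fuses once both finish."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_img = pool.submit(image_branch, imgs)
        f_cloud = pool.submit(cloud_branch, clouds)
        # .result() blocks, so fusion only happens after S2 and S4 complete
        fused = {**f_img.result(), **f_cloud.result()}
    return fused

result = prompt_pipeline(["i1", "i2"], ["c1"])
```

The blocking `.result()` calls express the "proceed to S5 only after S2 and S4 are completed" constraint directly.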
11. The pedestrian and obstacle prompting method according to claim 9, wherein the radar sensors include laser radar sensors and/or millimeter-wave radar sensors, and the point cloud data includes laser radar point cloud data and/or millimeter-wave radar point cloud data.
12. The pedestrian and obstacle prompting method according to claim 9, wherein a training step is further provided before step S1, the training step including:
S01: connecting the pedestrian and obstacle prompting system to a training module;
S02: the cameras collect training image data and transmit the training image data to the training module;
S03: the training module analyzes the training image data and labels pedestrians and obstacles;
S04: the training module sends the labeling results to a target detection training model and trains the target detection training model; and
S05: the trained target detection training model is deployed to the processing module.
13. The pedestrian and obstacle prompting method according to claim 9, wherein step S2 further includes:
S21: the processing module processes the image data acquired by the cameras within the same camera detection sub-module to obtain at least L groups of carriage image data;
S22: the processing module stitches the groups of carriage image data into panoramic image data; and
S23: the processing module detects pedestrians and obstacles in the panoramic image data to obtain the image detection result.
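As a toy illustration of the grouping and stitching in S21–S22 (not the patented method — a real system would warp and blend overlapping fields of view rather than concatenate), per-carriage frames can be mosaicked like this:

```python
import numpy as np

def stitch_car(images):
    """Naive mosaic for one carriage's cameras: side-by-side concatenation.
    A real panoramic stitcher would align and blend overlapping views."""
    return np.hstack(images)

def build_panorama(cars):
    """cars: list of per-carriage image lists (the >= L groups of S21)."""
    return np.hstack([stitch_car(imgs) for imgs in cars])

# Two carriages, two 4x3 grayscale frames each
frame = np.zeros((4, 3), dtype=np.uint8)
pano = build_panorama([[frame, frame], [frame, frame]])
```

The resulting array stands in for the panoramic image data that S23 feeds to the detector.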
14. The pedestrian and obstacle prompting method according to claim 9, wherein step S4 further includes:
S41: the processing module analyzes the point cloud data, identifies road points, and marks them;
S42: the processing module clusters the remaining point cloud data points;
S43: the processing module extracts features from the clustering results; and
S44: the processing module detects pedestrians and obstacles from the extracted features.
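A deliberately simplified sketch of S41–S43 (the flat-ground z threshold, one-dimensional gap clustering, and bounding-box features are stand-ins for whatever the actual system uses):

```python
import numpy as np

def remove_road(points, z_thresh=0.2):
    """S41: drop points near the road plane (here: flat ground, z below threshold)."""
    return points[points[:, 2] > z_thresh]

def cluster_1d(points, gap=1.0):
    """S42: toy clustering — group points whose x-spacing is below `gap`."""
    pts = points[np.argsort(points[:, 0])]
    clusters, start = [], 0
    for i in range(1, len(pts)):
        if pts[i, 0] - pts[i - 1, 0] > gap:
            clusters.append(pts[start:i])
            start = i
    clusters.append(pts[start:])
    return clusters

def features(cluster):
    """S43: per-cluster bounding-box extent, a typical hand-crafted feature."""
    return cluster.max(axis=0) - cluster.min(axis=0)

cloud = np.array([[0.0, 0.0, 0.05],                   # road point, removed in S41
                  [5.0, 1.0, 1.60], [5.2, 1.1, 0.90], # cluster A (person-sized)
                  [9.0, -2.0, 1.20]])                 # cluster B
obstacles = cluster_1d(remove_road(cloud))
```

S44 would then pass each cluster's features to a classifier; that step is omitted here.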
15. The pedestrian and obstacle prompting method according to claim 9, wherein step S5 further includes:
S51: the processing module models the camera and the radar sensor to obtain a model;
S52: the processing module calibrates the model;
S53: the processing module constructs a coordinate transformation between the camera and the radar sensor; and
S54: the processing module fuses the image detection result and the point cloud detection result through a filtering algorithm to obtain the fused data.
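The coordinate transformation of S53 is commonly expressed as a pinhole projection with calibrated intrinsics K and lidar-to-camera extrinsics [R|t]. The numbers below are invented for illustration, and the filtering-based fusion of S54 is omitted:

```python
import numpy as np

# Assumed calibration (the outcome of S51-S52); values are illustrative only.
K = np.array([[800.0,   0.0, 320.0],    # focal lengths and principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # identical orientation in this sketch
t = np.zeros(3)                # coincident origins in this sketch

def lidar_to_pixel(p_lidar):
    """S53: transform a lidar point into the camera frame and project it."""
    p_cam = R @ p_lidar + t
    u, v, w = K @ p_cam        # homogeneous image coordinates
    return u / w, v / w

# A point 10 m ahead (camera z-axis) and 1 m to the side
u, v = lidar_to_pixel(np.array([1.0, 0.0, 10.0]))
```

Projecting lidar detections into the image this way lets the processing module associate point-cloud clusters with image detections before the filtering step fuses them.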
16. The pedestrian and obstacle prompting method according to claim 9, wherein step S8 further includes:
S81: the processing module obtains position information of the pedestrians and obstacles from the three-dimensional coordinates;
S82: the processing module generates a control signal and sound source information and sends them to the external directional player; and
S83: the external directional player adjusts its direction and emits sound waves according to the control signal and the sound source information.
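One plausible form for the control signal of S81–S82 is a pair of steering angles derived from the fused three-dimensional coordinates (an illustrative sketch; the actual signal format is not specified in the claims):

```python
import math

def steering_angles(x, y, z):
    """Convert a fused 3-D target position (vehicle frame, metres) into
    azimuth/elevation commands for an external directional player."""
    azimuth = math.degrees(math.atan2(y, x))                   # left/right of forward axis
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))  # up/down from horizontal
    return round(azimuth, 1), round(elevation, 1)

# Pedestrian 10 m ahead and 10 m to the left, at player height
az, el = steering_angles(10.0, 10.0, 0.0)
```

The player would then aim its sound beam along these angles (S83) while the sound source information selects the warning audio.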
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910807040.XA CN112526520A (en) | 2019-08-29 | 2019-08-29 | Pedestrian and obstacle prompting system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112526520A true CN112526520A (en) | 2021-03-19 |
Family
ID=74974858
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910807040.XA Pending CN112526520A (en) | 2019-08-29 | 2019-08-29 | Pedestrian and obstacle prompting system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112526520A (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140035775A1 (en) * | 2012-08-01 | 2014-02-06 | GM Global Technology Operations LLC | Fusion of obstacle detection using radar and camera |
CN105844225A (en) * | 2016-03-18 | 2016-08-10 | 乐卡汽车智能科技(北京)有限公司 | Method and device for processing image based on vehicle |
CN106114352A (en) * | 2016-06-30 | 2016-11-16 | 智车优行科技(北京)有限公司 | Alarming method for power based on electric vehicle, device and vehicle |
CN106908783A (en) * | 2017-02-23 | 2017-06-30 | 苏州大学 | Obstacle detection method based on multi-sensor information fusion |
CN106945660A (en) * | 2017-02-24 | 2017-07-14 | 宁波吉利汽车研究开发有限公司 | A kind of automated parking system |
US20180172825A1 (en) * | 2016-12-16 | 2018-06-21 | Automotive Research & Testing Center | Environment recognition system using vehicular millimeter wave radar |
CN108195378A (en) * | 2017-12-25 | 2018-06-22 | 北京航天晨信科技有限责任公司 | It is a kind of based on the intelligent vision navigation system for looking around camera |
CN108509918A (en) * | 2018-04-03 | 2018-09-07 | 中国人民解放军国防科技大学 | Target detection and tracking method fusing laser point cloud and image |
CN108860140A (en) * | 2018-05-02 | 2018-11-23 | 奇瑞汽车股份有限公司 | A kind of automatic parking emerging system |
CN108921925A (en) * | 2018-06-27 | 2018-11-30 | 广州视源电子科技股份有限公司 | The semantic point cloud generation method and device merged based on laser radar and vision |
CN108928343A (en) * | 2018-08-13 | 2018-12-04 | 吉利汽车研究院(宁波)有限公司 | A kind of panorama fusion automated parking system and method |
EP3438777A1 (en) * | 2017-08-04 | 2019-02-06 | Bayerische Motoren Werke Aktiengesellschaft | Method, apparatus and computer program for a vehicle |
CN109345510A (en) * | 2018-09-07 | 2019-02-15 | 百度在线网络技术(北京)有限公司 | Object detecting method, device, equipment, storage medium and vehicle |
CN110096059A (en) * | 2019-04-25 | 2019-08-06 | 杭州飞步科技有限公司 | Automatic Pilot method, apparatus, equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113420687A (en) * | 2021-06-29 | 2021-09-21 | 三一专用汽车有限责任公司 | Method and device for acquiring travelable area and vehicle |
CN116453087A (en) * | 2023-03-30 | 2023-07-18 | 无锡物联网创新中心有限公司 | Automatic driving obstacle detection method of data closed loop |
CN116453087B (en) * | 2023-03-30 | 2023-10-20 | 无锡物联网创新中心有限公司 | Automatic driving obstacle detection method of data closed loop |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||