Disclosure of Invention
In order to solve, or at least partially solve, the above technical problems, embodiments of the present application provide a vehicle blind area monitoring and driving control method, a vehicle blind area monitoring and driving control device, and a vehicle-road coordination system.
In a first aspect, an embodiment of the present application provides a vehicle blind area monitoring method, including:
acquiring a road image and a blind area corresponding to a vehicle;
acquiring a first motion trajectory of a first object on the road according to the road image;
determining, according to the first motion trajectory, a second object that conflicts with the travel of the vehicle in the blind area within a preset time period;
and generating blind area monitoring information corresponding to the vehicle according to a second motion trajectory of the second object.
Optionally, the acquiring the road image includes:
acquiring a driving route of the vehicle;
and acquiring a road image corresponding to the driving route.
Optionally, the acquiring of the road image and the blind area corresponding to the vehicle includes:
acquiring a driving route of the vehicle;
determining a blind area of the vehicle according to the driving route;
and acquiring a road image corresponding to the blind area.
Optionally, the obtaining of the blind area corresponding to the vehicle includes:
acquiring a driving route and attribute information of the vehicle;
acquiring road condition information corresponding to the driving route;
and determining a blind area corresponding to at least one driving position of the vehicle on the driving route according to the attribute information and the road condition information.
Optionally, the obtaining of the blind area corresponding to the vehicle includes:
acquiring a driving route of the vehicle and the selected blind area type;
acquiring road condition information corresponding to the driving route;
and determining a blind area corresponding to at least one driving position of the vehicle on the driving route according to the blind area type and the road condition information.
Optionally, the determining, according to the first motion trajectory, a second object that conflicts with the travel of the vehicle in the blind area within a preset time period includes:
acquiring a driving route, a driving speed and vehicle position information of the vehicle;
determining a driving track of the vehicle on the road according to the driving route;
determining an intersection point of the first motion trajectory and the driving track in the blind area;
determining a first time at which the vehicle reaches the intersection point according to the driving speed and the vehicle position information, and determining a second time at which the first object reaches the intersection point according to the first motion trajectory;
and when the difference between the first time and the second time is smaller than or equal to a preset threshold, determining the first object as a second object that conflicts with the travel of the vehicle in the blind area within the preset time period.
Optionally, the method further includes:
and when the vehicle meets the preset reminding condition, sending the blind area monitoring information to a terminal corresponding to the vehicle.
In a second aspect, an embodiment of the present application provides a vehicle travel control method, including:
receiving blind area monitoring information corresponding to a vehicle, wherein the blind area monitoring information is generated according to any one of the above embodiments of the vehicle blind area monitoring method;
when it is determined, according to the driving information of the vehicle and the blind area monitoring information, that the vehicle has a driving conflict in the blind area of the vehicle, generating driving conflict information;
and performing driving control according to the driving conflict information.
In a third aspect, an embodiment of the present application provides a vehicle blind area monitoring device, including:
the first acquisition module is used for acquiring a road image and a blind area corresponding to a vehicle;
the second acquisition module is used for acquiring a first motion trajectory of a first object on the road according to the road image;
the determining module is used for determining, according to the first motion trajectory, a second object that conflicts with the travel of the vehicle in the blind area within a preset time period;
and the generating module is used for generating blind area monitoring information corresponding to the vehicle according to a second motion trajectory of the second object.
In a fourth aspect, an embodiment of the present application provides a travel control apparatus including:
a receiving module, used for receiving blind area monitoring information corresponding to a vehicle, wherein the blind area monitoring information is generated according to any one of the above embodiments of the vehicle blind area monitoring method;
a generating module, used for generating driving conflict information when it is determined, according to the driving information of the vehicle and the blind area monitoring information, that the vehicle has a driving conflict in the blind area of the vehicle;
and a control module, used for performing driving control according to the driving conflict information.
In a fifth aspect, an embodiment of the present application provides a vehicle-road coordination system, including: a camera device and a computing device arranged on a road;
the camera device is used for shooting the road and sending the shot road image to the computing device;
the computing device is used for acquiring a road image and a blind area corresponding to a vehicle; acquiring a first motion trajectory of a first object on the road according to the road image; determining, according to the first motion trajectory, a second object that conflicts with the travel of the vehicle in the blind area within a preset time period; and generating blind area monitoring information corresponding to the vehicle according to a second motion trajectory of the second object.
Optionally, the system further comprises: a vehicle-mounted terminal located on a vehicle;
the computing device is used for sending the blind area monitoring information to the vehicle-mounted terminal;
the vehicle-mounted terminal is used for receiving the blind area monitoring information corresponding to the vehicle; generating driving conflict information when it is determined, according to the driving information of the vehicle and the blind area monitoring information, that the vehicle has a driving conflict in the blind area of the vehicle; and performing driving control according to the driving conflict information.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above method steps when executing the computer program.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the above method steps.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
objects on a road are monitored by shooting road images, the motion trajectories of the objects are determined, and whether an object conflicts with a vehicle within the range of the vehicle's blind areas is judged; if a traffic conflict is possible, corresponding blind area monitoring information is generated. The blind area monitoring information can be sent to a vehicle-mounted terminal for a traffic reminder according to preset conditions, and a vehicle driver or an autonomous vehicle can make a driving decision according to the blind area monitoring information, thereby improving the driving safety of the road and further improving the traffic safety of the whole road.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method of the embodiment of the application is mainly applied to a vehicle-road cooperative system.
Vehicle-road cooperation refers to the connection of all elements in a traffic system with all vehicles and roadside infrastructure in a wireless communication manner to form a complete system for providing dynamic information sharing. The road side part in the vehicle-road cooperation system collects traffic information on roads, uses edge computing equipment to perform recognition processing, and provides more comprehensive and accurate auxiliary information for vehicles in time.
The vehicle-road cooperation system of the embodiment of the present application includes a camera device 10 and a computing device 20 which are provided on a road, wherein the computing device 20 includes: an edge computing device 21 and a central computing device 22.
Fig. 1 is a block diagram of the road side portion of a vehicle-road cooperative system according to an embodiment of the present application. As shown in fig. 1, at least one camera device 10 is arranged on the road for every first preset length, so as to shoot a road section of the first preset length; at least two camera devices 10 are connected to an edge computing device 21; and a first preset number of edge computing devices 21 are connected to a central computing device 22.
The camera device 10 is used for uploading the shot image to the edge computing device connected to it. The edge computing device 21 is used for performing recognition processing on the image and sending the recognition result to the central computing device connected to it. The central computing device 22 is used for performing data processing according to the recognition result. The edge computing device 21 may be an edge computing industrial personal computer, and the central computing device 22 may be an edge computing workstation.
Fig. 2 is a schematic deployment diagram of a road side system based on vehicle-road cooperation according to an embodiment of the present application. As shown in fig. 2, on an expressway, at least one camera device 10 is provided on the road for every first preset length, and shoots a road section of the first preset length. At least two camera devices 10 are connected to an edge computing device 21, and a first preset number of edge computing devices 21 are connected to a central computing device 22.
For example, one camera device 10 may be provided at each end of a 100-meter road segment. The two camera devices 10 shoot the 100-meter road section toward each other, and both are connected to one edge computing device 21. Five edge computing devices 21 are connected to one central computing device 22.
The camera device 10 and the edge computing device 21 are connected to a power-over-Ethernet switch 41, and the central computing device 22 is connected to a core power-over-Ethernet switch 42.
The roadside system further includes a firewall device 50; the edge computing device 21 and the central computing device 22 are connected to a cloud server on the network side through the firewall device 50.
Fig. 3 is a schematic deployment diagram of a roadside system based on vehicle-road cooperation according to another embodiment of the present application. As shown in fig. 3, at least two camera devices 10 are provided on each side of the intersection, and the camera devices 10 shoot toward the intersection. The camera devices 10 on each side are connected to an edge computing device 21, and each edge computing device 21 is connected to one central computing device 22.
For example, two camera devices 10 are provided on each side of the intersection, and the two camera devices on each side are connected to one edge computing device 21. The intersection is thus provided with four edge computing devices 21, all of which are connected to one central computing device 22.
The camera device 10 and the edge computing device 21 are connected to a power-over-Ethernet switch 41, and the central computing device 22 is connected to a core power-over-Ethernet switch 42.
The edge computing device 21 and the central computing device 22 may be connected to a cloud server, and upload an image recognition result or a data processing result to the cloud server, or receive an instruction or data sent by the cloud server.
First, a vehicle blind area monitoring method provided by an embodiment of the present application is described below.
Fig. 4 is a flowchart of a vehicle blind area monitoring method according to an embodiment of the present application. As shown in fig. 4, the method comprises the steps of:
step S11, acquiring road images and blind areas corresponding to vehicles;
step S12, acquiring a first motion trajectory of a first object on the road according to the road image;
step S13, determining, according to the first motion trajectory, a second object that conflicts with the travel of the vehicle in the blind area within a preset time period;
and step S14, generating blind area monitoring information corresponding to the vehicle according to a second motion trajectory of the second object.
In this embodiment, road images are captured, objects on the road are monitored, the motion trajectories of the objects are determined, and whether an object conflicts with the vehicle within the range of the vehicle's blind areas is judged; if a traffic conflict is possible, corresponding blind area monitoring information is generated. The blind area monitoring information can be sent to a vehicle-mounted terminal for a traffic reminder according to preset conditions, and a vehicle driver or an autonomous vehicle can make a driving decision according to the blind area monitoring information, thereby improving the driving safety of the road and further improving the traffic safety of the whole road.
Optionally, the first object comprises a dynamic object and/or a static object. The dynamic object includes: motor vehicles, bicycles, pedestrians, etc., static objects including: traffic lights, road barriers, obstacles to road maintenance, vehicles parked on the road in the event of a traffic accident, and the like. For dynamic objects, the object information may include: type of object (e.g., car, truck, van, bicycle, electric bike, pedestrian, etc.), size, object location, direction of movement, speed of movement, etc. For static objects, the object information may include: type of object (traffic lights, road barriers, roadblocks, vehicles, etc.), size, location, etc.
For a static object, if the static object is located within the range of the vehicle blind area, blind area monitoring information corresponding to the vehicle is generated according to object information such as the type, position, and size of the static object. For a dynamic object, the motion trajectory of the dynamic object can be predicted, and it is judged whether the dynamic object will enter the vehicle blind area and whether it will come into traffic conflict with the vehicle after entering the blind area; for a moving object that may come into traffic conflict with the vehicle in the vehicle blind area, blind area monitoring information corresponding to the vehicle is generated according to the motion trajectory of the moving object. Therefore, by monitoring both dynamic and static objects, the traffic condition in the vehicle blind area can be monitored more comprehensively and accurately, so that a vehicle driver or an autonomous vehicle can make an accurate driving decision according to the blind area monitoring information, avoiding traffic conflicts, improving the driving safety of the road, and further improving the safety of the whole road traffic.
In addition, in this embodiment, not only dynamic and/or static objects already in the vehicle blind area are monitored, but also all objects that may enter the vehicle blind area. For example, a pedestrian may not currently be in the vehicle blind area, but prediction of the pedestrian's motion trajectory may show that the pedestrian could enter the vehicle blind area, with some probability of a traffic conflict. If the pedestrian's motion trajectory approaches or passes through the vehicle blind area, then in an emergency the pedestrian might enter the blind area and collide with the vehicle; such a pedestrian is therefore also monitored, and the pedestrian's motion trajectory information is added to the reminder information for the vehicle. This further expands the road monitoring range, provides reminders of traffic conflicts that may occur in the blind area, avoids sudden traffic conflicts, improves the driving safety of the road, and further improves the safety of the whole road traffic.
In an alternative embodiment, in order to reduce the amount of computation and improve the monitoring efficiency, only the road image on the vehicle travel route may be acquired for blind area monitoring. In the above step S11, the acquiring the road image includes: acquiring a driving route of a vehicle; and acquiring a road image corresponding to the driving route.
For example, based on the traveling route information of the vehicle, it is determined that the traveling locus of the vehicle is traveling from east to west on the road and turning right at the intersection a. Therefore, it is possible to acquire only the road image of the intersection a, and the road images of the east and north sides of the intersection after the vehicle turns right.
In another alternative embodiment, in order to further reduce the amount of calculation and improve the monitoring efficiency, a blind area is determined by the vehicle travel route, and only the road image on the vehicle travel route that is associated with the blind area is acquired for blind area monitoring. The step S11 includes: acquiring a driving route of a vehicle; determining a blind area of the vehicle according to the driving route; and acquiring a road image corresponding to the blind area.
For example, based on the traveling route information of the vehicle, it is determined that the traveling locus of the vehicle is traveling from east to west on the road and turning right at the intersection a. Therefore, only the road image on the north side of the intersection where the vehicle turns right can be acquired.
In an alternative embodiment, the vehicle corresponding blind zone may be obtained by at least one of the following ways.
In a first way, the blind area corresponding to the vehicle is determined according to the attribute information of the vehicle.
In the step S11, the obtaining of the blind area corresponding to the vehicle includes the following steps:
step A1, acquiring the driving route and attribute information of the vehicle;
step A2, acquiring road condition information corresponding to a driving route;
and A3, determining a blind area corresponding to at least one driving position of the vehicle on the driving route according to the attribute information and the road condition information.
Wherein the attribute information of the vehicle itself includes: size, effective braking distance, blind zone range, visual range, etc.
For an autonomous vehicle, detection devices such as radars are installed on the vehicle, but the number and positions of the detection devices are limited, so the surroundings of the vehicle cannot be comprehensively monitored. For example, if the radar detection distance ranges from 20 cm to 100 m from the vehicle, the vehicle blind areas are actually within 20 cm of the vehicle and beyond 100 m. If the radar detection angle ranges from -25 degrees to 15 degrees, the vehicle blind areas range from -180 degrees to -25 degrees and from 15 degrees to 180 degrees.
In addition, the road condition information includes: fixed objects near a certain driving position on the driving route of the vehicle that block the line of sight of the driver or the detection devices, such as buildings, trees, green belts, and street lamps. For example, if there is a tree on the right side of the vehicle, the tree may block the detection device on the vehicle, so the detection device cannot detect an object behind the tree; the area behind the tree is therefore also a blind area of the vehicle.
In this embodiment, the blind area corresponding to the driving position of the vehicle can be determined by combining the road condition information corresponding to the driving route of the vehicle and the attribute information of the vehicle.
In a second way, the blind area corresponding to the vehicle is determined according to the blind area type selected by the user.
In the above step S11, the obtaining of the blind area corresponding to the vehicle includes the following steps:
step B1, acquiring the driving route of the vehicle and the selected blind area type;
step B2, acquiring road condition information corresponding to the driving route;
and step B3, determining a blind area corresponding to at least one driving position of the vehicle on the driving route according to the type of the blind area and the road condition information.
The blind area monitoring method and the blind area monitoring system can provide an interface about blind area monitoring for a user, provide a blind area type option on the interface, and enable the user to select the type of the blind area to be monitored according to needs. Fig. 5 is a schematic diagram of a blind area option interface provided in the embodiment of the present application. As shown in fig. 5, the blind area types may be divided into a left-turn blind area, a right-turn blind area, a reverse blind area, a parking blind area, a starting blind area, and the like according to the driving scene, and the user determines the blind area to be monitored and reminded by selecting the option. Each blind area type may correspond to a plurality of blind areas. Optionally, all vehicle blind areas, such as a front blind area, a rear-view mirror blind area, an AB column blind area, a short-distance blind area, a turning blind area, a long-distance blind area, and the like, may also be provided on the blind area option interface, and the user may select the blind area according to his own needs.
In an alternative embodiment, the road image includes: at least two road images photographed at a preset time interval, and/or at least two road images extracted at a preset time interval from a photographed road video. The above step S12 includes: processing and identifying the road image to obtain the first object.
Alternatively, a moving object in the image may be identified by the three-frame difference method. First, three consecutive images are acquired, denoted image1, image2, and image3. A frame difference operation is performed on image1 and image2 to obtain a difference d1, and on image2 and image3 to obtain a difference d2. After smoothing and thresholding, d1 and d2 are converted into binary images. A bitwise AND operation is then performed on the two binary images to obtain the identification result.
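The three-frame difference procedure described above can be sketched as follows. This is an illustrative NumPy implementation only: the smoothing step is omitted for brevity, and the threshold value of 25 is an assumed parameter, not one specified in the application.

```python
import numpy as np

def three_frame_difference(image1, image2, image3, threshold=25):
    # Frames are grayscale uint8 arrays of identical shape.
    # d1, d2: absolute frame differences (cast to int16 to avoid uint8 wrap-around).
    d1 = np.abs(image2.astype(np.int16) - image1.astype(np.int16))
    d2 = np.abs(image3.astype(np.int16) - image2.astype(np.int16))
    # Thresholding converts each difference into a binary image
    # (the smoothing step of the text is omitted here).
    b1 = (d1 >= threshold).astype(np.uint8)
    b2 = (d2 >= threshold).astype(np.uint8)
    # Bitwise AND keeps only pixels that changed in both differences,
    # i.e. the moving object as seen in the middle frame.
    return b1 & b2
```

A pixel that appears only in the middle frame survives both differences, so the AND result marks the moving object's position in image2.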
After the first object is identified, prediction of the motion trajectory of the first object may be implemented by using the Kalman filter of the computer vision library OpenCV. The Kalman filter predicts the motion trajectory of an object through a recursive calculation process that continuously predicts the state of the object and updates the state prediction based on measurement results.
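As an illustration of this recursive predict-update process, the following is a minimal constant-velocity Kalman filter written directly in NumPy. It mirrors the model one would typically configure in OpenCV's cv2.KalmanFilter(4, 2); the time step and noise variances are assumed values, not parameters from the application.

```python
import numpy as np

class ConstantVelocityKalman:
    # State vector: (x, y, vx, vy); measurement: (x, y).
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1e-1):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)  # transition model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # position is observed
        self.Q = process_var * np.eye(4)  # process noise (assumed)
        self.R = meas_var * np.eye(2)     # measurement noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # Predict the next state from the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update the prediction with measurement z = (x, y).
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # current position estimate
```

Fed a sequence of measured coordinate points, the filter's state estimate gradually converges toward the true trajectory, which is the behaviour the following paragraph discusses.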
However, directly feeding the actual coordinate points into the Kalman filter does not immediately yield a usable predicted point. In practice, the output initially lags behind the current coordinate point (the coordinate point at the moment the camera device recognizes the target), then gradually catches up to and moves ahead of the current point, falls behind again, and so on. Thus, when predicting a dynamic object, the predicted point may trail the current point for an initial period of time, and by the time the predicted point catches up with the actual point, the target may already have changed its direction of motion or moved out of view. Therefore, using the Kalman filter directly gives a poor prediction of the object's motion.
In the present embodiment, the following technical means are adopted to overcome the above problems.
Fig. 6 is a flowchart of a vehicle blind area monitoring method according to another embodiment of the present application. As shown in fig. 6, the first object information includes: position information, direction of motion, and speed of motion; the position information comprises actual coordinates of the first object. The method further comprises the following steps:
step S21, determining first predicted point coordinates according to a preset prediction algorithm and the actual coordinates, the motion direction, and the motion speed;
step S22, multiplying the difference between the actual coordinates and the first predicted point coordinates by a preset coefficient to obtain a product;
step S23, adding the product to the actual coordinates to obtain second predicted point coordinates;
and step S24, obtaining the first motion trajectory of the first object according to the second predicted point coordinates.
In this embodiment, the difference between the predicted point and the actual point is multiplied by a preset coefficient and then added to the coordinates of the actual point, thereby realizing motion prediction. In actual prediction, the predicted point may be ahead of the actual point in the initial stage, but the distance between the predicted point and the actual point gradually shortens in the subsequent stages until the two coincide, so the prediction is more accurate for dynamic objects.
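The correction of steps S21 to S23 can be sketched as follows; the coefficient k = 0.5 is an assumed value standing in for the preset coefficient.

```python
def corrected_prediction(actual, first_pred, k=0.5):
    # Steps S21-S23: second predicted point =
    #   actual point + k * (actual point - first predicted point).
    # When the first prediction lags behind the actual point, this
    # pushes the second prediction ahead of the actual point.
    ax, ay = actual
    px, py = first_pred
    return (ax + k * (ax - px), ay + k * (ay - py))
```

For an actual point (10, 10) and a lagging first prediction (8, 8), the second predicted point is (11, 11), ahead of the actual point, matching the behaviour described above.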
In this embodiment, in step S12, for the identification, detection, and trajectory prediction of objects, reference may be made to an evaluation data set for computer vision algorithms in autonomous driving scenes, such as the KITTI data set. This data set is used to evaluate the performance of computer vision technologies such as stereo matching, optical flow, visual odometry, 3D object detection, and 3D tracking in a vehicle-mounted environment.
Fig. 7 is a flowchart of a vehicle blind spot monitoring method according to another embodiment of the present application. As shown in fig. 7, step S13 includes the steps of:
step S31, acquiring the driving route, the driving speed and the vehicle position information of the vehicle;
step S32, determining the driving track of the vehicle on the road according to the driving route;
step S33, determining an intersection point of the first motion trajectory and the driving track in the blind area;
step S34, determining a first time at which the vehicle reaches the intersection point according to the driving speed and the vehicle position information, and determining a second time at which the first object reaches the intersection point according to the first motion trajectory;
and step S35, when the difference between the first time and the second time is smaller than or equal to a preset threshold, determining the first object as a second object that conflicts with the travel of the vehicle in the blind area within the preset time period.
Alternatively, the preset time period in step S13 is a time period in the future from the current time, and the preset time period may be determined according to the information of the position, the speed, the direction, and the like of the first object.
For example, the monitoring area of the camera device is an intersection; the first object is a pedestrian currently located on the north side of the intersection, crossing the road from east to west at a walking speed of 1 m/s, and the road width is 35 m. The pedestrian may enter the vehicle blind area while crossing the road and come into traffic conflict with the vehicle. Since the time for the pedestrian to cross the road is 35 seconds, the preset time period can be determined as within 35 seconds from the current time. Alternatively, considering that an emergency may occur while the pedestrian is crossing and prolong the crossing time, the preset time period may be extended to within 40 seconds from the current time.
Optionally, the preset time period in step S13 may also be determined according to signal lamp timing information. For example, if the pedestrian may enter the vehicle blind area while crossing the road and come into traffic conflict with the vehicle, and the green time of the sidewalk signal lamp is 40 seconds, the preset time period can be set to within 40 seconds from the current time; or, if the red time of the north-south traffic lane signal lamps at the intersection is 60 seconds, the preset time period may be extended to within 60 seconds from the current time. In practice, the preset time period may be determined by any one or a combination of the above manners, or by other manners, which are not described again here.
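The crossing-time calculation in the pedestrian example above can be sketched as follows; the margin parameter is an assumed way of modelling the extension for emergencies.

```python
def preset_time_window(road_width_m, walking_speed_mps, margin_s=0.0):
    # Preset time period = time to cross the road (width / speed)
    # plus an optional safety margin for emergencies (assumed parameter).
    return road_width_m / walking_speed_mps + margin_s
```

With the values from the example, a 35 m road at 1 m/s gives a 35-second window, and a 5-second margin extends it to 40 seconds.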
In this embodiment, the times at which the vehicle and the first object respectively reach the intersection point in the blind area may be calculated according to their motion trajectories. If the time difference is smaller than or equal to a preset threshold, for example 10 seconds, the vehicle and the first object may come into traffic conflict, and the driver of the vehicle needs to be reminded, so that the vehicle driver or the autonomous vehicle can make an accurate driving decision according to the traffic conflict situation, avoiding the traffic conflict, improving the driving safety of the road, and further improving the safety of the whole road traffic.
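The arrival-time comparison of steps S31 to S35, with the 10-second threshold mentioned above, can be sketched as follows. Straight-line distances to the intersection point are an assumed simplification of the trajectory-based arrival times.

```python
import math

def is_blind_zone_conflict(vehicle_pos, vehicle_speed,
                           object_pos, object_speed,
                           intersection, threshold_s=10.0):
    # First time: when the vehicle reaches the trajectory intersection point.
    t_vehicle = math.dist(vehicle_pos, intersection) / vehicle_speed
    # Second time: when the first object reaches the intersection point.
    t_object = math.dist(object_pos, intersection) / object_speed
    # A conflict exists if the arrival times differ by no more
    # than the preset threshold (step S35).
    return abs(t_vehicle - t_object) <= threshold_s
```

For instance, a vehicle 100 m from the intersection point at 10 m/s arrives in 10 seconds, while a pedestrian 15 m away at 1 m/s arrives in 15 seconds; the 5-second difference is within the threshold, so the pedestrian is flagged as a second object.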
Optionally, the probability of a conflict between each second object and the vehicle may be calculated, the second objects with higher probability may be selected, and the blind area monitoring information may be generated according to their second motion trajectories.
Optionally, the method further includes: when the vehicle meets a preset reminding condition, sending the blind area monitoring information to a terminal corresponding to the vehicle.
For example, it may be set that when a traffic conflict exists in the blind area, the blind area monitoring information is transmitted to the terminal corresponding to the vehicle. Alternatively, the blind area monitoring information may be fed back to a terminal that has requested it, or the terminal corresponding to the vehicle may subscribe to a blind area monitoring service, so that the corresponding blind area monitoring information is sent in real time according to the position or route of the vehicle.
In this embodiment, the method further includes: when it is determined according to the first motion track that a second object conflicting with the vehicle exists in the blind area, controlling the state of a signal lamp according to the conflict.
For example, when it is found that a driving conflict may occur between the vehicle and a pedestrian crossing the road as the vehicle turns right at the intersection, the right-turn signal lamp may be controlled to be red when the vehicle reaches the intersection. The time for which the right-turn signal lamp remains red may be determined according to the time the pedestrian needs to cross the road; when that time has elapsed, the right-turn signal lamp may be controlled to turn green so that the vehicle can turn right. Traffic conflicts are thereby avoided and road traffic safety is improved.
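By way of an illustrative sketch only (the 2-second safety buffer and all names are assumptions), the red-hold time of the right-turn signal lamp can be derived from the pedestrian's remaining crossing time:

```python
def right_turn_red_hold_s(crossed_m, road_width_m, walking_speed_mps,
                          buffer_s=2.0):
    """Seconds the right-turn signal lamp stays red: the time the
    pedestrian still needs to finish crossing, plus an assumed
    safety buffer."""
    remaining_m = max(road_width_m - crossed_m, 0.0)
    return remaining_m / walking_speed_mps + buffer_s

# A pedestrian who has crossed 20 m of a 35 m road at 1 m/s still needs
# 15 s; with the buffer, the lamp stays red for 17 s before turning green.
print(right_turn_red_hold_s(20, 35, 1.0))  # 17.0
```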
The following describes a vehicle travel control method provided by an embodiment of the present application.
Fig. 8 is a flowchart of a vehicle driving control method according to an embodiment of the present application. As shown in fig. 8, the method is applied to a vehicle-mounted terminal, and comprises the following steps:
step S41, receiving blind area monitoring information corresponding to the vehicle, wherein the blind area monitoring information is generated according to the above vehicle blind area monitoring method embodiment;
step S42, generating driving conflict information when it is determined, according to the driving information of the vehicle and the blind area monitoring information, that the vehicle has a driving conflict in its blind area;
step S43, performing driving control according to the driving conflict information.
The blind area monitoring information may include the second motion tracks of all second objects having a traffic conflict with the vehicle, or the second motion tracks of the second objects having a higher probability of a traffic conflict with the vehicle. When the vehicle-mounted terminal determines, according to the blind area monitoring information and the driving information of the vehicle, such as the driving route, driving speed, and position information, that the vehicle has a driving conflict on the road, it generates the driving conflict information.
The driving conflict information may include: that the vehicle, at its current driving speed, position, and driving route, may have a traffic conflict with another vehicle, a pedestrian, or the like at a certain intersection, together with the speed, driving direction, and the like of the conflicting vehicle or pedestrian.
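One possible (purely illustrative) structure for the driving conflict information described above; every field name here is an assumption, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class DrivingConflictInfo:
    """Illustrative container for the driving conflict information."""
    vehicle_speed_mps: float      # current driving speed of the vehicle
    vehicle_position: tuple       # current position, e.g. (x, y)
    conflict_location: str        # intersection where the conflict may occur
    object_type: str              # "vehicle", "pedestrian", ...
    object_speed_mps: float       # speed of the conflicting road user
    object_heading_deg: float     # driving/walking direction

info = DrivingConflictInfo(
    vehicle_speed_mps=10.0,
    vehicle_position=(120.0, 40.0),
    conflict_location="intersection-north",
    object_type="pedestrian",
    object_speed_mps=1.0,
    object_heading_deg=270.0,
)
print(info.object_type)  # pedestrian
```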
Based on the driving conflict information, corresponding driving control operations can be executed. For example, for a vehicle driven by a driver, a reminder may be given according to the driving conflict information, such as generating a new driving route and recommending that the driver change routes, or recommending that the driver change the driving speed; for an autonomous vehicle, the driving speed may be adjusted automatically or the driving route changed according to the driving conflict information.
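A minimal sketch of this dispatch between a human-driven and an autonomous vehicle (the halved target speed and all names are assumptions for illustration only):

```python
def driving_control(conflict_info, autonomous):
    """Choose a driving control action from the driving conflict information.

    For a human-driven vehicle the terminal issues a reminder (e.g. to
    change route or reduce speed); for an autonomous vehicle the driving
    speed is adjusted directly.
    """
    if autonomous:
        # Assumed policy: slow down so the conflicting road user can
        # clear the blind area before the vehicle arrives.
        return {"action": "adjust_speed",
                "target_speed_mps":
                    conflict_info.get("vehicle_speed_mps", 0.0) * 0.5}
    return {"action": "remind_driver",
            "suggestion": "change route or reduce speed"}

print(driving_control({"vehicle_speed_mps": 10.0}, autonomous=True))
print(driving_control({"vehicle_speed_mps": 10.0}, autonomous=False))
```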
In this embodiment, the vehicle-mounted terminal performs travel control according to the blind area monitoring information, so that the safety of road travel is improved, and the safety of the whole road traffic is further improved.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application.
Fig. 9 is a block diagram of a vehicle blind area monitoring device according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 9, the vehicle blind area monitoring device includes:
the first obtaining module 51 is configured to obtain a road image and a blind area corresponding to a vehicle;
the second obtaining module 52 is configured to obtain a first motion trajectory of a first object on the road according to the road image;
a determining module 53, configured to determine, according to the first motion trajectory, a second object that conflicts with the vehicle traveling in the blind area within a preset time period;
and the generating module 54 is configured to generate blind area monitoring information corresponding to the vehicle according to the second motion track of the second object.
Fig. 10 is a block diagram of a driving control device provided in an embodiment of the present application, which may be implemented as part or all of an electronic device by software, hardware, or a combination of the two. As shown in fig. 10, the travel control device includes:
the receiving module 61 is configured to receive blind area monitoring information corresponding to a vehicle, where the blind area monitoring information is generated according to the vehicle blind area monitoring method embodiment;
the generating module 62 is configured to generate travel conflict information when it is determined that a vehicle has a travel conflict in a blind area of the vehicle according to the travel information of the vehicle and the blind area monitoring information;
and the control module 63 is used for carrying out running control according to the running conflict information.
A vehicle-road coordination system provided in the embodiment of the present application is specifically described below.
Fig. 11 is a block diagram of a vehicle-road coordination system according to an embodiment of the present application. As shown in fig. 11, the system includes: an image capture device 10 and a computing device 20. The image capture device 10 is configured to photograph roads and send the captured road images to the computing device 20. The computing device 20 is configured to acquire a road image and a blind area corresponding to the vehicle; acquire a first motion track of a first object on the road according to the road image; determine, according to the first motion track, a second object that conflicts with the vehicle traveling in the blind area within a preset time period; and generate blind area monitoring information corresponding to the vehicle according to the second motion track of the second object.
Optionally, the system further includes a vehicle-mounted terminal 30 located on the vehicle. The computing device 20 is further configured to send the blind area monitoring information to the vehicle-mounted terminal 30. The vehicle-mounted terminal 30 is configured to receive the blind area monitoring information corresponding to the vehicle; generate driving conflict information when it is determined, according to the driving information of the vehicle and the blind area monitoring information, that the vehicle has a driving conflict in its blind area; and perform driving control according to the driving conflict information.
As shown in fig. 11, in the present embodiment, the computing device 20 may include: an edge computing device 21 deployed on the road.
The edge computing device 21 is configured to acquire a road image and a blind area corresponding to the vehicle; acquire a first motion track of a first object on the road according to the road image; determine, according to the first motion track, a second object that conflicts with the vehicle traveling in the blind area; and generate blind area monitoring information corresponding to the vehicle according to the second motion track of the second object.
As shown in fig. 11, the computing device 20 may further include a cloud server 23 deployed on the network side. The cloud server 23 is configured to acquire a driving route and attribute information of the vehicle; acquire road condition information corresponding to the driving route; and determine a blind area corresponding to at least one driving position of the vehicle on the driving route according to the attribute information and the road condition information. Alternatively, the cloud server 23 is configured to acquire a driving route of the vehicle and a selected blind area type; acquire road condition information corresponding to the driving route; and determine a blind area corresponding to at least one driving position of the vehicle on the driving route according to the blind area type and the road condition information. The cloud server 23 transmits the blind area corresponding to the vehicle to the edge computing device 21.
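A highly simplified sketch of the cloud-side blind area determination (the input shapes, the "obstructed" criterion, and all names are assumptions; the actual determination according to attribute and road condition information is not specified at this level of detail):

```python
def determine_blind_areas(route_positions, road_conditions):
    """Record a blind area at each driving position on the route where
    the road condition information marks the view as obstructed
    (e.g. by a sharp curve or a large roadside structure).

    route_positions: list of position identifiers along the driving route.
    road_conditions: dict mapping a position to condition flags.
    """
    return [pos for pos in route_positions
            if road_conditions.get(pos, {}).get("obstructed", False)]

route = ["p1", "p2", "p3"]
conditions = {"p1": {"obstructed": False},
              "p2": {"obstructed": True},   # e.g. sharp curve
              "p3": {"obstructed": True}}   # e.g. roadside building
print(determine_blind_areas(route, conditions))  # ['p2', 'p3']
```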
An embodiment of the present application further provides an electronic device. As shown in fig. 12, the electronic device may include a processor 1501, a communication interface 1502, a memory 1503, and a communication bus 1504, where the processor 1501, the communication interface 1502, and the memory 1503 communicate with one another through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the method embodiments described above.
The communication bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example, at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.