CN111391863B - Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium - Google Patents

Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium

Info

Publication number
CN111391863B
CN111391863B (application CN201910002318.6A)
Authority
CN
China
Prior art keywords
vehicle
information
blind area
target
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910002318.6A
Other languages
Chinese (zh)
Other versions
CN111391863A (en)
Inventor
张长隆
瞿仕波
曹健
李新权
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Intelligent Driving Research Institute Co Ltd
Original Assignee
Changsha Intelligent Driving Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Intelligent Driving Research Institute Co Ltd
Priority claimed from CN201910002318.6A
Publication of CN111391863A
Application granted
Publication of CN111391863B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08: Interaction between the driver and the control system
    • B60W 50/14: Means for informing the driver, warning the driver or prompting a driver intervention

Abstract

The embodiment of the invention discloses a blind area detection method, a vehicle-mounted unit, a road side unit, a vehicle and a storage medium. The method comprises: acquiring basic information corresponding to vehicles within a set range of the vehicle's current position, and, when a threatening vehicle exists in the corresponding blind area, determining a target vehicle for detecting the blind area according to the basic information, wherein the basic information at least comprises identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring warning information sent by the road side unit when a threat risk exists in a blind area corresponding to the vehicle's current position, the warning information carrying identification information of a target vehicle for detecting the blind area; and starting a monitoring data receiving thread according to the identification information of the target vehicle, and receiving the current monitoring data of the target vehicle through that thread. By receiving the target vehicle's current monitoring data whenever a threat risk exists in the vehicle's blind area, the blind area can be effectively detected and risks can be predicted in advance.

Description

Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium
Technical Field
The invention relates to the field of vehicle safety, in particular to a blind area detection method, a vehicle-mounted unit, a road side unit, a vehicle and a storage medium.
Background
Among traffic accidents, those caused by "ghost probes" account for a large proportion. A "ghost probe" occurs when a vehicle travelling normally on a road has its view blocked by obstacles or other vehicles on the left or right; because the obstruction sits in between, the resulting blind area cannot be captured by the on-board lidar or camera, which distinguishes it from a blind spot in the traditional sense. When a person or vehicle suddenly crosses the road from such an area, a driver who has not already braked has virtually no chance of avoiding it, and a collision results. How to sense threat risks in blind areas created by other vehicles, and thereby improve driving safety, is therefore an urgent technical problem to be solved.
Disclosure of Invention
In view of this, embodiments of the present invention provide a blind area detection method, an on-board unit, a roadside unit, a vehicle, and a storage medium, and aim to effectively sense a threat risk of a vehicle blind area and improve driving safety.
The technical scheme of the embodiment of the invention is realized as follows:
in a first aspect of the embodiments of the present invention, a blind area detection method is provided, which is applied to a vehicle-mounted unit, and includes:
acquiring basic information corresponding to vehicles within a set range of the vehicle's current position, and determining a target vehicle for detecting a blind area according to the basic information when a threatening vehicle exists in the corresponding blind area, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring warning information sent by a road side unit when a threat risk exists in a blind area corresponding to the current position of the vehicle, wherein the warning information carries identification information of a target vehicle for detecting the blind area;
and starting a monitoring data receiving thread according to the identification information of the target vehicle, and receiving the current monitoring data of the target vehicle through the monitoring data receiving thread.
In a second aspect of the embodiments of the present invention, there is also provided a blind area detection method applied to a roadside unit, where the method includes:
acquiring basic information corresponding to a vehicle in a setting range of the position of a road side unit, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle;
acquiring position information of a target object detected by an environment sensing device;
determining a target vehicle for detecting a blind area when it is determined that the target object is located within the blind area of the corresponding vehicle based on the basic information and the position information of the target object;
and generating alarm information according to the identification information of the target vehicle, and sending the alarm information to the corresponding vehicle.
In a third aspect of an embodiment of the present invention, there is provided an onboard unit including: a first transceiver module and a first processing module,
the first transceiver module is configured to acquire basic information corresponding to a vehicle within a setting range of a current position of the vehicle, and determine, when a threatening vehicle exists in a corresponding blind area, a target vehicle for detecting the blind area according to the basic information, where the basic information at least includes: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring warning information sent by a road side unit when threat risks exist in a blind area corresponding to the current position of the vehicle, wherein the warning information carries identification information of a target vehicle for detecting the blind area;
the first processing module is used for starting a monitoring data receiving thread according to the identification information of the target vehicle and receiving the current monitoring data of the target vehicle through the monitoring data receiving thread.
In a fourth aspect of an embodiment of the present invention, there is provided a road side unit, including:
the second transceiver module is used for acquiring basic information corresponding to vehicles in a setting range of the position of the road side unit, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle;
the acquisition module is used for acquiring the position information of the target object detected by the environment sensing device;
a second processing module, configured to determine, based on the basic information and the position information of the target object, that the target object is located within a blind area of a corresponding vehicle; determining a target vehicle for detecting the blind area, and generating alarm information based on the identification information of the target vehicle;
the second transceiver module is also used for sending the warning information to the corresponding vehicle.
In a fifth aspect of the embodiments of the present invention, a vehicle is provided, where the vehicle includes a vehicle-mounted unit, a transceiver, a detection device, and an image acquisition device, where the detection device and the image acquisition device are both connected to the vehicle-mounted unit in a communication manner; the detection device is used for detecting the position information and the yaw angle of the vehicle; the transceiver is used for transmitting basic information corresponding to the current vehicle to other vehicles within a set range; the image acquisition device is used for acquiring monitoring data corresponding to the vehicle; the on-board unit is the on-board unit described in the foregoing embodiment.
In a sixth aspect of the embodiments of the present invention, a computer storage medium is provided, which stores an executable program, and when the executable program is executed by a processor, the blind area detection method according to any of the foregoing embodiments is implemented.
According to the technical solution provided by the embodiment of the invention, when it is determined from the basic information corresponding to vehicles within a set range of the vehicle's current position that a threatening vehicle exists in the corresponding blind area, a target vehicle for detecting the blind area is determined according to the basic information; and/or the identification information of the target vehicle carried in the warning information sent by the road side unit is acquired. A monitoring data receiving thread is then started according to the identification information of the target vehicle, and the current monitoring data of the target vehicle is received through that thread. Compared with the prior art, the embodiment of the invention can effectively detect the blind area by receiving the target vehicle's current monitoring data when a threat risk exists in the vehicle's blind area, so that risks are predicted in advance and driving safety is improved. Moreover, because the monitoring data receiving thread is opened in a targeted manner according to the identification information of the target vehicle, the impact of redundant data transmission on the transmission bandwidth is reduced and real-time reception of the monitoring data is effectively ensured, which further improves driving safety.
Drawings
FIG. 1 is a schematic view of a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a roadside unit according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a blind spot detection method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a vehicle position in an application scenario of the present invention;
FIG. 5 is a schematic diagram illustrating a blind spot detection method in an application scenario according to the present invention;
FIG. 6 is a schematic flowchart of a blind spot detection method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a vehicle position in an application scenario of the present invention;
FIG. 8 is a schematic diagram illustrating a blind spot detection method in an application scenario according to the present invention;
FIG. 9 is a schematic diagram of an on-board unit according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a roadside unit according to an embodiment of the present invention;
FIG. 11 is a schematic view of a vehicle according to an embodiment of the present invention;
FIG. 12 is a flow chart illustrating a process of the on board unit according to an embodiment of the present invention;
FIG. 13 is a schematic flow chart of the monitoring data received by the OBU in accordance with one embodiment of the present invention;
fig. 14 is a schematic flow chart illustrating a process of sending monitoring data by the on-board unit according to an embodiment of the invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and specific embodiments. It should be understood that the examples provided herein are merely illustrative of the present invention and are not intended to limit the present invention. In addition, the following embodiments are provided as partial embodiments for implementing the present invention, not all embodiments for implementing the present invention, and the technical solutions described in the embodiments of the present invention may be implemented in any combination without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
For the blind areas of existing vehicles that are created by an intervening barrier or by other vehicles, the blind area detection method provided by the embodiment of the invention determines, when a threat risk exists in the vehicle's blind area, a target vehicle capable of detecting that blind area, and monitors the threat risk in the blind area by acquiring the target vehicle's current monitoring data. Early warning is thereby realized, traffic accidents are reduced, and driving safety is improved.
Before introducing the blind area detection method according to the embodiment of the present invention, a vehicle and a Road Side Unit (RSU) according to the embodiment of the present invention are described as follows:
in the embodiment of the present invention, referring to fig. 1, each vehicle is provided with a vehicle-mounted unit 11 and a first transceiver 12, wherein the vehicle-mounted unit 11 transmits and receives data through the first transceiver 12, the first transceiver 12 may adopt a V2X (internet of vehicles) transceiver module, and the internet of vehicles data is transmitted and received through the V2X transceiver module, and the internet of vehicles data includes basic information of the vehicle and monitoring data of the vehicle. In this embodiment, the monitoring data of the vehicle may be at least one of video stream data, image data, radar detection data, and infrared detection data, and accordingly, at least one of the image acquisition device 13, the detection radar, the infrared sensor, and other sensing devices is configured on the vehicle. Optionally, an image capturing device 13 (e.g., a camera) is disposed on the vehicle, and the image capturing device 13 transmits captured video stream data to the on-board unit 11. Each vehicle is further provided with a detection device (not shown in the figure) for detecting the position information and the yaw angle of the vehicle, in an embodiment, the detection device comprises a GPS (global positioning system) signal receiver for real-time positioning and a gyroscope for detecting the yaw angle of the vehicle, the detection device sends the position information and the yaw angle of the vehicle to the vehicle-mounted unit 11, and the vehicle-mounted unit 11 generates basic information according to the position information and the yaw angle and broadcasts the basic information to the vehicles within a set range through the V2X transceiving module. Optionally, the basic information of the vehicle includes: the vehicle-mounted unit 11 generates a basic information packet (BSM packet) according to basic information of the vehicle, and broadcasts the BSM packet to vehicles within a set range through the V2X transceiving module. The V2X transceiver module may broadcast to the surrounding environment using DSRC (dedicated short range communication technology) broadcasts to other vehicles and roadside units of a set range. The vehicle-mounted unit acquires monitoring data of the vehicle and broadcasts and sends the monitoring data to the surrounding environment through the V2X transceiving module, so that the monitoring data are broadcasted to other vehicles in a set range.
In the embodiment of the present invention, referring to fig. 2, each road side unit includes a data processing device 21, a second transceiver 22 and an environment sensing device 23. The road side unit receives and transmits data through the second transceiver 22; the second transceiver 22 may adopt a V2X transceiver module, through which Internet-of-Vehicles data, including the basic information of each vehicle, is received and transmitted. The environment sensing device 23 may be a camera module with a human-body recognition algorithm or similar functions: it continuously processes the captured streaming media data and, when a pedestrian appears in a preset area (lane), returns the pedestrian's coordinates to the data processing device 21 over a data line (such as a 485 bus). The data processing device 21 is mainly responsible for calculating potential target objects for the corresponding vehicle, screening target vehicles capable of providing video data of the occluded area, and sending warning information carrying the identification information of the target vehicle to the corresponding vehicle.
Referring to fig. 3, an embodiment of the present invention provides a blind spot detection method, where the blind spot detection method is applied to a vehicle-mounted unit, and the method includes:
step 301, obtaining basic information corresponding to a vehicle in a setting range of a current position of the vehicle, and determining a target vehicle for detecting a blind area according to the basic information when a threatening vehicle exists in the corresponding blind area, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring alarm information sent by a road side unit based on threat risks existing in a blind area corresponding to the current position of the vehicle, wherein the alarm information carries identification information of a target vehicle for detecting the blind area;
in one embodiment, the vehicle receives basic information broadcasted by other vehicles through the V2X transceiver module, determines whether a threatening vehicle exists in a corresponding blind area of the vehicle according to the received basic information, and determines a target vehicle for detecting the blind area according to the basic information when the threatening vehicle exists.
In one embodiment, the vehicle receives the warning information sent by the road side unit through the V2X transceiver module, so as to obtain the identification information of the target vehicle.
In one embodiment, before it is determined that a threatening vehicle exists in the corresponding blind area, the method comprises:
acquiring the yaw angle and position information of the vehicle;
and determining the threatening vehicles in the corresponding blind areas according to the yaw angle and position information of the vehicle and the set yaw angle deviation value.
The vehicle-mounted unit acquires the yaw angle and position information of its own vehicle and is configured with a yaw angle deviation value used for identifying threatening vehicles in the blind area. The region of the blind area seen by the driver is determined from the position information and yaw angle of the vehicle; for each vehicle that falls within this region, whether it is a threatening vehicle can then be determined from its yaw angle and the set yaw angle deviation value. By detecting threatening vehicles in the blind area in this way, a target vehicle is determined only when a threatening vehicle actually exists in the blind area, which reduces useless data processing and avoids the transmission of redundant data.
In one embodiment, the determining, according to the basic information, a target vehicle for detecting the blind area includes:
selecting a vehicle to be selected between the threatening vehicle and the vehicle according to the position information of the threatening vehicle and the vehicle;
generating a first angle range value according to the position information of the vehicle and the position information of the vehicle to be selected, wherein the first angle range value is used for representing a blind area corresponding to the vehicle;
generating a second angle range value according to the position information of the vehicle and the position information of the threatening vehicle, wherein the second angle range value is used for representing the view range between the driver position of the current vehicle and the body outline of the threatening vehicle;
and if the second angle range value belongs to the first angle range value, determining the vehicle to be selected as the target vehicle to obtain the identification information of the target vehicle.
The vehicle-mounted unit generates a first angle range value from the position information of the vehicle and of the vehicle to be selected, and a second angle range value from the position information of the vehicle and of the threatening vehicle. From the relationship between the second angle range value and the first angle range value, it judges whether the vehicle to be selected can detect the threatening vehicle in the blind area; if so, the vehicle to be selected is determined to be the target vehicle. In this way, the target vehicle that can observe the threatening vehicle in the blind area is determined automatically, so that monitoring data of the blind area can be received in real time, early warning is realized, and driving safety is improved.
Step 302, starting a monitoring data receiving thread according to the identification information of the target vehicle, and receiving the current monitoring data of the target vehicle through the monitoring data receiving thread.
And the vehicle-mounted unit of the vehicle starts a monitoring data receiving thread according to the identification information of the target vehicle, and receives the monitoring data broadcasted by the target vehicle through the V2X transceiving module through the monitoring data receiving thread.
In one embodiment, the vehicle-mounted unit of each vehicle acquires video stream data of its surroundings through the image acquisition device, packs the video stream data into RTP (Real-time Transport Protocol) streaming media data, encapsulates it with the DSRC protocol, and then broadcasts it. Packaging the video stream with the RTP layer and the DSRC layer in this way suits Internet-of-Vehicles video transmission, helps reduce transmission delay, and meets the reliability requirements of monitoring. Because the video data is transmitted between vehicles over DSRC, point-to-point transmission is achieved without a core network or public network, so the real-time performance is strong and the road-condition monitoring needs of the vehicles are met. A sketch of this two-layer packing is given below.
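A minimal illustration of the two-layer packing, assuming a hypothetical dsrc_broadcast hook for the DSRC transceiver; the RTP header layout follows RFC 3550, payload type 96 is an arbitrary dynamic value, and a real implementation would also fragment H.264 NAL units per RFC 6184.

```python
import struct

def rtp_pack(payload: bytes, seq: int, timestamp: int, ssrc: int,
             payload_type: int = 96, marker: bool = False) -> bytes:
    """Prepend a 12-byte RTP header (RFC 3550) to one chunk of H.264 data."""
    byte0 = 0x80                                  # version 2, no padding/extension/CSRC
    byte1 = (0x80 if marker else 0x00) | payload_type
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

def dsrc_broadcast(frame: bytes) -> None:
    """Hypothetical second-layer encapsulation and broadcast via the DSRC module."""
    print(f"DSRC broadcast of {len(frame)} bytes")

def send_video(h264_chunks, ssrc: int = 0x1234) -> None:
    """First-layer RTP packing, then second-layer DSRC encapsulation, for each chunk."""
    seq, ts = 0, 0
    for chunk in h264_chunks:
        dsrc_broadcast(rtp_pack(chunk, seq, ts, ssrc))
        seq += 1
        ts += 3000          # 90 kHz clock advance for roughly 30 fps video

send_video([b"\x00\x00\x00\x01" + b"demo-nal-unit"])
```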
According to the identification information of the target vehicle, the vehicle-mounted unit starts a monitoring data receiving thread for receiving the RTP streaming media broadcast by the target vehicle. The data received by this thread is parsed through the DSRC protocol and the RTP protocol to recover data in the H.264 digital video compression format, which the vehicle displays through an HMI (Human Machine Interface).
According to the blind area detection method above, when it is determined from the basic information corresponding to vehicles within a set range of the vehicle's current position that a threatening vehicle exists in the corresponding blind area, a target vehicle for detecting the blind area is determined according to the basic information; and/or the identification information of the target vehicle carried in the warning information sent by the road side unit is acquired. A monitoring data receiving thread is then started according to the identification information of the target vehicle, and the current monitoring data of the target vehicle is received through that thread. Compared with the prior art, the blind area can be effectively detected by receiving the target vehicle's current monitoring data when a threat risk exists in the vehicle's blind area, so that risks are predicted in advance and driving safety is improved. Moreover, because the monitoring data receiving thread is opened in a targeted manner according to the identification information of the target vehicle, the impact of redundant data transmission on the transmission bandwidth is reduced and real-time reception of the monitoring data is effectively ensured, which further improves driving safety.
Fig. 4 is a schematic diagram of vehicle positions in an application scene. As shown in fig. 4, a vehicle V2 and a vehicle V3 are in front of the vehicle V1; the vehicle V3 is occluded by the vehicle V2 and is about to cross the lane, threatening the vehicle V1, and the vehicle V3 is located in the blind area of the vehicle V1.
Fig. 5 illustrates the blind area detection method in an application scenario. As shown in fig. 5, the on-board unit of the vehicle V1 receives the basic information of the vehicles V2 and V3 via the V2X transceiver module and obtains the position information and yaw angle of its own vehicle. Based on the lane orientation angle, the on-board unit screens out vehicles whose yaw angle differs from that of the own vehicle by a value in [45°, 135°] or [-135°, -45°]. For example, assume the vehicle is traveling along lane 1 and the GPS gives a yaw angle of 32°, so lanes 1, 2, 3 are oriented at 32° and the opposite lanes 4, 5, 6 at 212° (32° + 180°). When a vehicle V3 appears in the map with a yaw angle of 262°, the difference relative to the own lane, normalized to (-180°, 180°], is -130°, which falls within [-135°, -45°]; the vehicle V3 is therefore determined to be a threatening vehicle. A sketch of this screening check follows.
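For illustration only, a minimal sketch of this yaw-angle screening, with the [45°, 135°] / [-135°, -45°] deviation band taken from the example above (function and variable names are hypothetical):

```python
def normalize_deg(angle: float) -> float:
    """Wrap an angle difference into (-180, 180]."""
    a = angle % 360.0
    return a - 360.0 if a > 180.0 else a

def is_threatening(own_lane_heading_deg: float, other_yaw_deg: float) -> bool:
    """True if the other vehicle's heading crosses the own lane direction,
    i.e. the yaw difference lies in [45, 135] or [-135, -45] degrees."""
    diff = normalize_deg(other_yaw_deg - own_lane_heading_deg)
    return 45.0 <= diff <= 135.0 or -135.0 <= diff <= -45.0

# Worked example from the text: own lane at 32 deg, oncoming crossing vehicle at 262 deg.
assert is_threatening(32.0, 262.0)      # difference normalizes to -130 deg
assert not is_threatening(32.0, 212.0)  # opposite-lane traffic (difference 180 deg) is not flagged
```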
After the threatening vehicle has been identified, the vehicle-mounted unit judges in turn the degree of threat it poses to the own vehicle, as follows. The midpoint (x0, y0) of the vehicle V1 is connected to the four corners of the other vehicle V2, and the two connecting lines l1 and l2 whose angles in the WGS84 coordinate system (geocentric coordinate system) are respectively the minimum and the maximum are selected:
θ1 = min{ atan2(yi - y0, xi - x0) } over the four corners (xi, yi) of V2 (angle of l1)
θ2 = max{ atan2(yi - y0, xi - x0) } over the four corners (xi, yi) of V2 (angle of l2)
Likewise, the midpoint (x0, y0) of the vehicle V1 is connected to the four corners of the threatening vehicle V3, and the two connecting lines l4 and l3 with the smallest and largest angles are selected; their angles in the WGS84 coordinate system are:
θ3 = max{ atan2(yj - y0, xj - x0) } over the four corners (xj, yj) of V3 (angle of l3)
θ4 = min{ atan2(yj - y0, xj - x0) } over the four corners (xj, yj) of V3 (angle of l4)
When θ3 and θ4 satisfy θ3 ∈ [θ1, θ2] and θ4 ∈ [θ1, θ2], and the vehicle V2 is closer to V1 than the vehicle V3 is, the threatening vehicle V3 is determined to be occluded by the vehicle V2, and the vehicle V1 needs to receive the video of the vehicle V2. When the threat is a pedestrian rather than a vehicle, the processing is similar: the pedestrian is treated as a mass point and the connection-angle calculation and judgement are applied to that point. A sketch of this occlusion check is given below.
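An illustrative sketch only, assuming a local planar coordinate frame and using atan2 for the connection-line angles (function and variable names are not from the patent):

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def connection_angle_range(viewer: Point, corners: List[Point]) -> Tuple[float, float]:
    """Minimum and maximum angles of the lines from the viewer midpoint to an object's corners."""
    angles = [math.atan2(y - viewer[1], x - viewer[0]) for x, y in corners]
    return min(angles), max(angles)

def nearest_distance(viewer: Point, corners: List[Point]) -> float:
    return min(math.hypot(x - viewer[0], y - viewer[1]) for x, y in corners)

def threat_is_occluded(viewer: Point, blocker_corners: List[Point], threat_corners: List[Point]) -> bool:
    """The threat's connection angles (theta3, theta4) must fall inside the blocker's
    angle range [theta1, theta2], and the blocker must be closer to the viewer."""
    theta1, theta2 = connection_angle_range(viewer, blocker_corners)
    theta4, theta3 = connection_angle_range(viewer, threat_corners)
    inside = theta1 <= theta4 and theta3 <= theta2
    return inside and nearest_distance(viewer, blocker_corners) < nearest_distance(viewer, threat_corners)

# V1 at the origin; V2 directly ahead of it; V3 farther away, hidden behind V2.
v2_corners = [(9.0, -1.0), (9.0, 1.0), (13.0, -1.0), (13.0, 1.0)]
v3_corners = [(19.0, -0.5), (19.0, 0.5), (21.0, -0.5), (21.0, 0.5)]
print(threat_is_occluded((0.0, 0.0), v2_corners, v3_corners))   # True
```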
The on-board unit of the vehicle V1 determines the vehicle V2 as the target vehicle, extracts the identification information (ID number) of the vehicle V2 from the received basic information, activates an RTP streaming reception thread, and configures the ID number of V2 as an ID from which the vehicle V1 will accept RTP streaming media. Thereafter, the video stream captured by the vehicle V2 is transmitted to the vehicle V1 and parsed through the DSRC protocol and the RTP protocol into the H.264 digital video compression format for the vehicle V1 to display.
In this blind area detection method, the monitoring data receiving thread is started in a targeted manner according to the identification information of the target vehicle. The impact of redundant data transmission on the transmission bandwidth is thereby reduced, real-time reception of the monitoring data is effectively guaranteed, and the field-of-view enhancement function is not degraded by redundant data or bandwidth limits, which further improves driving safety.
In one embodiment, after the vehicle-mounted unit of the vehicle receives the warning message sent by the road side unit, a monitoring data receiving thread is started according to the identification information of the target vehicle carried by the warning message, and the current monitoring data of the target vehicle is received through the monitoring data receiving thread. Specifically, the ID number of the target vehicle is configured as an ID number by which the own vehicle can receive RTP streaming media data. And then, the video stream collected by the target vehicle is transmitted into the vehicle, and an H.264 digital video compression format is formed through DSRC protocol analysis and RTP protocol analysis for the display of the vehicle.
The embodiment of the present invention further provides a blind area detection method, which is applied to a road side unit, please refer to fig. 6, and the method includes:
step 601, obtaining basic information corresponding to a vehicle in a setting range of a position where a road side unit is located, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle;
the data processing device of the road side unit receives basic information of each vehicle in a broadcasting area through the V2X receiving and sending module, and sends warning information to the corresponding vehicle when determining that threat risks exist, wherein the basic information comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle.
Step 602, obtaining position information of a target object detected by an environment sensing device;
the roadside unit is provided with an environment sensing device for detecting information of surrounding roads, and the environment sensing device can be at least one of sensing equipment such as an image acquisition device, a detection radar and an infrared sensor and is used for acquiring at least one of video stream data, image data, radar detection data and infrared detection data. In one embodiment, the environment sensing device employs a camera module containing a human body recognition algorithm or related functions, which will continuously perform image processing from streaming media data, and when a person is present in a preset area (lane), the coordinates of the person will be returned to the data processing device through the 485 bus.
Step 603, determining a target vehicle for detecting a blind area when it is determined that the target object is located in the blind area of the corresponding vehicle based on the basic information and the position information of the target object;
the data processing device of the road side unit maps the position information of the target object and the position information of the vehicle in the basic information into a static map; and determining whether the target object is positioned in the blind area of the corresponding vehicle according to the relationship between the target object and the positions of the vehicles.
Optionally, the determining whether the target object is located in a blind area of a corresponding vehicle according to a relationship between the target object and the positions of the vehicles includes:
selecting the corresponding vehicle and a vehicle to be selected between the corresponding vehicle and the target object according to the relationship between the positions of the target object and the vehicles;
generating a third angle range value according to the position information of the corresponding vehicle and the position information of the vehicle to be selected, wherein the third angle range value is used for representing a blind area of the corresponding vehicle;
generating a fourth angle range value according to the position information of the corresponding vehicle and the position information of the target object, wherein the fourth angle range value is used for representing a view range between the driver position of the corresponding vehicle and the target object;
and if the fourth angle range value belongs to the third angle range value, determining that the target object is positioned in the blind area of the corresponding vehicle.
Optionally, the determining a target vehicle for detecting the blind area includes:
and selecting the vehicle which meets the set condition with the distance from the target object as the target vehicle according to the basic information.
In one embodiment, the vehicle closest to the target object is selected as the target vehicle.
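A minimal sketch of the road side unit's decision, reusing the connection-angle idea with the target object treated as a mass point; all names are illustrative assumptions:

```python
import math
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]

def blind_angle_range(viewer: Point, corners: List[Point]) -> Tuple[float, float]:
    """Third angle range value: the angular extent of the blind area cast by a candidate vehicle."""
    angles = [math.atan2(y - viewer[1], x - viewer[0]) for x, y in corners]
    return min(angles), max(angles)

def target_in_blind_area(viewer: Point, candidate_corners: List[Point], target: Point) -> bool:
    """Fourth angle range value collapses to a single angle, since the target is a mass point."""
    lo, hi = blind_angle_range(viewer, candidate_corners)
    theta = math.atan2(target[1] - viewer[1], target[0] - viewer[0])
    return lo <= theta <= hi

def pick_target_vehicle(viewer: Point, vehicles: Dict[str, List[Point]], target: Point) -> Optional[str]:
    """Among vehicles hiding the target object from the viewer, pick the one closest to the target."""
    def distance_to_target(corners: List[Point]) -> float:
        return min(math.hypot(x - target[0], y - target[1]) for x, y in corners)

    blockers = [vid for vid, corners in vehicles.items()
                if target_in_blind_area(viewer, corners, target)]
    return min(blockers, key=lambda vid: distance_to_target(vehicles[vid])) if blockers else None

# Vehicle V1 at the origin, vehicle V2 ahead of it, pedestrian P1 hidden behind V2.
nearby = {"V2": [(9.0, -1.0), (9.0, 1.0), (13.0, -1.0), (13.0, 1.0)]}
print(pick_target_vehicle((0.0, 0.0), nearby, (20.0, 0.0)))   # "V2"
```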
And step 604, generating warning information according to the identification information of the target vehicle, and sending the warning information to the corresponding vehicle.
According to this blind area detection method, the monitoring data receiving thread is started in a targeted manner according to the identification information of the target vehicle, so the impact of redundant data transmission on the transmission bandwidth is reduced, real-time reception of the monitoring data is effectively guaranteed, and the field-of-view enhancement function is not degraded by redundant data or bandwidth limits, which further improves driving safety.
Fig. 7 is a schematic diagram of vehicle positions in an application scene. As shown in fig. 7, a vehicle V2 and a pedestrian P1 are in front of the vehicle V1, and a road side unit D1 together with a camera D2 connected to it are disposed on the roadside. The road side unit D1 detects the positions of all vehicles from the basic information received from each vehicle via the V2X transceiver module, detects the position of the pedestrian through the camera D2, and maps and updates this information in a local LDM (local dynamic map). Through its data processing device, the road side unit D1 processes the LDM and determines that the pedestrian P1 falls within the area of vehicle V1's view that is shielded by the vehicle V2. The road side unit D1 then sends warning information to the vehicle V1 through the V2X transceiver module, attaching the ID of the vehicle V2. After receiving the warning information, the vehicle V1 activates the RTP streaming media receiving thread and configures the ID number of V2 as an ID from which it will accept RTP streaming media. Thereafter, the video stream collected by the vehicle V2 is transmitted to the vehicle V1 and parsed through the DSRC protocol and the RTP protocol into the H.264 digital video compression format for the vehicle V1 to display.
FIG. 8 illustrates the blind area detection method in this embodiment. As shown in fig. 8, the road side unit receives the basic information of the vehicles via the V2X transceiver module, including the position information and yaw angle of the vehicle V1. The midpoint of the vehicle V1 is connected to the four corners of the other vehicle, the two lines with the minimum and maximum connection angles are selected, and the angle range value corresponding to the shadow area D shown in fig. 8 is calculated. Through planar processing on the static map it is confirmed that the pedestrian P1 lies within the shadow area; the vehicle V2 closest to the pedestrian P1 is screened out as the target vehicle, warning information carrying the identification information of the vehicle V2 is sent to the vehicle V1, and the vehicle V1 is requested to receive the RTP video stream of the vehicle V2.
By starting the monitoring data receiving thread in a targeted manner according to the identification information of the target vehicle, this blind area detection method reduces the impact of redundant data transmission on the transmission bandwidth, effectively guarantees real-time reception of the monitoring data, and keeps the field-of-view enhancement function free from interference by redundant data and bandwidth limits, thereby further improving driving safety.
An embodiment of the present invention further provides a vehicle-mounted unit, please refer to fig. 9, where the vehicle-mounted unit includes: a first transceiver module 111 and a first processing module 112.
The first transceiver module 111 is configured to acquire basic information corresponding to a vehicle within a setting range of a current position of the vehicle, and determine, when a threatening vehicle exists in a corresponding blind area, a target vehicle for detecting the blind area according to the basic information, where the basic information at least includes: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring alarm information sent by a road side unit based on threat risks existing in a blind area corresponding to the current position of the vehicle, wherein the alarm information carries identification information of a target vehicle for detecting the blind area;
the first processing module 112 is configured to start a monitoring data receiving thread according to the identification information of the target vehicle, and receive the current monitoring data of the target vehicle through the monitoring data receiving thread.
In an embodiment, the first transceiver module 111 is further configured to: before the vehicle threat situation exists in the corresponding blind area, acquiring the yaw angle and the position information of the vehicle; and determining the threatening vehicles in the corresponding blind areas according to the yaw angle and position information of the vehicle and the set yaw angle deviation value.
In an embodiment, the first transceiver module 111 is configured to: selecting a vehicle to be selected between the threatening vehicle and the vehicle according to the position information of the threatening vehicle and the vehicle; generating a first angle range value according to the position information of the vehicle and the position information of the vehicle to be selected, wherein the first angle range value is used for representing a blind area corresponding to the vehicle; generating a second angle range value according to the position information of the vehicle and the position information of the threatening vehicle, wherein the second angle range value is used for representing the view range between the driver position of the current vehicle and the body outline of the threatening vehicle; and if the second angle range value belongs to the first angle range value, determining the vehicle to be selected as the target vehicle to obtain the identification information of the target vehicle.
In one embodiment, the first processing module 112 is configured to: and starting a monitoring data receiving thread for receiving video stream data corresponding to the surrounding environment sent by the target vehicle in a broadcasting mode according to the identification information of the target vehicle.
In one embodiment, the first processing module 112 is further configured to: and displaying the received video stream data.
In an embodiment, the first transceiver module 111 is further configured to: acquiring and sending basic information of the vehicle in a broadcasting mode, wherein the basic information at least comprises the following components: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and acquiring and transmitting the monitoring data acquired by the vehicle in a broadcasting mode. The first transceiver module 11 obtains the basic information and/or the monitoring data and transmits the basic information and/or the monitoring data to the first transceiver device, and the first transceiver device broadcasts the basic information and/or the monitoring data.
It should be noted that the division into program modules given in the above embodiment is merely an example; in practical applications, the above processing may be distributed to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the vehicle-mounted unit and the blind area detection method provided by the above embodiment belong to the same concept; their specific implementation is described in the method embodiment and is not repeated here.
An embodiment of the present invention further provides a road side unit, please refer to fig. 10, where the road side unit includes: a data processing device 21, a second transceiver 22, and an environment sensing device 23. Wherein the data processing device 21 is configured with: a second transceiver module 211, an acquisition module 212, and a second processing module 213.
The second transceiver module 211 is configured to acquire basic information corresponding to a vehicle in a setting range of a location where the roadside unit is located, where the basic information at least includes: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; an obtaining module 212, configured to obtain position information of a target object detected by an environment sensing device; a second processing module 213, configured to determine, based on the basic information and the position information of the target object, that the target object is located in a blind area of a corresponding vehicle; determining a target vehicle for detecting the blind area, and generating alarm information based on the identification information of the target vehicle; the second transceiver module 211 is further configured to send the warning message to the corresponding vehicle.
In an embodiment, the second processing module 213 is further configured to: mapping the position information of the target object and the position information of the vehicle in the basic information into a static map; and determining whether the target object is positioned in the blind area of the corresponding vehicle according to the relationship between the target object and the positions of the vehicles.
In an embodiment, the second processing module 213 is further configured to: selecting the corresponding vehicle and a vehicle to be selected between the corresponding vehicle and the target object according to the relationship between the positions of the target object and the vehicles; generating a third angle range value according to the position information of the corresponding vehicle and the position information of the vehicle to be selected, wherein the third angle range value is used for representing a blind area of the corresponding vehicle; generating a fourth angle range value according to the position information of the corresponding vehicle and the position information of the target object, wherein the fourth angle range value is used for representing the view range between the driver position of the corresponding vehicle and the target object; and if the fourth angle range value belongs to the third angle range value, determining that the target object is positioned in the blind area of the corresponding vehicle.
In an embodiment, the second processing module 213 is further configured to: and selecting the vehicle which meets the set condition with the distance from the target object as the target vehicle according to the basic information.
It should be noted that the division into program modules given in the above embodiment is merely an example; in practical applications, the above processing may be distributed to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the road side unit and the blind area detection method provided by the above embodiment belong to the same concept; their specific implementation is described in the method embodiment and is not repeated here.
The embodiment of the present invention further provides a vehicle, please refer to fig. 11, which includes a vehicle-mounted unit 11, a first transceiver 12, an image acquisition device 13, and a detection device 14, wherein the first transceiver 12, the detection device 14, and the image acquisition device 13 are all communicatively connected to the vehicle-mounted unit 11; the detection device 14 is used for detecting the position information and the yaw angle of the vehicle; the first transceiver 12 is used for transmitting basic information corresponding to the current vehicle to other vehicles within a set range; the image acquisition device 13 is used for acquiring monitoring data corresponding to the vehicle. In this embodiment, the on-board unit 11 is configured with a first transceiver module 111 and a first processing module 112.
The first transceiver module 111 is configured to obtain basic information corresponding to a vehicle within a setting range of a current position of the vehicle, and determine, when a threatening vehicle exists in a corresponding blind area, a target vehicle for detecting the blind area according to the basic information, where the basic information at least includes: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring warning information sent by a road side unit when threat risks exist in a blind area corresponding to the current position of the vehicle, wherein the warning information carries identification information of a target vehicle for detecting the blind area; the first processing module 112 is configured to start a monitoring data receiving thread according to the identification information of the target vehicle, and receive the current monitoring data of the target vehicle through the monitoring data receiving thread.
It should be noted that the first transceiver 12 may perform V2X communication in the DSRC mode, and in other embodiments, may also implement communication in the Uu interface of the LTE-V or 5G technology.
In one embodiment, the processing flow of the on-board unit 11 is shown in fig. 12. The method comprises the following steps:
step 1201, initializing;
when the vehicle-mounted unit is powered on, the software can execute software and hardware initialization, such as camera parameter configuration, DSRC transceiver module initialization and the like.
Step 1202, starting an RTP packing and sending thread;
and after the initialization is finished, starting RTP packing and sending threads.
Step 1203, acquiring self parameter information;
the vehicle-mounted unit collects information such as self GPS coordinates, speed, yaw angle, acceleration and the like through a vehicle-mounted sensor, and the information is used for generating basic information of the vehicle.
Step 1204, judge whether to receive BSM packet;
the vehicle-mounted unit judges whether BSM packets of other vehicles are received through the DSRC receiving module, and the BSM packets comprise: location information of the vehicle (e.g., latitude and longitude), speed, yaw angle, vehicle length, vehicle width, and identification information of the vehicle. If a BSM packet is received, step 1205 is performed, otherwise step 1208 is performed.
Step 1205, unpacking the BSM packet;
the vehicle-mounted unit unpacks the received BSM packet and extracts vehicle data such as position information (such as longitude and latitude), speed, yaw angle, vehicle length, vehicle width, vehicle identification information and the like of the vehicle.
Step 1206, vehicle data storage and updating;
the vehicle-mounted unit gives a time stamp to the analyzed and extracted vehicle data, stores the data locally, and deletes the overtime data according to the time stamp to update the data, so that the storage space can be timely recovered.
Step 1207, calculating a shielded area in front of the vehicle;
and calculating the front shielded area of the vehicle according to the position information, the yaw angle and the adjacent vehicles of the vehicle.
Step 1208, judging whether a threatening vehicle exists in a blind area corresponding to the vehicle;
in the step, the vehicle-mounted unit establishes a relative coordinate system by taking the vehicle as an origin, extracts the vehicle in a shielded area of the vehicle from stored vehicle data, and judges whether the vehicle threatens the vehicle, namely the difference range between a connecting line angle and a lane angle is between [45 degrees, 135 degrees ] and [ -135 degrees, and-45 degrees ] (the difference range indicates that a track predicted by an opposite vehicle and a track driven by the vehicle are likely to converge relatively quickly), if so, executing a step 1209, otherwise, executing a step 1213.
Step 1209, determining a target vehicle for detecting a blind area;
determining a target vehicle for detecting a blind area according to basic information, and selecting a vehicle to be selected between the threatening vehicle and the vehicle according to the position information of the threatening vehicle and the position information of the vehicle; generating a first angle range value according to the position information of the vehicle and the position information of the vehicle to be selected, wherein the first angle range value is used for representing a blind area corresponding to the vehicle; generating a second angle range value according to the position information of the vehicle and the position information of the threatening vehicle, wherein the second angle range value is used for representing the view range between the driver position of the current vehicle and the body outline of the threatening vehicle; and if the second angle range value belongs to the first angle range value, determining the vehicle to be selected as the target vehicle to obtain the identification information of the target vehicle.
Step 1210, judging whether an RTP receiving thread is started;
and judging whether an RTP receiving thread is started or not, wherein the RTP receiving thread is used for receiving monitoring data of the target vehicle, if so, executing a step 1211, and if not, executing a step 1212.
Step 1211, waiting for the started RTP receiving thread;
the started RTP receiving thread has a timing function, and when the set running duration is met, the RTP receiving thread is terminated and executes step 1213.
Step 1212, starting an RTP receiving thread;
and starting an RTP receiving thread according to the identification information of the target vehicle, wherein the RTP receiving thread runs for a set time length to acquire monitoring data corresponding to the target vehicle, such as RTP video stream data, so that the visual field enhancement function is realized. And the RTP receiving thread is terminated after running for a set duration, and step 1213 is executed.
Step 1213, judging whether the alarm information sent by the road side unit is received;
and the on-board unit judges whether the alarm information sent by the road side unit is received, if so, the step 1214 is executed, and if not, the step 1217 is executed.
Step 1214, determining whether the RTP receiving thread is started;
it is determined whether an RTP receiving thread for receiving the monitoring data of the target vehicle is turned on, if so, step 1215 is performed, otherwise, step 1216 is performed.
Step 1215, waiting for the started RTP receiving thread;
the started RTP receiving thread has a timing function, and when the set running time length is met, the RTP receiving thread is terminated and step 1217 is executed.
Step 1216, start the RTP receiving thread;
and starting an RTP receiving thread according to the identification information of the target vehicle carried in the alarm information, wherein the RTP receiving thread runs for a set time length to acquire monitoring data corresponding to the target vehicle, such as RTP video stream data, so that the visual field enhancement function is realized. And the RTP receiving thread is terminated after running for a set duration, and step 1217 is executed.
Step 1217, judge whether BSM packet sending cycle is reached;
in this embodiment, the on-board unit sends the BSM packet corresponding to the vehicle according to a set period, if the sending period arrives, step 1218 is executed, otherwise, step 1203 is returned.
In step 1218, the BSM packet is sent.
The vehicle-mounted unit packs the acquired data, such as its own GPS coordinates, speed, yaw angle and acceleration, into the fixed BSM packet format, broadcasts the packet, and returns to step 1203.
Fig. 13 is a schematic flow chart illustrating an embodiment of the on-board unit receiving monitoring data according to the present invention, referring to fig. 13, in an embodiment, the on-board unit receiving monitoring data includes the following steps:
Step 1301, starting an RTP receiving thread;
The vehicle-mounted unit starts an RTP receiving thread according to the identification information of the target vehicle.
Step 1302, receiving an RTP data packet corresponding to the target vehicle;
The vehicle-mounted unit receives the current RTP data packets of the target vehicle through the started RTP receiving thread.
Step 1303, DSRC unpacking;
The vehicle-mounted unit unpacks the received data according to the DSRC protocol.
Step 1304, RTP unpacking;
The vehicle-mounted unit unpacks the DSRC-unpacked data according to the RTP protocol to obtain the video stream data.
Step 1305, displaying the video data;
The vehicle-mounted unit displays and outputs the obtained video stream data through the HMI.
Step 1306, judging whether the set duration is reached;
The vehicle-mounted unit judges, by means of a timer, whether the running duration of the RTP receiving thread has been reached; if so, the thread is ended, otherwise the flow returns to step 1302 and continues until the running duration is reached and the thread is ended.
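The receiving flow of Fig. 13 can be sketched as the following loop; the callables are assumed placeholders for the DSRC transceiver, the RTP/DSRC unpacking layers and the HMI, not APIs defined by the patent:

```python
import time

def receive_monitoring_data(receive_packet, dsrc_unpack, rtp_unpack, display,
                            run_seconds=10.0):
    """Sketch of the Fig. 13 loop; the four callables stand in for the DSRC
    transceiver, the two unpacking layers and the HMI output."""
    deadline = time.monotonic() + run_seconds   # step 1306: timer
    while time.monotonic() < deadline:
        raw = receive_packet()                  # step 1302: RTP data packet
        if raw is None:
            continue
        rtp_payload = dsrc_unpack(raw)          # step 1303: DSRC unpacking
        video_frame = rtp_unpack(rtp_payload)   # step 1304: RTP unpacking
        display(video_frame)                    # step 1305: display via HMI
```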
Fig. 14 is a schematic flow chart of the on-board unit sending monitoring data according to an embodiment of the present invention. Referring to Fig. 14, in an embodiment, the sending of monitoring data by the on-board unit includes the following steps:
Step 1401, starting an RTP packing and sending thread;
After the initialization of the vehicle-mounted unit is completed, an RTP (Real-time Transport Protocol) packing and sending thread is started.
Step 1402, reading monitoring data;
The vehicle-mounted unit reads the H.264 video stream data acquired and generated by the image acquisition device.
Step 1403, RTP packing;
The vehicle-mounted unit performs first-layer packing of the video stream data according to the RTP protocol.
Step 1404, DSRC packing;
The vehicle-mounted unit performs second-layer packing of the first-layer-packed data according to the DSRC protocol to obtain an RTP data packet.
Step 1405, transmitting the RTP data packet.
The vehicle-mounted unit broadcasts the RTP data packet via the V2X transceiver module, and the flow returns to step 1402.
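The sending flow of Fig. 14 can be sketched as the following loop, again with assumed placeholder callables rather than concrete interfaces from the patent:

```python
def send_monitoring_data(read_h264, rtp_pack, dsrc_pack, broadcast):
    """Sketch of the Fig. 14 loop; the callables are placeholders for the
    image acquisition device, the two packing layers and the V2X transceiver."""
    while True:
        nal_units = read_h264()              # step 1402: read monitoring data
        if not nal_units:
            break
        rtp_packet = rtp_pack(nal_units)     # step 1403: first-layer RTP packing
        data_packet = dsrc_pack(rtp_packet)  # step 1404: second-layer DSRC packing
        broadcast(data_packet)               # step 1405: broadcast via V2X
```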
In the embodiment of the invention, the vehicle-mounted unit packages the video stream data in two layers, RTP and DSRC, which makes it suitable for transmitting video streams over the Internet of Vehicles. Since DSRC is used to transmit the video between vehicles, the video is transmitted point-to-point without passing through a core network or a public network, so the real-time performance is strong.
An embodiment of the present invention further provides a readable storage medium, where the storage medium may include: various media that can store program codes, such as a removable Memory device, a Random Access Memory (RAM), a Read-Only Memory (ROM), a magnetic disk, and an optical disk. The readable storage medium stores an executable program; the executable program is used for realizing the blind area detection method in any embodiment of the invention when being executed by a processor.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing system to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing system, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing system to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing system to cause a series of operational steps to be performed on the computer or other programmable system to produce a computer implemented process such that the instructions which execute on the computer or other programmable system provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A blind area detection method, applied to a vehicle-mounted unit, the method comprising:
acquiring basic information corresponding to vehicles within a set range of a current position of the vehicle, and determining, when a threatening vehicle exists in a corresponding blind area, a target vehicle for detecting the blind area according to the basic information, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or acquiring warning information sent by a road side unit when a threat risk exists in a blind area corresponding to the current position of the vehicle, wherein the warning information carries identification information of a target vehicle for detecting the blind area;
and starting a monitoring data receiving thread according to the identification information of the target vehicle, and receiving the current monitoring data of the target vehicle through the monitoring data receiving thread.
2. The blind area detection method according to claim 1, wherein before determining that a threatening vehicle exists in the corresponding blind area, the method comprises:
acquiring the yaw angle and position information of the vehicle;
determining the threatening vehicle in the corresponding blind area according to the yaw angle and position information of the vehicle and a set yaw angle deviation value.
3. The blind area detection method according to claim 2, wherein said determining a target vehicle for detecting the blind area based on the basic information includes:
selecting a vehicle to be selected between the threatening vehicle and the vehicle according to the position information of the threatening vehicle and the position information of the vehicle;
generating a first angle range value according to the position information of the vehicle and the position information of the vehicle to be selected, wherein the first angle range value is used for representing a blind area corresponding to the vehicle;
generating a second angle range value according to the position information of the vehicle and the position information of the threatening vehicle, wherein the second angle range value is used for representing the view range between the driver position of the current vehicle and the body contour of the threatening vehicle;
and if the second angle range value belongs to the first angle range value, determining the vehicle to be selected as the target vehicle to obtain the identification information of the target vehicle.
4. The blind area detection method according to claim 1, wherein said starting a monitoring data receiving thread according to the identification information of the target vehicle includes:
starting a monitoring data receiving thread for receiving video stream data corresponding to the surrounding environment sent by the target vehicle in a broadcasting mode according to the identification information of the target vehicle;
after receiving the current monitoring data of the target vehicle through the monitoring data receiving thread, the method comprises:
and displaying the received video stream data.
5. The blind area detection method according to claim 1, wherein the method further comprises:
acquiring and sending basic information of the vehicle in a broadcasting mode, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle;
and acquiring and transmitting the monitoring data acquired by the vehicle in a broadcasting mode.
6. A blind area detection method, applied to a road side unit, the method comprising:
acquiring basic information corresponding to a vehicle within a set range of a position of the road side unit, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle;
acquiring position information of a target object detected by an environment sensing device;
determining a target vehicle for detecting a blind area when it is determined that the target object is located within the blind area of the corresponding vehicle based on the basic information and the position information of the target object;
and generating alarm information according to the identification information of the target vehicle, and sending the alarm information to the corresponding vehicle.
7. The blind area detection method according to claim 6, wherein before determining that the target object is located within the blind area of the corresponding vehicle based on the basic information and the position information of the target object, the method comprises:
mapping the position information of the target object and the position information of the vehicle in the basic information into a static map;
and determining whether the target object is positioned in the blind area of the corresponding vehicle according to the relationship between the positions of the target object and the vehicles.
8. The blind area detection method according to claim 7, wherein the determining whether the target object is located in the blind area of the corresponding vehicle based on the relationship between the positions of the target object and the respective vehicles includes:
selecting the corresponding vehicle and a vehicle to be selected between the corresponding vehicle and the target object according to the relationship between the positions of the target object and the vehicles;
generating a third angle range value according to the position information of the corresponding vehicle and the position information of the vehicle to be selected, wherein the third angle range value is used for representing a blind area of the corresponding vehicle;
generating a fourth angle range value according to the position information of the corresponding vehicle and the position information of the target object, wherein the fourth angle range value is used for representing a view range between the driver position of the corresponding vehicle and the target object;
and if the fourth angle range value belongs to the third angle range value, determining that the target object is located in the blind area of the corresponding vehicle.
9. The blind area detection method according to claim 6, wherein the determining a target vehicle for detecting the blind area includes:
selecting, according to the basic information, a vehicle whose distance from the target object meets a set condition as the target vehicle.
10. An on-board unit, comprising: a first transceiver module and a first processing module,
the first transceiver module is configured to acquire basic information corresponding to a vehicle within a set range of a current position of the vehicle, and to determine, when a threatening vehicle exists in a corresponding blind area, a target vehicle for detecting the blind area according to the basic information, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle; and/or to acquire warning information sent by a road side unit when a threat risk exists in a blind area corresponding to the current position of the vehicle, wherein the warning information carries identification information of a target vehicle for detecting the blind area;
the first processing module is used for starting a monitoring data receiving thread according to the identification information of the target vehicle and receiving the current monitoring data of the target vehicle through the monitoring data receiving thread.
11. A road side unit, comprising:
the second transceiver module is used for acquiring basic information corresponding to the vehicle within a set range of the position of the road side unit, wherein the basic information at least comprises: identification information of the vehicle, position information of the vehicle, and a yaw angle of the vehicle;
the acquisition module is used for acquiring the position information of the target object detected by the environment sensing device;
a second processing module, configured to determine, based on the basic information and the position information of the target object, that the target object is located within a blind area of a corresponding vehicle, determine a target vehicle for detecting the blind area, and generate alarm information based on the identification information of the target vehicle;
the second transceiver module is also used for sending the alarm information to the corresponding vehicle.
12. A vehicle is characterized by comprising an on-board unit, a transceiver, a detection device and an image acquisition device, wherein the detection device and the image acquisition device are both in communication connection with the on-board unit; the detection device is used for detecting the position information and the yaw angle of the vehicle; the transceiver is used for transmitting basic information corresponding to the current vehicle to other vehicles within a set range; the image acquisition device is used for acquiring monitoring data corresponding to the vehicle; the on-board unit is an on-board unit according to claim 10.
13. A computer storage medium storing an executable program which, when executed by a processor, implements the blind spot detection method according to any one of claims 1 to 9.
CN201910002318.6A 2019-01-02 2019-01-02 Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium Active CN111391863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910002318.6A CN111391863B (en) 2019-01-02 2019-01-02 Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN111391863A CN111391863A (en) 2020-07-10
CN111391863B true CN111391863B (en) 2022-12-16

Family

ID=71418795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910002318.6A Active CN111391863B (en) 2019-01-02 2019-01-02 Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN111391863B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113119962A (en) * 2021-05-17 2021-07-16 腾讯科技(深圳)有限公司 Driving assistance processing method and device, computer readable medium and electronic device
CN115705781A (en) * 2021-08-12 2023-02-17 中兴通讯股份有限公司 Vehicle blind area detection method, vehicle, server and storage medium
CN113895434B (en) * 2021-09-29 2023-01-24 岚图汽车科技有限公司 Roadblock prediction method based on vehicle-to-outside information interactive communication technology
CN114973652A (en) * 2022-04-22 2022-08-30 岚图汽车科技有限公司 Visual blind area reminding method and system, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030139881A1 (en) * 2002-01-24 2003-07-24 Ford Global Technologies, Inc. Method and apparatus for activating a crash countermeasure
WO2006088916A2 (en) * 2005-02-14 2006-08-24 Regents Of The University Of Minnesota Vehicle position system using passive roadway tags
JP5053776B2 (en) * 2007-09-14 2012-10-17 株式会社デンソー Vehicular visibility support system, in-vehicle device, and information distribution device
US8558718B2 (en) * 2010-09-20 2013-10-15 Honda Motor Co., Ltd. Method of controlling a collision warning system using line of sight
US9349291B2 (en) * 2012-11-29 2016-05-24 Nissan North America, Inc. Vehicle intersection monitoring system and method
US9650026B2 (en) * 2015-08-31 2017-05-16 GM Global Technology Operations LLC Method and apparatus for rear cross traffic avoidance
US10497265B2 (en) * 2017-05-18 2019-12-03 Panasonic Intellectual Property Corporation Of America Vehicle system, method of processing vehicle information, recording medium storing a program, traffic system, infrastructure system, and method of processing infrastructure information

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0644498A (en) * 1992-07-23 1994-02-18 Mazda Motor Corp Information transmission device for vehicle
JP2006318093A (en) * 2005-05-11 2006-11-24 Mazda Motor Corp Vehicular moving object detection device
JP2009037462A (en) * 2007-08-02 2009-02-19 Toshiba Corp Traffic information providing system and method
JP2009086788A (en) * 2007-09-28 2009-04-23 Hitachi Ltd Vehicle surrounding monitoring device
JP2009199532A (en) * 2008-02-25 2009-09-03 Denso Corp Intersection operation support system, on-vehicle equipment, and roadside device
WO2015184962A1 (en) * 2014-06-06 2015-12-10 电信科学技术研究院 Method and device for sending road safety message
JP2016053846A (en) * 2014-09-03 2016-04-14 株式会社デンソーアイティーラボラトリ Automatic driving support system, automatic driving support method and automatic driving device
CN104376735A (en) * 2014-11-21 2015-02-25 中国科学院合肥物质科学研究院 Driving safety early-warning system and method for vehicle at blind zone crossing
JP2017111565A (en) * 2015-12-15 2017-06-22 株式会社デンソー Communication control device
JP2018190221A (en) * 2017-05-09 2018-11-29 株式会社デンソー On-vehicle device, driving support device, and driving support network

Also Published As

Publication number Publication date
CN111391863A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN111391863B (en) Blind area detection method, vehicle-mounted unit, road side unit, vehicle and storage medium
US11443631B2 (en) Enhanced onboard equipment
US8179281B2 (en) Method and apparatus for identifying concealed objects in road traffic
US8903640B2 (en) Communication based vehicle-pedestrian collision warning system
US8180561B2 (en) Vehicle-installation obstacle detection apparatus
CN106816035B (en) Pedestrian-oriented warning method and device
CN112154492A (en) Early warning and collision avoidance
CN111915915A (en) Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN111161008A (en) AR/VR/MR ride sharing assistant
US11113969B2 (en) Data-to-camera (D2C) based filters for improved object detection in images based on vehicle-to-everything communication
CN113012445A (en) Intelligent traffic control system and control method thereof
TW201333896A (en) Remote traffic management system using video radar
CN111508276B (en) High-precision map-based V2X reverse overtaking early warning method, system and medium
CN103699713A (en) Collision detection method for airplane formation and application of method
CN110971650B (en) Collaborative sensing system and method based on V2X system and vehicle
WO2020057406A1 (en) Driving aid method and system
CN110962744A (en) Vehicle blind area detection method and vehicle blind area detection system
WO2016115259A1 (en) Cyclist/pedestrian collision avoidance system
JP2008065482A (en) Driving support system for vehicle
JP2021026554A (en) Travel support method, road photographing image collection method, and road side device
US11335136B2 (en) Method for ascertaining illegal driving behavior by a vehicle
JP2020147107A (en) Advertisement display device, vehicle and advertisement display method
JP2008046761A (en) System, device, and method for processing image of movable object
CN109461308B (en) Information filtering method and image processing server
CN103531035A (en) Navigation method capable of indicating anterior car

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant