CN112216097A - Method and device for detecting blind area of vehicle


Info

Publication number
CN112216097A
CN112216097A
Authority
CN
China
Prior art keywords
vehicle
blind area
traffic road
information
blind
Prior art date
Legal status
Pending
Application number
CN201911024795.9A
Other languages
Chinese (zh)
Inventor
冷继南
沈建惠
常胜
吕跃强
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2020/078329 (published as WO2021004077A1)
Publication of CN112216097A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G 1/096708 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G 1/096725 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control
    • G08G 1/16 Anti-collision systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a method for detecting the blind area of a vehicle, relating to the field of intelligent traffic. The method comprises the following steps: receiving video data shot by a camera arranged on a traffic road, and determining the position information and attitude information of a vehicle on the traffic road at the current moment or a future moment according to the video data, wherein the attitude information represents the traveling direction of the vehicle; acquiring the blind area information of the vehicle, wherein the blind area information is determined by the structural attributes of the vehicle; and determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle and the position information and attitude information of the vehicle on the traffic road. The method can determine the blind area of the vehicle on the traffic road at the current moment or a future moment in time, greatly reducing the probability of blind-area accidents and improving driving safety.

Description

Method and device for detecting blind area of vehicle
Technical Field
The application relates to the field of intelligent traffic, in particular to a method and a device for detecting a blind area of a vehicle.
Background
As society develops, all kinds of vehicles can be seen traveling on traffic roads, and it is particularly important for the driver of a vehicle to observe and grasp the traffic conditions around the vehicle while driving, so as to ensure its safety. However, due to the model and structure of some vehicles, there are areas around the vehicle that the driver cannot see during normal driving, namely vision blind areas. For example, engineering vehicles such as large trucks, excavators and loading and unloading vehicles are so large that the vehicle body blocks the driver's view of part of the environment around the vehicle. Because of these blind areas, the driver may be unaware of dangerous conditions within them (for example, dangerous obstacles, pedestrians or other vehicles in a blind area), which brings great potential safety hazards to vehicles and pedestrians.
At present, to address the safety problem that vehicle blind areas pose to vehicles and pedestrians, a detector is often additionally installed on the vehicle body, so that the detector gives an alarm tone when a dangerous condition exists in the blind area of the vehicle. However, this approach often comes too late for some dangerous situations, especially for a running vehicle, because the surroundings of a moving vehicle change constantly, and the detector cannot learn of a dangerous situation and raise an alarm in time. How to detect the blind area of a vehicle in a more timely manner is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a method and a device for detecting the blind area of a vehicle. The method can detect, in time, the blind area of a vehicle on a traffic road at the current moment or a future moment, greatly reducing the probability of blind-area accidents and improving traffic safety.
In a first aspect, the present application provides a method of detecting a blind area of a vehicle, the method being performed by a detection device. The method comprises the following steps: receiving video data shot by a camera arranged on a traffic road, wherein the video data records the running conditions of targets such as vehicles on the traffic road at the current moment; after receiving the video data, determining the position information and the attitude information of the vehicle on the traffic road at the current moment or the future moment according to the video data, wherein the attitude information represents the traveling direction of the vehicle. Further, acquiring blind area information of the vehicle, wherein the blind area information is determined by the structural attributes of the vehicle, and determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle, the position information and the posture information of the vehicle on the traffic road.
The method processes and analyzes video data shot by a camera arranged on the traffic road to obtain the position information and attitude information of a vehicle, and, combined with the blind area information of the vehicle, obtains in time the blind area of the vehicle on the traffic road at the current moment or a future moment. The determined blind area can be provided promptly to the vehicle, or to other targets on the traffic road (such as pedestrians and other vehicles), for reference, so that they can avoid it or adjust in time, which greatly reduces the probability of blind-area accidents and improves traffic safety. Furthermore, the method can predict the blind area of the vehicle at a future moment, so that the vehicle can judge its driving safety at that future moment in advance, further reducing the probability of blind-area accidents.
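For illustration only (not part of the original application), the core placement step can be sketched in Python; a minimal sketch, assuming road coordinates in metres and a vehicle-frame convention of x forward and y left, with all names being hypothetical:

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Polygon = List[Tuple[float, float]]

@dataclass
class VehicleState:
    position: Tuple[float, float]  # (x, y) on the traffic road, metres
    heading: float                 # attitude: direction of travel, radians

def place_blind_zones(state: VehicleState, zones: List[Polygon]) -> List[Polygon]:
    """Place blind zones, defined in the vehicle frame by the vehicle's
    blind area information, onto the traffic road using the vehicle's
    position information and attitude information."""
    c, s = math.cos(state.heading), math.sin(state.heading)
    return [[(state.position[0] + x * c - y * s,
              state.position[1] + x * s + y * c) for x, y in zone]
            for zone in zones]

# A truck at (20 m, 5 m) heading north, with one assumed front-right zone:
truck = VehicleState(position=(20.0, 5.0), heading=math.pi / 2)
front_right = [(1.0, -1.0), (4.0, -1.0), (4.0, -3.0), (1.0, -3.0)]
print(place_blind_zones(truck, [front_right]))
```

The same function works for the current moment or a future moment; only the vehicle state fed into it changes.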
In a possible implementation manner of the first aspect, the method further includes: determining that the vehicle is in blind-area danger according to the video data and the blind area of the vehicle on the traffic road at the current moment or the future moment, wherein blind-area danger indicates that other targets exist in the blind area of the vehicle on the traffic road; and sending a blind area warning to the vehicle or to the other targets in the blind area.
In this implementation, whether other targets exist in the blind area of the vehicle is determined from the determined blind area; when other targets are present in the blind area, they may be collided with or scraped by the vehicle, so a warning is issued in time. The vehicle and the pedestrians in the dangerous blind area can thus be reminded in a more targeted way, further reducing blind-area accidents.
In one possible implementation manner of the first aspect, the blind zone warning sent to the vehicle at risk or to the other objects in the blind zone includes alarm data, and the alarm data includes one or more of the following information: the position and range on the traffic road of the blind area in which the danger occurs, the position information of the other targets on the traffic road, and the types of the other targets. From the alarm data, the driver can determine a strategy for avoiding the blind-area danger, so that the other targets in the blind area can be avoided more accurately, further reducing the probability of danger.
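As an illustration of what such alarm data might look like, here is a minimal Python sketch; the structure and field names are assumptions mirroring the listed information, not names used by the application:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BlindZoneAlarm:
    # Position and range, on the traffic road, of the blind area in which
    # the danger occurs (polygon vertices in road coordinates, metres).
    zone_polygon: List[Tuple[float, float]]
    # Positions on the traffic road of the other targets in the blind area.
    target_positions: List[Tuple[float, float]] = field(default_factory=list)
    # Types of the other targets, e.g. "pedestrian", "bicycle", "car".
    target_types: List[str] = field(default_factory=list)
```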
In one possible implementation manner of the first aspect, determining the position information and attitude information of the vehicle on the traffic road at the future moment from the video data includes: determining the position information and attitude information of the vehicle on the traffic road at the current moment according to the video data; and predicting the position information and attitude information of the vehicle on the traffic road at the future moment according to those at the current moment. The predicted position information and attitude information at the future moment allow the detection device to determine the position and range of the blind area of the vehicle at that future moment.
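The application does not fix a prediction method; a minimal sketch, assuming a constant-velocity, constant-heading model over a short horizon:

```python
import math

def predict_state(x, y, heading, speed, dt):
    """Predict the position at a future moment from the current position,
    attitude (travel direction, radians) and speed, assuming the vehicle
    keeps its heading and speed for dt seconds."""
    return (x + speed * dt * math.cos(heading),
            y + speed * dt * math.sin(heading),
            heading)  # attitude is unchanged under this simple model

# A vehicle at (10 m, 5 m) heading east at 8 m/s, predicted 0.5 s ahead:
print(predict_state(10.0, 5.0, 0.0, 8.0, 0.5))  # -> (14.0, 5.0, 0.0)
```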
In a possible implementation manner of the first aspect, the method further includes: constructing a visual blind area image according to the blind areas of the vehicles on the traffic road at the current moment or the future moment; and sending the visual blind area image to a vehicle or other equipment.
Sending the determined blind area, in the form of a visualized blind area image, to the vehicle currently running, to equipment of the traffic management platform, or to other vehicles adjacent to the vehicle allows a driver or manager to determine the position and range of the blind area intuitively and quickly, reducing the time the driver and other targets need to react to the blind area and improving driving safety.
In a possible implementation manner of the first aspect, the method further includes: calculating the running speed of the vehicle; and adjusting the blind area of the vehicle on the traffic road at the current moment or the future moment according to the running speed of the vehicle. Since the running speed affects the vehicle's inertia, braking distance and the like, the blind area is adjusted according to the speed, for example by enlarging the range of the front blind area for a vehicle running at high speed. The adjusted blind area is given to the driver for reference and reminds the driver to pay attention, which can further reduce blind-area danger and improve driving safety.
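One plausible form of the adjustment, sketched under the assumption that the front blind zone is stretched forward in proportion to speed; the gain k is an illustrative assumption, not a value given by the application:

```python
def adjust_front_zone(polygon, speed_mps, k=0.3):
    """Enlarge a front blind-zone polygon (vehicle frame: x forward,
    metres) by stretching its forward extent with the running speed."""
    stretch = 1.0 + k * speed_mps
    return [(x * stretch if x > 0 else x, y) for (x, y) in polygon]

# A 2 m-deep front zone becomes 6.8 m deep at 8 m/s:
print(adjust_front_zone([(0, -1), (2, -1), (2, 1), (0, 1)], 8.0))
```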
In a possible implementation manner of the first aspect, the method further includes: sending an adjustment instruction to the vehicle in blind-area danger, wherein the adjustment instruction instructs the vehicle to adjust its driving route so as to avoid the other targets in the blind area. In this method, a new driving route can be planned for the running vehicle according to the determined blind area, or according to the positions of the other targets within it, and the vehicle is instructed to adjust its driving route accordingly, so that it reasonably avoids the other targets in the blind area; this also avoids the problem of a flustered driver failing to react in time, further improving driving safety. For an autonomous vehicle, after receiving the adjustment instruction it can automatically follow the new driving route in the instruction, avoid the blind-area danger, and realize safe automatic driving.
In a possible implementation manner of the first aspect, the method further includes: determining a high-risk blind area among the blind areas.
In a possible implementation manner of the first aspect, the method further includes: determining a blind area danger coefficient according to the blind area information of the vehicle, wherein the blind area danger coefficient indicates the danger degree of each blind area of the vehicle; and determining the high-risk blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area danger coefficient, the blind area information of the vehicle, and the position information and the posture information of the vehicle on the traffic road. According to the blind area danger coefficient, the high-risk blind area is determined, so that a driver can pay attention to the dangerous situation in the high-risk blind area more pertinently.
In a possible implementation manner of the first aspect, after the high-risk blind area is determined according to the blind area danger coefficient, whether other targets exist in the high-risk blind area may be determined according to the video data and the determined high-risk blind area of the vehicle on the traffic road at the current moment or the future moment. Determining the high-risk blind area first, and only then checking for dangerous conditions within it, saves computing resources and avoids unnecessary danger reminders that would disturb the driver's normal driving. For example, for some low-risk blind areas in which other objects are present but which pose no danger to either the vehicle or the objects, it may not be necessary to alert the driver or the objects in the blind area.
In one possible implementation manner of the first aspect, the video data includes a plurality of video streams captured by a plurality of cameras disposed at different positions on the traffic road, and determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the video data comprises: determining the position information of the vehicle in each of the plurality of video streams at the current moment or the future moment; and determining the position information of the vehicle on the traffic road at the current moment or the future moment according to its position information in the plurality of video streams. Determining the position of the vehicle by combining a plurality of video streams locates the vehicle on the traffic road more accurately, and therefore determines the blind area of the vehicle on the traffic road more accurately. Moreover, video data shot by cameras from multiple viewing angles enlarges the range of the traffic road being monitored, so that blind areas can be determined by the same method for vehicles within the view of any of the cameras.
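The application does not specify the combination rule; a minimal sketch, assuming each stream's detection of the same vehicle has already been mapped into common road coordinates and the estimates are simply averaged:

```python
def fuse_positions(per_camera_positions):
    """per_camera_positions: list of (x, y) road-coordinate estimates of
    the same vehicle from different video streams; returns their mean."""
    n = len(per_camera_positions)
    return (sum(p[0] for p in per_camera_positions) / n,
            sum(p[1] for p in per_camera_positions) / n)

print(fuse_positions([(10.2, 4.9), (9.8, 5.1), (10.1, 5.0)]))
```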
In one possible implementation manner of the first aspect, the blind area information of the vehicle includes: the number of blind zones, the position of each blind zone relative to the vehicle, and the shape of the blind zone. According to the blind area information and the position and the posture of the vehicle on the traffic road, the distribution and the range of the blind area of the vehicle on the traffic road can be determined.
In one possible implementation manner of the first aspect, the blind area information of the vehicle includes a risk coefficient of the blind area; determining the blind area of the vehicle on the traffic road according to the blind area information of the vehicle and the position information and the attitude information of the vehicle on the traffic road at the current moment or the future moment, comprising the following steps: determining high-risk blind area information of which the danger coefficient of the blind area is greater than a preset danger threshold in the blind area information; and determining the high-risk blind area of the vehicle on the traffic road at the current moment or the future moment according to the high-risk blind area information, the position information and the posture information of the vehicle on the traffic road at the current moment or the future moment.
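A sketch of blind area information carrying a danger coefficient, and of the threshold test described above; the structure, field names and threshold value are assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BlindZoneInfo:
    position: str                       # position relative to the vehicle
    polygon: List[Tuple[float, float]]  # shape, in the vehicle frame (metres)
    risk: float                         # danger coefficient of this blind zone

def high_risk_zones(zones: List[BlindZoneInfo], threshold: float):
    """Keep only the blind area information whose danger coefficient is
    greater than the preset danger threshold."""
    return [z for z in zones if z.risk > threshold]

truck = [BlindZoneInfo("front-right", [(0, 0), (3, 0), (3, 2), (0, 2)], 0.9),
         BlindZoneInfo("rear", [(-6, -1), (-2, -1), (-2, 1), (-6, 1)], 0.4)]
print([z.position for z in high_risk_zones(truck, 0.6)])  # ['front-right']
```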
In one possible implementation manner of the first aspect, the blind area information of the vehicle includes a risk coefficient of the blind area; before determining that the vehicle is at risk of a blind spot, the method further comprises: correcting the danger coefficient of the blind area according to the blind area of the vehicle on the traffic road; determining a high-risk blind area of the vehicle on the traffic road according to the relationship between the risk coefficient of the corrected blind area and a preset risk threshold; determining that the vehicle has a blind area danger according to the video data and the determined blind area of the vehicle on the traffic road at the current moment or the future moment, comprising the following steps: and determining that the vehicle has a blind area danger in the high-risk blind area on the traffic road according to the position information of other targets in the blind area on the traffic road.
In one possible implementation manner of the first aspect, before determining that the vehicle is in a blind area danger according to the video data and the determined blind area on the traffic road at the current time or the future time, the method further includes: determining the danger coefficient of the blind area of the vehicle according to the blind area of the vehicle on the traffic road; determining a high-risk blind area of a vehicle on a traffic road according to the relation between the danger coefficient of the blind area and a preset danger threshold; determining that the vehicle has a blind area danger according to the video data and the determined blind area of the vehicle on the traffic road at the current moment or the future moment, comprising the following steps: and determining that the vehicle has a blind area danger in the high-risk blind area on the traffic road according to the position information of other targets in the blind area on the traffic road.
Determining the high-risk blind areas first and then detecting danger only in them saves computing resources on the one hand; on the other hand, it avoids warning the vehicle and other targets when other targets exist in some blind areas but pose no danger, which improves warning accuracy and prevents frequent alarms from disturbing drivers and pedestrians.
In a possible implementation manner of the first aspect, the method further includes: determining the blind area information of the vehicle according to the structural attributes of the vehicle, which specifically comprises: querying a blind area information base according to the structural attributes of the vehicle to acquire, from the blind area information base, the blind area information corresponding to those structural attributes.
In one possible implementation of the first aspect, the structural attribute of the vehicle includes the type of the vehicle; querying the blind area information base according to the structural attribute to acquire the corresponding blind area information specifically comprises: inputting the structural attribute into the blind area information base, and acquiring the blind area information corresponding to a vehicle of the same type as the vehicle in the blind area information base.
in one possible implementation of the first aspect, the structural attributes of the vehicle include length and width information of the vehicle, a cab type, and a cab location; inquiring a blind area information base according to the structural attribute of the vehicle, and acquiring the blind area information of the vehicle corresponding to the structural attribute in the blind area information base, wherein the method specifically comprises the following steps: and inputting the structural attributes of the vehicle into a blind area information base, and acquiring the blind area information corresponding to the vehicle with the length and width information, the cab type and the cab position similar to the vehicle in the blind area information base.
In a possible implementation manner of the first aspect, the determining, by using a real-time video stream as the received video data, position information and posture information of a vehicle on a traffic road at a current time or a future time according to the video data specifically includes: determining the position information of the vehicle in the video data at the current moment or the future moment according to the real-time video stream; determining the position information of the vehicle on the traffic road at the current moment or the future moment according to a preset calibration relation and the position information of the vehicle in the video data at the current moment or the future moment; determining the motion trail information of the vehicle on the traffic road according to the position information of the vehicle on the traffic road at the current moment or the future moment; and determining the attitude information of the vehicle at the current moment or the future moment according to the motion trail information of the vehicle, wherein the attitude information indicates the traveling direction of the vehicle.
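Two of these steps lend themselves to short sketches: mapping an image position to road coordinates through the preset calibration relation (here a planar homography, a common choice but an assumption), and deriving the attitude from the motion track:

```python
import math

def pixel_to_road(H, u, v):
    """Map image coordinates (u, v) to road-plane coordinates using a
    3x3 homography H (nested lists), i.e. a preset calibration relation
    between the camera picture and the traffic road."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def heading_from_track(track):
    """Attitude from the last two points of the motion track: the
    direction of travel, as an angle in radians."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return math.atan2(y1 - y0, x1 - x0)

print(heading_from_track([(0.0, 0.0), (1.0, 1.0)]))  # ~0.785 rad
```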
In a second aspect, the present application further provides a detection apparatus, including: the target detection and tracking module is used for receiving video data, and the video data is obtained by shooting through a camera arranged on a traffic road; the target positioning module is used for determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the video data; the attitude determination module is used for determining the attitude information of the vehicle on the traffic road at the current moment or the future moment according to the video data; the blind area determining module is used for acquiring the blind area information of the vehicle, and the blind area information is determined by the structural attribute of the vehicle; and the blind area of the vehicle on the traffic road at the current moment or the future moment is determined according to the blind area information of the vehicle, the position information and the posture information of the vehicle on the traffic road.
In a possible implementation manner of the second aspect, the blind area determination module is further configured to determine that the vehicle has a blind area danger according to the video data and the determined blind area of the vehicle on the traffic road at the current time or at a future time, where the blind area danger indicates that the vehicle has other objects in the blind area on the traffic road; and sending the blind area warning to the vehicle or other targets in the blind area.
In one possible implementation manner of the second aspect, the blind area alarm includes alarm data, and the alarm data includes one or more of the following information: the position and the range of the blind area on the traffic road, the position information of other targets on the traffic road and the type of other targets, wherein the blind area is dangerous to occur.
In a possible implementation form of the second aspect, the object localization module, when configured to determine the position information of the vehicle on the traffic route at a future time from the video data, is specifically configured to: determining the position information of the vehicle on the traffic road at the current moment according to the video data; predicting the position information of the vehicle on the traffic road at the future moment according to the position information of the vehicle on the traffic road at the current moment; when the attitude determination module is used for determining the attitude information of the vehicle on the traffic road at the future time according to the video data, the attitude determination module is specifically used for: determining the attitude information of the vehicle on the traffic road at the current moment according to the video data; and predicting the attitude information of the vehicle on the traffic road at the future time according to the attitude information of the vehicle on the traffic road at the current time.
In a possible implementation manner of the second aspect, the blind area determination module is further configured to construct a visualized blind area image according to a blind area of the vehicle on the traffic road at the current time or at a future time; and sending the visual blind area image to a vehicle or other equipment.
In a possible implementation manner of the second aspect, the blind area determination module is further configured to calculate a driving speed of the vehicle according to the position information of the vehicle on the traffic road at the current time or the future time; and adjusting the blind area of the vehicle on the traffic road at the current moment or the future moment according to the running speed of the vehicle.
In a possible implementation manner of the second aspect, the blind area determination module is further configured to send an adjustment instruction to a vehicle in which the blind area is dangerous, wherein the adjustment instruction instructs the vehicle to adjust the driving route so as to avoid other targets existing in the blind area.
In a possible implementation manner of the second aspect, the blind area determination module is further configured to determine a high-risk blind area among the blind areas.
In a possible implementation manner of the second aspect, the blind area determining module is specifically configured to determine a blind area risk coefficient according to blind area information of the vehicle, where the blind area risk coefficient indicates a risk level of each blind area of the vehicle; and determining the high-risk blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area danger coefficient, the blind area information of the vehicle, and the position information and the posture information of the vehicle on the traffic road.
In a possible implementation manner of the second aspect, the blind area determination module is further configured to determine whether other targets exist in the high-risk blind area according to the video data and the determined high-risk blind area, where the vehicle is on the traffic road at the current time or at a future time.
In one possible implementation manner of the second aspect, the video data includes a plurality of video streams captured by a plurality of cameras disposed at different positions on the traffic road; the target detection and tracking module is also used for determining the position information of the vehicle in the video streams at the current moment or the future moment according to the video streams; and the target positioning module is also used for determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the position information of the vehicle in the plurality of video streams at the current moment or the future moment.
In one possible implementation manner of the second aspect, the blind area information of the vehicle includes: the number of blind zones, the position of each blind zone relative to the vehicle, and the shape of the blind zone.
In a third aspect, the present application further provides an on-board device disposed on a vehicle, where the on-board device is configured to execute the method provided in the first aspect or any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present application further provides a vehicle, which includes a storage unit and a processing unit, where the storage unit of the vehicle is configured to store a set of computer instructions and a data set, the processing unit executes the computer instructions stored in the storage unit, and the processing unit reads the data set in the storage unit, so that the vehicle executes the method provided in the first aspect or any one of the possible implementation manners of the first aspect.
In a fifth aspect, the present application provides a system comprising at least one memory for storing a set of computer instructions and at least one processor; when the set of computer instructions is executed by at least one processor, the system performs the method provided by the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, the present application further provides a detection system for detecting a blind area of a vehicle, the system comprising:
the vehicle dynamic monitoring system is used for receiving the video data and determining the position information and the attitude information of the vehicle on the traffic road at the current moment or the future moment according to the video data, wherein the video data is obtained by shooting through a camera arranged on the traffic road;
the vehicle blind area detection system is used for acquiring blind area information determined by vehicle structure attributes, and determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle, the position information and the posture information of the vehicle on the traffic road.
In a seventh aspect, the present application provides a non-transitory readable storage medium storing computer program code which, when executed by a computing device, performs the method provided in the foregoing first aspect or any one of its possible implementations. The storage medium includes, but is not limited to, volatile memory such as a random access memory, and non-volatile memory such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD).
In an eighth aspect, the present application provides a computer program product comprising computer program code which, when executed by a computing device, performs the method provided in the foregoing first aspect or any possible implementation manner of the first aspect. The computer program product may be a software installation package, which may be downloaded and executed on a computing device in case it is desired to use the method as provided in the first aspect or any possible implementation manner of the first aspect.
Drawings
In order to more clearly illustrate the technical method of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic view of the blind areas of different trucks provided in an embodiment of the present application;
FIG. 2A is a schematic deployment diagram of a detection apparatus according to an embodiment of the present application;
FIG. 2B is a schematic deployment diagram of another detection apparatus according to an embodiment of the present application;
FIG. 2C is a schematic deployment diagram of another detection apparatus according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a computing device 100 according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a training device 200 and a detection device 300 according to an embodiment of the present application;
FIG. 5 illustrates a method for detecting a blind area of a vehicle according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a vehicle at risk of blind-area danger according to an embodiment of the present application;
FIG. 7 illustrates a method for target detection and target tracking according to an embodiment of the present application;
FIG. 8 is a block diagram illustrating a specific method for determining the blind area of a vehicle according to an embodiment of the present application;
FIG. 9 is a schematic illustration of a determined vehicle blind area provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a system provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings attached to the present application.
Various vehicles run on traffic roads, for example: engineering vehicles (such as loading and unloading vehicles, cement trucks, oil tank trucks and freight trucks), cars, bicycles, buses and the like. Although each vehicle is provided with tools that assist the driver in observing the driving environment around the vehicle, such as left and right side mirrors and a rear-view mirror, there are still some areas around the vehicle that the driver cannot see during normal driving, namely blind areas. In the present application, a blind area refers to a peripheral area of the vehicle that the driver cannot observe when driving normally; the position of the blind area moves along with the travel of the vehicle. Each vehicle has its own blind area information, i.e., the parameters of the blind areas the vehicle may have due to factors such as the structure of the vehicle, the type of the vehicle and the position of the driver's seat; the blind area information includes the number of blind areas, the shape of each blind area, the position of each blind area relative to the vehicle, and the like. Fig. 1 shows schematic diagrams of the blind areas of two trucks; different vehicles have different blind area information, and the position and distribution of the driver's blind areas while the vehicle is running can be determined from the blind area information together with the position information of the vehicle. For an engineering vehicle, the vehicle body is usually large, so the driver's view is often blocked by the body during driving, and a large blind area exists. When dynamic dangers such as other moving vehicles, pedestrians or objects exist in the blind area, or static dangers (such as construction hazards or road defects) exist there, the blind area brings great danger both to the vehicle and its driver and to the people and objects in the blind area. In this application, dynamic dangers and static dangers in a blind area are collectively called blind-area danger. Detecting the blind area of a vehicle, and further detecting blind-area danger, is of great significance to the safety of vehicles and pedestrians on traffic roads.
The present application provides a method of detecting a blind area of a vehicle, the method being performed by a detection device. The function of the detection device for detecting the blind area can be realized by a software system, or can be realized by hardware equipment, or can be realized by the combination of the software system and the hardware equipment.
The detection device is flexible to deploy, and can be deployed in an edge environment. For example, the detection device may be an edge computing device, or a software device running on one or more edge computing devices, in an edge environment. An edge environment refers to a data center or a collection of edge computing devices close to the traffic road to be monitored; it includes one or more edge computing devices, which may be roadside devices with computing capability disposed at the side of the traffic road. For example, as shown in fig. 2A, the detection device is deployed on an edge computing device on the roadside near an intersection. The intersection is provided with a network-enabled camera; the camera shoots video data recording the vehicles passing through the intersection and sends the video data to the detection device through the network. The detection device performs blind area detection on the vehicles running at the intersection according to the video data, and further performs blind-area danger detection; when it detects that a vehicle running at the intersection is in a blind-area danger condition, it sends a blind area alarm through a network (such as a wireless network or an Internet of Vehicles network) to the on-board system in the vehicle, so that the on-board system prompts the driver about the blind-area danger. Alternatively, the detection device sends the blind area alarm to roadside alarm equipment, so that the roadside alarm equipment emits alarm signals such as sound or light. Alternatively, the detection device sends the detected alarm data and statistical data to equipment or apparatuses in the traffic management system, so that a commander can correspondingly instruct or enforce the law on vehicles running on the traffic road according to the obtained data. The alarm data includes one or more of the following information: the position and range on the traffic road of the blind area in which the danger occurs, the position information of the other objects in the blind area on the traffic road, and the types of the other objects in the blind area.
The detection device can also be deployed in a cloud environment, which is an entity that uses basic resources to provide cloud services to users in a cloud computing mode. A cloud environment includes a cloud data center and a cloud service platform; the cloud data center includes a large number of infrastructure resources (including computing resources, storage resources and network resources) owned by a cloud service provider, and the computing resources may be a large number of computing devices (e.g., servers). The detection device may be a server in the cloud data center for detecting the blind area of a running vehicle; it may also be a virtual machine created in the cloud data center for blind area detection; it may also be a software device deployed on a server or a virtual machine in the cloud data center, and such a software device may be deployed in a distributed manner on a plurality of servers, on a plurality of virtual machines, or across virtual machines and servers. For example, as shown in fig. 2B, the detection device is deployed in a cloud environment, and a network-enabled camera disposed at the side of the traffic road sends captured video data to the detection device in the cloud environment. The detection device performs blind area detection and blind-area danger detection on the vehicles recorded in the video according to the video data, and when it detects that a vehicle is in a blind-area danger condition, it sends a blind area alarm to the on-board system in the vehicle, so that the on-board system prompts the driver about the blind-area danger. Alternatively, the detection device sends the blind area alarm to roadside alarm equipment, so that the roadside alarm equipment emits alarm signals such as sound or light. Alternatively, the detection device sends the detected alarm data to equipment or apparatuses in the traffic management system, so that a commander can correspondingly instruct or enforce the law on the vehicles running on the traffic road according to the obtained alarm data.
The detection device can be deployed in a cloud data center by a cloud service provider, the cloud service provider abstracts functions provided by the detection device into a cloud service, and the cloud service platform is used for users to consult and purchase the cloud service. After purchasing the cloud service, the user can use the service for detecting the blind area danger of the vehicle, which is provided by the detection device of the cloud data center. The detection device can also be deployed in a computing resource (such as a virtual machine) of a cloud data center rented by a tenant, the tenant purchases a computing resource cloud service provided by a cloud service provider through a cloud service platform, and the detection device is operated in the purchased computing resource, so that the detection device performs blind area detection on the vehicle.
When the detection device is a software device, it may be logically divided into a plurality of parts, each part having a different function (for example: the detection device comprises a target detection and tracking module, a target positioning module, an attitude determination module and a blind area determination module). The parts of the detection device may be deployed in different environments or on different equipment, and the parts deployed in different environments or equipment cooperate with each other to realize the function of detecting the blind-area danger of a vehicle. For example: as shown in fig. 2C, the target detection and tracking module of the detection device is deployed on the edge computing device, and the target positioning module, the attitude determination module and the blind area determination module are deployed in the cloud data center (for example, on a server or virtual machine of the cloud data center). A camera disposed at a traffic intersection sends the captured video data to the target detection and tracking module deployed on the edge computing device; the target detection and tracking module detects and tracks targets such as vehicles and pedestrians recorded in the video data, and sends the obtained position information of the targets in the video at the current moment, together with the motion track information formed by the targets in the video at the current and historical moments, to the cloud data center; the target positioning module, attitude determination module and blind area determination module deployed on the cloud data center further analyze and process the positions and motion tracks of the targets in the video to obtain the blind areas of the vehicles (and blind-area danger judgment results). It should be understood that the present application does not limit how the detection device is partitioned, nor in which environment it is specifically deployed. In actual application, the deployment can be adapted to the computing power of each computing device or to specific application requirements. Notably, in an embodiment, the camera may be an intelligent camera with certain computing capability, and the detection device may then be deployed in three parts: one part in the camera, one part on the edge computing device, and one part on the cloud computing device.
When the detection device is a software device, the detection device can be separately deployed on one computing device in any environment (cloud environment, edge environment, terminal computing device, etc.); when the detection means is a hardware device, the detection means may be a computing device in any environment. Fig. 3 provides a schematic diagram of a structure of a computing device 100, and the computing device 100 shown in fig. 3 includes a memory 101, a processor 102, a communication interface 103, and a bus 104. The memory 101, the processor 102 and the communication interface 103 are connected to each other through a bus 104.
The memory 101 may be a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 101 may store computer instructions; when the computer instructions stored in the memory 101 are executed by the processor 102, the processor 102 and the communication interface 103 are used to perform the method of detecting the blind area of a vehicle. The memory may also store data, for example: a part of the memory 101 is used to store the data required for detecting the blind area of the vehicle, and to store intermediate data or result data produced during execution of the program.
The processor 102 may be a general-purpose Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or any combination thereof. The processor 102 may include one or more chips, and the processor 102 may include an AI accelerator, such as: a neural Network Processor (NPU).
The communication interface 103 enables communication between the computing device 100 and other devices or communication networks using a transceiver module such as, but not limited to, a transceiver. For example, the data required for detecting the blind-area danger of a vehicle may be acquired through the communication interface 103.
Bus 104 may include a path that transfers information between components of computing device 100 (e.g., memory 101, processor 102, communication interface 103).
When the detection apparatus executes the method for detecting the blind area of the vehicle provided by the embodiment of the present application, an Artificial Intelligence (AI) model needs to be used, the AI model includes multiple types, and the neural network model is one type of the AI model. It should be understood that other AI models can also be used to perform the functions of the neural network model described in the embodiments of the present application, and the present application is not limited thereto.
Neural network models are a class of mathematical computational models that mimic the structure and function of biological neural networks (the central nervous system of animals). A neural network model may include a number of neural network layers with different functions, each layer including parameters and calculation formulas. Different layers in the model have different names according to their calculation formulas or functions, for example: a layer that performs convolution calculations is called a convolutional layer and is commonly used to extract features from an input signal (e.g., an image). One neural network model may also be composed of a combination of several existing neural network models. Neural network models of different structures may be used in different scenarios (e.g., classification, recognition) or provide different effects when used in the same scenario. Differences in structure specifically include one or more of the following: different numbers of network layers, different orders of the layers, and different weights, parameters or calculation formulas in each layer. The industry already has many different high-accuracy neural network models for recognition or classification scenarios, some of which can be trained on a specific training set and then perform a task alone or in combination with other neural network models (or other functional modules), and some of which can be used directly to perform a task alone or in combination with other models.
In one embodiment of the present application, performing the method of detecting a blind area of a vehicle requires two different neural network models. One is a neural network model for detecting targets in video data, referred to as the target detection model. It should be understood that the target detection model in the embodiment of the present application may adopt any neural network model already used in the industry for target detection with good performance, such as: the one-stage "you only look once: unified, real-time object detection" (YOLO) model, the single shot multibox detector (SSD) model, the region-based convolutional neural network (RCNN) model, or the Fast-RCNN model, etc.
In the embodiment of the present application, the other neural network model needed to execute the method for detecting the blind area of a vehicle is a model for detecting the attributes of a detected vehicle, called the vehicle attribute detection model. The vehicle attribute detection model can also adopt any of several neural network models existing in the industry, such as: a convolutional neural network (CNN) model, a ResNet model, a DenseNet model, a VGGNet model, and the like. It should be understood that neural network models developed by the industry in the future that can realize target detection and vehicle attribute detection may also be used as the target detection model and the vehicle attribute detection model in the embodiments of the present application, and the present application is not limited thereto.
The target detection model and the vehicle attribute detection model can be trained by a training device before being used for detecting the blind area of the vehicle, the training device respectively adopts different training sets to train the target detection model and the vehicle attribute detection model, the target detection model and the vehicle attribute detection model which are trained by the training device can be deployed in a target detection and tracking module in the detection device, and the detection device is used for detecting the danger of the blind area of the vehicle.
Fig. 4 provides a schematic structural diagram of the training device 200 and the detection device 300. The structure and function of the training device 200 and the detecting device 300 are described below with reference to fig. 4, and it should be understood that the present application only provides an exemplary division of the structure and function modules of the training device 200 and the detecting device 300, and the present application does not limit the specific division.
The training device 200 is configured to train the target detection model 203 and the vehicle attribute detection model 204, and two training sets are required for training the target detection model 203 and the vehicle attribute detection model 204, which are referred to as a target detection training set and a vehicle attribute detection training set, respectively. The obtained target detection training set and vehicle attribute detection training set are stored in a database. The acquisition device can acquire a plurality of training videos or training images, and the acquired training videos or training images are processed and labeled by a worker or the acquisition device to form a training set. When the acquisition device acquires a plurality of training videos, the acquisition device takes video frames in the training videos as training images, and then processes and marks the training images to construct a training set. When the training device 200 starts to train the target detection model 203, the initialization module 201 initializes the parameters of each layer in the target detection model 203 (i.e., each parameter is given an initial value), and the training module 202 reads the training images in the target detection training set in the database to train the target detection model 203 until the loss function in the target detection model 203 converges and the loss function value is smaller than a specific threshold or all the training images in the target detection training set are used for training, so that the training of the target detection model 203 is completed. Similarly, when the training device 200 starts to train the vehicle attribute detection model 204, the initialization module 201 first initializes the parameters of each layer in the vehicle attribute detection model 204 (i.e., assigns an initial value to each parameter), and then the training module 202 reads the training images in the vehicle attribute detection training set in the database to train the vehicle attribute detection model 204 until the loss function in the vehicle attribute detection model 204 converges and the loss function value is smaller than the specific threshold or all the training images in the vehicle attribute detection training set are used for training, and then the training of the vehicle attribute detection model 204 is completed.
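The procedure just described (initialize the parameters, then iterate over the training set until the loss converges below a specific threshold or the training images are used up) is the standard supervised training loop. A framework-agnostic sketch; model, loss_fn and optimizer are assumed interfaces, not APIs of any particular library:

```python
def train(model, training_set, loss_fn, optimizer, loss_threshold, max_epochs=50):
    """Train until the average loss falls below loss_threshold or the
    training images are exhausted (max_epochs passes over the set)."""
    model.initialize()                 # assumed: give each parameter an initial value
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for image, label in training_set:
            prediction = model(image)
            loss = loss_fn(prediction, label)
            optimizer.step(loss)       # assumed: update the parameters from the loss
            epoch_loss += float(loss)
        if epoch_loss / len(training_set) < loss_threshold:
            break                      # loss has converged below the threshold
    return model
```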
It should be noted that the target detection model 203 and the vehicle property detection model 204 may also be trained by two training devices respectively, and the target detection model 203 and/or the vehicle property detection model 204 may also not need to be trained by the training device 200, for example: the target detection model 203 and/or the vehicle attribute detection model 204 adopt a neural network model which is trained by a third party and has better accuracy for target detection and/or attribute detection.
In one embodiment of the present application, it may be unnecessary for the acquisition device to acquire training images or training videos, and unnecessary to construct the target detection training set and/or the vehicle attribute detection training set, for example when the target detection training set and/or the vehicle attribute detection training set are obtained directly from a third party. In addition, the training images in the target detection training set may have the same content as the training images in the vehicle attribute detection training set but different labels. For example: the acquisition device acquires 10,000 images containing targets such as vehicles, pedestrians and static objects on various traffic roads. When the target detection training set is constructed, the targets in these 10,000 images are labeled with bounding boxes, and the 10,000 labeled images form the target detection training set. When the vehicle attribute detection training set is constructed, the vehicles in the same 10,000 images are labeled with bounding boxes, each bounding box also labeled with the attributes of the vehicle (such as vehicle type and vehicle brand), and the 10,000 images with bounding-box and attribute labels form the vehicle attribute detection training set.
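To make the two labelling schemes concrete, here is one possible record for the same image in each training set; the format is an assumption, not one prescribed by the application:

```python
# Same image, two labellings; bounding boxes as (x, y, w, h) in pixels.
target_detection_label = {
    "image": "road_000123.jpg",
    "boxes": [
        {"bbox": (412, 180, 220, 150), "class": "vehicle"},
        {"bbox": (700, 210, 40, 90), "class": "pedestrian"},
    ],
}
vehicle_attribute_label = {
    "image": "road_000123.jpg",
    "boxes": [
        # only vehicles, each bounding box also labelled with attributes
        {"bbox": (412, 180, 220, 150), "type": "truck", "brand": "BrandX"},
    ],
}
```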
It should be noted that, in an embodiment of the present application, the detection device may also use only one neural network model, which may be referred to as a detection and recognition model, when detecting the blind area of the vehicle. The detection and recognition model is a neural network model that combines all functions of the target detection model 203 and the vehicle attribute detection model 204: it can detect the positions of targets, recognize the vehicles among the detected targets, and detect the attributes of the recognized vehicles. The detection and recognition model is trained in the same way as the target detection model 203 and the vehicle attribute detection model 204, which is not described again here.
The target detection model 203 and the vehicle attribute detection model 204 trained by the training device 200 can be used for target detection and vehicle attribute detection, respectively, on video frames in the video data captured by the camera. In one embodiment of the present application, as shown in fig. 4, the trained target detection model 203 and the trained vehicle attribute detection model 204 are deployed to the detection apparatus 300; within the detection apparatus 300, they are deployed in the target detection and tracking module 301.
As shown in fig. 4, the detection apparatus 300 includes a target detection and tracking module 301, a target positioning module 302, an attitude determination module 303, and a blind area determination module 304.
The target detection and tracking module 301 is configured to receive video data captured by a camera; the video data may be a real-time video stream that records the running condition of targets on the traffic road at the current time. The module detects the targets in the video data to obtain the position information of each target in the video at the current time. Further, the target detection and tracking module 301 is configured to track each target and fit its running track in the video picture over a period of time, according to the position information of the target in the video at the current time and at historical times, to obtain the motion track information of each target in the video.
It should be understood that a target in the present application is an entity located on a traffic road, and targets include dynamic targets and static targets. A dynamic target is an object that moves on the traffic road over time and forms a running track within a period of time, such as a vehicle, pedestrian, or animal. A static target remains stationary on the traffic road for a period of time, for example a vehicle parked at the roadside or a construction area formed by road construction. When performing target tracking, the target detection and tracking module 301 may distinguish dynamic targets from static targets, and may track only the dynamic targets or track both dynamic and static targets.
It should be understood that the target detection and tracking module 301 may receive video data captured by at least one camera and detect and track the targets in the video frames of each piece of video data.
Optionally, the target detection and tracking module 301 may further receive radar data sent by the radar device, and perform target detection and tracking by combining the video data and the radar data.
The target positioning module 302 is configured to be communicatively connected to the target detection and tracking module 301, to receive the motion track information of each target in the video data sent by the target detection and tracking module 301, and to convert the motion track information of each target in the video into motion track information of each target on the traffic road by using a pre-obtained calibration relationship.
It should be understood that the motion track information of a target in the video is a pixel coordinate sequence composed of the pixel coordinates of the target in different video frames; it represents the running condition of the target in the video over a historical period including the current time. The motion track information of a target on the traffic road is a geographic coordinate sequence composed of the geographic coordinates of the target on the traffic road; it represents the running condition of the target on the traffic road over a historical period including the current time. The pixel coordinates of a target are the two-dimensional coordinates of the pixel points at the target's position in a video frame; the geographic coordinates of a target are its coordinates in any coordinate system of the physical world. In this application, the geographic coordinates are three-dimensional coordinates consisting of the longitude, latitude, and altitude corresponding to the target's position on the traffic road. Optionally, the target positioning module 302 may be further configured to predict the position information and motion track information of a target on the traffic road at a future time or over a future period, according to the target's motion track information on the traffic road over a period including the current time and historical times.
The attitude determination module 303 is configured to determine the attitude of a target according to the target's motion track information on the traffic road or the result of target detection. Optionally, the attitude determination module 303 may perform attitude determination only for vehicles.
It should be understood that in the present application, the attitude of the target indicates the traveling direction of the target in the physical world, and for a vehicle, the attitude of the vehicle may be represented by the orientation of the vehicle head or the tangential direction of the motion trajectory of the vehicle on the traffic road, and for a pedestrian, the attitude of the pedestrian may be represented by the tangential direction of the motion trajectory of the pedestrian.
The blind area determination module 304 is configured to determine the structural attributes of a vehicle (e.g., the model of the vehicle, the shape of the vehicle, etc.), to look up the blind area information of the vehicle in the blind area information base according to those structural attributes, and to further determine the blind area of the currently traveling vehicle according to the blind area information (including determining the position of the blind area on the traffic road and the distribution range of the blind area). Optionally, the blind area determination module 304 is further configured to judge whether the vehicle has a blind area danger (i.e., whether other targets exist in the blind area at the current time) and, if so, to send a blind area warning to the endangered vehicle.
Optionally, the blind zone determination module 304 may also be configured to send warning data to a traffic management system or a roadside warning device.
Optionally, the blind area determining module 304 may be further configured to count blind area danger data of the traffic road in a period of time, and send the statistical data to the traffic management system.
With the above modules, the detection device provided in this embodiment of the application can detect, in real time, the blind area situation of vehicles running in a geographic area at the current time or a future time, give timely warnings or early warnings to endangered vehicles and pedestrians, and effectively reduce the traffic dangers caused by vehicle blind areas.
The method for detecting the blind area of the vehicle provided by the embodiment of the application is specifically described below with reference to fig. 5.
S401: and receiving video data, and performing target detection and target tracking according to video frames in the video data to obtain the type information of the target and the motion track information of the target in a video picture.
Specifically, video data captured by a camera disposed at a fixed location on the traffic road is received. The video data in the present application may be a video stream shot by the camera in real time. In S401, the video stream at the current time is obtained in real time, the targets in it are detected, and the motion track information of each target in the video picture over a historical period including the current time is determined from the target's position information at the current time and at historical times in the video stream; the position information at historical times was obtained when target detection was performed at those times and is stored in the detection device or another readable device.
Target detection is performed with a target detection model. Specifically, the images in the received video data are input to the target detection model, which detects the targets present in each image and outputs the position information and type information of each target in the image. Further, the running track of each target in the video picture is tracked, according to the position information and type information of the target in a plurality of images over a continuous historical period together with a target tracking algorithm, to obtain the motion track information of the target in the video picture. It should be understood that the motion track information of a target in the video picture represents the target's motion track within a historical period including the current time, and that the end of the track (i.e., the last pixel coordinate value in the pixel coordinate sequence) is the target's position in the video picture at the current time. The specific steps of S401 in one embodiment are described in detail later.
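For illustration only, the following is a minimal sketch of the detection step just described, using a generic pretrained detector from torchvision as a stand-in for the target detection model 203; the model choice, the 0.5 score threshold, and the function names are assumptions of this sketch and not part of the original text (a recent torchvision with the `weights` API is assumed).

```python
import torch
import torchvision

# Stand-in for the target detection model 203 (assumption: any detector
# with the same input/output contract would do).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_targets(frame_tensor, score_thresh=0.5):
    """frame_tensor: CxHxW float tensor in [0, 1] for one video frame."""
    with torch.no_grad():
        output = model([frame_tensor])[0]
    keep = output["scores"] >= score_thresh
    boxes = output["boxes"][keep]    # position information: pixel rectangles
    labels = output["labels"][keep]  # type information (class ids)
    return boxes, labels
```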
Optionally, S401 may further receive radar data sent by a radar device (e.g., a laser radar or a millimeter-wave radar) disposed on the traffic road; when performing target detection and tracking, S401 may combine the target information contained in the radar data (e.g., the position information of the target, the contour information of the target, etc.) with the information obtained by detecting and tracking the target in the video frames, to obtain the position information and type information of the target in the image and the motion track information of the target in the video picture.
It is to be noted that, in S401, video data captured by a plurality of cameras disposed at different positions on the traffic road may be received, and the targets in each piece of video data may be detected and tracked, so that the method provided by the present application can detect the blind areas of vehicles on the traffic road under a plurality of camera viewing angles in one geographic area.
S402: and converting the motion trail information of the target in the video picture into the motion trail information of the target on the traffic road.
The motion track information of the detected targets in the video picture was obtained in the foregoing step S401. The motion track information of a target in the video data is the pixel coordinate sequence corresponding to that target; the sequence comprises a plurality of pixel coordinates, each representing the position of the target in the image of one video frame in the video picture.
Converting the motion track information of a target in the video picture into motion track information of the target on the traffic road means converting each pixel coordinate in the target's pixel coordinate sequence into a geographic coordinate, each geographic coordinate representing the actual position of the target on the traffic road in the physical world. The conversion of pixel coordinates into geographic coordinates depends on a calibration relationship, which is the mapping between the video picture captured by a camera disposed on the traffic road and the traffic road in the physical world, i.e., the mapping between the pixel coordinates of each pixel point in the video picture and the geographic coordinates of the corresponding point on the traffic road. The geographic coordinates of a target on the traffic road can be calculated from the calibration relationship and the pixel coordinates in the target's motion track information obtained in the foregoing step S401; the process of calculating the geographic coordinates of a target may also be referred to as target positioning.
In one embodiment, the calibration relationship between the video images captured by each camera and the traffic roads in the physical world needs to be calculated in advance, and the method for calculating the calibration relationship may be:
1. The geographic coordinates of some control points on the traffic road that can be captured by the camera are collected in advance. A control point is usually chosen at a sharp point of a background object on the traffic road, so that the pixel position of the control point in a video frame can be found intuitively; for example, the right-angle points of traffic sign lines, the tips of arrows, and green-belt corner points may serve as control points. The geographic coordinates (longitude, latitude, altitude) of the control points can be collected manually or by an unmanned vehicle. The selected control points should be distributed uniformly over the traffic road, and their number should be chosen according to the actual situation.
2. The pixel coordinates of the collected control points in a video frame shot by the camera are acquired. Specifically, a video of the traffic road shot by a camera fixedly disposed on the traffic road is read, and the pixel coordinate value corresponding to each control point is obtained in any video frame shot by that camera. This can be done manually, i.e., the pixel point corresponding to the control point in the video frame is observed by a person and its pixel coordinates are recorded. The pixel coordinates corresponding to the control points can also be obtained by a program, for example using a corner detection method, a short-time Fourier transform edge extraction algorithm, or a sub-pixel coordinate fitting method.
3. A mapping relationship from the video picture under the camera's viewing angle to the traffic road in the physical world is established according to the geographic coordinates and pixel coordinates of the control points. For example, a homography transformation matrix H that transforms pixel coordinates into geographic coordinates may be calculated according to the homography transformation principle, i.e., (m0, n0, h0) = H(x0, y0), where (m0, n0, h0) denotes the geographic coordinates of a control point and (x0, y0) its pixel coordinates. The H matrix corresponding to the video data shot by the camera is calculated from the pixel coordinates (x0, y0) and geographic coordinates (m0, n0, h0) of at least three control points. It should be understood that the H matrices corresponding to the video data shot by cameras disposed at different positions on the traffic road are different.
The calibration matrix obtained by steps 1-3 is the calibration relationship between the video picture shot by the camera and the traffic road. With the calibration matrix H and the motion track information of a target in the video picture obtained in S401 (i.e., the target's pixel coordinate sequence over different video frames), the geographic coordinate corresponding to each pixel coordinate of the target can be obtained. The specific calculation is (m, n, h) = H(x, y), where (m, n, h) is the geographic coordinate of the target to be calculated and (x, y) is a pixel coordinate in the target's motion track information in the video picture. Each pixel coordinate of the target thus corresponds to one geographic coordinate, and after the geographic coordinate corresponding to each pixel coordinate in the target's motion track information has been calculated, the target's geographic coordinate sequence is obtained. It should be understood that each geographic coordinate occupies the same position in the geographic coordinate sequence as its corresponding pixel coordinate occupies in the pixel coordinate sequence.
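For illustration only, a minimal sketch of steps 1-3 and the coordinate conversion follows, assuming the road surface is locally planar so that altitude can be handled separately and a 3x3 homography suffices; note that cv2.findHomography requires at least four point pairs, so four invented control points are used here, and all coordinate values are placeholders.

```python
import cv2
import numpy as np

# Steps 1-2: pixel coordinates (x0, y0) and planar geographic coordinates
# (m0, n0) of the same control points (placeholder values).
pixel_pts = np.array([[120, 680], [1180, 700], [640, 260], [200, 300]], np.float32)
geo_pts = np.array([[116.3901, 39.9061], [116.3907, 39.9061],
                    [116.3904, 39.9070], [116.3901, 39.9068]], np.float32)

# Step 3: calibration matrix H mapping pixel coordinates to geographic ones.
H, _ = cv2.findHomography(pixel_pts, geo_pts)

def pixel_to_geo(x, y):
    """Apply (m, n) = H(x, y) to one pixel coordinate of a target's track."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Converting a pixel coordinate sequence yields the geographic coordinate
# sequence, i.e. the target's motion track on the traffic road.
geo_track = [pixel_to_geo(x, y) for (x, y) in [(640, 500), (660, 480)]]
```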
It should be understood that the motion trajectory information of the object on the traffic road represents the motion trajectory of the object on the traffic road within a period of time including the current time, and the end (i.e., the last geographical coordinate value in the geographical coordinate sequence) in the motion trajectory information is the position of the object on the traffic road at the current time.
It should be understood that the present step S402 converts the motion trajectory information of each object in the video picture obtained in the foregoing step S401, and obtains the motion trajectory information of each object on the traffic road.
Optionally, after the motion track information of a target on the traffic road is obtained, curve fitting may be performed on the target's motion track. Specifically, a suitable function is selected according to the distribution of the coordinate points in the geographic coordinate sequence of the motion track information and the time information corresponding to each coordinate point; the parameters of the function are calculated using the coordinate point corresponding to each time in the geographic coordinate sequence; and finally a fitting function is obtained, which is a function between time information and geographic coordinate information. The geographic position of the target at a future time can then be predicted from the fitting function, that is, the position information of the target on the traffic road at the future time is obtained.
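A minimal sketch of this optional fitting step is given below, fitting each geographic coordinate component against the timestamps with a low-order polynomial; the quadratic degree is an assumption for illustration, since the text only requires a suitable function.

```python
import numpy as np

def fit_and_predict(timestamps, lons, lats, t_future, degree=2):
    """Fit lon(t) and lat(t) and evaluate them at a future time t_future."""
    lon_fit = np.polyfit(timestamps, lons, degree)  # fitting function params
    lat_fit = np.polyfit(timestamps, lats, degree)
    return np.polyval(lon_fit, t_future), np.polyval(lat_fit, t_future)
```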
It should be noted that, in another embodiment, in order to comprehensively perform blind area detection and blind area danger judgment for vehicles on a traffic road, the foregoing step S401 may separately acquire video data captured by a plurality of cameras disposed on the traffic road (for example, for a four-way traffic intersection, video data captured by cameras disposed in the four directions may be acquired separately), perform the target detection and target tracking described in S401 on each piece of video data, and obtain the position information, type information, and motion track information in the image of the targets in each piece of video data. The method of S402 is then executed to convert the motion track information of each target in each video picture into motion track information on the traffic road (the calibration relationship between the video picture captured by each camera and the traffic road needs to be calculated in advance). Targets whose motion tracks on the traffic road coincide (or nearly coincide) are determined to be the same target on the traffic road captured by different cameras, and the several motion tracks corresponding to the same target are fused, for example by averaging the corresponding geographic coordinates in the several geographic coordinate sequences; the averages form a new geographic coordinate sequence, which is determined as the motion track information of that target on the traffic road, as sketched below. S403-S405 are then executed for each target, completing the blind area determination and blind area danger judgment of the vehicle.
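A minimal sketch of the fusion step, assuming the tracks from different cameras have already been associated to the same target and time-aligned to equal length (both assumptions made for brevity):

```python
import numpy as np

def fuse_tracks(tracks):
    """tracks: list of (T, 2) arrays of (lon, lat) for one target seen by
    several cameras; returns the averaged geographic coordinate sequence."""
    return np.mean(np.stack(tracks, axis=0), axis=0)
```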
Detecting blind areas and judging blind area dangers from the video data shot by a plurality of cameras avoids the limited field of view of a single camera on the traffic road, and thereby avoids the situation in which targets occluded in the video picture of a single camera are missed by blind area detection. Furthermore, the complete motion track of a target on the traffic road over a period of time can be obtained from the video data shot by the plurality of cameras, so that the detection device can judge the attitude of a vehicle from the complete motion track in the subsequent steps.
S403: and determining the posture of the target according to the motion track information of the target on the traffic road or the result of target detection.
In the present application, the attitude of a target is its advancing direction on the traffic road. Various methods can be used to determine the attitude, and different methods may be used for different targets. For a vehicle, one method of determining the attitude is to perform track fitting on the motion track information (i.e., the geographic coordinate sequence) of the vehicle on the traffic road obtained in the foregoing S402 and take the tangential direction of the fitted track as the vehicle's attitude; here the tangential direction refers to the tangent at the tail end of the track (since the fitted track is chronological, the point on the track corresponding to the current time may be called the tail end). Other targets, such as pedestrians, can adopt the same method to determine the attitude and obtain the attitude information of the target.
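A minimal sketch of this tangent-based method, approximating the tangent at the tail end of the track by the direction between its last two points; the planar east/north coordinate convention is an assumption of the sketch:

```python
import math

def heading_from_track(geo_seq):
    """geo_seq: list of (east, north) road coordinates, oldest first."""
    (e0, n0), (e1, n1) = geo_seq[-2], geo_seq[-1]
    # Heading in degrees counter-clockwise from due east; 90 is due north.
    return math.degrees(math.atan2(n1 - n0, e1 - e0))
```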
For a vehicle, another method can also be adopted to determine the attitude: a three-dimensional detection method detects the three-dimensional contour of the vehicle in the video, the contour is converted into contour information of the vehicle on the traffic road using the method of the foregoing S402, and the vehicle's attitude is determined from the contour information and the direction of the entrance and exit lanes on the traffic road. For example: at an intersection of a traffic road, if the long side of the vehicle in its contour information runs in the northwest-southeast direction and the rule of the traffic direction of the exit lane at the vehicle's position is from southeast to northwest, the attitude of the vehicle is determined to be toward the northwest according to the direction of the vehicle's long side and the direction of the exit lane. It should be noted that information such as the direction of the entrance and exit lanes on the traffic road may be obtained by the detection device from other devices in advance.
It is noted that, for a vehicle, the two methods described above may be combined together to determine the attitude of the vehicle.
It should be noted that if the fitting function of the motion track of a target such as a vehicle, and hence the position information of the target on the traffic road at a future time, was obtained in S402, the motion track curve of the vehicle at the future time may be derived from the fitting function, and the tangential direction of that curve may be determined as the attitude of the vehicle at the future time, thereby obtaining the attitude information of the vehicle at the future time.
S404: a blind spot of the vehicle is determined.
The type of each target on the traffic road, the position information of each target in the image and on the traffic road, and the attitude information of each target were obtained in the foregoing steps S401-S403. From one or more of these pieces of information, blind area estimation of the vehicle can be performed to obtain the blind area position information and blind area range information of the vehicle at a certain position at the current time or a future time.
The estimation of the blind area of the vehicle mainly comprises the following steps: detecting vehicle attributes of the vehicle, and determining the structural attributes of the vehicle; searching a blind area information base according to the structural attribute of the vehicle, and determining the blind area information of the vehicle; determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle, the position information and the posture information of the vehicle on the traffic road at the current moment or the future moment, and obtaining the blind area distribution information and the blind area position information of the vehicle.
It should be understood that the blind area information base is a database which is constructed in advance and stores the structural attributes of various vehicles and the blind area information corresponding to the vehicle for each structural attribute. The blind area information base can be constructed by manually collecting data in advance, and can also be purchased from a third party.
It should also be understood that the blind spot information base may be a database deployed in the detection device, or may be a database external to the detection device and in data communication with the detection device. In the embodiment of the present application, a blind area information base is taken as an example of a database in the detection device for description.
The specific steps of the method of estimating the blind area of the vehicle are described in detail in S4041-S4044 hereinafter.
Optionally, after the blind area of the vehicle on the traffic road is determined, a visualized blind area image may be constructed by combining the structural attributes of the vehicle, the position information of the vehicle on the traffic road, and the position and distribution of the vehicle's blind area on the traffic road. For example: a real-scene map is obtained; the model of the vehicle is determined according to its structural attributes; the vehicle model is mapped to the corresponding position of the real-scene map according to the vehicle's position information on the traffic road; and the blind area of the vehicle is mapped to the corresponding position of the real-scene map according to the position and distribution of the blind area on the traffic road. The obtained visualized blind area image may be a graphical user interface (GUI), which may be sent to other display devices, for example to the on-board display device of the corresponding vehicle or to a display device in the traffic management system. Through the visualized blind area image at the current time or a future time, whether the vehicle is in a dangerous state can be determined intuitively and rapidly, the driving direction can be adjusted in time, and blind area danger can be avoided.
S405: and judging the blind area danger of the vehicle, and sending a blind area alarm to the vehicle or other vehicles or people in the blind area danger when the blind area danger occurs.
After the blind area of the vehicle at the current time or a future time is determined in the foregoing step S404, whether the positions of other targets at that time fall within the blind area of the vehicle can be judged according to the positions of the other targets on the traffic road or their motion track information obtained in the foregoing steps S401-S402; alternatively, whether other targets exist within the blind area can be checked specifically according to the position and range of the blind area at the current or future time. For example, as shown in fig. 6, if a target is in the blind area of the vehicle at the current time, the vehicle is considered to have a blind area danger, and a blind area warning is sent to the on-board system of the vehicle through a network (e.g., a wireless network or an Internet of Vehicles network). The blind area warning comprises warning data, which may include one or more of the following: the position of the blind area where the danger occurs, the position of the target in the blind area, the type of the target in the blind area, and so on, so that the on-board system in the vehicle can alert the driver that a blind area danger exists.
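A minimal sketch of this danger test follows, representing each blind area as a polygon of road coordinates and using a standard ray-casting point-in-polygon test; the target and zone data structures are assumptions of the sketch:

```python
def point_in_polygon(pt, polygon):
    """Even-odd ray-casting test; polygon is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def blind_area_danger(targets, blind_zones):
    """Return the targets whose position lies inside any blind-zone polygon."""
    return [t for t in targets
            if any(point_in_polygon(t["position"], z) for z in blind_zones)]
```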
Optionally, when detecting that there are other targets in the blind area of the vehicle, the detection device may send an alarm to the roadside alarm device, so that the roadside alarm device sends out an alarm signal such as a sound or a light.
Optionally, the detection device may further send the detected warning data to a device or apparatus in the traffic management system, so that the commander may instruct or enforce law on the driving vehicle on the traffic road according to the obtained warning data.
Optionally, the detection device may further record blind area danger data of each vehicle at historical times; the blind area danger data may include the model of the vehicle in which the blind area danger occurred, the time of occurrence, the position of the blind area in which the danger occurred, the type of the target in the blind area, and the like. The detection device may further aggregate the blind area danger data of vehicles at the current time with the blind area danger data at historical times to obtain statistical data, and send the statistical data to the traffic management platform, so that the traffic management platform can perform risk assessment of the traffic area, carry out adaptive planning and management, or deploy corresponding monitoring strategies according to the statistical data.
Optionally, according to the positions and types of the other targets detected in the blind area in S405 and the blind area of the vehicle on the traffic road at the current or future time, the models corresponding to those targets may further be projected to the corresponding positions in the constructed visualized blind area image. The resulting visualized blind area image can then visually reflect that the vehicle is at risk from a blind area, and can indicate that the risk consists in a target of a certain type being present at a certain position in a certain blind area of the vehicle at the current or future time. After obtaining the current visualized blind area image, the driver can therefore quickly understand the current dangerous situation and make an evasive strategy in time.
Optionally, after the detection device determines the blind area danger of the vehicle in S405, the route of the vehicle may be re-planned according to the obtained current position information of the other targets in the blind area, their position information at a future time, and the current driving route of the vehicle, so as to avoid a traffic accident caused by a collision between the vehicle and a target in the blind area. After performing the route planning, the detection device may generate an adjustment instruction containing the newly planned driving route information and send it to the vehicle, so that the vehicle adjusts its driving route in time after receiving the instruction. For example: after an autonomous vehicle receives the adjustment instruction, it continues to drive according to the new driving route in the instruction, thereby avoiding the blind area danger. Alternatively, the detection device sends the adjustment instruction to the traffic management platform, which directs the vehicle. Optionally, the detection device may also perform route planning for the other targets in the blind area and generate an adjustment instruction containing the new route information, sending it to those targets (e.g., an autonomous vehicle in the blind area) so that they can adjust their future routes according to the instruction; in this way, the blind area danger can likewise be eliminated.
Optionally, when the blind area danger judgment is performed on the blind area of the vehicle, if the obtained blind area information of the vehicle includes the danger coefficients of the blind areas, blind area blocks with higher danger coefficients may be selected as high-risk blind areas according to those coefficients, and the blind area danger judgment may be performed only on the high-risk blind areas using their blind area information. For example: blind area blocks whose danger coefficient is greater than a preset danger threshold are determined to be high-risk blind areas, and the blind area danger judgment checks only whether other targets exist in these high-risk blind areas.
Optionally, when the blind area danger judgment is performed on the blind area of the vehicle, if the obtained blind area information of the vehicle includes the danger coefficients of the blind areas, the danger coefficients may first be corrected according to the position and range of the vehicle's blind areas on the traffic road, so that the corrected coefficients more accurately reflect the degree of danger of each blind area at the vehicle's position on the traffic road. High-risk blind areas are then selected according to the relationship between the corrected danger coefficients and the preset danger threshold, for example: blind area blocks whose corrected danger coefficient is greater than the preset danger threshold are determined to be high-risk blind areas of the vehicle on the traffic road, and the blind area danger judgment is then performed on them.
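A minimal sketch of this high-risk screening, under the assumption that each blind area block is a dict with a risk_coefficient field; the 0.7 threshold and the field name are illustrative assumptions:

```python
DANGER_THRESHOLD = 0.7  # preset danger threshold (assumed value)

def high_risk_blind_areas(blind_area_info):
    """Keep only the blind area blocks whose (possibly corrected) danger
    coefficient exceeds the preset danger threshold."""
    return [block for block in blind_area_info
            if block["risk_coefficient"] > DANGER_THRESHOLD]
```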
It should be noted that, when the blind area danger judgment is performed, if only the position and range of the high-risk blind areas of the vehicle on the traffic road were obtained in S404, it is only necessary to judge whether blind area danger exists in those high-risk blind areas.
Judging blind area danger only for high-risk blind areas with higher danger coefficients saves the detection device time, because the probability that other targets are present in blind areas with lower danger coefficients is low. It also makes the blind area warning data sent by the detection device more accurate: even if other targets are present in a blind area of the vehicle with a lower danger coefficient, the possibility that the vehicle collides with or scrapes those targets while driving is extremely low, so the presence of other targets in such blind areas need not be regarded as a blind area danger, and no warning data need be sent to the vehicle or the traffic management system.
Through the foregoing steps S401-S405, blind area detection and blind area danger judgment can be performed for vehicles running on the traffic road, so that drivers can learn of blind area dangers in time and avoid danger in time. Through these steps, the traffic management system can also warn the targets in vehicle blind areas (such as pedestrians and non-motor vehicles) according to the warning data, for example through roadside warning devices using horn whistles, flashing lights, buzzer warnings, and the like.
It should be noted that the foregoing steps S401-S403 may be performed for all targets on the traffic road, and the foregoing steps S404-S405 may be performed for all motor vehicles on the traffic road, so that all vehicles can learn of blind area dangers in time. Alternatively, steps S404-S405 may be performed only for specific vehicle types on the traffic road, for example only for engineering vehicles with larger volumes, because vehicles of that type have a higher probability of blind area danger events. The type of vehicle requiring blind area detection and blind area danger judgment may be detected during the target detection of the foregoing step S401; the vehicles of the type to be detected are determined before S404 is executed, and the subsequent steps S404 and S405 are executed only for vehicles of that type.
It should be noted that the method of steps S401-S405 may be performed on video frames (or video frames at fixed time intervals) at each moment captured by the camera, that is, the detection apparatus of the present application may continuously perform the blind area detection and the blind area danger determination of the vehicle in real time.
The following describes a specific implementation method of step S401 in detail with reference to fig. 7:
s4011: the method comprises the steps of receiving video data, extracting images (namely video frames) in the video data, and standardizing the sizes of the images. It should be understood that the video data is a video stream on a traffic road shot by the camera in real time, and processing the image in the video data can be understood as processing the picture on the traffic road at the current moment.
The purpose of normalizing the size of the images in this step is to make the normalized images fit the input of the target detection model.
The method for normalizing the size of the images is not particularly limited; a method of stretching or compressing the size, or a method of padding or cropping, may be employed.
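A minimal sketch of one such normalization, combining a resize with zero padding (letterboxing); the 640x640 input size is an assumption and is not stated in the text:

```python
import cv2
import numpy as np

def normalize_frame(frame, size=640):
    """Resize a frame so its longer side equals `size`, then zero-pad."""
    h, w = frame.shape[:2]
    scale = size / max(h, w)                         # stretch/compress step
    resized = cv2.resize(frame, (int(w * scale), int(h * scale)))
    canvas = np.zeros((size, size, 3), frame.dtype)  # padding step
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    return canvas, scale  # scale is kept to map boxes back to the original
```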
S4012: and inputting the image with the standardized size to a target detection model to obtain the position information and the type information of the target in the image. Specifically, the object detection model performs feature extraction on an input image, further detects an object in the image according to the extracted features, and outputs position information and type information of the detected object in the image, for example: the target detection model outputs an output image, the detected target in the output image is framed by rectangular frames, and each rectangular frame also comprises type information of the target. The position information of the object in the image is pixel coordinates of one or more points in the image, such as: the pixel coordinates of the rectangular frame corresponding to the target may be also the pixel coordinates of the center or the lower left corner of the rectangular frame corresponding to the target. The type information of the object is a type to which the object belongs, for example: a pedestrian, a vehicle, or a static object.
It should be understood that the target detection model can perform target detection on the input image because it was trained with the target detection training set before being used for detection. For example: in this embodiment of the application, to enable the target detection model to detect pedestrians, driving vehicles, and static objects on a traffic road, the model needs to be trained with the many images in the target detection training set that contain pedestrians, vehicles, and static objects, each image in the training set being labeled: the pedestrians, vehicles, or static objects in each image are framed by rectangular frames, and each rectangular frame corresponds to the type information of the target inside it. Since the target detection model repeatedly learns the features of each type of target during training, the trained model has the capability of detecting pedestrians, vehicles, and static objects in an input image.
It is to be noted that, in S4012, target detection is performed on all images (or images at fixed time intervals) in the continuous-time video data received in S4011; therefore, after S4012 has been executed for a period of time, the position information and type information of the targets in a plurality of images over continuous time are obtained. Each image corresponds to a timestamp, and the images can be ordered chronologically by their timestamps.
S4013: and tracking the moving target according to the position information and the type information of the target in the image detected in the aforementioned S4012, and determining the motion track of the target in the video picture within the historical time period including the current moment.
Target tracking means tracking the targets in two images at adjacent times (or two images at a fixed time interval) in the video data and determining that targets in the two adjacent images are the same target in the physical world, so that the two detections correspond to the same target ID; the pixel coordinates of that target ID in the image at the current time are recorded in a target track table, which records the pixel coordinates at the current time and at historical times of every target present in the area shot by the camera (the motion track of a target can be fitted from these pixel coordinates). When target tracking is performed, the type information and position information of the targets obtained in the foregoing step S4012 are compared with the type information and position information of the targets in the cached, already-processed video frame of the previous time, and the association between targets in the two images at adjacent times (or at a fixed time interval) is determined; that is, detections judged to be the same target in the two images are marked with the same target ID, and the target ID corresponding to each target and its pixel coordinates in the image are recorded, one target corresponding to one target ID. Therefore, after target tracking has been performed on the detected images in chronological order, a plurality of pixel coordinates corresponding to one target ID are obtained and form a pixel coordinate sequence, which is the motion track information of that target.
A specific target tracking procedure is provided below:
1: and (6) matching the targets. Matching the detected target in the image at the current moment with the target in the image at the previous moment (or the image before the fixed time interval) according to the position information (namely the pixel coordinates of the target in the image) and the type information of the detected target in the image at the current moment, for example: determining a target ID of a target in a current image according to the overlapping rate of a rectangular frame of the target in the current image and a rectangular frame of the target in an image at a previous moment, determining that the target at the current moment is the same as the target at the previous moment when the overlapping rate of the rectangular frame of the target in the current image and the rectangular frame of one target in the image at the previous moment is larger than a preset threshold value, finding a recorded target ID corresponding to the target in a target track table, and recording the pixel coordinates of the target in the current image in a sequence corresponding to the target ID. It should be understood that step 1 and subsequent steps are performed for each target detected in the current image.
2: when one or more objects in the current image are not matched with the objects in the image at the previous moment in the step 1 (i.e. the one or more objects are not found in the image at the previous moment, for example, a vehicle just drives into the area of the traffic intersection shot by the camera at the current moment), the one or more objects are determined to be newly added objects on the traffic road at the current moment, a new object ID is established for the objects, wherein the object ID uniquely identifies the objects, and the object ID and the pixel coordinates of the object ID at the current moment are recorded in the object track table.
3: when one or more targets at the previous time are not matched with the target(s) in the image at the current time in the aforementioned step 1 (that is, there is a target at the previous time, and the target is not found at the current time, for example, in the case that the target is partially or completely blocked by another target at the current time or the target leaves the area of the traffic road photographed by the camera at the current time), the pixel coordinates of the target in the image at the current time are predicted according to the pixel coordinates of the target at the historical time recorded in the target trajectory table (for example, using a three-point extrapolation method, a trajectory fitting algorithm, etc.).
4: Determine the existence state of the target according to the pixel coordinates predicted in step 3. When the predicted pixel coordinates are outside the current image or at its edge, determine that the predicted target has left the picture of the camera's viewing angle at the current time; when the predicted pixel coordinates are inside the current video frame and not at its edge, determine that the target is still in the video picture at the current time.
5: When step 4 determines that the predicted target has left the picture of the camera's viewing angle at the current time, delete the target ID and its corresponding data from the target track table.
6: When step 4 determines that the predicted target is still in the image at the current time, record the predicted pixel coordinates of the target in the target track table.
It is to be noted that the foregoing steps 1 to 6 may be performed for each target detected in the image at each time (or in images at fixed time intervals) in the video data captured by the camera, or may be performed only for those targets in each image whose detection result obtained in the foregoing S4012 is non-static.
After target tracking, pixel coordinate sequences corresponding to a plurality of target IDs are obtained; each pixel coordinate sequence is the motion track information of one target in the video picture. If the target tracking operation is also performed on a static target, a pixel coordinate sequence of the static target is likewise obtained; since the target is static, the pixel coordinates in its sequence cluster near a single point.
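A minimal sketch of the overlap-rate matching referenced in step 1 of the procedure above, computed as an intersection-over-union of two rectangular frames; the 0.3 threshold is an illustrative assumption:

```python
def iou(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max) pixel rectangles."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def same_target(box_now, box_prev, overlap_thresh=0.3):
    """Step 1's overlap-rate test for associating two detections."""
    return iou(box_now, box_prev) > overlap_thresh
```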
The following describes the specific implementation steps of step S404 in an embodiment with reference to fig. 8:
s4041: structural attributes of the vehicle are detected.
The targets detected in the foregoing step S401 from the video data captured by the cameras disposed on the traffic road include vehicles, pedestrians, and the like; this step detects the structural attributes of the vehicles among them.
In this step, two methods can be used to detect the structural attributes of a vehicle with the vehicle attribute detection model. The first is: input each frame of image (or images at fixed time intervals) in the video data to the trained vehicle attribute detection model, which performs structural attribute detection on the vehicles in the input image and outputs the position information of each vehicle in the image and the structural attributes of each vehicle.
The second is: according to the position information of the targets whose detected type is vehicle, obtained in the foregoing step S401, crop the rectangular frame corresponding to each vehicle from the image; input the cropped sub-image of the vehicle to the vehicle attribute detection model, which outputs the structural attributes of each vehicle. By combining the target detection model and the vehicle attribute detection model in this way, the position information of the targets, the type information of the targets, and the structural attributes of the vehicles in the image are obtained.
Which of the above methods is used to obtain the structural attributes of the vehicle may be determined according to the type of input image required by the trained vehicle attribute detection model.
It should be noted that, as described above, the vehicle attribute detection model is a neural network model, and the structural attribute of the vehicle detected by it may be the type of the vehicle, for example the model of the vehicle or the sub-classification of the vehicle; which attribute it detects depends on the function the model acquired during training. For example: if the training images used to train the vehicle attribute detection model contain vehicles labeled with their models (such as Changan 20-ton truck, Biddi 30 passenger car, Benz c200 car, and the like), the trained model can be used to detect the model of a vehicle. As another example: if the training images contain vehicles labeled with their sub-classifications (e.g., 7-seat commercial vehicle, 4-seat car, 20-ton truck, 10-ton cement truck, etc.), the trained model can be used to detect the sub-classification of a vehicle.
The structural attributes of the vehicle may further include: the length and width information of the vehicle, the type and position of the vehicle's cab, and the like, which may likewise be detected by the vehicle attribute detection model in step S4041. It should be appreciated that detecting the structural attributes of a vehicle yields one or more of its structural attributes, such as the model of the vehicle, the sub-classification of the vehicle, the length and width of the vehicle, and the type and location of the vehicle's cab.
It should be noted that step S4041 may also be executed right after the target detection of S401 is executed; the obtained vehicle structural attributes, together with the target types and target position information obtained in the target detection stage, may be stored in the target detection and tracking module of the detection apparatus, in another module of the detection apparatus, or in another storage apparatus readable by the detection apparatus.
S4042: and inquiring a blind area information base according to the structural attributes of the vehicles to obtain the blind area information of the vehicles.
The structure and form of the blind area information base are not limited in any way. For example, in one embodiment the blind area information base may be a relational database that provides a query interface: the structural attributes of the vehicle obtained in the previous step are sent to this interface, the interface queries the corresponding blind area information in the base according to the structural attributes of the vehicle, and the base returns the query result through the interface.
The query result is the blind area information of the vehicle corresponding to the vehicle's structural attributes and may include: the position of each blind area of the vehicle relative to the vehicle, the shape of each blind area, and the number of blind areas. The position of a blind area relative to the vehicle may be the offset of key points in the blind area relative to the center point of the vehicle; for example, the position of a rectangular blind area relative to the vehicle is the offset length and offset direction of the four corner points of the rectangle relative to the vehicle's center point. Optionally, the query result may further include: the area of each blind area and the danger coefficient of each blind area. The danger coefficient of a blind area is the probability that a dangerous situation may occur in that blind area, used to indicate its degree of danger; it can be judged comprehensively by the blind area information base from factors such as the area, position, and shape of the vehicle's blind area. For example: a blind area of the vehicle that has a large area and is located diagonally behind the vehicle can be judged to have a higher danger coefficient.
The specific way of querying the blind area information base according to the structural attribute of the vehicle to obtain the query result can be various, and the following three ways are exemplary:
The first way: the blind area determination module of the detection device sends the model of the vehicle or the sub-classification of the vehicle to the blind area information base interface; the blind area information base queries the blind area information corresponding to that model or sub-classification and returns it to the interface as the query result. The query result may comprise: the position of each blind area of the vehicle relative to the vehicle, the shape of each blind area, and the number of blind areas; optionally, the query result may further comprise the area of each blind area and the danger coefficient of each blind area.
The second way: the blind area determination module of the detection device sends the model of the vehicle or the sub-classification of the vehicle to the blind area information base interface, but the blind area information base finds no blind area information corresponding to that model or sub-classification and returns a query-failure message to the blind area determination module. The blind area determination module then sends structural attributes such as the length and width information of the vehicle and the type and position of the vehicle's cab to the interface, and the blind area information base queries, according to these structural attributes, the blind area information of the stored vehicle closest to the structural attributes of the vehicle and returns it to the interface as the query result. The query result may comprise: the position of each blind area of the vehicle relative to the vehicle, the shape of each blind area, and the number of blind areas; optionally, the query result may further comprise the area of each blind area and the danger coefficient of each blind area.
Third, the blind area determination module of the detection device sends the structural attributes of the vehicle to the interface of the blind area information base, where the structural attributes of the vehicle include: the model of the vehicle, the sub-classification of the vehicle, the length and width of the vehicle, the type and position of the cab, and the like. The blind area information base determines, according to the structural attributes, the blind area information of the vehicle of that model or sub-classification, or determines the blind area information of the vehicle closest to the structural attributes of the vehicle. The blind area information base finally returns the query result to the blind area determination module. The query result may include: the number of blind areas, the position of each blind area of the vehicle relative to the vehicle, and the shape of each blind area; optionally, the query result may further include: the risk coefficient of each blind area and the area of each blind area.
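For illustration, the second way amounts to a query with a fallback. The following Python sketch continues the dataclasses above; `db.lookup_by_model`, `db.entries`, the attribute fields, and the distance measure are all hypothetical names and choices introduced here, not interfaces defined in this application:

```python
from typing import Optional

def query_blind_zone_info(db, vehicle) -> BlindZoneInfo:
    """Query by model/sub-classification first; on failure, fall back to
    the stored vehicle whose structural attributes are closest."""
    info: Optional[BlindZoneInfo] = db.lookup_by_model(vehicle.model)
    if info is not None:
        return info  # direct hit on model or sub-classification

    # Query failure: choose the stored vehicle closest in structural
    # attributes (length, width, cab type) and reuse its blind areas.
    def attribute_distance(entry) -> float:
        return (abs(entry.length - vehicle.length)
                + abs(entry.width - vehicle.width)
                + (0.0 if entry.cab_type == vehicle.cab_type else 1.0))

    nearest = min(db.entries, key=attribute_distance)
    return nearest.blind_zone_info
```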
Through the foregoing S4042, the blind area information of the vehicle is obtained. The blind area information is related to the structural attributes of the vehicle, and vehicles with different attributes correspond to different blind area information.
S4043: determining the blind area of the vehicle according to the blind area information.
In this step, the blind area of the vehicle at the current time is determined based on the blind area information of the vehicle obtained in step S4042, the motion track information of the vehicle on the traffic road obtained in step S402, and the attitude information of the vehicle obtained in step S403. Alternatively, the position information and attitude information of the vehicle at a future time, together with the blind area information of the vehicle, are obtained from the foregoing steps S401 to S403, and the blind area of the vehicle at the future time can then be determined.
The blind area of the vehicle at the current moment is determined as follows: the position, distribution, and area of each blind area of the vehicle on the traffic road are determined according to the geographic coordinates of the vehicle at the current moment in the motion track information of the vehicle on the traffic road and the attitude of the vehicle, in combination with blind area information such as the position of each blind area relative to the vehicle, its area, and its shape. For example, as shown in fig. 9, for a 20-ton truck, the geographic coordinates of the vehicle on the traffic road (that is, the geographic coordinates of the center point of the vehicle) and the attitude of the vehicle (heading from east to west) are known, and the blind area information obtained in the foregoing S4042 indicates that the vehicle has 6 independent blind areas, together with the shape and area of each independent blind area and the offsets of the key points of each independent blind area relative to a reference point of the vehicle (for example, the center point). The actual position and range of each blind area of the vehicle on the traffic road can therefore be determined from the blind area information together with the position and attitude of the vehicle on the traffic road.
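Concretely, this step reduces to a rigid transform: each key-point offset given in the vehicle frame is rotated by the vehicle's heading and translated by the vehicle's geographic position. A minimal Python sketch, assuming a local planar coordinate frame in meters and a heading angle measured counterclockwise from east (both conventions are assumptions of this sketch, not specified in this application):

```python
import math
from typing import List, Tuple

def blind_zone_on_road(center_xy: Tuple[float, float],
                       heading_rad: float,
                       corner_offsets: List[Tuple[float, float]],
                       ) -> List[Tuple[float, float]]:
    """Rotate each key-point offset by the vehicle heading, then translate
    it by the vehicle's center point to obtain road coordinates."""
    cx, cy = center_xy
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    return [(cx + dx * cos_h - dy * sin_h,
             cy + dx * sin_h + dy * cos_h)
            for dx, dy in corner_offsets]

# A rectangular blind area behind and to one side of a vehicle centered
# at (100.0, 50.0) and heading due west (pi radians in this convention):
corners = blind_zone_on_road((100.0, 50.0), math.pi,
                             [(-6.0, 2.0), (-2.0, 2.0),
                              (-2.0, 5.0), (-6.0, 5.0)])
```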
Optionally, when the blind area of the vehicle is determined according to the blind area information in step S4043 and the blind area information includes risk coefficients, in another embodiment of the present application S4043 determines, according to the risk coefficients in the blind area information, the high-risk blind area information corresponding to the high-risk blind area blocks whose risk coefficients are greater than a preset risk threshold, and then determines the position and range of the high-risk blind areas of the vehicle on the traffic road at the current time according to the high-risk blind area information and the position information and attitude information of the vehicle on the traffic road at the current time. This method determines only the position and distribution range of the high-risk blind areas of the vehicle on the traffic road, so that the subsequent blind area danger judgment only needs to judge whether a blind area danger exists in the high-risk blind areas, which reduces the amount of calculation. Because a dangerous situation is unlikely to occur in the low-risk blind area blocks of the vehicle, the vehicle is unlikely to collide with or scrape other targets in a low-risk blind area even if such targets are present; therefore no alarm is raised when other targets are present in a low-risk blind area, the vehicle and the other targets are not endangered, and excessive alarms that would disturb the vehicle driver and pedestrians are avoided.
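Continuing the dataclass sketch above, the filtering step before the danger judgment might look as follows; the threshold value is purely illustrative:

```python
RISK_THRESHOLD = 0.7  # preset risk threshold; the value here is illustrative

def high_risk_zones(info: BlindZoneInfo) -> List[BlindZone]:
    """Keep only the blind area blocks whose risk coefficient exceeds the
    preset threshold, so the later danger judgment checks fewer zones."""
    return [zone for zone in info.zones
            if zone.risk_coefficient is not None
            and zone.risk_coefficient > RISK_THRESHOLD]
```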
Optionally, when the blind area of the vehicle is determined according to the blind area information in step S4043, the driving speed of the vehicle may be determined according to the motion track information of the vehicle on the traffic road, and the range of all or part of the blind areas of the vehicle may be expanded according to the driving speed. For example, if the driving speed of the vehicle is higher than a certain threshold, the range of the front blind area of the vehicle is multiplied by a preset proportionality coefficient so as to enlarge it. For another example, the driving speed of the vehicle is converted into a proportionality coefficient according to a preset rule, the area of each blind area in the blind area information is multiplied by that coefficient, and the position of the enlarged blind area relative to the vehicle is obtained accordingly. The driving speed of the vehicle is determined from the motion track information as follows: the distance between adjacent geographic coordinates in the geographic coordinate sequence of the motion track information is divided by the time difference between the two adjacent video frames corresponding to those coordinates, which yields the driving speed of the vehicle at one moment.
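A minimal sketch of this speed estimate and of the speed-dependent expansion, assuming trajectory samples of the form (timestamp in seconds, x in meters, y in meters) and reusing the `BlindZone` class above; the threshold and proportionality coefficient are illustrative values, not values given in this application:

```python
import math
from typing import List, Tuple

def speed_at(track: List[Tuple[float, float, float]], i: int) -> float:
    """Driving speed (m/s) at sample i: distance between adjacent
    geographic coordinates divided by the time between the two frames.

    Each track sample is (timestamp_s, x_m, y_m)."""
    t0, x0, y0 = track[i - 1]
    t1, x1, y1 = track[i]
    return math.hypot(x1 - x0, y1 - y0) / (t1 - t0)

def expand_zone(zone: BlindZone, speed_mps: float,
                speed_threshold: float = 15.0, scale: float = 1.5) -> None:
    """Enlarge a blind area when the vehicle is fast; the threshold and
    proportionality coefficient here are illustrative."""
    if speed_mps > speed_threshold:
        zone.corner_offsets = [(dx * scale, dy * scale)
                               for dx, dy in zone.corner_offsets]
```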
Optionally, if the obtained blind area information does not include risk coefficients, then after the blind areas of the vehicle are determined according to the blind area information in step S4043, the risk coefficient of each blind area may be determined according to the geographic position and range of each determined blind area. Alternatively, if the blind area information obtained in step S4042 already contains a risk coefficient for each blind area, the risk coefficient of each blind area may be adjusted according to the geographic position and range of that blind area, so that the risk coefficients are more accurate.
The blind area of the vehicle on the traffic road at a certain moment can be determined through the foregoing steps S4041-S4043. It should be understood that the video data captured by the camera in the present application may be a video stream that records the motion of targets on the traffic road in real time; the method for determining the blind area of the vehicle can therefore be performed continuously, so as to determine the blind area of the vehicle at each moment.
The present application further provides a detection apparatus 300 as shown in fig. 4, and the modules and functions included in the detection apparatus 300 are as described above and will not be described herein again.
In one embodiment, the target detection and tracking module 301 in the detection apparatus 300 is configured to perform the aforementioned method step S401, and in another more specific embodiment, the target detection and tracking module 301 is configured to perform the aforementioned method steps S4011-S4013 and optional steps thereof; the target location module 302 is configured to perform the aforementioned method step S402; the pose determination module 303 is configured to perform the aforementioned method step S403; the blind zone determination module 304 is configured to perform the aforementioned method steps S404-S405, and in another more specific embodiment, the blind zone determination module 304 is configured to perform the aforementioned method steps S4041-S4043, S405, and the optional steps described in the aforementioned S404-S405.
The application also provides a detection system for detecting a blind area of a vehicle, which comprises a vehicle dynamic monitoring system and a vehicle blind area detection system. The vehicle dynamic monitoring system is used for receiving the video data and determining the position information and the attitude information of the vehicle on the traffic road at the current moment or the future moment according to the video data, wherein the video data is obtained by shooting through a camera arranged on the traffic road. The vehicle blind area detection system is used for acquiring blind area information determined by vehicle structure attributes, and determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle, the position information and the posture information of the vehicle on the traffic road. More specifically, the detection system is used for executing the method of the aforementioned S401-S405, the vehicle dynamic monitoring system of the detection system is used for executing the aforementioned S401-S403, and the vehicle blind area detection system is used for executing the aforementioned S404-S405.
The present application also provides an in-vehicle apparatus, which is disposed on a vehicle, and can be used to perform the aforementioned methods of S401-S405, and the in-vehicle apparatus can provide the same functions as the detection apparatus 300.
The present application further provides a vehicle comprising a storage unit and a processing unit. The storage unit is configured to store a set of computer instructions and a data set; the processing unit executes the computer instructions stored in the storage unit and reads the data set of the storage unit, so that the vehicle can perform the method of S401-S405 described above.
The storage unit of the vehicle may be a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The Processing Unit of the vehicle may be a general-purpose Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or any combination thereof. The processing unit may also include one or more chips, and the processing unit may also include an AI accelerator, such as a Neural Processing Unit (NPU).
The present application also provides a computing device 100 as shown in fig. 3, wherein a processor 102 in the computing device 100 reads a set of computer instructions stored in a memory 101 to execute the aforementioned method for detecting a blind area of a vehicle.
Since the modules in the detection apparatus 300 provided by the present application can be deployed in a distributed manner on a plurality of computers in the same environment or in different environments, the present application also provides a system as shown in fig. 10. The system comprises a plurality of computers 500, and each computer 500 comprises a memory 501, a processor 502, a communication interface 503, and a bus 504. The memory 501, the processor 502, and the communication interface 503 are connected to one another by the bus 504.
The memory 501 may be a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 501 may store computer instructions; when the computer instructions stored in the memory 501 are executed by the processor 502, the processor 502 and the communication interface 503 are configured to perform a part of the method for detecting the blind area of a vehicle. The memory 501 may also store a data set; for example, a part of the storage resources in the memory 501 is divided into a blind area information base storage module for storing the blind area information base required by the detection apparatus 300.
The processor 502 may be a general-purpose Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or any combination thereof. The processor 502 may include one or more chips. The processor 502 may include an AI accelerator, such as a Neural Processing Unit (NPU).
The communication interface 503 enables communication between the computer 500 and other devices or communication networks using a transceiver module such as, but not limited to, a transceiver. For example, the blind area information may be acquired through the communication interface 503.
Bus 504 may include a path that transfers information between components of computer 500 (e.g., memory 501, processor 502, communication interface 503).
A communication path is established between the computers 500 through a communication network. Each computer 500 runs any one or more of the target detection and tracking module 301, the target location module 302, the pose determination module 303, and the blind zone determination module 304. Any of the computers 500 may be a computer (e.g., a server) in a cloud data center, a computer in an edge data center, or a terminal computing device.
The descriptions of the flows corresponding to the above figures each have their own emphasis; for a part that is not described in detail in one flow, reference may be made to the related descriptions of the other flows.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form of a computer program product, in whole or in part. A computer program product for implementing blind area detection of a vehicle includes one or more computer program instructions for detecting the blind area of a vehicle; when these instructions are loaded and executed on a computer, the processes or functions described in fig. 5-7 according to the embodiments of the present application are produced in whole or in part.
The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium stores the computer program instructions that implement blind area detection of a vehicle, and may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., SSD).

Claims (23)

1. A method of detecting a blind spot of a vehicle, the method comprising:
receiving video data, wherein the video data is obtained by shooting through a camera arranged on a traffic road;
determining the position information and the attitude information of the vehicle on the traffic road at the current moment or the future moment according to the video data;
acquiring blind area information of the vehicle, wherein the blind area information is determined by the structural attribute of the vehicle;
and determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle and the position information and the attitude information of the vehicle on the traffic road.
2. The method of claim 1, wherein the method further comprises:
determining that the vehicle has a blind area danger according to the video data and the blind area of the vehicle on the traffic road at the current moment or the future moment, wherein the blind area danger indicates that the vehicle has other targets in the blind area on the traffic road;
and sending a blind area alarm.
3. The method of claim 2, wherein the blind area alarm comprises alarm data, and the alarm data includes one or more of the following information: the position and range, on the traffic road, of the blind area in which the danger occurs, the position information of the other targets on the traffic road, and the type of the other targets.
4. The method of any one of claims 1-3, wherein said determining position information and attitude information of a vehicle on the traffic road at a future time from the video data comprises:
determining the position information and the attitude information of the vehicle on the traffic road at the current moment according to the video data;
and predicting the position information and the attitude information of the vehicle on the traffic road at the future time according to the position information and the attitude information of the vehicle on the traffic road at the current time.
5. The method of any one of claims 1-4, further comprising: constructing a visual blind area image according to the blind area of the vehicle on the traffic road at the current moment or the future moment;
and sending the visual blind area image.
6. The method of any one of claims 1-5, further comprising:
acquiring the running speed of the vehicle;
and adjusting the blind area of the vehicle on the traffic road at the current moment or the future moment according to the running speed of the vehicle.
7. The method of claim 2 or 3, wherein the method further comprises:
and sending an adjusting instruction to the vehicle with the blind area danger, wherein the adjusting instruction instructs the vehicle to adjust the driving route.
8. The method of any one of claims 1-7, further comprising:
determining a high-risk blind area in the blind area.
9. The method of any of claims 1-8, wherein the video data comprises a plurality of video streams captured by a plurality of cameras disposed at different locations on the traffic road;
the determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the video data comprises the following steps:
determining position information of the vehicle in the plurality of video streams at the current time or the future time according to the plurality of video streams;
and determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the position information of the vehicle in the plurality of video streams at the current moment or the future moment.
10. The method according to any one of claims 1-9, wherein the vehicle's blind spot information includes: the number of blind zones, the position of each blind zone relative to the vehicle, and the shape of the blind zone.
11. A detection device, comprising:
the target detection and tracking module is used for receiving video data, and the video data is shot by a camera arranged on a traffic road;
the target positioning module is used for determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the video data;
the attitude determination module is used for determining the attitude information of the vehicle on the traffic road at the current moment or the future moment according to the video data;
the blind area determining module is used for acquiring blind area information of the vehicle, wherein the blind area information is determined by the structural attribute of the vehicle; and the blind area determining module is further used for determining the blind area of the vehicle on the traffic road at the current moment or the future moment according to the blind area information of the vehicle and the position information and the attitude information of the vehicle on the traffic road.
12. The detection apparatus of claim 11,
the blind area determining module is further configured to determine that the vehicle has a blind area danger according to the video data and a blind area of the vehicle on the traffic road at the current time or the future time, where the blind area danger indicates that the vehicle has other targets in the blind area on the traffic road; and sending a blind area alarm.
13. The detection apparatus of claim 12, wherein the blind area alarm comprises alarm data, and the alarm data includes one or more of the following information: the position and range, on the traffic road, of the blind area in which the danger occurs, the position information of the other targets on the traffic road, and the type of the other targets.
14. The detection apparatus according to any one of claims 11 to 13,
the target positioning module, when being configured to determine the position information of the vehicle on the traffic road at the future time according to the video data, is specifically configured to:
determining the position information of the vehicle on the traffic road at the current moment according to the video data;
predicting the position information of the vehicle on the traffic road at the future moment according to the position information of the vehicle on the traffic road at the current moment;
the attitude determination module, when being configured to determine the attitude information of the vehicle on the traffic road at the future time according to the video data, is specifically configured to:
determining the attitude information of the vehicle on the traffic road at the current moment according to the video data;
and predicting the attitude information of the vehicle on the traffic road at the future time according to the attitude information of the vehicle on the traffic road at the current time.
15. The detection apparatus according to any one of claims 11 to 14,
the blind area determining module is also used for constructing a visual blind area image according to the blind area of the vehicle on the traffic road at the current moment or the future moment; and sending the visual blind area image.
16. The detection apparatus according to any one of claims 11 to 15,
the blind area determining module is further used for acquiring the driving speed of the vehicle; and adjusting the blind area of the vehicle on the traffic road at the current moment or the future moment according to the running speed of the vehicle.
17. The detection apparatus according to claim 12 or 13,
the blind area determining module is further configured to send an adjustment instruction to the vehicle with the blind area danger, wherein the adjustment instruction instructs the vehicle to adjust the driving route.
18. The detection apparatus according to any one of claims 11 to 17,
the blind area determining module is further used for determining a high-risk blind area in the blind area.
19. The detection apparatus according to any one of claims 11 to 18, wherein the video data includes a plurality of video streams captured by a plurality of cameras disposed at different positions on the traffic road;
the target detection and tracking module is further used for determining the position information of the vehicle in the video streams at the current moment or the future moment according to the video streams;
the target positioning module is further used for determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the position information of the vehicle in the video streams at the current moment or the future moment.
20. The detection apparatus according to any one of claims 11 to 19, wherein the blind area information of the vehicle includes: the number of blind zones, the position of each blind zone relative to the vehicle, and the shape of the blind zone.
21. An on-board device provided on a vehicle, characterized in that it is configured to perform the method of any one of claims 1 to 10.
22. A system, comprising at least one memory and at least one processor, the at least one memory configured to store a set of computer instructions;
the system performs the method of any of claims 1-10 when the set of computer instructions is executed by the at least one processor.
23. A non-transitory readable storage medium storing computer program code, wherein the computer program code is executed by a computing device to perform the method of any one of claims 1 to 10.
CN201911024795.9A 2019-07-09 2019-10-25 Method and device for detecting blind area of vehicle Pending CN112216097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/078329 WO2021004077A1 (en) 2019-07-09 2020-03-07 Method and apparatus for detecting blind areas of vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019106168403 2019-07-09
CN201910616840 2019-07-09

Publications (1)

Publication Number Publication Date
CN112216097A (en) 2021-01-12

Family

ID=74048637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911024795.9A Pending CN112216097A (en) 2019-07-09 2019-10-25 Method and device for detecting blind area of vehicle

Country Status (2)

Country Link
CN (1) CN112216097A (en)
WO (1) WO2021004077A1 (en)


Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634188A (en) * 2021-02-02 2021-04-09 深圳市爱培科技术股份有限公司 Vehicle far and near scene combined imaging method and device
CN113228135B (en) * 2021-03-29 2022-08-26 华为技术有限公司 Blind area image acquisition method and related terminal device
CN113060157B (en) * 2021-03-30 2022-07-22 恒大新能源汽车投资控股集团有限公司 Blind zone road condition broadcasting device, road condition information sharing device, system and vehicle
CN113619599B (en) * 2021-03-31 2023-03-24 中汽创智科技有限公司 Remote driving method, system, device and storage medium
CN112937446A (en) * 2021-04-14 2021-06-11 宝能汽车科技有限公司 Blind area video acquisition method and system
CN113096195A (en) * 2021-05-14 2021-07-09 北京云迹科技有限公司 Camera calibration method and device
CN113479197A (en) * 2021-06-30 2021-10-08 银隆新能源股份有限公司 Control method of vehicle, control device of vehicle, and computer-readable storage medium
CN113682319B (en) * 2021-08-05 2023-08-01 地平线(上海)人工智能技术有限公司 Camera adjustment method and device, electronic equipment and storage medium
CN115731742A (en) * 2021-08-26 2023-03-03 博泰车联网(南京)有限公司 Collision prompt information output method and device, electronic equipment and readable storage medium
CN114655131B (en) * 2022-03-29 2023-10-13 东风汽车集团股份有限公司 Vehicle-mounted sensing sensor adjustment method, device, equipment and readable storage medium
CN115222767B (en) * 2022-04-12 2024-01-23 广州汽车集团股份有限公司 Tracking method and system based on space parking space
CN114782923B (en) * 2022-05-07 2024-05-03 厦门瑞为信息技术有限公司 Detection system for dead zone of vehicle
CN114944067B (en) * 2022-05-16 2023-08-15 浙江海康智联科技有限公司 Elastic bus lane implementation method based on vehicle-road cooperation
CN115171431A (en) * 2022-08-17 2022-10-11 东揽(南京)智能科技有限公司 Intersection multi-view-angle large vehicle blind area early warning method
CN116080529B (en) * 2023-04-12 2023-08-29 深圳市速腾聚创科技有限公司 Blind area early warning method and device, electronic equipment and storage medium
CN116564111B (en) * 2023-07-10 2023-09-29 中国电建集团昆明勘测设计研究院有限公司 Vehicle early warning method, device and equipment for intersection and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160090038A1 (en) * 2014-09-26 2016-03-31 International Business Machines Corporation Danger zone warning system
CN106143309A (en) * 2016-07-18 2016-11-23 乐视控股(北京)有限公司 A kind of vehicle blind zone based reminding method and system
CN107564334A (en) * 2017-08-04 2018-01-09 武汉理工大学 A kind of parking lot vehicle blind zone danger early warning system and method
CN108010383A (en) * 2017-09-29 2018-05-08 北京车和家信息技术有限公司 Blind zone detection method, device, terminal and vehicle based on driving vehicle
CN108932868A (en) * 2017-05-26 2018-12-04 奥迪股份公司 The danger early warning system and method for vehicle
CN109278640A (en) * 2018-10-12 2019-01-29 北京双髻鲨科技有限公司 A kind of blind area detection system and method
CN109671299A (en) * 2019-01-04 2019-04-23 浙江工业大学 It is a kind of based on crossing camera probe to the system and method for pedestrian's danger early warning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009177245A (en) * 2008-01-21 2009-08-06 Nec Corp Blind corner image display system, blind corner image display method, image transmission apparatus, and image reproducing apparatus
JP2013200819A (en) * 2012-03-26 2013-10-03 Hitachi Consumer Electronics Co Ltd Image receiving and displaying device
CN106373430B (en) * 2016-08-26 2023-03-31 华南理工大学 Intersection traffic early warning method based on computer vision
CN107554430B (en) * 2017-09-20 2020-01-17 京东方科技集团股份有限公司 Vehicle blind area visualization method, device, terminal, system and vehicle

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991818A (en) * 2021-01-22 2021-06-18 浙江合众新能源汽车有限公司 Method and system for avoiding collision of automobile due to blind area
CN112927509A (en) * 2021-02-05 2021-06-08 长安大学 Road driving safety risk assessment system based on traffic conflict technology
CN112990114A (en) * 2021-04-21 2021-06-18 四川见山科技有限责任公司 Traffic data visualization simulation method and system based on AI identification
CN113223312A (en) * 2021-04-29 2021-08-06 重庆长安汽车股份有限公司 Camera blindness prediction method and device based on map and storage medium
WO2022242134A1 (en) * 2021-05-17 2022-11-24 腾讯科技(深圳)有限公司 Driving assistance processing method and apparatus, computer-readable medium and electronic device
US11999371B2 (en) 2021-05-17 2024-06-04 Tencent Technology (Shenzhen) Company Limited Driving assistance processing method and apparatus, computer-readable medium, and electronic device
CN113415287A (en) * 2021-07-16 2021-09-21 恒大新能源汽车投资控股集团有限公司 Vehicle road running indication method and device and computer readable storage medium
CN113628444A (en) * 2021-08-12 2021-11-09 智道网联科技(北京)有限公司 Method, device and computer-readable storage medium for prompting traffic risk
WO2023015925A1 (en) * 2021-08-12 2023-02-16 中兴通讯股份有限公司 Vehicle blind spot detection method, vehicle, server, and storage medium
CN113859118A (en) * 2021-10-15 2021-12-31 深圳喜为智慧科技有限公司 Road safety early warning method and device for large vehicle
CN114582153A (en) * 2022-02-25 2022-06-03 智己汽车科技有限公司 Long solid line reminding method and system for ramp entrance and vehicle
CN114582153B (en) * 2022-02-25 2023-12-12 智己汽车科技有限公司 Ramp entry long solid line reminding method, system and vehicle
CN115134491A (en) * 2022-05-27 2022-09-30 深圳市有方科技股份有限公司 Image processing method and device
CN115134491B (en) * 2022-05-27 2023-11-24 深圳市有方科技股份有限公司 Image processing method and device
WO2023226588A1 (en) * 2022-05-27 2023-11-30 魔门塔(苏州)科技有限公司 Blind-area detection method and apparatus, alarm method and apparatus, and vehicle, medium and device
CN115482679A (en) * 2022-09-15 2022-12-16 深圳海星智驾科技有限公司 Automatic driving blind area early warning method and device and message server
CN115482679B (en) * 2022-09-15 2024-04-26 深圳海星智驾科技有限公司 Automatic driving blind area early warning method and device and message server
CN117373248A (en) * 2023-11-02 2024-01-09 深圳市汇芯视讯电子有限公司 Image recognition-based intelligent early warning method and system for automobile blind area and cloud platform
CN117734680A (en) * 2024-01-22 2024-03-22 珠海翔越电子有限公司 Blind area early warning method, system and storage medium for large vehicle
CN117734680B (en) * 2024-01-22 2024-06-07 珠海翔越电子有限公司 Blind area early warning method, system and storage medium for large vehicle

Also Published As

Publication number Publication date
WO2021004077A1 (en) 2021-01-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination