WO2021004077A1 - Method and apparatus for detecting blind areas of vehicle


Info

Publication number
WO2021004077A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
blind
traffic road
information
target
Application number
PCT/CN2020/078329
Other languages
French (fr)
Chinese (zh)
Inventor
冷继南
沈建惠
常胜
吕跃强
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021004077A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 Systems involving transmission of highway information, e.g. weather, speed limits where the received information generates an automatic action on the vehicle control
    • G08G1/16 Anti-collision systems

Definitions

  • This application relates to the field of smart transportation, and in particular to a method and device for detecting the blind area of a vehicle.
  • In the prior art, a detector is often installed on the vehicle body so that, when a dangerous situation exists in the blind area of the vehicle, the detector emits an alarm sound.
  • However, such an alarm often comes too late, and it is even less effective for a moving vehicle, because the surroundings of a moving vehicle change constantly and the detector cannot learn of a dangerous situation and issue a warning in time. Therefore, how to detect the blind area of a vehicle in a more timely manner is a problem that urgently needs to be solved.
  • This application provides a method for detecting the blind area of a vehicle. The method can detect, in a more timely manner, the blind area of a vehicle on a traffic road at the current moment or a future moment, greatly reducing the probability of dangerous accidents in the blind area and improving traffic road safety.
  • In a first aspect, the present application provides a method for detecting a blind area of a vehicle, and the method is executed by a detection device.
  • The method includes: receiving video data captured by a camera set on a traffic road, the video data recording the running status of vehicles and other targets on the traffic road at the current moment; after receiving the video data, determining, according to the video data, the position information and posture information of the vehicle on the traffic road at the current moment or a future moment, where the posture information indicates the direction of travel of the vehicle; further, acquiring the blind area information of the vehicle, which is determined by the structural attributes of the vehicle; and determining, according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road, the blind area of the vehicle on the traffic road at the current moment or the future moment.
  • The above method processes and analyzes the video data captured by a camera installed on a traffic road to obtain the position information and posture information of the vehicle, and combines them with the blind area information of the vehicle to obtain, in time, the blind area of the vehicle on the traffic road at the current moment or a future moment.
  • The determined blind area can be provided in time to the vehicle or to other targets on the traffic road (such as pedestrians and other vehicles) as a reference, so that the vehicle or pedestrians can take timely avoiding action according to the determined blind area, greatly reducing the probability of dangerous accidents in the blind area and improving traffic road safety.
  • Moreover, the above method can predict the blind area of the vehicle at a future moment, so that the vehicle can judge in advance whether driving at the future moment is safe, further reducing the probability of dangerous accidents in the blind area.
  • In a possible implementation, the above method further includes: determining, based on the video data and the blind area of the vehicle on the traffic road at the current moment or the future moment, that the vehicle has a blind area danger, where the blind area danger indicates that there are other targets in the blind area of the vehicle on the traffic road; and sending a blind area warning to the vehicle or to the other targets in the blind area.
  • Based on the determined blind area, it is checked whether there are other targets in the blind area of the vehicle. If there are, those targets may be collided with or scraped by the vehicle, and a timely warning can remind, in a more targeted manner, the vehicle and the pedestrians that are in a dangerous state in the blind area, further reducing dangerous accidents in the blind area.
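  • As a concrete illustration of this check (the patent describes it only in prose), the following minimal sketch tests whether any target's road-plane coordinates fall inside a blind-area polygon using standard ray casting; all function names, coordinates, and units here are illustrative assumptions:

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) road-plane coordinates, e.g. in metres

def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: count how many polygon edges a rightward ray
    from p crosses; an odd count means p lies inside the polygon."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def targets_in_blind_area(targets: List[Point], area: List[Point]) -> List[Point]:
    """Return every target inside the blind-area polygon; a non-empty
    result corresponds to a blind area danger."""
    return [t for t in targets if point_in_polygon(t, area)]

# Toy usage: a 5 m x 3 m blind area with one pedestrian inside it.
area = [(0.0, 0.0), (5.0, 0.0), (5.0, 3.0), (0.0, 3.0)]
print(targets_in_blind_area([(2.0, 1.0), (9.0, 9.0)], area))  # [(2.0, 1.0)]
```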
  • In a possible implementation, the blind area warning sent to the vehicle with the blind area danger, or to the other targets in the blind area, includes warning data. The warning data includes one or more of the following: the position and range, on the traffic road, of the blind area in which the danger occurs; the position information of the other targets on the traffic road; and the types of the other targets.
  • Based on the content of the warning data, the vehicle driver can determine a strategy for avoiding the danger in the blind area according to the warning, avoid the other targets in the blind area more accurately, and further reduce the probability of danger.
  • In a possible implementation, determining the position information and posture information of the vehicle on the traffic road at a future moment according to the video data includes: determining the position information and posture information of the vehicle on the traffic road at the current moment according to the video data; and predicting, according to the current position information and posture information of the vehicle on the traffic road, the position information and posture information of the vehicle on the traffic road at the future moment. The predicted position information and posture information at the future moment enable the detection device to determine the position and range of the blind area of the vehicle at the future moment.
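  • The patent does not fix a particular prediction method; as one minimal possibility, a constant-velocity extrapolation from the last two trajectory samples could look like the sketch below (all names are illustrative):

```python
import math

def predict_pose(track, dt):
    """Extrapolate position and heading dt seconds ahead from the last
    two (t, x, y) samples of a road-coordinate trajectory, assuming
    constant velocity; the heading is the posture (travel direction)."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    future_position = (x1 + vx * dt, y1 + vy * dt)
    heading_deg = math.degrees(math.atan2(vy, vx))
    return future_position, heading_deg

track = [(0.0, 0.0, 0.0), (1.0, 8.0, 0.5)]   # roughly 8 m/s, veering slightly
print(predict_pose(track, dt=2.0))            # ((24.0, 1.5), ~3.6 degrees)
```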
  • In a possible implementation, the method further includes: constructing a visual blind area image according to the blind area of the vehicle on the traffic road at the current moment or the future moment; and sending the visual blind area image to the vehicle or to other devices.
  • From the visual blind area image, the driver or a manager of the vehicle can intuitively and quickly determine the position and range of the blind area, which reduces the time needed for the driver and other targets to react to the blind area and improves driving safety.
  • In a possible implementation, the method further includes: calculating the driving speed of the vehicle; and adjusting, according to the driving speed of the vehicle, the blind area of the vehicle on the traffic road at the current moment or the future moment.
  • Because the driving speed affects the inertia and braking distance of the vehicle, the blind area is adjusted according to the speed. For example, for a high-speed vehicle, the range of the front blind area of the vehicle is expanded, and the adjusted blind area is provided for the driver's reference to remind the driver to pay attention, further reducing the risk in the blind area and improving driving safety.
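  • The patent states that the adjustment happens but gives no formula; the sketch below invents a simple reaction-distance rule purely for illustration, lengthening the front blind area by the distance covered during one second of reaction time above a base speed:

```python
def adjust_front_blind_area(front_depth_m, speed_mps,
                            reaction_s=1.0, base_speed_mps=8.0):
    """Illustrative only: lengthen the forward blind area in proportion
    to the extra distance covered during a reaction time at high speed.
    All constants here are assumptions, not values from the patent."""
    extra = max(0.0, speed_mps - base_speed_mps) * reaction_s
    return front_depth_m + extra

print(adjust_front_blind_area(6.0, speed_mps=20.0))  # 18.0 m instead of 6.0 m
```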
  • In a possible implementation, the method further includes: sending an adjustment instruction to the vehicle with the blind area danger, where the adjustment instruction instructs the vehicle to adjust its driving route to avoid the other targets in the blind area.
  • In this way, a new driving route can be planned for the moving vehicle according to the determined blind area or the positions of the other targets in it, and the vehicle can be instructed to adjust its route so as to reasonably avoid the other targets in the blind area. This avoids the problem of a driver reacting too slowly in a panic, and further improves the driving safety of the vehicle.
  • For a self-driving vehicle, after receiving the adjustment instruction, the vehicle can automatically adjust its driving according to the new driving route in the adjustment instruction, avoiding the blind area danger and realizing safe automatic driving.
  • In a possible implementation, the method further includes: determining a high-risk blind area among the blind areas.
  • Specifically, a blind area risk coefficient is determined according to the blind area information of the vehicle, the blind area risk coefficient indicating the degree of danger of each blind area of the vehicle; the high-risk blind area of the vehicle on the traffic road at the current moment or the future moment is then determined according to the blind area risk coefficient, the blind area information of the vehicle, and the position information and posture information of the vehicle on the traffic road.
  • Determining the high-risk blind area enables the driver to pay more attention to the dangerous situation in the high-risk blind area.
  • Whether there are other targets in the high-risk blind area can also be determined based on the video data and the determined high-risk blind area on the traffic road at the current moment or the future moment.
  • First determining the high-risk blind area and then determining the dangerous situation in it can save computing resources and avoid giving the driver many unnecessary danger reminders that would affect normal driving. For example, some low-risk blind areas may contain other targets that pose no danger to the vehicle or to the targets themselves, in which case neither the driver nor the targets in the blind area need to be reminded.
  • In a possible implementation, the video data includes multiple video streams captured by multiple cameras set at different positions on the traffic road. Determining the position information of the vehicle on the traffic road at the current moment or the future moment according to the video data then includes: determining, according to the multiple video streams, the positions of the vehicle in the multiple video streams at the current moment or the future moment; and determining, according to those positions, the position information of the vehicle on the traffic road at the current moment or the future moment.
  • In a possible implementation, the blind area information of the vehicle includes: the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area. According to this blind area information and the position and posture of the vehicle on the traffic road, the distribution and range of the blind areas of the vehicle on the traffic road can be determined.
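  • A minimal sketch of this placement step, assuming the shape of each blind area is stored as a polygon in the vehicle frame (x forward, y left); the field names and geometry are illustrative, and the risk_coefficient field anticipates the threshold filtering described in the implementations below:

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class BlindArea:
    shape: List[Point]             # polygon in the vehicle frame
    risk_coefficient: float = 0.0  # compared against a preset danger threshold

def place_on_road(area: BlindArea, position: Point, heading_deg: float) -> List[Point]:
    """Rotate the vehicle-frame polygon by the vehicle's heading and
    translate it to the vehicle's road position, yielding the blind
    area's position and range on the traffic road."""
    th = math.radians(heading_deg)
    px, py = position
    return [(px + x * math.cos(th) - y * math.sin(th),
             py + x * math.sin(th) + y * math.cos(th))
            for x, y in area.shape]

# A right-side blind area of a truck heading due "north" (90 degrees).
right_area = BlindArea(shape=[(0, -1), (6, -1), (6, -4), (0, -4)],
                       risk_coefficient=0.8)
print(place_on_road(right_area, position=(100.0, 50.0), heading_deg=90.0))
```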
  • In a possible implementation, the blind area information of the vehicle includes the risk coefficient of each blind area. Determining the blind area of the vehicle on the traffic road according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road at the current moment or the future moment then includes: determining the high-risk blind area information, namely the blind areas whose risk coefficient in the blind area information is greater than a preset danger threshold; and determining, according to the high-risk blind area information and the position information and posture information of the vehicle on the traffic road at the current moment or the future moment, the high-risk blind area on the traffic road at the current moment or the future moment.
  • In another possible implementation, the blind area information of the vehicle includes the risk coefficient of each blind area, and before it is determined that the vehicle has the blind area danger, the method further includes: correcting the risk coefficient of each blind area according to the blind area of the vehicle on the traffic road; and determining the high-risk blind area of the vehicle on the traffic road according to the relationship between the corrected risk coefficient and the preset danger threshold. Determining, according to the video data and the determined blind area of the vehicle on the traffic road at the current moment or the future moment, that the vehicle has the blind area danger then includes: determining, according to the position information of the other targets in the blind area on the traffic road, that the vehicle has the blind area danger in the high-risk blind area on the traffic road.
  • In yet another possible implementation, before it is determined, according to the video data and the determined blind area of the vehicle at the current moment or the future moment, that the vehicle has the blind area danger, the method further includes: determining the risk coefficient of each blind area of the vehicle according to the blind area of the vehicle on the traffic road; and determining the high-risk blind area of the vehicle on the traffic road according to the relationship between the risk coefficient and the preset danger threshold. Determining that the vehicle has the blind area danger then includes: determining, according to the position information of the other targets in the blind area on the traffic road, that the vehicle has the blind area danger in the high-risk blind area on the traffic road.
  • Determining the high-risk blind area first and then performing blind area danger detection only on the high-risk blind area can, on the one hand, save computing resources and, on the other hand, avoid warning the vehicle and other targets when there are targets in some blind areas that pose no danger, improving the accuracy of warnings and avoiding frequent warnings that disturb drivers and pedestrians.
  • In a possible implementation, the method further includes: determining the blind area information of the vehicle according to the structural attributes of the vehicle, specifically by querying a blind area information database according to the structural attributes of the vehicle to obtain the blind area information corresponding to those structural attributes in the database.
  • In one case, the structural attributes of the vehicle include the type of the vehicle; querying the blind area information database according to the structural attributes then specifically includes: inputting the structural attributes into the blind area information database to obtain the blind area information corresponding to a vehicle of the same type in the database.
  • In another case, the structural attributes of the vehicle include the length and width information of the vehicle, the type of cab, and the position of the cab; querying the blind area information database then specifically includes: inputting the structural attributes of the vehicle into the database to obtain the blind area information corresponding to a vehicle in the database whose length and width information, cab type, and cab position are similar to those of the vehicle.
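  • A toy stand-in for such a blind area information database, assuming the two query modes above (exact vehicle type first, otherwise the nearest entry by length and width with a matching cab); the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuralAttributes:
    vehicle_type: str   # e.g. "cement_truck"
    length_m: float
    width_m: float
    cab_type: str       # e.g. "flat_nose"
    cab_position: str   # e.g. "front_left"

class BlindAreaDB:
    def __init__(self, entries):
        self.entries = entries  # {StructuralAttributes: blind area info}

    def lookup(self, attrs):
        # Exact match on vehicle type takes priority.
        for known, info in self.entries.items():
            if known.vehicle_type == attrs.vehicle_type:
                return info
        # Otherwise, the closest entry by length/width with the same cab.
        candidates = [(abs(k.length_m - attrs.length_m) +
                       abs(k.width_m - attrs.width_m), info)
                      for k, info in self.entries.items()
                      if k.cab_type == attrs.cab_type
                      and k.cab_position == attrs.cab_position]
        return min(candidates, key=lambda c: c[0])[1] if candidates else None

db = BlindAreaDB({
    StructuralAttributes("cement_truck", 10.0, 2.5, "flat_nose", "front_left"):
        {"areas": 4},   # stand-in for the real blind area geometry
})
query = StructuralAttributes("unknown_truck", 9.5, 2.5, "flat_nose", "front_left")
print(db.lookup(query))  # falls back to the closest similarly sized vehicle
```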
  • In a possible implementation, the received video data is a real-time video stream, and determining the position information and posture information of the vehicle on the traffic road at the current moment or the future moment according to the video data specifically includes: determining, according to the real-time video stream, the position information of the vehicle in the video data at the current moment or the future moment; determining, according to a preset calibration relationship and the position information of the vehicle in the video data at the current moment or the future moment, the position information of the vehicle on the traffic road at the current moment or the future moment; using the position information of the vehicle on the traffic road at the current moment or the future moment to determine the trajectory information of the vehicle on the traffic road; and determining, according to the trajectory information of the vehicle, the posture information of the vehicle at the current moment or the future moment, where the posture information indicates the traveling direction of the vehicle.
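  • One common form of such a preset calibration relationship (the patent does not mandate it) is a planar homography between image pixels and the road plane; the sketch below fits one from four surveyed point correspondences with the standard direct linear transform and then maps a pixel onto the road:

```python
import numpy as np

def fit_homography(pixel_pts, road_pts):
    """Estimate the 3x3 homography H mapping pixel coordinates to
    road-plane coordinates from at least four correspondences, by
    solving the direct-linear-transform system with an SVD."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, road_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_road(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Four points marked in the image and surveyed on the road (illustrative).
H = fit_homography([(100, 400), (540, 400), (620, 80), (20, 80)],
                   [(0, 0), (3.5, 0), (3.5, 30), (0, 30)])
print(pixel_to_road(H, 320, 240))  # road coordinates of an image point
```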
  • The present application also provides a detection device, including: a target detection and tracking module, configured to receive video data captured by a camera set on a traffic road; a target positioning module, configured to determine, according to the video data, the position information of the vehicle on the traffic road at the current moment or a future moment; a posture determination module, configured to determine, according to the video data, the posture information of the vehicle on the traffic road at the current moment or the future moment; and a blind area determination module, configured to acquire the blind area information of the vehicle, which is determined by the structural attributes of the vehicle, and further configured to determine, according to the blind area information and the position information and posture information of the vehicle on the traffic road, the blind area of the vehicle on the traffic road at the current moment or the future moment.
  • In a possible implementation, the blind area determination module is further configured to determine, based on the video data and the determined blind area of the vehicle on the traffic road at the current moment or the future moment, that the vehicle has a blind area danger, that is, that there are other targets in the blind area of the vehicle on the traffic road, and to send a blind area warning to the vehicle or to the other targets in the blind area.
  • In a possible implementation, the blind area warning includes warning data, and the warning data includes one or more of the following: the position and range, on the traffic road, of the blind area in which the danger occurs; the position information of the other targets on the traffic road; and the types of the other targets.
  • In a possible implementation, when determining the position information of the vehicle on the traffic road at a future moment according to the video data, the target positioning module is specifically configured to: determine the position information of the vehicle on the traffic road at the current moment according to the video data, and predict the position information of the vehicle on the traffic road at the future moment according to the current position information. When determining the posture information of the vehicle on the traffic road at the future moment according to the video data, the posture determination module is specifically configured to: determine the current posture information of the vehicle on the traffic road according to the video data, and predict the posture information of the vehicle on the traffic road at the future moment according to the current posture information.
  • In a possible implementation, the blind area determination module is further configured to construct a visual blind area image according to the blind area of the vehicle on the traffic road at the current moment or the future moment, and to send the visual blind area image to the vehicle or other devices.
  • In a possible implementation, the blind area determination module is further configured to calculate the driving speed of the vehicle according to the position information of the vehicle on the traffic road at the current moment or the future moment, and to adjust, according to the driving speed, the blind area of the vehicle on the traffic road at the current moment or the future moment.
  • In a possible implementation, the blind area determination module is further configured to send an adjustment instruction to the vehicle with the blind area danger, where the adjustment instruction instructs the vehicle to adjust its driving route to avoid the other targets in the blind area.
  • In a possible implementation, the blind area determination module is further configured to determine a high-risk blind area among the blind areas.
  • Specifically, the blind area determination module determines the blind area risk coefficient according to the blind area information of the vehicle, where the blind area risk coefficient indicates the degree of danger of each blind area of the vehicle, and determines the high-risk blind area on the traffic road at the current moment or the future moment according to the blind area risk coefficient, the blind area information of the vehicle, and the position information and posture information of the vehicle on the traffic road.
  • In a possible implementation, the blind area determination module is further configured to determine whether there are other targets in the high-risk blind area based on the video data and the determined high-risk blind area on the traffic road at the current moment or the future moment.
  • In a possible implementation, the video data includes multiple video streams captured by multiple cameras set at different positions on the traffic road; the target detection and tracking module is further configured to determine the positions of the vehicle in the multiple video streams at the current moment or the future moment, and the target positioning module is further configured to determine, according to those positions, the position information of the vehicle on the traffic road at the current moment or the future moment.
  • In a possible implementation, the blind area information of the vehicle includes: the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area.
  • The present application also provides an in-vehicle device. The in-vehicle device is installed on a vehicle and is configured to execute the method provided by the first aspect or any one of the possible implementations of the first aspect.
  • The present application also provides a vehicle. The vehicle includes a storage unit and a processing unit; the storage unit is used to store a set of computer instructions and data sets; the processing unit executes the computer instructions stored in the storage unit and reads the data sets from the storage unit, so that the vehicle executes the method provided by the first aspect or any one of the possible implementations of the first aspect.
  • The present application also provides a system that includes at least one memory and at least one processor; the at least one memory is used to store a set of computer instructions, and when the at least one processor executes the set of computer instructions, the system implements the method provided by the first aspect or any one of the possible implementations of the first aspect.
  • The present application also provides a detection system, which is used to detect the blind area of a vehicle, and the system includes:
  • a vehicle dynamic monitoring system, used to receive video data and determine, based on the video data, the position information and posture information of the vehicle on the traffic road at the current moment or a future moment, where the video data is captured by a camera set on the traffic road; and
  • a vehicle blind area detection system, used to acquire the blind area information determined by the structural attributes of the vehicle, and determine, according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road, the blind area of the vehicle on the traffic road at the current moment or the future moment.
  • The present application provides a non-transitory readable storage medium that stores computer program code; when the computer program code is executed by a computing device, the computing device performs the method provided in the first aspect or any one of the possible implementations of the first aspect.
  • The storage medium includes, but is not limited to, volatile memory, such as random access memory, and non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • The present application also provides a computer program product. The computer program product includes computer program code; when the computer program code is executed by a computing device, the computing device performs the method provided in the first aspect or any one of the possible implementations of the first aspect. The computer program product may be a software installation package, which may be downloaded to and executed on a computing device.
  • FIG. 1 is a schematic diagram of the blind areas of different trucks provided by an embodiment of this application;
  • FIG. 2A is a schematic diagram of the deployment of a detection device provided by an embodiment of this application;
  • FIG. 2B is a schematic diagram of the deployment of another detection device provided by an embodiment of this application;
  • FIG. 2C is a schematic diagram of the deployment of yet another detection device provided by an embodiment of this application;
  • FIG. 3 is a schematic structural diagram of a computing device 100 provided by an embodiment of this application;
  • FIG. 4 is a schematic structural diagram of a training device 200 and a detection device 300 provided by an embodiment of this application;
  • FIG. 5 illustrates a method for detecting a blind area of a vehicle according to an embodiment of this application;
  • FIG. 6 is a schematic diagram of a blind area danger in a blind area of a vehicle provided by an embodiment of this application;
  • FIG. 7 illustrates a method for target detection and target tracking provided by an embodiment of this application;
  • FIG. 8 illustrates a specific method for determining the blind area of a vehicle according to an embodiment of this application;
  • FIG. 9 is a schematic diagram of a determined blind area of a vehicle according to an embodiment of this application;
  • FIG. 10 is a schematic diagram of a system provided by an embodiment of this application.
  • Various vehicles often run on traffic roads, such as engineering vehicles (for example, loading and unloading trucks, cement trucks, tank trucks, and other trucks), cars, bicycles, and buses.
  • Depending on the vehicle, the angle of view and field of view of the traffic road that the driver can observe when driving normally from the driver's seat also differ. Although every vehicle is currently equipped with left and right side mirrors, a rear-view mirror, and other tools that assist the driver in observing the driving environment around the vehicle, the driver still cannot see some areas around the vehicle when driving normally; these areas are the blind areas.
  • The blind area refers to the area around the vehicle that the driver cannot observe when driving the vehicle normally, and the position of the blind area moves as the vehicle drives.
  • Each vehicle has its own blind area information.
  • The blind area information of each vehicle refers to the parameters of the areas that the driver may be unable to observe due to factors such as the structure of the vehicle, the vehicle type, and the position of the driver's seat.
  • The blind area information includes the number of blind areas, the shape of each blind area, the position of each blind area relative to the vehicle, and so on.
  • FIG. 1 is a schematic diagram of the blind areas of two trucks. Different vehicles have different blind area information. According to the blind area information and the position information of the vehicle, the position and distribution of the driver's blind areas during driving can be determined. For engineering vehicles, because the vehicle body is usually large, the driver's vision is often blocked by the vehicle body during driving, resulting in a large blind area when the driver drives an engineering vehicle.
  • When there are dynamic hazards, such as other moving vehicles, pedestrians, or objects, in the blind area of the vehicle, or static hazards (such as construction zones or road defects), great danger is posed not only to the vehicle, its driver, and its passengers, but also to the people and things in the blind area. In this application, situations where dynamic hazards or static hazards exist in the blind area are collectively referred to as blind area dangers. Detecting the blind area of the vehicle, and further detecting blind area dangers, is of great significance to the safety of vehicles and pedestrians on the traffic road.
  • As described above, the present application provides a method for detecting a blind area of a vehicle, and the method is executed by a detection device.
  • The blind area detection function of the detection device can be realized by a software system, by a hardware device, or by a combination of a software system and a hardware device.
  • The deployment of the detection device is flexible; for example, it can be deployed in an edge environment.
  • The detection device can be an edge computing device in the edge environment, or a software device running on one or more edge computing devices.
  • The edge environment refers to a collection of devices close to the traffic road to be detected; it includes one or more edge computing devices.
  • The edge computing devices may be roadside devices with computing capabilities installed at the side of the traffic road.
  • For example, the detection device is deployed at a location close to an intersection, that is, on an edge computing device at the roadside.
  • A networkable camera is installed at the intersection; the camera captures and records video data of the vehicles passing the intersection and sends the video data to the detection device via a network.
  • The detection device detects the blind areas of the vehicles driving at the intersection according to the video data, and further performs blind area danger detection.
  • For a vehicle driving at the intersection, the detection device can send, through a network (for example, a wireless network or an Internet-of-Vehicles network), a blind area warning to the on-board system of the vehicle, so that the on-board system in the vehicle reminds the driver that there is a dangerous situation in the blind area.
  • Optionally, the detection device sends a blind area alarm to a roadside warning device, so that the roadside warning device emits warning signals such as sound or light.
  • Optionally, the detection device sends the detected alarm data and statistical data to equipment or devices in a traffic management system, so that commanders can perform corresponding command or law enforcement on the vehicles on the traffic road based on the obtained alarm data and statistical data.
  • The warning data includes one or more of the following: the position and range, on the traffic road, of the blind area in which the danger occurs; the position information of the other targets in the blind area on the traffic road; and the types of the other targets in the blind area.
  • The detection device can also be deployed in a cloud environment, which is an entity that uses basic resources to provide cloud services to users under the cloud computing model.
  • The cloud environment includes a cloud data center and a cloud service platform.
  • The cloud data center includes a large number of basic resources (including computing resources, storage resources, and network resources) owned by a cloud service provider.
  • The computing resources included in the cloud data center can be a large number of computing devices (for example, servers).
  • The detection device can be a server in the cloud data center used to detect the blind area of a moving vehicle; the detection device can also be a virtual machine created in the cloud data center for blind area detection; the detection device can also be a software device deployed on a server or virtual machine in the cloud data center, where the software device is used to detect the blind area of a driving vehicle. The software device can be deployed in a distributed manner on multiple servers, on multiple virtual machines, or on both virtual machines and servers.
  • For example, the detection device is deployed in a cloud environment, and a networkable camera set at the side of the traffic road sends the captured video data to the detection device in the cloud environment.
  • The detection device performs blind area detection and blind area danger detection on the vehicles recorded in the video according to the video data.
  • When a vehicle is detected to have a blind area danger, a blind area warning is sent to the on-board system of the vehicle, so that the on-board system in the vehicle prompts the driver that there is a dangerous situation in the blind area.
  • Optionally, the detection device sends a blind area alarm to a roadside warning device, so that the roadside warning device emits warning signals such as sound or light.
  • Optionally, the detection device sends the detected alarm data to equipment or devices in the traffic management system, so that commanders can perform corresponding command or law enforcement on the vehicles on the traffic road based on the obtained alarm data.
  • The detection device can be deployed in a cloud data center by a cloud service provider.
  • The cloud service provider abstracts the function provided by the detection device into a cloud service, and users can consult and purchase this cloud service on the cloud service platform. After purchasing the cloud service, the user can use the service provided by the detection device of the cloud data center to detect the blind area dangers of vehicles.
  • The detection device can also be deployed by a tenant in computing resources (such as virtual machines) of the cloud data center rented by the tenant. The tenant purchases, through the cloud service platform, the computing resource cloud service provided by the cloud service provider, and runs the detection device in the purchased computing resources, so that the detection device performs blind area detection on vehicles.
  • When the detection device is a software device, the detection device can be logically divided into multiple parts, each of which has a different function; for example, the detection device includes a target detection and tracking module, a target positioning module, a posture determination module, and a blind area determination module.
  • The various parts of the detection device can be deployed in different environments or on different devices, and the parts deployed in different environments or devices cooperate to realize the function of vehicle blind area danger detection.
  • For example, the target detection and tracking module of the detection device is deployed on the edge computing device, while the target positioning module, the posture determination module, and the blind area determination module are deployed in the cloud data center (for example, on a server or virtual machine).
  • The camera set at the traffic intersection sends the captured video data to the target detection and tracking module deployed on the edge computing device.
  • The target detection and tracking module detects and tracks, based on the video data, the vehicles, pedestrians, and other targets recorded in the video data, and sends the obtained position information of each target in the video at the current moment, together with the motion trajectory information formed in the video at the current and historical moments, to the cloud data center. The target positioning module, the posture determination module, and the blind area determination module deployed in the cloud data center further analyze and process data such as the position and running trajectory of each target in the video, and obtain the blind area of the vehicle (and the blind area danger determination result). It should be understood that this application neither restricts how the parts of the detection device are divided nor restricts the environment in which each part is specifically deployed.
  • When the camera is a smart camera with certain computing capabilities, the detection device can also be deployed in three parts, of which one part is deployed in the camera, one part is deployed in the edge computing device, and one part is deployed in the cloud computing device.
  • FIG. 3 provides a schematic structural diagram of a computing device 100.
  • The computing device 100 shown in FIG. 3 includes a memory 101, a processor 102, a communication interface 103, and a bus 104.
  • The memory 101, the processor 102, and the communication interface 103 communicate with each other through the bus 104.
  • The memory 101 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • The memory 101 may store computer instructions; when the computer instructions stored in the memory 101 are executed by the processor 102, the processor 102 and the communication interface 103 are used to execute the method for detecting the blind area of the vehicle.
  • The memory can also store data; for example, a part of the memory 101 is used to store the data required for detecting the blind area of the vehicle, and to store intermediate data or result data during program execution.
  • The processor 102 may adopt a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof.
  • The processor 102 may include one or more chips, and the processor 102 may include an AI accelerator, such as a neural processing unit (NPU).
  • The communication interface 103 uses a transceiver module, such as but not limited to a transceiver, to implement communication between the computing device 100 and other devices or communication networks. For example, the data required for detecting the blind area danger of the vehicle can be obtained through the communication interface 103.
  • The bus 104 may include a path for transferring information between the various components of the computing device 100 (for example, the memory 101, the processor 102, and the communication interface 103).
  • When the detection device executes the method for detecting the blind area of a vehicle provided in the embodiments of this application, it needs to adopt an artificial intelligence (AI) model.
  • There are multiple types of AI models, and the neural network model is one of them.
  • In the embodiments of this application, a neural network model is taken as an example. It should be understood that other AI models can also be used to complete the functions of the neural network model described in the embodiments of this application, which is not limited by this application.
  • A neural network model is a kind of mathematical calculation model that imitates the structure and function of a biological neural network (the central nervous system of an animal).
  • A neural network model can include a variety of neural network layers with different functions, and each layer includes parameters and calculation formulas. According to their calculation formulas or functions, different layers in the neural network model have different names; for example, a layer that performs convolution calculations is called a convolutional layer, and convolutional layers are often used to perform feature extraction on an input signal (for example, an image).
  • A neural network model can also be composed of a combination of multiple existing neural network models. Neural network models with different structures can be used in different scenarios (for example, classification or recognition) or provide different effects when used in the same scenario.
  • Differences in neural network model structure specifically include one or more of the following: the number of network layers differs, the order of the network layers differs, or the weights, parameters, or calculation formulas in each network layer differ.
  • In the embodiments of this application, two different neural network models are required to execute the method for detecting the blind area of the vehicle.
  • One is a neural network model used to detect targets in video data, which is called the target detection model.
  • The target detection model in the embodiments of this application can use any of the neural network models already used in the industry for target detection with good results, for example: the one-stage unified real-time object detection (you only look once, Yolo) model, the single-shot multibox detector (SSD) model, the region-based convolutional neural network (RCNN) model, or the fast region-based convolutional neural network (Fast-RCNN) model.
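  • As an illustration of invoking such an off-the-shelf detector (assuming a recent torchvision; the patent does not prescribe any particular library or model):

```python
import torch
import torchvision

# Any of the listed detectors could fill this role; torchvision's
# Faster R-CNN is used here purely as an example.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)        # stand-in for one decoded video frame
with torch.no_grad():
    detections = model([frame])[0]     # dict with "boxes", "labels", "scores"
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:                    # keep confident detections only
        print(int(label), [round(v, 1) for v in box.tolist()])
```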
  • The other is a model used to perform attribute detection on a detected vehicle, which is called the vehicle attribute detection model. The vehicle attribute detection model can also use any one of several existing neural network models in the industry, for example: a convolutional neural network (CNN) model, a Resnet model, a Densenet model, a VGGnet model, and so on.
  • A neural network model developed in the industry in the future that can realize target detection or vehicle attribute detection can also be used as the target detection model or the vehicle attribute detection model in the embodiments of this application, which is not limited by this application.
  • The target detection model and the vehicle attribute detection model can be trained by a training device before being used for the blind area detection of the vehicle.
  • The training device uses different training sets to train the target detection model and the vehicle attribute detection model.
  • After the training device completes the training, the trained target detection model and vehicle attribute detection model can be deployed in the target detection and tracking module of the detection device, and the detection device is then used to detect the blind area danger of the vehicle.
  • FIG. 4 provides a schematic structural diagram of a training device 200 and a detection device 300.
  • The following describes the structure and function of the training device 200 and the detection device 300 with reference to FIG. 4. It should be understood that the embodiments of this application only exemplarily divide the structure and function modules of the training device 200 and the detection device 300, and this application does not impose any restriction on the specific division.
  • The training device 200 is used to train the target detection model 203 and the vehicle attribute detection model 204 separately. Training the target detection model 203 and the vehicle attribute detection model 204 requires two training sets, called the target detection training set and the vehicle attribute detection training set, respectively. The obtained target detection training set and vehicle attribute detection training set are stored in a database.
  • A collection device can collect multiple training videos or training images, and the collected training videos or training images are processed and annotated, manually or by the collection device, to form a training set. When the collection device collects training videos, it uses the video frames in the training videos as training images, and then processes and labels the training images to construct the training set.
  • When the training device 200 starts to train the target detection model 203, the initialization module 201 first initializes the parameters of each layer in the target detection model 203 (that is, assigns an initial value to each parameter); then the training module 202 reads the training images in the target detection training set in the database and trains the target detection model 203 until the loss function of the target detection model 203 converges with a loss value less than a specific threshold, or until all the training images in the target detection training set have been used for training; the training of the target detection model 203 is then complete.
  • Similarly, when training the vehicle attribute detection model 204, the initialization module 201 first initializes the parameters of each layer in the vehicle attribute detection model 204 (that is, assigns an initial value to each parameter); then the training module 202 reads the training images in the vehicle attribute detection training set in the database and trains the vehicle attribute detection model 204 until the loss function of the vehicle attribute detection model 204 converges with a loss value less than a specific threshold, or until all the training images in the vehicle attribute detection training set have been used for training; the training of the vehicle attribute detection model 204 is then complete.
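  • A generic sketch of this training loop with the stated stopping rule (loss below a threshold, or the training set exhausted); in PyTorch, layer parameters are initialized when the model is constructed, which plays the role of the initialization module 201, and the loss function below is a placeholder, since a real detection model computes its own losses:

```python
import torch

def train_model(model, loader, epochs, loss_threshold, lr=1e-3):
    """Train until the loss drops below loss_threshold or every training
    image has been used for the given number of passes."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # placeholder loss for the sketch
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:
                return model           # loss converged below the threshold
    return model                       # training set exhausted
```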
  • Optionally, the target detection model 203 and the vehicle attribute detection model 204 can also be trained by two training devices respectively, and the target detection model 203 and/or the vehicle attribute detection model 204 may not need to be trained by the training device 200 at all; for example, the target detection model 203 and/or the vehicle attribute detection model 204 may be a neural network model that has already been trained by a third party and has good accuracy for target detection and/or attribute detection.
  • Optionally, the training sets may also be obtained directly from a third party.
  • The training images in the target detection training set in this application can have the same content as the training images in the vehicle attribute detection training set, but with different labels.
  • For example, the collection device collects 10,000 images containing targets such as vehicles, pedestrians, and static objects on various traffic roads.
  • The targets in the 10,000 images are labeled with bounding boxes, and the 10,000 training images after bounding-box labeling constitute the target detection training set.
  • Separately, the vehicles in these 10,000 images are marked with bounding boxes, with each bounding box corresponding to the attributes of the vehicle (for example, vehicle model, vehicle brand); these 10,000 training images, after bounding-box and attribute labeling, constitute the vehicle attribute detection training set.
  • Optionally, the detection device may also use only one neural network model when detecting the blind area of the vehicle, which may be called a detection and recognition model; the detection and recognition model is a neural network model that has all the functions of the target detection model 203 and the vehicle attribute detection model 204.
  • That is, the detection and recognition model can detect target positions, identify the vehicles among the detected targets, and then perform attribute detection on the identified vehicles.
  • The training of the detection and recognition model is the same as the training of the target detection model 203 and the vehicle attribute detection model 204, and will not be repeated here.
  • The target detection model 203 and the vehicle attribute detection model 204 trained by the training device 200 can be used to perform target detection and vehicle attribute detection, respectively, on video frames in the video data captured by the camera.
  • The trained target detection model 203 and the trained vehicle attribute detection model 204 are deployed to the detection device 300, specifically to the target detection and tracking module 301.
  • The detection device 300 includes a target detection and tracking module 301, a target positioning module 302, a posture determination module 303, and a blind area determination module 304.
  • The target detection and tracking module 301 is used to receive the video data captured by the camera, and the video data captured by the camera may be a real-time video stream.
  • The real-time video stream records the running status of the targets on the traffic road at the current moment; the module detects the targets in the video data and obtains the position information of each target in the video at the current moment.
  • The target detection and tracking module 301 is also used to track and fit, according to the position information of each target in the video at the current moment and the position information of the target in the video at historical moments, the running trajectory of the target in the video picture over a period of time, to obtain the motion trajectory information of each target in the video.
  • A target in this application is an entity located on a traffic road, and targets include dynamic targets and static targets on the traffic road.
  • A dynamic target is an object that moves on the traffic road over time and forms a running trajectory, including vehicles, pedestrians, animals, and so on; a static target is stationary on the traffic road for a period of time, for example, a vehicle parked at the roadside with its engine off, or a construction area formed by road construction.
  • The target detection and tracking module 301 can distinguish dynamic targets from static targets during target tracking, and can track only the dynamic targets, or track both the dynamic targets and the static targets.
  • The target detection and tracking module 301 may receive video data captured by at least one camera, and detect and track the targets in the video frames of each piece of video data.
  • Optionally, the target detection and tracking module 301 may also receive radar data sent by a radar device, and combine the video data and the radar data to jointly perform target detection and tracking.
  • The target positioning module 302 is configured to communicate with the target detection and tracking module 301, receive the motion trajectory information of each target in the video data sent by the target detection and tracking module 301, and use a pre-obtained calibration relationship to convert the motion trajectory information of each target in the video into the trajectory information of each target on the traffic road.
  • The motion trajectory information of each target in the video is a pixel-coordinate sequence, which consists of the pixel coordinates of the target in different video frames; the motion trajectory information of each target in the video indicates the running status of the target in the video during a historical period of time.
  • The trajectory information of each target on the traffic road is a geographic-coordinate sequence, which is composed of the geographic coordinates of the target on the traffic road; the trajectory information of each target on the traffic road indicates the running status of the target on the traffic road during a historical period that includes the current moment.
  • The pixel coordinates of a target are the coordinates of the pixels at the target's location in a video frame, and the pixel coordinates are two-dimensional coordinates; the geographic coordinates of a target are the coordinates of the target in any coordinate system of the physical world. For example, this application may use three-dimensional coordinates composed of the longitude, latitude, and altitude corresponding to the target's location on the traffic road.
  • The target positioning module 302 can also be used to predict, based on a target's motion trajectory information during a period that includes the current moment and historical moments, the target's position information and motion trajectory information on the traffic road at a certain moment or during a certain period in the future.
  • The posture determination module 303 is used to determine the posture of a target according to the trajectory information of the target on the traffic road or the result of target detection.
  • The posture determination module 303 may also determine only the posture of vehicles.
  • The posture of a target indicates the direction of travel of the target in the physical world.
  • The heading of the vehicle, or the tangent direction of the vehicle's trajectory on the traffic road, can be used to indicate the posture of the vehicle.
  • Similarly, the tangent direction of a pedestrian's trajectory can be used to express the posture of the pedestrian.
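  • A minimal sketch of deriving the posture from the trajectory tangent (the function name and the jitter-damping window are illustrative choices):

```python
import math

def heading_deg(track, k=3):
    """Approximate the tangent direction at the newest point of a
    road-plane trajectory by differencing across the last k segments;
    averaging over several segments damps detection jitter.
    0 degrees points along +x, measured counter-clockwise."""
    pts = track[-(k + 1):]
    dx = pts[-1][0] - pts[0][0]
    dy = pts[-1][1] - pts[0][1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

print(heading_deg([(0, 0), (1, 0.9), (2, 2.1), (3, 3)]))  # 45.0
```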
  • The blind area determination module 304 is used to determine the structural attributes of the vehicle (for example, the model of the vehicle, the shape of the vehicle), determine the blind area information of the vehicle from the blind area information database according to the structural attributes of the vehicle, and further determine, according to the blind area information of the vehicle, the blind area of the currently driving vehicle (including determining the position of the blind area on the traffic road and the distribution range of the blind area).
  • The blind area determination module 304 is also used to determine whether the vehicle has a blind area danger within the determined blind area range (that is, whether there are other targets in the blind area at the current moment), and if so, to send a blind area warning to the vehicle with the blind area danger.
  • The blind area determination module 304 may also be used to send alarm data to the traffic management system or to roadside alarm equipment.
  • The blind area determination module 304 can also be used to compile statistics on the blind area danger data of the traffic road over a period of time, and to send the statistical data to the traffic management system.
  • the detection device provided by the embodiments of the present application can be used to detect, in real time, the blind spots of vehicles in a certain geographic area at the current time or at a future time, and to promptly warn vehicles and pedestrians in danger. Early warning can effectively reduce the traffic hazards caused by the blind spots of vehicles.
  • S401 Receive video data, perform target detection and target tracking according to video frames in the video data, and obtain target type information and motion track information of the target in the video screen.
  • the video data captured by a camera set at a fixed position in a traffic road is received.
  • the video data in this application may be a video stream captured by a camera in real time.
  • S401 acquires the video stream at the current moment in real time and detects the targets in it. According to the position information of each target in the video stream at the current moment and its position information at historical moments, the motion track information of the target in the video picture over a historical period including the current moment is determined. The position information of the target at historical moments was obtained when those moments were detected, and is stored in the detection device or in another readable device.
  • Target detection needs to adopt a target detection model. Specifically, the images in the received video data are input to the target detection model, and the model detects the targets in each image and outputs the location information and type information of each detected target. Further, according to the position information and type information of the target in multiple images over a continuous historical period, and a target tracking algorithm, the running track of the target in the video picture is tracked to obtain the motion track information of the target in the video picture.
  • the motion trajectory information of the target in the video picture represents the motion trajectory of the target in the video picture in a historical time including the current moment, and the end of the motion trajectory information (that is, the last pixel coordinate value in the pixel coordinate sequence) is The position of the target in the video frame at the current moment.
  • Optionally, S401 can also receive radar data sent by radar equipment (such as lidar or millimeter-wave radar) installed on the traffic road, and S402 can combine the target information included in the radar data (for example, target position information and target contour information) with the information obtained from detecting and tracking targets in the video picture, to obtain the position information and type information of the target in the image and the motion track information of the target in the video picture.
  • S402 Convert the movement track information of the target in the video picture into the movement track information of the target on the traffic road.
  • the motion track information of the detected multiple targets in the video screen is obtained.
  • the motion track information of the target in the video data is the pixel coordinate sequence corresponding to the target; the pixel coordinate sequence includes multiple pixel coordinates, and each pixel coordinate represents the position of the target in the image of one video frame.
  • Converting the target's motion trajectory information in the video picture into the target's motion trajectory information on the traffic road means, in other words, converting each pixel coordinate in the pixel coordinate sequence corresponding to the target into a geographic coordinate, where each geographic coordinate represents the actual position of the target on the traffic road in the physical world.
  • the conversion of pixel coordinates into geographic coordinates requires a calibration relationship, which is the mapping relationship between the video picture taken by the camera set on the traffic road and the traffic road in the physical world, that is, the mapping relationship between the pixel coordinates of each pixel in the video picture and the geographic coordinates of the corresponding point on the traffic road in the physical world. Through the calibration relationship and the pixel coordinates in the motion track information of the target obtained in S401, the geographic coordinates of the target on the traffic road can be calculated; the process of calculating the geographic coordinates of a target may also be called target positioning.
  • the calibration relationship between the video image captured by each camera and the traffic road in the physical world needs to be calculated in advance.
  • the method for calculating the calibration relationship may be:
  • Step 1: Select control points on the traffic road and collect their geographic coordinates. A control point is usually a sharp point of a background object on the traffic road, so that the position of the control point's pixel can be intuitively found in the video frame. The geographic coordinates (longitude, latitude, altitude) of the control points can be collected manually or by unmanned aerial vehicles. The selected control points need to be evenly distributed on the traffic road, and the number of control points should be chosen with reference to the actual situation.
  • Step 2: Obtain the pixel coordinates of each captured control point in the video frame captured by the camera. Specifically, read the video of the traffic road taken by the camera fixedly set on the traffic road, and obtain the pixel coordinate value corresponding to each control point in any video frame taken by that camera. This can be done manually, that is, by observing the pixel corresponding to the control point in the video frame and recording the pixel coordinates of that pixel. The pixel coordinate value corresponding to a control point can also be obtained by a program, for example, using corner detection, a short-time Fourier transform edge extraction algorithm, and sub-pixel coordinate fitting to obtain the pixel coordinates corresponding to the control point in the video frame.
  • Step 3: According to the principle of homography transformation, calculate the homography transformation matrix H that converts pixel coordinates to geographic coordinates. The H matrix corresponding to the video data captured by a camera can be calculated from the pixel coordinates (x0, y0) and geographic coordinates (m0, n0, h0) of at least three control points. It should be understood that the H matrix corresponding to the video data captured by each camera set at a different position on the traffic road is different.
  • the calibration matrix obtained through the above steps 1-3 is the calibration relationship between the video picture taken by the camera and the traffic road.
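  • As an illustrative sketch only (not part of the original description), steps 1-3 and the subsequent pixel-to-geographic conversion could be implemented with OpenCV roughly as follows, assuming a planar road surface so that a 2D homography between pixel coordinates and (longitude, latitude) suffices, with altitude treated as constant; all point values are hypothetical, and note that OpenCV's findHomography call requires at least four point correspondences.

```python
import numpy as np
import cv2

# Steps 1-2 (assumed example data): pixel coordinates of control points in a
# video frame and their surveyed geographic coordinates (longitude, latitude).
pixel_pts = np.array([[102, 540], [880, 512], [955, 130], [60, 180]], dtype=np.float64)
geo_pts = np.array([[116.3975, 39.9087], [116.3981, 39.9088],
                    [116.3982, 39.9094], [116.3976, 39.9093]], dtype=np.float64)

# Step 3: estimate the homography H mapping pixel coordinates to geographic
# coordinates (in practice a local metric frame may condition this better).
H, _ = cv2.findHomography(pixel_pts, geo_pts)

def pixel_to_geo(pixel_xy):
    """Convert one pixel coordinate to a geographic coordinate using H."""
    p = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    m = H @ p
    return m[:2] / m[2]  # normalize the homogeneous coordinate

# Target positioning (S402): convert a pixel-coordinate trajectory into the
# corresponding geographic-coordinate trajectory on the traffic road.
pixel_track = [(410, 300), (420, 310), (431, 322)]
geo_track = [pixel_to_geo(p) for p in pixel_track]
```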
  • the geographic coordinates corresponding to each pixel coordinate of the target in different video frames can be obtained.
  • It can be seen from the above that each pixel coordinate in the motion trajectory information of the target corresponds to one geographic coordinate. After calculating the geographic coordinate corresponding to each pixel coordinate in the target's motion track information in the video picture, the geographic coordinate sequence of the target can be obtained. It should be understood that the position of each geographic coordinate in the target's geographic coordinate sequence is the same as the position of its corresponding pixel coordinate in the pixel coordinate sequence.
  • the movement trajectory information of the target on the traffic road indicates the movement trajectory of the target on the traffic road in a period of time including the current moment, and the end of the movement trajectory information (i.e., the last geographic coordinate value in the geographic coordinate sequence) is the position of the target on the traffic road at the current moment.
  • this step S402 converts the motion trajectory information of each target in the video image obtained in the foregoing S401 to obtain the motion trajectory information of each target on the traffic road.
  • Optionally, curve fitting can be performed on the target's motion trajectory on the traffic road according to the target's motion trajectory information. According to the distribution of the coordinate points in the geographic coordinate sequence of the trajectory information and the time information corresponding to each coordinate point, an appropriate function is selected, the coordinate points corresponding to each moment in the geographic coordinate sequence are used to calculate the parameters of the function, and a fitting function is finally obtained. The fitting function is a function between time information and geographic coordinate information.
  • the geographic location of the target at the future time can be predicted according to the obtained fitting function, that is, the location information of the target on the traffic road at the future time can be obtained.
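  • A minimal sketch of this fitting and prediction, assuming polynomial fitting functions (the description leaves the function family open) and hypothetical trajectory values:

```python
import numpy as np

# Assumed example trajectory: timestamps (seconds) and the geographic
# coordinate sequence (longitude, latitude) produced by S402.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
lon = np.array([116.39750, 116.39755, 116.39761, 116.39768, 116.39776])
lat = np.array([39.90870, 39.90872, 39.90875, 39.90879, 39.90884])

# Fit one low-order polynomial per coordinate, so that geo = f(t).
f_lon = np.polynomial.Polynomial.fit(t, lon, deg=2)
f_lat = np.polynomial.Polynomial.fit(t, lat, deg=2)

# Predict the target's position on the traffic road 1 second in the future.
t_future = t[-1] + 1.0
predicted_position = (f_lon(t_future), f_lat(t_future))
```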
  • Optionally, the foregoing step S401 may separately obtain the video data taken by multiple cameras set on the traffic road (for example, for a traffic intersection with four directions, the video data shot by the cameras set in the four directions can be obtained separately), perform the target detection and target tracking described in S401 on each video data, and obtain the position information and type information of the multiple targets in each image and the motion track information of each target in the video picture.
  • Then the method of S402 is executed to convert the motion trajectory information of each target in each video picture into the motion trajectory information of each target on the traffic road (the calibration relationship between the video picture taken by each camera and the traffic road needs to be calculated in advance). Further, targets whose motion trajectory information on the traffic road overlaps (or is similar) are determined to be the same target on the traffic road, that is, the same target captured by different cameras, so that one target corresponds to multiple movement trajectories. The multiple trajectories are fused, for example, by taking the average value of each geographic coordinate across the multiple geographic coordinate sequences; the average values form a new geographic coordinate sequence, and the new geographic coordinate sequence is determined as the target's motion track information on the traffic road.
  • Subsequently, S403-S405 are executed for each target to complete the vehicle blind spot determination and blind spot risk judgment.
  • the method of blind spot detection and blind spot hazard determination based on video data captured by multiple cameras can avoid the problem of the limited field of view of a single camera on the traffic road, and can further avoid the problem that some targets are occluded in the video picture captured by a single camera, so that occluded targets in a blind area cannot be detected. Further, the video data captured by multiple cameras can also be used to obtain the complete movement trajectory of a target on the traffic road within a period of time, so that the detection device can judge the posture of the vehicle according to the complete movement trajectory in the subsequent steps.
  • S403 Determine the posture of the target according to the trajectory information of the target on the traffic road or the result of target detection.
  • the posture of the target is the traveling direction of the target on the traffic road.
  • A method for determining the posture of the vehicle is to use the trajectory information (i.e., the geographic coordinate sequence) of the vehicle on the traffic road obtained in S402 to perform trajectory fitting, and to use the tangent direction of the trajectory as the posture of the vehicle; specifically, the tangent direction at the end of the trajectory is used (because the fitted trajectory has a time sequence, the point on the trajectory corresponding to the current moment can be called the end).
  • For other targets, this method can also be used to determine the posture and obtain the posture information of the target.
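  • A small sketch of this tangent-based posture determination, reusing polynomial fitting as above (hypothetical values; the angle convention here, degrees counterclockwise from due east, is an illustrative choice):

```python
import numpy as np

# Assumed geographic trajectory (longitude, latitude) with timestamps, as in S402.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
lon = np.array([116.39750, 116.39755, 116.39761, 116.39768, 116.39776])
lat = np.array([39.90870, 39.90872, 39.90875, 39.90879, 39.90884])

f_lon = np.polynomial.Polynomial.fit(t, lon, deg=2)
f_lat = np.polynomial.Polynomial.fit(t, lat, deg=2)

# The posture is the tangent direction at the end of the fitted trajectory
# (the point corresponding to the current moment): differentiate each fitted
# polynomial and convert the tangent vector to an angle with atan2.
t_now = t[-1]
heading_deg = np.degrees(np.arctan2(f_lat.deriv()(t_now), f_lon.deriv()(t_now)))
```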
  • Another optional method is to use a three-dimensional detection method to detect the three-dimensional contour of the vehicle in the video, convert the contour of the vehicle into the contour information of the vehicle on the traffic road using the aforementioned method of S402, and determine the posture of the vehicle based on the contour information and the direction of the entrance and exit lanes on the traffic road.
  • For example, if the direction of the long side of a car in its contour information is northwest-southeast, and the traffic direction of the entrance/exit lane corresponding to the vehicle's position at the intersection is from southeast to northwest, the posture of the vehicle is determined to be the northwest direction according to the direction of the long side of the vehicle and the direction of the entrance and exit lanes on the traffic road. It is worth noting that information such as the direction of the entrance and exit lanes on the traffic road can be obtained by the detection device from other devices in advance.
  • the vehicle's movement trajectory curve in the future can be obtained according to the fitting function.
  • According to the trajectory curve for a period of time in the future, the tangent direction of the curve can be determined as the posture of the vehicle at the future time, and the posture information of the vehicle at the future time is thereby obtained.
  • S404 Determine the blind area of the vehicle.
  • the type of the target on the traffic road, the position information of the target on the image and the traffic road, and the posture information of the target are obtained. According to one or more of these information, the blind zone estimation of the vehicle can be performed, and the blind zone position information and blind zone range information of the vehicle at a certain position at the current time or in the future can be obtained.
  • Blind zone estimation of a vehicle mainly includes: detecting vehicle attributes to determine the structural attributes of the vehicle; searching the blind zone information database according to the structural attributes to determine the blind zone information of the vehicle; and, according to the blind zone information of the vehicle and the position information and posture information of the vehicle on the traffic road at the current or future time, determining the blind area of the vehicle on the traffic road at the current or future time, thereby obtaining the blind area distribution information and blind area location information of the vehicle.
  • the blind zone information database is a pre-built database that stores the structural attributes of various vehicles and the blind zone information corresponding to the vehicles of each structural attribute.
  • the blind zone information database can be constructed by manually collecting data in advance, or it can be purchased from a third party.
  • the blind zone information database may be a database deployed in the detection device, or may be a database outside the detection device and capable of data communication with the detection device.
  • description is made by taking the blind area information database as a database in the detection device as an example.
  • Optionally, a visual blind spot image can be constructed by combining the structural attributes of the vehicle, the position information of the vehicle on the traffic road, and the position and distribution of the blind area of the vehicle on the traffic road. For example, a real-world map can be obtained; the model of the vehicle is determined according to its structural attributes; the vehicle model is mapped to the corresponding position on the real-world map according to the vehicle's position information on the traffic road; and the blind area of the vehicle is mapped to the corresponding position on the map according to the location and distribution of the blind area on the traffic road.
  • the obtained visual blind spot image may be a graphical user interface (GUI), and the GUI may be sent to other display devices, for example, sent to the vehicle-mounted display device of the corresponding vehicle, or sent to the display device in the traffic management system.
  • S405 Perform a blind zone risk judgment on the blind zone of the vehicle, and send a blind zone warning to the vehicle or other vehicles or people in the blind zone when the blind zone is dangerous.
  • Specifically, the positions of other targets on the traffic road, or the motion trajectory information of other targets on the traffic road obtained in the foregoing steps S401-S402, can be used to determine whether the positions of other targets at the current moment or at a future moment fall within the blind zone of the vehicle; in other words, whether there are other targets in the blind zone can be detected according to the position and range of the blind zone at the current or future time.
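  • As a hedged sketch of this blind zone danger judgment (the geometry, identifiers, and values below are hypothetical, not from the original description): each blind zone can be represented as a polygon in geographic coordinates, and the check reduces to a point-in-polygon test for the current (or predicted) position of each other target, for example with the shapely library:

```python
from shapely.geometry import Point, Polygon

# Hypothetical blind zone of one vehicle, as a polygon of (lon, lat) vertices
# produced by S404, and the current (or predicted) positions of other targets.
blind_zone = Polygon([(116.39760, 39.90870), (116.39770, 39.90870),
                      (116.39770, 39.90880), (116.39760, 39.90880)])
other_targets = {"pedestrian_7": (116.39765, 39.90874),
                 "bicycle_3": (116.39790, 39.90890)}

# Blind zone danger judgment: is any other target inside the blind zone?
targets_in_zone = [tid for tid, (lon, lat) in other_targets.items()
                   if blind_zone.contains(Point(lon, lat))]
if targets_in_zone:
    # In the described flow, this would trigger a blind zone warning to the
    # vehicle, roadside warning equipment, or the traffic management system.
    print("blind zone danger:", targets_in_zone)
```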
  • the blind zone warning includes warning data, which can include one or more of the following: the position of the blind zone where the blind zone danger occurs, the position of the target in the blind zone, and the type of the target in the blind zone, so that the in-vehicle system can remind the driver that there is a dangerous situation in the blind spot.
  • the detection device may send an alarm to the roadside warning device, so that the roadside warning device emits warning signals such as sound or light.
  • the detection device can also send the detected alarm data to the equipment or device in the traffic management system, so that the commander can conduct corresponding command or law enforcement on the vehicles on the traffic road based on the obtained alarm data.
  • the detection device can also record the blind zone risk data of each vehicle at historical moments.
  • the blind zone risk data can include: the vehicle model where the blind zone danger occurs, the time of occurrence, the position of the blind zone where the blind zone danger occurs, and the target information in the blind zone.
  • Optionally, the detection device can also collect statistics on the blind zone risk data of vehicles at the current moment and at historical moments, obtain statistical data, and send the statistical data to the traffic management platform, so that the traffic management platform can perform risk assessment of the traffic area, carry out adaptive planning and management, or introduce corresponding regulatory strategies based on the statistical data.
  • Optionally, the visual blind zone image can be further updated on the basis of the previously constructed visual blind zone image: the models corresponding to the other targets in the blind zone are projected to the corresponding positions in the visual blind zone image, so that the resulting image intuitively reflects that the vehicle faces a blind zone danger and can indicate whether the blind zone danger exists at the current time or at a future time.
  • Optionally, the detection device can also perform route re-planning for the vehicle based on the obtained current and future position information of other targets in the blind zone, as well as the current driving route of the vehicle, to prevent the vehicle from colliding with the targets in the blind area and causing a traffic accident.
  • After the detection device performs route planning, it can generate an adjustment instruction.
  • the adjustment instruction can include the new driving route information planned by the detection device.
  • the detection device further sends the adjustment instruction to the vehicle, so that the vehicle can promptly adjust its driving route after receiving the adjustment instruction; for example, after an autonomous vehicle receives the adjustment instruction, it continues to drive according to the new driving route in the adjustment instruction to avoid the blind spot danger.
  • the detection device sends the adjustment instruction to the traffic management platform, and the traffic management platform commands the vehicles.
  • Optionally, the detection device can also plan a route for other targets in the blind zone and generate adjustment instructions according to the planned new route information; the adjustment instructions include the new route information and are sent to the other targets in the blind zone (for example, self-driving vehicles), so that those targets can adjust their future routes according to the adjustment instructions, which can also eliminate the blind spot danger.
  • Optionally, blind zones with a higher risk coefficient can be selected according to the risk factor of each blind zone in the vehicle's blind zone information, and the blind zone hazard judgment can be made only for these high-risk blind zones using their blind zone information. For example, if blind zones whose risk factor is greater than a preset danger threshold are determined as high-risk blind zones, then when making the blind zone risk judgment it is only necessary to judge whether there are other targets in the high-risk blind zones.
  • Optionally, the risk factor of a blind zone can be corrected according to the position and range of the blind zone on the traffic road, so that the revised risk factor more accurately reflects the risk of the vehicle's blind zone on the traffic road; then, according to the relationship between the revised risk factor and the preset risk threshold (for example, determining blind zones whose revised risk coefficient is greater than the preset risk threshold as the vehicle's high-risk blind zones on the traffic road), the blind zone risk judgment is performed.
  • The above method of judging blind zone danger only in high-risk blind zones with a higher risk factor can save the detection device's judgment time, because some blind zones with a lower risk factor are less likely to contain other targets. On the other hand, it can also make the blind zone warning data sent by the detection device more accurate: even if there are other targets in a blind zone of the vehicle with a small risk factor, the possibility of the vehicle colliding with or scraping those targets while driving is very small; therefore, the presence of other targets in such blind zones is not considered a blind zone hazard, and there is no need to send warning data to the vehicle or the traffic management system.
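  • A trivial sketch of this filtering step (field names and the threshold are assumptions for illustration):

```python
# Hypothetical blind zone records for one vehicle, as returned from the blind
# zone information database; fields and values are illustrative only.
blind_zones = [
    {"id": 1, "position": "right-rear", "risk_factor": 0.9},
    {"id": 2, "position": "front-near", "risk_factor": 0.2},
    {"id": 3, "position": "left-rear", "risk_factor": 0.7},
]

DANGER_THRESHOLD = 0.6  # assumed preset danger threshold

# Keep only high-risk blind zones; the blind zone danger judgment (e.g. the
# point-in-polygon test above) is then run only on this subset.
high_risk_zones = [z for z in blind_zones if z["risk_factor"] > DANGER_THRESHOLD]
```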
  • blind spot detection and blind spot hazard judgment can be realized for the vehicles running on the traffic road, so that the driver driving the vehicle can learn about the blind spot danger in time and carry out timely danger avoidance.
  • the traffic management system can also alert targets (such as pedestrians and non-motor vehicles) in the blind area of the vehicle based on the warning data, for example through roadside warning equipment such as horns, warning lights, and buzzer alarms.
  • the aforementioned steps S401-S403 of this application can be performed on all targets on the traffic road, and the aforementioned steps S404-S405 can be performed on all motor vehicles on the traffic road, so that all vehicles can learn about the dangerous situation of the blind zone in time.
  • Optionally, the aforementioned steps S404-S405 can also be performed only for specific types of vehicles on the traffic road; for example, blind spot detection and blind spot risk judgment are performed only on large construction vehicles, because this type of vehicle is more likely to be involved in blind spot danger events.
  • Specifically, the type of each vehicle can be detected during the target detection in the aforementioned step S401; the vehicle types subject to blind spot detection and blind spot hazard detection are then determined before executing S404, and the subsequent steps S404 and S405 are executed only for vehicles of those types.
  • the present application can perform the method of steps S401-S405 on the video frame (or the video frame at a fixed time interval) captured by the camera at each moment, that is, the detection device of the present application can realize continuous and real-time vehicle detection.
  • step S401 is described below in detail with reference to FIG. 7:
  • S4011 Receive video data, extract images (that is, video frames) in the video data, and standardize the size of the images. It should be understood that the video data is a video stream on a traffic road captured by a camera in real time, and processing the image in the video data can be understood as processing the picture on the traffic road at the current moment.
  • the purpose of standardizing the size of the image in this step is to make the size of the standardized image adaptable to the input of the target detection model.
  • This application does not specifically limit the method for standardizing the size of the image.
  • a method of stretching or compressing the size or a method of filling or cropping can be used.
  • S4012 Input the standardized image to the target detection model, and obtain the position information and type information of the target in the image.
  • the target detection model performs feature extraction on the input image, and further detects the target in the image based on the extracted features.
  • the target detection model outputs the location information and type information of the targets detected in the image; for example, the model outputs an output image in which each detected target is framed by a rectangular frame, and each rectangular frame is associated with the type information of the target.
  • the position information of the target in the image is the pixel coordinates of one or more points in the image; for example, it can be the pixel coordinates of the rectangular frame corresponding to the target, or the pixel coordinates of the center or lower-left corner of that rectangular frame.
  • the type information of the target is the type to which the target belongs, for example: pedestrians, vehicles, or static objects.
  • the target detection model can perform target detection on the input image because the target detection model is trained by the target detection training set before the target detection is performed.
  • For example, if the target detection model is to detect pedestrians, vehicles, and static objects on traffic roads, it is necessary to use multiple images of pedestrians, vehicles, and static objects in the target detection training set to train the model, and each image in the training set is annotated; specifically, the pedestrians, vehicles, or static objects contained in each image are framed by rectangular boxes, and each rectangular box corresponds to the type information of the target inside it. Because the target detection model repeatedly learns the characteristics of each type of target during training, the trained model has the ability to detect pedestrians, vehicles, and static objects in an input image.
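  • For illustration only, a generic pretrained detector can stand in for the target detection model described here (the original description does not prescribe any particular model or framework); this sketch uses torchvision's Faster R-CNN, whose COCO label set happens to include pedestrians and several vehicle classes:

```python
import torch
import torchvision

# Load a generic pretrained detector (downloads weights on first use).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One standardized video frame as a float tensor in [0, 1], shape (3, H, W);
# a random tensor stands in for a real decoded frame here.
frame = torch.rand(3, 720, 1280)

with torch.no_grad():
    detections = model([frame])[0]

# Each detection: a rectangular box (pixel coordinates), a class label, and a
# confidence score - i.e. the position and type information used in S4012.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:
        print(label.item(), box.tolist())
```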
  • S4012 performs target detection on the images in the continuously received video data from S4011 (or on images at fixed time intervals); therefore, after S4012 has been executed for a period of time, the position information and type information of the targets in multiple images at consecutive moments can be obtained. Each image corresponds to a time stamp, and the images can be sorted in time order by their time stamps.
  • S4013 Track the moving target according to the position information and type information of the target in the image detected in S4012, and determine the moving track of the target in the video frame in the historical time period including the current moment.
  • Target tracking refers to associating the targets in two images at adjacent moments (or two images separated by a fixed time interval) in the video data, determining which targets in the two adjacent images are the same target in the physical world, so that the two targets correspond to the same target ID, and recording the pixel coordinates of the target ID in the image at the current moment in the target trajectory table.
  • the target trajectory table records, for each target in the area captured by the camera, the pixel coordinates of the target at the current and historical moments (the trajectory of the target can be fitted from the pixel coordinates of the target at the current and historical moments).
  • Specifically, the type information and location information of the targets obtained in step S4012 are compared with the type information and location information of the targets in the buffered, already-processed video frame at the previous moment, to determine the association between the targets in the two images at adjacent moments (or at two moments separated by a fixed time interval); that is, targets judged to be the same target in the two images are marked with the same target ID, and each target ID corresponds to the pixel coordinates of that target in each image.
  • multiple pixel coordinates corresponding to a target ID can be obtained to form a pixel coordinate sequence, and the pixel coordinate sequence is the motion track information of the target.
  • Step 1: Target matching. According to the location information of the detected target in the image at the current moment (that is, the pixel coordinates of the target in the image) and its type information, the detected target in the current image is matched with the targets in the image at the previous moment (or the image a fixed time interval earlier); for example, the target ID of a target in the current image is determined according to the overlap ratio between the rectangular frame of the target in the current image and the rectangular frames of the targets in the previous image (a small matching sketch based on this overlap ratio is given after step 5 below).
  • Step 1 and the subsequent steps are performed for each target detected in the current image.
  • Step 2: If one or more targets in the current image do not match any target in the image at the previous moment in step 1 (that is, the one or more targets are not found in the previous image; for example, a vehicle has just entered the area of the traffic intersection captured by the camera at the current moment), the one or more targets are determined to be new targets on the traffic road at the current moment, and a new target ID is set for each such target, where the target ID uniquely identifies the target; the target ID and its pixel coordinates at the current moment are recorded in the target trajectory table.
  • Step 3: If a target recorded in the target trajectory table is not matched in the current image (for example, the target is partially or completely occluded by another target at the current moment, or the target has left the area of the traffic road captured by the camera), the target's pixel coordinates at historical moments recorded in the target trajectory table are used to predict the target's pixel coordinates in the image at the current moment (for example, using three-point extrapolation, a trajectory fitting algorithm, etc.).
  • Step 4: Judge, based on the predicted pixel coordinates, whether the predicted target has left the view of the camera at the current moment; if it has, delete the target ID and its corresponding data from the target trajectory table.
  • Step 5: When it is determined in step 4 that the predicted target is still within the image at the current moment, record the predicted pixel coordinates of the target in the target trajectory table.
  • The aforementioned steps 1-5 can be performed for every target detected in the image at each moment (or in the images at fixed time intervals) in the video data captured by the camera, or they can be executed only for those targets whose detection result obtained in the foregoing S4012 is non-static.
  • After the above steps, the pixel coordinate sequences corresponding to multiple target IDs can be obtained; each pixel coordinate sequence is the motion track information of a target in the video picture. If the target tracking operation is also performed on a static target, a pixel coordinate sequence of the static target can be obtained; since the target is static, the pixel coordinates in its sequence will cluster near a single point.
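  • The overlap-ratio matching mentioned in step 1 could be sketched as follows, assuming axis-aligned rectangular frames (x1, y1, x2, y2) in pixel coordinates; the greedy assignment and the threshold value are illustrative simplifications, not the original algorithm:

```python
def iou(box_a, box_b):
    """Intersection-over-union (overlap ratio) of two rectangles."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def match_targets(current_boxes, previous_tracks, iou_threshold=0.3):
    """Give each current detection the target ID of the best-overlapping
    previous-frame box (step 1); unmatched detections get new IDs (step 2)."""
    assignments = {}
    next_id = max(previous_tracks, default=0) + 1
    for i, box in enumerate(current_boxes):
        best_id, best_iou = None, iou_threshold
        for tid, prev_box in previous_tracks.items():
            overlap = iou(box, prev_box)
            if overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is None:  # no sufficient overlap: treat as a new target
            best_id, next_id = next_id, next_id + 1
        assignments[i] = best_id
    return assignments
```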
  • The specific method of step S404 in an embodiment is described below in detail with reference to FIG. 8:
  • S4041: Detect the structural attributes of the vehicle according to a vehicle attribute detection model. The foregoing step S401 detects the targets recorded in the video data captured by the camera set on the traffic road, where the targets include vehicles, pedestrians, and so on; for targets of the vehicle type, the structural attributes of the vehicle are further detected by the vehicle attribute detection model, in one of two optional ways.
  • One way is to input each frame of image (or images at fixed time intervals) in the video data into the trained vehicle attribute detection model; the vehicle attribute detection model detects the structural attributes of the vehicles in the input image, and obtains the position information of each vehicle in the image and the structural attributes of each vehicle.
  • The other way is to crop the rectangular frame corresponding to each vehicle from the image according to the position information of the targets whose type is vehicle detected in the foregoing step S401, and input the cropped vehicle sub-images into the vehicle attribute detection model; the structural attributes of each vehicle are then obtained from the vehicle attribute detection model. By combining the target detection model and the vehicle attribute detection model, the position information of the target in the image, the target type information, and the structural attributes of the vehicle can be obtained.
  • Which method is used to obtain the structural attributes of the vehicle can be determined according to the type of input image required by the trained vehicle attribute detection model.
  • the vehicle attribute detection model is a neural network model.
  • the structural attributes of the vehicle detected by the vehicle attribute detection model can be the type of the vehicle, such as the model of the vehicle or the sub-category of the vehicle; which attributes are detected depends on the function of the vehicle attribute detection model after training.
  • For example, if the training images used include vehicles and the vehicles in the training images are annotated with the vehicle model (for example, a Changan 20-ton truck, a BYD 30-seat passenger car, a Mercedes-Benz C200 sedan, etc.), the trained vehicle attribute detection model can be used to detect the model of a vehicle; if the training images contain vehicles annotated with the sub-category of the vehicle (for example, a 7-seat commercial vehicle, a 4-seat car, a 20-ton truck, a 10-ton cement truck, etc.), the trained vehicle attribute detection model can be used to detect the sub-category of a vehicle.
  • Optionally, the structural attributes of the vehicle can also include the length and width information of the vehicle, the type and location of the vehicle cab, and so on; correspondingly, the vehicle attribute detection model can also be used to detect such information. It should be understood that by detecting the structural attributes of the vehicle, one or more structural attributes such as the model of the vehicle, the sub-category of the vehicle, the length and width of the vehicle, and the type and location of the cab of the vehicle can be obtained.
  • It should be noted that step S4041 can also be performed immediately after the target detection in S401; the obtained vehicle structural attributes, together with the target type and target location information obtained in the target detection stage, can be stored in the target detection and tracking module or other modules of the detection device, or stored in other storage devices readable by the detection device.
  • S4042 Query the blind area information database according to the structural attributes of the vehicle to obtain the blind area information of the vehicle.
  • the blind spot information database may be a relational database, and the blind spot information database provides a query interface.
  • the structural attributes of the vehicle obtained in the foregoing steps can be sent to the interface; the interface queries the corresponding blind zone information in the blind zone information database according to the structural attributes of the vehicle, and the blind zone information database returns the query result through the interface.
  • the query result is the blind zone information of the vehicle corresponding to the vehicle structure attribute, which can include: the position of the blind zone relative to the vehicle, the shape of the blind zone, and the number of blind zones.
  • the position of a blind zone relative to the vehicle can be expressed as the offsets of key points in the blind zone from the center point of the vehicle; for example, the position of a rectangular blind area relative to the vehicle is the offset length and direction of the four corner points of the rectangular blind area relative to the center point of the vehicle.
  • the query result may also include: the area of the blind zone and the risk factor of the blind zone, where the risk factor of the blind zone is the probability that a dangerous situation may occur in the blind zone, and is used to indicate the degree of danger of the blind zone.
  • the risk factor of a blind zone can be obtained by the blind zone information database by comprehensively judging the area, location, shape, and other factors of the blind area of the vehicle; for example, a blind area that is large and located diagonally behind the vehicle can be judged to have a higher risk factor.
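  • The relational query described above might look roughly like this; the schema, table name, and stored values are entirely hypothetical, since the original description does not specify them:

```python
import sqlite3

# Hypothetical schema for the blind zone information database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE blind_zone_info (
    vehicle_model TEXT, zone_index INTEGER, shape TEXT,
    offsets TEXT, area_m2 REAL, risk_factor REAL)""")
conn.execute("INSERT INTO blind_zone_info VALUES "
             "('changan_truck_20t', 1, 'rectangle', "
             "'[[1.0,2.0],[3.0,2.0],[3.0,5.0],[1.0,5.0]]', 6.0, 0.9)")

def query_blind_zones(vehicle_model):
    """Query interface: return all blind zone records for one vehicle model;
    an empty result corresponds to the 'query failure' case described below."""
    return conn.execute(
        "SELECT zone_index, shape, offsets, area_m2, risk_factor "
        "FROM blind_zone_info WHERE vehicle_model = ?", (vehicle_model,)
    ).fetchall()

print(query_blind_zones("changan_truck_20t"))
```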
  • In one case, the blind spot determination module of the detection device sends the model of the vehicle or the sub-category of the vehicle to the interface of the blind spot information database, and the blind spot information database queries the blind spot information corresponding to that model or sub-category; the blind spot information is returned through the interface as the query result.
  • the query result can include: the position of the blind spot relative to the vehicle, the shape of the blind spot, and the number of blind spots; optionally, the query result can also include: the area of the blind spot and the risk factor of the blind spot.
  • In another case, the blind spot determination module of the detection device sends the model of the vehicle or the sub-category of the vehicle to the interface of the blind zone information database, but the blind zone information database does not find blind zone information corresponding to that model or sub-category, and returns a query failure message to the blind spot determination module. The blind spot determination module then further sends structural attributes such as the length and width information of the vehicle and the type and location of the cab of the vehicle to the interface of the blind zone information database, and the blind zone information database determines the blind zone information of the vehicle according to these structural attributes. The query result can include: the position of the blind zone relative to the vehicle, the shape of the blind zone, and the number of blind zones; optionally, the query result can also include the area of the blind zone and the risk factor of the blind zone.
  • In yet another case, the blind zone determination module of the detection device sends the structural attributes of the vehicle to the interface of the blind zone information database, where the structural attributes of the vehicle include the vehicle model, the vehicle sub-category, the vehicle's length and width information, the type and location of the vehicle cab, and so on.
  • the blind spot information database determines the blind spot information of the vehicle of the model or sub-category according to the structural attributes, or the blind spot information database determines the blind spot information of the vehicle closest to the structural attributes of the vehicle according to the structural attributes.
  • the blind spot information database finally returns the query result to the blind spot determination module.
  • the query result can include: the number of blind spots, the position of the blind spot relative to the vehicle, and the shape of the blind spot; optionally, the query result can also include: the risk factor of the blind spot, the blind spot The area.
  • the blind area information of the vehicle is obtained from the aforementioned S4042.
  • the blind area information is related to the attributes of the vehicle, and vehicles with different attributes correspond to different blind area information.
  • S4043 Determine the blind zone of the vehicle according to the blind zone information.
  • the blind spot information of the vehicle obtained in step S4042, the trajectory information of the vehicle on the traffic road obtained in step S402, and the posture information of the vehicle obtained in step S403 are used to determine the blind spot of the vehicle at the current moment.
  • Alternatively, the position information and posture information of the vehicle at a future time, together with the blind area information of the vehicle, are obtained from the foregoing steps S401-S403, and the blind area of the vehicle at the future time can thereby be determined.
  • The specific method for determining the blind zone of the vehicle at the current moment is as follows: the geographic coordinates and posture of the vehicle at the current moment, obtained from the vehicle's trajectory information on the traffic road, are combined with blind zone information such as the position of each blind zone relative to the vehicle and the area and shape of each blind zone, to determine the location, distribution, and area of the blind area of the vehicle on the traffic road (a geometric sketch of this placement is given below). For example, as shown in Figure 9, for a Changan 20-ton truck, the geographic coordinates of the vehicle on the traffic road are known (that is, the geographic coordinates of the center point of the vehicle are known), and the posture of the vehicle is from east to west; the blind spot information of the vehicle obtained in the aforementioned S4042 shows that the vehicle has 6 independent blind areas, and gives the shape and area of each independent blind area and the offsets of the key points of each independent blind area relative to the center point of the vehicle. Therefore, the actual position and range of each blind area of the vehicle on the traffic road can be determined according to the blind spot information and the position and posture of the vehicle on the traffic road.
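  • A geometric sketch of placing one blind zone on the road from its stored offsets and the vehicle's position and posture; for clarity it works in a local metric frame (meters) rather than longitude/latitude, and all values and conventions (heading in degrees, 0 = east, counterclockwise; vehicle frame with x forward and y to the left) are illustrative assumptions:

```python
import math

# Hypothetical inputs: vehicle center in a local metric frame, its heading,
# and one blind zone given as corner offsets from the vehicle center in the
# vehicle's own frame, as stored in the blind zone information database.
vehicle_center = (12.0, 34.0)
heading_deg = 180.0  # driving from east to west
zone_offsets = [(-6.0, 2.0), (-2.0, 2.0), (-2.0, 5.0), (-6.0, 5.0)]

def place_blind_zone(center, heading_deg, offsets):
    """Rotate the vehicle-frame offsets by the heading and translate them to
    the vehicle center, yielding the blind zone polygon on the traffic road."""
    th = math.radians(heading_deg)
    cos_t, sin_t = math.cos(th), math.sin(th)
    return [(center[0] + dx * cos_t - dy * sin_t,
             center[1] + dx * sin_t + dy * cos_t) for dx, dy in offsets]

blind_zone_polygon = place_blind_zone(vehicle_center, heading_deg, zone_offsets)
```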
  • Optionally, S4043 determines, according to the risk factor of each blind zone in the blind zone information, the high-risk blind zone information corresponding to the blind zones whose risk coefficient is greater than the preset risk threshold, and determines the location and extent of the high-risk blind areas based on the high-risk blind zone information and the current position information and posture information of the vehicle on the traffic road. This method determines only the location and distribution range of the vehicle's high-risk blind areas on the traffic road, so that in the subsequent blind zone judgment only the high-risk blind areas are checked for blind zone hazards, which reduces the amount of calculation. Because a low-risk blind zone of the vehicle is less likely to be dangerous, even if there are other targets in it, the vehicle will not easily collide with or scrape those targets when driving; therefore, not warning about other targets in low-risk blind zones will not cause danger or injury to the vehicle and the other targets, and it avoids the interference to vehicle drivers and pedestrians caused by excessive warnings.
  • the driving speed of the vehicle can also be determined according to the vehicle's movement track information on the traffic road, and the range of all or part of the blind zone of the vehicle is expanded according to the driving speed of the vehicle .
  • For example, when the driving speed of the vehicle is higher than a certain threshold, the front blind zone range of the vehicle is multiplied by a preset proportional coefficient, so that the blind zone range is expanded. Alternatively, the driving speed of the vehicle is converted into a proportional coefficient according to a preset rule, and the area of the blind zone in the blind zone information is multiplied by that proportional coefficient on the basis of the position of the blind zone relative to the vehicle given in the blind zone information.
  • The method of determining the driving speed of the vehicle from its movement trajectory information on the traffic road is as follows: the distance between two adjacent geographic coordinates in the geographic coordinate sequence of the movement trajectory information is divided by the time difference between the two adjacent video frames corresponding to those geographic coordinates, yielding the driving speed of the vehicle at that moment.
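  • A compact sketch combining this speed calculation with the speed-dependent blind zone expansion; the coordinate frame (local metric, in meters) and all threshold and coefficient values are assumptions for illustration:

```python
import math

# Hypothetical adjacent trajectory points in a local metric frame (meters)
# and the timestamps of their corresponding video frames (seconds).
p_prev, p_curr = (10.0, 20.0), (12.0, 21.5)
t_prev, t_curr = 4.96, 5.00

# Speed = distance between adjacent trajectory points / frame time difference.
speed = math.dist(p_prev, p_curr) / (t_curr - t_prev)

# Speed-dependent blind zone expansion: above a threshold, scale the blind
# zone area by a preset proportional coefficient.
SPEED_THRESHOLD = 10.0   # m/s, assumed
SCALE_COEFFICIENT = 1.5  # assumed preset proportional coefficient
zone_area = 6.0          # m^2, from the blind zone information
if speed > SPEED_THRESHOLD:
    zone_area *= SCALE_COEFFICIENT
```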
  • Optionally, the risk factor of each blind zone may be adjusted according to the determined geographic location and range of the blind zone, so that the risk coefficient of the blind zone more accurately reflects the actual risk.
  • the blind area of the vehicle on the traffic road at a certain moment can be determined.
  • the video data captured by the camera in this application may be a video stream that records the movement of objects on the traffic road in real time. Therefore, the method of determining the blind area of the vehicle can be continuously executed to determine the blind area of the vehicle at each moment.
  • the present application also provides a detection device 300 as shown in FIG. 4, and the modules and functions included in the detection device 300 are as described above, which will not be repeated here.
  • the target detection and tracking module 301 in the detection device 300 is used to execute the aforementioned method step S401; in another more specific embodiment, the target detection and tracking module 301 is used to execute the aforementioned method steps S4011-S4013 and their optional steps. The target positioning module 302 is used to perform the aforementioned method step S402; the posture determination module 303 is used to perform the aforementioned method step S403; the blind spot determination module 304 is used to perform the aforementioned method steps S404-S405, and in another more specific embodiment, the blind spot determination module 304 is configured to execute the aforementioned method steps S4041-S4043 and S405, as well as the aforementioned optional steps described in S404-S405.
  • the present application also provides a detection system for detecting the blind area of a vehicle.
  • the system includes a vehicle dynamic monitoring system and a vehicle blind area detection system.
  • the vehicle dynamic monitoring system is used to receive video data, and determine the position information and posture information of the vehicle on the traffic road at the current time or in the future according to the video data, wherein the video data is captured by a camera set on the traffic road.
  • the vehicle blind spot detection system is used to obtain the blind spot information determined by the structural attributes of the vehicle, and determine the blind spot of the vehicle on the traffic road at the current time or in the future according to the blind spot information of the vehicle, the position information and posture information of the vehicle on the traffic road.
  • The aforementioned detection system is used to execute the aforementioned methods S401-S405; specifically, the vehicle dynamic monitoring system of the detection system is used to execute the aforementioned S401-S403, and the vehicle blind spot detection system is used to execute the aforementioned S404-S405.
  • the present application also provides a vehicle-mounted device, which is installed on a vehicle.
  • the vehicle-mounted device of the present application can be used to execute the aforementioned methods S401-S405, and the vehicle-mounted device can provide the same functions as the detection device 300.
  • the vehicle-mounted device includes a storage unit and a processing unit; the storage unit is used to store a set of computer instructions and data sets, and the processing unit executes the computer instructions stored in the storage unit and reads the data sets from the storage unit, so that the vehicle-mounted device can execute the aforementioned methods S401-S405.
  • the storage unit of the aforementioned vehicle-mounted device may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM); the processing unit may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof. The processing unit may also include one or more chips, and may further include an AI accelerator, such as a neural network processing unit (NPU).
  • the present application also provides a computing device 100 as shown in FIG. 3.
  • the processor 102 in the computing device 100 reads a set of computer instructions stored in the memory 101 to execute the aforementioned method for detecting the blind area of a vehicle.
  • Since the modules in the detection device 300 provided in this application can be distributed on multiple computers in the same environment or in different environments, this application also provides a system as shown in FIG. 10, which includes multiple computers.
  • Each computer 500 includes a memory 501, a processor 502, a communication interface 503, and a bus 504. Among them, the memory 501, the processor 502, and the communication interface 503 realize the communication connection between each other through the bus 504.
  • the memory 501 may be a read only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM).
  • the memory 501 may store computer instructions. When the computer instructions stored in the memory 501 are executed by the processor 502, the processor 502 and the communication interface 503 are used to execute part of the method for detecting the blind area of the vehicle.
  • the memory may also store a data set. For example, a part of the storage resources in the memory 501 is divided into a blind area information library storage module for storing the blind area information library required by the detection device 300.
  • the processor 502 may adopt a general-purpose central processing unit (Central Processing Unit, CPU), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a graphics processing unit (graphics processing unit, GPU), or any combination thereof.
  • the processor 502 may include one or more chips.
  • the processor 502 may include an AI accelerator, such as a neural network processor (neural processing unit, NPU).
  • the communication interface 503 uses a transceiver module such as but not limited to a transceiver to implement communication between the computer 500 and other devices or communication networks.
  • the blind spot information can be obtained through the communication interface 503.
  • the bus 504 may include a path for transferring information between various components of the computer 500 (for example, the memory 501, the processor 502, and the communication interface 503).
  • Each of the above-mentioned computers 500 establishes a communication path through a communication network.
  • Each computer 500 runs any one or more of the target detection and tracking module 301, the target positioning module 302, the posture determination module 303, and the blind spot determination module 304.
  • Any computer 500 may be a computer in a cloud data center (for example, a server), a computer in an edge data center, or a terminal computing device.
  • The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product.
  • The computer program product that realizes the blind spot detection of the vehicle includes one or more computer instructions for detecting the blind spot of the vehicle. When these computer program instructions are loaded and executed on a computer, all or part of the processes or functions described with reference to Figures 5-7 in the embodiments of the present invention are generated.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired means (such as coaxial cable, optical fiber, or digital subscriber line) or wireless means (such as infrared, radio, or microwave).
  • The computer-readable storage medium is a readable storage medium storing the computer program instructions that realize the detection of the blind spot of a vehicle.
  • The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).


Abstract

A method for detecting blind areas of a vehicle, relating to the field of intelligent transportation. The method comprises: receiving video data photographed by a camera provided on a traffic road; determining position information and posture information of the vehicle on the traffic road at the current time point or a future time point according to the video data, wherein the posture information represents a driving direction of the vehicle; further, obtaining blind area information of the vehicle, the blind area information being decided by structural attributes of the vehicle; and finally, determining blind areas of the vehicle on the traffic road at the current time point or the future time point according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road. According to the method, the blind areas of the vehicle on the traffic road at the current time point or the future time point can be timely determined, the probability of occurrence of a dangerous accident in the blind areas is greatly reduced, and the safety of driving is improved.

Description

Method and device for detecting blind area of vehicle

Technical field

This application relates to the field of smart transportation, and in particular to a method and device for detecting the blind area of a vehicle.

Background

With the progress of society, all kinds of vehicles can be seen driving on traffic roads. For the driver of a vehicle, observing and grasping the traffic conditions around the vehicle while driving is particularly important for safety. However, due to the model and structure of some vehicles, a driver often has blind spots in the surrounding environment during normal driving, that is, there are certain areas the driver cannot observe. For example, engineering vehicles such as heavy trucks, excavators, and loading and unloading trucks are relatively large, so the driver's field of vision is blocked by the vehicle body and part of the environment around the body cannot be observed. Because of these blind areas, the driver may be unable to learn of dangerous situations within them (for example, dangerous obstacles, pedestrians, or other vehicles in the blind area), which poses a major safety hazard to both vehicles and pedestrians.

At present, in order to address the safety problems that the blind areas of vehicles cause for vehicles and pedestrians, a detector is often installed on the vehicle body, so that the detector emits an alarm sound when a dangerous situation exists in the blind area of the vehicle. However, this approach often comes too late for some dangerous situations, and it is even less effective for a moving vehicle, because the surroundings of a moving vehicle change constantly and the detector cannot learn of a dangerous situation and issue a warning in time. Therefore, how to detect the blind area of a vehicle in a more timely manner is a problem that urgently needs to be solved.
Summary of the invention

This application provides a method for detecting the blind area of a vehicle. The method can detect the blind area of a vehicle on a traffic road at the current time or a future time in a more timely manner, which greatly reduces the probability of dangerous accidents in the blind area and improves traffic road safety.

In the first aspect, this application provides a method for detecting the blind area of a vehicle, executed by a detection device. The method includes: receiving video data shot by a camera set on a traffic road, where the video data records the running conditions of vehicles and other targets on the traffic road at the current time; after receiving the video data, determining, according to the video data, the position information and posture information of the vehicle on the traffic road at the current time or a future time, where the posture information indicates the travel direction of the vehicle. Further, the blind area information of the vehicle is obtained, where the blind area information is decided by the structural attributes of the vehicle, and the blind area of the vehicle on the traffic road at the current time or the future time is determined according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road.
The above method processes and analyzes the video data shot by a camera set on a traffic road to obtain the position information and posture information of the vehicle, and combines them with the blind area information of the vehicle to obtain, in time, the blind area of the vehicle on the traffic road at the current time or a future time. The determined blind area can be provided in time to the vehicle or to other targets on the traffic road (for example, pedestrians or other vehicles) as a reference, so that the vehicle or pedestrians can avoid it and adjust in time, which greatly reduces the probability of dangerous accidents in the blind area and improves traffic road safety. Further, the above method can predict the blind area of the vehicle at a future time, so that the vehicle can judge the safety of driving at the future time in advance, which reduces the probability of dangerous blind area accidents even further.
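To make the last step concrete, the following is a minimal sketch, assuming each blind zone is stored as a polygon in the vehicle's own coordinate frame (x forward, y to the left, in meters); the function name and data layout are illustrative only and are not defined by this application.

```python
import math

def project_blind_zone(zone_rel, vehicle_pos, heading_rad):
    """Map a blind zone polygon from vehicle-relative coordinates onto the
    road plane, given the vehicle's road position and travel direction."""
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    return [(vehicle_pos[0] + x * cos_h - y * sin_h,
             vehicle_pos[1] + x * sin_h + y * cos_h)
            for x, y in zone_rel]

# Example: a zone behind the cab of a truck at (100 m, 50 m) heading east (0 rad).
rear_zone = [(-5.0, -1.0), (-1.0, -1.0), (-1.0, 1.0), (-5.0, 1.0)]
print(project_blind_zone(rear_zone, (100.0, 50.0), 0.0))
```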
In a possible implementation of the first aspect, the above method further includes: determining, according to the video data and the blind area of the vehicle on the traffic road at the current time or the future time, that the vehicle is exposed to a blind area danger, where a blind area danger means that another target exists in the blind area of the vehicle on the traffic road. The above method further includes: sending a blind area warning to the vehicle or to the other target in the blind area.

By determining, according to the blind area, whether another target exists in it, and warning in time when one does (since that target may be hit or scraped by the vehicle), vehicles and pedestrians in a dangerous blind area state can be reminded in a more targeted manner, which further reduces blind area accidents.

In a possible implementation of the first aspect, the blind area warning sent to the endangered vehicle or to the other target in the blind area includes warning data, and the warning data includes one or more of the following: the position and range, on the traffic road, of the blind area where the danger occurs; the position information of the other target on the traffic road; and the type of the other target. The content of the warning data can be used by the vehicle driver to determine a strategy for avoiding the blind area danger and to avoid the other target in the blind area more accurately, further reducing the probability of danger.
In a possible implementation of the first aspect, determining the position information and posture information of the vehicle on the traffic road at a future time according to the video data includes: determining the position information and posture information of the vehicle on the traffic road at the current time according to the video data, and predicting, from them, the position information and posture information of the vehicle on the traffic road at the future time. The predicted position and posture information of the vehicle at the future time enables the detection device to determine the location and range of the blind area of the vehicle at the future time.
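The application does not prescribe a particular prediction model; as one plausible sketch, a constant-velocity extrapolation over the recent trajectory is shown below, with the heading (posture) derived from the displacement.

```python
import math

def predict_pose(track, dt):
    """track: (t, x, y) samples ordered by time; dt: seconds into the future.
    Returns the extrapolated position and heading under constant velocity."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    heading = math.atan2(vy, vx)           # posture: the travel direction
    return (x1 + vx * dt, y1 + vy * dt), heading

print(predict_pose([(0.0, 100.0, 50.0), (1.0, 104.0, 50.0)], 2.0))
# -> ((112.0, 50.0), 0.0): 8 m ahead in 2 s, still heading east
```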
In a possible implementation of the first aspect, the method further includes: constructing a visualized blind area image according to the blind area of the vehicle on the traffic road at the current time or the future time, and sending the visualized blind area image to the vehicle or another device.

By sending the determined blind area, in the form of a visualized blind area image, to the vehicle that is currently driving, to a device of the traffic management platform, or to other vehicles adjacent to the vehicle, the driver or management personnel can intuitively and quickly determine the location and range of the blind area, which reduces the time the driver and other targets need to react to the blind area and improves driving safety.
In a possible implementation of the first aspect, the method further includes: calculating the driving speed of the vehicle, and adjusting the blind area of the vehicle on the traffic road at the current time or the future time according to the driving speed. Since the driving speed affects the vehicle's inertia and braking distance, the blind area is adjusted according to the speed; for example, for a vehicle driving at high speed, the range of the front blind area is expanded, and the adjusted blind area is provided for the driver's reference as a reminder, which further reduces the blind area danger and improves driving safety.
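The scaling rule below is an assumption made for illustration (the application only states that the blind area is adjusted according to speed); it stretches the forward-facing vertices of the front blind zone in proportion to the vehicle's speed.

```python
def adjust_front_zone(zone_rel, speed_mps, base_speed_mps=8.0):
    """Stretch vertices ahead of the vehicle (x > 0) by speed / base_speed,
    so the front blind zone grows roughly with the braking distance."""
    factor = max(1.0, speed_mps / base_speed_mps)
    return [(x * factor if x > 0 else x, y) for x, y in zone_rel]

front_zone = [(1.0, -1.5), (6.0, -1.5), (6.0, 1.5), (1.0, 1.5)]
print(adjust_front_zone(front_zone, 16.0))  # front edge pushed from 6 m to 12 m
```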
In a possible implementation of the first aspect, the method further includes: sending an adjustment instruction to the vehicle exposed to the blind area danger, where the adjustment instruction instructs the vehicle to adjust its driving route to avoid the other target in the blind area. According to the determined blind area, or the position of the other target within it, a new driving route can be planned for the driving vehicle, and the vehicle can be instructed to adjust its route so that it reasonably avoids the other target in the blind area. This avoids the problem of the driver reacting too slowly in a panic and further improves driving safety. For a self-driving vehicle, after receiving the adjustment instruction, the vehicle can automatically adjust its driving route according to the new route in the instruction, avoid the blind area danger, and realize safe automatic driving.

In a possible implementation of the first aspect, the method further includes: determining a high-risk blind area among the blind areas.
In a possible implementation of the first aspect, the method further includes: determining a blind area risk coefficient according to the blind area information of the vehicle, where the blind area risk coefficient indicates the degree of danger of each blind area of the vehicle; and determining the high-risk blind area of the vehicle on the traffic road at the current time or the future time according to the blind area risk coefficient, the blind area information of the vehicle, and the position information and posture information of the vehicle on the traffic road. Determining the high-risk blind area according to the risk coefficient enables the driver to pay targeted attention to dangerous situations in the high-risk blind area.
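A minimal sketch of this selection step is shown below; the risk coefficient values and the threshold are assumed for illustration, since the application only requires that each blind zone carry a risk coefficient compared against a preset threshold.

```python
RISK_THRESHOLD = 0.7  # assumed value for the preset danger threshold

zones = [
    {"name": "right_front", "risk": 0.9},
    {"name": "left_rear",   "risk": 0.4},
]

# Keep only the zones whose risk coefficient exceeds the threshold.
high_risk = [z for z in zones if z["risk"] > RISK_THRESHOLD]
print([z["name"] for z in high_risk])  # -> ['right_front']
```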
In a possible implementation of the first aspect, after the high-risk blind area is determined according to the blind area risk coefficient, whether another target exists in the high-risk blind area can further be determined according to the video data and the determined high-risk blind area of the vehicle on the traffic road at the current time or the future time. This method of first determining the high-risk blind area and then determining the dangerous situation within it saves computing resources and avoids giving the driver many unnecessary danger reminders that would interfere with normal driving. For example, in some low-risk blind areas another target may exist but pose no danger to either the vehicle or the target, in which case neither the driver nor the target needs to be reminded.

In a possible implementation of the first aspect, the video data includes multiple video streams shot by multiple cameras set at different positions on the traffic road; determining the position information of the vehicle on the traffic road at the current time or the future time according to the video data includes: determining the position information of the vehicle in the multiple video streams at the current time or the future time according to the multiple video streams, and determining the position information of the vehicle on the traffic road at the current time or the future time according to the position information of the vehicle in the multiple video streams. Combining multiple video streams to jointly determine the position of the vehicle makes it possible to locate the vehicle on the traffic road more accurately, and hence to determine its blind area more accurately. In addition, using video data shot by cameras from multiple viewing angles expands the range of the traffic road that can be detected, so that the blind areas of vehicles under the viewing angles of multiple cameras can all be determined in the same way.
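The application does not fix how the per-camera position estimates are combined; a confidence-weighted average, sketched below, is one plausible fusion rule.

```python
def fuse_positions(estimates):
    """estimates: list of ((x, y), confidence) pairs, one per camera view
    of the same vehicle; returns a single fused road position."""
    total = sum(c for _, c in estimates)
    x = sum(p[0] * c for p, c in estimates) / total
    y = sum(p[1] * c for p, c in estimates) / total
    return (x, y)

print(fuse_positions([((100.2, 50.1), 0.8), ((99.8, 49.7), 0.6)]))
```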
In a possible implementation of the first aspect, the blind area information of the vehicle includes: the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area. According to this blind area information and the position and posture of the vehicle on the traffic road, the distribution and range of the blind areas of the vehicle on the traffic road can be determined.

In a possible implementation of the first aspect, the blind area information of the vehicle includes the risk coefficient of each blind area; determining the blind area of the vehicle on the traffic road according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road at the current time or the future time includes: determining high-risk blind area information, namely the blind areas whose risk coefficient is greater than a preset danger threshold, and determining the high-risk blind area of the vehicle on the traffic road at the current time or the future time according to the high-risk blind area information and the position information and posture information of the vehicle on the traffic road at that time.

In a possible implementation of the first aspect, the blind area information of the vehicle includes the risk coefficient of each blind area; before it is determined that the vehicle is exposed to a blind area danger, the method further includes: correcting the risk coefficient of the blind area according to the blind area of the vehicle on the traffic road, and determining the high-risk blind area of the vehicle on the traffic road according to the relationship between the corrected risk coefficient and the preset danger threshold. Determining, according to the video data and the determined blind area of the vehicle on the traffic road at the current time or the future time, that the vehicle is exposed to a blind area danger then includes: determining, according to the position information of another target in the blind area on the traffic road, that a blind area danger exists in the high-risk blind area of the vehicle on the traffic road.

In a possible implementation of the first aspect, before it is determined, according to the video data and the determined blind area of the vehicle on the traffic road at the current time or the future time, that the vehicle is exposed to a blind area danger, the method further includes: determining the risk coefficient of the blind area of the vehicle according to the blind area of the vehicle on the traffic road, and determining the high-risk blind area of the vehicle on the traffic road according to the relationship between the risk coefficient and the preset danger threshold. Determining that the vehicle is exposed to a blind area danger then includes: determining, according to the position information of another target in the blind area on the traffic road, that a blind area danger exists in the high-risk blind area of the vehicle on the traffic road.

The above method of determining the high-risk blind area and then performing blind area danger detection on it saves computing resources on the one hand; on the other hand, for situations where another target exists in a blind area but poses no danger, it avoids warning the vehicle and other targets, which improves the accuracy of the warnings and avoids disturbing drivers and pedestrians with frequent alarms.
In a possible implementation of the first aspect, the method further includes: determining the blind area information of the vehicle according to the structural attributes of the vehicle, specifically: querying a blind area information database according to the structural attributes of the vehicle, and obtaining from the database the blind area information of the vehicle corresponding to those structural attributes.

In a possible implementation of the first aspect, the structural attributes of the vehicle include the type of the vehicle; querying the blind area information database according to the structural attributes of the vehicle specifically includes: inputting the structural attributes into the blind area information database, and obtaining the blind area information corresponding to a vehicle of the same type in the database.

In a possible implementation of the first aspect, the structural attributes of the vehicle include the length and width of the vehicle, the cab type, and the cab position; querying the blind area information database according to the structural attributes of the vehicle specifically includes: inputting the structural attributes into the blind area information database, and obtaining the blind area information corresponding to a vehicle in the database whose length and width, cab type, and cab position are similar to those of the vehicle.
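The two query modes above can be sketched as follows; the database layout, field names, and the similarity measure are assumptions for illustration, not definitions from this application.

```python
BLIND_ZONE_DB = [
    {"type": "cement_truck", "length": 10.0, "width": 2.5, "cab": "forward", "zones": "..."},
    {"type": "light_truck",  "length": 6.0,  "width": 2.2, "cab": "forward", "zones": "..."},
]

def lookup_blind_zones(vehicle):
    # Mode 1: exact match on vehicle type.
    for entry in BLIND_ZONE_DB:
        if entry["type"] == vehicle.get("type"):
            return entry["zones"]
    # Mode 2: fall back to the structurally most similar entry.
    def distance(entry):
        cab_penalty = 0.0 if entry["cab"] == vehicle["cab"] else 10.0
        return (abs(entry["length"] - vehicle["length"])
                + abs(entry["width"] - vehicle["width"]) + cab_penalty)
    return min(BLIND_ZONE_DB, key=distance)["zones"]

print(lookup_blind_zones({"type": "unknown", "length": 9.5, "width": 2.4, "cab": "forward"}))
```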
In a possible implementation of the first aspect, the received video data is a real-time video stream, and determining the position information and posture information of the vehicle on the traffic road at the current time or the future time according to the video data specifically includes: determining the position information of the vehicle in the video data at the current time or the future time according to the real-time video stream; determining the position information of the vehicle on the traffic road at the current time or the future time according to a preset calibration relationship and the position information of the vehicle in the video data at the current time or the future time; determining the movement trajectory information of the vehicle on the traffic road according to the position information of the vehicle on the traffic road at the current time or the future time; and determining the posture information of the vehicle at the current time or the future time according to the movement trajectory information of the vehicle, where the posture information indicates the travel direction of the vehicle.
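A common concrete form of such a preset calibration relationship is a homography from image pixels to road-plane coordinates, estimated offline from reference points; the matrix below is a made-up example for illustration, not calibration data from this application.

```python
import math
import numpy as np

H = np.array([[0.05, 0.00,  -10.0],    # made-up pixel-to-road homography
              [0.00, 0.07,  -20.0],
              [0.00, 0.0002,  1.0]])

def pixel_to_road(u, v):
    """Map a pixel position in the video frame to road-plane coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

# Heading (posture) from two consecutive trajectory points on the road plane:
(x0, y0), (x1, y1) = pixel_to_road(400, 300), pixel_to_road(420, 300)
print(math.atan2(y1 - y0, x1 - x0))
```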
In the second aspect, this application further provides a detection device, including: a target detection and tracking module, configured to receive video data shot by a camera set on a traffic road; a target positioning module, configured to determine, according to the video data, the position information of the vehicle on the traffic road at the current time or a future time; a posture determination module, configured to determine, according to the video data, the posture information of the vehicle on the traffic road at the current time or the future time; and a blind area determination module, configured to obtain the blind area information of the vehicle, which is decided by the structural attributes of the vehicle, and further configured to determine the blind area of the vehicle on the traffic road at the current time or the future time according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road.

In a possible implementation of the second aspect, the blind area determination module is further configured to determine, according to the video data and the determined blind area of the vehicle on the traffic road at the current time or the future time, that the vehicle is exposed to a blind area danger, where a blind area danger means that another target exists in the blind area of the vehicle on the traffic road, and to send a blind area warning to the vehicle or to the other target in the blind area.

In a possible implementation of the second aspect, the blind area warning includes warning data, and the warning data includes one or more of the following: the position and range, on the traffic road, of the blind area where the danger occurs; the position information of the other target on the traffic road; and the type of the other target.

In a possible implementation of the second aspect, when determining the position information of the vehicle on the traffic road at a future time according to the video data, the target positioning module is specifically configured to: determine the position information of the vehicle on the traffic road at the current time according to the video data, and predict, from it, the position information of the vehicle on the traffic road at the future time. When determining the posture information of the vehicle on the traffic road at a future time according to the video data, the posture determination module is specifically configured to: determine the posture information of the vehicle on the traffic road at the current time according to the video data, and predict, from it, the posture information of the vehicle on the traffic road at the future time.

In a possible implementation of the second aspect, the blind area determination module is further configured to construct a visualized blind area image according to the blind area of the vehicle on the traffic road at the current time or the future time, and to send the visualized blind area image to the vehicle or another device.

In a possible implementation of the second aspect, the blind area determination module is further configured to calculate the driving speed of the vehicle according to the position information of the vehicle on the traffic road at the current time or the future time, and to adjust the blind area of the vehicle on the traffic road at the current time or the future time according to the driving speed.

In a possible implementation of the second aspect, the blind area determination module is further configured to send an adjustment instruction to the vehicle exposed to the blind area danger, where the adjustment instruction instructs the vehicle to adjust its driving route to avoid the other target in the blind area.

In a possible implementation of the second aspect, the blind area determination module is further configured to determine a high-risk blind area among the blind areas.

In a possible implementation of the second aspect, the blind area determination module is specifically configured to determine a blind area risk coefficient according to the blind area information of the vehicle, where the blind area risk coefficient indicates the degree of danger of each blind area of the vehicle, and to determine the high-risk blind area of the vehicle on the traffic road at the current time or the future time according to the blind area risk coefficient, the blind area information of the vehicle, and the position information and posture information of the vehicle on the traffic road.

In a possible implementation of the second aspect, the blind area determination module is further configured to determine, according to the video data and the determined high-risk blind area of the vehicle on the traffic road at the current time or the future time, whether another target exists in the high-risk blind area.

In a possible implementation of the second aspect, the video data includes multiple video streams shot by multiple cameras set at different positions on the traffic road; the target detection and tracking module is further configured to determine, according to the multiple video streams, the position information of the vehicle in the multiple video streams at the current time or the future time; the target positioning module is further configured to determine the position information of the vehicle on the traffic road at the current time or the future time according to the position information of the vehicle in the multiple video streams.
In a possible implementation of the second aspect, the blind area information of the vehicle includes: the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area.

In the third aspect, this application further provides an in-vehicle device, which is set on a vehicle and is configured to execute the method provided by the first aspect or any possible implementation of the first aspect.

In the fourth aspect, this application further provides a vehicle. The vehicle includes a storage unit and a processing unit; the storage unit is configured to store a set of computer instructions and data sets, and the processing unit executes the computer instructions stored by the storage unit and reads the data sets of the storage unit, so that the vehicle executes the method provided by the first aspect or any possible implementation of the first aspect.

In the fifth aspect, this application provides a system that includes at least one memory and at least one processor. The at least one memory is configured to store a set of computer instructions; when the at least one processor executes the set of computer instructions, the system executes the method provided by the first aspect or any possible implementation of the first aspect.

In the sixth aspect, this application further provides a detection system configured to detect the blind area of a vehicle, and the system includes:

a vehicle dynamic monitoring system, configured to receive video data and determine, according to the video data, the position information and posture information of the vehicle on the traffic road at the current time or a future time, where the video data is shot by a camera set on the traffic road; and

a vehicle blind area detection system, configured to obtain the blind area information decided by the structural attributes of the vehicle, and to determine the blind area of the vehicle on the traffic road at the current time or the future time according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road.

In the seventh aspect, this application provides a non-transitory readable storage medium that stores computer program code; when the computer program code is executed by a computing device, the computing device executes the method provided by the first aspect or any possible implementation of the first aspect. The storage medium includes, but is not limited to, volatile memory, such as random access memory, and non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid state drive (SSD).

In the eighth aspect, this application provides a computer program product that includes computer program code; when the computer program code is executed by a computing device, the computing device executes the method provided by the first aspect or any possible implementation of the first aspect. The computer program product may be a software installation package: when the method provided by the first aspect or any possible implementation of the first aspect needs to be used, the computer program product can be downloaded and executed on a computing device.
Description of the drawings

To describe the technical methods of the embodiments of this application more clearly, the drawings needed in the embodiments are briefly introduced below.

Figure 1 is a schematic diagram of the blind areas of different trucks according to an embodiment of this application;

Figure 2A is a schematic diagram of the deployment of a detection device according to an embodiment of this application;

Figure 2B is a schematic diagram of the deployment of another detection device according to an embodiment of this application;

Figure 2C is a schematic diagram of the deployment of yet another detection device according to an embodiment of this application;

Figure 3 is a schematic structural diagram of a computing device 100 according to an embodiment of this application;

Figure 4 is a schematic structural diagram of a training device 200 and a detection device 300 according to an embodiment of this application;

Figure 5 shows a method for detecting the blind area of a vehicle according to an embodiment of this application;

Figure 6 is a schematic diagram of a blind area danger in the blind area of a vehicle according to an embodiment of this application;

Figure 7 shows a method for target detection and target tracking according to an embodiment of this application;

Figure 8 shows a specific method for determining the blind area of a vehicle according to an embodiment of this application;

Figure 9 is a schematic diagram of a determined blind area of a vehicle according to an embodiment of this application;

Figure 10 is a schematic diagram of a system according to an embodiment of this application.
Detailed description

The solutions in the embodiments provided by this application are described below with reference to the drawings of this application.
Various vehicles often run on traffic roads, for example, engineering vehicles (such as loading and unloading trucks, cement trucks, tank trucks, and freight trucks), cars, bicycles, and buses. Because different vehicles have different models and internal design structures, the viewing angle and field of vision of the traffic road that the driver can observe while driving normally from the seat also differ. Although every vehicle is currently equipped with tools such as left and right side mirrors and a rearview mirror to assist the driver in observing the driving environment around the vehicle, the driver still cannot see some areas around the vehicle during normal driving, that is, blind areas. In this application, a blind area refers to an area around the vehicle that the driver cannot observe while driving the vehicle normally, and the position of the blind area moves as the vehicle drives. Each vehicle has its own blind area information, which refers to the parameters of the blind areas that the driver may be unable to observe due to factors such as the structure of the vehicle, the vehicle model, and the position of the driver's seat. The blind area information includes the number of blind areas, the shape of each blind area, the position of each blind area relative to the vehicle, and so on. Figure 1 is a schematic diagram of the blind areas of two trucks; different vehicles have different blind area information, and according to the blind area information and the position information of the vehicle, the position and distribution of the driver's blind areas while driving the vehicle can be determined. For engineering vehicles, the body is usually large and the driver's vision is often blocked by it during driving, so a driver of an engineering vehicle has blind areas covering a relatively large region. When dynamic dangers such as other moving vehicles, pedestrians, or objects exist in the blind area of a vehicle, or when static dangers exist there (for example, construction hazards or geographic defects), great danger is posed not only to the vehicle and its occupants but also to the people and objects in the blind area. In this application, situations where dynamic or static dangers exist in the blind area are collectively referred to as blind area dangers. Detecting the blind area of a vehicle, and further detecting blind area dangers, is of great significance to the safety of vehicles and pedestrians on traffic roads.
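For illustration, the blind area information described above could be represented by a structure like the following; the field names are assumptions, not definitions from this application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BlindZone:
    # Shape and position relative to the vehicle, as polygon vertices
    # in the vehicle's own coordinate frame (meters).
    polygon: List[Tuple[float, float]]

@dataclass
class BlindZoneInfo:
    vehicle_type: str
    zones: List[BlindZone]   # the number of blind zones is len(zones)
```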
This application provides a method for detecting the blind area of a vehicle, executed by a detection device. The functions of the detection device for detecting the blind area can be realized by a software system, by a hardware device, or by a combination of the two.

The deployment of the detection device is flexible, and it can be deployed in an edge environment. For example, the detection device can be an edge computing device in an edge environment, or a software device running on one or more edge computing devices. The edge environment refers to a data center or a collection of edge computing devices close to the traffic road to be detected; it includes one or more edge computing devices, which can be roadside devices with computing capability set beside the traffic road. For example, as shown in Figure 2A, the detection device is deployed at a location close to an intersection, that is, on a roadside edge computing device. A networked camera is set at the intersection; it shoots video data recording the vehicles passing through the intersection and sends the video data to the detection device through the network. The detection device performs blind area detection, and further blind area danger detection, on the vehicles driving through the intersection according to the video data. When it detects that a vehicle driving at the intersection is exposed to a blind area danger, it can send a blind area warning through the network (for example, a wireless network or an Internet-of-Vehicles network) to the in-vehicle system of that vehicle, so that the in-vehicle system reminds the driver of the dangerous blind area situation. Alternatively, the detection device sends the blind area warning to a roadside warning device, which then emits a warning signal such as sound or light. Alternatively, the detection device sends the detected warning data and statistical data to a device or apparatus in the traffic management system, so that commanding personnel can command vehicles on the traffic road or enforce the law according to the obtained warning data and statistical data. The warning data includes one or more of the following: the position and range, on the traffic road, of the blind area where the danger occurs; the position information of other targets in the blind area on the traffic road; and the types of the other targets in the blind area.
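The application does not specify a wire format for this warning data; as one illustration, it could be serialized as a JSON payload carrying the fields enumerated above.

```python
import json

alarm = {
    "blind_zone": {   # position and range, on the road, of the dangerous blind zone
        "polygon": [[100.0, 50.0], [104.0, 50.0], [104.0, 52.0], [100.0, 52.0]],
    },
    "targets": [      # other targets inside the blind zone and their types
        {"type": "pedestrian", "position": [102.0, 51.0]},
    ],
}
print(json.dumps(alarm))
```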
The detection device can also be deployed in a cloud environment, which is an entity that uses basic resources to provide cloud services to users under the cloud computing model. The cloud environment includes a cloud data center and a cloud service platform. The cloud data center includes a large number of basic resources (including computing, storage, and network resources) owned by the cloud service provider; the computing resources it includes can be a large number of computing devices (for example, servers). The detection device can be a server in the cloud data center used to detect the blind areas of driving vehicles; it can also be a virtual machine created in the cloud data center for detecting blind areas; it can also be a software device deployed on a server or virtual machine in the cloud data center for detecting the blind areas of driving vehicles, and this software device can be deployed in a distributed manner on multiple servers, on multiple virtual machines, or on both virtual machines and servers. For example, as shown in Figure 2B, the detection device is deployed in a cloud environment, and a networked camera set beside the traffic road sends the shot video data to the detection device in the cloud environment. The detection device performs blind area detection and blind area danger detection on the vehicles recorded in the video according to the video data. When it detects that a vehicle is exposed to a blind area danger, it sends a blind area warning to the in-vehicle system of that vehicle, so that the in-vehicle system reminds the driver of the dangerous blind area situation. Alternatively, the detection device sends the blind area warning to a roadside warning device, which then emits a warning signal such as sound or light. Alternatively, the detection device sends the detected warning data to a device or apparatus in the traffic management system, so that commanding personnel can command vehicles on the traffic road or enforce the law according to the obtained warning data.

The detection device can be deployed in a cloud data center by a cloud service provider, who abstracts the functions provided by the detection device into a cloud service that users can consult and purchase on the cloud service platform. After purchasing this cloud service, a user can use the service provided by the detection device in the cloud data center for detecting the blind area dangers of vehicles. The detection device can also be deployed by a tenant in computing resources (for example, virtual machines) of a cloud data center rented by the tenant: the tenant purchases a computing-resource cloud service provided by the cloud service provider through the cloud service platform and runs the detection device in the purchased computing resources, so that the detection device performs blind area detection on vehicles.

When the detection device is a software device, it can be logically divided into multiple parts, each with a different function (for example, the detection device includes a target detection and tracking module, a target positioning module, a posture determination module, and a blind area determination module). The parts of the detection device can be deployed in different environments or on different devices, and the parts deployed in different environments or devices cooperate to realize the function of vehicle blind area danger detection. For example, as shown in Figure 2C, the target detection and tracking module of the detection device is deployed on an edge computing device, while the target positioning module, posture determination module, and blind area determination module are deployed in a cloud data center (for example, on a server or virtual machine of the cloud data center). The camera set at the traffic intersection sends the shot video data to the target detection and tracking module deployed in the edge computing device; this module detects and tracks the vehicles, pedestrians, and other targets recorded in the video data, and sends the obtained position information of each target in the video at the current time, together with the movement trajectory information formed in the video by each target at the current and historical times, to the cloud data center. The target positioning module, posture determination module, and blind area determination module deployed in the cloud data center further analyze and process data such as the positions and trajectories of the targets in the video to obtain the blind area of the vehicle (and the blind area danger determination result). It should be understood that this application does not restrictively limit how the parts of the detection device are divided, nor in which environment the detection device is specifically deployed; in actual application, deployment can be adapted to the computing capability of each computing device or to the specific application requirements. It is worth noting that, in one embodiment, the camera can be a smart camera with certain computing capability, and the detection device can also be deployed in three parts: one part on the camera, one part on an edge computing device, and one part on a cloud computing device.
When the detection device is a software device, it can be deployed alone on one computing device in any environment (a cloud environment, an edge environment, a terminal computing device, and so on); when the detection device is a hardware device, it can be a computing device in any environment. Figure 3 provides a schematic structural diagram of a computing device 100. The computing device 100 shown in Figure 3 includes a memory 101, a processor 102, a communication interface 103, and a bus 104. The memory 101, the processor 102, and the communication interface 103 are communicatively connected to each other through the bus 104.

The memory 101 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 101 may store computer instructions; when the computer instructions stored in the memory 101 are executed by the processor 102, the processor 102 and the communication interface 103 are used to execute the method for detecting the blind area of a vehicle. The memory may also store data; for example, part of the memory 101 is used to store the data required for detecting the blind area of the vehicle, and to store intermediate data or result data during program execution.

The processor 102 may adopt a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof. The processor 102 may include one or more chips, and may include an AI accelerator, for example, a neural processing unit (NPU).

The communication interface 103 uses a transceiver module, such as but not limited to a transceiver, to realize communication between the computing device 100 and other devices or communication networks. For example, the data required for detecting the blind area danger of a vehicle can be obtained through the communication interface 103.

The bus 104 may include a path for transferring information between the components of the computing device 100 (for example, the memory 101, the processor 102, and the communication interface 103).
When executing the method for detecting the blind area of a vehicle provided by the embodiments of this application, the detection device needs to use an artificial intelligence (AI) model. There are many kinds of AI models, and the neural network model is one of them; the embodiments of this application are described with a neural network model as an example. It should be understood that other AI models can also be used to fulfill the functions of the neural network models described in the embodiments of this application, and this application does not limit this in any way.

A neural network model is a kind of mathematical computing model that imitates the structure and function of a biological neural network (the central nervous system of animals). A neural network model may include multiple neural network layers with different functions, and each layer includes parameters and computing formulas. According to their different computing formulas or functions, the different layers of a neural network model have different names; for example, a layer that performs convolution computation is called a convolutional layer, and a convolutional layer is often used to extract features from an input signal (for example, an image). A neural network model can also be composed of a combination of multiple existing neural network models. Neural network models with different structures can be used for different scenarios (for example, classification or recognition), or provide different effects when used for the same scenario. Differences in neural network model structure specifically include one or more of the following: different numbers of network layers, a different order of the network layers, or different weights, parameters, or computing formulas in each network layer. Many different neural network models with high accuracy already exist in the industry for application scenarios such as recognition and classification; some of them, after being trained with a specific training set, can complete a task alone or in combination with other neural network models (or other functional modules), and some can be directly used to complete a task alone or in combination with other neural network models (or other functional modules).

In an embodiment of this application, two different neural network models are needed to execute the method for detecting the blind area of a vehicle. One is a neural network model used to detect targets in the video data, called the target detection model. It should be understood that the target detection model in the embodiments of this application can adopt any of the existing neural network models with good target detection performance in the industry, for example: the one-stage unified real-time object detection (you only look once, Yolo) model, the single shot multibox detector (SSD) model, the region convolutional neural network (RCNN) model, or the fast region convolutional neural network (Fast-RCNN) model.
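As one concrete stand-in for the listed detectors, the sketch below runs torchvision's off-the-shelf Faster R-CNN on a single frame; any of the Yolo, SSD, RCNN, or Fast-RCNN models named above could be substituted.

```python
import torch
import torchvision

# Load a pretrained two-stage detector (one possible choice, not mandated here).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 720, 1280)          # a video frame as a CHW tensor in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]        # dict with "boxes", "labels", "scores"
print(detections["boxes"].shape)
```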
In the embodiments of this application, the other neural network model needed to execute the method for detecting the blind areas of a vehicle is a model used to perform attribute detection on a detected vehicle, called the vehicle attribute detection model. The vehicle attribute detection model may likewise be any of several neural network models existing in the industry, for example: a convolutional neural network (CNN) model, a Resnet model, a Densenet model, a VGGnet model, and so on. It should be understood that neural network models developed by the industry in the future that can perform target detection and vehicle attribute detection may also be used as the target detection model and the vehicle attribute detection model in the embodiments of this application, and this application places no limitation on this.
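A minimal sketch of how such an attribute model could be used, assuming Python with torchvision; the label list VEHICLE_MODELS is hypothetical and stands in for whatever structural attributes the training set defines, and the classifier's weights are assumed to come from the training described below:

```python
# Sketch: crop a detected vehicle's bounding box from the frame and classify
# its structural attribute (e.g. vehicle model) with a ResNet.
import torch
import torchvision
from torchvision.transforms import functional as F

VEHICLE_MODELS = ["sedan", "suv", "bus", "engineering_truck"]  # hypothetical labels

classifier = torchvision.models.resnet18(num_classes=len(VEHICLE_MODELS))
classifier.eval()  # in practice, load weights trained as described below

def detect_vehicle_attributes(frame_rgb, box):
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = F.to_tensor(frame_rgb[y1:y2, x1:x2])       # crop the vehicle region
    crop = F.resize(crop, [224, 224]).unsqueeze(0)    # fixed classifier input size
    with torch.no_grad():
        logits = classifier(crop)
    return VEHICLE_MODELS[int(logits.argmax())]
```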
The target detection model and the vehicle attribute detection model may be trained by a training apparatus before being used for vehicle blind area detection; the training apparatus uses different training sets to train the target detection model and the vehicle attribute detection model respectively. The target detection model and the vehicle attribute detection model trained by the training apparatus may be deployed in the target detection and tracking module of the detection apparatus, and are used by the detection apparatus to detect blind area dangers of vehicles.
FIG. 4 provides a schematic structural diagram of a training apparatus 200 and a detection apparatus 300. The structures and functions of the training apparatus 200 and the detection apparatus 300 are described below with reference to FIG. 4. It should be understood that the embodiments of this application divide the structures and functional modules of the training apparatus 200 and the detection apparatus 300 only by way of example, and this application places no limitation on the specific division.
The training apparatus 200 is used to train the target detection model 203 and the vehicle attribute detection model 204 separately. Training them requires two training sets, called the target detection training set and the vehicle attribute detection training set respectively. The obtained target detection training set and vehicle attribute detection training set are stored in a database. A collection apparatus may collect multiple training videos or training images, and the collected training videos or training images are processed and annotated manually or by the collection apparatus to form a training set. When the collection apparatus collects multiple training videos, it uses the video frames in the training videos as training images, and then processes and annotates the training images to construct the training set. When the training apparatus 200 starts training the target detection model 203, the initialization module 201 first initializes the parameters of each layer in the target detection model 203 (that is, assigns an initial value to each parameter); the training module 202 then reads the training images in the target detection training set in the database to train the target detection model 203, until the loss function of the target detection model 203 converges with a loss value smaller than a specific threshold, or until all the training images in the target detection training set have been used for training, at which point the training of the target detection model 203 is complete. Similarly, when the training apparatus 200 starts training the vehicle attribute detection model 204, the initialization module 201 first initializes the parameters of each layer in the vehicle attribute detection model 204 (that is, assigns an initial value to each parameter); the training module 202 then reads the training images in the vehicle attribute detection training set in the database to train the vehicle attribute detection model 204, until the loss function of the vehicle attribute detection model 204 converges with a loss value smaller than a specific threshold, or until all the training images in the vehicle attribute detection training set have been used for training, at which point the training of the vehicle attribute detection model 204 is complete.
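A minimal sketch of this training procedure, assuming Python with PyTorch (which this application does not prescribe); model, loss_fn, and training_set are placeholders for the models, losses, and training sets described above, and the default parameter initialization performed when the model is constructed stands in for the initialization module 201:

```python
# Sketch: train until the loss falls below a threshold (convergence) or the
# training set is exhausted, matching the two stopping conditions above.
import torch

def train(model, loss_fn, training_set, loss_threshold=0.01, lr=1e-3):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for images, labels in training_set:        # stops when the set is exhausted
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        if loss.item() < loss_threshold:       # or stops once the loss converges
            break
    return model
```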
It is worth noting that the target detection model 203 and the vehicle attribute detection model 204 may also be trained by two separate training apparatuses, and that the target detection model 203 and/or the vehicle attribute detection model 204 may not need to be trained by the training apparatus 200 at all, for example when the target detection model 203 and/or the vehicle attribute detection model 204 is a neural network model that has already been trained by a third party and has good accuracy for target detection and/or attribute detection.
In an embodiment of this application, it is also possible that no collection apparatus is needed to collect training images or training videos, and that the target detection training set and/or the vehicle attribute detection training set need not be constructed, for example when the target detection training set and/or the vehicle attribute detection training set is obtained directly from a third party. In addition, it is worth noting that in this application the training images in the target detection training set may have the same content as the training images in the vehicle attribute detection training set but carry different labels. For example, suppose the collection apparatus collects 10,000 images containing targets such as vehicles, pedestrians, and static objects on various traffic roads. When constructing the target detection training set, the targets in these 10,000 images are annotated with bounding boxes, and the 10,000 training images annotated with bounding boxes constitute the target detection training set. When constructing the vehicle attribute detection training set, the vehicles in these 10,000 images are annotated with bounding boxes, and each bounding box is additionally annotated with the attributes of that vehicle (for example, vehicle model, vehicle brand, and so on); the 10,000 training images annotated with bounding boxes and attributes constitute the vehicle attribute detection training set.
It is worth noting that, in an embodiment of this application, the detection apparatus may also use only one neural network model when detecting the blind areas of a vehicle, which may be called the detection and recognition model. The detection and recognition model is a single neural network model containing all the functions of the target detection model 203 and the vehicle attribute detection model 204: it can both detect target positions and recognize the vehicles among the detected targets, and then perform attribute detection on the recognized vehicles. The training of the detection and recognition model follows the same idea as the training of the target detection model 203 and the vehicle attribute detection model 204 described above, and is not repeated here.
The target detection model 203 and the vehicle attribute detection model 204 trained by the training apparatus 200 can be used to perform target detection and vehicle attribute detection, respectively, on the video frames in the video data captured by a camera. In an embodiment of this application, as shown in FIG. 4, the trained target detection model 203 and the trained vehicle attribute detection model 204 are deployed to the detection apparatus 300; within the detection apparatus 300, they are deployed to the target detection and tracking module 301.
As shown in FIG. 4, the detection apparatus 300 includes a target detection and tracking module 301, a target positioning module 302, a posture determination module 303, and a blind area determination module 304.
The target detection and tracking module 301 is used to receive the video data captured by the camera; the video data captured by the camera may be a real-time video stream. The real-time video stream records how the targets on the traffic road are behaving at the current moment. The module detects the targets in the video data and obtains each target's position information in the video at the current moment. Further, the target detection and tracking module 301 is also used to track and fit each target's running trajectory in the video picture over a period of time according to the target's position information in the video at the current moment and its position information in the video at historical moments, obtaining each target's motion trajectory information in the video.
It should be understood that a target in this application is an entity located on a traffic road, and targets include dynamic targets and static targets on the traffic road. A dynamic target on a traffic road is an object that moves on the traffic road over time and can form a running trajectory over a period of time, including vehicles, pedestrians, animals, and so on. A static target remains stationary on the traffic road for a period of time; for example, a static target may be a vehicle parked at the roadside with its engine off, or a construction area formed by road works. When performing target tracking, the target detection and tracking module 301 can identify dynamic targets and static targets, and may track only the dynamic targets, or track both the dynamic targets and the static targets.
It should be understood that the target detection and tracking module 301 may receive video data captured by at least one camera, and detect and track the targets in the video frames of each piece of video data.
Optionally, the target detection and tracking module 301 may also receive radar data sent by a radar device, and combine the video data and the radar data to jointly perform target detection and tracking.
The target positioning module 302 is configured to be communicatively connected to the target detection and tracking module 301, to receive each target's motion trajectory information in the video data sent by the target detection and tracking module 301, and to use a calibration relationship obtained in advance to convert each target's motion trajectory information in the video into that target's motion trajectory information on the traffic road.
It should be understood that each target's motion trajectory information in the video is a pixel coordinate sequence composed of the target's pixel coordinates in different video frames; it represents the target's behavior in the video over a historical period that includes the current moment. Each target's motion trajectory information on the traffic road is a geographic coordinate sequence composed of the target's geographic coordinates on the traffic road; it represents the target's behavior on the traffic road over a historical period that includes the current moment. The pixel coordinates of a target are the coordinates of the pixel at the target's position in a video frame, and pixel coordinates are two-dimensional coordinates. The geographic coordinates of a target are the target's coordinates in any coordinate system of the physical world; for example, the geographic coordinates in this application are three-dimensional coordinates composed of the longitude, latitude, and altitude corresponding to the target's position on the traffic road. Optionally, the target positioning module 302 may also be used to predict a target's position information and motion trajectory information on the traffic road at a future moment or during a future period, according to the target's motion trajectory information on the traffic road over a period that includes the current moment and historical moments.
The posture determination module 303 is used to determine the posture of a target according to the target's motion trajectory information on the traffic road or the result of target detection. Optionally, the posture determination module 303 may also determine postures only for vehicles.
It should be understood that in this application, the posture of a target indicates the target's direction of travel in the physical world. For a vehicle, the orientation of the vehicle's front or the tangent direction of the vehicle's motion trajectory on the traffic road may be used to represent the vehicle's posture; for a pedestrian, the tangent direction of the pedestrian's motion trajectory may be used to represent the pedestrian's posture.
The blind area determination module 304 is used to determine the structural attributes of a vehicle (for example, the vehicle's model, the vehicle's shape, and so on), to determine the vehicle's blind area information in the blind area information library according to the vehicle's structural attributes, and to further determine the blind areas of the currently traveling vehicle according to the vehicle's blind area information (including determining the positions of the blind areas on the traffic road and the distribution range of the blind areas). Optionally, the blind area determination module 304 is also used to determine whether a blind area danger exists within the vehicle's blind area range at that position (that is, whether other targets exist within the blind areas at the current moment); if so, it sends a blind area alarm to the vehicle exposed to the blind area danger.
Optionally, the blind area determination module 304 may also be used to send alarm data to a traffic management system or to roadside alarm devices.
Optionally, the blind area determination module 304 may also be used to compile statistics on the blind area danger data of the traffic road over a period of time and send the statistical data to the traffic management system.
Owing to the functions of the above modules, the detection apparatus provided in the embodiments of this application can be used to detect, in real time, the blind area situations of the traveling vehicles within a certain geographic region at the current moment or at future moments, and to promptly alert or warn the vehicles and pedestrians in danger, which can effectively reduce the traffic hazards caused by vehicle blind areas.
The method for detecting the blind areas of a vehicle provided in an embodiment of this application is described in detail below with reference to FIG. 5.
S401: Receive video data, perform target detection and target tracking on the video frames in the video data, and obtain each target's type information and each target's motion trajectory information in the video picture.
Specifically, video data captured by a camera set at a fixed position on the traffic road is received. The video data in this application may be a video stream captured by the camera in real time. S401 acquires the video stream at the current moment in real time, detects the targets in it, and determines each target's motion trajectory information in the video picture over a historical period that includes the current moment, according to the target's position information in the video stream at the current moment and the target's position information in the video stream at historical moments, where the target's position information in the video stream at historical moments was obtained when target detection was performed at those historical moments and is stored in the detection apparatus or in another readable device.
Target detection requires a target detection model. Specifically, the images in the received video data are input into the target detection model; the target detection model detects the targets present in the images and outputs, for each image, the position information in that image and the type information of each target. Further, the target's running trajectory in the video picture is tracked according to the position information and type information of the targets in the multiple images of a continuous historical period, together with a target tracking algorithm, obtaining the target's motion trajectory information in the video picture. It should be understood that a target's motion trajectory information in the video picture represents the target's motion trajectory in the video picture over a historical period that includes the current moment, and the end of the motion trajectory information (that is, the last pixel coordinate value in the pixel coordinate sequence) is the target's position in the video picture at the current moment. The specific steps of step S401 in one embodiment are described in detail later.
Optionally, S401 may also receive radar data sent by radar devices (for example, lidar or millimeter-wave radar) installed on the traffic road. When performing target detection and tracking, the target information included in the radar data (for example, target position information, target contour information, and so on) may be combined with the information obtained by detecting and tracking the targets in the video frames, to obtain each target's position information in the image, its type information, and its motion trajectory information in the video picture.
It is worth noting that in S401, video data captured by multiple cameras set at different positions on the traffic road may be received, and the targets in each piece of video data may be detected and tracked separately, so that the method provided in this application can detect the blind areas of vehicles on the traffic road under the viewing angles of multiple cameras within a geographic region.
S402: Convert each target's motion trajectory information in the video picture into that target's motion trajectory information on the traffic road.
The preceding step S401 obtained the motion trajectory information, in the video picture, of the multiple detected targets. A target's motion trajectory information in the video data is the pixel coordinate sequence corresponding to that target; the pixel coordinate sequence includes multiple pixel coordinates, each of which represents the target's position in the image of one video frame.
Converting a target's motion trajectory information in the video picture into its motion trajectory information on the traffic road means, in other words, converting each pixel coordinate in the target's pixel coordinate sequence into a geographic coordinate, where each geographic coordinate represents the target's actual position on the traffic road in the physical world. Converting pixel coordinates into geographic coordinates relies on a calibration relationship, which is the mapping between the video picture captured by the camera set on the traffic road and that traffic road in the physical world, that is, the mapping between the pixel coordinates of each pixel in the video picture and the geographic coordinates of that point on the traffic road in the physical world. The target's geographic coordinates on the traffic road can be computed from the calibration relationship and the pixel coordinates in the target's motion trajectory information in the video picture obtained in S401; the process of computing a target's geographic coordinates may also be called target positioning.
In one embodiment, the calibration relationship between the video picture captured by each camera and the traffic road in the physical world needs to be computed in advance. A method for computing the calibration relationship may be as follows:
1. Collect in advance the geographic coordinates of some control points on the traffic road that the camera can capture. The control points on a traffic road are usually chosen at sharp points of background objects on the traffic road, so that the pixel position of each control point in a video frame can be found intuitively. For example, right-angle points of traffic marking lines, tips of arrows, and corner points of green belts on the traffic road may be used as control points. The geographic coordinates (longitude, latitude, altitude) of the control points may be collected manually or by an autonomous vehicle. The selected control points should be evenly distributed on the traffic road, and the number of control points to select should be chosen with reference to the actual situation.
2. Obtain the pixel coordinates of the collected control points in the video frames captured by the camera. Specifically, read the video of the traffic road captured by the camera fixedly installed on the traffic road, and obtain the pixel coordinate values corresponding to the control points in any video frame captured by the camera. This may be done manually, that is, by manually observing the pixel corresponding to a control point in the video frame and recording the pixel coordinates of that pixel. The pixel coordinate values corresponding to the control points may also be obtained programmatically, for example by using corner detection, a short-time Fourier transform edge extraction algorithm, and sub-pixel coordinate fitting to obtain the pixel coordinates corresponding to the traffic road's control points in the video frame.
3. Establish the mapping relationship between the video picture under the camera's viewing angle and the traffic road in the physical world according to the geographic coordinates and pixel coordinates of the control points. For example, a homography transformation matrix H that converts pixel coordinates into geographic coordinates may be computed according to the homography transformation principle, written here as H = (m0, n0, h0)/(x0, y0), where (m0, n0, h0) denotes the geographic coordinates of each control point and (x0, y0) denotes the pixel coordinates of each control point. The H matrix corresponding to the video data captured by the camera can be computed from the pixel coordinates (x0, y0) and geographic coordinates (m0, n0, h0) of at least three control points. It should be understood that the H matrices corresponding to the video data captured by cameras set at different positions on the traffic road are different.
The calibration matrix obtained through steps 1-3 above is the calibration relationship between the video picture captured by the camera and the traffic road. From the calibration matrix H and the target's motion trajectory information in the video picture obtained in S401 (that is, the target's pixel coordinate sequence across the video frames), the geographic coordinate corresponding to each of the target's pixel coordinates can be obtained. The specific calculation is (m, n, h) = H * (x, y), where (m, n, h) is the geographic coordinate of the target to be computed and (x, y) is a pixel coordinate in the target's motion trajectory information in the video picture. As described above, each pixel coordinate of the target corresponds to one geographic coordinate; after the geographic coordinate corresponding to each pixel coordinate in the target's motion trajectory information in the video picture has been computed, the target's geographic coordinate sequence is obtained. It should be understood that the position of each geographic coordinate within the target's geographic coordinate sequence is the same as the position of its corresponding pixel coordinate within the pixel coordinate sequence.
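A minimal sketch of steps 1-3 and the coordinate conversion, assuming Python with OpenCV and numpy (not prescribed by this application). Assumptions beyond the text: the ground is treated as a plane, so only longitude and latitude are mapped and altitude is handled separately; the numeric control-point values are made-up placeholders; and OpenCV's homography estimation requires at least four point correspondences, whereas the text above mentions three:

```python
# Sketch: estimate the calibration matrix H from control points, then convert
# a target's pixel coordinate sequence into a geographic coordinate sequence.
import cv2
import numpy as np

# Control points: pixel coordinates (x0, y0) and geographic coordinates (m0, n0).
pixel_pts = np.array([[102, 540], [880, 512], [660, 130], [140, 118]], np.float32)
geo_pts   = np.array([[116.30112, 39.98210], [116.30145, 39.98208],
                      [116.30141, 39.98161], [116.30109, 39.98165]], np.float32)

H, _ = cv2.findHomography(pixel_pts, geo_pts)   # calibration matrix for this camera

def pixel_to_geo(trajectory_px):
    """Convert one target's pixel coordinate sequence (its trajectory in the
    video picture) into its geographic coordinate sequence on the road."""
    pts = np.array(trajectory_px, np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```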
It should be understood that a target's motion trajectory information on the traffic road represents the target's motion trajectory on the traffic road over a period that includes the current moment, and the end of the motion trajectory information (that is, the last geographic coordinate value in the geographic coordinate sequence) is the target's position on the traffic road at the current moment.
It should be understood that this step S402 performs the conversion on the motion trajectory information in the video picture obtained in S401 for every target, obtaining every target's motion trajectory information on the traffic road.
Optionally, after a target's motion trajectory information on the traffic road is obtained, curve fitting may also be performed on the target's motion trajectory on the traffic road according to the target's motion trajectory information on the traffic road and a function. Specifically, according to the distribution of the coordinate points in the geographic coordinate sequence in the motion trajectory information and the time information corresponding to each coordinate point, a suitable function is selected, and the coordinate points corresponding to each moment in the geographic coordinate sequence are used to compute the parameters of that function, finally obtaining a fitted function, which is a function between time information and geographic coordinate information. The geographic position of the target at a future moment can be predicted according to the obtained fitted function, that is, the target's position information on the traffic road at the future moment is obtained.
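A minimal sketch of this optional fitting, assuming Python with numpy; a low-degree polynomial is used here as the "suitable function" (the choice of function is left open above), and altitude is omitted for brevity:

```python
# Sketch: fit time -> (lon, lat) and predict the target's future position.
import numpy as np

def fit_trajectory(timestamps, geo_sequence, degree=2):
    """timestamps: shape (N,); geo_sequence: shape (N, 2) of (lon, lat).
    Returns a function mapping a future time t to a predicted (lon, lat)."""
    t = np.asarray(timestamps, dtype=float)
    lon = np.polynomial.Polynomial.fit(t, geo_sequence[:, 0], degree)
    lat = np.polynomial.Polynomial.fit(t, geo_sequence[:, 1], degree)
    return lambda t_future: (float(lon(t_future)), float(lat(t_future)))

# Usage: predict where the target will be 2 seconds after the last observation.
# predict = fit_trajectory(ts, traj_geo); lon2, lat2 = predict(ts[-1] + 2.0)
```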
It is worth noting that in another embodiment, in order to comprehensively perform blind area detection and blind area danger judgment on the vehicles on the traffic road, the preceding step S401 may separately acquire the video data captured by multiple cameras set on the traffic road (for example, for a four-way intersection, the video data captured by the cameras set in the four directions may be acquired separately), and perform the target detection and target tracking described in S401 on each piece of video data, obtaining, for each piece of video data, the position information of the multiple targets in the images, their type information, and the targets' motion trajectory information in the video picture. Further, the method of S402 is executed to convert each target's motion trajectory information in the video picture into that target's motion trajectory information on the traffic road (the calibration relationship between the video picture captured by each camera and the traffic road needs to be computed in advance). Further, targets whose motion trajectory information on the traffic road coincides (or is close) are determined to be the same target on the traffic road, that is, the same target captured by different cameras, and the multiple motion trajectories corresponding to the same target are fused. For example, the geographic coordinates at each position in the multiple geographic coordinate sequences are averaged, the averages form a new geographic coordinate sequence, and the new geographic coordinate sequence is determined as that target's motion trajectory information on the traffic road. Continuing to execute S403-S405 for each target then completes the blind area determination and blind area danger judgment for the vehicles.
The method of performing vehicle blind area detection and blind area danger judgment based on the video data captured by multiple cameras can avoid the problem that a single camera has a limited field of view over a traffic road, and can further avoid the problem that some targets are occluded in the video picture captured by a single camera, so that blind area detection cannot be performed for the occluded targets. Furthermore, the video data captured by multiple cameras can also be used to obtain the complete motion trajectories of the targets on the traffic road over a period of time, so that in the subsequent steps the detection apparatus can judge a vehicle's posture according to the target's complete motion trajectory.
S403: Determine a target's posture according to the target's motion trajectory information on the traffic road or the result of target detection.
In this application, a target's posture is the target's direction of travel on the traffic road. There are many methods for determining a target's posture, and different methods may be used to determine the postures of different targets. For example, for a vehicle, one method of determining the vehicle's posture is to perform trajectory fitting using the vehicle's motion trajectory information on the traffic road (that is, the geographic coordinate sequence) obtained in S402, and to take the tangent direction of the obtained trajectory as the vehicle's posture. It should be understood that this tangent direction is the tangent direction at the end of the trajectory (since the fitted trajectory is ordered in time, the point on the trajectory corresponding to the current moment can be called the end). For other targets, such as pedestrians, this method can also be used to determine the posture and obtain the target's posture information.
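A minimal sketch of this tangent-based posture, assuming Python with numpy; the tangent at the end of the trajectory is approximated by the direction between the last two trajectory points, and the heading convention (degrees clockwise from north) is an illustrative choice:

```python
# Sketch: posture = travel direction at the end (current moment) of the
# fitted trajectory, returned as a compass-style heading in degrees.
import numpy as np

def heading_at_end(geo_sequence):
    """geo_sequence: shape (N, 2) array of (lon, lat), ordered in time."""
    (lon1, lat1), (lon2, lat2) = geo_sequence[-2], geo_sequence[-1]
    dx, dy = lon2 - lon1, lat2 - lat1            # east and north components
    return float(np.degrees(np.arctan2(dx, dy)) % 360.0)  # 0 deg = north
```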
For vehicles, another method of determining the vehicle's posture may also be used, namely using three-dimensional detection to detect the vehicle's three-dimensional contour in the video, converting the vehicle's contour into the vehicle's contour information on the traffic road using the method of S402 described above, and determining the vehicle's posture according to the contour information and the directions of the entry and exit lanes on the traffic road. For example, at an intersection of a traffic road, the direction of a vehicle's long side in its contour information is northwest-southeast, and the traffic-direction rule of the entry and exit lane corresponding to the vehicle's position at this intersection is from southeast to northwest; the vehicle's posture is then determined to be northwest according to the direction of the vehicle's long side and the directions of the entry and exit lanes on the traffic road. It is worth noting that information such as the directions of the entry and exit lanes on the traffic road can be obtained in advance by the detection apparatus from other apparatuses.
It is worth noting that for vehicles, the two methods above may also be combined to determine the vehicle's posture.
It is also worth noting that if S402 obtained the fitted function of the motion trajectory of a target such as a vehicle, together with the target's position information on the traffic road at future moments, the vehicle's motion trajectory curve over a future period can be obtained according to the fitted function, and the tangent direction of that curve can be determined as the vehicle's posture at the future moment, obtaining the vehicle's posture information at the future moment.
S404: Determine the vehicle's blind areas.
The preceding steps S401-S403 obtained the types of the targets on the traffic road, the targets' position information in the images and on the traffic road, and the targets' posture information. Based on one or more of these pieces of information, blind area estimation can be performed for a vehicle, obtaining the blind area position information and blind area range information of the vehicle at a certain position at the current moment or at a future moment.
Blind area estimation for a vehicle mainly includes: performing vehicle attribute detection on the vehicle to determine the vehicle's structural attributes; looking up the blind area information library according to the vehicle's structural attributes to determine the vehicle's blind area information; and determining the vehicle's blind areas on the traffic road at the current moment or a future moment according to the vehicle's blind area information and the vehicle's position information and posture information on the traffic road at the current moment or the future moment, obtaining the vehicle's blind area distribution information and blind area position information.
It should be understood that the blind area information library is a pre-built database that stores the structural attributes of various vehicles and the blind area information corresponding to vehicles with each kind of structural attribute. The blind area information library may be constructed from data collected manually in advance, or purchased from a third party.
It should also be understood that the blind area information library may be a database deployed in the detection apparatus, or a database outside the detection apparatus that can perform data communication with the detection apparatus. In the embodiments of this application, the blind area information library is described by taking a database inside the detection apparatus as an example.
The specific steps of the method for estimating a vehicle's blind areas are described in detail in S4041-S4044 below.
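Although S4041-S4044 are detailed later, the geometric core of placing a vehicle's blind areas on the road from its position and posture can be illustrated with a minimal sketch. Assumptions not taken from this application: Python with numpy; coordinates are already projected into a local planar metric frame; and BLIND_AREA_LIBRARY is a hypothetical stand-in for the blind area information library, storing each blind area as a polygon in the vehicle's own frame (x to the right, y forward, in metres):

```python
# Sketch: rotate each blind-area polygon by the vehicle's heading and
# translate it to the vehicle's position, yielding blind areas on the road.
import numpy as np

BLIND_AREA_LIBRARY = {  # hypothetical: structural attribute -> polygons
    "engineering_truck": [np.array([[1.2, 0.0], [4.0, 0.0], [4.0, 6.0], [1.2, 6.0]])],
}

def blind_areas_on_road(structure_attr, position_xy, heading_deg):
    """position_xy: vehicle position in the local metric frame; heading_deg:
    posture from S403 (0 deg = north, clockwise). Returns road-frame polygons."""
    a = np.radians(heading_deg)
    rot = np.array([[np.cos(a), np.sin(a)],      # clockwise rotation so that
                    [-np.sin(a), np.cos(a)]])    # "forward" points along the heading
    return [poly @ rot.T + np.asarray(position_xy)
            for poly in BLIND_AREA_LIBRARY[structure_attr]]
```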
Optionally, after the vehicle's blind areas on the traffic road have been determined, a visualized blind area image of the vehicle's blind areas on the traffic road may be constructed by combining the vehicle's structural attributes, the vehicle's position information on the traffic road, and the positions and distribution of the vehicle's blind areas on the traffic road. For example, a real-scene map may be obtained; the vehicle's model is determined according to the vehicle's structural attributes; the vehicle's model is mapped to the corresponding position on the real-scene map according to the vehicle's position information on the traffic road; and the vehicle's blind areas are also mapped to the corresponding positions on the real-scene map according to the positions and distribution of the vehicle's blind areas on the traffic road. The obtained visualized blind area image may be a graphical user interface (GUI), and the GUI may be sent to other display devices, for example to the in-vehicle display device of the corresponding vehicle or to a display device in the traffic management system. With the visualized blind area image of the vehicle at the current moment or a future moment, whether the vehicle is in a dangerous state can be determined intuitively and quickly, and the driving direction can be adjusted in time to avoid blind area dangers.
S405: Perform blind area danger judgment on the vehicle's blind areas, and when a blind area danger arises, send a blind area alarm to that vehicle or to the other vehicles or people exposed to the blind area danger.
After the vehicle's blind areas at the current moment or a future moment have been determined in S404, whether the positions of the other targets at the current moment or the future moment fall within the vehicle's blind areas can be judged according to the positions of the other targets on the traffic road, or the other targets' motion trajectory information on the traffic road, obtained in S401-S402; alternatively, whether other targets exist within the blind area range can be checked in a targeted manner according to the positions and ranges of the blind areas at the current moment or the future moment. For example, as shown in FIG. 6, if at the current moment a target is within the vehicle's blind area, the vehicle is considered to be exposed to a blind area danger, and a blind area alarm is further sent to the in-vehicle system of the vehicle over a network (for example, a wireless network or an Internet of Vehicles network). The blind area alarm includes alarm data, which may include one or more of the following: the position of the blind area where the blind area danger occurs, the position of the target within the blind area, the type of the target within the blind area, and so on, so that the in-vehicle system of the vehicle reminds the driver of the dangerous blind area situation.
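A minimal sketch of this danger judgment, assuming Python with the shapely library (an assumption, not part of this application); the blind-area polygons could come from a placement step such as the sketch after S404 above:

```python
# Sketch: a blind area danger exists when any other target's position lies
# inside one of the vehicle's blind-area polygons; return the alarm data.
from shapely.geometry import Point, Polygon

def blind_area_danger(blind_area_polygons, other_targets):
    """other_targets: list of (target_id, target_type, (x, y)) tuples.
    Returns one alarm entry per target found inside a blind area."""
    alarms = []
    for i, poly in enumerate(blind_area_polygons):
        area = Polygon(poly)
        for target_id, target_type, xy in other_targets:
            if area.contains(Point(xy)):
                alarms.append({"blind_area": i, "target_id": target_id,
                               "target_type": target_type, "position": xy})
    return alarms
```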
Optionally, when other targets are detected within the vehicle's blind areas, the detection apparatus may send an alarm to a roadside alarm device, so that the roadside alarm device emits an alarm signal such as sound or light.
Optionally, the detection apparatus may also send the detected alarm data to equipment or apparatuses in the traffic management system, so that command personnel can correspondingly direct the vehicles traveling on the traffic road, or enforce the law, based on the obtained alarm data.
Optionally, the detection apparatus may also record the blind area danger data of each vehicle at historical moments, where the blind area danger data may include the model of the vehicle for which the blind area danger occurred, the time of occurrence, the position of the blind area where the blind area danger occurred, the types of the targets within the blind area, and so on. The detection apparatus may also compile statistics on the vehicles' blind area danger data at the current moment and the blind area danger data at historical moments, obtain statistical data, and send the statistical data to the traffic management platform, so that the traffic management platform can perform risk assessment on the traffic region, carry out adaptive planning and management based on the statistical data, or introduce corresponding regulatory policies.
Optionally, based on the positions and types of the other targets detected within the blind areas in S405, and the vehicle's blind areas on the traffic road at the current moment or a future moment, the models corresponding to the other targets within the blind areas may be further projected onto the corresponding positions in the previously constructed visualized blind area image, so that the obtained visualized blind area image intuitively reflects that the vehicle is exposed to a blind area danger, and can indicate that the blind area danger consists of a certain type of target at a certain position within a certain blind area of the vehicle at the current moment or a future moment. This allows the driver, after obtaining the current visualized blind area image, to quickly understand the current dangerous situation and formulate an avoidance strategy in time.
Optionally, after determining the vehicle's blind area danger in S405, the detection apparatus may also re-plan the route for the vehicle based on the obtained current position information of the other targets in the blind area, the position information of the other targets at future moments, and the vehicle's current driving route, so as to prevent the vehicle from coming into conflict with the vehicles in the blind area and causing a traffic accident. After performing the route planning, the detection apparatus may generate an adjustment instruction, which may include the new driving route information planned by the detection apparatus; the detection apparatus further sends the adjustment instruction to the vehicle, so that after receiving the adjustment instruction, the vehicle adjusts its driving route in time. For example, after an autonomous vehicle receives the adjustment instruction, it continues driving along the new driving route in the adjustment instruction, achieving blind area danger avoidance. Alternatively, the detection apparatus sends the adjustment instruction to the traffic management platform, and the traffic management platform directs the vehicle. Still optionally, the detection apparatus may also plan routes for the other targets within the blind area and generate adjustment instructions according to the planned new route information; the adjustment instructions, containing the new route information, are sent to the other targets within the blind area (for example, autonomous vehicles), so that the other targets can adjust their future routes according to the adjustment instructions, which can likewise eliminate the blind area danger.
Optionally, when performing the above blind area danger judgment on the vehicle's blind areas, if the obtained blind area information of the vehicle includes the danger coefficients of the blind areas, blind area blocks with higher danger coefficients may be selected as high-risk blind areas according to the danger coefficients in the vehicle's blind area information, and the blind area danger judgment may be performed on the high-risk blind areas using the blind area information of the high-risk blind areas. For example, if the blind area blocks whose blind area danger coefficient is greater than a preset danger threshold are determined as high-risk blind areas, then the blind area danger judgment only needs to judge whether other targets exist within the high-risk blind areas with higher danger coefficients.
Optionally, when performing the above blind area danger judgment on the vehicle's blind areas, if the obtained blind area information of the vehicle includes the danger coefficients of the blind areas, the danger coefficients of the blind areas may be corrected according to the positions and ranges of the vehicle's blind areas on the traffic road, so that the corrected danger coefficients more accurately reflect the degree of danger of the blind areas at the vehicle's position on the traffic road. The blind area danger judgment is then performed based on the relationship between the corrected danger coefficients of the blind areas and a preset danger threshold; for example, the blind area blocks whose corrected blind area danger coefficient is greater than the preset danger threshold are determined as the vehicle's high-risk blind areas on the traffic road, and the blind area danger judgment is then performed on them.
It is worth noting that, when performing the above blind area danger judgment on the vehicle's blind areas, if only the positions and ranges of the vehicle's high-risk blind areas on the traffic road were obtained in S404, then it is only necessary to judge whether a blind area danger exists in the vehicle's high-risk blind areas.
On the one hand, the above method of judging the blind area danger only for the high-risk blind areas with higher danger coefficients can save the time the detection apparatus spends on blind area danger judgment, since other targets are less likely to be present in blind areas with lower danger coefficients. On the other hand, it can also make the blind area alarm data sent by the detection apparatus more precise: even if other targets are present in some of the vehicle's blind areas with lower danger coefficients, the probability that the vehicle collides with or scrapes the targets in those blind areas while driving is extremely small. Therefore, the presence of other targets in those blind areas need not be regarded as a blind area danger, and no alarm data needs to be sent to the vehicle or the traffic management system.
Through the above steps S401-S405, blind area detection and blind area danger judgment can be performed on the vehicles running on the traffic road, so that the driver of a vehicle can learn about the blind area danger in time and avoid the danger promptly. The above steps also enable the traffic management system to alert the targets within a vehicle's blind areas (for example, pedestrians and non-motor vehicles) based on the alarm data, for example by sounding horns, flashing lights, or sounding buzzer alarms through roadside alarm devices.
It is worth noting that the preceding steps S401-S403 of this application may be performed on all the targets on the traffic road, and the preceding steps S404-S405 may be performed on all the motor vehicles on the traffic road, so that all vehicles can learn about the blind area danger in time. Alternatively, the preceding steps S404-S405 may be performed only on specific vehicle types on the traffic road, for example performing blind area detection and blind area danger judgment only on large engineering vehicles, since this type of vehicle is more likely to experience blind area danger events. The types of vehicles on which blind area detection and blind area danger detection are to be performed may be identified when target detection is performed in the preceding step S401; the vehicle types to be detected are then determined before S404 is executed, and the subsequent steps of S404 and S405 are performed only on the vehicle types to be detected.
It is also worth noting that this application may perform the method of steps S401-S405 on the video frames captured by the camera at every moment (or on video frames at fixed time intervals); that is, the detection apparatus of this application can perform vehicle blind area detection and blind area danger judgment continuously and in real time.
A specific implementation of step S401 is described below in detail with reference to FIG. 7:
S4011: Receive video data, extract the images (that is, the video frames) in the video data, and standardize the sizes of the images. It should be understood that this video data is the video stream of the traffic road captured by the camera in real time, and processing the images in the video data can be understood as processing the picture of the traffic road at the current moment.
The purpose of standardizing the image sizes in this step is to make the sizes of the standardized images fit the input of the target detection model.
This application places no specific limitation on the method of standardizing the image sizes; for example, stretching or compressing the size, or padding or cropping, may be used.
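A minimal sketch of one such standardization, assuming Python with OpenCV and numpy; the combination of scaling plus padding shown here is one illustrative choice among the options listed above, and the target size is an assumed parameter:

```python
# Sketch: scale the frame to fit the model's input size, then pad the rest,
# so the content is neither stretched nor cropped.
import cv2
import numpy as np

def standardize(frame, size=(640, 640)):
    h, w = frame.shape[:2]
    scale = min(size[0] / w, size[1] / h)
    resized = cv2.resize(frame, (int(w * scale), int(h * scale)))
    canvas = np.zeros((size[1], size[0], 3), dtype=frame.dtype)  # black padding
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    return canvas
```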
S4012: Input the size-standardized images into the target detection model to obtain the position information and type information of the targets in the images. Specifically, the target detection model extracts features from an input image and further detects the targets in the image according to the extracted features; the target detection model outputs the position information and type information of the targets detected in the image. For example, the target detection model outputs an output image in which each detected target is framed by a rectangular box, and each rectangular box also carries the type information of that target. A target's position information in the image consists of the pixel coordinates of one or more points in the image; for example, it may be the pixel coordinates of the rectangular box corresponding to the target, or the pixel coordinates of the center or the lower-left corner of the rectangular box corresponding to the target. A target's type information is the type the target belongs to, for example: pedestrian, vehicle, or static object.
It should be understood that the target detection model can detect targets in the input image because it was trained on a target detection training set before detection is performed. For example, in this embodiment, for the target detection model to detect pedestrians, vehicles, and static objects on a traffic road, the model must be trained with many images from the training set that contain pedestrians, vehicles, and static objects, and every image in the training set must be annotated: the pedestrians, vehicles, or static objects contained in each image are framed by rectangular boxes, and each box carries the type information of the target inside it. Because the target detection model repeatedly learns the features of each target type during training, the trained model has the ability to detect pedestrians, vehicles, and static objects in an input image.
It is worth noting that S4012 performs target detection on every image in the continuous video data received in S4011 (or on images at a fixed time interval). Therefore, after S4012 has run for a period of time, the position information and type information of the targets in images spanning a continuous period of the video data are obtained. Each image also carries a timestamp, so the images can be sorted in time order by timestamp.
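Purely as a sketch of the data this step produces, a per-frame detection record could be kept as below; the `Detection` layout and the `model` interface are assumptions standing in for whatever trained detector is actually deployed:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    target_type: str   # e.g. "pedestrian", "vehicle", "static_object"
    box: tuple         # (x1, y1, x2, y2) pixel coordinates of the rectangle
    timestamp: float   # timestamp of the source frame

def detect_frame(model, frame, timestamp):
    """Run the (hypothetical) trained detector on one standardized frame.

    `model` is assumed to return (type, box) pairs; its exact interface
    depends on the deployed detection model and is not specified here.
    """
    return [Detection(t, box, timestamp) for t, box in model(frame)]

# Detections from successive frames can then be sorted by timestamp:
#   records.sort(key=lambda d: d.timestamp)
```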
S4013: Track the moving targets according to the position information and type information of the targets detected in S4012, and determine the motion track of each target in the video pictures over a historical time period that includes the current moment.
Target tracking means tracking the targets in two images from adjacent moments of the video data (or two images separated by a fixed time interval), determining that a target in the two adjacent images is the same target in the physical world, making the two targets in the two images correspond to the same target ID, and recording, in a target track table, the pixel coordinates of that target ID in the image at the current moment. The target track table records the pixel coordinates, at the current moment and at historical moments, of every target present in the area captured by the camera (the motion track of a target can be fitted from its current and historical pixel coordinates). During target tracking, the type information and position information of the targets obtained in step S4012 are compared with the cached type information and position information of the targets in the already-processed video frame of the previous moment, to determine the association between the targets in the two images at adjacent moments (or at two moments separated by a fixed interval): targets judged to be the same target in the two images are marked with the same target ID, and each target's ID and its pixel coordinates in the image are recorded, one target ID corresponding to one physical target. Thus, after target tracking has been performed on multiple detected images in time order, multiple pixel coordinates corresponding to one target ID are obtained, forming a pixel-coordinate sequence; that sequence is the motion track information of the target.
A specific target tracking procedure is provided below:
1. Target matching. According to the position information (that is, the pixel coordinates in the image) and type information of the targets detected in the image at the current moment, the detected targets are matched against the targets in the image of the previous moment (or of a fixed time interval earlier). For example, the target ID of a target in the current image is determined according to the overlap ratio between its rectangular box and the rectangular boxes of the targets in the previous image: if the overlap ratio between the rectangular box of a target in the current image and that of some target in the previous image is greater than a preset threshold, the target at the current moment and the target at the previous moment are determined to be the same; the recorded target ID corresponding to that target is found in the target track table, and the pixel coordinates of the target in the current image are recorded in the sequence corresponding to that target ID. It should be understood that step 1 and the subsequent steps are performed for every target detected in the current image.
2. When one or more targets in the current image are not matched to any target in the previous image in step 1 (that is, the one or more targets are not found in the image of the previous moment, for example, a vehicle has just driven into the area of the traffic intersection captured by the camera at the current moment), the one or more targets are determined to be targets newly appearing on the traffic road at the current moment, and a new target ID is created for each such target. The target ID uniquely identifies the target, and the target ID and its pixel coordinates at the current moment are recorded in the target track table.
3. When one or more targets of the previous moment are not matched to any target in the current image in step 1 (that is, a target existed at the previous moment but is not found at the current moment, for example, the target is partially or completely occluded by another target at the current moment, or the target has left the area of the traffic road captured by the camera), the pixel coordinates of the target in the image at the current moment are predicted from the pixel coordinates of the target at historical moments recorded in the target track table (for example, using three-point extrapolation, a trajectory-fitting algorithm, or the like).
4. Determine the existence state of the target according to the pixel coordinates predicted in step 3. When the predicted pixel coordinates of the target lie outside or at the edge of the current image, it can be determined that the target has left the picture within the camera's field of view at the current moment; when the predicted pixel coordinates lie inside the current video frame and away from the edge, it is determined that the target is still in the video frame at the current moment.
5. When step 4 determines that the predicted target has left the picture within the camera's field of view at the current moment, delete the target ID and its corresponding data from the target track table.
6. When step 4 determines that the predicted target is still in the image at the current moment, record the predicted pixel coordinates of the target in the target track table.
It is worth noting that the foregoing steps 1 to 6 may be performed for every target detected in the image at each moment of the video data captured by the camera (or in images at a fixed time interval), or only for those targets in such images whose detection result obtained in the foregoing S4012 is non-static.
After target tracking, pixel-coordinate sequences corresponding to multiple target IDs can be obtained; each sequence is the motion track information of the corresponding target in the video picture. If the tracking operation is also performed on a static target, a pixel-coordinate sequence for the static target is obtained; since the target is stationary, the pixel coordinates in its sequence cluster near a single point.
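A minimal sketch of the matching logic of steps 1 and 2 above, assuming the overlap ratio is computed as intersection-over-union and that the 0.5 threshold is an illustrative preset value rather than one fixed by this application:

```python
import itertools

_id_gen = itertools.count()  # source of fresh target IDs (step 2)

def iou(a, b):
    """Overlap ratio (intersection over union) of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_targets(prev_boxes, current_boxes, threshold=0.5):
    """Greedily match current detections against the previous frame (step 1);
    unmatched detections receive a fresh target ID (step 2).

    prev_boxes: dict mapping target ID -> box in the previous frame.
    Returns a dict mapping target ID -> box in the current frame; any IDs
    left over in prev_boxes would go on to the prediction of step 3.
    """
    assignments = {}
    unmatched_prev = dict(prev_boxes)
    for box in current_boxes:
        best_id, best_iou = None, threshold
        for tid, prev_box in unmatched_prev.items():
            overlap = iou(box, prev_box)
            if overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is None:
            best_id = next(_id_gen)   # newly appeared target on the road
        else:
            del unmatched_prev[best_id]
        assignments[best_id] = box
    return assignments
```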
The specific implementation of step S404 in an embodiment is described below with reference to FIG. 8:
S4041: Detect the structural attributes of the vehicle.
The foregoing step S401 has already detected the targets recorded in the video data captured by the cameras set on the traffic road; these targets include vehicles, pedestrians, and so on.
In this step, the structural attributes of a vehicle can be detected with a vehicle attribute detection model in either of two ways. The first is to input every frame of the video data (or frames at a fixed time interval) into the trained vehicle attribute detection model, which detects the structural attributes of the vehicles in the input image and obtains the position information of each vehicle in the image and the structural attributes of each vehicle.
The second is to segment, according to the position information of the targets whose type was detected as vehicle in the foregoing step S401, the rectangular box corresponding to each vehicle out of the image, and to input the segmented vehicle sub-images into the vehicle attribute detection model, which outputs the structural attributes of each vehicle. Combining the target detection model with the vehicle attribute detection model then yields the position information of the targets in the image, the type information of the targets, and the structural attributes of the vehicles.
Which of the two methods is used to obtain the structural attributes of a vehicle can be decided according to the type of input image required by the trained vehicle attribute detection model.
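A minimal sketch of the second method, assuming the frame is a NumPy image array, the boxes are pixel rectangles from step S401, and `attribute_model` stands in for the trained vehicle attribute detection model (both names are illustrative):

```python
def detect_vehicle_attributes(frame, vehicle_boxes, attribute_model):
    """Crop each detected vehicle out of the frame and run the
    (hypothetical) attribute model on the resulting sub-image.

    vehicle_boxes: dict mapping target ID -> (x1, y1, x2, y2) pixel box.
    """
    attributes = {}
    for target_id, (x1, y1, x2, y2) in vehicle_boxes.items():
        sub_image = frame[y1:y2, x1:x2]              # vehicle sub-image
        attributes[target_id] = attribute_model(sub_image)
    return attributes
```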
It is worth noting that, as described above, the vehicle attribute detection model is a neural network model. The structural attribute of a vehicle obtained through the vehicle attribute detection model may be the type of the vehicle, for example, the vehicle model or the vehicle sub-category; this depends on the function the vehicle attribute detection model acquires through training. For example, if the training images used to train the vehicle attribute detection model contain vehicles annotated with vehicle models (for example, a Changan 20-ton truck, a BYD 30-seat bus, a Mercedes-Benz C200 sedan), the trained model can be used to detect the model of a vehicle. As another example, if the training images contain vehicles annotated with vehicle sub-categories (for example, 7-seat commercial vehicle, 4-seat sedan, 20-ton truck, 10-ton cement truck), the trained model can be used to detect the sub-category of a vehicle.
The structural attributes of the vehicle may further include the length and width information of the vehicle, the type and position of the vehicle's cab, and so on. For step S4041, the vehicle attribute detection model may likewise be used to detect such information. It should be understood that detecting the structural attributes of a vehicle yields one or more of structural attributes such as the vehicle model, the vehicle sub-category, the length-width proportions of the vehicle, and the type and position of the vehicle's cab.
It is worth noting that the above step S4041 may also be performed right after the target detection in S401. The obtained vehicle structural attributes, together with the target types and target position information obtained in the target detection stage, may be stored in the target detection and tracking module of the detection apparatus, in another module of the detection apparatus, or in another storage device readable by the detection apparatus.
S4042: Query the blind area information library according to the structural attributes of the vehicle to obtain the blind area information of the vehicle.
This application does not limit the structure or form of the blind area information library in any way. For example, in an embodiment, the blind area information library may be a relational database that provides a query interface: the structural attributes of the vehicle obtained in the foregoing step are sent to the interface, the interface queries the corresponding blind area information in the library according to the structural attributes, and the library returns the query result through the interface.
The query result is the blind area information of the vehicle corresponding to its structural attributes and may include the position of each blind area relative to the vehicle, the shape of each blind area, and the number of blind areas. The position of a blind area relative to the vehicle may be the offsets of key points of the blind area relative to the center point of the vehicle; for example, the position of a rectangular blind area relative to the vehicle is given by the offset length and offset direction of the four corner points of the rectangle relative to the vehicle's center point. Optionally, the query result may also include the area of each blind area and the danger coefficient of each blind area, where the danger coefficient is the probability that a dangerous situation may occur in that blind area and indicates its degree of danger. The danger coefficient may be judged comprehensively by the blind area information library according to factors such as the area, position, and shape of the blind area; for example, a blind area that is large and located diagonally behind the vehicle may be judged to have a higher danger coefficient.
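Purely for illustration, the blind area record returned by such a query might be organized as follows; the field names, units, and Python representation are assumptions of this sketch, not a schema defined by this application (the later sketches reuse this layout):

```python
from dataclasses import dataclass, field

@dataclass
class BlindZone:
    # Corner offsets (dx, dy) in meters relative to the vehicle's center point
    corner_offsets: list
    shape: str = "rectangle"
    area: float = 0.0                # optional: blind area size in square meters
    danger_coefficient: float = 0.0  # optional: probability of a dangerous event

@dataclass
class BlindZoneInfo:
    vehicle_model: str               # e.g. a model or sub-category key
    length: float                    # vehicle length in meters
    width: float                     # vehicle width in meters
    zones: list = field(default_factory=list)  # one entry per independent blind area
```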
The blind area information library can be queried according to the structural attributes of the vehicle in multiple specific ways; three examples follow:
First, the blind area determination module of the detection apparatus sends the vehicle model or the vehicle sub-category to the interface of the blind area information library; the library looks up the blind area information corresponding to that model or sub-category and returns it as the query result through the interface. The query result may include the position of each blind area relative to the vehicle, the shape of each blind area, and the number of blind areas; optionally, it may also include the area and the danger coefficient of each blind area.
Second, the blind area determination module of the detection apparatus sends the vehicle model or the vehicle sub-category to the interface of the blind area information library, but the library finds no blind area information matching that model or sub-category and returns a query-failure message to the blind area determination module. The module then further sends structural attributes such as the length and width information of the vehicle and the type and position of the vehicle's cab to the interface, and the library looks up, as the query result, the blind area information of the vehicle in the library whose structural attributes are closest to those of the queried vehicle and returns it through the interface. The query result may include the position of each blind area relative to the vehicle, the shape of each blind area, and the number of blind areas; optionally, it may also include the area and the danger coefficient of each blind area.
Third, the blind area determination module of the detection apparatus sends the structural attributes of the vehicle to the interface of the blind area information library, where the structural attributes include one or more of the vehicle model, the vehicle sub-category, the length and width information of the vehicle, and the type and position of the vehicle's cab. According to the structural attributes, the library determines the blind area information of the vehicle of that model or sub-category, or determines the blind area information of the vehicle whose structural attributes are closest to those of the queried vehicle. The library finally returns the query result to the blind area determination module; the query result may include the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area, and optionally the danger coefficient and the area of each blind area.
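A sketch of the second query path under stated assumptions: an in-memory dictionary stands in for the relational database, the lookup key is the vehicle model, and the nearest structural match is judged on length and width only (all of these are illustrative choices, using the record layout sketched earlier):

```python
def query_blind_zone_info(library, model=None, length=None, width=None):
    """Look up blind area information by exact vehicle model first;
    fall back to the entry with the closest length/width (method two).

    library: dict mapping model/sub-category key -> BlindZoneInfo.
    """
    if model is not None and model in library:
        return library[model]        # exact model or sub-category hit
    if length is None or width is None:
        return None                  # query failure, nothing to fall back on
    # Nearest match by structural attributes (here: length and width only)
    return min(
        library.values(),
        key=lambda info: abs(info.length - length) + abs(info.width - width),
    )
```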
The foregoing S4042 yields the blind area information of the vehicle. The blind area information is related to the attributes of the vehicle; vehicles with different attributes correspond to different blind area information.
S4043: Determine the blind areas of the vehicle according to the blind area information.
This step determines the blind areas of the vehicle at the current moment according to the blind area information of the vehicle obtained in the foregoing step S4042, the motion track information of the vehicle on the traffic road obtained in the foregoing step S402, and the posture information of the vehicle obtained in the foregoing step S403. Alternatively, given the position information and posture information of the vehicle at a future moment obtained through the foregoing steps S401-S403, together with the blind area information of the vehicle, the blind areas of the vehicle at the future moment can be determined.
The specific method for determining the blind areas of the vehicle at the current moment is as follows: the position, distribution, and area of the vehicle's blind areas on the traffic road are determined according to the geographic coordinates of the vehicle at the current moment (taken from its motion track information on the traffic road) and the posture of the vehicle, combined with blind area information such as the position, area, and shape of each blind area relative to the vehicle. For example, as shown in FIG. 9, for a vehicle whose model is a Changan 20-ton truck, the geographic coordinates of the vehicle on the traffic road are known, that is, the geographic coordinates of the vehicle's center point are known, and the posture of the vehicle is known to be from east to west. The blind area information of this vehicle obtained in the foregoing S4042 shows that the vehicle has six independent blind areas, and gives the shape and area of each independent blind area and the offsets of the key points of each independent blind area relative to the vehicle's reference point. Thus, the actual position and range of the vehicle's blind areas on the traffic road can be determined from the blind area information and the position and posture of the vehicle on the traffic road.
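A minimal sketch of this placement computation, under the assumptions that the road surface is treated as a 2-D plane, the posture is reduced to a heading angle in radians (0 = due east), and the corner offsets follow the illustrative record above:

```python
import math

def place_blind_zone(center_xy, heading, corner_offsets):
    """Rotate a blind area's corner offsets by the vehicle heading and
    translate them to the vehicle's center, yielding road coordinates.

    center_xy:      (x, y) geographic coordinates of the vehicle's center.
    heading:        vehicle posture as an angle in radians (assumed 0 = east).
    corner_offsets: list of (dx, dy) offsets relative to the vehicle center.
    """
    cx, cy = center_xy
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    return [
        (cx + dx * cos_h - dy * sin_h,   # standard 2-D rotation, then translation
         cy + dx * sin_h + dy * cos_h)
        for dx, dy in corner_offsets
    ]
```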
Optionally, when step S4043 determines the blind areas of the vehicle according to the blind area information, if the blind area information includes the danger coefficient of each blind area, then in another embodiment of this application, S4043 determines, according to the danger coefficients in the blind area information, the high-risk blind area information corresponding to the high-risk blind blocks whose danger coefficient is greater than a preset danger threshold, and determines the position and range of the vehicle's high-risk blind areas on the traffic road at the current moment according to the high-risk blind area information and the position information and posture information of the vehicle on the traffic road at the current moment. This method determines only the position and distribution range of the vehicle's high-risk blind areas on the traffic road, so that the subsequent blind area danger determination only needs to judge whether a danger exists in the high-risk blind areas, which reduces the amount of computation. Moreover, since a dangerous event is unlikely to occur in the low-risk blind blocks of a vehicle, the vehicle is unlikely to collide or scrape with other targets in a low-risk blind area even when such targets are present; therefore, not raising an alarm for targets in low-risk blind areas causes no danger or harm to the vehicle or other targets, and avoids disturbing vehicle drivers and pedestrians with excessive alarms.
Optionally, when step S4043 determines the blind areas of the vehicle according to the blind area information, the driving speed of the vehicle may also be determined according to the motion track information of the vehicle on the traffic road, and the range of all or some of the vehicle's blind areas may be expanded according to the driving speed. For example, if the driving speed of the vehicle is higher than a certain threshold, the range of the vehicle's front blind area is multiplied by a preset scale factor, enlarging the blind area. As another example, the driving speed of the vehicle is converted into a scale factor according to a preset rule, the area of each blind area in the blind area information is multiplied by that factor, and the position of each blind area relative to the vehicle is adjusted accordingly. The specific method for determining the driving speed of the vehicle from its motion track information on the traffic road is: divide the distance between adjacent geographic coordinates in the geographic-coordinate sequence of the motion track information by the time difference between the two corresponding adjacent video frames of the video data, which gives the driving speed of the vehicle at a given moment.
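The speed estimate and the speed-based expansion might be sketched as follows; the coordinates are assumed to be in meters, the timestamps in seconds, and the 10 m/s threshold and 1.5 scale factor are illustrative values only, not values fixed by this application:

```python
import math

def driving_speed(track):
    """Speed at the latest moment from a track of (x, y, timestamp) samples:
    distance between the last two points divided by their time difference.
    Requires at least two samples."""
    (x0, y0, t0), (x1, y1, t1) = track[-2], track[-1]
    return math.hypot(x1 - x0, y1 - y0) / (t1 - t0)

def expand_offsets(corner_offsets, speed, threshold=10.0, factor=1.5):
    """Enlarge a blind area when the vehicle is fast; threshold (m/s) and
    scale factor are illustrative presets."""
    if speed <= threshold:
        return corner_offsets
    return [(dx * factor, dy * factor) for dx, dy in corner_offsets]
```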
Optionally, if the obtained blind area information does not include danger coefficients, then after step S4043 determines the blind areas of the vehicle according to the blind area information, the danger coefficient of each blind area may be determined according to the geographic location and range of that blind area. Or, if the blind area information obtained in the foregoing step S4042 already includes the danger coefficient of each blind area, the danger coefficient is adjusted according to the geographic location and range of each blind area, so that the danger coefficient becomes more accurate.
Through the above steps S4041-S4043, the blind areas of the vehicle on the traffic road at a certain moment can be determined. It should be understood that the video data captured by the cameras in this application may be a video stream recording the movement of targets on the traffic road in real time; therefore, the method of determining the blind areas of a vehicle can be executed continuously, determining the blind areas of the vehicle at every moment.
This application further provides the detection apparatus 300 shown in FIG. 4. The modules and functions included in the detection apparatus 300 are as described above and are not repeated here.
In an embodiment, the target detection and tracking module 301 in the detection apparatus 300 is configured to perform the foregoing method step S401; in another, more specific embodiment, the target detection and tracking module 301 is configured to perform the foregoing method steps S4011-S4013 and their optional steps. The target positioning module 302 is configured to perform the foregoing method step S402. The posture determination module 303 is configured to perform the foregoing method step S403. The blind area determination module 304 is configured to perform the foregoing method steps S404-S405; in another, more specific embodiment, the blind area determination module 304 is configured to perform the foregoing method steps S4041-S4043 and S405 and the optional steps described in the foregoing S404-S405.
This application further provides a detection system for detecting the blind areas of a vehicle. The system includes a vehicle dynamic monitoring system and a vehicle blind area detection system. The vehicle dynamic monitoring system is configured to receive video data and determine, according to the video data, the position information and posture information of a vehicle on the traffic road at the current moment or a future moment, where the video data is captured by a camera set on the traffic road. The vehicle blind area detection system is configured to obtain the blind area information determined by the structural attributes of the vehicle, and to determine the blind areas of the vehicle on the traffic road at the current moment or a future moment according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road. More specifically, the detection system is configured to perform the foregoing method S401-S405: the vehicle dynamic monitoring system of the detection system performs the foregoing S401-S403, and the vehicle blind area detection system performs the foregoing S404-S405.
This application further provides a vehicle-mounted apparatus, which is set on a vehicle. The vehicle-mounted apparatus of this application may be configured to perform the foregoing method S401-S405, and may provide the same functions as the detection apparatus 300.
This application further provides a vehicle. The vehicle includes a storage unit and a processing unit. The storage unit of the vehicle is configured to store a set of computer instructions and a data collection; the processing unit executes the computer instructions stored in the storage unit and reads the data collection from the storage unit, so that the vehicle can perform the foregoing method S401-S405.
The storage unit of the vehicle may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The processing unit of the vehicle may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof. The processing unit may include one or more chips, and may further include an AI accelerator, for example, a neural processing unit (NPU).
This application further provides the computing device 100 shown in FIG. 3. The processor 102 in the computing device 100 reads a set of computer instructions stored in the memory 101 to perform the foregoing method for detecting the blind areas of a vehicle.
Since the modules in the detection apparatus 300 provided in this application can be deployed in a distributed manner on multiple computers in the same environment or in different environments, this application further provides the system shown in FIG. 10. The system includes multiple computers 500, and each computer 500 includes a memory 501, a processor 502, a communication interface 503, and a bus 504, where the memory 501, the processor 502, and the communication interface 503 are communicatively connected to one another through the bus 504.
The memory 501 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 501 may store computer instructions; when the computer instructions stored in the memory 501 are executed by the processor 502, the processor 502 and the communication interface 503 are configured to perform part of the method for detecting the blind areas of a vehicle. The memory may also store a data collection; for example, part of the storage resources in the memory 501 may be partitioned into a blind area information library storage module configured to store the blind area information library required by the detection apparatus 300.
The processor 502 may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof. The processor 502 may include one or more chips, and may include an AI accelerator, for example, a neural processing unit (NPU).
The communication interface 503 uses a transceiver module, for example but not limited to a transceiver, to implement communication between the computer 500 and other devices or communication networks. For example, the blind area information may be obtained through the communication interface 503.
The bus 504 may include a path for transferring information between the components of the computer 500 (for example, the memory 501, the processor 502, and the communication interface 503).
The computers 500 establish communication paths with one another through a communication network. Each computer 500 runs any one or more of the target detection and tracking module 301, the target positioning module 302, the posture determination module 303, and the blind area determination module 304. Any computer 500 may be a computer in a cloud data center (for example, a server), a computer in an edge data center, or a terminal computing device.
The descriptions of the procedures corresponding to the foregoing drawings each have their own emphasis; for a part not described in detail in one procedure, refer to the related descriptions of the other procedures.
All or part of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, they may be implemented wholly or partly in the form of a computer program product. The computer program product that implements blind area detection for a vehicle includes one or more computer instructions for detecting the blind areas of a vehicle; when these computer program instructions are loaded and executed on a computer, the procedures or functions described in FIG. 5 to FIG. 7 of the embodiments of the present invention are produced wholly or partly.
The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium stores the computer program instructions that implement blind area detection for a vehicle. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).

Claims (23)

1. A method for detecting blind areas of a vehicle, characterized in that the method comprises:
    receiving video data, wherein the video data is captured by a camera set on a traffic road;
    determining, according to the video data, position information and posture information of a vehicle on the traffic road at a current moment or a future moment;
    obtaining blind area information of the vehicle, wherein the blind area information is determined by structural attributes of the vehicle;
    determining, according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road, a blind area of the vehicle on the traffic road at the current moment or the future moment.
2. The method according to claim 1, characterized in that the method further comprises:
    determining, according to the video data and the blind area of the vehicle on the traffic road at the current moment or the future moment, that the vehicle is in blind area danger, wherein the blind area danger indicates that another target exists in the blind area of the vehicle on the traffic road;
    sending a blind area alarm.
3. The method according to claim 2, characterized in that the blind area alarm comprises alarm data, and the alarm data comprises one or more of the following information: the position and range, on the traffic road, of the blind area in which the blind area danger occurs; position information of the other target on the traffic road; and the type of the other target.
4. The method according to any one of claims 1 to 3, characterized in that the determining, according to the video data, the position information and posture information of the vehicle on the traffic road at the future moment comprises:
    determining, according to the video data, position information and posture information of the vehicle on the traffic road at the current moment;
    predicting, according to the position information and posture information of the vehicle on the traffic road at the current moment, the position information and posture information of the vehicle on the traffic road at the future moment.
5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    constructing a visualized blind area image according to the blind area of the vehicle on the traffic road at the current moment or the future moment;
    sending the visualized blind area image.
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    obtaining a driving speed of the vehicle;
    adjusting, according to the driving speed of the vehicle, the blind area of the vehicle on the traffic road at the current moment or the future moment.
7. The method according to claim 2 or 3, characterized in that the method further comprises:
    sending an adjustment instruction to the vehicle in blind area danger, wherein the adjustment instruction instructs the vehicle to adjust its driving route.
8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    determining a high-risk blind area in the blind area.
9. The method according to any one of claims 1 to 8, characterized in that the video data comprises multiple video streams captured by multiple cameras set at different positions on the traffic road;
    the determining, according to the video data, the position information of the vehicle on the traffic road at the current moment or the future moment comprises:
    determining, according to the multiple video streams, position information of the vehicle in the multiple video streams at the current moment or the future moment;
    determining, according to the position information of the vehicle in the multiple video streams at the current moment or the future moment, the position information of the vehicle on the traffic road at the current moment or the future moment.
10. The method according to any one of claims 1 to 9, characterized in that the blind area information of the vehicle comprises: the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area.
11. A detection apparatus, characterized by comprising:
    a target detection and tracking module, configured to receive video data, wherein the video data is captured by a camera set on a traffic road;
    a target positioning module, configured to determine, according to the video data, position information of a vehicle on the traffic road at a current moment or a future moment;
    a posture determination module, configured to determine, according to the video data, posture information of the vehicle on the traffic road at the current moment or the future moment;
    a blind area determination module, configured to obtain blind area information of the vehicle, wherein the blind area information is determined by structural attributes of the vehicle; and further configured to determine, according to the blind area information of the vehicle and the position information and posture information of the vehicle on the traffic road, a blind area of the vehicle on the traffic road at the current moment or the future moment.
12. The detection apparatus according to claim 11, characterized in that
    the blind area determination module is further configured to determine, according to the video data and the blind area of the vehicle on the traffic road at the current moment or the future moment, that the vehicle is in blind area danger, wherein the blind area danger indicates that another target exists in the blind area of the vehicle on the traffic road, and to send a blind area alarm.
13. The detection apparatus according to claim 12, characterized in that the blind area alarm comprises alarm data, and the alarm data comprises one or more of the following information: the position and range, on the traffic road, of the blind area in which the blind area danger occurs; position information of the other target on the traffic road; and the type of the other target.
14. The detection apparatus according to any one of claims 11 to 13, characterized in that
    when determining, according to the video data, the position information of the vehicle on the traffic road at the future moment, the target positioning module is specifically configured to:
    determine, according to the video data, position information of the vehicle on the traffic road at the current moment;
    predict, according to the position information of the vehicle on the traffic road at the current moment, the position information of the vehicle on the traffic road at the future moment;
    when determining, according to the video data, the posture information of the vehicle on the traffic road at the future moment, the posture determination module is specifically configured to:
    determine, according to the video data, posture information of the vehicle on the traffic road at the current moment;
    predict, according to the posture information of the vehicle on the traffic road at the current moment, the posture information of the vehicle on the traffic road at the future moment.
15. The detection apparatus according to any one of claims 11 to 14, characterized in that
    the blind area determination module is further configured to construct a visualized blind area image according to the blind area of the vehicle on the traffic road at the current moment or the future moment, and to send the visualized blind area image.
16. The detection apparatus according to any one of claims 11 to 15, characterized in that
    the blind area determination module is further configured to obtain a driving speed of the vehicle, and to adjust, according to the driving speed of the vehicle, the blind area of the vehicle on the traffic road at the current moment or the future moment.
17. The detection apparatus according to claim 12 or 13, characterized in that
    the blind area determination module is further configured to send an adjustment instruction to the vehicle in blind area danger, wherein the adjustment instruction instructs the vehicle to adjust its driving route.
18. The detection apparatus according to any one of claims 11 to 17, characterized in that
    the blind area determination module is further configured to determine a high-risk blind area in the blind area.
19. The detection apparatus according to any one of claims 11 to 18, characterized in that the video data comprises multiple video streams captured by multiple cameras set at different positions on the traffic road;
    the target detection and tracking module is further configured to determine, according to the multiple video streams, position information of the vehicle in the multiple video streams at the current moment or the future moment;
    the target positioning module is further configured to determine, according to the position information of the vehicle in the multiple video streams at the current moment or the future moment, the position information of the vehicle on the traffic road at the current moment or the future moment.
20. The detection apparatus according to any one of claims 11 to 19, characterized in that the blind area information of the vehicle comprises: the number of blind areas, the position of each blind area relative to the vehicle, and the shape of each blind area.
21. A vehicle-mounted apparatus, wherein the vehicle-mounted apparatus is set on a vehicle, characterized in that the vehicle-mounted apparatus is configured to perform the method according to any one of claims 1 to 10.
22. A system, characterized in that the system comprises at least one memory and at least one processor, wherein the at least one memory is configured to store a set of computer instructions;
    when the at least one processor executes the set of computer instructions, the system performs the method according to any one of claims 1 to 10.
23. A non-transitory readable storage medium, characterized in that the non-transitory readable storage medium stores computer program code, and when the computer program code is executed by a computing device, the computing device performs the method according to any one of claims 1 to 10.
PCT/CN2020/078329 2019-07-09 2020-03-07 Method and apparatus for detecting blind areas of vehicle WO2021004077A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910616840 2019-07-09
CN201910616840.3 2019-07-09
CN201911024795.9 2019-10-25
CN201911024795.9A CN112216097A (en) 2019-07-09 2019-10-25 Method and device for detecting blind area of vehicle

Publications (1)

Publication Number Publication Date
WO2021004077A1 (en) 2021-01-14

Family

ID=74048637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078329 WO2021004077A1 (en) 2019-07-09 2020-03-07 Method and apparatus for detecting blind areas of vehicle

Country Status (2)

Country Link
CN (1) CN112216097A (en)
WO (1) WO2021004077A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634188A (en) * 2021-02-02 2021-04-09 深圳市爱培科技术股份有限公司 Vehicle far and near scene combined imaging method and device
CN112937446A (en) * 2021-04-14 2021-06-11 宝能汽车科技有限公司 Blind area video acquisition method and system
CN113060157A (en) * 2021-03-30 2021-07-02 恒大新能源汽车投资控股集团有限公司 Blind zone road condition broadcasting device, road condition information sharing device, system and vehicle
CN113096195A (en) * 2021-05-14 2021-07-09 北京云迹科技有限公司 Camera calibration method and device
CN113479197A (en) * 2021-06-30 2021-10-08 银隆新能源股份有限公司 Control method of vehicle, control device of vehicle, and computer-readable storage medium
CN113619599A (en) * 2021-03-31 2021-11-09 中汽创智科技有限公司 Remote driving method, system, device and storage medium
CN113682319A (en) * 2021-08-05 2021-11-23 地平线(上海)人工智能技术有限公司 Camera adjusting method and device, electronic equipment and storage medium
CN114655131A (en) * 2022-03-29 2022-06-24 东风汽车集团股份有限公司 Vehicle-mounted perception sensor adjusting method, device and equipment and readable storage medium
CN114782923A (en) * 2022-05-07 2022-07-22 厦门瑞为信息技术有限公司 Vehicle blind area detection system
CN114944067A (en) * 2022-05-16 2022-08-26 浙江海康智联科技有限公司 Elastic bus lane implementation method based on vehicle-road cooperation
WO2022204854A1 (en) * 2021-03-29 2022-10-06 华为技术有限公司 Method for acquiring blind zone image, and related terminal apparatus
CN115171431A (en) * 2022-08-17 2022-10-11 东揽(南京)智能科技有限公司 Intersection multi-view-angle large vehicle blind area early warning method
CN115222767A (en) * 2022-04-12 2022-10-21 广州汽车集团股份有限公司 Space parking stall-based tracking method and system
CN115482679A (en) * 2022-09-15 2022-12-16 深圳海星智驾科技有限公司 Automatic driving blind area early warning method and device and message server
CN115731742A (en) * 2021-08-26 2023-03-03 博泰车联网(南京)有限公司 Collision prompt information output method and device, electronic equipment and readable storage medium
CN116080529A (en) * 2023-04-12 2023-05-09 深圳市速腾聚创科技有限公司 Blind area early warning method and device, electronic equipment and storage medium
CN116564111A (en) * 2023-07-10 2023-08-08 中国电建集团昆明勘测设计研究院有限公司 Vehicle early warning method, device and equipment for intersection and storage medium
CN117734680A (en) * 2024-01-22 2024-03-22 珠海翔越电子有限公司 Blind area early warning method, system and storage medium for large vehicle

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991818A (en) * 2021-01-22 2021-06-18 浙江合众新能源汽车有限公司 Method and system for avoiding collision of automobile due to blind area
CN112927509B (en) * 2021-02-05 2022-12-23 长安大学 Road driving safety risk assessment system based on traffic conflict technology
CN112990114B (en) * 2021-04-21 2021-08-10 四川见山科技有限责任公司 Traffic data visualization simulation method and system based on AI identification
CN113223312B (en) * 2021-04-29 2022-10-11 重庆长安汽车股份有限公司 Camera blindness prediction method and device based on map and storage medium
CN113119962A (en) * 2021-05-17 2021-07-16 腾讯科技(深圳)有限公司 Driving assistance processing method and device, computer readable medium and electronic device
CN113415287A (en) * 2021-07-16 2021-09-21 恒大新能源汽车投资控股集团有限公司 Vehicle road running indication method and device and computer readable storage medium
CN113628444A (en) * 2021-08-12 2021-11-09 智道网联科技(北京)有限公司 Method, device and computer-readable storage medium for prompting traffic risk
CN115705781A (en) * 2021-08-12 2023-02-17 中兴通讯股份有限公司 Vehicle blind area detection method, vehicle, server and storage medium
CN113859118A (en) * 2021-10-15 2021-12-31 深圳喜为智慧科技有限公司 Road safety early warning method and device for large vehicle
CN114582153B (en) * 2022-02-25 2023-12-12 智己汽车科技有限公司 Ramp entry long solid line reminding method, system and vehicle
CN117173652A (en) * 2022-05-27 2023-12-05 魔门塔(苏州)科技有限公司 Blind area detection method, alarm method, device, vehicle, medium and equipment
CN115134491B (en) * 2022-05-27 2023-11-24 深圳市有方科技股份有限公司 Image processing method and device
CN117373248B (en) * 2023-11-02 2024-06-21 深圳市汇芯视讯电子有限公司 Image recognition-based intelligent early warning method and system for automobile blind area and cloud platform

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2530564A (en) * 2014-09-26 2016-03-30 Ibm Danger zone warning system
CN106143309B (en) * 2016-07-18 2018-07-27 乐视汽车(北京)有限公司 Vehicle blind zone reminding method and system
CN108932868B (en) * 2017-05-26 2022-02-01 奥迪股份公司 Vehicle danger early warning system and method
CN108010383A (en) * 2017-09-29 2018-05-08 北京车和家信息技术有限公司 Blind zone detection method, device, terminal and vehicle based on a driving vehicle
CN109278640A (en) * 2018-10-12 2019-01-29 北京双髻鲨科技有限公司 Blind area detection system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009177245A (en) * 2008-01-21 2009-08-06 Nec Corp Blind corner image display system, blind corner image display method, image transmission apparatus, and image reproducing apparatus
JP2013200819A (en) * 2012-03-26 2013-10-03 Hitachi Consumer Electronics Co Ltd Image receiving and displaying device
CN106373430A (en) * 2016-08-26 2017-02-01 华南理工大学 Intersection passing early warning method based on computer vision
CN107564334A (en) * 2017-08-04 2018-01-09 武汉理工大学 Parking lot vehicle blind zone danger early warning system and method
CN107554430A (en) * 2017-09-20 2018-01-09 京东方科技集团股份有限公司 Vehicle blind zone viewing method, apparatus, terminal, system and vehicle
CN109671299A (en) * 2019-01-04 2019-04-23 浙江工业大学 System and method for pedestrian danger early warning based on intersection cameras

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634188A (en) * 2021-02-02 2021-04-09 深圳市爱培科技术股份有限公司 Vehicle far and near scene combined imaging method and device
WO2022204854A1 (en) * 2021-03-29 2022-10-06 华为技术有限公司 Method for acquiring blind zone image, and related terminal apparatus
CN113060157A (en) * 2021-03-30 2021-07-02 恒大新能源汽车投资控股集团有限公司 Blind zone road condition broadcasting device, road condition information sharing device, system and vehicle
CN113060157B (en) * 2021-03-30 2022-07-22 恒大新能源汽车投资控股集团有限公司 Blind zone road condition broadcasting device, road condition information sharing device, system and vehicle
CN113619599A (en) * 2021-03-31 2021-11-09 中汽创智科技有限公司 Remote driving method, system, device and storage medium
CN112937446A (en) * 2021-04-14 2021-06-11 宝能汽车科技有限公司 Blind area video acquisition method and system
CN113096195A (en) * 2021-05-14 2021-07-09 北京云迹科技有限公司 Camera calibration method and device
CN113479197A (en) * 2021-06-30 2021-10-08 银隆新能源股份有限公司 Control method of vehicle, control device of vehicle, and computer-readable storage medium
CN113682319A (en) * 2021-08-05 2021-11-23 地平线(上海)人工智能技术有限公司 Camera adjusting method and device, electronic equipment and storage medium
CN113682319B (en) * 2021-08-05 2023-08-01 地平线(上海)人工智能技术有限公司 Camera adjustment method and device, electronic equipment and storage medium
CN115731742A (en) * 2021-08-26 2023-03-03 博泰车联网(南京)有限公司 Collision prompt information output method and device, electronic equipment and readable storage medium
CN114655131B (en) * 2022-03-29 2023-10-13 东风汽车集团股份有限公司 Vehicle-mounted perception sensor adjustment method, device, equipment and readable storage medium
CN114655131A (en) * 2022-03-29 2022-06-24 东风汽车集团股份有限公司 Vehicle-mounted perception sensor adjustment method, device, equipment and readable storage medium
CN115222767A (en) * 2022-04-12 2022-10-21 广州汽车集团股份有限公司 Tracking method and system based on spatial parking spaces
CN115222767B (en) * 2022-04-12 2024-01-23 广州汽车集团股份有限公司 Tracking method and system based on spatial parking spaces
CN114782923A (en) * 2022-05-07 2022-07-22 厦门瑞为信息技术有限公司 Vehicle blind area detection system
CN114782923B (en) * 2022-05-07 2024-05-03 厦门瑞为信息技术有限公司 Vehicle blind area detection system
CN114944067A (en) * 2022-05-16 2022-08-26 浙江海康智联科技有限公司 Elastic bus lane implementation method based on vehicle-road cooperation
CN114944067B (en) * 2022-05-16 2023-08-15 浙江海康智联科技有限公司 Elastic bus lane implementation method based on vehicle-road cooperation
CN115171431A (en) * 2022-08-17 2022-10-11 东揽(南京)智能科技有限公司 Intersection multi-view-angle large vehicle blind area early warning method
CN115482679A (en) * 2022-09-15 2022-12-16 深圳海星智驾科技有限公司 Automatic driving blind area early warning method and device and message server
CN115482679B (en) * 2022-09-15 2024-04-26 深圳海星智驾科技有限公司 Automatic driving blind area early warning method and device and message server
CN116080529B (en) * 2023-04-12 2023-08-29 深圳市速腾聚创科技有限公司 Blind area early warning method and device, electronic equipment and storage medium
CN116080529A (en) * 2023-04-12 2023-05-09 深圳市速腾聚创科技有限公司 Blind area early warning method and device, electronic equipment and storage medium
CN116564111B (en) * 2023-07-10 2023-09-29 中国电建集团昆明勘测设计研究院有限公司 Vehicle early warning method, device and equipment for intersection and storage medium
CN116564111A (en) * 2023-07-10 2023-08-08 中国电建集团昆明勘测设计研究院有限公司 Vehicle early warning method, device and equipment for intersection and storage medium
CN117734680A (en) * 2024-01-22 2024-03-22 珠海翔越电子有限公司 Blind area early warning method, system and storage medium for large vehicle
CN117734680B (en) * 2024-01-22 2024-06-07 珠海翔越电子有限公司 Blind area early warning method, system and storage medium for large vehicle

Also Published As

Publication number Publication date
CN112216097A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
WO2021004077A1 (en) Method and apparatus for detecting blind areas of vehicle
US11990036B2 (en) Driver behavior monitoring
US20210078562A1 (en) Planning for unknown objects by an autonomous vehicle
JP6494719B2 (en) Traffic signal map creation and detection
CN110920611B (en) Vehicle control method and device based on adjacent vehicles
JP7499256B2 (en) System and method for classifying driver behavior
WO2021238306A1 (en) Method for processing laser point cloud and related device
US11042159B2 (en) Systems and methods for prioritizing data processing
EP4089659A1 (en) Map updating method, apparatus and device
US11681296B2 (en) Scenario-based behavior specification and validation
JP2017535873A (en) Continuous occlusion model for street scene recognition
WO2017123665A1 (en) Driver behavior monitoring
WO2021227586A1 (en) Traffic accident analysis method, apparatus, and device
CN112172663A (en) Danger alarm method based on door opening and related equipment
CN113674523A (en) Traffic accident analysis method, device and equipment
CN114530058A (en) Collision early warning method, device and system
CN116703966A (en) Multi-object tracking
CN111216718B (en) Collision avoidance method, device and equipment
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN108447290A (en) Intelligent avoidance system based on the Internet of Vehicles
US20230221408A1 (en) Radar multipath filter with track priors
US20240010233A1 (en) Camera calibration for underexposed cameras using traffic signal targets
US20230399008A1 (en) Multistatic radar point cloud formation using a sensor waveform encoding schema
US20220309693A1 (en) Adversarial Approach to Usage of Lidar Supervision to Image Depth Estimation
US20230399009A1 (en) Multiple frequency fusion for enhanced point cloud formation

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20837345

Country of ref document: EP

Kind code of ref document: A1

122 EP: PCT application non-entry in European phase

Ref document number: 20837345

Country of ref document: EP

Kind code of ref document: A1