WO2023179494A1 - Hazard warning method, device and vehicle - Google Patents

Hazard warning method, device and vehicle (危险预警的方法、装置和车辆)

Info

Publication number
WO2023179494A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
target object
preset
surrounding environment
information
Prior art date
Application number
PCT/CN2023/082246
Other languages
English (en)
French (fr)
Inventor
石巍巍
白立勋
俞清华
兰国兴
孟亚洲
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023179494A1 publication Critical patent/WO2023179494A1/zh


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q5/00 Arrangement or adaptation of acoustic signal devices
    • B60Q5/005 Arrangement or adaptation of acoustic signal devices automatically actuated
    • B60Q5/006 Arrangement or adaptation of acoustic signal devices automatically actuated indicating risk of collision between vehicles or with pedestrians
    • B60Q9/00 Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B60Q9/008 Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras

Definitions

  • the present application relates to the field of vehicle technology, and more specifically, to a hazard warning method, device and vehicle.
  • Embodiments of the present application provide a danger warning method, device and vehicle, which enable the vehicle to warn users in advance in various situations and try to avoid traffic accidents.
  • a vehicle may include one or more different types of vehicles, or may include one or more types of transportation or movable objects that operate or move on land (e.g., highways, roads, railways), on water (e.g., waterways, rivers, oceans), or in space.
  • vehicles may include cars, bicycles, motorcycles, trains, subways, airplanes, ships, aircraft, robots, or other types of transportation vehicles or movable objects.
  • in a first aspect, a danger warning method is provided. The method includes: a first vehicle obtains surrounding environment information; the first vehicle determines a target object based on the surrounding environment information and a preset condition, where the target object is in the surrounding environment;
  • the preset condition includes at least one of the following: the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of a preset relative acceleration; the height of the target object is less than or equal to a preset height; or the target object is in a traffic-jam road section. The first vehicle outputs prompt information, where the prompt information is used to prompt the user with information about the target object.
  • the first vehicle is the ego vehicle, that is, the user's own vehicle.
  • the preset conditions include, but are not limited to, at least one of the three preset conditions mentioned above; any situation that can be used to determine that the target object is potentially dangerous falls within the protection scope of the embodiments of this application. For example, when the target object is waiting at a traffic-light intersection, pedestrians or non-motor vehicles often pass between the waiting vehicles, which is also a situation in which the target object of this application indicates potential danger.
  • the danger warning method provided by the embodiments of the present application determines the target object in the surrounding environment through certain preset conditions and prompts the user of potential dangers in a timely manner. This prevents the user from failing to notice dangers in time on congested road sections, in blind spots, or where visibility is poor and traffic accidents are likely to occur, so that before the real danger arrives the user has sufficient time to take effective countermeasures and avoid traffic accidents as much as possible.
  • when the preset condition includes that the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration, the method further includes: the first vehicle determines that the surrounding environment includes a key intersection, and the key intersection satisfies a preset topological structure.
  • key intersections include accident-prone road sections such as crosswalks, main road exits or entrances, U-turn intersections or roundabout right-turn intersections. This application does not limit the method of determining the preset topology.
  • when the first vehicle determines that the surrounding environment includes a key intersection, the key intersection matches the preset topology, and the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration (that is, the driving state of the target object is emergency braking), this means that there may be a pedestrian crossing the road in the blind spot of the first vehicle (for example, in front of the target object). Prompt information is therefore output to warn the user of the potential danger, avoiding a traffic accident caused by failing to remind the user in time.
  • the method further includes: the first vehicle acquires an image sequence of the target object; and the first vehicle determines, based on the image sequence of the target object, the absolute value of the relative acceleration between the target object and the first vehicle.
  • the absolute value of the relative acceleration between the target object and the first vehicle represents the driving state of the target object.
  • for example, based on the image sequence of the target object, the distance between the target object and the first vehicle is calculated for each frame, and the absolute value of the relative acceleration is further derived from these distances. When the calculated absolute value of the relative acceleration is greater than or equal to the preset absolute value of the relative acceleration, the target object is considered to have braked urgently, and the user is reminded in time.
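  • The following is a minimal sketch of this per-frame computation, assuming the distances between the target object and the first vehicle have already been estimated from the image sequence at a fixed frame interval; the function name, the threshold value, and the finite-difference scheme are illustrative assumptions, not specified by this application.

```python
import numpy as np

def is_emergency_braking(distances, dt, preset_abs_accel=3.0):
    """Estimate the relative acceleration of the target object from a sequence of
    per-frame distances (meters) sampled every dt seconds, and report whether its
    absolute value reaches the preset threshold (m/s^2)."""
    d = np.asarray(distances, dtype=float)
    if d.size < 3:
        return False, 0.0                     # not enough frames to estimate acceleration
    rel_speed = np.diff(d) / dt               # first difference: relative speed per frame
    rel_accel = np.diff(rel_speed) / dt       # second difference: relative acceleration
    abs_accel = abs(rel_accel[-1])            # latest estimate
    return abs_accel >= preset_abs_accel, abs_accel

# Example: the gap to the target closes faster and faster, i.e. the target brakes hard.
braking, a = is_emergency_braking([20.0, 19.0, 17.95, 16.85], dt=0.1)  # braking is True, a is about 5 m/s^2
```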
  • the target object is the vehicle closest to the first vehicle.
  • the first vehicle can more accurately determine whether there is potential danger in the blind area of its field of vision based on the driving state of the vehicle closest to it; the driving states of vehicles farther away have little reference value for judging such potential danger.
  • when the preset condition includes that the height of the target object is less than or equal to the preset height, the method further includes: when the first vehicle detects or predicts that the posture of the target object is toward the first vehicle, the warning level of the prompt information is increased.
  • such a target object is a small (low-height) target. Whether the target is a small target can be determined by checking whether its height is less than or equal to the preset height, or directly by recognizing from the picture taken by the camera that the target in the image is a child or a small animal.
  • the method further includes: the first vehicle acquires an image sequence of the target object; and the first vehicle determines or predicts the posture of the target object based on the image sequence of the target object.
  • the first vehicle may determine whether the warning level of the prompt information needs to be increased based on the current posture of the target object or its predicted posture.
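  • A minimal sketch of this decision, assuming an upstream tracker already reports the target's height and whether its posture is toward or away from the ego vehicle; the level names, the posture labels, and the 1.2 m preset height are illustrative assumptions.

```python
from enum import IntEnum

class AlarmLevel(IntEnum):
    NONE = 0
    LOW = 1
    HIGH = 2

def small_target_alarm(height_m, posture, preset_height_m=1.2):
    """Map a detected target's height and (predicted) posture to a warning level."""
    if height_m > preset_height_m:
        return AlarmLevel.NONE      # not a small (low-height) target
    if posture == "toward_ego":
        return AlarmLevel.HIGH      # small target approaching: raise the warning level
    return AlarmLevel.LOW           # small target present but moving away: low-level prompt

level = small_target_alarm(height_m=0.9, posture="toward_ego")  # AlarmLevel.HIGH
```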
  • when the preset condition includes that the target object is in a traffic-jam road section, the method further includes: the first vehicle determines, based on the surrounding environment information, that the target object is in a traffic-jam road section.
  • for a traffic-jam road section, the surrounding environment information includes at least one of the number of vehicles, the vehicle density, and the vehicle speed.
  • the traffic flow status can be determined by inputting at least one of the number of vehicles, vehicle density, and vehicle speed in the surrounding environment information into a pre-trained decision tree.
  • This application does not limit the specific method of pre-training the decision tree.
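  • Since the application does not limit how the decision tree is pre-trained, the following is a minimal sketch using scikit-learn; the feature layout, training samples, class labels, and tree depth are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training samples: [vehicle count, vehicle density (veh/km), mean speed (km/h)]
X = np.array([
    [ 3,  10, 60], [ 5,  15, 55],   # free flow
    [12,  40, 30], [15,  50, 25],   # slow moving
    [25,  90, 10], [30, 110,  5],   # congested
])
y = np.array(["free", "free", "slow", "slow", "congested", "congested"])

traffic_state_tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At run time, the perception module supplies the current surrounding-environment statistics.
current = np.array([[22, 85, 8]])
state = traffic_state_tree.predict(current)[0]   # e.g. "congested"
```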
  • when the first vehicle and the target object are on a traffic-jam road section (or another road section prone to traffic accidents), the danger warning method provided by the embodiments of the present application promptly reminds the user that there is potential danger on the current road section and reminds the user to take measures.
  • optionally, when the probability of coincidence between the trajectory of the target object and a preset area is higher than a preset probability, the alarm level of the prompt information is increased.
  • the hazard warning method provided by the embodiments of the present application can improve the accuracy of decision-making: when the probability of coincidence between the current or predicted trajectory of the target object and the preset area is higher than the preset probability, the alarm level of the prompt information is raised to remind the user to take necessary measures.
  • the method further includes: the first vehicle acquires an image sequence of the target object; and the first vehicle predicts the trajectory of the target object based on the image sequence of the target object.
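  • One simple way to realize such a prediction, offered here only as a hedged sketch, is constant-velocity extrapolation of the tracked positions; it assumes per-frame (x, y) positions in the ego-vehicle frame have already been extracted from the image sequence, and the horizon and sampling interval are illustrative.

```python
import numpy as np

def predict_trajectory(positions, dt, horizon_s=2.0):
    """Extrapolate future (x, y) positions of the target from its last N tracked
    positions (sampled every dt seconds) with a constant-velocity model."""
    p = np.asarray(positions, dtype=float)
    velocity = (p[-1] - p[0]) / ((len(p) - 1) * dt)   # average velocity over the window
    steps = int(horizon_s / dt)
    return np.array([p[-1] + velocity * dt * k for k in range(1, steps + 1)])

future = predict_trajectory([(5.0, 3.0), (4.6, 2.7), (4.2, 2.4)], dt=0.1)
```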
  • obtaining the surrounding environment information includes: the first vehicle obtains image data from a camera and obtains the surrounding environment information from the image data; or the first vehicle obtains the surrounding environment information from navigation data.
  • the first vehicle obtains image data through a front-view camera, a side-view camera, and a rear-view camera and further obtains the surrounding environment information from the image data; or the first vehicle imports navigation data and obtains the surrounding environment information from the navigation data.
  • in a second aspect, a danger warning device is provided, including: a sensing unit, configured to acquire surrounding environment information; and a processing unit, configured to determine a target object based on the surrounding environment information and a preset condition.
  • the target object is in the surrounding environment.
  • the preset condition includes at least one of the following: the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration; the height of the target object is less than or equal to the preset height; or the target object is in a traffic-jam road section. The processing unit is also configured to output prompt information, where the prompt information is used to prompt the user with information about the target object.
  • when the preset condition includes that the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration, the processing unit is specifically configured to determine that the surrounding environment includes a key intersection and that the key intersection satisfies a preset topological structure.
  • the sensing unit is further configured to obtain an image sequence of the target object; the processing unit is further configured to determine, based on the image sequence of the target object, the absolute value of the relative acceleration between the target object and the first vehicle.
  • the target object is the vehicle closest to the device.
  • when the preset condition includes that the height of the target object is less than or equal to the preset height, the processing unit is specifically configured to: when the processing unit detects or predicts that the posture of the target object is toward the device, increase the alarm level of the prompt information.
  • the sensing unit is also used to obtain an image sequence of the target object; the processing unit is also used to determine or predict the posture of the target object based on the image sequence of the target object.
  • when the preset condition includes that the target object is in a traffic-jam road section, the processing unit is specifically configured to determine, based on the surrounding environment information, that the target object is in a traffic-jam road section, where the surrounding environment information includes at least one of the number of vehicles, the vehicle density, and the vehicle speed.
  • the processing unit is also configured to: when the probability of coincidence between the trajectory of the target object and the preset area is higher than the preset probability, increase the alarm level of the prompt information.
  • the sensing unit is also used to obtain an image sequence of the target object; the processing unit is also used to predict the trajectory of the target object based on the image sequence of the target object.
  • the sensing unit is specifically configured to: obtain image data according to the camera, and obtain the surrounding environment information according to the image data, or obtain the surrounding environment information according to the navigation data.
  • in a third aspect, a danger warning device is provided, including: a memory configured to store a program; and a processor configured to execute the program stored in the memory. When the program stored in the memory is executed, the processor is configured to implement the danger warning method of the above first aspect or any possible implementation of the first aspect.
  • in a fourth aspect, a vehicle is provided, where the vehicle includes the danger warning device of the second aspect, any possible implementation of the second aspect, or the third aspect.
  • in a fifth aspect, a computer program product is provided, including computer program code. When the computer program code is run on a computer, it causes the computer to execute the danger warning method of the above first aspect or any possible implementation of the first aspect.
  • the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor; this is not specifically limited in the embodiments of this application.
  • in a sixth aspect, a computer-readable storage medium is provided, which stores program code. When the program code is run on a computer, it causes the computer to execute the danger warning method of the above first aspect or any possible implementation of the first aspect.
  • in a seventh aspect, a chip system is provided, including a processor configured to call a computer program or computer instructions stored in a memory, so that the processor executes the method of any of the above aspects or any possible implementation of any of the above aspects.
  • the processor is coupled with the memory through an interface.
  • the chip system further includes a memory, and a computer program or computer instructions are stored in the memory.
  • Figure 1 is an application scenario of a danger warning method provided by an embodiment of the present application.
  • Figure 2 is a schematic flowchart of an accident early warning provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a monitoring and early warning method and device provided by an embodiment of the present application.
  • Figure 4 is a schematic scene diagram of a danger warning method provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of the system architecture of a danger warning device provided by an embodiment of the present application.
  • Figure 6 is a schematic flow chart of a danger warning method 600 provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of a danger warning method provided by an embodiment of the present application.
  • Figure 8 is a schematic flow chart of a danger warning method 800 provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of a danger warning method provided by an embodiment of the present application.
  • Figure 10 is a schematic flow chart of a danger warning method 1000 provided by an embodiment of the present application.
  • Figure 11 is a schematic flow chart of a danger warning method 1100 provided by an embodiment of the present application.
  • Figure 12 is a schematic block diagram of a danger warning device provided by an embodiment of the present application.
  • Figure 13 is a schematic block diagram of a danger warning device provided by an embodiment of the present application.
  • Figure 1 is an application scenario of a danger warning method provided by an embodiment of the present application.
  • the application scenario may include a vehicle 100.
  • Sensing system 120 may include several types of sensors that sense information about the environment surrounding vehicle 100 .
  • the sensing system 120 may include one or more of a positioning system (the positioning system may be a global positioning system (GPS) 121, the BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, and a camera device 126.
  • Vehicle 100 may include composition module 130 .
  • the composition module 130 can use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, simultaneous localization and mapping (SLAM) and other technologies to map the environment.
  • Peripheral devices 140 may include a wireless communication system 141 , an on-board screen 142 , a microphone 143 and/or a speaker 144 .
  • peripheral device 140 provides a means for a user of vehicle 100 to interact with user interface 160 .
  • on-board screen 142 may provide information to a user of vehicle 100 .
  • the user interface 160 may also operate the on-board screen 142 to receive user input.
  • the on-board screen 142 can be operated via a touch screen.
  • peripheral device 140 may provide a means for vehicle 100 to communicate with other devices located within the vehicle.
  • microphone 143 may receive audio (eg, voice commands or other audio input) from a user of vehicle 100 .
  • speakers 144 may output audio to a user of vehicle 100 .
  • the computing platform 150 may include processors 151 to 15n (n is a positive integer).
  • the processor is a circuit with signal processing capabilities.
  • the processor may be a circuit with instruction reading and execution capabilities, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU), or a digital signal processor (DSP).
  • the processor can realize certain functions through the logical relationship of the hardware circuit. The logical relationship of the hardware circuit is fixed or can be reconstructed.
  • for example, the processor may be a hardware circuit implemented as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as an FPGA.
  • the process of the processor loading the configuration file and realizing the hardware circuit configuration can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • it can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU).
  • the computing platform 150 may also include a memory, which is used to store instructions. Some or all of the processors 151 to 15n may call and execute the instructions in the memory to implement corresponding functions.
  • one or more of these components described above may be installed separately or associated with vehicle 100 .
  • the components described above may be communicatively coupled together in wired and/or wireless manners.
  • the above-mentioned vehicle 100 may include one or more different types of vehicles, or may include one or more types of transportation or movable objects that operate or move on land (for example, highways, roads, railways), on water (for example, waterways, rivers, oceans), or in space.
  • vehicles may include cars, bicycles, motorcycles, trains, subways, airplanes, ships, aircraft, robots or other types of transportation vehicles or movable objects, etc., which are not limited in the embodiments of this application.
  • the system consists of a front-vehicle detection module, a development board, a positioning module, a communication module, a voice alarm module, and a vehicle-mounted display.
  • the front-vehicle detection module consists of two ultrasonic radars, which are installed on the left and right sides of the front of the vehicle.
  • the development board, positioning module, and communication module are all installed inside the vehicle, and the antennas of the positioning module and the communication module are connected to the outer casing of the vehicle; the vehicle-mounted display is installed inside the vehicle on the right side of the instrument panel; the communication module is bidirectionally connected to the development board.
  • the accident warning system can detect, through the vehicle's ultrasonic radar, whether there are pedestrians or motor vehicles in front of the vehicle, and collects the current position information of the vehicle in real time through the positioning module. If there are pedestrians or motor vehicles in front, the on-board display is activated to alert the driver of the signal-source vehicle to avoid pedestrians; at the same time, the ultrasonic radar data and the position information from the positioning module are packaged and sent to surrounding vehicles within signal range, and the surrounding vehicles analyze and process the received data.
  • danger feedback information is generated, and its own on-board display and in-car voice alarm module are triggered to remind the driver of the own vehicle to avoid pedestrians.
  • the signal source vehicle determines the danger based on the received data.
  • the signal source vehicle controls the voice alarm module outside the vehicle to remind pedestrians or motor vehicles outside the vehicle to pay attention to avoid vehicles in the direction of the signal source vehicle.
  • the accident warning system needs to rely on accurate detection of the signal source vehicle and timely communication between vehicles.
  • the device may specifically include: a data acquisition device, a data processing device, a video image acquisition device, an identification device and an alarm device.
  • Detectors can be used to issue early warnings to pedestrians and users at the same time and send messages to the mobile phones or other multimedia terminals of users and pedestrians for warning. If necessary, real-time intersection monitoring videos can be sent to the users' mobile phones or other multimedia terminals to help users grasp the current intersection situation more comprehensively.
  • the detector can be a camera and radar sensor, which can be installed on a pillar at an intersection to detect vehicle information on the lane that creates a dangerous situation and the distance relationship between it and pedestrians, and issue early warning information.
  • the camera can work with the radar sensor at the same time.
  • the camera is used to monitor the road conditions around the zebra crossing in real time and collect images of passing vehicles to identify the license plate number;
  • the radar sensor is used to detect the distance between the sensor and vehicles in the lane that generates the alarm, the speed of vehicles traveling in that lane, and the distance between pedestrians and the sensor. However, this device is only applicable to intersections where it is installed, and it is difficult to ensure accurate identification of pedestrians and vehicles, so timely alarms cannot be guaranteed.
  • Embodiments of the present application provide a method, device and vehicle for danger warning. They allow a vehicle to estimate the conditions of the road, pedestrians and surrounding vehicles when driving on congested road sections, sections with poor visibility and other road conditions prone to traffic accidents, and to warn users in advance when a traffic accident is likely to occur, rather than alerting the user only after the vehicle has detected the target object, in which case the user may be unable to handle the critical situation in time, resulting in a traffic accident.
  • Figure 4 shows a schematic scene diagram of a danger warning method provided by an embodiment of the present application. It should be understood that the solutions provided by the embodiments of this application can be applied to vehicles.
  • the vehicles can obtain the surrounding environment information through external cameras, millimeter-wave radars and other devices, where the environment information can include information about surrounding vehicles, lanes, pedestrians and non-motor vehicles, and determine whether an early warning is needed based on the analysis of the surrounding environment information.
  • FIG. 5 shows a schematic diagram of the system architecture of a danger warning device provided by an embodiment of the present application.
  • the device includes a sensing module 510, a decision-making module 520 and an alarm module 530.
  • the sensing module 510 may be one or more of the multiple sensors included in the sensing system 120 in FIG. 1, and specifically may include the global positioning system 121, the IMU 122, the lidar 123, the millimeter-wave radar 124, the ultrasonic radar 125, the camera device 126, etc.
  • the perception module obtains the environmental information of the current parking space.
  • the environmental information includes static or dynamic obstacles around the parking space, passable areas, lane lines, pedestrians and non-motorized vehicles, etc.
  • the sensing algorithm used by the sensing module 510 may include road line detection, multi-objective tracking (MOT), road sign recognition, etc., which are not limited in the embodiments of the present application.
  • the decision-making module 520 and the alarm module 530 may be implemented by one or more processors of the computing platform 150 in FIG. 1.
  • the sensing module 510 inputs the acquired environmental information into the decision-making module 520 and the alarm module 530 .
  • the decision-making module 520 analyzes the environmental information obtained by the perception module 510, such as lane lines, crossings (for example, sidewalks and intersections), vehicles within the first vehicle's field of view, intersection information obtained from navigation, small target objects (for example, children and small animals), and non-motorized vehicles, and inputs the corresponding information into the alarm module 530.
  • the alarm module 530 alarms the user based on the decision information obtained by analyzing the environmental information by the decision-making module 520 to remind the user of the possibility of a traffic accident ahead and to remind the user to drive carefully.
  • the decision-making module 520 analyzes the environmental information, including: calculating the distance of the nearest vehicle relative to the first vehicle based on the collected N-frame image sequence of that vehicle and calculating the absolute value of the relative acceleration; or calculating the distance between the small target object and the first vehicle at the current moment; or obtaining the traffic flow status based on statistics about the vehicles and calculating the probability of coincidence between the target object's trajectory and the first vehicle's preset area.
  • the decision-making module 520 inputs the corresponding decision-making information into the alarm module 530, and the alarm module 530 alarms the user according to different alarm levels.
  • the form of the alarm may be a display alarm or a voice prompt alarm, etc., which is not limited in this application.
  • embodiments of the present application provide a danger warning method, which can be applied to vehicles, or chips, systems, etc. in vehicles.
  • on accident-prone road sections such as congested road sections, sections with slow-moving vehicles, and sections with poor visibility, target objects such as non-motorized vehicles and pedestrians are difficult for the vehicle perception system to identify, or the user is prompted only after the target object has been detected, which is too late to effectively prevent a traffic accident.
  • in the embodiments of this application, by indirectly sensing the driving status of surrounding vehicles, or by predicting the trajectory or posture of the target object, the user is alerted in advance, which can effectively avoid traffic accidents caused by failing to alert the user in time.
  • FIG. 6 shows a schematic flow chart of a danger warning method 600 provided by an embodiment of the present application. Specifically, FIG. 6 shows that the first vehicle integrates lane detection, vehicle detection and discrimination, and perception of the status of surrounding vehicles; when the road conforms to the preset topology and it is determined that a surrounding vehicle exhibits abnormal driving behavior, the user of the first vehicle is alerted in advance.
  • the method 600 includes:
  • the first vehicle may acquire data before sensing the surrounding environment, or may acquire data while sensing the surrounding environment.
  • the data includes images or maps of the surrounding environment.
  • the first vehicle can obtain the input image of the front camera from the smart cockpit domain controller (CDC).
  • the image can include surrounding vehicle information images, pedestrian information images, and non-motor vehicle information images.
  • the first vehicle can also obtain navigation information, which can help the vehicle obtain more accurate and richer surrounding environment data.
  • the front-view camera can also be a front-facing camera other than that of the CDC, such as a driving recorder.
  • the first vehicle can detect lanes, crossings (for example, sidewalks and intersections) and vehicles within the field of view through the sensing module, obtain intersection information (for example, the relative distance to the intersection) from the navigation information, and comprehensively determine the nearest neighboring vehicle within the field of view.
  • the field of view range can be understood as the range that can be photographed or detected by a camera or millimeter-wave radar installed outside the first vehicle, and the nearest neighboring vehicle belongs to the surrounding vehicles of the first vehicle.
  • topology matching mainly includes the following processes: first, identify vehicle information and lane information; second, determine whether the current lane matches the preset topology; Thirdly, when the preset topology structure of the lane is successfully matched, the status of the surrounding vehicles is identified, including the vehicle in front of the first vehicle and the vehicles on both sides; finally, the first vehicle, surrounding vehicles and lanes are topologically matched.
  • the topological structure includes: key intersections prone to accidents such as crosswalks, main road exits or entrances, U-turn intersections or right-turn intersections on roundabouts.
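  • As a hedged illustration of the matching step (the application does not limit how the preset topology is determined), the sketch below assumes the perceived scene has already been reduced to semantic tags by lane detection, road-sign recognition and navigation data; the tag names and the simple lookup rule are assumptions, not the patent's method.

```python
# Key-intersection types named in this application; the matching rule itself is illustrative.
KEY_INTERSECTION_TYPES = {
    "crosswalk",
    "main_road_exit",
    "main_road_entrance",
    "u_turn_intersection",
    "roundabout_right_turn",
}

def matches_preset_topology(scene_tags, has_lane_info, has_nearest_vehicle):
    """Return True when the scene contains a key intersection and the lane and
    surrounding-vehicle information needed for topology matching is available."""
    has_key_intersection = any(tag in KEY_INTERSECTION_TYPES for tag in scene_tags)
    return has_key_intersection and has_lane_info and has_nearest_vehicle

ok = matches_preset_topology({"crosswalk", "lane_line"}, has_lane_info=True, has_nearest_vehicle=True)
```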
  • the first vehicle tracks the nearest neighbor vehicle detected in the above step S601, and the nearest neighbor vehicle is the vehicle that successfully matches the preset topology structure in step S602.
  • the first vehicle stores the N-frame image sequence of the nearest neighboring vehicle.
  • the image sequence can be obtained through the camera in step S601.
  • the first vehicle calculates the distance of the nearest neighboring vehicle relative to the first vehicle based on the stored N-frame (N is a positive integer) image sequence of that vehicle, thereby calculating the absolute value of the relative acceleration between the nearest neighboring vehicle and the first vehicle.
  • the decision-making module 520 of the first vehicle compares the absolute value of the relative acceleration calculated in step S604 with the absolute value of the preset threshold.
  • when the absolute value of the relative acceleration is greater than or equal to the absolute value of the preset threshold, it is considered that the nearest neighboring vehicle has braked suddenly, and the decision-making module 520 sends a prompt signal to the alarm module 530 to warn the user of the first vehicle in advance that there may be pedestrians or non-motor vehicles ahead.
  • when the first vehicle is blocked by surrounding vehicles and cannot observe surrounding pedestrians or non-motorized vehicles in time, it can indirectly determine, by sensing the driving state of the nearest neighboring vehicle (for example, sudden braking), that there may be potential danger in the blind spot of the first vehicle's field of vision. That is, the user is warned in advance, before the first vehicle observes the target object, which avoids the danger caused by the user being unable to respond to an emergency in time because the warning prompt would otherwise only be received once the target object is observed.
  • Figure 7 shows a schematic diagram of a danger warning method provided by an embodiment of the present application.
  • the dotted line box represents the first vehicle's visual field range
  • the solid line box represents the first vehicle's visual field blind area
  • the solid line box is within the range of the dotted line box.
  • the ranges of the solid and dotted boxes shown in FIG. 7 are only illustrative and do not limit the actual field of view range and blind area of the first vehicle.
  • Method 1: when the height of the target is lower than or equal to the preset height, the target is a small target;
  • Method 2: the target object information is perceived directly through the image obtained by the camera outside the vehicle (for example, if the image shows a child or a small animal, the target object is considered to be a small target object).
  • Figure 8 shows a schematic flow chart of a danger warning method 800 provided by an embodiment of the present application. Specifically, the method 800 is applicable to the scenario shown in Figure 7.
  • the method 800 includes:
  • the first vehicle may acquire data before sensing the surrounding environment, or may acquire data while sensing the surrounding environment.
  • the embodiments of the present application do not limit this.
  • the first vehicle may obtain the input image of the surround-view camera from a mobile data center (MDC).
  • the image may include surrounding vehicle information images, pedestrian information images, non-motor vehicle information images, etc.
  • the first vehicle can also obtain navigation information, which can help the vehicle obtain more accurate and richer surrounding environment data.
  • the first vehicle can detect small targets within the surround-view range at multiple scales. For example, it can detect small targets within the surround-view range through a surround-view camera, obtain multi-scale detection results through multiple surround-view cameras, and fuse the multi-scale detection results to obtain more accurate and broader information about the surrounding environment, so as to detect as far as possible all small targets within the first vehicle's surround-view field.
  • the first vehicle tracks the small target detected in step S801, the first vehicle stores an N-frame image sequence of the small target, and predicts the posture of the small target based on the N-frame image sequence.
  • the posture of the small target includes moving toward the first vehicle or away from the first vehicle.
  • the image sequence may be obtained through the surround-view camera in step S801.
  • the first vehicle calculates the distance between the small target and the first vehicle at the current moment based on the stored N-frame image sequence of the small target; alternatively, for example, the first vehicle calculates the distance between the small target and the first vehicle at the current moment based on the millimeter-wave radar.
  • the sensing module 510 of the first vehicle can track the small target object. When it is detected that the small target object is within the range between the preset dotted-line box and the solid-line box shown in Figure 7, the alarm module 530 sends a prompt message to alert the user at a low level; when it is detected that the small target is within the dotted-line box and its posture is toward the first vehicle, the alarm level is increased; when it is detected that the small target passes through the dotted-line box and enters the solid-line box (the blind area of the first vehicle's field of view), the raised alarm is maintained; and when the small target object is detected to leave the dotted-box area, that is, to move away from the first vehicle, the alarm is lifted.
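  • A minimal sketch of this zone-based escalation, assuming upstream tracking already reports whether the target is inside the outer dotted-line box (field of view), inside the inner solid-line box (blind area), and whether its posture is toward the ego vehicle; the level names and the rule used for the blind-area case are illustrative assumptions.

```python
def zone_alarm_level(in_fov_box, in_blind_box, toward_ego):
    """Map the tracked small target's zone (Figure 7 convention) and posture to an alarm level."""
    if not in_fov_box:
        return "none"          # target left the dotted-box area: lift the alarm
    if in_blind_box:
        return "high"          # assumed: keep the alarm raised once the target is in the blind area
    if toward_ego:
        return "high"          # inside the field of view and approaching: raise the level
    return "low"               # inside the field of view, not approaching: low-level prompt

level = zone_alarm_level(in_fov_box=True, in_blind_box=False, toward_ego=True)  # "high"
```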
  • the posture of the small target is further predicted based on the current posture of the small target and the distance to the first vehicle.
  • Hierarchical warnings remind users to pay attention to driving safety.
  • the small target object has not yet entered the blind spot of the field of view, but is already facing the first vehicle, it indicates that there is a possibility that the small target object is about to enter the blind spot of the first vehicle's field of view.
  • the alarm level is raised to remind the user to be more vigilant and avoid small targets.
  • this avoids the situation in which detection fails after the small target enters the blind area of the field of view, resulting in the user being unable to respond to the dangerous situation in time.
  • FIG 9 is a schematic diagram of a danger warning method provided by an embodiment of the present application.
  • a traffic jam or slow-moving road section has large traffic volume and slow driving speed, and it is easy for pedestrians or non-motor vehicles to break in (for example: the pedestrian trajectory is shown as the dotted line in Figure 9).
  • the first vehicle can predict or track the detected target attitude or trajectory.
  • when the trajectory of the target object will, with high probability, coincide with the solid-line area of the first vehicle, the alarm level for the user is raised.
  • that is, before the trajectory of the target object actually coincides with the preset area (solid-line area) of the first vehicle, it is predicted that the target object will enter the preset area of the first vehicle with high probability, so an early warning is given to remind the user to pay attention to driving safety.
  • Figure 10 shows a schematic flow chart of a danger warning method 1000 provided by an embodiment of the present application. Specifically, the method 1000 is applicable to the scenario shown in Figure 9.
  • the method 1000 includes:
  • the first vehicle may acquire data before detecting the target, or may acquire data while detecting the target.
  • the embodiments of the present application do not limit this.
  • the method for the first vehicle to obtain data in this scenario is as described in step S801 above, and will not be described again here.
  • the first vehicle performs statistics on the obtained information about surrounding vehicles; specifically, it counts the number of surrounding vehicles, the vehicle density and other information within a preset time period, and inputs this information into the pre-trained decision tree to obtain the current traffic flow status classification.
  • for example, the obtained traffic flow status may be the congestion status.
  • the pre-trained decision tree can be understood as a traffic flow state model, which can be used to estimate the traffic flow state based on information such as the number of vehicles and vehicle density.
  • the semantic map construction is triggered based on the acquired surrounding environment information (such as vehicles, roads, pedestrians, and non-motorized vehicles).
  • the first vehicle can more accurately estimate the traffic flow status, which facilitates the decision-making module of the first vehicle to make corresponding decisions.
  • This application does not limit the specific method of semantic map construction.
  • the target object (pedestrian or non-motor vehicle) detected in the above step S1001 is tracked, and an N frame image sequence of the target object is stored.
  • the alarm module 530 first issues a low-level warning; the first vehicle then estimates the coincidence between the target's driving or moving trajectory and the preset area (within the solid-line range shown in Figure 9). When the probability of coincidence between the target's trajectory in the future time period T and the preset area is greater than or equal to the preset threshold P, the alarm level is increased.
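  • A minimal sketch of this coincidence check, assuming the future trajectory over the horizon T has already been predicted (for example with the constant-velocity extrapolation sketched earlier) and treating the preset area as an axis-aligned rectangle in the ego frame; the rectangle, the in-box-fraction rule, and the threshold P are illustrative assumptions.

```python
import numpy as np

def coincidence_probability(predicted_xy, preset_box):
    """Fraction of predicted positions that fall inside the preset area.

    predicted_xy is an (M, 2) array of future positions in the ego frame;
    preset_box is (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = preset_box
    p = np.asarray(predicted_xy, dtype=float)
    if len(p) == 0:
        return 0.0
    inside = (p[:, 0] >= x_min) & (p[:, 0] <= x_max) & (p[:, 1] >= y_min) & (p[:, 1] <= y_max)
    return float(inside.mean())

P = 0.5  # illustrative preset threshold
predicted = np.array([[3.8, 1.4], [3.4, 1.0], [3.0, 0.6], [2.6, 0.2]])  # predicted positions over T
raise_alarm = coincidence_probability(predicted, preset_box=(0.0, 4.0, -1.0, 1.0)) >= P  # True
```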
  • the solutions provided by the embodiments of the present application can be applied to accident-prone road sections, for example when traveling through congestion. The probability of overlap between the target's movement path and the vehicle's preset area is estimated more accurately, so that an alarm message can be issued before the target moves into the preset area, thereby promptly alerting the user to the presence of potentially dangerous obstacles.
  • FIG 11 shows a schematic flowchart of a danger warning method 1100 according to an embodiment of the present application. This method 1100 can be applied in a danger warning scenario as shown in Figure 4 .
  • the first vehicle obtains surrounding environment information.
  • the surrounding environment includes surrounding vehicles, non-motor vehicles, pedestrians, lanes, etc. of the first vehicle.
  • the first vehicle can obtain the surrounding environment information through a camera or millimeter-wave radar, where the camera includes a front-view camera (such as a driving recorder), a side-view camera or a rear-view camera; the first vehicle can also obtain the surrounding environment information through navigation data. The manner of obtaining the surrounding environment information is not limited in this application.
  • the first vehicle may acquire an image sequence of the surrounding environment through a camera and perform subsequent steps.
  • the first vehicle determines the target object based on the surrounding environment information and preset conditions.
  • the target is in the surrounding environment, and the preset conditions include at least one of the following:
  • the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration.
  • relative acceleration is a vector, which is the difference between the acceleration of the target object and the acceleration of the first vehicle.
  • the relative acceleration between the target object and the first vehicle and the preset relative acceleration use the same reference frame. Take the acceleration of the target object as a₁ and the acceleration of the first vehicle as a₂ as an example: when the first vehicle is used as the reference frame, the relative acceleration between the target object and the first vehicle is a₁ - a₂; when the target object is used as the reference frame, the relative acceleration between the target object and the first vehicle is a₂ - a₁.
  • the first vehicle is used as the reference frame.
  • the preset relative acceleration is a, where a is less than or equal to 0; when the target object brakes suddenly, the relative acceleration between the target object and the first vehicle is a', where a' is less than or equal to 0.
  • in this case, that the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration means: a' ≤ a (equivalently, |a'| ≥ |a|).
  • for example, the target object is decelerating and its acceleration is -5 m/s²;
  • the first vehicle is driving in a straight line at a constant speed and its acceleration is 0 m/s²;
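  • Completing this example with an illustrative preset value that is not specified in this passage: taking a preset relative acceleration a = -3 m/s², the relative acceleration is a' = a₁ - a₂ = -5 - 0 = -5 m/s², so |a'| = 5 ≥ |a| = 3 (equivalently a' ≤ a), and the target object is judged to have braked suddenly.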
  • the target object is used as the reference system.
  • the preset relative acceleration is a, where a is greater than or equal to 0; when the target object brakes suddenly, the relative acceleration between the target object and the first vehicle is a', where a' is greater than or equal to 0.
  • the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration, that is: a' ≥ a.
  • for example, the target object is decelerating, and its acceleration is -5 m/s²;
  • the first vehicle is driving in a straight line at a constant speed, and its acceleration is 0 m/s²;
  • the first vehicle determines based on the image sequence of the target object that the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration.
  • the first vehicle determines that the surrounding environment includes key intersections, and the key intersections satisfy the preset topology structure, where the key intersections include accident-prone road sections such as crosswalks, main road exits or entrances, U-turn intersections, or roundabout right-turn intersections.
  • This application does not limit the method of determining the preset topology.
  • the target object may be the vehicle closest to the first vehicle.
  • when the target object is traveling in the adjacent lane in the same direction as the first vehicle and is diagonally in front of the first vehicle, the target object may block the first vehicle's field of view. If a pedestrian is crossing the crosswalk in front of the first vehicle and the target object, the first vehicle will most likely not be able to observe the pedestrian crossing the crosswalk in time. Therefore, the first vehicle determines whether the target object has braked suddenly by monitoring the absolute value of the relative acceleration between the target object and the first vehicle.
  • when it is determined that the target object has braked suddenly, the first vehicle reminds the user that there may be danger.
  • that is, the first vehicle senses the vehicles in the surrounding environment, calculates their driving status, determines whether there is abnormal driving behavior (such as sudden braking), and reminds the user of potential danger.
  • the height of the target is lower than the preset height.
  • when the first vehicle detects that the posture of the target object is toward the first vehicle, the alarm level of the prompt information in step S1103 is increased; or, when the first vehicle predicts that the posture of the target object is toward the first vehicle, the alarm level of the prompt information is increased.
  • the first vehicle determines the posture of the target object based on the image sequence of the target object, or the first vehicle predicts the posture of the target object based on the image sequence of the target object.
  • the target object can be understood as a small object as shown in Figure 7, such as a child or a small animal; alternatively, a child or a small animal is determined as the target object through the image captured by the camera of the first vehicle. Such a target object is referred to as a small target object.
  • because a small target object may enter the blind area of the vehicle's field of view (such as the area within the solid line in Figure 7), when the small target is within a certain field of view of the first vehicle (such as within the dotted line in Figure 7), the alarm level may be increased based on the current or predicted posture of the small target toward the first vehicle, prompting the user to take necessary avoidance measures while the small target is still within view. Otherwise, since the first vehicle cannot detect the small target after it enters the blind spot, it would be difficult for the user to take necessary measures in time.
  • the target is in a traffic jam.
  • the first vehicle acquires surrounding environment information, and the surrounding environment information includes at least one of vehicle number, vehicle density, and vehicle speed.
  • the first vehicle inputs at least one of the above-mentioned surrounding environment information into a pre-trained decision tree to determine the current traffic flow state.
  • the first vehicle predicts the trajectory of the target object based on the image sequence of the target object.
  • the target object may be a pedestrian or a non-motor vehicle passing between slow-moving vehicles.
  • when the target object is in a traffic-jam road section, it means that the vehicle is on a road section prone to traffic accidents (for example, a traffic jam); prompting the user at this time allows the user to be more vigilant and drive carefully.
  • when the predicted probability of coincidence between the target's trajectory and the preset area (for example, the solid-line area in Figure 9) is higher than the preset probability, the alarm level is increased to further remind the user to drive carefully.
  • the first vehicle outputs prompt information that prompts the user with information about the target object.
  • the first vehicle can output prompt information to prompt the user to be more vigilant so that the user can take necessary measures in a timely manner, and can further raise the alarm level of the prompt information based on the prediction of the posture or trajectory of the target object. For example, the prompt information can be displayed on the center console of the first vehicle, or an alarm signal can be sounded. This application does not limit the specific form of the prompt information.
  • the danger warning method provided by the embodiment of the present application is described in detail with reference to FIGS. 4 to 11 .
  • the danger warning device provided by the embodiment of the present application will be described in detail with reference to FIGS. 12 to 13 . It should be understood that the description of the device embodiments corresponds to the description of the method embodiments. Therefore, for content that is not described in detail, please refer to the above method embodiments. For the sake of brevity, they will not be described again here.
  • FIG 12 is a schematic block diagram of a danger warning device provided by an embodiment of the present application.
  • the device 1200 includes a sensing unit 1210 and a processing unit 1220.
  • the sensing unit 1210 can implement corresponding functions of obtaining data or information, and the processing unit 1220 is used to perform data processing or output information.
  • the device 1200 may also include a storage unit, which may be used to store instructions and/or data, and the processing unit 1220 may read the instructions and/or data in the storage unit, so that the device implements the foregoing method embodiments. .
  • the apparatus 1200 may include units for performing the methods in FIGS. 4 to 11 . Moreover, each unit in the device 1200 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes of the method embodiments in Figures 4 to 11.
  • the sensing unit 1210 can be used to execute S1101 in the method 1100
  • the processing unit 1220 can be used to execute S1102 and S1103 in the method 1100.
  • the device 1200 includes: a sensing unit 1210, used to obtain surrounding environment information; a processing unit 1220, used to determine a target object according to the surrounding environment information and preset conditions.
  • the target object is in the surrounding environment.
  • the preset conditions include at least one of the following: the absolute value of the relative acceleration between the target object and the first vehicle is greater than or equal to the absolute value of the preset relative acceleration; the height of the target object is less than or equal to the preset height; or
  • the target object is located in a traffic-jam road section. The processing unit 1220 is also used to output prompt information, and the prompt information is used to prompt the user with information about the target object.
  • the processing unit 1220 is specifically configured to: determine that the surrounding environment includes a key intersection that satisfies a preset topology.
  • the sensing unit 1210 is further configured to obtain an image sequence of the target object; the processing unit 1220 is further configured to determine the absolute value of the relative acceleration between the target object and the first vehicle based on the image sequence of the target object. value.
  • the processing unit 1220 is specifically configured to: when the processing unit 1220 detects or predicts that the posture of the target object is toward the device, increase the alarm level of the prompt information.
  • the sensing unit 1210 is further configured to obtain an image sequence of the target object; the processing unit 1220 is further configured to determine or predict the posture of the target object based on the image sequence of the target object.
  • the processing unit 1220 is specifically configured to: determine that the target object is in a traffic-congested road section based on the surrounding environment information, where the surrounding environment information includes at least one of the number of vehicles, the vehicle density, and the vehicle speed.
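A rough, hand-written stand-in for the pre-trained decision tree over vehicle count, density and speed might look like the following; the thresholds and labels are illustrative assumptions only.

```python
def classify_traffic_state(vehicle_count: int, vehicle_density: float, mean_speed_kmh: float) -> str:
    """Crude stand-in for a trained decision tree over surrounding-environment statistics.

    vehicle_count:   vehicles observed in the surround view within a time window
    vehicle_density: vehicles per 100 m of road, estimated from perception
    mean_speed_kmh:  mean speed of surrounding vehicles
    Returns "congested", "slow" or "free_flow".
    """
    if vehicle_density > 8 and mean_speed_kmh < 15:
        return "congested"
    if vehicle_count > 10 and mean_speed_kmh < 30:
        return "slow"
    return "free_flow"


assert classify_traffic_state(vehicle_count=14, vehicle_density=9.5, mean_speed_kmh=8.0) == "congested"
```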
  • the processing unit 1220 is also configured to: when the probability of coincidence between the trajectory of the target object and the preset area is higher than the preset probability, increase the alarm level of the prompt information.
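The coincidence probability between the target's trajectory and the preset area could, for example, be estimated by sampling noisy versions of the predicted trajectory and counting how many enter the area; the sketch below, including its preset probability, is an assumption for illustration only.

```python
import random


def coincidence_probability(predicted_positions, preset_area, n_samples=500, pos_noise_m=0.5):
    """Estimate the probability that the target's trajectory overlaps a preset area.

    predicted_positions: list of (x, y) points of the predicted trajectory (ego-vehicle frame)
    preset_area:         (x_min, x_max, y_min, y_max) rectangle around the first vehicle
    The prediction is treated as noisy: noisy samples are drawn around each point and
    the fraction of sampled trajectories that enter the area is returned.
    """
    x_min, x_max, y_min, y_max = preset_area
    hits = 0
    for _ in range(n_samples):
        entered = any(
            x_min <= x + random.gauss(0, pos_noise_m) <= x_max
            and y_min <= y + random.gauss(0, pos_noise_m) <= y_max
            for x, y in predicted_positions
        )
        hits += entered
    return hits / n_samples


PRESET_PROBABILITY = 0.6  # hypothetical threshold
trajectory = [(6.0, 3.0), (5.0, 2.0), (4.0, 1.0), (3.0, 0.5)]
p = coincidence_probability(trajectory, preset_area=(0.0, 4.0, -1.0, 1.0))
if p > PRESET_PROBABILITY:
    print(f"raise alarm level (coincidence probability {p:.2f})")
```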
  • the sensing unit 1210 is specifically configured to: obtain image data from the camera and obtain the surrounding environment information from the image data, or obtain the surrounding environment information from navigation data.
  • the processing unit in Figure 12 can be implemented by at least one processor or processor-related circuit, the sensing unit can be implemented by a transceiver or transceiver-related circuit, and the storage unit can be implemented by at least one memory.
  • FIG. 13 is a schematic block diagram of the danger warning device according to the embodiment of the present application.
  • the danger warning device 1300 shown in FIG. 13 may include: a processor 1310, a transceiver 1320, and a memory 1330.
  • the processor 1310, the transceiver 1320 and the memory 1330 are connected through an internal connection path.
  • the memory 1330 is used to store instructions.
  • the processor 1310 is used to execute the instructions stored in the memory 1330, and the transceiver 1320 is used to receive/send certain parameters.
  • the memory 1330 can be coupled with the processor 1310 through an interface or integrated with the processor 1310 .
  • the transceiver 1320 may include, but is not limited to, a transceiver apparatus such as an input/output interface, to implement communication between the device 1300 and other devices or a communication network.
  • each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor 1310 .
  • the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware processor for execution, or can be executed by a combination of hardware and software modules in the processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory 1330.
  • the processor 1310 reads the information in the memory 1330 and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • the memory may include read-only memory and random access memory, and provide instructions and data to the processor.
  • Part of the processor may also include non-volatile random access memory.
  • the processor may also store information about the device type.
  • Embodiments of the present application also provide a computer-readable medium that stores program code. When the computer program code is run on a computer, the computer is caused to perform any of the methods in FIGS. 4 to 11 described above.
  • An embodiment of the present application also provides a chip, including: at least one processor and a memory.
  • the at least one processor is coupled to the memory and is used to read and execute the instructions in the memory, so as to perform any of the methods in FIGS. 4 to 11 described above.
  • Embodiments of the present application also provide an autonomous vehicle, including: at least one processor and a memory.
  • the at least one processor is coupled to the memory and is used to read and execute the instructions in the memory, so as to perform any of the methods in FIGS. 4 to 11 described above.
  • the method provided by the embodiments of the present application is introduced from the perspective of the electronic device as the execution subject.
  • the electronic device may include a hardware structure and/or a software module to implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed as a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • An embodiment of the present application also provides an electronic device, including: a display screen, a processor, a memory, a power button, an application program, and a computer program.
  • Each of the above devices can be connected through one or more communication buses.
  • the one or more computer programs are stored in the above-mentioned memory and configured to be executed by the one or more processors.
  • the one or more computer programs include instructions, and the instructions can be used to cause the electronic device to perform the steps of the methods in the above embodiments.
  • This embodiment can divide the electronic device into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if this function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.

Abstract

一种危险预警的方法、装置(1300)和车辆(100),方法包括:第一车辆获取周边环境信息;第一车辆根据周边环境信息和预设条件确定目标物,目标物处于周边环境中,预设条件包括以下至少一项:目标物与第一车辆之间的相对加速度的绝对值大于等于预设相对加速度的绝对值,或目标物的高度小于等于预设高度,或目标物处于车流拥堵路段中;第一车辆输出提示信息,提示信息用于提示用户目标物的信息。

Description

危险预警的方法、装置和车辆
本申请要求于2022年03月24日提交中国专利局、申请号为202210301522.X、申请名称为“危险预警的方法、装置和车辆”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及车辆技术领域,并且更加具体地,涉及一种危险预警的方法、装置和车辆。
背景技术
随着机动车保有量的持续增长,与车辆相关的交通事故频发。为了降低交通事故发生概率,车辆在感知到危险时对用户告警,以提醒用户采取必要的紧急措施。但如果告警不及时,会使得用户来不及反应,导致交通事故发生。
因此,如何及时告警用户,使得用户能够采取有效措施,从而尽量避免交通事故的发生是目前亟待解决的问题。
发明内容
本申请实施例提供一种危险预警的方法、装置和车辆,使得车辆在多种情况提前告警用户,尽量避免交通事故的发生。
在本申请中,“车辆”可以包括一种或多种不同类型的交通工具,也可以包括一种或多种不同类型的在陆地(例如:公路,道路,铁路等),水面(例如:水路,江河,海洋等)或者空间上操作或移动的运输工具或者可移动物体。例如,车辆可以包括汽车,自行车,摩托车,火车,地铁,飞机,船,飞行器,机器人或其它类型的运输工具或可移动物体等。
第一方面,提供了一种危险预警的方法,该方法包括:第一车辆获取周边环境信息;该第一车辆根据该周边环境信息和预设条件确定目标物,该目标物处于周边环境中,该预设条件包括以下至少一项:该目标物与该第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,或;该目标物的高度小于或小于等于预设高度,或;该目标物处于车流拥堵路段中;该第一车辆输出提示信息,该提示信息用于提示用户该目标物的信息。
应理解,第一车辆为自车,预设条件包括但不限于上述三项预设条件中的至少一项,任何能够用于确定目标物存在潜在危险的情况均可落入本申请实施例的保护范围。示例性地,目标物处于红绿灯路口的等候状态时,也常常发生行人或非机动车穿过等待车辆的现象,也属于本申请目标物存在潜在危险的情况。
本申请实施例提供的危险预警的方法,通过一定的预设条件,在周边环境中确定出目标物,及时提示用户存在潜在危险。避免用户在拥堵路段、视野盲区或视野不佳等容易发生交通事故的情况下没有及时察觉危险。使得用户在真正的危险到来之前,能够有较为充 足的时间采取有效的应对措施,尽量避免发生交通事故。
结合第一方面,在第一方面的某些实现方式中,当该预设条件包括该目标物与该第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,该方法还包括:该第一车辆确定该周边环境包括关键路口,该关键路口满足预设拓扑结构。
可选地,关键路口包括人行横道口、主路出口或入口、掉头路口或环岛右转路口等事故多发路段。本申请对确定预设拓扑结构的方法不作限定。
本申请实施例提供的危险预警的方法,第一车辆确定周边环境包括关键路口,该关键路口与预设拓扑结构匹配,且该目标物与该第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值(即该目标物的行车状态为紧急刹车)时,表示在第一车辆的盲区(例如:在目标物的前方)可能有行人正在横穿马路,输出提示信息,提示用户存在潜在危险,避免因未能及时提醒用户而造成的交通事故。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:该第一车辆获取该目标物的图像序列;该第一车辆根据该目标物的图像序列确定该目标物与该第一车辆之间的相对加速度的绝对值。
应理解,目标物与第一车辆之间的相对加速度的绝对值表示该目标物的行车状态,根据获取的目标物的N帧图像序列,分别计算相对第一车辆的距离,并进一步计算出相对加速度的绝对值。当计算得到的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,则认为该目标物紧急刹车,从而及时提醒用户。
结合第一方面,在第一方面的某些实现方式中,该目标物为与该第一车辆距离最近的车辆。
应理解,第一车辆根据距离第一车辆最近的目标物的行车状态能够更加准确判断第一车辆的视野盲区存在潜在危险,与第一车辆距离较远车辆的行车状态对第一车辆判断是否存在潜在危险并不具有较大的参考价值。
结合第一方面,在第一方面的某些实现方式中,当该预设条件包括该目标物的高度小于或小于等于预设高度时,该方法还包括:当该第一车辆检测或预测到该目标物的姿态朝向该第一车辆时,提高该提示信息的告警级别。
应理解,该目标物为小目标物(低矮目标物),可以通过判断该目标物的高度是否小于或小于等于预设高度,或者通过摄像头拍摄的图片直接确定图像中的目标物为小孩子或者小动物,从而确定该目标物为小目标物。
本申请实施例提供的危险预警的方法,在通常情况下,小目标物容易进入车辆的视野盲区,具有潜在危险,因此提出本方案,在第一车辆的一定视野范围内检测到小目标物时,便提示用户,并且根据小目标物当前的姿态朝向或者预测后的姿态朝向提高对用户的警示级别,提醒用户采取必要措施。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:该第一车辆获取该目标物的图像序列;该第一车辆根据该目标物的图像序列确定或预测该目标物的姿态。
应理解,第一车辆可根据当前目标物的姿态或者预测目标物的姿态确定是否需要提高提示信息的告警级别。
结合第一方面,在第一方面的某些实现方式中,当该预设条件包括该目标物处于车流拥堵路段时,该方法还包括:该第一车辆根据该周边环境信息确定该目标物处于车流拥堵 路段,该周边环境信息包括车辆数量、车辆密度、车辆速度中的至少一种。
可选地,将周边环境信息中的车辆数量、车辆密度、车辆速度中的至少一种输入预先训练的决策树,便可确定车流状态。本申请对预先训练决策树的具体方法不作限定。
本申请实施例提供的危险预警的方法,当第一车辆和目标物处于车流拥堵路段(或者其他容易发生交通事故的路段)时,及时提醒用户当前路段存在潜在危险,提醒用户采取措施。
结合第一方面,在第一方面的某些实现方式中,当该目标物的轨迹与预设区域的重合概率高于预设概率时,提高该提示信息的告警级别。
本申请实施例提供的危险预警的方法,结合道路车流状态和区域概率轨迹预测,能够提高决策的准确性,并且,当目标物当前轨迹以及预测轨迹与预设区域的重合概率高于预设概率时,提高提示信息的告警级别,提醒用户采取必要措施。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:该第一车辆获取该目标物的图像序列;该第一车辆根据该目标物的图像序列预测该目标物的轨迹。
结合第一方面,在第一方面的某些实现方式中,该获取周边环境信息,包括:该第一车辆根据摄像头获取图像数据,并根据该图像数据获取该周边环境信息,或;该第一车辆根据导航数据获取该周边环境信息。
可选地,第一车辆通过前视摄像头、侧视摄像头、后视摄像头获取图像数据,进一步根据图像数据获取周边环境信息,或者第一车辆引入导航数据,通过导航数据获取周边环境信息。
第二方面,提供了一种危险预警的装置,该装置包括:感知单元,用于获取周边环境信息;处理单元,用于根据周边环境信息和预设条件确定目标物,该目标物处于周边环境中,该预设条件包括以下至少一项:该目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,或;该目标物的高度小于或小于等于预设高度,或;该目标物处于车流拥堵路段中;该处理单元还用于输出提示信息,该提示信息用于提示用户该目标物的信息。
结合第二方面,在第二方面的某些实现方式中,当该预设条件包括该目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,该处理单元具体用于:确定该周边环境包括关键路口,该关键路口满足预设拓扑结构。
结合第二方面,在第二方面的某些实现方式中,该感知单元还用于,获取该目标物的图像序列;该处理单元还用于,根据该目标物的图像序列确定该目标物与该第一车辆之间的相对加速度的绝对值。
结合第二方面,在第二方面的某些实现方式中,该目标物为与该装置距离最近的车辆。
结合第二方面,在第二方面的某些实现方式中,当该预设条件包括该目标物的高度小于或小于等于预设高度时,该处理单元具体用于:当该处理单元检测或预测到该目标物的姿态朝向该装置时,提高该提示信息的告警级别。
结合第二方面,在第二方面的某些实现方式中,该感知单元还用于,获取该目标物的图像序列;该处理单元还用于,根据该目标物的图像序列确定或预测该目标物的姿态。
结合第二方面,在第二方面的某些实现方式中,当该预设条件包括该目标物处于车流拥堵路段时,该处理单元具体用于:根据该周边环境确定该目标物处于车流拥堵路段,该 周边环境信息包括车辆数量、车辆密度、车辆速度中的至少一种。
结合第二方面,在第二方面的某些实现方式中,该处理单元还用于:当该目标物的轨迹与预设区域的重合概率高于预设概率时,提高该提示信息的告警级别。
结合第二方面,在第二方面的某些实现方式中,该感知单元还用于,获取该目标物的图像序列;该处理单元还用于,根据该目标物的图像序列预测该目标物的轨迹。
结合第二方面,在第二方面的某些实现方式中,该感知单元具体用于:根据摄像头获取图像数据,并根据该图像数据获取该周边环境信息,或;根据导航数据获取周边环境信息。
第三方面,提供了一种危险预警的装置,该装置包括:储存器,用于存储程序;处理器,用于执行该存储器中存储的程序,当存储器存储的程序被执行时,处理器用于执行上述第一方面以及第一方面中任一可能实现的危险预警的方法。
第四方面,提供了一种危险预警的车辆,该车辆包括第二方面以及第二方面,或第三方面中任一可能实现的危险预警的装置。
第五方面,提供了一种计算机程序产品,上述计算机程序产品包括:计算机程序代码,当上述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面以及第一方面中任一可能实现的危险预警的方法。
需要说明的是,上述计算机程序代码可以全部或部分存储在第一存储介质上,其中第一存储介质可以与处理器封装在一起的,也可以与处理器单独封装,本申请实施例对此不作具体限定。
第六方面,提供了一种计算机可读存储介质,上述计算机可读介质存储有程序代码,当上述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面以及第一方面中任一可能实现的危险预警的方法。
第七方面,提供了一种芯片系统,该芯片系统包括处理器,用于调用存储器中存储的计算机程序或计算机指令,以使得该处理器执行上述任一方面以及上述任一方面可能的涉及的该方法。
结合第七方面,在一种可能的实现方式中,该处理器通过接口与存储器耦合。
结合第七方面,在一种可能的实现方式中,该芯片系统还包括存储器,该存储器中存储有计算机程序或计算机指令。
附图说明
图1是本申请实施例提供的一种危险预警的方法的应用场景。
图2是本申请实施例提供的一种事故预警的流程示意图。
图3是本申请实施例提供的一种监测及预警的方法和装置示意图。
图4是本申请实施例提供的一种危险预警的方法的示意性场景图。
图5是本申请实施例提供的一种危险预警的装置系统架构示意图。
图6是本申请实施例提供的一种危险预警的方法600的示意性流程图。
图7是本申请实施例提供的一种危险预警的方法的示意图。
图8是本申请实施例提供的一种危险预警的方法800的示意性流程图
图9是本申请实施例提供的一种危险预警的方法的示意图。
图10是本申请实施例提供的一种危险预警的方法1000的示意性流程图。
图11是本申请实施例提供的一种危险预警的方法1100的示意性流程图。
图12是本申请实施例提供的一种危险预警的装置的示意性框图。
图13是本申请实施例提供的一种危险预警的装置的示意性框图。
具体实施方式
以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“该”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请以下各实施例中,“至少一个”、“一个或多个”是指一个、两个或两个以上。术语“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
图1是本申请实施例提供的一种危险预警的方法的应用场景。在该应用场景中,可以包括车辆100。
感知系统120可以包括感测关于车辆100周边的环境的信息的若干种传感器。例如,感知系统120可以包括定位系统,定位系统可以是全球定位系统(global positioning system,GPS)121,也可以是北斗系统或者其他定位系统、惯性测量单元(inertial measurement unit,IMU)122、激光雷达123、毫米波雷达124、超声雷达125以及摄像装置126中的一种或者多种。
车辆100可以包括构图模块130。构图模块130可以使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪、同步定位与地图构建(simultaneous localization and mapping,SLAM)等技术为环境绘制地图。
车辆100通过外围设备140与移动终端、外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备140可包括无线通信系统141、车载屏幕142、麦克风143和/或扬声器144。
在一些可能的实现方式中,外围设备140提供车辆100的用户与用户接口160交互手段。例如,车载屏幕142可向车辆100的用户提供信息。用户接口160还可操作车载屏幕142来接收用户的输入。车载屏幕142可以通过触摸屏进行操作。在其他情况中,外围设备140可提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风143可从车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器144可向车辆100的用户输出音频。
车辆100的部分或所有功能可以由计算平台150控制。计算平台150可包括处理器151至15n(n为正整数),处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如中央处理单元(central processing unit,CPU)、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(neural network processing unit,NPU)、张量处理单元(tensor processing unit,TPU)、深度学习处理单元(deep learning processing unit,DPU)等。此外,计算平台150还可以包括存储器,存储器用于存储指令,处理器151至15n中的部分或全部处理器可以调用存储器中的指令,执行质量,以实现相应的功能。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
可选地,上述车辆100可以包括一种或多种不同类型的交通工具,也可以包括一种或多种不同类型的在陆地(例如,公路,道路,铁路等),水面(例如:水路,江河,海洋等)或者空间上操作或移动的运输工具或者可移动物体。例如,车辆可以包括汽车,自行车,摩托车,火车,地铁,飞机,船,飞行器,机器人或其它类型的运输工具或可移动物体等,本申请实施例对此不作限定。
随着科学技术的发展,车辆保有量呈现逐年递增的趋势,而随着道路车辆增多,行人或非机动车交通事故致死率占比也在逐年增多。现有一种基于信息交互的事故预警系统,该系统由车前检测模块、arduino开发板、位置定位模块、通信模块、语音报警模块和车载显示器组成。其中,车前检测模块由两个超声波雷达组成,分别安装在车辆车头的左、右两侧内,可用于检测汽车前方是否有行人或机动车横向通行;arduino开发板、位置定位模块和通信模块均安装于车辆内部,且位置定位模块和通信模块的天线均与车辆的外壳相接;车载显示器安装于车辆内部并处于车辆内仪表盘右侧;通信模块与arduino开发板双向连接。
如图2所示,示出了该事故预警系统的流程示意图。该事故预警系统可以通过车辆的超声波雷达测量车辆前方是否有行人或机动车,通过位置定位模块实时采集车辆的当前位置信息,若前方有行人或机动车时,车载显示器工作,用于提醒第一车辆司机避让行人,同时将超声波雷达的数据和位置定位模块的位置信息打包发送至周边信号连通的周边车辆,周边车辆根据接收到的数据进行分析处理,当接收的信息与自身车辆的车速已经构成危险时,生成危险反馈信息,并触发自身的车载显示器和车内语音报警模块以提醒自身车辆司机避让行人,同时经过通信模块反馈给信号源车辆,信号源车辆根据接收到的数据判 断是否与自身发出的预警信息数据相匹配,当匹配成功时,信号源车辆控制车外语音报警模块工作,用以提醒车外行人或机动车注意避让信号源车辆方向的车辆。但该事故预警系统需要依赖于信号源车辆的准确检测以及车辆之间及时通信。首先,当信号源车辆未能正确且及时检测出突发情况时,则不能保证与信号源车辆通信的周边车辆及时获取危险告警信息,不能及时告警往往是发生突发事故的主要原因;其次,车辆之间的通信依赖于车辆安装同一套系统或者采用可相互通信的应用接口,广泛使用仍有一定的困难。
此外,如图3所示,示出了一种监测车辆、车道和行人并预警的方法和装置。该装置具体可包括:数据采集装置、数据处理装置、视频图像采集装置、识别装置和报警装置。可利用探测器对行人和用户同时发出预警,并通过向用户和行人的手机或其它多媒体终端发送消息进行警示,并在必要时向用户的手机或其它多媒体终端发送路口实时监控视频以帮助用户较为全面掌握当前路口情况。可选地,该探测器可以是摄像头和雷达传感器,可安装在路口的支柱上,用探测产生危险状况的车道的车辆信息以及与行人之间的距离关系,并且发出预警信息。摄像头可以与雷达传感器同时工作,摄像头用于实时监控斑马线周围路况并采集过往车辆的图像以识别车牌号码;雷达传感器用于探测产生告警现象的车道上的车辆与传感器之间的距离、产生告警现象的车道上的车辆行驶的速度、以及行人与传感器之间的距离。但该方法仅适用于安装有该装置的路口,并且难以保证对行人和车辆的准确识别,进而无法保证及时告警。
现有技术中大多依赖传感器感知周边对行驶车辆有危险的车辆,难以在拥堵路段、视野不佳等容易发生交通事故的路况下发挥较好作用,且较高依赖周边车辆与自身车辆的相互通信,无法适用于不能相互通信的车辆获取前方事故发生的危险信号,也存在局限于路口斑马线路段识别人与车辆信息的情况。因此,现有技术因上文该的较多依赖性和局限性,使得相关技术方案难以广泛应用。
本申请实施例提供一种危险预警的方法、装置和车辆。使得车辆在拥堵路段、视野不佳等容易发生交通事故的路况时,对道路、行人以及周边车辆的情况进行预估,在有可能发生交通事故的情况时对用户提前告警,避免在车辆已经检测到目标物时再告警用户导致用户可能造成无法及时处理当前危急情况导致交通事故的发生。
图4示出了本申请实施例提供的一种危险预警的方法的示意性场景图。应理解,本申请实施例提供的方案可应用于车辆,车辆可以通过外部的摄像头,毫米波雷达等装置获取周围的环境信息,其中环境信息可以包括周边车辆、车道、行人以及非机动车的信息,并基于对周边环境信息的分析确定是否需要提前告警。
示例性地,图5示出了本申请实施例提供的一种危险预警的装置系统架构示意图。具体地,该装置包括感知模块510、决策模块520以及告警模块530。
感知模块510可以是图1中的感知系统120所包括的多个传感器中的一个或多个,具体可以包括全球定位系统121、IMU122、激光雷达123、毫米波雷达124、超声雷达125以及摄像装置126等。感知模块获取当前车位的环境信息,环境信息包括车位周围静态或动态的障碍物、可通行区域、车道线、行人和非机动车等。感知模块510采用的感知算法可包括道路线检测、多目标跟踪(multi-objective tracking,MOT)以及路标识别等,本申请实施例对此不作限定。
决策模块520和告警模块530可以是图1中的计算平台150中的一个或多个。
感知模块510将获取的环境信息输入决策模块520和告警模块530。决策模块520对环境信息进行分析,根据感知模块510获得的环境信息,例如第一车辆视野范围内的车道线、穿行口(例如:人行道、路口)及车辆、从导航获取的路口信息、小目标物(例如:小孩子、小动物)、非机动车等。并将环境信息输入告警模块530。
告警模块530根据决策模块520分析环境信息得到的决策信息告警用户,以提醒用户前方有发生交通事故的可能,提醒用户谨慎驾驶。
决策模块520对环境信息进行分析,包括,根据采集的最邻近车辆的N帧图像序列,计算最邻近车辆相对第一车辆的距离,并计算出相对加速度的绝对值;或计算当前时刻小目标物距离第一车辆的距离;或根据车辆的统计信息得到车流状态,并计算目标物的轨迹与第一车辆的预设区域的重合概率。决策模块520将相应的决策信息输入告警模块530,告警模块530根据不同的告警级别对用户进行告警。告警的形式可以为显示告警或语音提示告警等,本申请对此不作限定。
鉴于此,本申请实施例提供了一种危险预警的方法,该方法可以应用于车辆、或者车辆中的芯片、系统等。在拥堵路段、车辆缓行、视野不佳等事故多发路段,车辆难以检测到目标物(例如:非机动车、行人)或其轨迹,从而难以通过车辆感知系统识别或检测到目标物后提示用户,以达到有效防止交通事故发生情况下。本申请实施例中,通过间接感知周边车辆的行驶状态,或通过预测目标物的轨迹或姿态,提前告警用户,能够有效避免不能及时告警用户而可能造成交通事故的发生。
图6示出了本申请实施例提供的一种危险预警的方法600的示意性流程图。具体地,图6示出了第一车辆通过车道检测,车辆检测与判别,融合周边车辆状态的感知,当符合道路预设拓扑结构且判别周边车辆存在异常驾驶行为时,对第一车辆用户提前告警。该方法600包括:
S601,感知周边环境信息。
示例性地,第一车辆感知周边环境之前可以获取数据,也可以在感知周边环境的同时获取数据。本申请实施例对此不作限定。其中,数据包括周边环境的图像或者地图。第一车辆可以从智能座舱域控制器(cockpit domain controller,CDC)获取前置摄像头的输入图像,图像可以包括周边车辆信息图像、行人信息图像以及非机动车信息图像等。第一车辆也可获取导航信息,导航信息可帮助车辆获取更准确且更丰富的周围环境数据。
应理解,车辆获取数据的具体途径对此不作限定,也可以是除CDC以外的行车记录仪等前置摄像头。
应理解,第一车辆可通过感知模块检测视野范围内的车道、穿行口(例如:人行道、路口)及车辆,从导航信息获取路口信息(例如:与路口的相对距离),并综合判断出与第一车辆同方向行驶且处于邻近车道的最近邻车辆。其中,视野范围可理解为第一车辆外部设置的摄像头或毫米雷达波所能拍摄或探测到的范围,最近邻车辆属于第一车辆的周边车辆。
S602,拓扑结构匹配。
应理解,第一车辆根据步骤601中检测到车道结构以及周边车辆的位置信息,与预设拓扑结构进行匹配,匹配成功时,则会执行后续步骤。具体地,拓扑结构匹配主要包括以下过程:首先,识别车辆信息和车道信息;其次,判断当前车道是否匹配预设拓扑结构; 再次,当车道的预设拓扑结构匹配成功时,对周边车辆状态识别,周边车辆包括第一车辆的前方车辆与两侧车辆;最后,将第一车辆、周边车辆以及车道进行拓扑匹配。其中,拓扑结构包括:人行横道口、主路出口或入口、掉头路口或环岛右转路口等事故多发的关键路口。
S603,目标跟踪。
应理解,第一车辆跟踪上述步骤S601中检测到的最近邻车辆,该最近邻车辆为步骤S602中与预设拓扑结构成功匹配的车辆。第一车辆存储该最邻近车辆的N帧图像序列,示例性地,可以通过步骤S601中的摄像头获得图像序列。
S604,计算加速度。
应理解,第一车辆根据存储的最邻近车辆的N(N为正整数)帧图像序列,计算最邻近车辆相对第一车辆的距离,从而计算出最邻近车辆与第一车辆之间的相对加速度的绝对值。
S605,告警决策。
示例性地,第一车辆的决策模块520比较步骤S604中计算得到的相对加速度的绝对值与预设阈值的绝对值的大小,当相对加速度的绝对值大于或大于等于预设阈值的绝对值时,认为最邻近车辆急刹车,决策模块520向告警模块530发送提示信号,用以提前告警第一车辆的用户前方可能有行人或非机动车。
应理解,在第一车辆被周边车辆遮挡视线无法及时观察到周边行人或非机动车时,通过感知最邻近车辆的行驶状态(例如:急刹车),间接判断在第一车辆的视野盲区可能存在潜在危险,即在第一车辆观察到目标物之前,提前告警用户,避免了用户在观察到目标物时才获得告警提示导致无法及时应对突发情况所造成的危险。
图7示出了本申请实施例提供的一种危险预警的方法的示意图。如图7所示,虚线框内表示第一车辆的视野范围,实线框内表示第一车辆的视野盲区,实线框在虚线框的范围内。应理解,图7示出的实线框和虚线框的范围仅为示例性说明,并不对第一车辆实际的视野范围和视野盲区作出限定。具体地,第一车辆周边可能存在不易观察到的小目标物(例如:小孩子、小动物)或者进入视野盲区的目标物,通过对小目标物的跟踪,可在小目标物进入第一车辆的视野盲区之前提前告警,以提示用户及时应对可能的突发状况。示例性,有以下几种判断目标物是否为小目标物的方式:方式一,当小目标物的高度低于(或称:小于)或等于预设高度时,该目标物为小目标物;方式二,通过车辆外部的摄像头获取的图像直接感知得到目标物的信息(例如:图像显示为小孩子或小动物时,则认为目标物是小目标物)。
图8示出了本申请实施例提供的一种危险预警的方法800的示意性流程图。具体地,该方法800适用于如图7所示的场景。该方法800包括:
S801,感知周边环境信息。
示例性地,第一车辆感知周边环境之前可以获取数据,也可以在感知周边环境的同时获取数据。本申请实施例对此不作限定。第一车辆可以从移动数据中心(mobile data center,MDC)获取环视摄像头的输入图像,图像可以包括周边车辆信息图像、行人信息图像以及非机动车信息图像等。第一车辆也可获取导航信息,导航信息可帮助车辆获取更准确且更丰富的周围环境数据。
应理解,车辆获取数据的具体途径对此不作限定,也可以是除MDC以外的其他方式,例如上文所述CDC。
应理解,第一车辆可多尺度检测环视视野范围内的小目标,示例性地,可通过环视摄像头检测环视视野范围内的小目标,可通过多个环视摄像头获得多尺度的检测结果,融合多尺度的检测结果能够更准确且更广泛获知周边环境信息,尽量检测到第一车辆环视视野范围内的所有小目标物。
S802,目标跟踪。
应理解,第一车辆跟踪上述步骤S801中检测到的小目标物,第一车辆存储该小目标物的N帧图像序列,根据N帧图像序列预测该小目标物的姿态。其中,小目标物的姿态包括朝向第一车辆或背离第一车辆。
示例性地,可以通过步骤S801中的环视摄像头获得图像序列。
S803,计算距离。
应理解,第一车辆根据存储的小目标物的N帧图像序列,计算小目标物当前时刻距离第一车辆的距离,示例性地,第一车辆根据毫米雷达计算小目标物当前时刻与第一车辆的距离。
S804,告警决策。
示例性地,第一车辆的感知模块510可以跟踪该小目标物,当检测到小目标物处于如图7所示的预设虚线框和实线框之间的范围内时,向告警模块530发送提示信息,低级别告用户;当检测到该小目标物在虚线框内且姿态朝向第一车辆时,提高告警级别;当检测到该小目标物进入虚线框且进入实线框(第一车辆的视野盲区)时,高级别告警用户;当检测到该小目标物离开虚线框区域,也就是远离第一车辆时,告警解除。
应理解,通过感知第一车辆视野范围内的小目标物,并对该小目标物进行跟踪,进一步预测该小目标物的姿态,根据该小目标物当前时刻的姿态以及与第一车辆的距离分级告警提示用户注意行车安全。当小目标物还未进入视野盲区,但姿态已朝向第一车辆,表示存在小目标物即将进入第一车辆的视野盲区的可能性,此时提高告警级别以提醒用户提高警惕,避免小目标物进入视野盲区之后检测失效,导致的用户无法及时应对危险情况。
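As a purely illustrative sketch of the graded warning described for FIG. 7 and method 800 (a low-level alarm inside the dashed zone, a raised alarm when the small target faces the first vehicle, a high-level alarm when it enters the blind zone, and clearance when it leaves), with all zone flags assumed to come from perception and none of the names being part of this disclosure:

```python
def small_target_alarm_level(in_outer_zone: bool, in_blind_zone: bool,
                             facing_ego: bool) -> str:
    """Hypothetical grading of the prompt for a low (small) target around the first vehicle.

    in_outer_zone: target is inside the monitored field of view (dashed box in FIG. 7)
    in_blind_zone: target has entered the blind zone (solid box in FIG. 7)
    facing_ego:    detected or predicted posture points toward the first vehicle
    """
    if not in_outer_zone:
        return "cleared"        # target has left the monitored area
    if in_blind_zone:
        return "high"           # target can no longer be observed directly
    if facing_ego:
        return "raised"         # target is likely to enter the blind zone soon
    return "low"                # target detected, keep monitoring
```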
图9出了本申请实施例提供的一种危险预警的方法的示意图。如图9所示一种交通拥堵或缓行路段,车流量大,行车速度缓慢,容易发生行人或非机动车闯入的情况(例如:行人轨迹如图9中虚线所示),当有人穿行后,其他人也会效仿,容易发生交通事故。当判断车流状态为拥堵时,第一车辆可以对检测到的目标物姿态或运行轨迹进行预测或跟踪,当目标物的轨迹将高概率重合于第一车辆的实线区域时,提高对用户的告警级别。即在目标物轨迹重合于第一车辆的预设区域(实线区域)之前,预测目标物高概率将会进入第一车辆的预设区域,从而提前告警,提示用户注意行车安全。
图10示出了本申请实施例提供的一种危险预警的方法1000的示意性流程图。具体地,该方法1000适用于如图9所示的场景。该方法1000包括:
S1001,目标检测。
示例性地,第一车辆检测目标之前可以获取数据,也可以在检测目标的同时获取数据。本申请实施例对此不作限定。第一车辆在该场景下获取数据的方法如上述步骤S801所述,此处不再赘述。
应理解,基于第一车辆,可以通过环视摄像头获取周边一定范围内的其他车辆,并且检测第一车辆视野范围内的非机动车与行人。
S1002,车流状态统计与分类。
应理解,第一车辆统计获取的周边车辆的信息,具体地,统计在预设时长内,周边车辆的数量、车辆密度等信息,将周边车辆的数量和车辆密度等信息输入预先训练的决策树,获得当前车流状态分类,当车流状态为拥堵状态时,则执行下列步骤。其中,预先训练的决策树可以理解为一种车流状态模型,可以用于根据车辆的数量和车辆密度等信息估算车流状态。
示例性地,根据获取的周边环境信息(例如:车辆、道路、行人及非机动车)触发语义地图构建。通过构建语义地图,第一车辆能够更加准确地估算车流状态,便于第一车辆的决策模块作出相应决策。本申请对语义地图构建的具体方法不作限定。
S1003,目标跟踪。
应理解,当车流状态符合预设的拥堵状态时,跟踪上述步骤S1001中检测到的目标物(行人或非机动车),同时储存目标物的N帧图像序列。
S1004,告警决策。
示例性地,当存在目标物行驶或移动在第一车辆周边一定范围的路面上时,告警模块530发出低级别告警;第一车辆对上述步骤S1003中目标物行驶或移动轨迹与预设区域(如图9所示的实线范围内)的重合概率进行估算,当目标物在将来T时间段内的轨迹与预设区域的重合概率大于或等于预设阈值P时,提高告警级别。
应理解,本申请实施例提供的方案可适用于事故多发路段,例如:拥堵穿行。通过对车流状态轨迹以及对目标物移动轨迹的估计,更加准确地预估了目标物的移动路径与车辆预设区域的重合概率,使得目标物移动到预设区域之前,发出告警信息,从而及时提醒用户存在潜在的危险障碍物。
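A minimal, hypothetical sketch of the alarm decision of method 1000 (congestion gating plus the coincidence probability between the predicted trajectory over a future horizon T and the preset area, compared against a preset probability P); the labels and default threshold below are assumptions:

```python
def method_1000_alarm(traffic_state: str, overlap_probability: float,
                      preset_probability: float = 0.6) -> str:
    """Hypothetical alarm decision of method 1000: in a congested section, start with a
    low-level warning and raise it when the target's predicted trajectory over the next
    T seconds coincides with the preset area with probability >= the preset threshold P."""
    if traffic_state != "congested":
        return "none"
    if overlap_probability >= preset_probability:
        return "raised"
    return "low"
```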
图11示出了本申请实施例的危险预警的方法1100的示意性流程图。该方法1100可以应用于如图4所示的危险预警的场景中。
S1101,第一车辆获取周边环境信息。
应理解,周边环境包括第一车辆的周边车辆、非机动车、行人、车道等。示例性地,第一车辆可以通过摄像头或者毫米波雷达获取周边环境信息,其中,摄像头包括前视摄像头(例如:行车记录仪)、侧视摄像头或后视摄像头;第一车辆还可以通过导航数据获取周边环境的信息,本申请对此不作限定。
示例性地,第一车辆可以通过摄像头获取周边环境的图像序列,并执行后续步骤。
S1102,第一车辆根据该周边环境信息和预设条件确定目标物。
应理解,该目标物处于周边环境中,该预设条件包括以下至少一项:
其一:该目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值。
应理解,相对加速度为矢量,是目标物的加速度与第一车辆的加速度的差值。该目标物与第一车辆之间的相对加速度,以及预设相对加速度的参考系保持一致。以目标物的加速度为a1,第一车辆的加速度为a2为例:当以第一车辆为参考系时,目标物与第一车辆之间的相对加速度为a1-a2;当以目标物为参考系时,目标物与第一车辆之间的相对加速度 为a2-a1
一种可能的实现方式中,以第一车辆为参考系。预设相对加速度为a,a小于或等于0;当目标物急刹车时,目标物与第一车辆之间的相对加速度为a’,a’小于或等于0。目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,也就是:|a’|≥|a|。
例如:目标物正在减速行驶,其加速度为-5m/s2;第一车辆正匀速直线行驶,其加速度为0m/s2;预设相对加速度为-4m/s2;以第一车辆为参考系,经过计算(-5m/s2-0m/s2=-5m/s2),目标物与第一车辆之间的相对加速度为-5m/s2。因此,满足目标物与第一车辆之间的相对加速度的绝对值大于预设相对加速度的绝对值,也就是:|-5|>|-4|。
另一种可能的实现方式中,以目标物为参考系。预设相对加速度为a,a大于或等于0;当目标物急刹车时,目标物与第一车辆之间的相对加速度为a’,a’大于或等于0。目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,也就是:a’≥a。
例如:目标物正在减速行驶,其加速度为-5m/s2;第一车辆正匀速直线行驶,其加速度为0m/s2;预设相对加速度为4m/s2;以目标物为参考系,经过计算(0m/s2-(-5m/s2)=5m/s2),目标物与第一车辆之间的相对加速度为5m/s2。因此,满足目标物与第一车辆之间的相对加速度的绝对值大于预设相对加速度的绝对值,也就是:|5|>|4|。
示例性地,第一车辆根据该目标物的图像序列确定该目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值。
应理解,第一车辆确定该周边环境包括关键路口,该关键路口满足预设拓扑结构,其中关键路口包括人行横道口、主路出口或入口、掉头路口或环岛右转路口等事故多发路段。本申请对确定预设拓扑结构的方法不作限定。
需要说明的是,该目标物可以为距离第一车辆最近的车辆,当该目标物与第一车辆同方向行驶在相邻车道上且该目标物在第一车辆的斜前方,该目标物可能会遮挡第一车辆的视野,如果有行人正在横穿第一车辆与该目标物之前的人行横道时,第一车辆极有可能无法及时观察到正在横穿人行横道的行人。因此,第一车辆通过监测该目标物与该第一车辆之间的相对加速度的绝对值,判断该目标物是否急刹车,当该目标物与该第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,则认为该目标物正在急刹车,该目标物前方可能存在行人横穿人行横道的情况。此时,第一车辆提醒用户可能存在危险。通过间接告警的方式,感知周边环境的车辆,并计算周边环境的车辆的行驶状态,判断是否存在有异常驾驶的行为(例如:急刹车),提示用户存在潜在的危险。
其二:该目标物的高度低于预设高度。
可选地,当第一车辆检测到该目标物的姿态朝向第一车辆时,提高如步骤S1103中所述的提示信息的告警级别,或;当第一车辆预测该目标物的姿态朝向第一车辆时,提高提示信息的告警级别。
示例性地,第一车辆根据该目标物的图像序列确定该目标物的姿态,或;第一车辆根据该目标物的图像序列预测该目标物的姿态。
需要说明的是,当目标物的高度小于预设高度时,该目标物可理解为如图7所示的小 孩子或小动物,或者,通过第一车辆的摄像头所拍摄到的图像确定小孩子或小动物为目标物,该目标物代称为小目标物。小目标物由于高度较低,在车辆的视野范围内可能难以检测到(例如图7中的实线范围内的视野盲区),于是当第一车辆在一定视野范围内(例如图7中的虚线范围内)检测到小目标物时,便提示用户,并可以根据小目标物当前的姿态或者预测的姿态朝向第一车辆时,提高告警级别,提示用户采取必要的措施,避免当小目标物在进入视野盲区后第一车辆检测不到该小目标物时,用户难以及时采取必要措施的问题。
其三:该目标物处于车流拥堵路段中。
应理解,第一车辆获取周边环境信息,该周边环境信息包括车辆数量、车辆密度、车辆速度中的至少一种。示例性地,第一车辆将上述周边环境信息中的至少一种输入预先训练的决策树,确定当前的车流状态。
示例性地,第一车辆根据该目标物的图像序列预测该目标物的轨迹。
需要说明的是,该目标物可以是穿行缓行车辆的行人或非机动车。当该目标物处于车流拥堵路段时,表示该车辆正处于交通事故易发路段(例如:拥堵穿行路段),此时便提示用户,以便用户提高警惕小心驾驶。当预测该目标物的轨迹与预设区域(例如:图9中的实线区域)的重合概率高于预设概率时,提高告警级别,更进一步地提醒用户小心驾驶。
S1103,第一车辆输出提示信息,该提示信息用于提示用户该目标物的信息。
具体地,在目标物满足上述不同预设条件下,第一车辆可以输出提示信息,以提示用户提高警惕,以便用户及时采取必要措施,并进一步根据对目标物姿态或轨迹的预测,提高该提示信息的告警级别。示例性地,提示信息可以显示于第一车辆的中控台,也可以发出报警信号,本申请对提示信息的具体表现形式不作限定。
以上,结合图4至图11详细说明了本申请实施例提供的危险预警的方法。以下,结合图12至13详细说明本申请实施例提供的危险预警的装置。应理解,装置实施例的描述与方法实施例的描述相互对应,因此,未详细描述的内容可以参见上文方法实施例,为了简洁,这里不再赘述。
图12是本申请实施例提供的危险预警的装置的示意性框图。该装置1200包括感知单元1210和处理单元1220。感知单元1210可以实现相应的获取数据或信息的功能,处理单元1220用于进行数据处理或输出信息。
可选地,该装置1200还可以包括存储单元,该存储单元可以用于存储指令和/或数据,处理单元1220可以读取存储单元中的指令和/或数据,以使得装置实现前述方法实施例。
该装置1200可以包括用于执行图4至图11中的方法的单元。并且,该装置1200中的各单元和上述其他操作和/或功能分别为了实现图4至图11的方法实施例的相应流程。
其中,当该装置1200用于执行图11中的方法1100时,感知单元1210可用于执行方法1100中的S1101,处理单元1220可用于执行方法1100中的S1102和S1103。
具体地,该装置1200包括:感知单元1210,用于获取周边环境信息;处理单元1220,用于根据周边环境信息和预设条件确定目标物,该目标物处于周边环境中,该预设条件包括以下至少一项:该目标物与该第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,或;该目标物的高度小于或小于等于预设高度,或;该目标物处于车流拥堵路段中;该处理单元1220还用于输出提示信息,该提示信息用于提示用户该目 标物的信息。
可选地,当该预设条件包括该目标物与该第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,该处理单元1220具体用于:确定该周边环境包括关键路口,该关键路口满足预设拓扑结构。
可选地,该感知单元1210还用于,获取该目标物的图像序列;该处理单元1220还用于,根据该目标物的图像序列确定该目标物与第一车辆之间的相对加速度的绝对值。
可选地,当该预设条件包括该目标物的高度小于或小于等于预设高度时,该处理单元1220具体用于:当该处理单元1220检测或预测到该目标物的姿态朝向该装置时,提高该提示信息的告警级别。
可选地,该感知单元1210还用于,获取该目标物的图像序列;该处理单元1220还用于,根据该目标物的图像序列确定或预测该目标物的姿态。
可选地,当该预设条件包括该目标物处于车流拥堵路段时,该处理单元1220具体用于:根据该周边环境确定该目标物处于车流拥堵路段,该周边环境信息包括车辆数量、车辆密度、车辆速度中的至少一种。
可选地,该处理单元1220还用于:当该目标物的轨迹与预设区域的重合概率高于预设概率时,提高该提示信息的告警级别。
可选地,该处理单元1220还用于:当该目标物的轨迹与预设区域的重合概率高于预设概率时,提高该提示信息的告警级别。
可选地,该感知单元1210具体用于:根据摄像头获取图像数据,并根据该图像数据获取该周边环境信息,或;根据导航数据获取周边环境信息。
图12中的处理单元可以由至少一个处理器或处理器相关电路实现,感知单元可以由收发器或收发器相关电路实现,存储单元可以通过至少一个存储器实现。
图13是本申请实施例的危险预警装置的示意性框图。图13所示的危险预警装置1300可以包括:处理器1310、收发器1320以及存储器1330。其中,处理器1310、收发器1320以及存储器1330通过内部连接通路相连,该存储器1330用于存储指令,该处理器1310用于执行该存储器1330存储的指令,以收发器1330接收/发送部分参数。可选地,存储器1330既可以和处理器1310通过接口耦合,也可以和处理器1310集成在一起。
需要说明的是,上述收发器1320可以包括但不限于输入/输出接口(input/output interface)一类的收发装置,来实现通信设备1300与其他设备或通信网络之间的通信。
在实现过程中,上述方法的各步骤可以通过处理器1310中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1330,处理器1310读取存储器1330中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
还应理解,本申请实施例中,该存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。处理器的一部分还可以包括非易失性随机存取存储器。例如,处理器还可以存储设备类型的信息。
本申请实施例还提供一种计算机可读介质,该计算机可读介质存储有程序代码,当该 计算机程序代码在计算机上运行时,使得该计算机执行上述图4至图11中的任一种方法。
本申请实施例还提供一种芯片,包括:至少一个处理器和存储器,该至少一个处理器与该存储器耦合,用于读取并执行该存储器中的指令,以执行上述图4至图11中的任一种方法。
本申请实施例还提供一种自动驾驶车辆,包括:至少一个处理器和存储器,该至少一个处理器与该存储器耦合,用于读取并执行该存储器中的指令,以执行上述图4至图11中的任一种方法。
以上各个实施例可以单独使用,也可以相互结合使用,以实现不同的技术效果。
上述本申请提供的实施例中,从电子设备作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,电子设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
本申请实施例还提供了一种电子设备,包括:显示屏、处理器、存储器、电源键、应用程序以及计算机程序。上述各器件可以通过一个或多个通信总线连接。其中,该一个或多个计算机程序被存储在上述存储器中并被配置为被该一个或多个处理器执行,该一个或多个计算机程序包括指令,上述指令可以用于使电子设备执行上述各实施例中收发红包方法的各个步骤。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
该作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各 个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
该功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例该方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (25)

  1. 一种危险预警的方法,其特征在于,包括:
    第一车辆获取周边环境信息;
    所述第一车辆根据所述周边环境信息和预设条件确定目标物,所述目标物处于周边环境中,所述预设条件包括以下至少一项:
    所述目标物与所述第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,或;
    所述目标物的高度小于或小于等于预设高度,或;
    所述目标物处于车流拥堵路段中;
    所述第一车辆输出提示信息,所述提示信息用于提示用户所述目标物的信息。
  2. 根据权利要求1所述的方法,其特征在于,当所述预设条件包括所述目标物与所述第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,所述方法还包括:
    所述第一车辆确定所述周边环境包括关键路口,所述关键路口满足预设拓扑结构。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    所述第一车辆获取所述目标物的图像序列;
    所述第一车辆根据所述目标物的图像序列确定所述目标物与所述第一车辆之间的相对加速度的绝对值。
  4. 根据权利要求2或3所述的方法,其特征在于,所述目标物为与所述第一车辆距离最近的车辆。
  5. 根据权利要求1所述的方法,其特征在于,当所述预设条件包括所述目标物的高度小于或小于等于预设高度时,所述方法还包括:
    当所述第一车辆检测或预测到所述目标物的姿态朝向所述第一车辆时,提高所述提示信息的告警级别。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    所述第一车辆获取所述目标物的图像序列;
    所述第一车辆根据所述目标物的图像序列确定或预测所述目标物的姿态。
  7. 根据权利要求1所述的方法,其特征在于,当所述预设条件包括所述目标物处于车流拥堵路段时,所述方法还包括:
    所述第一车辆根据所述周边环境信息确定所述目标物处于车流拥堵路段,所述周边环境信息包括车辆数量、车辆密度、车辆速度中的至少一种。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    当所述目标物的轨迹与预设区域的重合概率高于预设概率时,提高所述提示信息的告警级别。
  9. 根据权利要求8所述的方法,其特征在于,所述方法还包括:
    所述第一车辆获取所述目标物的图像序列;
    所述第一车辆根据所述目标物的图像序列预测所述目标物的轨迹。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述第一车辆获取周边环境信息,包括:
    所述第一车辆根据摄像头获取图像数据,并根据所述图像数据获取所述周边环境信息,或;
    所述第一车辆根据导航数据获取所述周边环境信息。
  11. 一种危险预警的装置,其特征在于,包括:
    感知单元,用于获取周边环境信息;
    处理单元,用于根据周边环境信息和预设条件确定目标物,所述目标物处于周边环境中,所述预设条件包括以下至少一项:
    所述目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值,或;
    所述目标物的高度小于或小于等于预设高度,或;
    所述目标物处于车流拥堵路段中;
    所述处理单元还用于输出提示信息,所述提示信息用于提示用户所述目标物的信息。
  12. 根据权利要求11所述的装置,其特征在于,当所述预设条件包括所述目标物与第一车辆之间的相对加速度的绝对值大于或大于等于预设相对加速度的绝对值时,所述处理单元具体用于:
    确定所述周边环境包括关键路口,所述关键路口满足预设拓扑结构。
  13. 根据权利要求12所述的装置,其特征在于,
    所述感知单元还用于,获取所述目标物的图像序列;
    所述处理单元还用于,根据所述目标物的图像序列确定所述目标物与所述第一车辆之间的相对加速度的绝对值。
  14. 根据权利要求12或13所述的装置,其特征在于,所述目标物为与所述装置距离最近的车辆。
  15. 根据权利要求11所述的装置,其特征在于,当所述预设条件包括所述目标物的高度小于或小于等于预设高度时,所述处理单元具体用于:
    当所述处理单元检测或预测到所述目标物的姿态朝向所述装置时,提高所述提示信息的告警级别。
  16. 根据权利要求15所述的装置,其特征在于,
    所述感知单元还用于,获取所述目标物的图像序列;
    所述处理单元还用于,根据所述目标物的图像序列确定或预测所述目标物的姿态。
  17. 根据权利要求11所述的装置,其特征在于,当所述预设条件包括所述目标物处于车流拥堵路段时,所述处理单元具体用于:
    根据所述周边环境确定所述目标物处于车流拥堵路段,所述周边环境信息包括车辆数量、车辆密度、车辆速度中的至少一种。
  18. 根据权利要求17所述的装置,其特征在于,所述处理单元还用于:
    当所述目标物的轨迹与预设区域的重合概率高于预设概率时,提高所述提示信息的告警级别。
  19. 根据权利要求18所述的装置,其特征在于,
    所述感知单元还用于,获取所述目标物的图像序列;
    所述处理单元还用于,根据所述目标物的图像序列预测所述目标物的轨迹。
  20. 根据权利要求11至19中任一项所述的装置,其特征在于,所述感知单元具体用于:
    根据摄像头获取图像数据,并根据所述图像数据获取所述周边环境信息,或;
    根据导航数据获取周边环境信息。
  21. 一种危险预警的装置,其特征在于,包括:
    收发器,用于接收和发送消息;
    储存器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,以使得所述装置执行如权利要求1至10中任一项所述的方法,所述处理器与所述存储器耦合。
  22. 一种危险预警的车辆,其特征在于,包括权利要求11至21中任一项所述的装置。
  23. 一种计算机可读存储介质,其特征在于,其上存储有计算机程序,所述计算机程序被计算机执行时,以使得所述计算机实现如权利要求1至10中任一项所述的方法。
  24. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1至10中任一项所述的方法。
  25. 一种芯片,其特征在于,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,以执行如权利要求1至10中任一项所述的方法。
PCT/CN2023/082246 2022-03-24 2023-03-17 危险预警的方法、装置和车辆 WO2023179494A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210301522.X 2022-03-24
CN202210301522.XA CN116834655A (zh) 2022-03-24 2022-03-24 危险预警的方法、装置和车辆

Publications (1)

Publication Number Publication Date
WO2023179494A1 true WO2023179494A1 (zh) 2023-09-28

Family

ID=88099933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082246 WO2023179494A1 (zh) 2022-03-24 2023-03-17 危险预警的方法、装置和车辆

Country Status (2)

Country Link
CN (1) CN116834655A (zh)
WO (1) WO2023179494A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117465394B (zh) * 2023-12-28 2024-04-16 深圳市开心电子有限公司 一种电动车紧急制动的控制方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014078155A (ja) * 2012-10-11 2014-05-01 Mitsubishi Motors Corp 車両用警報装置
CN109686124A (zh) * 2018-11-28 2019-04-26 法法汽车(中国)有限公司 防碰撞提醒方法及系统、存储介质和电子设备
CN111369831A (zh) * 2020-03-26 2020-07-03 径卫视觉科技(上海)有限公司 一种道路驾驶危险预警方法、装置和设备
JP2022030241A (ja) * 2020-08-06 2022-02-18 株式会社Subaru 車両の走行制御装置及び車両の走行制御システム

Also Published As

Publication number Publication date
CN116834655A (zh) 2023-10-03

Similar Documents

Publication Publication Date Title
US11074813B2 (en) Driver behavior monitoring
JP6800899B2 (ja) 視界に制限のある交差点への接近のためのリスクベースの運転者支援
US11967230B2 (en) System and method for using V2X and sensor data
CN111033510B (zh) 用于运行驾驶员辅助系统的方法和装置以及驾驶员辅助系统和机动车
JP5938569B2 (ja) 方位情報を考慮する高度運転者支援システム、及びその動作方法
US10336252B2 (en) Long term driving danger prediction system
CN113968216B (zh) 一种车辆碰撞检测方法、装置及计算机可读存储介质
CN109933062A (zh) 自动驾驶车辆的报警系统
JP2019535566A (ja) 不測のインパルス変化衝突検出器
EP3403219A1 (en) Driver behavior monitoring
KR20090125795A (ko) 안전운전 지원장치
CN111547043A (zh) 通过自主车辆自动响应紧急服务车辆
CN113442917B (zh) 用于宿主机动车辆的警告系统
US20230166731A1 (en) Devices and methods for assisting operation of vehicles based on situational assessment fusing expoential risks (safer)
WO2023179494A1 (zh) 危险预警的方法、装置和车辆
CN107599965B (zh) 用于车辆的电子控制装置及方法
KR102084946B1 (ko) 차량의 이동 경로에 위치한 객체의 통과 높이에 따른 경보 알림 생성 장치 및 방법
JP6962367B2 (ja) イベントマップ生成方法及びイベントマップ生成システム、運転支援方法及び運転支援システム
JP2024513710A (ja) 光投影装置及び方法並びに記憶媒体
JP2021131623A (ja) 交通リスク低減プログラム、情報処理装置及び方法
Jebamani et al. AR Upgraded Windshield
EP4355626A1 (en) Devices and methods for predicting collisions, predicting intersection violations, and/or determining region of interest for object detection in camera images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23773742

Country of ref document: EP

Kind code of ref document: A1