WO2022142590A1 - Unmanned vehicle behavior reminding method and apparatus, unmanned vehicle, and storage medium - Google Patents

Unmanned vehicle behavior reminding method and apparatus, unmanned vehicle, and storage medium

Info

Publication number
WO2022142590A1
Authority
WO
WIPO (PCT)
Prior art keywords
unmanned vehicle
driving state
target unmanned
change
target
Prior art date
Application number
PCT/CN2021/123677
Other languages
English (en)
French (fr)
Inventor
张少康
Original Assignee
北京航迹科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京航迹科技有限公司
Publication of WO2022142590A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584: Recognition of moving objects or obstacles of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT for calculating health indices; for individual health risk assessment

Definitions

  • This specification relates to the technical field of unmanned vehicles, and in particular, to a method, device, unmanned vehicle, and storage medium for reminding behavior of unmanned vehicles.
  • An unmanned vehicle is a type of intelligent vehicle that relies primarily on an in-vehicle, computer-based intelligent driving system to achieve driverless operation.
  • While driving, the unmanned vehicle needs to determine a vehicle driving strategy based on the traffic environment in which it is located, and changes its driving state by executing that strategy.
  • Changing the driving state means, for example, that the unmanned vehicle may switch lanes, turn, start, accelerate, or decelerate.
  • the embodiments of this specification provide an unmanned vehicle behavior reminding method, device, unmanned vehicle, and storage medium, which can remind passengers of an upcoming driving state change of the unmanned vehicle, so that passengers can prepare safety protection in advance.
  • an embodiment of this specification provides a method for reminding an unmanned vehicle behavior, the method comprising: acquiring information of traffic participants within a preset range around a target unmanned vehicle, the information being used to indicate the driving state of the traffic participants; and determining, according to that information, whether the target unmanned vehicle needs to be controlled to change its driving state;
  • if the target unmanned vehicle needs to be controlled to change its driving state, driving state change prompt information is output to the passengers in the target unmanned vehicle, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle.
  • an unmanned vehicle behavior reminder device comprising:
  • the acquisition module is used to acquire the information of the traffic participants within the preset range around the target unmanned vehicle, and the information of the traffic participants is used to indicate the driving status of the traffic participants;
  • a determination module used for determining whether it is necessary to control the target unmanned vehicle to change the driving state according to the information of the traffic participants;
  • the reminder module is used to output the driving state change prompt information to the passengers in the target unmanned vehicle if it is necessary to control the target unmanned vehicle to change the driving state, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle.
  • embodiments of this specification provide an unmanned vehicle, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the steps of the method of the first aspect when executing the computer program.
  • the embodiments of this specification provide a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method described in the first aspect above are implemented.
  • the unmanned vehicle behavior reminding method, device, unmanned vehicle, and storage medium provided by the embodiments of this specification can remind passengers of an upcoming driving state change of the unmanned vehicle, so that passengers can take safety precautions in advance.
  • the unmanned vehicle behavior reminder method obtains the information of traffic participants within a preset range around the target unmanned vehicle, and then determines, according to that information, whether the target unmanned vehicle needs to be controlled to change its driving state; if the target unmanned vehicle needs to be controlled to change its driving state,
  • the driving state change prompt information is output to the passengers in the target unmanned vehicle, wherein the driving state change prompt information is used to indicate the pending driving state change of the target unmanned vehicle.
  • thus, when the target unmanned vehicle needs to change its driving state, it outputs the driving state change prompt information to the passengers in the vehicle, so that the passengers can understand the coming driving state change and prepare safety protection in advance, avoiding safety problems.
  • FIG. 1 is an application environment diagram of a method for reminding an unmanned vehicle behavior in an embodiment of this specification
  • FIG. 2 is a schematic flowchart of a method for reminding an unmanned vehicle behavior according to an embodiment of the specification
  • FIG. 3 is a schematic flowchart of a method for reminding an unmanned vehicle behavior in another embodiment of the present specification
  • FIG. 4 is a schematic flowchart of a method for a target unmanned vehicle to output driving state change prompt information according to an embodiment of the specification
  • FIG. 5 is a schematic flowchart of a method for reminding an unmanned vehicle behavior in another embodiment of the specification
  • FIG. 6 is a schematic flowchart of a method for reminding an unmanned vehicle behavior in another embodiment of the specification
  • FIG. 7 is a structural block diagram of an unmanned vehicle behavior reminder device in an embodiment of the specification.
  • FIG. 8 is an internal structural diagram of an unmanned vehicle according to an embodiment of the specification.
  • An unmanned vehicle is a type of intelligent vehicle that relies primarily on an in-vehicle, computer-based intelligent driving system to achieve driverless operation. At present, as unmanned vehicle technology matures, unmanned vehicles are gradually entering people's daily lives.
  • unmanned vehicles can be divided into two forms: the narrowly defined unmanned vehicle, i.e., a highly intelligent vehicle with no cab, steering wheel, or other driving equipment;
  • and the broadly defined unmanned vehicle, which can be a vehicle in a fully automatic driving mode.
  • some vehicles offer a variety of selectable driving modes, one of which is a fully automatic driving mode.
  • while in the fully automatic driving mode, such a vehicle can also be called an unmanned vehicle.
  • the implementation environment may include an unmanned vehicle, and the unmanned vehicle may be a highly intelligent unmanned vehicle without a cab or a steering wheel and other driving equipment. It could also be a normal vehicle in fully self-driving mode.
  • the unmanned vehicle includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus. Among them, the processor of the unmanned vehicle is used to provide computing and control capabilities.
  • the memory of the unmanned vehicle includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the unmanned vehicle is used to communicate with an external terminal device through a network connection, or the unmanned vehicle is connected to an external terminal device through a short-range communication technology; the computer program, when executed by the processor, implements an unmanned vehicle behavior reminding method.
  • the display screen of the unmanned vehicle may be a liquid crystal display or an electronic ink display, and the input device of the unmanned vehicle may be a touch layer covering the display screen, or a button, trackball, or touchpad provided in the unmanned vehicle.
  • FIG. 1 is only a block diagram of a part of the structure related to the solution of this specification, and does not constitute a limitation on the unmanned vehicle to which the solution of this specification is applied.
  • a vehicle may include more or fewer components than shown in the figures, or combine certain components, or have a different arrangement of components.
  • a method for reminding the behavior of an unmanned vehicle is provided.
  • the method is applied to the unmanned vehicle shown in FIG. 1, and the method includes the following steps:
  • Step 201 the target unmanned vehicle acquires information of traffic participants within a preset range around the target unmanned vehicle.
  • traffic participants may include vehicles, pedestrians, non-motor vehicles, animals, traffic lights, and ground roads (e.g., lane lines, roadside buildings, green belts, etc.).
  • the information of the traffic participant is used to indicate the driving state of the traffic participant.
  • the information of the traffic participant may include the location, speed, movement direction information of the traffic participant, the color of the traffic light, the traffic line, and other information, which are not exhaustive in the embodiments of this specification.
  • the target unmanned vehicle can obtain the information of traffic participants within the preset range around it in two ways.
  • in one, the target unmanned vehicle obtains the information of traffic participants within the preset range around it based on its own on-board radar system.
  • in the other, the target unmanned vehicle exchanges information with a roadside device based on an on-board V2X device, so as to obtain from the roadside device the information of traffic participants within a preset range around the target unmanned vehicle's location.
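  • As a minimal illustration of the two acquisition paths described above (on-board radar and V2X exchange with a roadside device), the following Python sketch merges both sources into one list. The sensor interfaces (`radar.scan(...)`, `v2x.query(...)`) and the data fields are hypothetical placeholders, not an API defined by this specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrafficParticipant:
    kind: str                   # "vehicle", "pedestrian", "traffic_light", ...
    position: tuple             # (x, y) in metres relative to the target unmanned vehicle
    speed: Optional[float]      # m/s; None for static objects such as lane lines
    heading: Optional[float]    # degrees clockwise from true north
    extra: dict = field(default_factory=dict)  # e.g. {"light_color": "red"}

def get_nearby_participants(radar, v2x, preset_range_m: float = 50.0) -> List[TrafficParticipant]:
    """Collect traffic-participant information within the preset range around the vehicle."""
    participants = []
    # Path 1: the vehicle's own on-board radar / perception system (hypothetical interface).
    participants.extend(radar.scan(max_range_m=preset_range_m))
    # Path 2: information exchanged with a roadside device via the on-board V2X unit.
    participants.extend(v2x.query(radius_m=preset_range_m))
    # Keep only participants that actually fall inside the preset range.
    return [p for p in participants
            if (p.position[0] ** 2 + p.position[1] ** 2) ** 0.5 <= preset_range_m]
```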
  • Step 202 the target unmanned vehicle determines, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state.
  • after obtaining the information of the traffic participants, the target unmanned vehicle can determine its driving strategy according to that information and pre-set decision conditions; the driving strategy is the change strategy for the driving state of the target unmanned vehicle.
  • for example, the target unmanned vehicle can obtain the speed of the traffic participant ahead of it, and if that speed is less than a speed threshold, it can determine that its driving state change strategy is to switch lanes.
  • if the speed of the traffic participant ahead of it is greater than the speed threshold, the target unmanned vehicle can determine that its driving state change strategy is to keep the driving state unchanged.
  • for another example, the target unmanned vehicle can obtain the information of the traffic light and, when the traffic light is red, determine that its driving state change strategy is to decelerate and stop.
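  • A hedged sketch of the kind of pre-set decision conditions described above (slow vehicle ahead leads to a lane switch, a red light leads to deceleration and stopping). The threshold values, the coordinate convention (y axis pointing forward), and the strategy names are illustrative assumptions, not values fixed by this specification.

```python
def decide_change_strategy(participants, speed_threshold_mps: float = 8.0) -> str:
    """Return a driving-state change strategy based on traffic-participant information."""
    for p in participants:
        # Red traffic light ahead: decelerate and stop.
        if p.kind == "traffic_light" and p.extra.get("light_color") == "red":
            return "decelerate_and_stop"
        # A vehicle directly ahead (same lane, y > 0) slower than the speed threshold: switch lanes.
        if p.kind == "vehicle" and p.position[1] > 0 and abs(p.position[0]) < 1.5:
            if p.speed is not None and p.speed < speed_threshold_mps:
                return "switch_lane"
    # Otherwise keep the current driving state.
    return "keep_state"
```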
  • Step 203 if the target unmanned vehicle needs to be controlled to change its driving state, the target unmanned vehicle outputs driving state change prompt information to the passengers in the target unmanned vehicle.
  • a passenger in the target unmanned vehicle may refer to a person riding in the target unmanned vehicle, or may refer to a driver of a vehicle in a fully automatic driving mode.
  • the target unmanned vehicle can output the driving state change prompt information to the passengers by means of voice broadcast, and/or by means of image display.
  • the driving state change prompt information is used to indicate the pending driving state change of the target unmanned vehicle.
  • the driving state change prompt information may include audio prompt information and image prompt information, wherein the audio prompt information is used to prompt passengers by voice about the pending driving state of the target unmanned vehicle, and the image prompt information is used to prompt passengers with image information about the traffic participant that causes the driving state of the target unmanned vehicle to change.
  • the unmanned vehicle behavior reminder method provided by the embodiments of this specification obtains the information of traffic participants within a preset range around the target unmanned vehicle, and then determines, according to that information, whether it is necessary to control the target unmanned vehicle to change its driving state; if so, driving state change prompt information is output to the passengers, the prompt information being used to indicate the pending driving state change of the target unmanned vehicle. Thus, in the embodiment of this specification, when the target unmanned vehicle needs to change its driving state, the prompt information is output to the passengers, so that the passengers can understand the coming driving state change, prepare safety protection in advance, and avoid safety problems.
  • the technical process for the target unmanned vehicle to output driving state change prompt information to passengers in the target unmanned vehicle may include the following:
  • the target unmanned vehicle can control the target display device to display the traffic participants that cause the driving state of the target unmanned vehicle to change.
  • the target display device is a vehicle-mounted display device or a terminal device that establishes a short-range communication connection with the target unmanned vehicle.
  • the terminal device may be, for example, a device with a display screen, such as a mobile phone, a computer, and a wearable device.
  • the terminal device held by the passenger can establish a short-range communication connection with the target unmanned vehicle.
  • in the process of determining whether the driving state needs to be changed, the target unmanned vehicle may determine the traffic participant that causes the driving state of the target unmanned vehicle to change.
  • the target unmanned vehicle can display the traffic participants who caused the change of the driving state of the target unmanned vehicle on the target display device, so that the passengers can know the reasons for the change of the driving state of the target unmanned vehicle.
  • the target unmanned vehicle may mark the traffic participants who cause the driving state of the target unmanned vehicle to change on the screen displayed by the target display device.
  • the process that the target unmanned vehicle controls the target display device to display the traffic participants who cause the change of the driving state of the target unmanned vehicle may include:
  • the target display device is controlled to display, in the first traffic background image, a location frame for marking the traffic participant who causes the change in the driving state of the target unmanned vehicle.
  • the target unmanned vehicle may acquire a first traffic background image, and the first traffic background image includes a plurality of traffic participants within a preset range around the target unmanned vehicle.
  • the target unmanned vehicle may present the dynamic positions of the multiple traffic participants in the first traffic background image to the passengers on the target display device.
  • at the same time, the target unmanned vehicle can mark, in the first traffic background image, the position frame of the traffic participant that causes the driving state change, and control the target display device to display that position frame. This makes it convenient for passengers to learn the reason for the change of the driving state of the target unmanned vehicle and to relate that reason to the actual scene around the target unmanned vehicle.
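  • For the position-frame marking described above, a minimal OpenCV sketch is shown below; the frame coordinates and the traffic background image are assumed to be already available from the perception pipeline, and the colors and label text are illustrative.

```python
import cv2

def mark_cause_participant(background_img, bbox, label="cause of state change"):
    """Draw a position frame around the traffic participant that causes the change.

    background_img: the first traffic background image (BGR numpy array).
    bbox: (x1, y1, x2, y2) pixel coordinates of the participant in that image.
    """
    x1, y1, x2, y2 = bbox
    marked = background_img.copy()
    cv2.rectangle(marked, (x1, y1), (x2, y2), color=(0, 0, 255), thickness=3)
    cv2.putText(marked, label, (x1, max(y1 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return marked  # the marked frame is then sent to the target display device
```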
  • the process that the target unmanned vehicle controls the target display device to display the traffic participants who cause the change of the driving state of the target unmanned vehicle may include:
  • the target display device is controlled to highlight in the second traffic background image the image of the traffic participant causing the change of the driving state of the target unmanned vehicle.
  • the target unmanned vehicle may acquire a second traffic background image, and the second traffic background image includes multiple traffic participants within a preset range around the target unmanned vehicle.
  • the target unmanned vehicle can highlight, in the second traffic background image, the traffic participant that causes the driving state change, and then control the target display device to display the marked second traffic background image. In this way, passengers can intuitively learn the reason for the change of the driving state of the target unmanned vehicle and relate it to the actual scene around the target unmanned vehicle.
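  • Similarly, the highlighting variant described above can be sketched by keeping the participant's region at full brightness while dimming the rest of the second traffic background image; again, the region is assumed to come from the perception pipeline and the dimming factors are arbitrary choices.

```python
import cv2
import numpy as np

def highlight_cause_participant(background_img, bbox):
    """Highlight the traffic participant that causes the driving state to change."""
    x1, y1, x2, y2 = bbox
    # Dim the whole image, then restore full brightness inside the participant's region.
    dimmed = cv2.addWeighted(background_img, 0.4, np.zeros_like(background_img), 0.6, 0)
    dimmed[y1:y2, x1:x2] = background_img[y1:y2, x1:x2]
    return dimmed
```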
  • because the target unmanned vehicle needs to change its driving state constantly according to its surroundings while driving, outputting driving state change prompt information to the passengers for every change would produce a large volume of prompts that disturbs the passengers. Moreover, a large amount of prompt information would fatigue the passengers, so they could no longer react sensitively to prompts about relatively large changes and therefore could not prepare safety protection in time.
  • based on this, another unmanned vehicle behavior reminding method is proposed; as shown in FIG. 3, the method includes:
  • Step 301 the target unmanned vehicle acquires information of traffic participants within a preset range around the target unmanned vehicle.
  • Step 302 the target unmanned vehicle determines whether the target unmanned vehicle needs to be controlled to change the driving state according to the information of the traffic participants.
  • Step 303 if the target unmanned vehicle needs to be controlled to change the driving state, the target unmanned vehicle determines whether the difference between the changed driving state and the driving state before the change is greater than a first difference threshold.
  • the driving state before the change refers to the current driving state of the target unmanned vehicle.
  • the changed driving state refers to the driving state of the target unmanned vehicle after the change strategy of the driving state of the target unmanned vehicle is executed.
  • when the target unmanned vehicle determines that it needs to change its driving state, it can determine the change strategy to be executed and estimate the changed driving state based on that change strategy. Then, the target unmanned vehicle can calculate the difference between the driving state before the change and the changed driving state.
  • after obtaining this difference, the target unmanned vehicle can compare it with the first difference threshold to determine whether the difference between the changed driving state and the driving state before the change is greater than the first difference threshold.
  • the driving state may include a speed state.
  • the target unmanned vehicle can obtain its speed before the change, determine its acceleration according to the change strategy to be executed, and then calculate its changed speed based on that acceleration.
  • it can then calculate the speed difference between its speed before the change and its speed after the change.
  • the target unmanned vehicle may compare the speed difference with the first difference threshold to determine whether the difference between the changed driving state and the driving state before the change is greater than the first difference threshold.
  • the driving state may include a driving direction state, where the driving direction may be represented by the clockwise angle between the driving direction of the target unmanned vehicle and true north.
  • the target unmanned vehicle can determine its steering angle according to the change strategy to be executed; the steering angle is the amount by which the driving direction is about to change, i.e., the difference between the driving direction before the turn and the driving direction after the turn.
  • the target unmanned vehicle may compare the steering angle with the first difference threshold to determine whether the difference between the changed driving state and the driving state before the change is greater than the first difference threshold.
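  • The two difference checks described above (speed difference estimated from the planned acceleration, and steering angle) can be combined into a single gate that decides whether the prompt is worth outputting. The thresholds and the acceleration duration below are illustrative assumptions, not values given by this specification.

```python
def should_output_prompt(current_speed_mps: float,
                         planned_accel_mps2: float,
                         accel_duration_s: float,
                         steering_angle_deg: float,
                         speed_diff_threshold: float = 3.0,
                         angle_threshold_deg: float = 20.0) -> bool:
    """Return True only when the estimated driving-state change exceeds the first difference threshold."""
    # Estimated speed after the change, from the planned acceleration of the change strategy.
    changed_speed = current_speed_mps + planned_accel_mps2 * accel_duration_s
    speed_difference = abs(changed_speed - current_speed_mps)
    # The steering angle is the planned change in driving direction (clockwise from true north).
    return (speed_difference > speed_diff_threshold
            or abs(steering_angle_deg) > angle_threshold_deg)
```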
  • Step 304 if the difference between the driving state after the change and the driving state before the change is greater than the first difference threshold, the target unmanned vehicle outputs driving state change prompt information to passengers in the target unmanned vehicle.
  • if the difference between the changed driving state and the driving state before the change is greater than the first difference threshold, the change is relatively large and may pose a safety risk to the passengers; therefore, the target unmanned vehicle outputs the driving state change prompt information to the passengers.
  • if the difference is less than or equal to the first difference threshold, the change is relatively small and will not pose a safety risk to the passengers; in this case, to avoid disturbing the passengers, the target unmanned vehicle may not output the driving state change prompt information to them.
  • in this way, when the difference between the changed driving state and the driving state before the change is large, the driving state change prompt information is output to the passengers, and when the difference is small, it is not,
  • which avoids disturbing the passengers and makes it easier for them to react sensitively to prompt information about relatively large changes.
  • in practice, the target unmanned vehicle can output the driving state change prompt information to the passengers in a variety of ways, and different driving states can call for different output modes.
  • as shown in FIG. 4 below, the technical process by which the target unmanned vehicle outputs the driving state change prompt information is described:
  • Step 401 if the target unmanned vehicle needs to be controlled to change the driving state, the target unmanned vehicle obtains the urgency level of the target unmanned vehicle to change the driving state.
  • the target unmanned vehicle may determine the urgency level according to the change strategy to be executed that it has determined.
  • the target unmanned vehicle can pre-set a mapping between different driving state change strategies and different urgency levels, so that it can obtain the urgency level of its driving state change according to its driving state change strategy and the above mapping. In practice, a higher urgency level indicates that executing the change strategy is more important, and a lower level indicates that it is less important.
  • Step 402 the target unmanned vehicle determines an output strategy of the prompt information for changing the driving state according to the emergency level, and outputs the prompt information for changing the driving state according to the output strategy.
  • the process by which the target unmanned vehicle determines the output strategy of the driving state change prompt information according to the urgency level may be, for example:
  • for a change strategy with a low urgency level, the output strategy may be to first output the driving state change prompt information and then execute the change strategy;
  • for a change strategy with a high urgency level, the output strategy may be to output the driving state change prompt information while executing the change strategy;
  • for a change strategy with the highest urgency level, the output strategy may be to concentrate all control capabilities on executing the change strategy and not output the driving state change prompt information.
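  • A sketch of the urgency-to-output-strategy mapping described above. The three level names, the example strategy-to-level mapping, and the use of a background thread for the "prompt while executing" case are assumptions used for illustration.

```python
import threading
from enum import Enum

class Urgency(Enum):
    LOW = 1       # prompt first, then execute the change strategy
    HIGH = 2      # prompt while executing the change strategy
    CRITICAL = 3  # execute only; do not output the prompt

# Hypothetical mapping from change strategies to urgency levels.
URGENCY_OF_STRATEGY = {
    "switch_lane": Urgency.LOW,
    "decelerate_and_stop": Urgency.HIGH,
    "emergency_brake": Urgency.CRITICAL,
}

def handle_state_change(strategy: str, prompt, execute):
    """Output the prompt and execute the change strategy according to the urgency level."""
    level = URGENCY_OF_STRATEGY.get(strategy, Urgency.LOW)
    if level is Urgency.LOW:
        prompt()                                 # output the prompt first ...
        execute(strategy)                        # ... then change the driving state
    elif level is Urgency.HIGH:
        threading.Thread(target=prompt).start()  # prompt while the strategy executes
        execute(strategy)
    else:
        execute(strategy)                        # no prompt; all control capability on the change
```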
  • the computer system of the target unmanned vehicle may be connected to a CAN (Controller Area Network) bus and, via the CAN bus, connected to an audio device and/or the target display device.
  • when the urgency level of the driving state change is high, the computer system of the target unmanned vehicle can send a signal to the audio device and/or the target display device over the CAN bus,
  • so as to output the driving state change prompt information to the passengers through the audio device and/or the target display device.
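  • Where the prompt signal is pushed to the in-cabin devices over the CAN bus as described above, a minimal sketch using the python-can library might look as follows. The channel name, interface type, and arbitration ID are hypothetical and depend entirely on the vehicle's own CAN configuration; this is not an interface defined by the specification.

```python
import can  # python-can

def send_prompt_over_can(payload: bytes,
                         channel: str = "can0",
                         arbitration_id: int = 0x321) -> None:
    """Send a (hypothetical) driving-state-change prompt frame to the in-cabin devices."""
    with can.Bus(channel=channel, interface="socketcan") as bus:
        msg = can.Message(arbitration_id=arbitration_id,
                          data=payload[:8],      # a classic CAN frame carries at most 8 data bytes
                          is_extended_id=False)
        bus.send(msg)
```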
  • the output strategy for the driving state change prompt information is determined based on the urgency level of the driving state change, so as to both remind the passengers of the behavior of the target unmanned vehicle and change the driving state of the target unmanned vehicle.
  • in practice, when the target unmanned vehicle travels on roads in poor condition, such as rural dirt roads, construction-site roads, or village roads, it changes its driving state frequently, which may cause safety problems for the passengers. Based on this, the embodiment of this specification provides another method for reminding the behavior of an unmanned vehicle. As shown in FIG. 5, the method includes:
  • Step 501 the target unmanned vehicle obtains the road level of the road where the target unmanned vehicle is currently located.
  • roads in China are generally divided into different road levels, such as first-class, second-class, third-class, fourth-class, and fifth-class roads.
  • first-class roads are highways that connect important political, economic, and cultural centers, partly with grade-separated interchanges.
  • second-class roads are arterial highways connecting political and economic centers or large industrial and mining areas, or suburban roads with heavy traffic.
  • third-class roads are branch highways connecting cities at or above the county level.
  • fourth-class roads are branch highways connecting counties, towns, and townships.
  • fifth-class roads generally refer to small village and town roads. Roads of different levels have different road surface conditions.
  • the target unmanned vehicle can determine the road on which it is currently located based on a positioning system, and can then determine the road level of that road by querying an associated urban road information database.
  • alternatively, the target unmanned vehicle may acquire a road surface image of the road on which it is currently located and determine the road level based on an analysis of that image.
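  • A sketch of the road-level check described above. The road-class lookup is represented by a hypothetical map-database callable, the class ordering reflects the first-class-through-fifth-class convention mentioned above, and the target road level and prompt text are assumed configuration values.

```python
ROAD_CLASS_ORDER = {"class_1": 1, "class_2": 2, "class_3": 3, "class_4": 4, "class_5": 5}

def check_road_bumpiness(current_position, road_db, target_level: str = "class_3"):
    """Return road-bump prompt text when the current road is of a lower level than the target level."""
    # road_db.lookup() is a placeholder for querying the associated urban road information database.
    road_class = road_db.lookup(current_position)      # e.g. "class_5" for a village road
    # A larger number means a lower road level (class 5 is the lowest).
    if ROAD_CLASS_ORDER[road_class] > ROAD_CLASS_ORDER[target_level]:
        return "Road condition is poor; the driving state may change frequently. Please hold on."
    return None  # road surface is good enough; no bump prompt needed
```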
  • Step 502 when the road level is lower than the target road level, the target unmanned vehicle outputs road bump prompt information to the passengers in the target unmanned vehicle.
  • the target unmanned vehicle can compare the road level of the current road with the target road level. If the road level of the current road is lower than the target road level, the road surface condition of the current road is poor
  • and driving will be relatively bumpy; in this case, the number of driving state changes of the target unmanned vehicle will be greater than a preset number-of-times threshold.
  • to help the passengers prepare safety protection in advance, the target unmanned vehicle can output road bump prompt information to them.
  • the target unmanned vehicle can output the road bump prompt information to the passengers by voice broadcast and/or by image display.
  • if the road level of the current road is higher than the target road level, the road surface condition is good and driving will not be bumpy, so there is no need to output road bump prompt information to the passengers.
  • in this way, when the road surface condition of the current road is poor, road bump prompt information is output to the passengers, so that the passengers can prepare safety protection in advance and safety problems caused by frequent driving state changes can be avoided.
  • Step 601 in the process of controlling the target unmanned vehicle to change the driving state, obtain the physiological parameters of the passengers in the target unmanned vehicle collected by the wearable device that establishes the short-range communication connection with the target unmanned vehicle.
  • the wearable device is, for example, an electronic watch, a smart bracelet, and the like.
  • the wearable device can establish a short-range communication connection with the target unmanned vehicle.
  • the target unmanned vehicle may acquire the physiological parameters of the passengers in the target unmanned vehicle collected by the wearable device based on the user's operation instruction.
  • the physiological parameters are used to represent the health status of the passengers in the target unmanned vehicle.
  • the target unmanned vehicle may obtain physiological parameters of passengers in the target unmanned vehicle collected by the wearable device after establishing a short-range communication connection with the wearable device.
  • Step 602 when the target unmanned vehicle detects that the physiological parameter does not meet the preset physiological parameter range, it outputs risk prompt information to the passengers in the target unmanned vehicle.
  • the target unmanned vehicle can detect whether the physiological parameters conform to the preset physiological parameter range. If so, it means the passenger's health is normal. If not, it means that the passenger's health status is abnormal.
  • the target unmanned vehicle can output risk prompt information to the passengers, and the risk prompt information is used to remind the passengers in the target unmanned vehicle that there is a health risk.
  • when the target unmanned vehicle detects that the physiological parameters do not fall within the preset physiological parameter range, it can also output health risk processing strategies to the passengers in the target unmanned vehicle; these strategies are used to let the passengers select a desired target strategy. The target strategy may be, for example, going to a hospital, calling an emergency number, or driving normally. The passengers can choose the target strategy from the health risk processing strategies, and after receiving the passenger's selection instruction, the target unmanned vehicle executes the selected target strategy according to that instruction. In this way, the target unmanned vehicle can handle a passenger's emergency to ensure the passenger's safety.
  • by acquiring the passengers' physiological parameters, the impact of the driving state change on the passengers can be known, and when a passenger's health status is determined to be abnormal, risk prompt information is output to the passengers. This can better protect the safety of the passengers.
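  • A sketch of the physiological-parameter check and the follow-up strategy selection described above. The parameter names, the preset ranges, and the strategy names are illustrative assumptions; the wearable-device reading and the passenger-selection callback are placeholders.

```python
# Hypothetical preset physiological parameter ranges: (minimum, maximum).
PRESET_RANGES = {
    "heart_rate_bpm": (50, 110),
    "blood_oxygen_pct": (94, 100),
}

HEALTH_RISK_STRATEGIES = ["go_to_hospital", "call_emergency_number", "continue_driving"]

def check_passenger_health(sample: dict, select_strategy):
    """Check wearable-device readings against the preset ranges and react to abnormal values.

    sample: e.g. {"heart_rate_bpm": 128, "blood_oxygen_pct": 97}
    select_strategy: callable that presents HEALTH_RISK_STRATEGIES to the passenger
                     and returns the chosen one.
    """
    abnormal = [name for name, value in sample.items()
                if name in PRESET_RANGES
                and not (PRESET_RANGES[name][0] <= value <= PRESET_RANGES[name][1])]
    if not abnormal:
        return None  # health status is normal; no risk prompt needed
    # Output the risk prompt, then let the passenger pick a health-risk processing strategy.
    print(f"Risk prompt: abnormal readings detected for {abnormal}")
    return select_strategy(HEALTH_RISK_STRATEGIES)
```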
  • although the steps in the flowcharts of FIG. 2 to FIG. 6 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 2 to FIG. 6 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
  • an unmanned vehicle behavior reminder device 700 including: an acquisition module 701, a determination module 702 and a reminder module 703, wherein:
  • the acquisition module 701 is used to acquire the information of the traffic participants within the preset range around the target unmanned vehicle, and the information of the traffic participants is used to indicate the driving state of the traffic participants;
  • a determination module 702 configured to determine whether it is necessary to control the target unmanned vehicle to change the driving state according to the information of the traffic participants;
  • the reminder module 703 is used to output the driving state change prompt information to the passengers in the target unmanned vehicle if it is necessary to control the target unmanned vehicle to change the driving state, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle.
  • the reminder module 703 is specifically used for:
  • the target display device is controlled to display the traffic participants that cause the driving state of the target unmanned vehicle to change, and the target display device is a vehicle-mounted display device or a terminal device that establishes a short-range communication connection with the target unmanned vehicle.
  • the reminder module 703 is specifically used for:
  • the target display device is controlled to display, in the first traffic background image, a position frame used to mark the traffic participant that causes the change of the driving state of the target unmanned vehicle, the first traffic background image including multiple traffic participants within a preset range around the target unmanned vehicle.
  • the reminder module 703 is specifically used for:
  • the target display device is controlled to highlight, in the second traffic background image, the image of the traffic participant that causes the driving state of the target unmanned vehicle to change, the second traffic background image including multiple traffic participants within a preset range around the target unmanned vehicle.
  • the reminder module 703 is specifically used for:
  • if it is necessary to control the target unmanned vehicle to change the driving state, whether the difference between the changed driving state and the driving state before the change is greater than a first difference threshold is determined; if the difference is greater than the first difference threshold, the driving state change prompt information is output to the passengers in the target unmanned vehicle.
  • the reminder module 703 is specifically used for:
  • if it is necessary to control the target unmanned vehicle to change the driving state, the urgency level of the driving state change is obtained, an output strategy for the driving state change prompt information is determined according to the urgency level, and the driving state change prompt information is output according to that output strategy.
  • the reminder module 703 is specifically used for:
  • the road level of the road on which the target unmanned vehicle is currently located is obtained, and when the road level is lower than the target road level, road bump prompt information is output to the passengers in the target unmanned vehicle.
  • the road bump prompt information is used to indicate that the number of pending driving state changes of the target unmanned vehicle is greater than the preset number-of-times threshold.
  • the reminder module 703 is specifically used for:
  • in the process of controlling the target unmanned vehicle to change the driving state, the physiological parameters of the passengers in the target unmanned vehicle collected by the wearable device that has established a short-range communication connection with the target unmanned vehicle are obtained, the physiological parameters being used to indicate the health status of the passengers in the target unmanned vehicle;
  • when it is detected that the physiological parameters do not fall within the preset physiological parameter range, risk prompt information is output to the passengers in the target unmanned vehicle, the risk prompt information being used to remind the passengers in the target unmanned vehicle that there is a health risk.
  • Each module in the above-mentioned unmanned vehicle behavior reminder device may be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the unmanned vehicle in the form of hardware, or can be stored in the memory of the unmanned vehicle in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • FIG. 8 is a block diagram of an unmanned vehicle 800 according to an exemplary embodiment.
  • the unmanned vehicle 800 includes a processing component 801, a storage component 802, and a communication component 803, wherein the storage component 802 stores a computer program or instructions that run on the processor.
  • the processing component 801 generally controls the overall operation of the unmanned vehicle 800, and the processing component 801 may include one or more processors to execute instructions to perform all or part of the steps of the above-described methods. Additionally, processing component 801 may include one or more modules to facilitate interaction between processing component 801 and other components.
  • Storage component 802 is configured to store various types of data to support operation of unmanned vehicle 800 . Examples of such data include instructions for any application or method of operation on the unmanned vehicle 800 .
  • Storage component 802 may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • the communication component 803 is configured to facilitate communication between the unmanned vehicle 800 and other terminal devices through short-range communication, and communication between the unmanned vehicle 800 and other unmanned vehicles through wireless communication.
  • the unmanned vehicle 800 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G or 5G or a combination thereof.
  • the communication component 803 receives broadcast signals or broadcast related information from an external broadcast management system via a Bluetooth scan channel.
  • the communication component 803 can also receive V2X information from the roadside device via the V2X device.
  • the communication component 803 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • unmanned vehicle 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above-mentioned method for reminding the behavior of the unmanned vehicle.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as a storage component 802 including instructions executable by the processing component 801 of the unmanned vehicle 800 to accomplish the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Traffic Control Systems (AREA)

Abstract

An unmanned vehicle behavior reminding method and apparatus, an unmanned vehicle, and a storage medium, relating to the technical field of unmanned vehicles. The unmanned vehicle behavior reminding method acquires information of traffic participants within a preset range around a target unmanned vehicle, and then determines, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state; if the target unmanned vehicle needs to be controlled to change its driving state, driving state change prompt information is output to passengers in the target unmanned vehicle, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle. When the target unmanned vehicle needs to change its driving state, it outputs the driving state change prompt information to the passengers in the vehicle, so that the passengers can understand the driving state change that is about to occur and can prepare safety protection in advance, avoiding safety problems.

Description

Unmanned vehicle behavior reminding method and apparatus, unmanned vehicle, and storage medium
Priority claim
This application claims priority to Chinese application CN202011616194.X filed on December 30, 2020, the entire contents of which are incorporated herein by reference.
Technical field
This specification relates to the technical field of unmanned vehicles, and in particular to an unmanned vehicle behavior reminding method and apparatus, an unmanned vehicle, and a storage medium.
Background
An unmanned vehicle is a type of intelligent vehicle that relies primarily on an in-vehicle, computer-based intelligent driving system to achieve driverless operation.
While driving, an unmanned vehicle needs to determine a vehicle driving strategy based on the traffic environment in which it is located, and changes the vehicle's driving state by executing that driving strategy. Changing the driving state means, for example, that the unmanned vehicle may switch lanes, turn, start, accelerate, decelerate, and so on.
At present, passengers cannot learn of changes in an unmanned vehicle's driving state in advance. As a result, when the driving state changes, passengers cannot prepare safety protection beforehand, and safety problems easily arise.
Summary
The embodiments of this specification provide an unmanned vehicle behavior reminding method and apparatus, an unmanned vehicle, and a storage medium, which can remind passengers of a pending driving state change of the unmanned vehicle so that the passengers can prepare safety protection in advance.
In a first aspect, an embodiment of this specification provides an unmanned vehicle behavior reminding method, the method including:
acquiring information of traffic participants within a preset range around a target unmanned vehicle, the information of the traffic participants being used to indicate the driving state of the traffic participants;
determining, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state; and
if the target unmanned vehicle needs to be controlled to change its driving state, outputting driving state change prompt information to passengers in the target unmanned vehicle, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle.
In a second aspect, an embodiment of this specification provides an unmanned vehicle behavior reminding apparatus, the apparatus including:
an acquisition module, configured to acquire information of traffic participants within a preset range around a target unmanned vehicle, the information of the traffic participants being used to indicate the driving state of the traffic participants;
a determination module, configured to determine, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state; and
a reminder module, configured to output driving state change prompt information to passengers in the target unmanned vehicle if the target unmanned vehicle needs to be controlled to change its driving state, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle.
In a third aspect, an embodiment of this specification provides an unmanned vehicle, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the steps of the method of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of this specification provides a storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method of the first aspect.
The unmanned vehicle behavior reminding method and apparatus, unmanned vehicle, and storage medium provided by the embodiments of this specification can remind passengers of a pending driving state change of the unmanned vehicle so that the passengers can prepare safety protection in advance. The method acquires information of traffic participants within a preset range around a target unmanned vehicle and then determines, according to that information, whether the target unmanned vehicle needs to be controlled to change its driving state; if so, driving state change prompt information is output to passengers in the target unmanned vehicle, the prompt information being used to indicate the pending driving state change. Thus, in the embodiments of this specification, when the target unmanned vehicle needs to change its driving state, it outputs the driving state change prompt information to the passengers in the vehicle, so that the passengers can understand the driving state change that is about to occur, prepare safety protection in advance, and avoid safety problems.
Brief description of the drawings
FIG. 1 is an application environment diagram of an unmanned vehicle behavior reminding method in an embodiment of this specification;
FIG. 2 is a schematic flowchart of an unmanned vehicle behavior reminding method in an embodiment of this specification;
FIG. 3 is a schematic flowchart of an unmanned vehicle behavior reminding method in another embodiment of this specification;
FIG. 4 is a schematic flowchart of a method for a target unmanned vehicle to output driving state change prompt information in an embodiment of this specification;
FIG. 5 is a schematic flowchart of an unmanned vehicle behavior reminding method in another embodiment of this specification;
FIG. 6 is a schematic flowchart of an unmanned vehicle behavior reminding method in another embodiment of this specification;
FIG. 7 is a structural block diagram of an unmanned vehicle behavior reminding apparatus in an embodiment of this specification;
FIG. 8 is an internal structure diagram of an unmanned vehicle in an embodiment of this specification.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of this specification clearer, the embodiments of this specification are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the embodiments of this specification and are not intended to limit them.
First, before the technical solutions of the embodiments of this specification are described in detail, the technical background or technical evolution on which the embodiments of this specification are based is introduced.
An unmanned vehicle is a type of intelligent vehicle that relies primarily on an in-vehicle, computer-based intelligent driving system to achieve driverless operation. At present, as unmanned vehicle technology matures, unmanned vehicles are gradually entering people's daily lives.
At present, unmanned vehicles can be divided into two forms. One is the narrowly defined unmanned vehicle, i.e., a highly intelligent vehicle with no cab, steering wheel, or other driving equipment. The other is the broadly defined unmanned vehicle, which may be a vehicle in a fully automatic driving mode. In the prior art, there are vehicles with a variety of selectable driving modes, one of which is a fully automatic driving mode. When such a vehicle is in the fully automatic driving mode, its driving relies entirely on the on-board computer system and requires no driver operation, so a vehicle in the fully automatic driving mode can be called an unmanned vehicle.
For both forms of unmanned vehicle, because the driving operations are completed under the control of a computer system, passengers cannot learn in advance how the unmanned vehicle will change its driving state, nor the reason for the change. As a result, passengers cannot take protective measures in advance against the safety risks brought by driving state changes such as hard braking, hard acceleration, and sharp turns, and safety problems easily arise. In view of this, how to improve the safety of unmanned vehicles has become a problem to be solved urgently.
The technical solutions involved in the embodiments of this specification are introduced below in combination with the scenarios to which the embodiments of this specification are applied.
Referring to FIG. 1, the implementation environment may include an unmanned vehicle, which may be a highly intelligent unmanned vehicle with no cab, steering wheel, or other driving equipment, or an ordinary vehicle in a fully automatic driving mode. The unmanned vehicle includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the unmanned vehicle is used to provide computing and control capabilities. The memory of the unmanned vehicle includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the unmanned vehicle is used to communicate with an external terminal device through a network connection, or the unmanned vehicle is connected to an external terminal device through a short-range communication technology. The computer program, when executed by the processor, implements an unmanned vehicle behavior reminding method. The display screen of the unmanned vehicle may be a liquid crystal display or an electronic ink display, and the input device of the unmanned vehicle may be a touch layer covering the display screen, or a button, trackball, or touchpad provided in the unmanned vehicle.
Those skilled in the art can understand that the structure shown in FIG. 1 is only a block diagram of part of the structure related to the solution of this specification and does not constitute a limitation on the unmanned vehicle to which the solution of this specification is applied. A specific unmanned vehicle may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in FIG. 2, an unmanned vehicle behavior reminding method is provided. The method is applied to the unmanned vehicle shown in FIG. 1 and includes the following steps:
Step 201: the target unmanned vehicle acquires information of traffic participants within a preset range around the target unmanned vehicle.
In the embodiments of this specification, traffic participants may include vehicles, pedestrians, non-motor vehicles, animals, traffic lights, ground roads (for example, lane lines, roadside buildings, green belts, etc.), and so on. The information of a traffic participant is used to indicate the driving state of the traffic participant. Optionally, the information of a traffic participant may include the participant's position, speed, and movement direction, the color of a traffic light, traffic markings, and other information; the embodiments of this specification do not list these exhaustively.
In the embodiments of this specification, the target unmanned vehicle can acquire the information of traffic participants within the preset range around it in two ways. In one, the target unmanned vehicle acquires the information of traffic participants within the preset range around it based on its own on-board radar system. In the other, the target unmanned vehicle exchanges information with a roadside device based on an on-board V2X device, so as to obtain from the roadside device the information of traffic participants within a preset range around the target unmanned vehicle's location.
Step 202: the target unmanned vehicle determines, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state.
In the embodiments of this specification, after acquiring the information of the traffic participants, the target unmanned vehicle can determine its driving strategy according to that information and pre-set decision conditions. The driving strategy is the change strategy for the driving state of the target unmanned vehicle.
For example, the target unmanned vehicle can obtain the speed of the traffic participant ahead of it; if that speed is less than a speed threshold, the target unmanned vehicle can determine that its driving state change strategy is to switch lanes.
If the speed of the traffic participant ahead of it is greater than the speed threshold, the target unmanned vehicle can determine that its driving state change strategy is to keep the driving state unchanged.
For another example, the target unmanned vehicle can obtain traffic light information and, when the traffic light is red, determine that its driving state change strategy is to decelerate and stop.
The embodiments of this specification do not exhaustively describe the process by which the target unmanned vehicle determines its driving state change strategy according to the information of the traffic participants.
Step 203: if the target unmanned vehicle needs to be controlled to change its driving state, the target unmanned vehicle outputs driving state change prompt information to passengers in the target unmanned vehicle.
In the embodiments of this specification, a passenger in the target unmanned vehicle (hereinafter simply a passenger) may be a person riding in the target unmanned vehicle, or may be the driver of a vehicle in a fully automatic driving mode.
When it is determined that the target unmanned vehicle needs to be controlled to change its driving state, the target unmanned vehicle may output the driving state change prompt information to the passengers by voice broadcast and/or by image display. The driving state change prompt information is used to indicate the pending driving state change of the target unmanned vehicle.
Optionally, in the embodiments of this specification, the driving state change prompt information may include audio prompt information and image prompt information. The audio prompt information is used to prompt the passengers by voice about the pending driving state of the target unmanned vehicle; the image prompt information is used to prompt the passengers with image information about the traffic participant that causes the driving state of the target unmanned vehicle to change.
The unmanned vehicle behavior reminding method provided by the embodiments of this specification acquires information of traffic participants within a preset range around the target unmanned vehicle and then determines, according to that information, whether the target unmanned vehicle needs to be controlled to change its driving state; if so, driving state change prompt information is output to the passengers, the prompt information being used to indicate the pending driving state change of the target unmanned vehicle. Thus, in the embodiments of this specification, when the target unmanned vehicle needs to change its driving state, it outputs the driving state change prompt information to the passengers, so that the passengers can understand the coming driving state change, prepare safety protection in advance, and avoid safety problems.
In one embodiment of this specification, the technical process by which the target unmanned vehicle outputs the driving state change prompt information to the passengers in the target unmanned vehicle may include the following:
the target unmanned vehicle may control a target display device to display the traffic participant that causes the driving state of the target unmanned vehicle to change.
The target display device is a vehicle-mounted display device or a terminal device that has established a short-range communication connection with the target unmanned vehicle. Optionally, the terminal device may be a device with a display screen, such as a mobile phone, a computer, or a wearable device. When a passenger rides in the target unmanned vehicle, the terminal device held by the passenger can establish a short-range communication connection with the target unmanned vehicle.
In the embodiments of this specification, in the process of determining whether the driving state needs to be changed, the target unmanned vehicle can determine the traffic participant that causes the driving state to change. On this basis, the target unmanned vehicle can display that traffic participant on the target display device, so that the passengers can learn the reason why the driving state of the target unmanned vehicle changes.
Optionally, in the embodiments of this specification, the target unmanned vehicle can mark the traffic participant that causes the driving state change on the screen displayed by the target display device.
In an optional implementation, the process by which the target unmanned vehicle controls the target display device to display the traffic participant that causes the driving state change may include:
controlling the target display device to display, in a first traffic background image, a position frame used to mark the traffic participant that causes the driving state of the target unmanned vehicle to change.
The target unmanned vehicle can acquire the first traffic background image, which includes multiple traffic participants within the preset range around the target unmanned vehicle. In the embodiments of this specification, the target unmanned vehicle can present the dynamic positions of the multiple traffic participants in the first traffic background image to the passengers on the target display device. At the same time, the target unmanned vehicle can mark, in the first traffic background image, the position frame of the traffic participant that causes the driving state change, and control the target display device to display that position frame, so that the passengers can learn the reason for the change of the driving state of the target unmanned vehicle and can relate that reason to the actual scene around the target unmanned vehicle.
In another optional implementation, the process by which the target unmanned vehicle controls the target display device to display the traffic participant that causes the driving state change may include:
controlling the target display device to highlight, in a second traffic background image, the image of the traffic participant that causes the driving state of the target unmanned vehicle to change.
The target unmanned vehicle can acquire the second traffic background image, which includes multiple traffic participants within the preset range around the target unmanned vehicle. In the embodiments of this specification, the target unmanned vehicle can highlight, in the second traffic background image, the traffic participant that causes the driving state change, and then control the target display device to display the marked second traffic background image. In this way, the passengers can intuitively learn the reason for the change of the driving state of the target unmanned vehicle and relate it to the actual scene around the target unmanned vehicle.
In practice, the target unmanned vehicle needs to change its driving state constantly according to its surroundings while driving. If the driving state change prompt information were output to the passengers for every change, the large volume of prompt information would disturb the passengers. Moreover, a large amount of prompt information would fatigue the passengers, so they could no longer react sensitively to prompt information about relatively large changes and therefore could not prepare safety protection in time.
Based on this, another unmanned vehicle behavior reminding method is proposed in the embodiments of this specification. As shown in FIG. 3, the method includes:
Step 301: the target unmanned vehicle acquires information of traffic participants within a preset range around the target unmanned vehicle.
Step 302: the target unmanned vehicle determines, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state.
Step 303: if the target unmanned vehicle needs to be controlled to change its driving state, the target unmanned vehicle determines whether the difference between the changed driving state and the driving state before the change is greater than a first difference threshold.
In the embodiments of this specification, the driving state before the change is the current driving state of the target unmanned vehicle. The changed driving state is the driving state of the target unmanned vehicle after the driving state change strategy has been executed.
When the target unmanned vehicle determines that it needs to change its driving state, it can determine the change strategy to be executed and estimate the changed driving state based on that change strategy. Then, the target unmanned vehicle can calculate the difference between the driving state before the change and the changed driving state.
After obtaining the difference between the driving state before the change and the changed driving state, the target unmanned vehicle can compare the difference with the first difference threshold to determine whether the difference between the changed driving state and the driving state before the change is greater than the first difference threshold.
Optionally, in the embodiments of this specification, the driving state may include a speed state. On this basis, the target unmanned vehicle can obtain its speed before the change and can determine its acceleration according to the change strategy to be executed; it then calculates the changed speed based on the acceleration. Finally, the target unmanned vehicle can calculate the speed difference between its speed before the change and its speed after the change, and compare the speed difference with the first difference threshold to determine whether the difference between the changed driving state and the driving state before the change is greater than the first difference threshold.
Optionally, in the embodiments of this specification, the driving state may include a driving direction state, where the driving direction may be represented by the clockwise angle between the driving direction of the target unmanned vehicle and true north. On this basis, the target unmanned vehicle can determine its steering angle according to the change strategy to be executed; the steering angle is the amount by which the driving direction is about to change, i.e., the difference between the driving direction before the turn and the driving direction after the turn. In the embodiments of this specification, the target unmanned vehicle can compare the steering angle with the first difference threshold to determine whether the difference between the changed driving state and the driving state before the change is greater than the first difference threshold.
Step 304: if the difference between the changed driving state and the driving state before the change is greater than the first difference threshold, the target unmanned vehicle outputs driving state change prompt information to passengers in the target unmanned vehicle.
If the difference between the changed driving state and the driving state before the change is greater than the first difference threshold, the driving state change is relatively large and may pose a safety risk to the passengers; therefore, the target unmanned vehicle outputs the driving state change prompt information to the passengers.
If the difference between the changed driving state and the driving state before the change is less than or equal to the first difference threshold, the driving state change is relatively small and will not pose a safety risk to the passengers. In this case, to avoid disturbing the passengers, the target unmanned vehicle may not output the driving state change prompt information to them.
In the embodiments of this specification, when the difference between the changed driving state and the driving state before the change is large, the driving state change prompt information is output to the passengers, and when the difference is small, it is not. This avoids disturbing the passengers and makes it easier for them to react sensitively to prompt information about relatively large changes.
In practice, the target unmanned vehicle can output the driving state change prompt information to the passengers in a variety of ways, and different driving states can call for different output modes. The technical process by which the target unmanned vehicle outputs the driving state change prompt information is described below with reference to FIG. 4:
Step 401: if the target unmanned vehicle needs to be controlled to change its driving state, the target unmanned vehicle obtains the urgency level of the driving state change.
In the embodiments of this specification, the target unmanned vehicle may determine the urgency level according to the change strategy to be executed that it has determined.
The target unmanned vehicle may pre-set a mapping between different driving state change strategies and different urgency levels, so that the target unmanned vehicle can obtain the urgency level of its driving state change according to its driving state change strategy and the above mapping.
In practice, a higher urgency level indicates that executing the driving state change strategy is more important, and a lower urgency level indicates that executing the change strategy is less important.
Step 402: the target unmanned vehicle determines an output strategy for the driving state change prompt information according to the urgency level and outputs the prompt information according to the output strategy.
In the embodiments of this specification, the process by which the target unmanned vehicle determines the output strategy for the driving state change prompt information according to the urgency level may be, for example:
for a change strategy to be executed with a low urgency level, the output strategy may be to first output the driving state change prompt information and then execute the change strategy;
for a change strategy to be executed with a high urgency level, the output strategy may be to output the driving state change prompt information while executing the change strategy;
for a change strategy to be executed with the highest urgency level, the output strategy may be to concentrate all control capabilities on executing the change strategy and not output the driving state change prompt information.
Optionally, in the embodiments of this specification, the computer system of the target unmanned vehicle may be connected to a CAN (Controller Area Network) bus and, via the CAN bus, connected to an audio device and/or the target display device. When the urgency level of the driving state change is high, the computer system of the target unmanned vehicle can send a signal to the audio device and/or the target display device over the CAN bus, so as to output the driving state change prompt information to the passengers through the audio device and/or the target display device.
In the embodiments of this specification, the output strategy for the driving state change prompt information is determined based on the urgency level of the driving state change, so as to both remind the passengers of the behavior of the target unmanned vehicle and change the driving state of the target unmanned vehicle.
In practice, when the target unmanned vehicle travels on roads in poor condition, such as rural dirt roads, construction-site roads, or village roads, it changes its driving state frequently, and frequent changes may cause safety problems for the passengers. Based on this, the embodiments of this specification provide another unmanned vehicle behavior reminding method. As shown in FIG. 5, the method includes:
Step 501: the target unmanned vehicle obtains the road level of the road on which it is currently located.
At present, roads in China are generally divided into different road levels, for example first-class, second-class, third-class, fourth-class, and fifth-class roads. First-class roads are highways connecting important political, economic, and cultural centers, partly with grade-separated interchanges. Second-class roads are arterial highways connecting political and economic centers or large industrial and mining areas, or suburban roads with heavy traffic. Third-class roads are branch highways connecting cities at or above the county level. Fourth-class roads are branch highways connecting counties, towns, and townships. Fifth-class roads generally refer to small village and town roads. Roads of different levels have different road surface conditions.
In the embodiments of this specification, the target unmanned vehicle can determine the road on which it is currently located based on a positioning system, and can then determine the road level of that road by querying an associated urban road information database.
Optionally, the target unmanned vehicle may acquire a road surface image of the road on which it is currently located and determine the road level based on an analysis of the road surface image.
Step 502: when the road level is lower than a target road level, the target unmanned vehicle outputs road bump prompt information to passengers in the target unmanned vehicle.
In the embodiments of this specification, the target unmanned vehicle can compare the road level of the current road with the target road level. If the road level of the current road is lower than the target road level, the road surface condition is poor and driving will be relatively bumpy; in this case, the number of driving state changes of the target unmanned vehicle will be greater than a preset number-of-times threshold. To help the passengers prepare safety protection in advance, the target unmanned vehicle can output road bump prompt information to them.
The target unmanned vehicle can output the road bump prompt information to the passengers by voice broadcast and/or by image display.
If the road level of the current road is higher than the target road level, the road surface condition is good and driving will not be bumpy, so there is no need to output road bump prompt information to the passengers.
In the embodiments of this specification, when the road surface condition of the road on which the target unmanned vehicle is currently located is poor, road bump prompt information is output to the passengers, so that the passengers can prepare safety protection in advance and safety problems caused by frequent driving state changes can be avoided.
In practice, while the driving state of the target unmanned vehicle changes, the passengers perceive longitudinal and lateral changes in acceleration, and in such cases the passengers' bodies are affected by the change of the driving state. For example, in emergencies such as hard braking or an emergency lane change, passengers may be moved quite violently inside the target unmanned vehicle, and such violent movement may have serious adverse consequences for some passengers. In the embodiments of this specification, as shown in FIG. 6, to fully protect the life and safety of the passengers, another unmanned vehicle behavior reminding method is proposed, which includes:
Step 601: in the process of controlling the target unmanned vehicle to change its driving state, acquire the physiological parameters of the passengers in the target unmanned vehicle collected by a wearable device that has established a short-range communication connection with the target unmanned vehicle.
In the embodiments of this specification, the wearable device is, for example, an electronic watch or a smart bracelet. When a passenger rides in the target unmanned vehicle, the wearable device can establish a short-range communication connection with the target unmanned vehicle.
Optionally, the target unmanned vehicle may acquire the physiological parameters of the passengers collected by the wearable device based on a user's operation instruction. The physiological parameters are used to indicate the health status of the passengers in the target unmanned vehicle.
Optionally, the target unmanned vehicle may acquire the physiological parameters of the passengers collected by the wearable device after establishing a short-range communication connection with the wearable device.
Step 602: when the target unmanned vehicle detects that the physiological parameters do not fall within a preset physiological parameter range, it outputs risk prompt information to the passengers in the target unmanned vehicle.
In the embodiments of this specification, after acquiring a passenger's physiological parameters, the target unmanned vehicle can detect whether the physiological parameters fall within the preset physiological parameter range. If they do, the passenger's health status is normal. If they do not, the passenger's health status is abnormal.
When it detects that the physiological parameters do not fall within the preset physiological parameter range, the target unmanned vehicle can output risk prompt information to the passengers; the risk prompt information is used to remind the passengers in the target unmanned vehicle that there is a health risk.
Optionally, in the embodiments of this specification, when the target unmanned vehicle detects that the physiological parameters do not fall within the preset physiological parameter range, it may also output health risk processing strategies to the passengers in the target unmanned vehicle; the health risk processing strategies are used to let the passengers select a desired target strategy. The target strategy may be, for example, going to a hospital, calling an emergency number, or driving normally. The passengers can select the target strategy from the health risk processing strategies, and after receiving the passenger's selection instruction, the target unmanned vehicle executes the selected target strategy according to the instruction. In this way, the target unmanned vehicle can handle a passenger's emergency to ensure the passenger's safety.
In the embodiments of this specification, the passengers' physiological parameters are acquired so that the impact of the driving state change on the passengers can be known, and when a passenger's health status is determined to be abnormal, risk prompt information is output to the passengers. This can better protect the safety of the passengers.
It should be understood that although the steps in the flowcharts of FIG. 2 to FIG. 6 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 2 to FIG. 6 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 7, an unmanned vehicle behavior reminding apparatus 700 is provided, including an acquisition module 701, a determination module 702, and a reminder module 703, where:
the acquisition module 701 is configured to acquire information of traffic participants within a preset range around a target unmanned vehicle, the information of the traffic participants being used to indicate the driving state of the traffic participants;
the determination module 702 is configured to determine, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state; and
the reminder module 703 is configured to output driving state change prompt information to passengers in the target unmanned vehicle if the target unmanned vehicle needs to be controlled to change its driving state, the driving state change prompt information being used to indicate the pending driving state change of the target unmanned vehicle.
In one of the embodiments, the reminder module 703 is specifically configured to:
control a target display device to display the traffic participant that causes the driving state of the target unmanned vehicle to change, the target display device being a vehicle-mounted display device or a terminal device that has established a short-range communication connection with the target unmanned vehicle.
In one of the embodiments, the reminder module 703 is specifically configured to:
control the target display device to display, in a first traffic background image, a position frame used to mark the traffic participant that causes the driving state of the target unmanned vehicle to change, the first traffic background image including multiple traffic participants within a preset range around the target unmanned vehicle.
In one of the embodiments, the reminder module 703 is specifically configured to:
control the target display device to highlight, in a second traffic background image, the image of the traffic participant that causes the driving state of the target unmanned vehicle to change, the second traffic background image including multiple traffic participants within a preset range around the target unmanned vehicle.
In one of the embodiments, the reminder module 703 is specifically configured to:
if the target unmanned vehicle needs to be controlled to change its driving state, determine whether the difference between the changed driving state and the driving state before the change is greater than a first difference threshold; and
if the difference between the changed driving state and the driving state before the change is greater than the first difference threshold, output the driving state change prompt information to the passengers in the target unmanned vehicle.
In one of the embodiments, the reminder module 703 is specifically configured to:
if the target unmanned vehicle needs to be controlled to change its driving state, obtain the urgency level of the driving state change of the target unmanned vehicle;
determine an output strategy for the driving state change prompt information according to the urgency level; and
output the driving state change prompt information according to the output strategy.
In one of the embodiments, the reminder module 703 is specifically configured to:
obtain the road level of the road on which the target unmanned vehicle is currently located; and
when the road level is lower than a target road level, output road bump prompt information to the passengers in the target unmanned vehicle, the road bump prompt information being used to indicate that the number of pending driving state changes of the target unmanned vehicle is greater than a preset number-of-times threshold.
In one of the embodiments, the reminder module 703 is specifically configured to:
in the process of controlling the target unmanned vehicle to change its driving state, acquire the physiological parameters of the passengers in the target unmanned vehicle collected by a wearable device that has established a short-range communication connection with the target unmanned vehicle, the physiological parameters being used to indicate the health status of the passengers in the target unmanned vehicle; and
when it is detected that the physiological parameters do not fall within a preset physiological parameter range, output risk prompt information to the passengers in the target unmanned vehicle, the risk prompt information being used to remind the passengers in the target unmanned vehicle that there is a health risk.
For the specific limitations of the unmanned vehicle behavior reminding apparatus, reference may be made to the limitations of the unmanned vehicle behavior reminding method above, which are not repeated here. Each module in the above unmanned vehicle behavior reminding apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of the processor in the unmanned vehicle in the form of hardware, or stored in the memory of the unmanned vehicle in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
FIG. 8 is a block diagram of an unmanned vehicle 800 according to an exemplary embodiment. The unmanned vehicle 800 includes a processing component 801, a storage component 802, and a communication component 803, where the storage component 802 stores a computer program or instructions that run on the processor.
The processing component 801 generally controls the overall operation of the unmanned vehicle 800 and may include one or more processors to execute instructions to complete all or part of the steps of the above methods. In addition, the processing component 801 may include one or more modules to facilitate interaction between the processing component 801 and other components.
The storage component 802 is configured to store various types of data to support operation of the unmanned vehicle 800. Examples of such data include instructions for any application or method operating on the unmanned vehicle 800. The storage component 802 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The communication component 803 is configured to facilitate short-range communication between the unmanned vehicle 800 and other terminal devices, as well as wireless communication between the unmanned vehicle 800 and other unmanned vehicles. The unmanned vehicle 800 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 803 receives broadcast signals or broadcast-related information from an external broadcast management system via a Bluetooth scanning channel. In an exemplary embodiment, the communication component 803 may also receive V2X information from a roadside device via a V2X device. In an exemplary embodiment, the communication component 803 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the unmanned vehicle 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above unmanned vehicle behavior reminding method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the storage component 802 including instructions, which can be executed by the processing component 801 of the unmanned vehicle 800 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program, and the computer program can be stored in a non-volatile computer-readable storage medium. When executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or another medium used in the embodiments provided in this specification may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope described in this specification.
The above embodiments express only several implementations of the embodiments of this specification, and their descriptions are relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the embodiments of this specification, and these all fall within the protection scope of the embodiments of this specification. Therefore, the protection scope of the patent of the embodiments of this specification shall be subject to the appended claims.

Claims (18)

  1. An unmanned vehicle behavior reminder method, characterized in that the method comprises:
    acquiring information of traffic participants within a preset range around a target unmanned vehicle, the information of the traffic participants being used to indicate the driving states of the traffic participants;
    determining, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state; and
    if the target unmanned vehicle needs to be controlled to change its driving state, outputting driving state change prompt information to a passenger in the target unmanned vehicle, the driving state change prompt information being used to indicate the upcoming driving state change of the target unmanned vehicle.
  2. The method according to claim 1, characterized in that the method further comprises:
    controlling a target display device to display the traffic participant that causes the change in the driving state of the target unmanned vehicle, the target display device being an in-vehicle display device or a terminal device that has established a short-range communication connection with the target unmanned vehicle.
  3. The method according to claim 2, characterized in that controlling the target display device to display the traffic participant that causes the change in the driving state of the target unmanned vehicle comprises:
    controlling the target display device to display, in a first traffic background image, a bounding box marking the traffic participant that causes the change in the driving state of the target unmanned vehicle, the first traffic background image comprising multiple traffic participants within the preset range around the target unmanned vehicle.
  4. The method according to claim 2, characterized in that controlling the target display device to display the traffic participant that causes the change in the driving state of the target unmanned vehicle comprises:
    controlling the target display device to highlight, in a second traffic background image, the image of the traffic participant that causes the change in the driving state of the target unmanned vehicle, the second traffic background image comprising multiple traffic participants within the preset range around the target unmanned vehicle.
  5. The method according to claim 1, characterized in that, if the target unmanned vehicle needs to be controlled to change its driving state, outputting driving state change prompt information to the passenger in the target unmanned vehicle comprises:
    if the target unmanned vehicle needs to be controlled to change its driving state, determining whether the difference between the changed driving state and the driving state before the change is greater than a first difference threshold; and
    if the difference between the changed driving state and the driving state before the change is greater than the first difference threshold, outputting the driving state change prompt information to the passenger in the target unmanned vehicle.
  6. The method according to claim 1, characterized in that, if the target unmanned vehicle needs to be controlled to change its driving state, outputting driving state change prompt information to the passenger in the target unmanned vehicle comprises:
    if the target unmanned vehicle needs to be controlled to change its driving state, acquiring an urgency level of the driving state change of the target unmanned vehicle;
    determining an output strategy for the driving state change prompt information according to the urgency level; and
    outputting the driving state change prompt information according to the output strategy.
  7. The method according to claim 1, characterized in that the method further comprises:
    acquiring the road grade of the road on which the target unmanned vehicle is currently located; and
    if the road grade is lower than a target road grade, outputting road bump prompt information to the passenger in the target unmanned vehicle, the road bump prompt information being used to indicate that the number of upcoming driving state changes of the target unmanned vehicle is greater than a preset count threshold.
  8. The method according to claim 1, characterized in that the method further comprises:
    in the process of controlling the target unmanned vehicle to change its driving state, acquiring physiological parameters of the passenger in the target unmanned vehicle collected by a wearable device that has established a short-range communication connection with the target unmanned vehicle, the physiological parameters being used to indicate the health condition of the passenger in the target unmanned vehicle; and
    when detecting that the physiological parameters do not fall within a preset physiological parameter range, outputting risk prompt information to the passenger in the target unmanned vehicle, the risk prompt information being used to remind the passenger in the target unmanned vehicle of a health risk.
  9. An unmanned vehicle behavior reminder apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire information of traffic participants within a preset range around a target unmanned vehicle, the information of the traffic participants being used to indicate the driving states of the traffic participants;
    a determination module, configured to determine, according to the information of the traffic participants, whether the target unmanned vehicle needs to be controlled to change its driving state; and
    a reminder module, configured to, if the target unmanned vehicle needs to be controlled to change its driving state, output driving state change prompt information to a passenger in the target unmanned vehicle, the driving state change prompt information being used to indicate the upcoming driving state change of the target unmanned vehicle.
  10. The apparatus according to claim 9, characterized in that the reminder module is specifically configured to:
    control a target display device to display the traffic participant that causes the change in the driving state of the target unmanned vehicle, the target display device being an in-vehicle display device or a terminal device that has established a short-range communication connection with the target unmanned vehicle.
  11. The apparatus according to claim 10, characterized in that the reminder module is specifically configured to:
    control the target display device to display, in a first traffic background image, a bounding box marking the traffic participant that causes the change in the driving state of the target unmanned vehicle, the first traffic background image comprising multiple traffic participants within the preset range around the target unmanned vehicle.
  12. The apparatus according to claim 10, characterized in that the reminder module is specifically configured to:
    control the target display device to highlight, in a second traffic background image, the image of the traffic participant that causes the change in the driving state of the target unmanned vehicle, the second traffic background image comprising multiple traffic participants within the preset range around the target unmanned vehicle.
  13. The apparatus according to claim 9, characterized in that the reminder module is specifically configured to:
    if the target unmanned vehicle needs to be controlled to change its driving state, determine whether the difference between the changed driving state and the driving state before the change is greater than a first difference threshold; and
    if the difference between the changed driving state and the driving state before the change is greater than the first difference threshold, output the driving state change prompt information to the passenger in the target unmanned vehicle.
  14. The apparatus according to claim 9, characterized in that the reminder module is specifically configured to:
    if the target unmanned vehicle needs to be controlled to change its driving state, acquire an urgency level of the driving state change of the target unmanned vehicle;
    determine an output strategy for the driving state change prompt information according to the urgency level; and
    output the driving state change prompt information according to the output strategy.
  15. The apparatus according to claim 9, characterized in that the reminder module is specifically configured to:
    acquire the road grade of the road on which the target unmanned vehicle is currently located; and
    if the road grade is lower than a target road grade, output road bump prompt information to the passenger in the target unmanned vehicle, the road bump prompt information being used to indicate that the number of upcoming driving state changes of the target unmanned vehicle is greater than a preset count threshold.
  16. The apparatus according to claim 9, characterized in that the reminder module is specifically configured to:
    in the process of controlling the target unmanned vehicle to change its driving state, acquire physiological parameters of the passenger in the target unmanned vehicle collected by a wearable device that has established a short-range communication connection with the target unmanned vehicle, the physiological parameters being used to indicate the health condition of the passenger in the target unmanned vehicle; and
    when detecting that the physiological parameters do not fall within a preset physiological parameter range, output risk prompt information to the passenger in the target unmanned vehicle, the risk prompt information being used to remind the passenger in the target unmanned vehicle of a health risk.
  17. An unmanned vehicle, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
  18. A storage medium having a computer program stored thereon, characterized in that, when executed by a processor, the computer program implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2021/123677 2020-12-30 2021-10-14 无人车行为提醒方法、装置、无人车和存储介质 WO2022142590A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011616194.XA CN112633223A (zh) 2020-12-30 2020-12-30 无人车行为提醒方法、装置、无人车和存储介质
CN202011616194.X 2020-12-30

Publications (1)

Publication Number Publication Date
WO2022142590A1 (zh)

Family

ID=75287086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123677 WO2022142590A1 (zh) 2020-12-30 2021-10-14 无人车行为提醒方法、装置、无人车和存储介质

Country Status (2)

Country Link
CN (1) CN112633223A (zh)
WO (1) WO2022142590A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633223A (zh) * 2020-12-30 2021-04-09 北京航迹科技有限公司 无人车行为提醒方法、装置、无人车和存储介质
CN113581072A (zh) * 2021-05-24 2021-11-02 北京汽车研究总院有限公司 车辆及其开门防撞方法、系统、装置及电子设备、介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110103871A (zh) * 2019-04-30 2019-08-09 浙江吉利控股集团有限公司 一种基于安全带的车辆预警方法、装置、设备及终端
US20200369293A1 (en) * 2019-05-20 2020-11-26 Hyundai Mobis Co., Ltd. Autonomous driving apparatus and method
CN110428518A (zh) * 2019-07-31 2019-11-08 百度在线网络技术(北京)有限公司 行程中状态的提示方法、装置和存储介质
CN110562264A (zh) * 2019-08-16 2019-12-13 武汉东湖大数据交易中心股份有限公司 面向无人驾驶的道路危险预测方法、装置、设备及介质
CN112633223A (zh) * 2020-12-30 2021-04-09 北京航迹科技有限公司 无人车行为提醒方法、装置、无人车和存储介质

Also Published As

Publication number Publication date
CN112633223A (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
CA3013157C (en) Multi-modal switching on a collision mitigation system
JP6659312B2 (ja) 自律的な乗客用の乗り物のためのコンピューティング装置、コンピュータにより実施される方法及びシステム
US10556541B2 (en) Vehicle periphery monitoring device and vehicle periphery monitoring system
JP6239144B2 (ja) 車載器、自動運転車両、自動運転支援システム、自動運転監視装置、道路管理装置及び自動運転情報収集装置
US10259457B2 (en) Traffic light anticipation
US11366477B2 (en) Information processing device, information processing method, and computer readable medium
US9809158B2 (en) External indicators and notifications for vehicles with autonomous capabilities
JP6648721B2 (ja) 支援装置、支援方法およびプログラム
WO2022142590A1 (zh) 无人车行为提醒方法、装置、无人车和存储介质
JP6283484B2 (ja) 検出されたレーダ信号に関連する情報の表示
DE102017113129A1 (de) Aufhebung des Autonomverhaltens bei Nutzung einer Rettungsgasse
US10996668B2 (en) Systems and methods for on-site recovery of autonomous vehicles
US20210237775A1 (en) Method and device for supporting an attentiveness and/or driving readiness of a driver during an automated driving operation of a vehicle
CN109997355A (zh) 信息提供系统、车辆用装置、信息提供程序
JPWO2018100619A1 (ja) 車両制御システム、車両制御方法、および車両制御プログラム
JP6305650B2 (ja) 自動運転装置及び自動運転方法
WO2018163472A1 (ja) モード切替制御装置、モード切替制御システム、モード切替制御方法およびプログラム
CN112601689B (zh) 车辆的行驶控制方法及行驶控制装置
AU2018324525A1 (en) Systems and methods for changing a destination of an autonomous vehicle in real-time
US11989018B2 (en) Remote operation device and remote operation method
US11960280B2 (en) Display control device and display control method
CN113401056B (zh) 显示控制装置、显示控制方法以及计算机可读取存储介质
JP2021021661A (ja) 遠隔操作システム、プログラム及び車両
US11887476B2 (en) Emergency service vehicle notification and acknowledgement
US11670174B2 (en) Reserved vehicle control method, reserved vehicle control device, and reserved vehicle control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21913335; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.10.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 21913335; Country of ref document: EP; Kind code of ref document: A1)