CN117841995A - Vehicle interaction method and device, vehicle and storage medium

Vehicle interaction method and device, vehicle and storage medium

Info

Publication number
CN117841995A
CN117841995A
Authority
CN
China
Prior art keywords
vehicle
information
target
running
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211206145.8A
Other languages
Chinese (zh)
Inventor
蔡娜
赵作霖
刘姝婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haomo Zhixing Technology Co Ltd
Original Assignee
Haomo Zhixing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haomo Zhixing Technology Co Ltd filed Critical Haomo Zhixing Technology Co Ltd
Priority to CN202211206145.8A
Publication of CN117841995A
Legal status: Pending

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application is applicable to the technical field of unmanned driving, and provides a vehicle interaction method and device, a vehicle, and a storage medium. The vehicle interaction method comprises the following steps: acquiring current running state information of a vehicle, driving intention information of the vehicle, and scene information of the vehicle; determining a target interaction strategy according to the running state information, the driving intention information and the scene information; and controlling the vehicle to interact according to the target interaction strategy. The vehicle interaction method provided by the embodiments of the application enables an unmanned vehicle to interact with pedestrians or other vehicles, improving the driving safety of the unmanned vehicle.

Description

Vehicle interaction method and device, vehicle and storage medium
Technical Field
The application belongs to the technical field of unmanned driving, and particularly relates to a vehicle interaction method, a vehicle interaction device, a vehicle and a storage medium.
Background
With the continuous development of unmanned-driving technology, unmanned vehicles of all kinds are being deployed more and more widely. However, an unmanned vehicle has no driver, so in scenarios where pedestrians and vehicles mix, or where there is no traffic light, it cannot interact with pedestrians or other vehicles through body language or speech. As a result, pedestrians or other vehicles cannot understand the intention of the unmanned vehicle, or the unmanned vehicle cannot understand the intention of the pedestrians or other vehicles and performs dangerous actions, which creates potential safety hazards while the unmanned vehicle is driving.
Disclosure of Invention
The embodiments of the application provide a vehicle interaction method and device, a vehicle, and a storage medium, which can solve the problem of potential safety hazards arising while an unmanned vehicle is driving because it cannot interact with pedestrians or other vehicles.
In a first aspect, an embodiment of the present application provides a vehicle interaction method, including: acquiring current running state information of a vehicle, driving intention information of the vehicle, and scene information of the vehicle;
determining a target interaction strategy according to the running state information, the driving intention information and the scene information;
and controlling the vehicle to interact according to the target interaction strategy.
In a possible implementation manner of the first aspect, the current running state information includes a running direction of the vehicle and a running speed of the vehicle, the driving intention information includes an intended steering angle of the vehicle and an intended running speed of the vehicle, and the scene information includes an image within a preset range around the vehicle.
In a possible implementation manner of the first aspect, the determining a target interaction strategy according to the running state information, the driving intention information and the scene information includes:
determining a predicted travel track of the vehicle according to the running state information and the driving intention information;
determining a target obstacle in the image according to the predicted travel track, and determining a predicted movement track of the target obstacle according to a plurality of images;
and determining the target interaction strategy according to the predicted travel track and the predicted movement track.
In a possible implementation manner of the first aspect, the determining the target interaction strategy according to the predicted travel track and the predicted movement track includes:
predicting whether there is a risk of collision between the target obstacle and the vehicle according to the predicted travel track and the predicted movement track;
when there is a risk of collision between the target obstacle and the vehicle, the determined target interaction strategy includes turning on a turn signal corresponding to the driving intention information, and/or turning on an audio device to play a voice corresponding to the driving intention information, and/or turning on an image display to display an image corresponding to the driving intention information.
In a possible implementation manner of the first aspect, the predicting whether there is a risk of collision between the target obstacle and the vehicle according to the predicted travel track and the predicted movement track includes:
determining that there is a risk of collision between the target obstacle and the vehicle when the target obstacle is a static obstacle and the target obstacle is located on the predicted travel track;
and determining that there is a risk of collision between the target obstacle and the vehicle when the target obstacle is a dynamic obstacle and the distance between the target obstacle and the vehicle is smaller than a first preset distance.
In a possible implementation manner of the first aspect, the controlling the vehicle to interact according to the target interaction strategy includes:
when the distance between the vehicle and the target obstacle is smaller than a second preset distance, controlling a turn signal corresponding to the driving intention information to turn on, and/or controlling an audio device to play a voice corresponding to the driving intention information, and/or controlling an image display to display an image corresponding to the driving intention information.
In a possible implementation manner of the first aspect, the vehicle interaction method further includes:
acquiring a destination of the vehicle;
when the vehicle reaches the destination, controlling an image display on the vehicle to display a corresponding image and/or controlling an audio device on the vehicle to play a corresponding voice.
In a second aspect, an embodiment of the present application provides a vehicle interaction device, including:
an acquisition module, configured to acquire current running state information of a vehicle, driving intention information of the vehicle, and scene information of the vehicle;
a strategy determining module, configured to determine a target interaction strategy according to the running state information, the driving intention information and the scene information;
and a control module, configured to control the vehicle to interact according to the target interaction strategy.
In a third aspect, an embodiment of the present application provides a vehicle, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of the first aspects when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements a method according to any one of the first aspects.
Compared with the prior art, the embodiments of the application have the following beneficial effects:
during the running of the unmanned vehicle, the current running state information of the vehicle, the driving intention information of the vehicle and the scene information of the vehicle are first acquired; a target interaction strategy is then determined according to the running state information, the driving intention information and the scene information; and finally the vehicle is controlled to interact with surrounding pedestrians or other vehicles through the target interaction strategy, so that the surrounding pedestrians and other vehicles can anticipate the vehicle's next action, collisions are avoided, and the running safety of the vehicle is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a vehicle interaction method according to an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle interaction method provided in another embodiment of the present application;
FIG. 3 is a flow chart of a vehicle interaction method provided in another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a vehicle interaction device provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted in context as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted in context to mean "upon determining" or "in response to determining" or "upon detecting [a described condition or event]" or "in response to detecting [a described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Fig. 1 shows a flowchart of a vehicle interaction method according to an embodiment of the present application. Referring to fig. 1, the vehicle interaction method includes steps S101 to S103.
Step S101, current running state information of the vehicle, driving intention information of the vehicle, and scene information of the vehicle are acquired.
Specifically, the current running state information of the vehicle includes a running direction and a running speed of the vehicle. The running direction of the vehicle may be obtained through an angle sensor or a navigation system on the vehicle, and the running speed of the vehicle may be obtained through a speed sensor mounted on the vehicle.
The driving intention information of the vehicle may include an intended steering angle of the vehicle and an intended running speed of the vehicle, which are obtained by a controller on the vehicle through comprehensive analysis of the current running state information of the vehicle and the road condition information around the vehicle.
The scene information of the vehicle includes images within a preset range around the vehicle, and these images contain pedestrian information, obstacle information, other-vehicle information, road condition information, traffic light information, and the like. Environmental information within the preset range can be acquired in real time through a surround-view camera mounted on the roof of the vehicle, using image recognition algorithms and artificial intelligence techniques. If the image acquisition device detects pedestrians or other vehicles within the preset range, their movement track information, such as position, behavior and speed, is analyzed. An interaction strategy between the current vehicle's action state and the surrounding environment is then formed according to the movement track information of the pedestrians and other vehicles, the speed information of the current vehicle, the current obstacle information, the current road condition information and the current traffic light information.
For example, a single surround-view camera may be mounted on the roof of the vehicle, or three cameras may be mounted at the front of the vehicle and on its two sides. For instance, an ultra-high-definition surround-view camera of model IMOUTP7S-4M2.5K may be selected.
It should be noted that, in low-visibility environments such as night, dense fog or dust, detection devices such as ultrasonic radar or lidar can be used for image and data acquisition, and the lighting of the vehicle can be turned on, thereby improving the visibility of the vehicle and the accuracy of data acquisition. Data acquisition may also be performed using millimeter-wave radar or vehicle identification sensing techniques. Vehicle identification sensing may track the positions of salient markers on other vehicles (their lamps and license plates) point by point, and thereby sense the speed and direction of other vehicles in the lane.
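To make the three kinds of information in step S101 concrete, the following is a minimal Python sketch of how they might be represented; the field names are illustrative assumptions, not terms from the application itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RunningState:
    heading_deg: float        # current running direction, e.g. from an angle sensor
    speed_mps: float          # current running speed, e.g. from a speed sensor

@dataclass
class DrivingIntention:
    steer_deg: float          # intended steering angle (e.g. +15 = turn left 15 degrees)
    target_speed_mps: float   # intended running speed

@dataclass
class SceneInfo:
    frames: List[object] = field(default_factory=list)        # images within the preset range
    frame_times_s: List[float] = field(default_factory=list)  # capture time of each image
```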
Step S102, determining a target interaction strategy according to the running state information, the driving intention information and the scene information.
Specifically, after the running state information, the driving intention information and the scene information of the vehicle are determined, the target interaction strategy can be determined by analyzing them; the vehicle can then interact with surrounding pedestrians or other vehicles according to the target interaction strategy, so that the pedestrians or other vehicles understand the running state of the vehicle, collisions are prevented, and the running safety of the vehicle is improved.
For example, as shown in fig. 2, step S102 may specifically include steps S1021 to S1023.
Step S1021, determining a predicted travel track of the vehicle based on the running state information and the driving intention information.
Specifically, the current running direction and running speed of the vehicle can be known from the current running state information, and the predicted travel track of the vehicle can be determined by combining this with the driving intention information of the vehicle (for example, turning left by 15 degrees, turning right by 30 degrees, accelerating, decelerating, parking, and the like).
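As a rough illustration of step S1021 (a sketch under assumed kinematics, not the application's actual planner), the current running state and the intended steering angle can be rolled forward into a short predicted travel track:

```python
import math
from typing import List, Tuple

def predict_travel_track(x: float, y: float, heading_deg: float,
                         speed_mps: float, steer_deg: float,
                         horizon_s: float = 3.0, dt: float = 0.1) -> List[Tuple[float, float]]:
    """Roll the current pose forward under the intended steering angle.

    Simplification: constant speed, with the intended turn spread evenly
    over the horizon. A real planner would use a proper vehicle model.
    """
    heading = math.radians(heading_deg)
    yaw_rate = math.radians(steer_deg) / horizon_s
    track = []
    for _ in range(int(horizon_s / dt)):
        x += speed_mps * math.cos(heading) * dt
        y += speed_mps * math.sin(heading) * dt
        heading += yaw_rate * dt
        track.append((x, y))
    return track
```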
Step S1022, determining a target obstacle in the image according to the predicted travel track, and determining a predicted movement track of the target obstacle according to a plurality of images.
Specifically, after the images are acquired by the image acquisition device, an object in the images whose distance from the vehicle is smaller than a third preset distance is taken as a target obstacle, where the target obstacle may be a pedestrian, another vehicle, a sign on either side of the road, or another object on the road. By analyzing the plurality of images, the predicted movement track of the target obstacle is determined.
For example, the image acquisition device first captures a plurality of images; the speed, acceleration and direction of a dynamic obstacle's movement can then be calculated from the difference in the obstacle's position between successive images and the difference between the times at which those images were captured.
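The finite-difference calculation in this example can be sketched as follows, assuming the obstacle positions have already been extracted from the images into vehicle coordinates (the helper names are hypothetical):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def estimate_velocity(positions: List[Point], times_s: List[float]) -> Point:
    """Velocity of the obstacle from its last two observed positions and
    the capture times of the corresponding images."""
    if len(positions) < 2 or len(positions) != len(times_s):
        raise ValueError("need at least two timestamped positions")
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    dt = times_s[-1] - times_s[-2]
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def predict_movement_track(last_pos: Point, velocity: Point,
                           horizon_s: float = 3.0, dt: float = 0.1) -> List[Point]:
    """Extrapolate the obstacle forward at constant velocity."""
    x, y = last_pos
    vx, vy = velocity
    steps = int(horizon_s / dt)
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]
```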
Step S1023, determining a target interaction strategy according to the predicted travel track and the predicted movement track.
Specifically, whether the vehicle and the obstacle will collide can be determined from the predicted travel track of the vehicle and the predicted movement track of the target obstacle: if the predicted travel track and the predicted movement track intersect, and the vehicle and the target obstacle would arrive at the intersection at the same time, there is considered to be a risk of collision between the vehicle and the target obstacle. When there is a risk of collision between the target obstacle and the vehicle, the determined target interaction strategy includes turning on a turn signal corresponding to the driving intention information, and/or turning on an audio device to play a voice corresponding to the driving intention information, and/or turning on an image display to display an image corresponding to the driving intention information. The vehicle can interact with pedestrians or other vehicles by flashing the turn signal, displaying information on the image display and playing voice through the audio device, thereby warning the pedestrians or other vehicles, preventing collisions and improving the driving safety of the vehicle.
When the target obstacle is a static obstacle (e.g., a stone, tree or sign on the road) and the target obstacle is located on the predicted travel track, it is determined that there is a risk of collision between the target obstacle and the vehicle. When the target obstacle is a dynamic obstacle and the distance between the target obstacle and the vehicle is smaller than a first preset distance, it is likewise determined that there is a risk of collision. That is, if at some moment the distance between the dynamic obstacle and the vehicle falls below the first preset distance, there is a risk of collision between the target obstacle and the vehicle.
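These two rules can be written down directly. A minimal sketch follows; the 10-meter first preset distance matches the example given further below, while the lateral tolerance is an assumed value not taken from the application:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def dist(a: Point, b: Point) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def has_collision_risk(is_dynamic: bool, obstacle_pos: Point, vehicle_pos: Point,
                       predicted_travel_track: List[Point],
                       first_preset_distance_m: float = 10.0,
                       lateral_tolerance_m: float = 1.5) -> bool:
    """A static obstacle lying on the predicted travel track, or a dynamic
    obstacle within the first preset distance, counts as a collision risk."""
    if is_dynamic:
        return dist(obstacle_pos, vehicle_pos) < first_preset_distance_m
    return any(dist(obstacle_pos, p) < lateral_tolerance_m
               for p in predicted_travel_track)
```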
When the relative speed of the dynamic obstacle and the vehicle is greater than an upper speed limit and the distance between them is decreasing, the corresponding target interaction strategy is to turn on the brake light and/or turn on the audio device to play a slow-down voice. When the relative speed is greater than the upper speed limit and the distance between them is increasing, no interaction is needed. When the relative speed is smaller than a lower speed limit and there are two or more lanes, the corresponding target interaction strategy is to sound the horn, alternate between high and low beams, turn on the left turn signal for three seconds and then accelerate; after passing the other vehicle, the vehicle turns on the right turn signal for three seconds and returns to the original lane, completing the overtaking. At the same time, the audio device is turned on to play an overtaking voice, reminding the overtaken vehicle to give way and preventing it from suddenly accelerating or changing lanes, which could cause the overtaking to fail and lead to a collision between the vehicles. When the relative speed of the dynamic obstacle and the vehicle is between the upper and lower speed limits, the corresponding target interaction strategy is to turn on the audio device to play a car-following voice and to match the vehicle's running speed to the moving speed of the dynamic obstacle.
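This case analysis can be summarized in code. The upper and lower speed limits below are placeholder values, since the application does not give concrete figures:

```python
def select_interaction_strategy(relative_speed_mps: float, distance_decreasing: bool,
                                lane_count: int,
                                upper_limit_mps: float = 5.0,
                                lower_limit_mps: float = 1.0) -> str:
    """Map the relative speed of the dynamic obstacle and the vehicle,
    and the lane situation, to a target interaction strategy."""
    if relative_speed_mps > upper_limit_mps:
        # brake light and slow-down voice only if the gap is closing
        return "brake_light_and_slow_down_voice" if distance_decreasing else "no_interaction"
    if relative_speed_mps < lower_limit_mps and lane_count >= 2:
        # horn, alternating high/low beam, left signal 3 s, accelerate past,
        # right signal 3 s back to the original lane, play overtaking voice
        return "overtake_sequence"
    # relative speed between the limits: follow at the obstacle's speed
    return "follow_and_play_following_voice"
```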
The first preset distance may be, for example, 10 meters. When the distance between the target obstacle and the vehicle is smaller than 10 meters, it is determined that there is a risk of collision between the target obstacle and the vehicle, and the interaction strategy is determined to be turning on a turn signal corresponding to the driving intention information, and/or turning on an audio device to play a voice corresponding to the driving intention information, and/or turning on an image display to display an image corresponding to the driving intention information.
Step S103, controlling the vehicle to interact according to the target interaction strategy.
Specifically, the vehicle interactions include overtaking, parking, slowing down, detouring around an obstacle, and normal driving. After the interaction strategy is determined, the light information to be controlled includes the color of the lamp and its flicker frequency. If the interaction strategy is determined to be overtaking, the overtaking lamp is turned on; it is red and flashes twice per second. If the interaction strategy is determined to be parking, the parking lamp stays on and is red. If the interaction strategy is determined to be slowing down, the brake lamp stays on and is red. If the interaction strategy is determined to be detouring around an obstacle, the detour lamp is turned on; it is red and flashes twice per second. If the interaction strategy is determined to be normal driving, the normal-driving lamp stays on and is green.
During normal driving, if the vehicle needs to turn left, the left turn signal is turned on; it is yellow and flashes once per second. If the vehicle needs to turn right, the right turn signal is turned on; it is yellow and flashes once per second. If the vehicle needs to reverse, the reversing lamp stays on and is yellow.
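The lamp colors and flicker frequencies listed above transcribe naturally into a lookup table; the sketch below is only a restatement of the description, with flash_hz = 0 meaning steadily lit:

```python
# Lamp color and flash frequency per interaction state, as described above.
LIGHT_TABLE = {
    "overtake":   {"color": "red",    "flash_hz": 2},
    "park":       {"color": "red",    "flash_hz": 0},
    "slow_down":  {"color": "red",    "flash_hz": 0},  # brake lamp
    "detour":     {"color": "red",    "flash_hz": 2},
    "normal":     {"color": "green",  "flash_hz": 0},
    "turn_left":  {"color": "yellow", "flash_hz": 1},
    "turn_right": {"color": "yellow", "flash_hz": 1},
    "reverse":    {"color": "yellow", "flash_hz": 0},
}

def lamp_for(strategy: str) -> dict:
    return LIGHT_TABLE[strategy]
```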
When the distance between the vehicle and the target obstacle is smaller than a second preset distance, that is, when the vehicle is about to collide with the target obstacle, the turn signal corresponding to the driving intention information is controlled to turn on, and/or the audio device is controlled to play a voice corresponding to the driving intention information, and/or the image display is controlled to display an image corresponding to the driving intention information.
The second preset distance may be, for example, 2 meters. When the distance between the target obstacle and the vehicle is smaller than 2 meters, it is determined that the vehicle is about to collide with the target obstacle, and at this time the vehicle is controlled to interact according to the target interaction strategy, that is, the turn signal corresponding to the driving intention information is controlled to turn on, and/or the audio device is controlled to play a voice corresponding to the driving intention information, and/or the image display is controlled to display an image corresponding to the driving intention information.
In one embodiment of the present application, as shown in fig. 3, the vehicle interaction method may further include step S104 and step S105.
Step S104, a destination of the vehicle is acquired.
Specifically, this involves the current position information of the vehicle and the position information of the destination. The current position information of the vehicle can be input by a user, queried directly from a database, or obtained through satellite positioning such as GPS or BeiDou. The position information of the destination can likewise be input by a user or queried directly from a database. A navigation route can then be planned according to the current position information of the vehicle and the position information of the destination. There may be one navigation route or several; when there are multiple candidate routes, the optimal route can be selected comprehensively according to factors such as the degree of wheel wear at each candidate turn, the driving mileage, the vehicle density, and the gradient of ramps.
It should be noted that the position information may be longitude and latitude information, or longitude, latitude and altitude information.
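The comprehensive selection among candidate routes amounts to scoring each route over the listed factors; the following is a sketch with assumed field names and weights, not figures from the application:

```python
from typing import Dict, List

def pick_route(candidates: List[Dict[str, float]]) -> Dict[str, float]:
    """Choose the candidate route with the lowest weighted cost over the
    factors mentioned above. Weights and field names are illustrative."""
    def cost(route: Dict[str, float]) -> float:
        return (0.3 * route["wheel_wear_at_turns"]
                + 0.3 * route["mileage_km"]
                + 0.2 * route["vehicle_density"]
                + 0.2 * route["ramp_gradient"])
    return min(candidates, key=cost)
```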
Step S105, when the vehicle arrives at the destination, controlling the image display on the vehicle to display a corresponding image, and/or controlling the audio device on the vehicle to play a corresponding voice.
Specifically, after the vehicle arrives at the destination, the interaction strategy is to turn on the brake light, turn on the audio device to play a parking voice, and/or display an image of the parking space on the image display. The vehicle is accordingly controlled to turn on the brake light, the audio device is controlled to play the parking voice, and/or the image display is controlled to display the image of the parking space.
Referring to FIG. 4, the vehicle interaction device includes:
an obtaining module 41, configured to obtain current running state information of a vehicle, driving intention information of the vehicle, and scene information of the vehicle;
a strategy determining module 42, configured to determine a target interaction strategy according to the running state information, the driving intention information and the scene information;
and a control module 43, configured to control the vehicle to interact according to the target interaction strategy.
In one embodiment of the present application, the current running state information obtained by the obtaining module 41 includes a running direction of the vehicle and a running speed of the vehicle, the driving intention information includes an intended steering angle of the vehicle and an intended running speed of the vehicle, and the scene information includes an image within a preset range around the vehicle.
In one embodiment of the present application, the strategy determining module 42 is further configured to:
determine a predicted travel track of the vehicle according to the running state information and the driving intention information;
determine a target obstacle in the image according to the predicted travel track, and determine a predicted movement track of the target obstacle according to a plurality of images;
and determine the target interaction strategy according to the predicted travel track and the predicted movement track.
In one embodiment of the present application, the strategy determining module 42 is further configured to:
predict whether there is a risk of collision between the target obstacle and the vehicle according to the predicted travel track and the predicted movement track;
and, when there is a risk of collision between the target obstacle and the vehicle, determine a target interaction strategy that includes turning on a turn signal corresponding to the driving intention information, and/or turning on an audio device to play a voice corresponding to the driving intention information, and/or turning on an image display to display an image corresponding to the driving intention information.
In one embodiment of the present application, the strategy determining module 42 is further configured to:
determine that there is a risk of collision between the target obstacle and the vehicle when the target obstacle is a static obstacle and the target obstacle is located on the predicted travel track;
and determine that there is a risk of collision between the target obstacle and the vehicle when the target obstacle is a dynamic obstacle and the distance between the target obstacle and the vehicle is smaller than a first preset distance.
In one embodiment of the present application, the control module 43 is further configured to:
when the distance between the vehicle and the target obstacle is smaller than a second preset distance, control the turn signal corresponding to the driving intention information to turn on, and/or control the audio device to play a voice corresponding to the driving intention information, and/or control the image display to display an image corresponding to the driving intention information.
In one embodiment of the present application, the vehicle interaction device further includes:
a destination acquisition module, configured to acquire a destination of the vehicle;
and a corresponding control module, configured to control an image display on the vehicle to display a corresponding image and/or control an audio device on the vehicle to play a corresponding voice when the vehicle reaches the destination.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, the vehicle 5 of this embodiment may include: at least one processor 50 (only one processor 50 is shown in fig. 5), a memory 51 and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the processor 50 implementing the steps of any of the various method embodiments described above, e.g. steps S101 to S103 in the embodiment shown in fig. 1, when executing the computer program 52. The processor 50, when executing the computer program 52, performs the functions of the modules/units of the device embodiments described above, such as the functions of the modules 41 to 43 shown in fig. 4.
By way of example, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to carry out the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 52 in the vehicle 5.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), and the processor 50 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may in some embodiments be an internal storage unit of the vehicle 5, such as a hard disk or a memory of the vehicle 5. The memory 51 may in other embodiments also be an external storage device of the vehicle 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the vehicle 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the vehicle 5. The memory 51 is used to store an operating system, application programs, a boot loader (Boot Loader), data and other programs, such as the program code of the computer program 52. The memory 51 may also be used to temporarily store data that has been output or is to be output.
The present embodiments also provide a computer readable storage medium storing a computer program 52, which computer program 52, when executed by a processor 50, implements steps that may be implemented in the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments, which may be accomplished by a computer program 52 instructing related hardware; the computer program 52 may be stored in a computer-readable storage medium, and when executed by the processor 50, may implement the steps of the method embodiments described above. The computer program 52 comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A vehicle interaction method, comprising:
acquiring current running state information of a vehicle, driving intention information of the vehicle, and scene information of the vehicle;
determining a target interaction strategy according to the running state information, the driving intention information and the scene information;
and controlling the vehicle to interact according to the target interaction strategy.
2. The vehicle interaction method according to claim 1, wherein the current running state information includes a running direction of the vehicle and a running speed of the vehicle, the driving intention information includes an intended steering angle of the vehicle and an intended running speed of the vehicle, and the scene information includes an image within a preset range around the vehicle.
3. The vehicle interaction method according to claim 2, wherein the determining a target interaction strategy according to the running state information, the driving intention information and the scene information includes:
determining a predicted travel track of the vehicle according to the running state information and the driving intention information;
determining a target obstacle in the image according to the predicted travel track, and determining a predicted movement track of the target obstacle according to a plurality of images;
and determining the target interaction strategy according to the predicted travel track and the predicted movement track.
4. The vehicle interaction method according to claim 3, wherein the determining the target interaction strategy according to the predicted travel track and the predicted movement track comprises:
predicting whether there is a risk of collision between the target obstacle and the vehicle according to the predicted travel track and the predicted movement track;
when there is a risk of collision between the target obstacle and the vehicle, the determined target interaction strategy includes turning on a turn signal corresponding to the driving intention information, and/or turning on an audio device to play a voice corresponding to the driving intention information, and/or turning on an image display to display an image corresponding to the driving intention information.
5. The vehicle interaction method according to claim 4, wherein the predicting whether there is a risk of collision between the target obstacle and the vehicle according to the predicted travel track and the predicted movement track includes:
determining that there is a risk of collision between the target obstacle and the vehicle when the target obstacle is a static obstacle and the target obstacle is located on the predicted travel track;
and determining that there is a risk of collision between the target obstacle and the vehicle when the target obstacle is a dynamic obstacle and the distance between the target obstacle and the vehicle is smaller than a first preset distance.
6. The vehicle interaction method according to claim 4, wherein the controlling the vehicle to interact according to the target interaction strategy comprises:
when the distance between the vehicle and the target obstacle is smaller than a second preset distance, controlling a turn signal corresponding to the driving intention information to turn on, and/or controlling an audio device to play a voice corresponding to the driving intention information, and/or controlling an image display to display an image corresponding to the driving intention information.
7. The vehicle interaction method according to any one of claims 1 to 6, characterized in that the vehicle interaction method further comprises:
acquiring a destination of the vehicle;
when the vehicle reaches the destination, controlling an image display on the vehicle to display a corresponding image and/or controlling an audio device on the vehicle to play a corresponding voice.
8. A vehicle interaction device, comprising:
an acquisition module, configured to acquire current running state information of a vehicle, driving intention information of the vehicle, and scene information of the vehicle;
a strategy determining module, configured to determine a target interaction strategy according to the running state information, the driving intention information and the scene information;
and a control module, configured to control the vehicle to interact according to the target interaction strategy.
9. A vehicle comprising a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1-7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, which when executed by a processor implements the method according to any one of claims 1-7.
CN202211206145.8A 2022-09-30 2022-09-30 Vehicle interaction method and device, vehicle and storage medium Pending CN117841995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211206145.8A CN117841995A (en) 2022-09-30 2022-09-30 Vehicle interaction method and device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211206145.8A CN117841995A (en) 2022-09-30 2022-09-30 Vehicle interaction method and device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN117841995A 2024-04-09

Family

ID=90531735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211206145.8A Pending CN117841995A (en) 2022-09-30 2022-09-30 Vehicle interaction method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN117841995A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination