CN113320537A - Vehicle control method and system - Google Patents

Vehicle control method and system

Info

Publication number
CN113320537A
Authority
CN
China
Prior art keywords
vehicle
user
information
driving
feedback information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110806617.2A
Other languages
Chinese (zh)
Inventor
李昌远
阮春彬
李晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Voyager Technology Co Ltd
Original Assignee
Beijing Voyager Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Voyager Technology Co Ltd
Priority to CN202110806617.2A
Publication of CN113320537A
Legal status: Pending

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/45External transmission of data to or from the vehicle

Abstract

The embodiments of the present specification disclose a vehicle control method and a vehicle control system, wherein the method includes: acquiring feedback information on a vehicle running condition input by a user at a user terminal; and operating a driving system based on the feedback information, wherein the driving system is configured to directly or indirectly change the driving or the allocation of the vehicle.

Description

Vehicle control method and system
Technical Field
The present disclosure relates to vehicle control technologies, and in particular, to a vehicle control method and system.
Background
With the continuous development of computer technology, intelligent travel (e.g., shared travel, automatic driving, etc.) has also developed rapidly. Taking automated driving as an example, an autonomous vehicle may refer to a vehicle capable of achieving a certain level of driving automation. For example, autonomous vehicles may be controlled by a system (e.g., a back-end remote control) to enable autonomous driving. An autonomous vehicle can serve as a taxi, as public transportation, and the like; how to control the autonomous vehicle so that it better meets the varying needs of users while providing services is therefore an urgent problem to be solved.
Disclosure of Invention
One of the embodiments of the present specification provides a vehicle control method, including: acquiring feedback information on a vehicle running condition input by a user at a user terminal; and operating a driving system based on the feedback information, wherein the driving system is configured to directly or indirectly change the driving or the allocation of the vehicle.
One of the embodiments herein provides a vehicle control system, the system including: an acquisition module configured to acquire feedback information on a vehicle running condition input by a user at a user terminal; and an operation module configured to operate a driving system based on the feedback information, wherein the driving system is configured to directly or indirectly change the driving or allocation of the vehicle.
One of the embodiments herein provides a vehicle control apparatus, the apparatus including at least one processor and at least one memory; the at least one memory is configured to store computer instructions; and the at least one processor is configured to execute at least some of the computer instructions to implement operations corresponding to the vehicle control method described in any of the embodiments above.
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement operations corresponding to the vehicle control method described in any of the embodiments above.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a vehicle control system according to some embodiments herein;
FIG. 2 is a flow chart of a vehicle control method according to some embodiments herein;
FIG. 3 is a flow diagram illustrating obtaining feedback information based on first reminder information according to some embodiments of the present description;
FIG. 4 is a flow diagram illustrating sending a first reminder based on status information according to some embodiments of the present description;
FIGS. 5A-5D are schematic diagrams of user interfaces according to some embodiments of the present description;
FIG. 6 is a flow diagram illustrating sending a first reminder message according to some embodiments of the present description;
FIG. 7 is a flow diagram illustrating predicting a target location according to some embodiments of the present description;
FIG. 8 is a flow diagram illustrating determining status information according to some embodiments of the present description;
FIG. 9 is a flow chart illustrating operation of a travel system according to some embodiments of the present description;
FIG. 10 is a flow chart illustrating operation of a travel system according to some embodiments of the present description;
FIG. 11 is a flow chart illustrating operation of a travel system according to some embodiments of the present description;
FIG. 12 is a schematic illustration of the operation of a travel system according to some embodiments of the present description;
FIG. 13 is a flow chart illustrating sending a second reminder message according to some embodiments of the present description;
FIG. 14 is a schematic illustration of the operation of a travel system according to some embodiments of the present description;
FIG. 15 is a schematic illustration of the operation of a travel system according to some embodiments of the present description;
FIG. 16 is a flow chart illustrating updating vehicle control parameters according to some embodiments herein;
FIG. 17 is a flow chart illustrating operation of a travel system according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate the inclusion of explicitly identified steps and elements; these steps and elements do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed exactly in the order shown. Rather, the steps may be processed in reverse order or concurrently. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
FIG. 1 is a schematic diagram of an application scenario of a vehicle control system according to some embodiments of the present disclosure. The vehicle control system 100 may be applied to a transportation service system, a traffic service system, and the like. In some embodiments, the vehicle control system 100 may be applied to an autonomous vehicle. In some embodiments, the vehicle control system 100 may be applied to a web appointment service, a designated drive service, express delivery, take-away, and the like. In some embodiments, the vehicle control system 100 may include a server 110, a network 120, a user terminal 130, a database 140, and a vehicle 150. The server 110 may include a processing device 112.
In some embodiments, the server 110 may be used to process information and/or data related to vehicle control. The server 110 may be a stand-alone server or a group of servers. The set of servers can be centralized or distributed (e.g., server 110 can be a distributed system).
In some embodiments, the server 110 may be local or remote. For example, the server 110 may access information and/or data stored in the user terminal 130, the database 140, the vehicle 150, and the detection unit 152 via the network 120. In some embodiments, the server 110 may be directly connected to the user terminal 130, the database 140, the vehicle 150, and the detection unit 152 to access the information and/or data stored therein. In some embodiments, the server 110 may execute on a cloud platform. For example, the cloud platform may include one or any combination of a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, and the like.
In some embodiments, the server 110 may include a processing device 112. The processing device 112 may process data and/or information related to vehicle control to perform one or more of the functions described in this specification. For example, the processing device 112 may acquire the state information and transmit the first reminder information to the user terminal based on the state information, so that the user inputs feedback information on the driving condition of the vehicle at the user terminal. For another example, the processing device 112 may determine an operation to be performed on the travel system based on the feedback information, where the operation includes, but is not limited to: updating a first navigation route of the vehicle, determining driving parameters of the vehicle at a target location, updating user preferences, updating vehicle control parameters, determining information related to vehicle maintenance, and the like. In some embodiments, the processing device may act as the travel system to control the vehicle. For example, the processing device 112 may control the travel of the vehicle based on the updated first navigation route. For another example, the processing device 112 may control the travel of the vehicle at the target location based on the driving parameters of the vehicle at the target location. As another example, the processing device 112 may control the travel and deployment of the vehicle for the user based on the user's preferences. For another example, the processing device 112 may control the travel of an associated vehicle in at least one of an associated location and an associated driving environment based on the updated vehicle control parameters. For another example, the processing device 112 may determine, based on the vehicle maintenance-related information, whether to retrieve the vehicle for maintenance, or the like.
In some embodiments, the processing device 112 may include one or more sub-processing devices (e.g., a single-core processing device or a multi-core processing device). By way of example only, the processing device 112 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof. In some embodiments, the processing device 112 may be integrated into the vehicle 150 and/or the user terminal 130.
The network 120 may facilitate the exchange of data and/or information. In some embodiments, one or more components of the system 100 (e.g., the server 110, the user terminal 130, the database 140, the vehicle 150, the detection unit 152) may send data and/or information to other components over the network 120. In some embodiments, the network 120 may be any type of wired or wireless network. For example, the network 120 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or internet exchange points 120-1, 120-2, …, through which one or more components of the system 100 may connect to the network 120 to exchange data and/or information.
In some embodiments, the user terminal 130 may be a terminal in which a user inputs feedback information on the running condition of the vehicle. In some embodiments, the user may be a service user. For example, the service users may include passengers of a networked car appointment platform, passengers of an autonomous vehicle, navigation service users, and transport service users, among others. In some embodiments, the user terminal 130 may include one or any combination of a mobile device 130-1, a tablet 130-2, a laptop 130-3, an in-vehicle device (not shown), a wearable device (not shown), and the like. In some embodiments, the mobile device 130-1 may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, and the like, or any combination thereof. The smart mobile device may include a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, the like, or any combination thereof. The virtual reality device or augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality eyeshields, augmented reality helmets, augmented reality glasses, augmented reality eyeshields, and the like, or any combination thereof. In some embodiments, the in-vehicle device may include an in-vehicle computer, an in-vehicle television, or the like. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, smart helmet, smart watch, smart garment, smart backpack, smart accessory, or the like, or any combination thereof. In some embodiments, the user terminal 130 may be a device having a positioning technology for locating the position of the user terminal 130.
The database 140 may store data and/or instructions. In some embodiments, the database 140 may store data obtained from the user terminal 130, the vehicle 150, the detection unit 152, the processing device 112, and the like. In some embodiments, the database 140 may store information and/or instructions that the server 110 executes or uses to perform the exemplary methods described in this specification. For example, the database 140 may store current vehicle states (e.g., pitch, speed, acceleration, etc.) collected by the detection unit 152. For another example, the database 140 may store the feedback information input by the user at the user terminal 130. As another example, the database 140 may also store the user preferences updated by the processing device 112, the determined target locations, and the like.
In some embodiments, database 140 may include mass storage, removable storage, volatile read-write memory (e.g., random access memory RAM), read-only memory (ROM), the like, or any combination thereof. In some embodiments, database 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, and the like, or any combination thereof.
In some embodiments, the database 140 may be connected to the network 120 to communicate with one or more components of the system 100 (e.g., the server 110, the user terminal 130, the vehicle 150, the detection unit 152, etc.). One or more components of the system 100 may access data or instructions stored in the database 140 via the network 120. For example, the server 110 may obtain the user state from the database 140 and perform corresponding processing. In some embodiments, the database 140 may be directly connected to or in communication with one or more components of the system 100 (e.g., the server 110, the user terminal 130). In some embodiments, the database 140 may be part of the server 110. In some embodiments, the database 140 may be integrated into the vehicle 150.
The vehicle 150 may be any type of vehicle used for driving, such as an autonomous vehicle, a network appointment vehicle, or the like. As used herein, an autonomous vehicle may refer to a vehicle that is capable of achieving a certain level of driving automation. For example, the level of driving automation may include: a first level, in which the vehicle is primarily supervised by a human and has specific autonomous functions (e.g., autonomous steering or acceleration); a second level, in which the vehicle has one or more advanced driver assistance systems (ADAS, e.g., an adaptive cruise control system or a lane keeping system) that can control braking, steering, and/or acceleration of the vehicle; a third level, in which the vehicle can drive itself when one or more specified conditions are met; a fourth level, in which the vehicle can operate without human input or attention but is still subject to certain limitations (e.g., being restricted to a certain area); a fifth level, in which the vehicle can operate autonomously in all circumstances; or the like, or any combination thereof.
In some embodiments, the vehicle 150 may have corresponding structures that enable the vehicle 150 to move or fly. For example, the vehicle 150 may include the structures of a conventional vehicle, such as a chassis, a suspension, a steering device (e.g., a steering wheel), a braking device (e.g., a brake pedal), an accelerator, and so on. As another example, the vehicle 150 may have a body and at least one wheel. The body may be of any type, such as a sports car, a coupe, a sedan, a light truck, a station wagon, a sport utility vehicle (SUV), a minivan, or a switch car. The at least one wheel may be configured for all-wheel drive (AWD), front-wheel drive (FWD), rear-wheel drive (RWD), or the like. In some embodiments, it is contemplated that the vehicle 150 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, a conventional internal combustion engine vehicle, or the like.
In some embodiments, the vehicle 150 is able to sense its environment and travel using one or more detection units 152. The one or more detection units 152 may include sensor devices (e.g., radar, such as a lidar device), a Global Positioning System (GPS) module, an inertial measurement unit (IMU), a camera, and the like, or any combination thereof. The radar (e.g., a lidar device) may be configured to scan the surroundings of the vehicle 150 and generate corresponding data. The GPS module may refer to a device capable of receiving geolocation and time information from GPS satellites and determining the geographic location of the device. The IMU may refer to an electronic device that uses various inertial sensors to measure and provide the vehicle's specific force and angular rate, and sometimes the magnetic field around the vehicle. In some embodiments, the various inertial sensors may include acceleration sensors (e.g., piezoelectric sensors), velocity sensors (e.g., Hall sensors), distance sensors (e.g., radar, infrared sensors), steering angle sensors (e.g., tilt sensors), traction-related sensors (e.g., force sensors), and the like. The camera may be configured to acquire one or more images of a target (e.g., a person, an animal, a tree, a barricade, a building, or a vehicle) within range of the camera.
It will be understood by those of ordinary skill in the art that when an element (or component) of the vehicle control system 100 operates, it may do so via electrical and/or electromagnetic signals. For example, when the user terminal 130 sends feedback information to the server 110, a processor of the user terminal 130 may generate an electrical signal encoding the feedback information. The processor of the user terminal 130 may then send the electrical signal to an output port. If the user terminal 130 communicates with the server 110 via a wired network, the output port may be physically connected to a cable, which may carry the electrical signal to an input port of the server 110. If the user terminal 130 communicates with the server 110 via a wireless network, the output port of the user terminal 130 may be one or more antennas that convert the electrical signal into an electromagnetic signal. Within an electronic device, such as the user terminal 130 and/or the server 110, when its processor processes instructions, issues instructions, and/or performs actions, the instructions and/or actions are carried out by electrical signals. For example, when the processor retrieves or saves data from a storage medium (e.g., the database 140), it may send electrical signals to a read/write device of the storage medium, which may read or write structured data in the storage medium. The structured data may be transmitted to the processor in the form of electrical signals via a bus of the electronic device. Herein, an electrical signal may refer to one electrical signal, a series of electrical signals, and/or at least two discrete electrical signals.
In some embodiments, a processing device (e.g., processing device 112) may include an acquisition module and an operation module.
The acquisition module can be used for acquiring feedback information of the running condition of the vehicle, which is input by a user at the user terminal. The running condition includes: at least one of a driving route, a driving location, a driving section, a driving environment, a driving state, and a driving parameter. In some embodiments, the feedback information is input by the user at the user terminal in response to the first reminder information.
In some embodiments, the obtaining module may be configured to obtain the status information, and send the first reminding information to the user terminal based on the status information. The status information includes at least one of a vehicle status and a user status. The vehicle state includes at least one of a current vehicle state, a vehicle state after a preset time, and a vehicle state at the target position. For example, the acquisition module may be configured to determine the status information based on image information of the user and vehicle-related audio information.
In some embodiments, the acquisition module may be used to acquire the target location. For example, the acquisition module may predict the target location on the first navigation route based on historical feedback information and its corresponding historical driving condition, wherein the first navigation route is a navigation route being used by the vehicle, the historical feedback information being related to at least one of a track point and a user on the first navigation route. For another example, the acquisition module may determine the target location from the first navigation route based on the feedback information and its corresponding driving condition.
In some embodiments, the obtaining module may be configured to determine a feedback reminding mode, generate first reminding information based on the feedback reminding mode, and send the first reminding information to the user terminal.
The operation module may be configured to operate the travel system based on the feedback information.
In some embodiments, the operation module may be configured to determine an acceptance of the feedback information based on the user-associated information, and determine whether to operate the travel system based on the acceptance.
In some embodiments, the operation module may be configured to generate a plurality of second navigation routes based on the current location and the destination of the vehicle, and determine a target navigation route from the plurality of second navigation routes to update the first navigation route.
In some embodiments, the operating module may be used to adjust a driving parameter of the vehicle at the target location. For example, the operating module may be further configured to adjust a driving parameter of the vehicle at the target location based on historical feedback information and a corresponding historical driving condition thereof, the historical feedback information being associated with at least one of the target location or the user.
In some embodiments, the operation module may be configured to send second reminding information to the driving system based on the feedback information, where the second reminding information is used to remind the driving terminal to adjust the driving of the vehicle.
In some embodiments, the operation module may be configured to send the second reminder information to the driving system based on a relationship between a current location of the vehicle and the target location.
In some embodiments, the operation module may be configured to update the user preferences based on the feedback information and its corresponding driving conditions; wherein the user preferences are used by the travel system to determine the deployment or travel of the vehicle for the user.
In some embodiments, the operation module may be configured to update a vehicle control parameter based on the feedback information and its corresponding driving condition, where the vehicle control parameter is a driving parameter of an associated vehicle in at least one of an associated location and an associated driving environment. For example, the operation module may be configured to update the vehicle control parameter based on the feedback information and its corresponding driving condition, together with historical feedback information and its corresponding historical driving information, the historical feedback information relating to the associated vehicle in at least one of the associated location and the associated driving environment.
In some embodiments, the operation module may be configured to send third reminder information to the driving system based on the feedback information, the base information, and the usage maintenance information, the third reminder information being related to maintenance of the vehicle.
It should be noted that the above description of the vehicle control system and its modules is provided merely for convenience of description and does not limit the present disclosure to the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, the modules may be combined in various ways or connected to other modules as sub-systems without departing from those principles. In some embodiments, the acquisition module and the operation module disclosed above may be different modules in one system, or a single module may implement the functions of two or more of the modules described above. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
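For illustration only, the acquisition module and the operation module described above could be organized roughly as in the following Python sketch. This is a minimal sketch, not the patented implementation: the class names, the notify/read_feedback/adjust methods, and the dictionary-based feedback are assumptions introduced for the example.

    class AcquisitionModule:
        """Obtains the feedback information on the vehicle running condition that the
        user inputs at the user terminal, optionally after first reminder information
        has been sent based on the state information."""

        def send_first_reminder(self, user_terminal, content: str) -> None:
            # The terminal decides how to present the reminder
            # (voice, text, image, video, vibration, ...).
            user_terminal.notify(content)

        def get_feedback(self, user_terminal) -> dict:
            # Feedback may have been entered by icon click, text, voice, gesture, ...
            return user_terminal.read_feedback()


    class OperationModule:
        """Operates the travel system, which directly or indirectly changes the
        travel or allocation of the vehicle, based on the feedback information."""

        def __init__(self, travel_system):
            self.travel_system = travel_system

        def operate(self, feedback: dict) -> None:
            if not feedback.get("satisfied", True):
                # e.g. update the navigation route, adjust driving parameters,
                # update user preferences or vehicle control parameters, ...
                self.travel_system.adjust(feedback)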
FIG. 2 is a flow chart of a vehicle control method according to some embodiments described herein. As shown in fig. 2, the process 200 may include the following steps 210 and 220. In some embodiments, flow 200 may be performed by a processing device (e.g., processing device 112).
Step 210, obtaining feedback information of the running condition of the vehicle, which is input by the user at the user terminal. In some embodiments, step 210 may be performed by an acquisition module.
The user refers to a passenger riding in the vehicle. The vehicle may be a vehicle used in an intelligent trip. For example, the vehicle may be a shared vehicle in a networked appointment. Also for example, the vehicle may be an autonomous vehicle or the like.
The user terminal is a terminal through which a user performs operations or receives information. In some embodiments, the user terminal may be a terminal that provides an online service for the user. For example, the user terminal may be a terminal that provides a network car-booking service for the user. For another example, the user terminal may be a terminal that provides an automatic driving service to the user. The user terminal may include, but is not limited to, one or more combinations of a notebook, a tablet, a cell phone, a personal digital assistant (PDA), an in-car camera, a dedicated touch device, a wearable device, and the like. In some embodiments, the user terminal may also be a device terminal inside the vehicle for collecting or providing information, such as a vehicle-mounted terminal, an electronic device, or a remote control device.
The running condition of the vehicle may include information related to running of the vehicle. The running condition may include: one or more combinations of a travel route, a travel position, a travel section, a travel environment, a travel state, a travel parameter, and the like.
The travel route may be a route on which the vehicle travels from a departure point to a destination. In some embodiments, the travel route may be at least one navigation route automatically generated by the processing device based on the origin and destination. The origin and destination may be obtained in a variety of ways. For example, by a user entering the acquisition at a user terminal.
The travel position may be a position through which the vehicle travels. The location may include a point, area, or segment. In some embodiments, the travel location may be a location that the vehicle has passed, a current location, or an upcoming location. The driving location may be a track point in the driving route. For example, the driving location may include a departure place, a destination, a current location, and the like.
The road segment may be a route formed by at least two position points on a road network. The at least two location points may be adjacent or non-adjacent. For example, a road segment may be a section or the entirety of a road.
In some embodiments, the travel segment may be a segment over which the vehicle travels. In some embodiments, the travel segment may be a segment that the vehicle has traveled, a segment that the current location is on, or a segment that is about to travel. In some embodiments, the travel segment may also be a preset segment included in the travel route. The preset section may be specifically set according to an actual demand, and for example, the preset section may include at least one of a turning section, a roundabout section, a no-parking section, a one-way driving section, and an accident-prone section.
The running environment may be environmental information related to the vehicle or the running of the vehicle. The running environment may include an environment inside the vehicle when the vehicle runs and an environment outside the vehicle when the vehicle runs. In some embodiments, the driving environment may include air quality inside the vehicle, humidity inside the vehicle, temperature inside the vehicle, number of occupants inside the vehicle, tidiness inside the vehicle, light inside the vehicle, appearance of occupants inside the vehicle, and the like. In some embodiments, the driving environment may include at least one of a weather environment, a road condition environment, and a time environment.
In some embodiments, the weather environment may reflect air temperature, air pressure, humidity, visibility, and the like. For example, the weather environment may include rain, snow, sunny, and the like.
In some embodiments, the road condition environment may reflect a smooth driving condition of the vehicle. For example, the road condition environment may reflect road severity, number of lanes, road congestion, road closure, road construction, traffic control, and the like. For example, the road conditions may include traffic flow, number of traffic lights, construction ahead, etc.
In some embodiments, the time environment may reflect time information of vehicle travel. For example, the time context may include whether it is a holiday, whether it is a workday, whether it is a commute peak, and so forth.
The running state may be state information of the vehicle during its travel. In some embodiments, the running state may include a normal running state, an abnormal running state, and the like. The abnormal running state may be travel in which a driving event occurs, and the normal running state may be travel in which no driving event occurs. A driving event may be an event occurring during travel of the vehicle that is related to vehicle operation or affects the user's riding experience. In some embodiments, the driving event may include start, stop, acceleration, deceleration, sudden braking, overspeed, sharp turning, turning, and bump events of the vehicle.
The running parameter refers to a parameter reflecting a running condition of the vehicle. In some embodiments, the driving parameters include, but are not limited to: velocity parameters and acceleration parameters, etc.
The feedback information may reflect the user's attitude toward the driving condition of the vehicle. In some embodiments, the feedback information may include whether the driving condition of the vehicle is satisfactory and/or the degree of satisfaction (e.g., highly satisfactory, generally satisfactory, unsatisfactory, etc.). In some embodiments, the feedback information may also reflect other information, for example, the reason for satisfaction or dissatisfaction, a suggestion regarding the running condition of the vehicle, and the like.
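Purely as an illustration, feedback information of this kind could be carried in a small record such as the following Python sketch; the field names and the satisfaction scale are hypothetical and not taken from the patent.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional


    class Satisfaction(Enum):
        HIGHLY_SATISFIED = 3
        GENERALLY_SATISFIED = 2
        UNSATISFIED = 1


    @dataclass
    class FeedbackInfo:
        condition: str                         # e.g. "driving_route", "driving_state"
        satisfied: bool                        # overall attitude toward the condition
        degree: Optional[Satisfaction] = None  # degree of satisfaction, if given
        reason: Optional[str] = None           # cause of satisfaction or dissatisfaction
        suggestion: Optional[str] = None       # suggestion on the running condition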
The ways of the different embodiments can be combined. For example, in some embodiments, the user may input feedback information on the driving parameters of the vehicle in the mobile phone terminal; in some embodiments, the processing device may determine whether to send the first reminding information based on the vehicle state after the preset time, and the user may input feedback information on the driving route of the vehicle in the vehicle-mounted terminal at an appropriate time after receiving the first reminding information.
In some embodiments, the user may enter the feedback information in a variety of ways. For example, the user may send feedback information by clicking an icon or text button, sending text or voice, or making a gesture or facial action, etc. For example, the user may send feedback information indicating dissatisfaction by clicking a "dislike" icon, or may face the terminal's camera and make a preset gesture corresponding to "satisfied" as the feedback information to be sent.
The ways of the different embodiments can be combined. For example, in some embodiments, a user may click an icon on a mobile phone interface to input feedback information on a driving event; in some embodiments, the processing device may determine whether to send the first reminder information based on the vehicle state at the target location, and if so, the processing device sends the first reminder information to the user's wearable device when the vehicle is a preset distance from the target location; after receiving the first reminder information, the user may send voice through the wearable device as feedback information on the driving route.
After the user inputs the feedback information, the feedback information may be stored in a storage device or directly transmitted to a processing device. The processing device (e.g., the obtaining module) may obtain the feedback information directly from the user terminal, or may read the feedback information from the storage device.
In some embodiments, after the processing device obtains the feedback information, it may process the feedback information to determine the attitude the feedback reflects. For example, using a preset relationship between icons and attitudes, the processing device may determine the attitude represented by the icon the user clicked at the user terminal. For another example, the processing device may identify (e.g., via a model or algorithm) text, pictures, video, or speech sent from the user terminal to determine the attitude reflected.
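A minimal sketch of the two identification paths just described, assuming a preset icon-to-attitude table and a trivial keyword check standing in for the model or algorithm (both the table and the keywords are illustrative assumptions):

    # Hypothetical preset relationship between interface icons and attitudes.
    ICON_TO_ATTITUDE = {"like": "satisfied", "dislike": "unsatisfied"}

    # Hypothetical keywords; a real model or algorithm would replace this check.
    NEGATIVE_KEYWORDS = ("too fast", "sudden braking", "bumpy", "detour")


    def attitude_from_icon(icon_id: str) -> str:
        """Attitude represented by the icon the user clicked at the user terminal."""
        return ICON_TO_ATTITUDE.get(icon_id, "unknown")


    def attitude_from_text(text: str) -> str:
        """Crude stand-in for identifying text or transcribed voice feedback."""
        lowered = text.lower()
        if any(keyword in lowered for keyword in NEGATIVE_KEYWORDS):
            return "unsatisfied"
        return "satisfied"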
In some embodiments, the user may input the feedback information at the user terminal according to his or her will or needs. For example, the user inputs feedback information at the user terminal according to a vehicle running condition (e.g., the vehicle is running too fast or the vehicle is suddenly braked). For another example, the user inputs feedback information at the user terminal at any time.
In some embodiments, the user may also input feedback information based on the first reminder information sent by the processing device. For specific details of the feedback based on the reminder information, reference may be made to fig. 3 and the related description thereof, which are not described herein again.
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may determine whether to send the first reminding information based on a current user state (e.g., a current emotion of the user), and the user may input the feedback information in the mobile phone terminal based on the first reminding information received by the vehicle-mounted device; in some embodiments, the processing device may determine whether to send the first prompting information based on a vehicle state after a preset time, and if so, send the first prompting information to the mobile phone end at a certain moment before the preset time, and the user may input feedback information in the mobile phone end based on the first prompting information received by the mobile phone end, and when the feedback information reflects that the user is not satisfied with the navigation route, the navigation route may be updated.
Step 220, operating the driving system based on the feedback information, wherein the driving system is configured to directly or indirectly change the driving or the allocation of the vehicle. In some embodiments, step 220 may be performed by an operation module.
In some embodiments, the travel system may include a system that directly changes the travel of the vehicle. For example, the travel system may include an autonomous vehicle control platform. The autonomous vehicle control platform may send the adjusted driving parameters to the autonomous vehicle to directly change the driving of the autonomous vehicle.
In some embodiments, the travel system may include a system that indirectly alters travel of the vehicle. For example, the driving system may include a driving terminal (e.g., a driver terminal, a vehicle terminal), a network appointment control platform, and the like. The driver may receive the adjusted driving parameter from the driving terminal and drive the vehicle according to the adjusted driving parameter to indirectly change the driving of the vehicle. The network appointment control platform can send an instruction to the driving terminal, and the instruction can comprise adjustment information of the driving parameters of the vehicle. For another example, the driving terminal may determine the adjusted driving parameters directly according to the feedback information of the user.
In some embodiments, the driving system may include a system that directly changes the deployment of the vehicle, for example, a network appointment control platform, an autonomous driving control platform, or the like.
In some embodiments, the driving system may include a system that indirectly changes the deployment of the vehicle, such as a user terminal. The user terminal may send an instruction to the network appointment control platform to indirectly change the deployment of the vehicle, where the instruction includes information related to vehicle deployment, such as the user's preferences regarding the vehicle or its driving.
In some embodiments, the operation on the travel system may be an operation related to vehicle travel or deployment. For example, updating a navigation route being used by the vehicle, adjusting a driving parameter of the vehicle at a target location, sending a second reminder to the driving terminal, updating a user preference, updating a vehicle control parameter, and/or sending a third reminder to the driving system, etc.
In some embodiments, different feedback information may correspond to different operations. For example, if the feedback information reflects that the user is dissatisfied with the vehicle speed, the vehicle control parameters may be updated. For another example, if the feedback information reflects that the user is dissatisfied with the navigation route, the navigation route may be updated. For specific details regarding the operation of the driving system of the vehicle based on the feedback information, reference may be made to fig. 8, 9, 10, 11, 13, 14, 16 and their related descriptions below, which are not repeated herein.
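One way to picture this correspondence is a simple dispatch table, as in the sketch below; the condition keys and handler names are assumptions made for the example, not identifiers from the patent.

    def update_vehicle_control_parameters(feedback: dict) -> None: ...
    def update_navigation_route(feedback: dict) -> None: ...
    def update_user_preferences(feedback: dict) -> None: ...


    # Hypothetical mapping from the driving condition the user is dissatisfied
    # with to the operation performed on the travel system.
    OPERATIONS = {
        "vehicle_speed": update_vehicle_control_parameters,
        "navigation_route": update_navigation_route,
        "in_vehicle_environment": update_user_preferences,
    }


    def operate_travel_system(feedback: dict) -> None:
        if feedback.get("satisfied", True):
            return  # nothing to adjust
        handler = OPERATIONS.get(feedback.get("condition"))
        if handler is not None:
            handler(feedback)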
In some embodiments of the present description, the driving system is operated based on the user's feedback information, so that driving conditions the user is dissatisfied with can be improved and the user's riding experience enhanced. At the same time, because the user can provide feedback during the trip in a timely manner, situations in which the travel control automatically set by the autonomous vehicle fails to meet the user's actual needs can be avoided, further improving the riding experience.
In some embodiments, the processing device may send a feedback reward to the user terminal based on the feedback information. The feedback reward may take a variety of forms; for example, it may be a coupon or the like. The feedback reward may be preset according to requirements. In some embodiments, the processing device may determine whether to give the user a feedback reward, or determine the degree of the reward, based on the acceptance of the feedback information. The processing device may calculate the reward degree based on multiple pieces of negative feedback information on the driving condition and their corresponding preset weights. Giving feedback rewards encourages users to actively provide feedback information and improves the completeness of the collected information.
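For instance, the reward degree could be computed as a weighted sum over the pieces of negative feedback, with one preset weight per driving condition. The weights and threshold in the sketch below are purely illustrative assumptions.

    # Hypothetical preset weights for negative feedback on each driving condition.
    NEGATIVE_FEEDBACK_WEIGHTS = {
        "driving_route": 1.0,
        "driving_state": 2.0,      # e.g. sudden braking, bumps
        "driving_parameter": 1.5,  # e.g. speed, acceleration
    }


    def reward_degree(negative_feedback: list) -> float:
        """Weighted sum over the user's negative feedback items."""
        return sum(NEGATIVE_FEEDBACK_WEIGHTS.get(item, 1.0) for item in negative_feedback)


    def should_send_reward(negative_feedback: list, threshold: float = 1.0) -> bool:
        """Decide whether to send any feedback reward (e.g. a coupon) at all."""
        return reward_degree(negative_feedback) >= threshold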
FIG. 3 is a flow chart illustrating obtaining feedback information based on first reminder information according to some embodiments of the present description. As shown in fig. 3, the process 300 may include the following steps 310 and 320. In some embodiments, flow 300 may be performed by a processing device (e.g., processing device 112).
Step 310, sending the first reminder information to the user terminal. In some embodiments, step 310 may be performed by an acquisition module.
The first reminder information may be information that guides the user to feed back the driving condition at the user terminal. The first reminder information includes its content and the manner in which it is presented.
In some embodiments, the content of the first reminder information may be preset. In some embodiments, the content of the first reminder information may be determined based on the driving condition to which the feedback is directed. For example, if a sensor of the vehicle detects a bump, the content of the first reminder information may include "please feed back whether the vehicle is riding smoothly". For another example, if the target position Q is a turning section, the content of the first reminder information may include "please feed back whether the turn at position Q was smooth". In some embodiments, the content of the first reminder information may be determined based on the user state. For example, if it is detected that the user's body is shaking, the content of the first reminder information may include "please feed back whether the vehicle braked suddenly". In some embodiments, the content of the first reminder information may be determined from both the user state and the driving condition to which the feedback is directed. For example, if no abnormality of the vehicle is found but the user appears angry, the content of the first reminder information may include "please feed back whether the current running condition of the vehicle is satisfactory", or the like. The content of the corresponding first reminder information can be set for different user states and/or driving conditions to which the feedback is directed, and the processing device can directly obtain the corresponding content after obtaining the user state and/or the driving condition concerned.
The first reminder information may be presented in a variety of ways. For example, the first reminder information may be voice, text, an image, a video, a vibration, etc. For example, the user terminal may display a "dislike" or "like" icon as the first reminder information.
In some embodiments, the processing device may determine the representation of the first reminder information according to at least one of a user state, a user terminal state, a driving environment, and user-related information, and refer to fig. 6 and its related description in detail.
In some embodiments, the processing device may send the first reminder information to the user terminal based on a preset rule. The preset rules can be specifically set according to actual conditions.
In some embodiments, the preset rule may include sending the first reminder information at preset time intervals, for example, every 5 minutes. The preset interval can be set according to actual conditions, which avoids both a poor riding experience caused by overly frequent reminders and incomplete collection of feedback information caused by reminders that come too late.
In some embodiments, the preset rule may include sending the first reminder information at preset travel-distance intervals, for example, every 50 m of travel. The preset rule may further include sending the first reminder information at preset locations, for example, at traffic lights (red, yellow, or green), etc. In some embodiments, the preset rules may be adjusted and optimized according to the driving conditions. For example, in rain or snow, the preset time interval or the preset travel distance may be shortened.
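A toy version of such preset rules, using the 5-minute and 50-meter intervals from the examples above and an assumed shortening factor for rain or snow:

    REMINDER_INTERVAL_S = 5 * 60   # preset time interval (every 5 min)
    REMINDER_DISTANCE_M = 50.0     # preset travel-distance interval (every 50 m)
    BAD_WEATHER_FACTOR = 0.5       # assumed shortening factor for rain or snow


    def should_send_first_reminder(seconds_since_last: float,
                                   meters_since_last: float,
                                   weather: str) -> bool:
        """Return True once either preset interval has been reached."""
        time_limit = REMINDER_INTERVAL_S
        distance_limit = REMINDER_DISTANCE_M
        if weather in ("rain", "snow"):
            time_limit *= BAD_WEATHER_FACTOR
            distance_limit *= BAD_WEATHER_FACTOR
        return seconds_since_last >= time_limit or meters_since_last >= distance_limit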
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may send the content of the first reminder information related to the vehicle speed to a wearable device worn by the user, to be alerted by the wearable device in a vibrating manner. In some embodiments, the processing device may send first reminding information "please feedback the current driving status" to the user terminal at preset time intervals, where the sending modes of the first reminding information are different, and the sending modes are respectively voice, image, text, and the like.
In some embodiments, the processing device may determine whether to send the first reminder information based on the status information (including the user status and the vehicle status). For specific details of sending the first reminder information based on the status information, reference may be made to fig. 4 and the related description thereof, which are not described herein again.
In some embodiments, the processing device may send the first reminder information to the user terminal based on the user profile. The user profile may reflect the interests and needs of the user. The user profile may be determined based on the user-associated information. For more description of the user-associated information, reference may be made to FIG. 9 and its associated description. For example, when the user profile reflects that the user is interested in the speed of the vehicle, the processing device may send first reminder information related to the speed of the vehicle. By means of the user profile, reminder information that the user finds more interesting can be sent, increasing the user's interest in giving feedback and thus yielding more feedback information.
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may determine whether to send the first reminder information, and its content, based on the vehicle state and the user profile. For example, if the user profile reflects that the user is sensitive to speed, the processing device may send the first reminder information when it detects that the vehicle's travel speed is excessive.
Step 320, obtaining the feedback information, where the feedback information is input by the user at the user terminal in response to the first reminder information. In some embodiments, step 320 may be performed by an acquisition module.
For details on obtaining the feedback information, see step 210 and its related description above.
In some embodiments, the processing device may send the first reminder information to the other terminal. The other terminals may be terminals of other participants. Other participants may include, but are not limited to: drivers, net appointment platform workers (e.g., operators, security personnel, and/or customer service), passengers of other vehicles traveling in similar conditions, and the like. The other participants may enter other feedback information at the other terminals in response to the first reminder information.
In some embodiments of the present description, feedback is collected at the user terminal through different kinds of first reminder information, so that the user can be prompted for feedback in a more targeted way, improving feedback efficiency.
FIG. 4 is a flow diagram illustrating sending first reminder information based on status information according to some embodiments of the present description. As shown in fig. 4, the process 400 may include the following steps 410 and 420. In some embodiments, flow 400 may be performed by a processing device (e.g., processing device 112).
Step 410, obtaining state information, the state information including at least one of a vehicle state and a user state. In some embodiments, step 410 may be performed by an acquisition module.
In some embodiments, the vehicle state is related to a driving condition of the vehicle. The vehicle state may include information of a driving parameter, a driving state, a driving environment, a driving section, a driving location, a driving route, and the like.
In some embodiments, the vehicle state may include at least one of a current vehicle state, a vehicle state after a preset time, and a vehicle state at the target location.
In some embodiments, the current vehicle state may reflect some or all of the current driving conditions of the vehicle. For example, a navigation route being used by the vehicle, a road segment or position where the vehicle is currently located, a current driving state of the vehicle, a current driving parameter of the vehicle, and the like are reflected. For specific details of the driving condition, reference may be made to the above step 210 and the related description thereof, which are not described herein again.
In some embodiments, different data of the current vehicle state may be obtained in different ways. For example, the current running parameters may be acquired by sensors (e.g., a speed sensor and an acceleration sensor) installed in the vehicle. For another example, the current driving state may be acquired by a monitoring system provided in the vehicle. As another example, the current weather environment and the current time environment are obtained from a weather service platform. For another example, the road condition environment may be reported by users of other vehicles, or obtained by a map or a navigation service platform. The current travel section and the current travel position may be obtained by a positioning technique, or the like.
The vehicle state after the preset time may reflect part or all of the driving condition of the vehicle after the preset time. The vehicle state at the target position may reflect part or all of the traveling condition of the vehicle at the target position. For specific details of the vehicle state after the preset time and the vehicle state at the target position, reference may be made to fig. 5 and the related description thereof, which are not described herein again.
In some embodiments, the user state is related to one or more of the following: the user's vehicle usage, the user's idle status, and the user's mood. In some embodiments, the user state may include a current user state, a user state after a preset time, and a user state at the target location.
The vehicle usage of the user may reflect a state relationship between the user and the vehicle. For example, the user has got on the vehicle, the user is waiting for the vehicle to pick up, the user is about to finish using the vehicle (or the user finishes using the vehicle after a certain time), etc.
The user's idle status may reflect at least whether the user has time to provide feedback. For example, the user's idle status may include very busy, relatively busy, idle, and so on. The degree of busyness may be set as needed. For example, a user may be considered very busy if he or she is making a call. If the user is using the mobile phone for entertainment (e.g., chatting or playing games), the user may be considered relatively busy.
The user's emotion may reflect the user's feeling about the driving condition of the vehicle. For example, the user's emotion may include whether the user is angry, anxious, panicked, happy, bored, depressed, and the like.
In some embodiments, the processing device may obtain the current state information based on image information of the user and/or audio information of the vehicle. For example, the current user status is obtained. For specific details of obtaining the status information, reference may be made to fig. 8 and the related description thereof, which are not described herein again.
In some embodiments, the processing device may determine the user's mood based on data (e.g., heartbeat, heart rate, pulse, and blood pressure) collected by a wearable device. For example, if the heart rate exceeds a preset threshold for a preset duration, the user may be considered stressed. In some embodiments, the processing device may determine the current user state based on the state of the user terminal. For example, the user's emotion may be inferred from the web page or information the user is currently viewing.
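As an illustration of the wearable-data heuristic above, the following is a minimal Python sketch; the threshold, sampling interval, and duration values are assumptions, not values from the disclosure.

```python
# A minimal sketch, assuming the wearable device exposes a recent window of
# heart-rate samples taken at a fixed interval.
def infer_user_mood(heart_rate_samples, threshold_bpm=110, min_duration_s=30,
                    sample_interval_s=5):
    """Label the user as possibly stressed if the heart rate stays above a
    preset threshold for at least a preset duration."""
    needed = min_duration_s // sample_interval_s
    consecutive = 0
    for bpm in heart_rate_samples:
        consecutive = consecutive + 1 if bpm > threshold_bpm else 0
        if consecutive >= needed:
            return "possibly stressed"
    return "calm"

# Example: 5-second samples over about one minute
print(infer_user_mood([88, 92, 115, 118, 121, 119, 117, 120, 90, 85, 84, 86]))
```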
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may obtain an emotional state of the user based on data monitored by the wearable device, obtain a vehicle state based on the storage device; in some embodiments, the processing device may obtain vehicle usage or idle conditions of the user based on the image data and/or audio data, and obtain driving parameters based on the sensors.
Step 420, determining whether to send first reminding information to the user terminal based on the state information. In some embodiments, step 420 may be performed by an acquisition module.
In some embodiments, the processing device may determine whether to send the first reminder information to the user terminal and/or determine the content of the first reminder information to be sent based on the vehicle status and/or the user status. In some embodiments, the processing device may determine whether to send the first reminder information, and/or determine its content, based on current status information (including the current vehicle status and/or the current user status).
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the current vehicle state. The processing device may determine whether to send the first reminder information and its content based on one item of information of the current vehicle state. For example, the processing device may determine whether to send the first reminder information based on whether the current vehicle state is an abnormal driving state in which a driving event occurs. Specifically, if the current vehicle state is an abnormal driving state, the first reminding information is sent. The processing device may determine the content of the first reminding information according to the type of driving event of the abnormal driving state. For example, when the driving event "bump" occurs in the current vehicle state, the processing device may send a first reminder message "please feed back whether the vehicle bumped". The processing device may determine whether to send the first reminder information and its content based on the travel road segment of the current vehicle. For example, if the vehicle is in a turning road segment, a first reminder message "please feed back whether the vehicle turned steadily" is sent. The processing device may determine whether to send the first reminder information and its content based on the driving environment of the current vehicle. For example, when the road condition environment of the current vehicle is congested, the processing device may send a first reminder message "please feed back whether the vehicle brakes suddenly and frequently".
In some embodiments, the processing device may determine whether to send the first reminder information, and the content of the first reminder information, based on a variety of information about the current vehicle state. For example, the processing device may determine whether to send the first reminder information by combining a variety of information of the current vehicle state based on preset rules. The preset rule may be to score the various items of information of the current vehicle state separately, fuse the corresponding scores (for example, by averaging, weighted summation, weighted averaging, and the like), and determine whether to send the first reminder information based on the fused score (for example, send the first reminder information if the fused score is greater than a certain threshold). The content of the first reminder information may be determined based on the items of information whose scores are greater than a threshold.
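The preset-rule approach above can be sketched as follows; the weights, scores, and thresholds are illustrative assumptions.

```python
# A minimal sketch: score several items of the current vehicle state, fuse the
# scores by weighted averaging, and decide whether to send the first reminder.
def decide_first_reminder(state_scores, weights, send_threshold=0.6,
                          topic_threshold=0.7):
    """state_scores: dict item -> score in [0, 1], e.g. {"bump": 0.9, ...}."""
    total_w = sum(weights.get(k, 1.0) for k in state_scores)
    fused = sum(score * weights.get(k, 1.0)
                for k, score in state_scores.items()) / total_w
    send = fused > send_threshold
    # The reminder content is driven by items whose individual scores exceed a threshold.
    topics = [k for k, score in state_scores.items() if score > topic_threshold]
    return send, topics

send, topics = decide_first_reminder(
    {"bump": 0.9, "hard_braking": 0.3, "overspeed": 0.8},
    weights={"bump": 2.0, "hard_braking": 1.0, "overspeed": 1.5},
)
print(send, topics)  # True, ['bump', 'overspeed']
```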
In some embodiments, the processing device may process the various information of the current vehicle state based on a trained first reminder model to determine whether to send the first reminder information, and the content of the first reminder information. The processing device inputs the various information into the first reminder model, which outputs whether to send the first reminder information and the content topic of the first reminder information. Before input to the model, the processing device may pre-process the various information, including screening, vector representation, vector normalization, etc. The content topic refers to the driving condition for which user feedback is required. The content topics may include: bump, overspeed, hard braking, etc. The content of the corresponding first reminder information can be preset based on the content topic.
The trained first reminder model can be obtained by training on a plurality of first training samples, where each first training sample comprises a sample vehicle state, and the label comprises whether the first reminder information needs to be sent and the content topic of the first reminder information. The first reminder model can include, but is not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naïve Bayes, and the like.
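As a rough stand-in for the first reminder model (not the patented model itself), the sketch below trains a small scikit-learn classifier whose predicted topic doubles as the send/no-send decision, with "none" meaning no reminder is sent; the feature layout and label set are assumptions.

```python
# A minimal sketch of a reminder-topic classifier on illustrative data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each sample: [speed_kmh, |acceleration|, is_turning, is_congested]
X_train = np.array([
    [30, 0.5, 0, 0],
    [80, 3.0, 0, 1],
    [25, 2.5, 1, 0],
    [60, 0.3, 0, 0],
])
# Labels: content topic, where "none" means no reminder should be sent
y_train = np.array(["none", "hard_braking", "turn_comfort", "none"])

reminder_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
reminder_model.fit(X_train, y_train)

topic = reminder_model.predict([[70, 2.8, 0, 1]])[0]
if topic != "none":
    print(f"send first reminder about: {topic}")
```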
In some embodiments, the processing device may determine whether to send the first reminder information, and/or determine the content of the first reminder information, based on the current user state. In some embodiments, the processing device may determine whether to send the first reminder information and its content based on one item of information of the current user state. For example, the processing device may send a first reminder message "please evaluate the automatic driving technique of the vehicle" when the user's idle condition reflects that the user is idle. For another example, if the user's emotion is panic, the processing device may send first reminder information such as "whether the vehicle is running steadily" or "whether the vehicle braked suddenly". For another example, if the user's vehicle usage reflects that the user has boarded the vehicle, the processing device may send a first reminder message such as "whether the vehicle started smoothly".
In some embodiments, the processing device may determine whether to send the first reminder information, and the content of the first reminder information, based on a variety of information of the current user state. For example, the processing device may determine whether to send the first reminder information and its content based on a preset rule combining various information of the current user state. For example, if the user's vehicle usage reflects that the user has boarded the vehicle but the user is busy, the processing device may not need to send the first reminder information. If the user's emotion is panic, even though the user is busy, the processing device may still send the first reminder information.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on one or more items of information of the current user state and one or more items of information of the current vehicle state. In some embodiments, the processing device may determine whether to send the first reminder information, and the content of the first reminder information, based on the current user state and the current vehicle state using preset rules. The preset rules can be set as required. For example, if the current driving road segment of the vehicle is a turning road segment and the user's emotion is panic, the processing device may send the first reminder information "whether the current turning is smooth". For another example, if the current driving state of the vehicle is overspeed and the user feels uneasy, the processing device may send the first reminder information "whether the current vehicle speed is too fast". For another example, if the user's emotion is uncomfortable and the current in-vehicle ambient temperature of the vehicle is high, the processing device may send the first reminder message "whether the current in-vehicle temperature is too high".
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the current user state and the current vehicle state using the second reminder model. The second reminder model is similar to the first reminder model.
In some embodiments, compared with the first reminder model, the content topic output by the second reminder model may also be related to the user's mood. Correspondingly, the content of the first reminder information can include content matching the user's emotion. For example, if the user's emotion is angry, the content of the first reminder information may include content that soothes the user.
Whether the first reminding information is sent or not is determined by comprehensively considering the current vehicle state and the current user state, so that the current running condition of the vehicle can be more accurately evaluated. Therefore, the user can be reminded in time under the condition that the user feedback is needed, and a basis is provided for the optimization of subsequent driving. Moreover, interference to the user can be avoided.
In some embodiments, the processing device may further determine whether to send the first reminding information based on the vehicle state after the preset time, the vehicle state at the target location, the user state after the preset time, and the user state at the target location, specifically referring to fig. 5 and its related description.
In some embodiments, the processing device may determine whether to send the first reminder information based on the user's feedback willingness. For example, when the user's feedback willingness reflects that the user does not want to provide feedback, the processing device does not send the first reminder information to the user terminal. The user's feedback willingness may be determined according to the user association information. See fig. 9 and its associated description for user association information.
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may send the first reminder information to the in-vehicle terminal in a text manner based on the vehicle state; in some embodiments, the processing device may send the first reminder information to the wearable device of the user in a vibrating manner based on the user state; in some embodiments, the processing device may send the first reminding information to a mobile phone end of the user in an image manner based on the vehicle state and the user state, and the user performs feedback in a manner of clicking an icon after receiving the first reminding information.
In some embodiments, the processing device may determine whether to send the first reminder information, and the content and/or manner of the first reminder information, based on subsequent status information. The subsequent state information comprises a vehicle state after a preset time, a user state after the preset time, a vehicle state at the target position and/or a user state at the target position.
The vehicle state after the preset time may be a vehicle state after the preset time starting from the current time. The preset time may be one or more. The preset time may be preset, for example, 10min, 20min, etc. The preset time may be determined according to a driving condition, a user state, and/or user-related information, etc. For example, if the current road is congested, the preset time may be extended. For another example, if it is determined that the user is not active for feedback according to the user association information, the preset time may be extended.
In some embodiments, the vehicle state after the preset time may be obtained by prediction. The vehicle state after the preset time may be predicted according to the current vehicle state and the preset time. For example, whether a bumpy driving event will occur after the preset time may be determined based on the vehicle's likely driving location after the preset time, and the driving location after the preset time may be predicted based on the current driving speed and the driving route.
In some embodiments, the prediction may be implemented by a first vehicle state prediction model. The first vehicle state prediction model can process the current vehicle state and the preset time to obtain the vehicle state after the preset time.
The second training sample used in the training of the first vehicle state prediction model comprises a sample current vehicle state and a sample preset time, and the label represents the vehicle state after the preset time. The first vehicle state prediction model may include, but is not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naïve Bayes, and the like.
In some embodiments, the input of the first vehicle state prediction model may include a navigation route (i.e., a first navigation route) used by the vehicle to travel currently, and the like, in addition to the preset time and the current vehicle state, and accordingly, the second training sample may include a sample first navigation route.
In some embodiments, the processing device may further optimize or update the vehicle state after the preset time based on the historical vehicle travel conditions at the location the vehicle reaches after the preset time. For example, if the processing device predicts that the vehicle reaches position X after the preset time and the predicted vehicle state after the preset time includes an abnormal traveling state of "bump", but the historical vehicle travel conditions at position X exceeding the threshold do not include "bump", the vehicle state after the preset time may be updated to "no bump". Optimizing or updating the predicted vehicle state according to the historical vehicle travel conditions at the position reached after the preset time lets the prediction draw on a number of historical travel conditions, so that the predicted vehicle state better matches the actual situation and the prediction accuracy is improved.
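The update rule above might look like the following sketch, assuming each historical record at the arrival position lists the driving events observed there; the 50% support threshold is an illustrative assumption.

```python
# A minimal sketch: drop predicted driving events that lack enough historical
# support at the predicted arrival position.
def refine_predicted_state(predicted_events, historical_events_at_position,
                           support_ratio=0.5):
    """predicted_events: set of predicted driving events, e.g. {"bump"}.
    historical_events_at_position: list of event sets from past trips at position X."""
    if not historical_events_at_position:
        return predicted_events
    refined = set()
    for event in predicted_events:
        support = sum(event in h for h in historical_events_at_position)
        if support / len(historical_events_at_position) >= support_ratio:
            refined.add(event)   # enough historical support, keep the event
    return refined

history = [{"bump"}, set(), set(), {"hard_braking"}]
print(refine_predicted_state({"bump"}, history))  # "bump" seen in 1/4 trips -> removed
```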
The user state after the preset time is similar to the vehicle state after the preset time, and is not described again. In some embodiments, predicting the user state after the preset time may be implemented by a first user state prediction model. The first user state prediction model can process the current user state and the preset time to obtain the user state after the preset time. The first user state prediction model can also process the user state, the preset time and the user association information to obtain the user state after the preset time. See fig. 9 and its associated description for user association information. The first user state prediction model is similar to the first vehicle state prediction model and will not be described in detail herein.
The target position in the vehicle state at the target position or the user state at the target position may be a preset position. For example, it may be set or determined in advance by a user terminal, a driver terminal, a processing device, or the like. The target location may be a point, an area, or a road segment.
The target location may be one or more. In some embodiments, the target position may be set based on preset rules. The preset rule may be to determine a target position every preset distance. The preset rule may also be to take a location where a travel event may occur as a target position. For example, a turning position, or a road segment or location with heavy pedestrian traffic (e.g., near a school), may be taken as a target position. The processing device, the user terminal, or the driver terminal may acquire the navigation route currently traveled by the vehicle (i.e., the first navigation route), and determine a location on the navigation route where a travel event may occur as a target position.
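A minimal sketch of these preset rules, assuming each route point carries its distance from the start and a type attribute, is given below.

```python
# A minimal sketch: mark a target position every preset distance along the
# first navigation route, and also mark track points where a travel event is
# likely (turns, school zones). Field names are illustrative assumptions.
def pick_target_positions(route_points, interval_m=2000):
    """route_points: list of dicts like
    {"location": (lat, lon), "dist_from_start_m": 0, "type": "turn"|"school"|"normal"}."""
    targets, next_mark = [], interval_m
    for p in route_points:
        if p["dist_from_start_m"] >= next_mark:        # rule 1: every preset distance
            targets.append(p)
            next_mark += interval_m
        elif p["type"] in ("turn", "school"):          # rule 2: likely travel event
            targets.append(p)
    return targets
```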
In some embodiments, the target position may be predicted based on feedback information, historical information, and the like, and specific details about the predicted target position may be referred to in fig. 7 and its related description, which are not described herein again.
In some embodiments, the vehicle state at the target location may be obtained by prediction. The vehicle state at the target position may be predicted according to the current vehicle state and the target position. In some embodiments, the prediction is achieved by a second vehicle state prediction model. The second vehicle state prediction model may process the current vehicle state and the target position to obtain the vehicle state at the target position. The third training sample used for training the second vehicle state prediction model comprises a sample current vehicle state and a sample target position, and the label is used to represent the vehicle state at the target position. In some embodiments, the second vehicle state prediction model may include, but is not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naïve Bayes, and the like.
In some embodiments, the processing device may also determine the travel condition at the target location based on historical travel conditions at the target location. For example, the history of the running condition at the target position, the current vehicle state, and the target position are input to the second vehicle state prediction model, and the running condition at the target position is determined.
In some embodiments, the processing device may further calculate a preset time for the vehicle to travel to the target position based on the target position and the current travel parameter, so as to obtain the vehicle state at the target position by using the first vehicle state prediction model based on the preset time and the current vehicle state.
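The calculation above reduces to simple arithmetic; in the sketch below, predict_state_after is a hypothetical stand-in for the first vehicle state prediction model.

```python
# A minimal sketch: estimate the time needed to reach the target position from
# the current travel parameters, then reuse the first vehicle state prediction
# model with that time as the preset time.
def time_to_target_min(distance_to_target_m, current_speed_kmh):
    speed_m_per_min = max(current_speed_kmh, 1e-3) * 1000 / 60  # avoid division by zero
    return distance_to_target_m / speed_m_per_min

def vehicle_state_at_target(current_state, distance_to_target_m, current_speed_kmh,
                            predict_state_after):
    preset_time_min = time_to_target_min(distance_to_target_m, current_speed_kmh)
    return predict_state_after(current_state, preset_time_min)

print(round(time_to_target_min(6000, 36), 1))  # 36 km/h -> 600 m/min -> 10.0 min
```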
In some embodiments, the prediction of the user state at the target location may be achieved by a second user state prediction model. The second user state prediction model can process the current user state and the target position to obtain the user state at the target position. The second user state prediction model is similar to the second vehicle state prediction model and will not be described in detail herein.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle state after the preset time, similar to determining whether to send the first reminder information based on the current vehicle state. For example, the processing device may determine whether a driving event occurs based on the vehicle state after the preset time, and if so, may send the first reminder information.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the user state after a preset time, similar to determining whether to send the first reminder information based on the current user state.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the preset time later vehicle state and the preset time later user state, similar to determining whether to send the first reminder information based on the current user state and the current vehicle state.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle state at the target location, similar to determining whether to send the first reminder information based on the current vehicle state.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the user status at the target location, similar to determining whether to send the first reminder information based on the current user status.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle state at the target location and the user state at the target location, similar to determining whether to send the first reminder information based on the current user state and the current vehicle state.
The first reminder information determined based on the subsequent state information is an early reminder sent to the user, that is, the user is reminded in advance to give feedback on the subsequent driving condition. In this case, the time for sending the first reminder information to the user terminal can be adjusted according to the actual situation. For example, it may be sent when the vehicle is a preset distance from the target location, or some time before the preset time is reached. For another example, the sending time may be determined based on the user status, the user terminal status, the driving environment, and/or the user-related information. For example, the first reminder may be sent when the user status or the user terminal status is idle before the preset time or the target location is reached.
In some embodiments, the processing device may determine the manner of the first reminder information based on a subsequent user state, a subsequent user terminal state, a subsequent driving environment, and user association information. Refer to fig. 6 and the related description thereof, which are not repeated herein.
In some embodiments, the subsequent user terminal status may include the user terminal status after the preset time and the user terminal status at the target location. Reference may be made to fig. 6 and its associated description regarding the status of the user terminal.
In some embodiments, the user terminal state after the preset time or at the target location may be obtained by prediction. For example, the chat content may be analyzed to predict whether the chat will be over after the preset time. In some embodiments, the processing device may predict the subsequent user terminal state based on the current user terminal state, the user association information, and/or the current user state.
The ways of the different embodiments can be combined. For example, in some embodiments, subsequent user terminal states may be predicted based on user emotions and current user terminal states. For example, if the user emotion is "happy" and the current user terminal state is game play, the predicted subsequent user terminal state is game play. In some embodiments, subsequent user terminal states may be predicted based on user idle conditions, user mood, and current user terminal state.
In some embodiments, the subsequent driving environment may include the driving environment after the preset time and the driving environment at the target position. In some embodiments, the driving environment after the preset time and the driving environment at the target location may be obtained from a service platform (e.g., a map service or a weather service, etc.).
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may send the first reminder information to the user terminal in real time based on the status information; in some embodiments, the processing device may send the first reminding information to the user terminal in real time in an image manner based on the current state information, and send the first reminding information to the user terminal in a voice manner at a time before reaching the preset time or the target position based on the subsequent state information.
In some embodiments, the first reminder information determined based on the subsequent status information may be displayed directly on a navigation interface of the user terminal. In this way, the user can know the content needing feedback and the position of the feedback in advance by viewing the navigation route.
FIGS. 5A-5D show example user interfaces. FIG. 5B is a travel interface of the vehicle. As shown in FIG. 5B, the vehicle is traveling from the current location A to location B1, and the processing device may display, on the interface of the user terminal, the estimated time of traveling from the current location to the next location, to prompt the user to give feedback when the next location is reached. For example, the user terminal may display "going to B1, expected feedback time: 20 min".
In some embodiments, the processing device may display the road condition at the driving position corresponding to the subsequent vehicle state (i.e., the vehicle state after the preset time and the vehicle state at the target position) on the current position feedback interface and the travel interface of the vehicle. Taking fig. 5A or 5B as an example, the road segment icons from position A to position B1 displayed on the user terminal interface indicate that the segment is a delayed-travel segment, for example, a muddy road or a road section under construction that takes longer than usual to travel; the road segment icons from position B2 to position B3 indicate that the segment is a segment with a school; the road segment icons from position B3 to position B4 indicate that the segment is a segment with more traffic lights.
In some embodiments, the processing device may also display the road condition at the current location of the vehicle on the travel interface of the vehicle. For example, as shown in FIG. 5B, the floating window on the right in FIG. 5B may show that the road at the vehicle's current location is muddy/under construction.
In some embodiments, the processing device may dynamically adjust the current position feedback interface and the travel interface of the vehicle based on preset instructions of the user. The preset instruction may be an instruction generated after the user feeds back the driving condition at the current position. Still taking fig. 5A as an example, if the user clicks the emoticon, that is, the driving condition at the current position is fed back, the processing device may switch the current position feedback interface to the driving interface of the vehicle.
In some embodiments, the user may zoom the current position feedback interface and the travel interface of the vehicle on the user terminal interface. For example, taking fig. 5A or 5B as an example, the user may click the "-" icon and the "+" icon displayed on the current position feedback interface and the travel interface of the vehicle to zoom the interface.
In the user interface shown in fig. 5A, the current vehicle position is position A, and the vehicle positions after the preset times are positions B1, B2 and B3, where B1, B2 and B3 are the vehicle position after 10 min, the vehicle position after 20 min, and the vehicle position after 30 min, respectively. A first reminding message for feedback on the driving condition after 10 min is displayed on the current user interface: "it is predicted that position B1 will be reached in 10 min, please prepare to feed back the driving condition after 10 min". The user can click one of the three emoticons on the interface to give feedback; from left to right, the three emoticons represent very satisfied, generally satisfied, and unsatisfied. After 10 min, the user interface is updated to FIG. 5B, the current vehicle position is updated to position B1, B2 and B3 become the vehicle position after 10 min and the vehicle position after 20 min respectively, and the first reminding message on the current interface is updated to "it is predicted that position B2 will be reached in 10 min, please prepare to feed back the driving condition after 10 min".
As shown in FIG. 5C, in the user interface the current vehicle position is position A and the target positions include positions C1, C2, C3, C4, C5 and C6, which are, respectively, a turning position, a school road section, a turning position, a traffic light position and a turning position. On the current user interface, a first reminding message for feedback on the driving condition at position C1 is displayed: "please prepare to feed back whether the turn at C1 is smooth", and the user can click one of the three emoticons on the interface to give feedback. When the vehicle has traveled to C1, the user gives feedback on the driving condition at that position (specifically, whether the turn was smooth). The user interface then proceeds to FIG. 5D, where the current vehicle position is updated to C1, the target positions are updated to positions C2, C3, C4, C5 and C6, and the first reminding message on the current interface is updated to "please prepare to feed back whether the turn at C2 is smooth".
Both the vehicle state after the preset time and the vehicle state at the target position reflect the future driving condition of the vehicle. Sending the first reminder information based on the future driving condition allows the user to be reminded in advance, before the preset time or the target position is reached, to give feedback. This avoids the situation in which, because the first reminder information is sent with a delay, the vehicle has already passed the position corresponding to the preset time or the target position before the user submits the feedback information.
FIG. 6 is a flow chart illustrating sending a first reminder message according to some embodiments of the present description. As shown in fig. 6, the process 600 may include the following steps 610 and 620. In some embodiments, flow 600 may be performed by a processing device (e.g., processing device 112).
Step 610, determining a feedback reminding mode. In some embodiments, step 610 may be performed by an acquisition module.
In some embodiments, the feedback alert mode may be used to determine the representation mode of the first alert information. Such as voice, text, images, video, vibrations, etc. The feedback reminding mode can be related to at least one of the user state, the user terminal state, the driving environment and the user associated information. For specific details of the user status, reference may be made to step 410 and its related description, which are not described herein again. For specific details of the driving environment, reference may be made to step 210 and its related description, which are not described herein again.
The user terminal state may reflect the terminal power, the terminal standby time, the terminal usage time and whether the terminal is being used, the current usage mode of the terminal (watching video, listening to songs, etc.), etc.
In some embodiments, the feedback alert mode may be determined based on the user state.
The ways of the different embodiments can be combined. For example, in some embodiments, when the idle condition of the user reflects that the user is reading a book, the feedback reminding manner may be voice; in some embodiments, when the idle condition of the user reflects that the user is listening to a song, the feedback reminding mode can be a text display or an image; in some embodiments, when the user's emotion is anger, then the feedback alert mode may be voice.
In some embodiments, the feedback alert mode may be determined based on the state of the user terminal.
The ways of the different embodiments can be combined. For example, in some embodiments, when the user terminal state reflects that the user is talking on a speaker or playing music, etc., the feedback reminding manner may be an image display or text; in some embodiments, when the state of the user terminal reflects that the mobile phone of the user is in a low power or standby state, the feedback reminding mode may be performed by the vehicle-mounted device or the wearable device, for example, by a voice display, an image display, a text display, or the like of the vehicle-mounted device or the smart watch.
In some embodiments, the feedback alert mode may be determined based on the driving environment.
The ways of the different embodiments can be combined. For example, in some embodiments, when the in-vehicle environment information reflects that the environment is noisy and it is inconvenient to listen to audio, the feedback reminding manner may be through image display or text, and when the in-vehicle environment information reflects that the in-vehicle light is dim or the light is too bright, the feedback reminding manner may be voice; in some embodiments, when the road condition environment is a congested road segment, the braking frequency of the vehicle is high, the user may be uncomfortable due to long-time viewing of the mobile phone screen, and the feedback reminding mode may be voice or vibration.
In some embodiments, the feedback reminding manner may be further determined based on the user association information. For example, if the user association information reflects that the user's age exceeds a preset age, the text in the feedback reminder can be enlarged and the sound amplified. For another example, if the user information reflects that the user's common language is English, the text and speech in the feedback reminder can be switched to English. For another example, if the user information reflects that the user is Chinese, the text and speech in the feedback reminder can be switched to Chinese.
In some embodiments, the feedback reminding manner may also be determined based on multiple items among the user state, the user terminal state, the driving environment, and the user association information. The processing device may process the various items of information among the user state, the user terminal state, the driving environment, and the user association information based on a preset rule or a trained feedback-reminding-mode determination model, and determine the feedback reminding manner.
For example, as described above, each of the user state, the user terminal state, the driving environment, and the user association information may include multiple types, and each type may be preset with a corresponding feedback reminding manner. The preset rule may be to score the various items of information in the user state, the user terminal state, the driving environment, and the user association information, fuse the scores of feedback reminding manners of the same type (e.g., by averaging, weighted summation, weighted averaging, etc.), and determine the final feedback reminding manner (e.g., select the feedback reminding manner with the highest fused score).
For another example, the processing device may, based on a preset rule or a trained feedback-reminding-mode determination model, process multiple items of information in the user state, the user terminal state, the driving environment, and the user association information at the same time to determine a plurality of initial feedback reminding manners, and then determine the final feedback reminding manner from the plurality of initial feedback reminding manners based on a preset screening rule. The preset screening rule can be random screening or set according to requirements.
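One possible shape of the rule-based variant is sketched below; the vote table, weights, and default mode are illustrative assumptions rather than rules from the disclosure.

```python
# A minimal sketch: each item of information votes for a reminder mode, votes
# are weighted and fused, and the mode with the highest fused score wins.
from collections import defaultdict

MODE_VOTES = {                       # (information item, value) -> preferred mode
    ("user_state", "reading"): "voice",
    ("user_state", "listening_to_music"): "text",
    ("terminal_state", "on_speaker_call"): "image",
    ("terminal_state", "low_battery"): "in_vehicle_voice",
    ("environment", "noisy_cabin"): "text",
    ("environment", "congested_road"): "vibration",
}

def choose_reminder_mode(observations, weights=None):
    """observations: dict like {"user_state": "reading", "environment": "noisy_cabin"}."""
    weights = weights or {}
    scores = defaultdict(float)
    for item, value in observations.items():
        mode = MODE_VOTES.get((item, value))
        if mode:
            scores[mode] += weights.get(item, 1.0)
    return max(scores, key=scores.get) if scores else "text"  # fall back to a default mode

print(choose_reminder_mode({"user_state": "reading", "environment": "noisy_cabin"},
                           weights={"environment": 2.0}))     # -> "text"
```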
In some embodiments, the feedback reminding mode can be changed in real time based on the actual conditions. For example, when the user is making a call, the feedback reminding mode can be changed from the original voice display to a text display or an image display.
Step 620, generating first reminding information based on the feedback reminding manner, and sending the first reminding information to the user terminal. In some embodiments, step 620 may be performed by an acquisition module.
For specific details of generating and sending the first reminder information to the user terminal, reference may be made to step 420 and related description thereof, which are not described herein again. In some embodiments, the processing device may send the first reminding information to the user terminal through a transmission method such as short message, network, or bluetooth.
In some embodiments of the present description, the corresponding feedback reminding manner is determined intelligently and dynamically based on various information (e.g., a user state, a user terminal state, a driving environment, user association information, and the like), so that the fixed feedback reminding manner is prevented from disturbing the user, and the riding experience of the user is improved.
FIG. 7 is a flow diagram illustrating predicting a target location according to some embodiments of the present description. As shown in fig. 7, the process 700 may include the following steps 710 and 720. In some embodiments, flow 700 may be performed by a processing device (e.g., processing device 112).
Step 710, obtaining historical feedback information related to at least one of the track point and the user on the first navigation route, and historical driving conditions corresponding to the historical feedback information. In some embodiments, step 710 may be performed by an acquisition module.
In some embodiments, the first navigation route may be a navigation route being used by the vehicle. As previously described in step 210, the travel routes may include at least one navigation route, and in some embodiments, the first navigation route may be one selected by the user from the travel routes, or may be an optimal navigation route automatically selected by the processing device from the travel routes (e.g., the shortest travel navigation route, the shortest time-consuming navigation route, or the least expensive navigation route, etc.).
In some embodiments, historical feedback information from historical users on the historical driving conditions of historical vehicles may be stored in a storage device (e.g., database 140) or the platform. It can be understood that each historical driving condition of a historical vehicle corresponds to at least one piece of historical feedback information. For example, the correspondence among the historical user, the historical vehicle, the historical driving condition, and the historical feedback information may be stored in the storage device.
The track points on the first navigation route may be location points included on the first navigation route. In some embodiments, the acquisition module may acquire historical feedback information related to the track points on the first navigation route. For example, the obtaining module may obtain historical feedback information corresponding to the track point on the first navigation route and historical driving conditions corresponding to the historical feedback information from a storage device or a platform.
In some embodiments, the acquisition module may acquire historical feedback information related to the user. For example, the obtaining module may obtain, from a storage device or a platform, historical feedback information previously given by the user and the historical driving conditions corresponding to that feedback information.
In some embodiments, the acquisition module may acquire historical feedback information related to the user and the track point on the first navigation route. For example, the obtaining module obtains historical feedback information of a historical user for track points on the first navigation route and historical driving conditions corresponding to the historical feedback information from a storage device or a platform, wherein the historical user is the user.
Step 720, predicting a target position on the first navigation route based on the historical feedback information and the historical driving conditions. In some embodiments, step 720 may be performed by the acquisition module.
In some embodiments, the processing device may predict the target location on the first navigation route based on historical feedback information (which may be referred to as "first historical feedback information") related to the track points and/or the user on the first navigation route. For example, the processing device may determine a track point corresponding to negative feedback (e.g., dissatisfied feedback) in the first historical feedback information as a target position. For example, if the user or other users previously gave dissatisfied feedback on the historical driving conditions at track point 1 and track point 2 of the first navigation route, the processing device may determine track points 1 and 2 as target positions. For another example, the processing device may determine, as a target position, a track point whose number of negative feedback entries in the first historical feedback information is greater than a threshold. The threshold number of times can be set according to actual requirements, for example, 3 times or 5 times.
In some embodiments, the processing device may predict the target location on the first navigation route based on a historical travel condition (which may be referred to as a "first historical travel condition") corresponding to the first historical feedback information. For example, the processing device may determine, as the target position, a trajectory point in the history of the travel condition on which the travel event is reflected. By way of example, if the driving event "turn" is reflected at track points 2 and 3, track points 2 and 3 can be determined as target positions.
In some embodiments, the processing device may predict a target location on the first navigation route based on the first historical feedback information and the first historical driving conditions. In some embodiments, the processing device may determine, as the target position, a trajectory point in the first historical driving situation in which the driving event is reflected and whose corresponding first historical feedback information is negative feedback. Still taking the above example as an example, the trajectory point 2 may be determined as the target position.
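These rules can be sketched as follows, assuming each historical record carries the track point, the feedback polarity, and any driving event observed there; the record format and the threshold are illustrative assumptions.

```python
# A minimal sketch: a track point becomes a target position if it has accumulated
# more than a threshold number of negative feedback entries, or if a driving event
# was recorded there and the corresponding feedback was negative.
def predict_target_positions(history, negative_threshold=3):
    """history: list of dicts like
    {"track_point": "P1", "feedback": "negative"|"positive", "driving_event": "turn" or None}."""
    negative_counts, targets = {}, set()
    for rec in history:
        if rec["feedback"] == "negative":
            tp = rec["track_point"]
            negative_counts[tp] = negative_counts.get(tp, 0) + 1
            if negative_counts[tp] > negative_threshold:
                targets.add(tp)                      # rule 1: too much negative feedback
            if rec.get("driving_event"):
                targets.add(tp)                      # rule 2: driving event plus negative feedback
    return targets
```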
In some embodiments, the processing device may predict a target location on the first navigation route based on a trained target position prediction model. Specifically, the target position prediction model may process the first historical feedback information and the first historical driving conditions, and output the target position. The target location prediction model may include, but is not limited to, a graph neural network model, a linear regression model, a neural network model, a decision tree, a support vector machine, naïve Bayes, and the like.
When the target position prediction model is a graph neural network model, the nodes of the graph neural network model are track points on the first navigation route, and the node features may include at least: the first historical feedback information, the first historical driving conditions, and the attributes of the track points. The attributes of a track point may reflect the type of the track point, e.g., turning road segment, congested road segment, near a school, etc. The edges of the graph neural network model are relations between the track points, and the edge features may include the distance relation, whether the track points appear on the same navigation route and/or the same road segment, and the like. The edge features and node features are input into the graph neural network model, and each node of the model outputs whether the corresponding track point is a target position.
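The following NumPy sketch (untrained, with random illustrative weights) shows this graph-style scoring in miniature: track points are nodes, edges connect related track points, and each node receives a probability of being a target position. It stands in for, and is not, the graph neural network model described above.

```python
# A minimal graph-convolution-style sketch for per-node target-position scores.
import numpy as np

def gcn_target_scores(node_features, adjacency, w1, w2):
    """One round of neighbor aggregation followed by a per-node sigmoid score."""
    a_hat = adjacency + np.eye(adjacency.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)
    a_norm = a_hat / deg                                      # row-normalized aggregation
    hidden = np.maximum(a_norm @ node_features @ w1, 0.0)     # message passing + ReLU
    logits = (hidden @ w2).ravel()
    return 1.0 / (1.0 + np.exp(-logits))                      # probability per track point

rng = np.random.default_rng(0)
features = rng.random((4, 3))   # 4 track points x [neg. feedback rate, event rate, attribute code]
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
scores = gcn_target_scores(features, adjacency, rng.random((3, 8)), rng.random((8, 1)))
print(scores > 0.5)             # which track points would be flagged as target positions
```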
As previously described, the first historical feedback information may be classified into several categories: the user's past feedback information on track points of the first navigation route, other users' past feedback information on track points of the first navigation route, and the user's past feedback information on other positions (i.e., positions other than the track points of the first navigation route). In some embodiments, weights may be set for different historical feedback information when determining the target location based on the historical feedback information. The weights may be set according to circumstances. For example, the user's own past feedback on the track points may be given the highest weight, so that the determined target position better matches the user's needs.
In some embodiments of the present description, the target position is determined according to the historical feedback information and the historical driving conditions, so that track points on the navigation route being used by the vehicle that have historically received feedback or at which driving events have historically occurred can be determined as target positions. The driving parameters of the vehicle at the target position can then be adjusted according to the user's feedback on the target position, so that problems similar to those fed back previously are avoided when the vehicle passes the target position, improving the riding experience of the user.
FIG. 8 is a flow diagram illustrating determining status information according to some embodiments of the present description. As shown in fig. 8, the process 800 may include steps 810 through 840 described below. In some embodiments, flow 800 may be performed by a processing device (e.g., processing device 112).
Step 810, obtaining image information of the user from the first terminal. In some embodiments, step 810 may be performed by the acquisition module.
The first terminal may be a terminal device for acquiring an image. For example, a camera device, a monitoring device, a user terminal with a camera, or an in-vehicle device with a camera, etc. The image information of the user may be an image of the user photographed by the first terminal or an image frame included in a video of the user photographed by the first terminal.
In some embodiments, the first terminal may obtain image information of the user in real time and upload the image information to the database. The processing device may obtain image information of the user from the first terminal in real time. The processing device may also retrieve image information of the user from a database.
At step 820, vehicle-related audio information is obtained from the second terminal. In some embodiments, step 820 may be performed by an acquisition module.
In some embodiments, the second terminal may be a terminal device that obtains audio information. For example, a sound recording apparatus, an image pickup apparatus, a user terminal with a sound recording function, or an in-vehicle apparatus with a sound recording function, and the like. The second terminal and the first terminal may be the same or different.
Vehicle-related audio information refers to audio data emitted by a vehicle or emitted within the vehicle's internal environment. In some embodiments, the vehicle-related audio information may include: audio emitted by the vehicle during driving (e.g., engine sound, brake sound, etc.), audio emitted or received by the user using a user terminal, or communication audio between the user and a driver of the vehicle during driving, etc.
In some embodiments, the second terminal may obtain vehicle related audio information in real time and upload to the database. The processing device may obtain vehicle-related audio information from the second terminal in real-time. The processing device may also retrieve vehicle-related audio information from a database.
Step 830, extracting image features of the image information and audio features of the audio information. In some embodiments, step 830 may be performed by the acquisition module.
In some embodiments, the image features may include color features, texture features, shape features, spatial features, and the like. In some embodiments, the processing device may extract image features through an image feature extraction algorithm or an image feature extraction model. For example, the image feature extraction model may include a convolutional neural network model or the like.
In some embodiments, the audio features may include sampling frequency, bit rate, number of channels, frame rate, zero-crossing rate, short-time energy, and short-time autocorrelation coefficients, among others. In some embodiments, the processing device may extract the audio features through an audio feature extraction algorithm. The audio feature extraction algorithm may include, but is not limited to, Linear Prediction Coefficients (LPC), Perceptual Linear Prediction Coefficients (PLP), Linear Prediction Cepstral Coefficients (LPCC), Mel-Frequency Cepstral Coefficients (MFCC), and the like.
In some embodiments, the processing device pre-processes the image information or audio information prior to extracting the audio features or image features. The preprocessing of the image information comprises: smoothing, noise elimination, edge enhancement, edge feature extraction, and the like. The pre-processing of the audio information comprises: pre-emphasis, framing, windowing, etc.
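As one possible realization of the audio branch, the sketch below uses the open-source librosa library to compute MFCC, zero-crossing rate, and short-time energy after a pre-emphasis step; the file path and the summarization into a fixed-length vector are assumptions, and librosa is only one common choice for these algorithms.

```python
# A minimal sketch of extracting the audio features listed above with librosa.
import numpy as np
import librosa

def extract_audio_features(path, n_mfcc=13, frame_length=2048, hop_length=512):
    y, sr = librosa.load(path, sr=None)                        # keep the native sampling rate
    # Pre-processing: pre-emphasis to boost high frequencies before framing
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # shape (n_mfcc, frames)
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_length,
                                             hop_length=hop_length)
    # Short-time energy computed per frame
    frames = librosa.util.frame(y, frame_length=frame_length, hop_length=hop_length)
    energy = (frames ** 2).sum(axis=0, keepdims=True)
    # Summarize each feature over time into one fixed-length vector
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1), energy.mean(axis=1)])
```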
At step 840, state information is determined based on the processing of the image features and the audio features. In some embodiments, step 840 may be performed by an acquisition module.
In some embodiments, the processing may be performed by a pre-trained feature processing model. The pre-trained feature processing model may determine the state information based on the image features and the audio features. In some embodiments, the fourth training sample of the feature processing model includes sample image features and sample audio features of a sample vehicle, and the label is used to characterize the state information of the sample vehicle. When the determined state information is a vehicle state, the label represents the vehicle state, e.g., the label represents a driving event of the vehicle: start, stop, accelerate, or turn, etc. When the determined state information is a user state, the label represents the user state, e.g., the label represents the user's mood: anger, unease, panic, happiness, etc. The processing device may iteratively update the parameters of the initial feature processing model based on a plurality of training samples such that the loss function of the model satisfies a preset condition, e.g., the loss function converges, or the loss function value is less than a preset value. Model training is finished when the loss function meets the preset condition, and the trained feature processing model is obtained.
In some embodiments, the image feature extraction model and the feature processing model may be trained in an end-to-end manner. Specifically, sample image information of a sample user is input into an initial image feature extraction model; the image features output by the initial image feature extraction model, together with the sample audio features, are input into an initial feature processing model; a loss function is constructed based on the label representing the state information of the sample vehicle and the result output by the initial feature processing model; and the parameters of the initial image feature extraction model and the initial feature processing model are updated iteratively based on the loss function until the preset condition is met.
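A minimal PyTorch sketch of this end-to-end scheme is given below; the network sizes, input shapes, and random stand-in data are assumptions, and the two modules stand in for the image feature extraction model and the feature processing model rather than reproducing the disclosed models.

```python
# A minimal sketch of jointly training an image feature extractor and a fusion head.
import torch
import torch.nn as nn

class ImageFeatureExtractor(nn.Module):
    def __init__(self, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, out_dim))

    def forward(self, img):
        return self.net(img)

class FeatureProcessingModel(nn.Module):
    def __init__(self, img_dim=32, audio_dim=15, n_states=4):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(img_dim + audio_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_states))

    def forward(self, img_feat, audio_feat):
        return self.head(torch.cat([img_feat, audio_feat], dim=1))

extractor, processor = ImageFeatureExtractor(), FeatureProcessingModel()
optimizer = torch.optim.Adam(list(extractor.parameters()) + list(processor.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data
images = torch.randn(8, 3, 64, 64)        # sample image information of sample users
audio_feats = torch.randn(8, 15)          # sample audio features
labels = torch.randint(0, 4, (8,))        # state labels, e.g. 0=angry ... 3=happy

logits = processor(extractor(images), audio_feats)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```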
In some embodiments of the present description, the current user state and the current vehicle state may be determined in real time through the user image information and the vehicle audio information acquired in real time, so as to facilitate subsequent acquisition of the current state information in real time and transmission of the first reminding information.
FIG. 9 is a flow chart illustrating operation of a travel system according to some embodiments of the present description. As shown in fig. 9, the process 900 may include steps 910 and 920 described below. In some embodiments, flow 900 may be performed by a processing device (e.g., processing device 112).
Step 910, determining the acceptance of the feedback information based on the user association information. In some embodiments, step 910 may be performed by an operational module.
The user-related information refers to information related to the user. The user association information includes: user basic information and user historical behavior information. The user basic information includes the user's credit rating, gender, age, occupation, hobbies, how long the user has been registered on the platform, the user's feedback willingness and degree of feedback willingness, and the like. The user historical behavior information includes: the user's historical order information, historical evaluation information, historical feedback information, and the like. In some embodiments, the processing device may obtain the user association information from a storage device or a platform (e.g., an online car-hailing platform).
In some embodiments, the acceptability of the feedback information may reflect the degree to which the feedback information is accepted or approved. The receptivity may be represented by a score or a grade. For example, the higher the rating or score, the higher the degree of acceptance.
In some embodiments, the processing device may determine the acceptability of the feedback information based on a rule and the user association information. The rule refers to a correspondence between the user association information and the acceptability, and the correspondence may be preset. For example, if the user's credit rating is high, the acceptability is high. For another example, if the user's credit rating is good and the user has been registered on the platform for 2 years, the acceptability is medium.
In some embodiments, the processing device may determine the acceptability of the feedback information based on processing of the user association information by an acceptability prediction model. The model types may include, but are not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naïve Bayes, and the like. The model can be obtained by training on a plurality of labeled fifth training samples. The fifth training sample may include sample user association information of a sample user, and the label may characterize the acceptability of the sample user's feedback information. The sample user association information may be historical data and/or user historical behavior information acquired by a worker after multiple ride trials. The labels may be obtained by manual labeling; for example, the acceptability label may be determined by staff who test the feedback, based on a comparison of the current feedback with historical feedback.
As described above, the first reminder information may also be sent to other user terminals, and accordingly other users may return other feedback information. In some embodiments, the other feedback information may be used to assess the acceptability of the feedback information input by the user at the user terminal. For example, the acceptability may be assessed based on the similarity between the feedback information and the other feedback information; the similarity may be positively correlated with the acceptability.
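One simple way to realize this similarity idea is sketched below, where acceptability is blended from a prior and the rate of agreement with other users' feedback on the same driving condition; the label encoding and blend weights are illustrative assumptions.

```python
# A minimal sketch: acceptability rises with agreement between the user's feedback
# and other users' feedback on the same driving condition.
def feedback_acceptability(user_feedback, other_feedbacks, base=0.5):
    """user_feedback / other_feedbacks entries are labels such as
    'satisfied' or 'dissatisfied' for the same driving condition."""
    if not other_feedbacks:
        return base                                    # no reference, fall back to a neutral score
    agreement = sum(f == user_feedback for f in other_feedbacks) / len(other_feedbacks)
    return 0.5 * base + 0.5 * agreement                # blend a prior with the agreement rate

score = feedback_acceptability("dissatisfied", ["dissatisfied", "dissatisfied", "satisfied"])
print(round(score, 2))  # 0.58
```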
Step 920, determining whether the acceptability meets a preset condition, and operating the driving system in response to the condition being met. In some embodiments, step 920 may be performed by an operation module.
In some embodiments, the preset condition may be specifically set according to actual conditions. For example, the acceptability is above a preset threshold. In some embodiments, the processing device may operate the driving system in response to the condition being satisfied. For specific details of operating the driving system, reference may be made to fig. 10 to 17 and the related description thereof, which are not described herein again.
As can be seen from the above description, the embodiments of the present disclosure may operate the driving system only when the acceptability of the user's feedback satisfies the preset condition, which avoids operating the driving system incorrectly on the basis of untrustworthy feedback information. For example, if the user's credit rating is poor and/or the user has a very large number of complaints, the user may not be giving feedback that reflects the actual situation, and the reliability of the user's feedback information is low.
FIG. 10 is a flow chart illustrating operation of a travel system according to some embodiments of the present description. As shown in fig. 10, the process 1000 may include steps 1010 and 1020 described below. In some embodiments, flow 1000 may be performed by a processing device (e.g., processing device 112).
At step 1010, a plurality of second navigation routes are generated based on the current location and the destination of the vehicle. In some embodiments, step 1010 may be performed by an operations module.
In some embodiments, the current position of the vehicle may be obtained through a positioning technique, such as a positioning system like GPS or GNSS. In some embodiments, the processing device may automatically generate the plurality of second navigation routes based on the current location and the destination of the vehicle.
Step 1020, determining a target navigation route from the plurality of second navigation routes based on the feedback information and the driving condition corresponding to the feedback information, and updating the first navigation route based on the target navigation route. In some embodiments, step 1020 may be performed by an operations module.
The approaches of different embodiments may be combined. For example, in some embodiments, if the feedback information reflects that the user considers the vehicle unsteady at the multiple turns included in the driving condition, the processing device may select a route with fewer turns from the second navigation routes as the target navigation route; in some embodiments, if the feedback information reflects that the user is dissatisfied with the multiple congested road segments included in the driving condition, the processing device may select a route with clear roads from the second navigation routes as the target navigation route.
As previously mentioned, the first navigation route is the navigation route being used by the vehicle. In some embodiments, the processing device may replace the route from the current location to the destination in the first navigation route with the target navigation route, resulting in an updated first navigation route.
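For illustration, the following sketch shows one way steps 1010-1020 could be realized: candidate second navigation routes are scored against the complained-about driving condition and the best candidate is spliced into the first navigation route. The route fields and complaint categories are assumptions made for the example.

```python
# Illustrative sketch of selecting a target navigation route and updating the
# first navigation route. All field names are hypothetical.
def choose_target_route(second_routes, feedback):
    """Pick the candidate route that best avoids the disliked driving condition."""
    if feedback["complaint"] == "too_many_turns":
        return min(second_routes, key=lambda r: r["turn_count"])
    if feedback["complaint"] == "congestion":
        return min(second_routes, key=lambda r: r["congestion_level"])
    return second_routes[0]  # no specific complaint: keep the default candidate

def update_first_route(first_route, current_index, target_route):
    """Replace the remaining part of the first route with the target route."""
    return first_route[:current_index] + target_route["waypoints"]

second_routes = [
    {"turn_count": 8, "congestion_level": 0.2, "waypoints": ["B1", "B2", "D"]},
    {"turn_count": 3, "congestion_level": 0.5, "waypoints": ["C1", "C2", "D"]},
]
target = choose_target_route(second_routes, {"complaint": "too_many_turns"})
print(update_first_route(["A0", "A1", "A2", "D"], 2, target))
```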
In some embodiments, the processing device may display the updated navigation route to the user so that the user can confirm whether to apply the update or adjust it. The content of the target navigation route may be highlighted, for example by color highlighting, by bolding the route, or the like.
In some embodiments, the processing device may further display a comparison of the updated first navigation route with the first navigation route before the update. The compared content may include, but is not limited to, travel time, road conditions, and driving events, for example the number of turning road segments, the number of road segments under maintenance, and the degree of road congestion.
Since the first navigation route is updated, the target position determined on the first navigation route is also updated, and further, the processing device may display the first reminding information at the target position corresponding to the target navigation route (i.e., the updated first navigation route). For the first reminding information, reference may be made to fig. 4 and 5 and the related description thereof, which are not described herein again.
By updating the first navigation route based on the feedback information, some embodiments of the present description can prevent the user from repeatedly passing through the same or similar positions about which the user has already given negative feedback, thereby improving the user's riding experience.
FIG. 11 is a flow chart illustrating operation of a travel system according to some embodiments of the present description. As shown in fig. 11, the process 1100 may include steps 1110 and 1120 described below. In some embodiments, flow 1100 may be performed by a processing device (e.g., processing device 112).
Step 1110, determining a target position from the first navigation route based on the feedback information and the driving condition corresponding to the feedback information. In some embodiments, step 1110 may be performed by an operational module.
As previously mentioned, the first navigation route is the navigation route being used. In some embodiments, the processing device may determine, based on the attitude of the user reflected by the feedback information, whether to use the driving road segment or driving position corresponding to the feedback information as a reference position; for example, if the feedback information is negative feedback, that road segment or position is used as the reference position. Further, the processing device determines, from the first navigation route, a target position related to the reference position. The target position related to the reference position may be a position with a driving condition similar to that of the reference position; for example, if the reference position is a sharp turn, the target position is also a sharp turn.
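The following sketch illustrates this matching of target positions to a reference position; the condition labels and field names are assumptions made for the example.

```python
# Illustrative sketch of step 1110: after negative feedback, positions on the
# first navigation route whose driving condition resembles the reference
# position are marked as target positions.
def find_target_positions(first_route_points, reference_point, feedback_is_negative):
    if not feedback_is_negative:
        return []
    return [
        p for p in first_route_points
        if p["condition"] == reference_point["condition"]  # e.g. both are sharp turns
    ]

route_points = [
    {"id": 1, "condition": "straight"},
    {"id": 2, "condition": "sharp_turn"},
    {"id": 3, "condition": "sharp_turn"},
]
print(find_target_positions(route_points, {"id": 0, "condition": "sharp_turn"}, True))
```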
The approaches of different embodiments may be combined. For example, in some embodiments, the target position determined in step 1110 (hereinafter referred to as the second target position) may be a target position predicted based on the historical feedback information and the historical driving conditions in step 720 (hereinafter referred to as the first target position); in some embodiments, the processing device may further screen the first target positions based on the feedback information and the corresponding driving condition to obtain the second target positions. For example, if the feedback information reflects that the vehicle speed in the driving condition is unsatisfactory, the corresponding first target positions are screened out as second target positions. As another example, if the feedback information reflects that a preset driving event occurred in the driving condition, the first target positions associated with that driving event are screened out as second target positions. The preset driving event can be set according to actual requirements.
Step 1120, adjusting the driving parameters of the vehicle at the target position. In some embodiments, step 1120 may be performed by an operation module.
For specific details of the driving parameters, reference may be made to step 210 and the related description thereof, which are not described herein again. In some embodiments, the processing device may dynamically adjust the driving parameters based on the actual conditions.
In some embodiments, the manner of adjusting the driving parameters may be determined according to the feedback information. For example, if the user feeds back that the vehicle speed is too fast, the vehicle speed is reduced. As another example, if the user feeds back that the ride is severely bumpy, the vehicle slows down and/or bypasses the bumps in the road segment.
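For illustration, the sketch below maps the content of the feedback to a parameter adjustment; the adjustment rules, step sizes, and parameter layout are assumptions made for the example.

```python
# Illustrative sketch of step 1120: adjusting driving parameters at the target
# position according to the feedback content. Values are hypothetical.
def adjust_parameters(params, feedback_text):
    adjusted = dict(params)
    if "too fast" in feedback_text:
        adjusted["speed_kmh"] = max(adjusted["speed_kmh"] - 10, 30)
    if "bumpy" in feedback_text:
        adjusted["speed_kmh"] = max(adjusted["speed_kmh"] - 15, 20)
        adjusted["avoid_rough_surface"] = True
    return adjusted

print(adjust_parameters({"speed_kmh": 70, "avoid_rough_surface": False}, "too fast and bumpy"))
```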
In some embodiments, the processing device may further acquire historical feedback information relating to at least one of the target position and the user (referred to as "second historical feedback information") and the historical driving condition corresponding thereto (referred to as "second historical driving condition"), and adjust the driving parameters of the vehicle at the target position based on the second historical feedback information and its corresponding second historical driving condition.
The historical feedback information related to the target position may include the past feedback information that all historical users have given for the target position. For the historical feedback information related to the user, reference may be made to fig. 16 and the related description. The historical feedback information related to both the user and the target position may include the past feedback information that this user has given for the target position.
In some embodiments, the processing device may determine the adjusted driving parameters at the target position based on the second historical feedback information and its corresponding second historical driving condition. For example, the historical driving parameters corresponding to positive feedback in the second historical feedback information are processed (for example, by a weighted average), and the processed parameters are used as the adjusted driving parameters at the target position. As another example, the second historical driving conditions with positive feedback are matched against the driving condition at the target position, including matching the driving environment and the driving position, and the adjusted driving parameters at the target position are determined based on the driving parameters of the second historical driving conditions whose matching degree satisfies the requirement.
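One possible form of the weighted-average adjustment mentioned above is sketched below; the recency weights and record layout are assumptions made for the example.

```python
# Illustrative sketch: derive an adjusted speed at the target position from the
# second historical feedback, using only positively rated records and weighting
# more recent records higher. Field names are hypothetical.
def adjusted_speed(history):
    positives = [h for h in history if h["feedback"] == "positive"]
    if not positives:
        return None  # nothing to learn from; keep the current parameter
    total_weight = sum(h["recency_weight"] for h in positives)
    return sum(h["speed_kmh"] * h["recency_weight"] for h in positives) / total_weight

history = [
    {"feedback": "positive", "speed_kmh": 45, "recency_weight": 1.0},
    {"feedback": "negative", "speed_kmh": 70, "recency_weight": 0.8},
    {"feedback": "positive", "speed_kmh": 50, "recency_weight": 0.5},
]
print(adjusted_speed(history))  # weighted toward the more recent positive record
```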
By determining the target position and adjusting the driving parameters at the target position in advance, some embodiments of the present description help reduce repeated negative feedback from the user at the same type of position and improve the user's riding experience.
FIG. 12 is a schematic illustration of operation of a travel system according to some embodiments of the present description. As illustrated by the diagram 1200, a processing device (e.g., the processing device 112) may send second reminder information to the driving system based on the feedback information.
The driving terminal may be a terminal at which the driver inputs and/or receives information; for example, in the online ride-hailing field, the driving terminal may be a driver terminal or a vehicle-mounted terminal. After the driver receives the second reminding information through the driving terminal, the driver can adjust the driving of the vehicle. For example, the driver adjusts the driving when driving again to the position corresponding to the feedback information, or when encountering another driving condition similar to the one corresponding to the feedback information; for instance, if the driving condition is a turn, the driver may adjust based on the feedback information at subsequent turns.
In some embodiments, the processing device may determine, based on the feedback information, whether to send the second reminding information to the driving system. For example, if the feedback information is negative feedback, the second reminding information is sent; otherwise it is not sent. The second reminding information may be used to remind the driving terminal to adjust the driving of the vehicle.
In some embodiments, the content of the second reminding information may include the user's feedback information, for example, "sudden braking occurred at position A according to user feedback, please drive carefully". By informing the driver whether the user is satisfied with the driving condition, the driver is reminded to consider whether subsequent driving needs to be adjusted, or whether driving parameters such as speed need to be adjusted when encountering other driving conditions similar to the one fed back.
In some embodiments, the content of the second reminding information may include an adjustment suggestion for subsequent driving. After receiving the second reminding information, the driver can adjust according to the suggestion. The adjustment suggestion for subsequent driving may be, for example, the driving parameters at the target position; for determining the driving parameters at the target position, reference may be made to fig. 11 and the related description. For example, if the user feeds back that sudden braking occurred at position A, where position A is a traffic light, the processing device may generate a second reminding message such as "please start to slow down 200 meters before the traffic light to avoid sudden braking".
In some embodiments, the processing device may determine whether to send the second reminder information to the driving system based on the feedback information and its corresponding driving condition. For example, when the feedback information is negative and the driving condition is the preset driving condition, the second reminding information is sent. The preset driving condition may be a driving event caused by driving of a driver, such as sudden braking, excessive speed of the vehicle, etc.
In some embodiments, the feedback information and/or the driving condition may also determine the manner of the second reminding information. In some embodiments, different types of feedback information may correspond to different manners of the second reminding information; for example, if the feedback information is positive feedback, the second reminding information may be delivered in a mode with an encouraging element such as an image or voice, and if the feedback information is negative feedback, the second reminding information may be delivered as text or an icon. In some embodiments, different driving conditions may correspond to different manners of the second reminding information; for example, if the driving condition is sudden braking, the manner may be voice, and if the driving condition is excessive speed, the manner may be vibration.
In some embodiments, the reminding mode of the second reminding information can be determined based on the state of the driving terminal. The reminding mode of the second reminding information may be similar to the feedback reminding mode of the first reminding information, and specific details may refer to fig. 6 and the related description thereof, which are not described herein again.
The approaches of different embodiments may be combined. For example, in some embodiments, the manner of the second reminding information may be determined based on the driving terminal state and the driving condition; for example, if the driving terminal is playing navigation and the driving condition is sudden braking, the manner of the second reminding information is text or an image. In some embodiments, the manner of the second reminding information may be determined based on the driving terminal state, the feedback information, and the driving condition; for example, if the driving terminal is playing navigation, the feedback information is negative feedback, and the driving condition is severe overspeed, the second reminding manner may be voice plus image.
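The sketch below combines these factors into a single choice of reminder mode; the specific priority rules are assumptions made for the example.

```python
# Illustrative sketch: choosing the mode of the second reminding information
# from the driving terminal state, feedback polarity and driving condition.
def second_reminder_mode(terminal_state, feedback_polarity, driving_condition):
    modes = set()
    if terminal_state == "playing_navigation":
        modes.update({"text", "image"})   # avoid competing with navigation audio
    else:
        modes.add("voice")
    if feedback_polarity == "negative" and driving_condition == "severe_overspeed":
        modes.update({"voice", "image"})  # escalate for safety-critical events
    return sorted(modes)

print(second_reminder_mode("playing_navigation", "negative", "severe_overspeed"))
```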
FIG. 13 is a flow chart illustrating sending second reminder information according to some embodiments of the present description. As shown in fig. 13, the process 1300 may include a step 1310 and a step 1320. In some embodiments, flow 1300 may be performed by a processing device (e.g., processing device 112).
Step 1310, a target position is determined from the first navigation route based on the feedback information and the driving condition corresponding to the feedback information. In some embodiments, step 1310 may be performed by an operational module.
In some embodiments, the target location is derived from a track point in the first navigation route. For details of step 1310, refer to step 1110 and its related description, and are not described herein.
Step 1320, sending the second reminding information to the driving system based on the relationship between the current position of the vehicle and the target position. In some embodiments, step 1320 may be performed by an operational module.
In some embodiments, the relationship between the current position and the target position may include the distance between the current position and the target position, the time required to travel from the current position to the target position, and/or the like. The travel time from the current position to the target position is related to the current driving environment and can be acquired through a navigation system.
In some embodiments, the processing device may determine whether to send the second reminding information to the driving system based on whether the relationship between the current position of the vehicle and the target position satisfies a preset condition: if the condition is satisfied, the second reminding information is sent; otherwise it is not sent. The preset condition may be a condition related to time or distance, for example, that the distance is less than a threshold distance or that the travel time is shorter than a threshold time.
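A minimal sketch of this proximity check follows; the distance and time thresholds are assumptions made for the example.

```python
# Illustrative sketch of step 1320: send the second reminding information once
# the vehicle is close enough to the target position in distance or in expected
# travel time. Thresholds are hypothetical.
import math

DISTANCE_THRESHOLD_M = 300.0
TIME_THRESHOLD_S = 60.0

def should_send_reminder(current_xy, target_xy, estimated_travel_time_s):
    distance = math.dist(current_xy, target_xy)
    return distance < DISTANCE_THRESHOLD_M or estimated_travel_time_s < TIME_THRESHOLD_S

print(should_send_reminder((0.0, 0.0), (120.0, 160.0), 90.0))  # True: 200 m away
```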
By sending the second reminding information to the driving system in advance based on the relationship between the current position of the vehicle and the target position, some embodiments of the present description let the driving terminal know in advance how the driving should be adjusted.
In some embodiments, the processing device may also adjust the environment inside the vehicle in real time based on the feedback information and the emotion of the user. For example, if the user's feedback information is negative feedback (e.g., dissatisfaction) and the user's emotion is also negative (e.g., restlessness, sadness, or depression), the processing device may adjust the internal environment of the vehicle (e.g., increase the air humidity, adjust the in-vehicle temperature, play music, etc.) to relieve the user's negative emotion and improve the riding experience.
In some embodiments, the processing device may determine whether the vehicle has a safety issue, and if not, the processing device may send a notification message to the user to notify the user that the vehicle has no safety issue. For specific details of determining the security issue, reference may be made to fig. 17 and the related description thereof, which are not described herein again.
FIG. 14 is a schematic illustration of operation of a travel system according to some embodiments of the present description. As shown by the diagram 1400, a processing device (e.g., the processing device 112) may update the user preferences based on the feedback information and the travel conditions to which the feedback information corresponds.
The user preferences may be used by the travel system to determine the deployment or travel of the vehicle for the user. In some embodiments, the user preferences may reflect the user's preferences or behavior habits. In some embodiments, the user preferences may reflect a user's preferred vehicle speed, vehicle type, vehicle parking location, commonly used seats, commonly used languages, preferred feedback alerts, in-vehicle temperature, in-vehicle light intensity, vehicle color, and the like.
In some embodiments, the processing device may update the user preferences based on the feedback information and the driving condition corresponding to the feedback information. For example, if the vehicle speed was 70 and the user gives negative feedback that the speed was too fast, the preferred vehicle speed in the user preferences may be adjusted (e.g., decreased). As another example, if the user mainly gives negative feedback on the driving conditions of sedans and mainly gives positive feedback on the driving conditions of off-road vehicles, the preferred vehicle type in the user preferences can be updated to the off-road vehicle. As another example, if the feedback information given by the user indicates that the vehicle is too noisy or bumpy, the grade of the vehicle type in the user preferences can be increased.
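For illustration, the following sketch nudges one preference (preferred vehicle speed) according to the feedback; the update rule and step sizes are assumptions made for the example.

```python
# Illustrative sketch: updating a user's preferred speed from feedback on the
# observed driving condition. Values are hypothetical.
def update_speed_preference(preferences, observed_speed_kmh, feedback_polarity):
    updated = dict(preferences)
    if feedback_polarity == "negative" and observed_speed_kmh > preferences["preferred_speed_kmh"]:
        # The user disliked a faster ride: lower the preferred speed.
        updated["preferred_speed_kmh"] -= 5
    elif feedback_polarity == "positive":
        # The user approved of this speed: move the preference toward it.
        updated["preferred_speed_kmh"] = round(
            0.8 * preferences["preferred_speed_kmh"] + 0.2 * observed_speed_kmh
        )
    return updated

print(update_speed_preference({"preferred_speed_kmh": 60}, 70, "negative"))
```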
The deployment of the vehicle includes the type of vehicle used by the user for travel. The traveling of the vehicle includes a traveling route, traveling parameters, and the like of the vehicle when the user is riding the vehicle.
In some embodiments, the processing device may determine a vehicle or vehicle travel for a subsequent ride by the user based on the user preferences.
In some embodiments, the processing device may determine the allocation or driving of the vehicle based on the feedback information and its corresponding driving road condition. For example, if the feedback information is negative feedback, the processing device may, when later determining a driving route for the vehicle, exclude as far as possible routes that include the driving road segment or driving position corresponding to the feedback information. As another example, if the feedback information is negative feedback and a certain trip needs to pass through the driving road segment or driving position corresponding to the feedback information, the processing device does not consider that vehicle, or other vehicles with a similar model or other similar attributes, when allocating a vehicle.
FIG. 15 is a schematic illustration of operation of a travel system according to some embodiments of the present description. As illustrated by the diagram 1500, a processing device (e.g., the processing device 112) may update vehicle control parameters based on the feedback information and the driving conditions to which the feedback information corresponds.
In some embodiments, the associated vehicle may be another vehicle similar to the current vehicle in vehicle model, vehicle performance, vehicle color, etc. The associated position may be another position related to the driving position in the driving condition; for example, the road environment of the associated position is similar to that of the driving position, the road segment of the associated position is similar to that of the driving position, or the driving events likely to occur at the associated position are similar to those at the driving position. As an example, the associated position and the driving position are both traffic light positions. The associated driving environment may be another driving environment similar to the driving environment; for example, if the driving environment is a clear day, the associated driving environment may be a clear day or hot weather.
In some embodiments, the processing device may update the vehicle control parameters based on the feedback information and the driving condition corresponding to the feedback information. The vehicle control parameters are the driving parameters of the associated vehicle at the associated position and/or in the associated driving environment. For example, if the feedback information is positive feedback, the driving parameters corresponding to the feedback information are used as the driving parameters of the associated vehicle driving at the associated position and in the associated environment. As another example, if the feedback information is negative feedback and the driving parameters of the associated vehicle at the associated position and in the associated environment are the same as or similar to those corresponding to the feedback information, those driving parameters of the associated vehicle may be changed. For specific details regarding updating the vehicle control parameters, reference may be made to fig. 16 and the related description thereof, which are not repeated herein.
In some embodiments of the present description, the driving parameters of the associated vehicle in the platform are adjusted through the feedback information of the user, so that the control effect of the platform on the vehicle can be optimized, and the driving of the vehicle can meet the requirements of the user as much as possible.
FIG. 16 is a flow chart illustrating updating vehicle control parameters according to some embodiments herein. As shown in fig. 16, the process 1600 may include a step 1610 and a step 1620. In some embodiments, flow 1600 may be performed by a processing device (e.g., processing device 112).
Step 1610, acquiring historical feedback information of the associated vehicle for at least one of the associated position and the associated driving environment, and the historical driving conditions corresponding to the historical feedback information. In some embodiments, step 1610 may be performed by an operation module.
The historical feedback information of the associated vehicle for at least one of the associated position and the associated driving environment includes the historical feedback information that all users have given for the associated position while the associated vehicle was in the associated driving environment.
In some embodiments, the processing device may obtain at least one historical feedback information (which may be referred to simply as "third historical feedback information") of the associated vehicle in the associated location and associated driving environment, and a historical driving condition (which may be referred to simply as "third historical driving condition") to which the historical feedback information corresponds from a storage device (e.g., a database) or a platform.
Step 1620, updating the vehicle control parameters based on the feedback information and the corresponding driving condition, and the historical feedback information and the corresponding historical driving condition. In some embodiments, step 1620 may be performed by an operation module.
As previously described, the processing device may update the vehicle control parameters based on the feedback information and its corresponding driving condition. Similarly, the processing device may update the vehicle control parameters based on the third historical feedback information and its corresponding third historical driving condition. For example, if the third historical feedback information reflects that other users consider the vehicle speed of associated vehicle A at associated position C too fast in associated driving environment D (e.g., a rainy day), the processing device may decrease the speed of all associated vehicles at associated position C on rainy days.
In some embodiments, the processing device may update the vehicle control parameters based on the feedback information and its corresponding driving condition together with the historical feedback information and its corresponding historical driving condition, i.e., by comprehensively considering the feedback information and its corresponding driving condition and the third historical feedback information and its corresponding third historical driving condition. For example, the driving parameters corresponding to positive feedback among the feedback information and the third historical feedback information are processed (for example, averaged) and used to replace the existing vehicle control parameters. In some embodiments, the feedback information and the third historical feedback information may also be weighted, e.g., the current feedback information is weighted more heavily. In some embodiments, the vehicle control parameters may be determined by a vehicle control model: the feedback information and its corresponding driving condition, together with the third historical feedback information and its corresponding third historical driving condition, are input to the vehicle control model, and the vehicle control parameters are output.
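The sketch below shows one way of weighting the current feedback against the third historical feedback when updating a vehicle control parameter; the weights and record layout are assumptions made for the example.

```python
# Illustrative sketch of step 1620: combine current feedback with historical
# feedback for associated vehicles, weighting the current feedback more heavily.
CURRENT_WEIGHT = 2.0
HISTORY_WEIGHT = 1.0

def update_control_speed(current, history):
    """Each record: {"feedback": "positive"/"negative", "speed_kmh": float}."""
    samples = [(current, CURRENT_WEIGHT)] + [(h, HISTORY_WEIGHT) for h in history]
    positives = [(s["speed_kmh"], w) for s, w in samples if s["feedback"] == "positive"]
    if not positives:
        return None  # no positively rated speed to adopt; keep the existing parameter
    total = sum(w for _, w in positives)
    return sum(v * w for v, w in positives) / total

current = {"feedback": "positive", "speed_kmh": 40}
history = [{"feedback": "positive", "speed_kmh": 50}, {"feedback": "negative", "speed_kmh": 65}]
print(update_control_speed(current, history))  # (40*2 + 50*1) / 3
```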
FIG. 17 is a flow chart illustrating operation of a travel system according to some embodiments of the present description. As shown in fig. 17, the flow 1700 may include steps 1710 and 1720. In some embodiments, flow 1700 may be performed by a processing device (e.g., processing device 112).
Step 1710, acquiring the basic information and the usage and maintenance information of the vehicle. In some embodiments, step 1710 may be performed by an operation module.
In some embodiments, the basic information of the vehicle may include at least one of a license plate number, a vehicle type, an engine number, a chassis number, the unit using the vehicle, the responsible person, a fueling card number, the insurance company, the road and bridge fee payment date, the length of time the vehicle has been in use, and the manufacturing plant.
In some embodiments, the usage maintenance information may include at least vehicle repair information and vehicle maintenance information. For example, the vehicle repair information may include a reason for repair, a number of repairs, a merchant of repair, and the like. The vehicle maintenance information may include maintenance items, maintenance time, maintenance times, and the like.
Step 1720, sending third reminding information to the driving system based on the feedback information, the basic information, and the usage and maintenance information, wherein the third reminding information is related to the maintenance of the vehicle. In some embodiments, step 1720 may be performed by the operation module.
In some embodiments, the third reminder information is related to maintenance of the vehicle. For example, the third warning message may be "please refuel in time", "please maintain the vehicle in time", or "the engine of the vehicle has been repaired many times".
In some embodiments, the processing device may send the third reminder information to the driving system based on the feedback information. For example, if the feedback information is negative feedback and the driving event corresponding to the feedback information is a preset event (e.g., bump, vehicle shake, etc.), the processing device may send a third reminding message "please check whether the vehicle needs to be repaired in time" to the driving system.
In some embodiments, the processing device may send the third reminding information to the driving system based on several of the feedback information, the basic information, and the usage and maintenance information. For example, if the driving event corresponding to the feedback information is vehicle shaking and the usage and maintenance information reflects that the vehicle's shock absorber has been repaired, the processing device may send a third reminding message "please check whether the shock absorber of the vehicle is normal". As another example, if the driving event corresponding to the feedback information is vehicle shaking, the basic information reflects that the vehicle has been in use for a long time, and the usage and maintenance information reflects that the vehicle has not been maintained or repaired for a long time, the processing device may send a third reminding message "please check whether the vehicle has a fault".
In some embodiments, the processing device may process the feedback information, the basic information, and the usage and maintenance information through a maintenance determination model to determine whether to send the third reminding information and what its content should be. A sixth training sample for training the maintenance determination model includes sample feedback information, sample basic information, and sample usage and maintenance information, together with a label representing whether the vehicle requires maintenance and the type of maintenance. The maintenance type represents the location or component to be maintained; different maintenance types may correspond to different third reminding information, for example, if the maintenance type is engine maintenance, the third reminding information is "please check and repair the engine". The maintenance determination model may be a deep neural network, a convolutional neural network, a decision tree, a support vector machine, naive Bayes, or the like.
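By way of illustration, the following sketch casts the maintenance determination model as a small classifier over hand-crafted features; the feature names, label scheme, and use of scikit-learn are assumptions made for the example.

```python
# Illustrative sketch of a maintenance determination model: features derived
# from feedback, basic information and usage/maintenance information are mapped
# to "no maintenance" or a maintenance type. All values are hypothetical.
from sklearn.tree import DecisionTreeClassifier

def encode(sample):
    return [
        sample["negative_feedback"],            # 1 if the feedback is negative
        sample["shake_event"],                  # 1 if the feedback mentions vehicle shaking
        sample["vehicle_age_years"],
        sample["months_since_last_maintenance"],
    ]

# Sixth training samples with labels: 0 = no maintenance, 1 = shock absorber, 2 = engine.
train = [
    {"negative_feedback": 1, "shake_event": 1, "vehicle_age_years": 6, "months_since_last_maintenance": 14},
    {"negative_feedback": 0, "shake_event": 0, "vehicle_age_years": 1, "months_since_last_maintenance": 2},
    {"negative_feedback": 1, "shake_event": 0, "vehicle_age_years": 8, "months_since_last_maintenance": 20},
]
labels = [1, 0, 2]

model = DecisionTreeClassifier(max_depth=3).fit([encode(s) for s in train], labels)

REMINDERS = {0: None,
             1: "please check whether the shock absorber is normal",
             2: "please check and repair the engine"}
query = {"negative_feedback": 1, "shake_event": 1, "vehicle_age_years": 5, "months_since_last_maintenance": 12}
print(REMINDERS[model.predict([encode(query)])[0]])
```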
For example, in some embodiments, when the feedback information is negative feedback and the driving event corresponding to the feedback information is a preset event, the processing device may send the third reminding message "please check whether the vehicle needs to be maintained in time" to the vehicle-mounted terminal, which presents it as an image; in some embodiments, the processing device may send a third reminding message "please refuel in time" to the driver terminal, which presents it as text.
In some embodiments, the processing device may also tag road segments based on the feedback information. In some embodiments, the tag of a road segment may reflect the feedback situation for that road segment, e.g., the amount of positive or negative feedback and the driving events that received negative feedback. By tagging road segments, a user selecting a driving route can, as needed, avoid routes containing road segments they dislike.
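A minimal sketch of such road-segment tagging is given below; the tag structure and field names are assumptions made for the example.

```python
# Illustrative sketch: accumulate feedback per road segment so that later route
# planning can avoid segments the user dislikes.
from collections import defaultdict

segment_tags = defaultdict(lambda: {"positive": 0, "negative": 0, "events": set()})

def tag_segment(segment_id, feedback_polarity, driving_event=None):
    tag = segment_tags[segment_id]
    tag[feedback_polarity] += 1
    if feedback_polarity == "negative" and driving_event:
        tag["events"].add(driving_event)

tag_segment("seg-17", "negative", "sudden_braking")
tag_segment("seg-17", "positive")
print(dict(segment_tags["seg-17"]))
```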
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, an embodiment may have fewer than all of the features of a single embodiment disclosed above.
In some embodiments, numbers describing quantities of components, attributes, and the like are used; it should be understood that such numbers used in the description of the embodiments are in some instances modified by the terms "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification, the entire contents of each are hereby incorporated by reference into this specification. Except where the application history document does not conform to or conflict with the contents of the present specification, it is to be understood that the application history document, as used herein in the present specification or appended claims, is intended to define the broadest scope of the present specification (whether presently or later in the specification) rather than the broadest scope of the present specification. It is to be understood that the descriptions, definitions and/or uses of terms in the accompanying materials of this specification shall control if they are inconsistent or contrary to the descriptions and/or uses of terms in this specification.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.
The embodiment of the application discloses TS1, a vehicle control method, including: acquiring feedback information of a vehicle running condition input by a user at a user terminal; and operating a driving system based on the feedback information, wherein the driving system is set to directly or indirectly change the driving or the allocation of the vehicle.
TS2, the method of TS1, further comprising: acquiring state information, wherein the state information comprises at least one of a vehicle state and a user state; sending first reminding information to the user terminal based on the state information; wherein the feedback information is input by the user at the user terminal in response to the first reminder information.
TS3, the method of TS2, the user status relating to one or more of the following information: vehicle usage of the user, idle condition of the user, and mood of the user.
TS4, the method of TS2, the vehicle state including at least one of a current vehicle state, a vehicle state after a preset time, and a vehicle state at a target location; the vehicle state is related to a running condition of the vehicle.
TS5, the method of TS2, wherein sending the first reminding information to the user terminal comprises: determining a feedback reminding mode, wherein the feedback reminding mode is related to at least one of a user state, a user terminal state, a driving environment and user associated information; and generating the first reminding information based on the feedback reminding mode, and sending the first reminding information to the user terminal.
TS6, the method of TS1, further comprising: generating a plurality of second navigation routes based on the current location and destination of the vehicle; wherein the operating the travel system based on the feedback information comprises: determining a target navigation route from the plurality of second navigation routes based on the feedback information and the driving condition corresponding to the feedback information, and updating a first navigation route based on the target navigation route; wherein the first navigation route is a navigation route being used by the vehicle.
TS7, the method of TS1, the operating the driving system based on the feedback information, comprising: determining a target position from a first navigation route based on the feedback information and the driving condition corresponding to the feedback information; and adjusting the driving parameters of the vehicle at the target position; wherein the first navigation route is a navigation route being used by the vehicle.
TS8, the method of TS1, the operating the driving system based on the feedback information, comprising: sending second reminding information to the driving system based on the feedback information, wherein the second reminding information is used for reminding a driving terminal to adjust the driving of the vehicle.
TS9, the method of TS8, wherein sending a second warning message to the driving system based on the feedback information includes: determining a target position from a first navigation route based on the feedback information and the driving condition corresponding to the feedback information; sending the second reminding information to the driving system based on the relation between the current position of the vehicle and a target position, wherein the target position is derived from track points in a first navigation route; wherein the first navigation route is a navigation route being used by the vehicle.
TS10, the method of TS1, the operating the driving system based on the feedback information, comprising: updating user preference based on the feedback information and the driving condition corresponding to the feedback information; wherein the user preferences are used by the driving system to determine a deployment or driving of a vehicle by the user.
TS11, the method of TS1, the operating the driving system based on the feedback information, comprising: updating vehicle control parameters based on the feedback information and the running condition corresponding to the feedback information, wherein the vehicle control parameters are running parameters of the associated vehicle in at least one of the associated position and the associated running environment.
TS12, the method of TS1, the operating the driving system based on the feedback information, comprising: acquiring basic information and use and maintenance information of the vehicle; and sending third reminding information to the running system based on the feedback information, the basic information and the use maintenance information, wherein the third reminding information is related to the maintenance of the vehicle.
TS13, the method of TS1, wherein the vehicle is an autonomous vehicle.
The embodiment of the application discloses TS14, a vehicle control system, including: an acquisition module, configured to acquire feedback information of a vehicle running condition input by a user at a user terminal; and an operation module, configured to operate a driving system based on the feedback information, wherein the driving system is configured to directly or indirectly change the driving or allocation of the vehicle.
The embodiment of the application discloses TS15, a vehicle control device, the device includes at least one processor and at least one memory; the at least one memory is for storing computer instructions; the at least one processor is configured to execute at least a portion of the computer instructions to implement the method as recited in any one of TS 1-TS 13.
The embodiment of the application discloses a TS16, a computer readable storage medium storing computer instructions which, when executed by a processor, implement the method as set forth in any one of TS 1-TS 13.

Claims (10)

1. A vehicle control method comprising:
acquiring feedback information of a vehicle running condition input by a user at a user terminal; and
operating a driving system based on the feedback information, wherein the driving system is configured to directly or indirectly change the driving or the allocation of the vehicle.
2. The method of claim 1, further comprising:
acquiring state information, wherein the state information comprises at least one of a vehicle state and a user state; and
sending first reminding information to the user terminal based on the state information; wherein
the feedback information is input by the user at the user terminal in response to the first reminding information.
3. The method of claim 2, wherein sending the first reminding information to the user terminal comprises:
determining a feedback reminding mode, wherein the feedback reminding mode is related to at least one of a user state, a user terminal state, a driving environment and user associated information; and
generating the first reminding information based on the feedback reminding mode, and sending the first reminding information to the user terminal.
4. The method of claim 1, further comprising:
generating a plurality of second navigation routes based on the current location and destination of the vehicle; wherein the operating the travel system based on the feedback information comprises:
determining a target navigation route from the plurality of second navigation routes based on the feedback information and the driving condition corresponding to the feedback information, and updating a first navigation route based on the target navigation route; wherein
the first navigation route is a navigation route being used by the vehicle.
5. The method of claim 1, the operating the travel system based on the feedback information, comprising:
determining a target position from a first navigation route based on the feedback information and the driving condition corresponding to the feedback information; and
adjusting the driving parameters of the vehicle at the target position; wherein
the first navigation route is a navigation route being used by the vehicle.
6. The method of claim 1, the operating the travel system based on the feedback information, comprising:
updating user preference based on the feedback information and the driving condition corresponding to the feedback information; wherein
the user preferences are used by the driving system to determine deployment or driving of the vehicle by the user.
7. The method of claim 1, the operating the travel system based on the feedback information, comprising:
updating vehicle control parameters based on the feedback information and the running condition corresponding to the feedback information, wherein the vehicle control parameters are running parameters of the associated vehicle in at least one of the associated position and the associated running environment.
8. A vehicle control system comprising:
an acquisition module, configured to acquire feedback information of a vehicle running condition input by a user at a user terminal; and
an operation module, configured to operate a running system based on the feedback information, wherein the running system is configured to directly or indirectly change the running or the allocation of the vehicle.
9. A vehicle control apparatus, the apparatus comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any of claims 1 to 7.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.