CN116160978A - Vehicle interaction method, device, electronic equipment, storage medium and program product - Google Patents
- Publication number
- CN116160978A (Application No. CN202211550995.XA)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- mode
- interaction
- voice
- instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/037—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
- B60R16/0373—Voice control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/20—Means to switch the anti-theft system on or off
- B60R25/24—Means to switch the anti-theft system on or off using electronic identifiers containing a code not memorised by the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
Abstract
The application provides a vehicle interaction method and device, an electronic device, a storage medium, and a program product, and belongs to the technical field of vehicles. The method includes: in response to a first interaction instruction, determining that a vehicle matches a first out-of-vehicle interaction mode, and displaying a first display interface based on the first out-of-vehicle interaction mode, where the first display interface is used to display an execution result of the first interaction instruction; and in response to a second interaction instruction, determining that the vehicle matches a second out-of-vehicle interaction mode, and switching the first display interface to a second display interface based on the second out-of-vehicle interaction mode, where the second display interface is used to display an execution result of the second interaction instruction. When out-of-vehicle interaction is performed based on an interaction instruction, the interaction interface corresponding to the out-of-vehicle interaction mode triggered by that instruction is displayed, and the execution result of the instruction is then shown on that interface, which enriches the forms of interaction.
Description
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to a vehicle interaction method, device, electronic apparatus, storage medium, and program product.
Background
In modern life, vehicles have become a primary means of travel. To meet owners' needs in scenarios outside the vehicle, vehicles offer various out-of-vehicle interaction modes, such as a welcome mode and an out-of-vehicle voice mode. In the welcome mode, the vehicle greets the owner by executing preset welcome actions, adding a sense of ceremony to driving. In the out-of-vehicle voice mode, the vehicle executes voice commands issued from outside the vehicle, reducing the owner's operational burden.
However, interaction based on these out-of-vehicle interaction modes takes only a single form, and the user experience is poor.
Disclosure of Invention
The embodiments of the application provide a vehicle interaction method and device, an electronic device, a storage medium, and a program product, which can enrich the forms of interaction. The technical solution is as follows:
In a first aspect, a vehicle interaction method is provided, the method including:
in response to a first interaction instruction, determining that a vehicle matches a first out-of-vehicle interaction mode, and displaying a first display interface based on the first out-of-vehicle interaction mode, where the first display interface is used to display an execution result of the first interaction instruction; and
in response to a second interaction instruction, determining that the vehicle matches a second out-of-vehicle interaction mode, and switching the first display interface to a second display interface based on the second out-of-vehicle interaction mode, where the second display interface is used to display an execution result of the second interaction instruction.
In a second aspect, there is provided a vehicle interaction device, the device comprising:
a first determining module, configured to determine, in response to a first interaction instruction, that the vehicle matches a first out-of-vehicle interaction mode;
a first display module, configured to display a first display interface based on the first out-of-vehicle interaction mode, where the first display interface is used to display an execution result of the first interaction instruction;
a second determining module, configured to determine, in response to a second interaction instruction, that the vehicle matches a second out-of-vehicle interaction mode; and
a switching module, configured to switch the first display interface to a second display interface based on the second out-of-vehicle interaction mode, where the second display interface is used to display an execution result of the second interaction instruction.
In a third aspect, a vehicle is provided, the vehicle comprising a memory and a processor, the memory storing at least one computer program, the at least one computer program being loaded and executed by the processor to implement the vehicle interaction method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, and the at least one computer program, when executed by a processor, implements the vehicle interaction method according to the first aspect.
In a fifth aspect, a computer program product is provided, the computer program product including a computer program that, when executed by a processor, implements the vehicle interaction method according to the first aspect.
In the embodiments of the application, when out-of-vehicle interaction is performed based on an interaction instruction, the interaction interface corresponding to the out-of-vehicle interaction mode triggered by that instruction is displayed, and the interface shows the execution result of the instruction. In an out-of-vehicle scenario, switching between interaction modes is otherwise hard for the user to perceive; displaying the execution results of instructions in the different modes on the interaction interface not only enriches the forms of interaction but also lets the user view the corresponding interaction information and thus make better use of the interaction functions. Moreover, the user does not need to switch modes manually: the modes are switched automatically according to the interaction instruction, which improves interaction efficiency.
Drawings
FIG. 1 illustrates a block diagram of an electronic device provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a display device of an electronic device according to an exemplary embodiment of the present application;
FIGS. 3-4 are flowcharts of a vehicle interaction method provided by an embodiment of the present application;
FIG. 5 is an effect diagram of vehicle interaction in a welcome mode according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a voice mode switching control provided by an embodiment of the present application;
FIG. 7 is a diagram of a light effect displayed on a three-dimensional vehicle model in an out-of-vehicle interaction scenario according to an embodiment of the present application;
FIG. 8 is an effect diagram of vehicle interaction in an inactive state provided by an embodiment of the present application;
FIGS. 9 to 11 are schematic diagrams of an activation policy for out-of-vehicle voice according to an embodiment of the present application;
FIG. 12 is an effect diagram of vehicle interaction in a listening state provided in an embodiment of the present application;
FIG. 13 is an effect diagram of vehicle interaction in a recognition state according to an embodiment of the present application;
FIGS. 14-15 are effect diagrams of vehicle interaction in a feedback state provided by embodiments of the present application;
FIG. 16 is an interactive flow chart of a parking process provided by an embodiment of the present application;
FIG. 17 is an interactive flow chart of a parking process provided by an embodiment of the present application;
FIG. 18 is an interactive flow chart of a door opening/closing process provided by an embodiment of the present application;
FIG. 19 is an interactive flow chart of a door locking process provided in an embodiment of the present application;
FIG. 20 is an interactive flow chart of a door unlocking process provided by an embodiment of the present application;
FIG. 21 is a schematic view of a different mode of validation strategy provided by embodiments of the present application;
FIG. 22 is a flow chart of a vehicle interaction method provided by an embodiment of the present application;
FIGS. 23-26 are graphs of alarm effects triggered by a sentinel mode provided in embodiments of the present application;
FIGS. 27-37 are flowcharts of a vehicle interaction method provided by embodiments of the present application;
FIG. 38 is a schematic structural diagram of a vehicle interaction device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It will be understood that, as used in the embodiments of the present application, "a plurality" means two or more, "each" refers to every one of the corresponding plurality, and "any" refers to any one of the corresponding plurality. For example, if a plurality of words includes 10 words, "each word" refers to every one of the 10 words, and "any word" refers to any one of the 10 words.
The information (including but not limited to user device information and user personal information), data (including but not limited to data for analysis, stored data, and displayed data), and signals referred to in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the voice information referred to in this application is obtained with full authorization.
With the development of vehicle technology, voice interaction has become the mainstream form of human-vehicle interaction. The in-vehicle voice mode includes a single-person voice interaction mode, in which the driver in the cabin can issue voice commands to the vehicle to control the whole vehicle and its entertainment functions, such as turning the air conditioning system on or off, adjusting its temperature, controlling seat heating, turning the navigation system on or off, turning the fragrance system on or off, and starting or stopping the music or video player. To further reduce the driver's burden, the in-vehicle voice mode has also evolved from single-person voice interaction to multi-person voice interaction, in which the driver, the front passenger, and rear-row passengers in the cabin can all control the vehicle by issuing voice commands.
To meet users' needs in scenarios outside the vehicle, voice interaction has likewise evolved from the in-vehicle voice mode to an out-of-vehicle voice mode, in which the vehicle controls the relevant components to execute operations corresponding to the received voice commands. To meet other out-of-vehicle interaction needs, a number of out-of-vehicle interaction modes are provided in addition to the out-of-vehicle voice mode, such as a welcome mode, a charging mode, a parking mode, a pet mode, a child mode, and a sentinel mode.
Based on the above out-of-vehicle interaction modes, the embodiments of the application provide a vehicle interaction method that can enrich the forms of interaction in out-of-vehicle scenarios and improve interaction efficiency. The method can be executed by an electronic device, which is applicable not only to vehicles but also to other carriers such as flying cars and ships; the following embodiments are described using a vehicle as an example. The electronic device may be an in-vehicle terminal (e.g., an in-vehicle system), the vehicle itself, or a device implementing part of the vehicle's functions, or it may be a mobile electronic device such as a tablet computer, mobile phone, computer, wearable device, or VR/AR device, which is not limited in this embodiment.
A vehicle to which the electronic device is applied may include passenger cabins, the number of which may be determined by the number of passengers; for example, a vehicle may include at least two passenger cabins, each with at least one seat. Where the vehicle has two passenger cabins that together form its complete interior space, they may be referred to as the first cabin and the second cabin; the terms are used only to distinguish the two cabin spaces and do not limit their order or form. For example, the passenger compartment may include a first cabin formed by the front driver seat and the front passenger seat, and a second cabin formed by two or more rear-row seats.
Referring to FIG. 1, the electronic device includes a processor 101. Some or all of the functionality of the electronic device may be controlled by a computing platform. The computing platform may include one or more processors 101, each of which is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction reading and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which may be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement functions through the logical relationships of hardware circuits, which may be fixed or reconfigurable, for example an application-specific integrated circuit (ASIC) or a programmable logic device (PLD) such as a field-programmable gate array (FPGA); for a reconfigurable hardware circuit, the process in which the processor loads a configuration file to configure the hardware circuit may be understood as the processor loading instructions to implement the functions of some or all of the above units. Furthermore, a hardware circuit designed for artificial intelligence may be used, which may be understood as an ASIC, such as a neural-network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). In addition, the computing platform may also include a memory for storing instructions, and some or all of the processors may call and execute the instructions in the memory to implement the corresponding functions.
The electronic device may further include a display device 102. The display device 102 may be used to display images, videos, and the like. The display device may include a display panel, which may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum-dot light-emitting diode (QLED), or the like.
The display device 102 may be touched to receive input signals from a user, may sense boundaries of touch or slide actions, and may also detect durations and pressures associated with touch or slide operations.
Optionally, the display device 102 is mainly used for displaying information to occupants in the passenger cabin. The surface of its display area extends substantially along the direction of the seat backs of the side-by-side driver and front-passenger seats and is substantially perpendicular to the direction of the vehicle head. The display area extends at least continuously from in front of the driver seat to in front of the front-passenger seat. In other words, the extent of the display area may just cover the range in front of the driver seat and the front-passenger seat, or it may exceed that range, which is not limited in this embodiment. In some embodiments, the width of the display device is at least greater than two-thirds of the width of the vehicle, so as to meet the information needs of the occupants in the driver and front-passenger seats.
In some embodiments, the display area of the display device 102 may be dynamically divided into one or more successive display areas according to the targeted occupant in the cabin. The multiple display areas being continuous means that the multiple display areas are obtained by dynamically dividing pixels in the display areas, and not by physical split screens. For example, when the target occupant is a single occupant, the display area may be divided into one or more display areas according to the position or interaction requirements of the single target occupant. When the target passenger is a plurality of passengers, full-screen display can be performed or the target passenger is divided into a plurality of display areas matched with the plurality of passengers according to the positions or interaction requirements of the plurality of target passengers.
For another example, when the target passenger is located in a different passenger cabin, the display area may be divided into one or more display areas according to the positions or interaction modes of the passenger cabins in which the different passengers are located. For example, each passenger compartment may correspond to one display area, one passenger compartment may correspond to a plurality of display areas, or a plurality of passenger compartments may correspond to one display area, which is not limited herein.
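As an illustration of the dynamic area division described above, the following is a minimal sketch that splits one continuous display into contiguous regions, one per target occupant, assuming a simple equal split; the class and function names are illustrative assumptions, not an API defined in this application.

```python
# A minimal sketch of dynamic display-area division; names are assumptions.
from dataclasses import dataclass

@dataclass
class DisplayRegion:
    x: int         # left edge of the region, in pixels
    width: int     # region width, in pixels
    occupant: str  # occupant the region serves, e.g. "driver"

def divide_display(total_width: int, occupants: list[str]) -> list[DisplayRegion]:
    """Split one continuous display into contiguous regions, one per target
    occupant; no physical split screen is involved."""
    if not occupants:
        # No target occupant: keep the whole screen as a single region.
        return [DisplayRegion(0, total_width, "all")]
    region_width = total_width // len(occupants)
    return [DisplayRegion(i * region_width, region_width, occ)
            for i, occ in enumerate(occupants)]

# Example: the driver and the front passenger each get a contiguous region.
print(divide_display(3000, ["driver", "front_passenger"]))
```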
Based on the above area-division approach, the display device 102 can flexibly and dynamically display interactive content, achieving multiple uses on a single screen. Referring to FIG. 2, the display device 102 includes a plurality of display areas, namely a display area 201, a display area 202, and a display area 203. The display area 201 is used for displaying the driver's voice information and a three-dimensional vehicle model; the display area 202 is used for displaying multimedia information; and the display area 203 is used for displaying the front passenger's voice information, and so on.
Note that, the size and position of the different display areas are not fixed, that is, when the display area 201 and the display area 202 are adjacent, the boundary between the display area 201 and the display area 202 may move; alternatively, the display area 201 may partially cover the display area 202; alternatively, the display area 202 may partially cover the display area 201.
Similarly, when the display region 202 and the display region 203 are adjacent, the boundary between the display region 202 and the display region 203 can be moved; alternatively, the display area 202 may partially cover the display area 203; alternatively, the display area 203 may partially cover the display area 202.
In embodiments of the present application, the processor may control icons and/or cards of the application displayed by the display device 102. In some possible implementations, information such as multimedia information may also be stored in the form of data in memory in the computing platform.
It should be appreciated that the operations described above may be performed by the same processor or may be performed by one or more processors, as embodiments of the present application are not limited in detail.
Further, as shown in fig. 1, the electronic device further includes: a memory 103 and a communication component 104. The memory 103 of FIG. 1, described above, may be used to store one or more computer instructions. The memory 103 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Processor 101 may execute one or more computer instructions stored in memory 103 to perform the steps provided by the various embodiments of the present application.
The communication component 104 of FIG. 1 is configured to facilitate wired or wireless communication between the device in which it resides and other devices. That device may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may be implemented based on near-field communication (NFC) technology, radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies. Based on the communication component 104, the electronic device can obtain data sent by other devices, for example sensor data from in-vehicle sensors and control instructions from the vehicle control system, which this embodiment does not limit.
For the several out-of-vehicle interaction modes mentioned above, the embodiments of the present application provide a vehicle interaction method. Taking a vehicle executing the method as an example, the vehicle has at least one of an automatic driving mode and a manual driving mode. Referring to FIG. 3, the method flow provided in an embodiment of the present application includes:
301. In response to a first interaction instruction, determine that the vehicle matches a first out-of-vehicle interaction mode, and display a first display interface based on the first out-of-vehicle interaction mode.
302. In response to a second interaction instruction, determine that the vehicle matches a second out-of-vehicle interaction mode, and switch the first display interface to a second display interface based on the second out-of-vehicle interaction mode.
An interaction instruction is used to determine which out-of-vehicle interaction mode the vehicle matches. Interaction instructions are divided into first interaction instructions and second interaction instructions according to the interaction mode they trigger: the first interaction instruction triggers the first out-of-vehicle interaction mode, and the second interaction instruction triggers the second out-of-vehicle interaction mode. An interaction instruction can be generated according to the activation condition of the out-of-vehicle interaction mode that the vehicle satisfies: when it is detected that the vehicle meets the activation condition of the first out-of-vehicle interaction mode, a first interaction instruction is generated; when it is detected that the vehicle meets the activation condition of the second out-of-vehicle interaction mode, a second interaction instruction is generated.
To enrich the interactive content in the first interaction mode, the vehicle determines, in response to the first interaction instruction, that it matches the first out-of-vehicle interaction mode, and displays a first display interface based on that mode. The first display interface is used to display the execution result of the first interaction instruction. When displaying it, an interface may be shown directly on a display screen as the first display interface; a pop-up on the display screen may serve as the first display interface; or an interface or pop-up window may be shown on top of a currently displayed application interface as the first display interface. The display screen may be a central control screen in the vehicle cabin, a rear-row screen, an LED (light-emitting diode) display screen installed in the cabin for displaying multimedia information (such as advertisements), or the like; of course, it may also be another display screen inside or outside the vehicle, which are not enumerated here.
In the embodiments of the application, the priority of the first out-of-vehicle interaction mode is lower than that of the second out-of-vehicle interaction mode. Therefore, while in the first mode, when the vehicle detects that it meets the activation condition of the second mode, it generates a second interaction instruction; in response to that instruction, the vehicle switches the current out-of-vehicle interaction mode from the first mode to the second mode and then switches the first display interface to the second display interface, which displays the execution result of the second interaction instruction. When switching interfaces, the first display interface may be closed and the second display interface shown in its display area; alternatively, the first display interface may remain open while the second display interface is overlaid on it, completely or partially covering it.
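The priority rule and interface switch described above can be sketched as follows; this is an illustrative outline only, and the mode names, priority values, and controller class are assumptions rather than an implementation defined by this application.

```python
# Illustrative sketch of priority-based mode and interface switching.
FIRST_MODE, SECOND_MODE = "first_out_of_vehicle_mode", "second_out_of_vehicle_mode"
MODE_PRIORITY = {FIRST_MODE: 1, SECOND_MODE: 2}

class InteractionController:
    def __init__(self):
        self.active_mode = None
        self.active_interface = None

    def on_instruction(self, mode: str) -> None:
        # Switch only when no mode is active or the new mode has higher priority.
        if self.active_mode is None or MODE_PRIORITY[mode] > MODE_PRIORITY[self.active_mode]:
            self.active_mode = mode
            # The new interface may replace the previous one or be overlaid on it.
            self.active_interface = f"{mode}_display_interface"

ctrl = InteractionController()
ctrl.on_instruction(FIRST_MODE)   # first instruction -> first display interface
ctrl.on_instruction(SECOND_MODE)  # second instruction -> switch to second interface
print(ctrl.active_mode, ctrl.active_interface)
```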
The first out-of-vehicle interaction mode may be one of a welcome mode, a charging mode, a pet mode, a child mode, a sentinel mode, and the like, and the second out-of-vehicle interaction mode may be one of a welcome mode, an out-of-vehicle voice mode, a parking mode, and the like. The following embodiments describe the different out-of-vehicle interaction modes in detail.
According to the method provided by the embodiments of the application, when out-of-vehicle interaction is performed based on an interaction instruction, the interaction interface corresponding to the out-of-vehicle interaction mode triggered by that instruction is displayed, and the interface shows the execution result of the instruction. In an out-of-vehicle scenario, switching between interaction modes is otherwise hard for the user to perceive; displaying the execution results of instructions in the different modes on the interaction interface not only enriches the forms of interaction but also lets the user view the corresponding interaction information and thus make better use of the interaction functions. Moreover, the user does not need to switch modes manually: the modes are switched automatically according to the interaction instruction, so interaction efficiency is high.
The embodiments of the application provide a vehicle interaction method. Taking the first out-of-vehicle interaction mode being a welcome mode and the second out-of-vehicle interaction mode being an out-of-vehicle voice mode as an example, and referring to FIG. 4, the method includes:
401. In response to a welcome instruction, determine that the vehicle matches the welcome mode, and display a welcome interface based on the welcome mode.
The welcome mode is an interaction mode that adds a sense of ceremony as the driver approaches the vehicle and makes it easier for the driver to get in or out. It involves not only exterior interaction, such as exterior lights, speakers, and doors, but also interaction inside the smart cabin, such as seats, the fragrance system, and the air conditioning system. Enabling the welcome mode requires the vehicle to meet a welcome condition; specifically, the digital key must be located within a first preset range. The first preset range is the communication range between the digital key and the vehicle and is determined by their communication capabilities.
The welcome instruction is generated when the vehicle is determined to meet the welcome condition. In response to the welcome instruction, the vehicle carries out out-of-vehicle interaction based on the welcome mode: the vehicle is controlled to execute the welcome instruction by having its welcome components perform preset welcome actions. The welcome components include at least one of the driver door, the exterior lights, the interior lights, the exterior speaker, the suspension, the air conditioning system, the fragrance system, the driver seat, and the like. The exterior lights include at least four pixel lamps, a front interaction lamp, a rear interaction lamp, four wheel-arch lamps, or other devices capable of lighting up; the interior lights include at least an ambient light, a roof light, or other lighting devices. For the different welcome components, controlling them to perform preset welcome actions includes at least one of the following:
For the driver door, it can be controlled to open to a preset angle. The preset angle can be set on the welcome function setting interface, or determined from the surrounding environment: if no other vehicle is parked near the driver door, a larger value can be chosen; if another vehicle is parked nearby, a smaller value can be chosen to avoid scraping the door against it.
For the exterior lights, they can be controlled to display a configured light effect, which can be set on the welcome function setting interface. For example, all or some of the exterior lights may show a first preset color (e.g., red, yellow, green, or white), show multiple colors in a set sequence, or flash at a first preset frequency.
For the interior lights, they can likewise be controlled to display a configured light effect set on the welcome function setting interface: they may show a second preset color (such as yellow or red), show multiple colors in a set sequence, or flash all or some of the lights at the first preset frequency.
For the exterior speaker, it can be controlled to play a preset welcome sound effect, which can be set on the welcome function setting interface. The sound effect may be stored locally in the vehicle or obtained externally, for example downloaded from the Internet or received from the owner's handheld terminal (e.g., a smartphone or tablet computer).
For the suspension, it may be controlled to lower to a preset height. The preset height can be determined from the vehicle's height and attributes of the driver; for example, if the vehicle is tall and the driver is relatively short, the preset height may be set to a small value.
For the air conditioning system, it may be controlled to turn on.
For the fragrance system, it may likewise be controlled to turn on.
For the driver seat, whether to heat it can be determined from the temperature outside the vehicle: when the outside temperature is detected to be below a preset temperature, driver-seat heating is turned on. The preset temperature may be, for example, 20 degrees or 15 degrees.
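The preset welcome actions listed above can be thought of as dispatching a command to each welcome component. The following sketch illustrates this idea; the component names, command strings, and temperature threshold are assumptions chosen for illustration, not values defined by this application.

```python
# Sketch of dispatching preset welcome actions to welcome components.
SEAT_HEATING_THRESHOLD_C = 15  # example preset temperature (assumption)

def build_welcome_actions(door_angle_deg, suspension_height_mm, outside_temp_c):
    actions = [
        ("driver_door", f"open_to_angle:{door_angle_deg}"),
        ("exterior_lights", "show_light_effect:welcome"),
        ("interior_lights", "show_light_effect:welcome"),
        ("exterior_speaker", "play_sound:welcome"),
        ("suspension", f"lower_to_height:{suspension_height_mm}"),
        ("air_conditioning", "turn_on"),
        ("fragrance_system", "turn_on"),
    ]
    # Driver-seat heating is conditional on the temperature outside the vehicle.
    if outside_temp_c < SEAT_HEATING_THRESHOLD_C:
        actions.append(("driver_seat", "heating_on"))
    return actions

for component, command in build_welcome_actions(45.0, 120.0, 8.0):
    print(component, "->", command)
```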
Depending on the distance between the digital key and the vehicle, the welcome mode includes a far-field welcome mode and a near-field welcome mode: when the digital key is far from the vehicle, the far-field welcome mode can be used; when it is relatively close, the near-field welcome mode can be used.
In the different welcome modes, different welcome components may be selected, or the same components may be selected; when the same components are used, the welcome actions they perform differ between modes. For example, in the far-field welcome mode the user is far from the vehicle, so the welcome components may be the exterior lights, the exterior speaker, and the like; in the near-field welcome mode the user is relatively close, so the welcome components may be the driver door, the interior lights, the exterior speaker, the suspension, the air conditioning system, the fragrance system, the driver seat, and the like. The exterior speaker plays different sound effects in the different welcome scenarios.
When out-of-vehicle interaction is performed based on the welcome mode, the embodiments of the application also display a welcome interface. The welcome interface may be shown directly on a display screen, shown as a pop-up window on the display screen, or overlaid on an application interface already displayed on the screen. The welcome interface displays the execution result of the welcome instruction, i.e., the result of the welcome components performing the preset welcome actions. To present these results more vividly, elements already present in the driving area, such as a three-dimensional vehicle model, may be reused for the presentation: according to the execution results of the preset welcome actions, the welcome animation of each welcome component can be shown on the three-dimensional vehicle model. An animation here is a motion effect presented on the vehicle's screens, such as the central control screen and the rear-row screen; in the welcome mode, it includes at least one of a driver-door animation, a suspension-lowering animation, a driver-seat heating animation, and the like on the three-dimensional vehicle model. The welcome interface may also display welcome text, such as "welcome to the vehicle".
The welcome interface can also display an exit option and a setting option for the welcome mode. In the welcome mode, in response to a touch operation on the exit option, the vehicle exits the welcome mode; the welcome components then stop performing welcome actions and the welcome interface is closed. In response to a touch operation on the setting option, the vehicle displays a welcome mode setting interface, through which the welcome components and welcome manner can be adjusted.
By displaying the welcome interface, the method and device not only coordinate better with the welcome components to add a sense of ceremony when the target object drives the vehicle, but also show the final welcome effect directly to the user, so the user can adjust the welcome style according to personal preference.
In another embodiment of the application, in response to the welcome instruction, a light effect corresponding to the execution result of the welcome instruction is displayed. A light effect is the effect presented when one or more lamps inside or outside the vehicle are shown with different light colors, brightnesses, flicker frequencies, or light-flow directions. The lamps used for welcome in the welcome mode include interior lights and exterior lights: the interior lights mainly include the ambient light, and the exterior lights mainly include the four pixel lamps, the wheel-arch lamps, and the front and rear interaction lamps.
Different lamps may use different light effects in the welcome mode. FIG. 5 shows a welcome-mode light effect in which the front interaction lamp 501 and the rear interaction lamp 504 greet with a counterclockwise light flow, the pixel lamps 502 greet at maximum brightness, and the wheel-arch lamps 503 greet with yellow light.
402. In response to an out-of-vehicle voice instruction, determine that the vehicle matches the out-of-vehicle voice mode, and switch the welcome interface to an out-of-vehicle voice interface based on the out-of-vehicle voice mode.
The out-of-vehicle voice instruction includes an out-of-vehicle voice query instruction, which queries whether the out-of-vehicle voice function is available. The function is determined to be available when the vehicle is determined to meet the out-of-vehicle voice availability conditions, i.e., the conditions that must hold for the vehicle to be able to collect voice outside the vehicle. These conditions include the following:
the first item and the digital key are in a second preset range.
The digital key is a car key with a communication function. The digital key is provided with a communication module, and the digital key can communicate with the vehicle based on the communication module. The communication module comprises any one of a Bluetooth module, an infrared module, an NFC module, a WIFI module and the like. The second preset range is a range in which the microphone outside the vehicle can collect voice information, and the second preset range can be determined according to the number and the positions of the microphones outside the vehicle and the collection capacity of each microphone.
The second item, the primary drive is not seated or the vehicle is not engaged.
The third item, the vehicle, is not in a park condition.
The parking state comprises a parking state or a parking state. When the vehicle is in a parking state, other voice commands may affect the parking effect and even damage the vehicle, so that the parking command should be preferentially executed at this time without activating the off-vehicle voice function.
Fourth, the out-of-vehicle voice mode is turned on.
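A minimal sketch of checking the four availability conditions above follows; the field names of the state dictionary are illustrative assumptions.

```python
# Sketch of the out-of-vehicle voice availability check; field names are assumptions.
def out_of_vehicle_voice_available(state: dict) -> bool:
    return (
        state["digital_key_in_second_range"]                   # condition 1
        and not (state["driver_seated"] and state["in_gear"])  # condition 2
        and not state["parking_in_progress"]                   # condition 3
        and state["out_of_vehicle_voice_mode_on"]              # condition 4
    )
```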
To let a user flexibly select the appropriate voice mode in different scenarios, the embodiments of the application support switching between the in-vehicle voice mode and the out-of-vehicle voice mode. Switching between the voice interaction modes can be done in the following ways:
The first way is manual switching via a voice mode switching control displayed on the central control screen. In the out-of-vehicle voice mode, touching the control switches to the in-vehicle voice mode; in the in-vehicle voice mode, touching it switches to the out-of-vehicle voice mode. After the out-of-vehicle voice mode is enabled, the in-vehicle voice mode is disabled, and the voice avatar on the central control screen that represents the in-vehicle voice function is shown as unavailable. FIG. 6 shows a voice mode switching control on the central control screen: it displays the voice avatar for the in-vehicle voice function and a text box, and the text box contains the voice mode switch and the currently enabled voice mode. In FIG. 6, the out-of-vehicle voice function is available, the voice avatar is shown as unavailable, the text box displays text indicating that out-of-vehicle voice is enabled, and the switch displays text for switching back to in-vehicle voice. In the out-of-vehicle voice mode, when a touch operation on the switch is detected, the mode can be changed to the in-vehicle voice mode.
The second way is switching by voice instruction. In the in-vehicle scenario, when an instruction to enable the out-of-vehicle voice mode is received, the mode is enabled in response; when an instruction to disable it is received, it is disabled in response. After receiving an enable/disable instruction for the out-of-vehicle voice mode, the vehicle checks whether a user account is logged in: if not, the user is prompted to log in before the mode is enabled; if an account is logged in, the mode is enabled or disabled directly.
Further, when the out-of-vehicle voice mode is turned on or off, whether the user account has a registered voiceprint may be checked; if not, the user is prompted to register a voiceprint in order to use the out-of-vehicle voice function.
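The account and voiceprint checks involved in this switching can be sketched as follows; this is an outline under assumptions, and the session fields and prompt texts are illustrative, not behavior specified by this application.

```python
# Sketch of guarded toggling of the out-of-vehicle voice mode; names are assumptions.
def toggle_out_of_vehicle_voice_mode(session: dict) -> str:
    if not session.get("account_logged_in"):
        return "prompt: log in to an account before enabling out-of-vehicle voice"
    if not session.get("voiceprint_registered"):
        # The user is reminded to register a voiceprint to use the function.
        print("prompt: register a voiceprint to use the out-of-vehicle voice function")
    session["mode_on"] = not session.get("mode_on", False)
    return "out-of-vehicle voice mode " + ("on" if session["mode_on"] else "off")
```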
In response to the out-of-vehicle voice query instruction, when the out-of-vehicle voice function is determined to be available, the vehicle is determined to match the out-of-vehicle voice mode; at this point the function is not yet activated, so the vehicle cannot execute voice commands. Once the vehicle is determined to match the out-of-vehicle voice mode, it switches the welcome interface to the out-of-vehicle voice interface, which may take the form of an application interface, a pop-up window, or the like. The interface switch may be done in either of the following ways:
In the first case, the welcome interface is closed, and the out-of-vehicle voice interface is displayed in its display area.
In the second case, the out-of-vehicle voice interface is overlaid on the welcome interface, partially or completely covering it.
The out-of-vehicle voice interface displays first indication information, which indicates the availability of out-of-vehicle voice. This availability information consists of effect information and text: the effect information marks the area in which out-of-vehicle voice is available and the positions of the microphones with animations such as flowing and breathing, and the microphone positions are determined from their positions in the real scene. The text not only identifies the available area and the microphone positions but also prompts the user on how to activate the out-of-vehicle voice function. FIG. 7 shows an out-of-vehicle voice interface displaying this availability information.
In another embodiment of the present application, in response to the out-of-vehicle voice query instruction, the vehicle is determined to match the out-of-vehicle voice mode, and to indicate to the user that the out-of-vehicle voice function is available, the vehicle further controls a target lamp to display a sixth light effect. The target lamp includes all or some of the vehicle's lamps, and the embodiments of the present application do not specifically limit it. The sixth light effect may be the target lamp continuously breathing and flashing at a second preset frequency, or it may be the target lamp staying off, i.e., using no light effect to indicate the inactive state of out-of-vehicle voice; this inactive state is shown in FIG. 8. Because the out-of-vehicle voice function is not activated, voice commands cannot be executed at this point; if the exterior speaker was playing the preset sound effect in the welcome mode, it can be controlled to continue playing the preset welcome sound effect.
When the welcome mode has just switched to the out-of-vehicle voice mode, the out-of-vehicle voice function is not yet activated and the vehicle is still performing some welcome operations; the out-of-vehicle voice interface therefore displays some execution results from the welcome mode together with some from the out-of-vehicle voice mode. For example, the interface may show the welcome effects of some welcome components on the three-dimensional vehicle model, the effect and text information that marks the out-of-vehicle voice available area and microphone positions with animations such as flowing and breathing, and the light effect of the target lamp.
In this embodiment, in response to the out-of-vehicle voice query instruction, the vehicle is in the out-of-vehicle voice mode: the out-of-vehicle voice function is available and the vehicle can collect voice information outside the vehicle, but the function is not yet activated, so out-of-vehicle voice interaction cannot be performed. Voice information from a target object must be collected to activate the function and enable out-of-vehicle voice interaction. The target object is a user whose voice instructions can be executed in the out-of-vehicle scenario; it may be the vehicle owner or other closely related users, such as the owner's family members or friends. With out-of-vehicle voice available, the function may be activated in various ways; for example, activation may be triggered by at least one of the following two conditions:
The first condition is: voiceprint registration of a target object
In the external voice interaction scene, in order to prevent illegitimate users outside the vehicle from controlling the vehicle, the target object needs to register with the vehicle in advance. During voiceprint registration, the vehicle stores the voice information of the target object. When voice information collected outside the vehicle matches any of the stored registered voice information, the target object is determined to be a legitimate user who has completed voiceprint registration. When comparing the collected voice information with the stored voice information, the voiceprint features of the collected voice information can be extracted and compared with the voiceprint features of each piece of stored registered voice information; when the extracted voiceprint features match the voiceprint features of any stored registered voice information, the voice information of the target object is determined to match the stored voice information.
When the voice information of the target object does not match any stored voice information, it can be determined that the target object has not completed voiceprint registration, and the target object is reminded that the external voice function can be used only after voiceprint registration. The reminder can be broadcast by voice, for example, "you have not completed voiceprint registration; the external voice function can be used only after voiceprint registration is completed in the vehicle"; prompt information for voiceprint registration can also be displayed on the central control screen.
The second condition is: the voice information of the target object contains a preset wake-up word
When the target object is determined to be a legitimate user who has completed voiceprint registration, the voice information of the target object is recognized. The voice information can be input into a speech recognition model, processed by the model to output the corresponding text content, and the text content is checked for the preset wake-up word. The speech recognition model converts voice information into text content and can be trained by existing deep learning methods, which are not described in detail herein.
When the external voice mode is started, the digital key is within a first preset range, the main driver is not seated, the vehicle is not in gear, the vehicle is not in an automatic parking state, the target object has completed voiceprint registration, and the voice information of the target object contains the preset wake-up word, it can be determined that the external voice function is activated. That is, when the vehicle meets all of the above items, the external voice function is determined to be activated; at this time, voice information of the target object can be collected and the corresponding operation executed. When the vehicle meets only part of the above items, the external voice function is determined to be not activated.
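For illustration only (this sketch is not part of the disclosure above), the activation check described in the preceding paragraph could be combined as follows; the data structure, field names, and interface are assumptions.

```python
from dataclasses import dataclass

@dataclass
class OffBoardVoiceContext:
    """Snapshot of the signals the activation check relies on (names are illustrative)."""
    voice_mode_enabled: bool     # external voice mode has been started
    key_in_first_range: bool     # digital key detected within the first preset range
    main_driver_seated: bool
    in_gear: bool
    in_automatic_parking: bool
    voiceprint_registered: bool  # voice matched a stored registered voiceprint
    contains_wake_word: bool     # recognized text contains the preset wake-up word

def off_board_voice_activated(ctx: OffBoardVoiceContext) -> bool:
    # Every item must hold; if only part of them hold, the function stays inactive.
    return (ctx.voice_mode_enabled
            and ctx.key_in_first_range
            and not ctx.main_driver_seated
            and not ctx.in_gear
            and not ctx.in_automatic_parking
            and ctx.voiceprint_registered
            and ctx.contains_wake_word)
```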
Fig. 9 to 11 show several off-vehicle scenarios, in which 901 indicates the range in which the microphone can collect voice information, 902 indicates the communication range of the digital key, 903 indicates the digital key, 904 indicates a registered voiceprint, 905 indicates an unregistered voiceprint, and 906 indicates that the main driver is seated. Referring to fig. 9, in this scenario the external voice function is turned on, the vehicle is not in an automatic parking state, the digital key is within a second preset range, the target object has completed voiceprint registration, and the voice information of the target object contains the preset wake-up word, so it can be determined that the external voice function is activated. Referring to fig. 10, in this scenario the target object has not completed voiceprint registration, so it can be determined that the external voice function is not activated. Referring to fig. 11, in this scenario the main driver is already seated, so it can be determined that the external voice function is not activated.
When the external voice function is activated, the vehicle is in a listening state for the voice information of the target object. In the listening state, the vehicle can call the microphones arranged around the vehicle body to detect the voice information of the target object in real time. If voice information of the target object is detected within a preset time period, for example 15 seconds, the voice information is recognized; if it is valid voice information, that is, it contains a valid voice instruction, the vehicle is controlled to execute the corresponding operation based on the voice instruction recognized from the voice information. The voice instruction includes a parking instruction, a door opening instruction, a door closing instruction, a door locking instruction, a door unlocking instruction, and the like.
In the embodiment of the application, after the vehicle executes an instruction in the external voice mode, the execution status of the instruction is broadcast. The content played by the external speaker of the vehicle differs according to the voice state of the voice information of the target object. When the voice information of the target object is in the listening state, the vehicle can control the external speaker to play a listening prompt to convey to the target object that its voice is being received; the listening prompt may be a short prompt tone, "hello", or the like. When the voice information is in the broadcasting state, the external speaker is controlled to play the execution result of the external voice instruction. The execution result is either positive or negative: a positive result indicates that the instruction was executed successfully, and a negative result indicates that the instruction failed or could not be recognized. Before broadcasting a positive result through the external speaker, it is first determined whether the positive result should be broadcast; if so, the external speaker broadcasts it, for example "the door has been opened/closed" or "the doors have been locked/unlocked". For a negative result, external voice can be used to broadcast the reason the instruction could not be executed, for example that the instruction could not be recognized or that its execution is temporarily not supported.
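As an illustration only (not part of the disclosure), the broadcasting decision just described might look like the following sketch; the function name, result fields, and phrases are assumptions.

```python
def broadcast_content(voice_state: str, result: dict | None = None) -> str | None:
    """Choose what the external speaker plays; names and phrases are illustrative."""
    if voice_state == "listening":
        return "hello"  # short listening prompt conveying that audio is being received
    if voice_state == "broadcast" and result is not None:
        if result["positive"]:
            # A positive result is only announced when it is configured to be broadcast.
            return result["text"] if result.get("announce", True) else None
        return "cannot execute: " + result["reason"]  # negative result: report the reason
    return None
```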
In another embodiment of the application, in response to an execution instruction of the off-board voice, a light effect corresponding to an execution result of the execution instruction is displayed. According to the execution result of the execution instruction, when the vehicle displays the light effect corresponding to the execution result, the following situations can be included:
In the first case, when the execution result of the execution instruction is that the voice information of the target object is in a listening state, the target lamp of the vehicle is controlled to display the first light effect.
The listening state is the state in which the vehicle is receiving the voice information of the target object; at this moment no valid voice information of the target object has yet been collected, and the target lamp of the vehicle is controlled to display the first light effect, which may be a light effect in which the target lamp continuously breathes and flashes. The light effect in this listening state can be seen in fig. 12.
In the second case, when the execution result of the execution instruction is that the voice information is in the recognition state, the target lamp of the vehicle is controlled to display the second light effect.
The recognition state is the state in which the voice information of the target object is being collected and recognized. The second light effect may be a light effect in which the target lamp breathes and flashes following the fluctuation of the target object's voice, with the change of the light effect consistent with the voice fluctuation, so as to convey to the user that the voice information is being collected. When controlling the target lamp to display the second light effect, voice features such as the pitch value can be extracted from the voice information of the target object, and the target lamp is then controlled to breathe and flash following the voice fluctuation according to the extracted pitch value. The light effect in this recognition state can be seen in fig. 13.
In the third case, when the execution result of the execution instruction is that the voice information is in a feedback state, the target lamp is controlled to display the light effect corresponding to the type of the execution result of the execution instruction.
In the embodiment of the application, different light effects can be preset for different types of execution results, so that when an execution result is obtained, a different light effect is shown for each type of result. Specifically, the light effect set for the positive execution result is the third light effect, and the light effect set for the negative execution result is the fourth light effect. Because the vehicle has many lamps of different types inside and outside, and different lamp types have different display capabilities, different light effects can be set for different lamp types when indicating the success or failure of a voice instruction, and together they present the result as a whole. Specifically, for the front and rear interactive lamps, different light flow directions can be set for different result types, and for the four pixel lamps and four wheel arch lamps, different colors and brightness can be set for different result types. For example, for the positive execution result, the third light effect is set as follows: the light streams in the front and rear interactive lamps flow clockwise, and the four pixel lamps and four wheel arch lamps display a green light effect. For the negative execution result, the fourth light effect is set as follows: the light streams in the front and rear interactive lamps flow counterclockwise, and the four pixel lamps and four wheel arch lamps display a red light effect.
Based on the light effects preset for different types of execution results, when the execution result is a success, the vehicle controls the target lamp to display the third light effect, as shown in fig. 14; when the execution result is a failure, the vehicle controls the target lamp to display the fourth light effect, as shown in fig. 15.
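For illustration only (not part of the disclosure), the mapping from voice state and result type to the light effects described above could be expressed as a simple lookup; the dictionary keys and descriptive strings are assumptions.

```python
# Illustrative mapping from voice state (and result type) to the light effects described above.
LIGHT_EFFECTS = {
    "listening": "first light effect: target lamps breathe and flash continuously",
    "recognizing": "second light effect: lamps breathe/flash following the voice pitch",
    ("feedback", "success"): "third light effect: clockwise light streams, green pixel/wheel-arch lamps",
    ("feedback", "failure"): "fourth light effect: counter-clockwise light streams, red pixel/wheel-arch lamps",
}

def select_light_effect(voice_state: str, result_type: str | None = None) -> str:
    key = (voice_state, result_type) if voice_state == "feedback" else voice_state
    return LIGHT_EFFECTS[key]
```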
The following describes how the vehicle executes the above parking, door opening, door closing, door locking, and door unlocking instructions in the external voice mode.
For the parking instruction, referring to fig. 16, in response to the parking instruction the vehicle determines whether the voiceprint corresponding to the instruction and the account corresponding to the digital key belong to the same user; if so, the instruction is executed, and if not, the user is prompted that the external voice function cannot be used. The vehicle then determines whether the parking instruction is executable; if so, the vehicle is controlled to park into the parking space, with the corresponding light and sound effects matched during execution. If the parking instruction is not executable, for example because a person or pet is detected in the vehicle, the four doors and two covers are not closed, the charging gun has not been unplugged, or no parking space has been selected, the reason it cannot be executed is broadcast by voice.
For the parking instruction shown in fig. 17, in response to the instruction the vehicle likewise determines whether the voiceprint corresponding to the instruction and the account corresponding to the digital key belong to the same user; if so, the instruction is executed, and if not, the user is prompted that the external voice function cannot be used. The vehicle then determines whether the instruction is executable; if so, the vehicle is controlled to complete the parking maneuver from outside the vehicle, with the corresponding light and sound effects matched during execution. If the instruction is not executable, for example because a person or pet is detected in the vehicle, the four doors and two covers are not closed, the charging gun has not been unplugged, or no parking space has been selected, the reason it cannot be executed is broadcast by voice.
For the door opening/closing instruction, referring to fig. 18, in response to the instruction it is first determined whether the instruction contains a precise control area. Precise control areas include the main driving area, the front passenger area, the left rear area, the right rear area, the front row area (main driving and front passenger), the rear row area, the left side area (main driving and left rear), the right side area (front passenger and right rear), the main driving and rear row areas, the front passenger and rear row areas, the main driving and right rear areas, the main driving and left rear areas, the front passenger and left rear areas, all areas, and the like. If a precise control area is contained, the door opening/closing instruction is executed based on that area. If no precise control area is contained, it is determined whether the instruction contains a fuzzy control area, such as a front door area or a rear door area; if no fuzzy control area is contained either, a voice prompt such as "please say it again in the vehicle" is broadcast. If a fuzzy control area is contained, the sound zone in which the wake-up word was detected is determined, and the door corresponding to that wake-up sound zone and the fuzzy area (for example, the front door or the left rear door on that side) is controlled rather than all areas; if that door does not need to be opened, a voice prompt indicates which specific door will be operated. When executing the door opening/closing instruction, it is also determined whether the instruction is allowed; if it is allowed, the door is opened/closed and the corresponding light and sound effects are matched during execution. If the instruction is not allowed, for example because the door is already open/closed, there is an obstacle near the door, or for another reason, the reason it cannot be executed is broadcast by voice.
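The area-resolution step above can be sketched as follows; this is an interpretation of the flow for illustration only, and the area names, the wake-up-side parameter, and the prompt text are assumptions.

```python
PRECISE_AREAS = {"main driving", "front passenger", "left rear", "right rear",
                 "front row", "rear row", "left side", "right side", "all"}
FUZZY_AREAS = {"front door", "rear door"}

def resolve_door_command(mentioned: set[str], wake_up_side: str) -> str:
    """Return the control area to act on, or a prompt asking the user to clarify."""
    precise = mentioned & PRECISE_AREAS
    if precise:
        return ", ".join(sorted(precise))      # execute directly on the precise area(s)
    fuzzy = mentioned & FUZZY_AREAS
    if fuzzy:
        # A fuzzy area is disambiguated with the sound zone of the wake-up word,
        # e.g. "front door" heard on the left side -> the main driving door.
        row = "front" if "front door" in fuzzy else "rear"
        return f"{wake_up_side} {row} door"
    return "prompt: please specify which door to open or close"
```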
For the door locking instruction, referring to fig. 19, in response to the instruction it is first detected whether any door (including the tailgate) is not closed; if a door is not closed, this is broadcast by voice. If all doors are closed, it is determined whether the instruction is executable; if it is not executable, for example because the doors are already locked or for another reason, the instruction is not executed. If the instruction is executable, the doors are locked based on the instruction, and the corresponding light and sound effects are matched during execution.
For the door unlocking instruction, referring to fig. 20, in response to the instruction it is determined whether the instruction can be executed successfully; if so, the doors are unlocked and the corresponding light and sound effects are matched. If it cannot be executed successfully, the reason for the failure can be broadcast by voice.
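A minimal sketch of the locking/unlocking flows above, for illustration only; the function names, parameters, and returned phrases are assumptions.

```python
def handle_lock_command(doors_closed: bool, already_locked: bool) -> str:
    """Illustrative flow for the door locking instruction described above."""
    if not doors_closed:
        return "broadcast: a door (or the tailgate) is not closed"
    if already_locked:
        return "do nothing: doors are already locked"
    return "lock doors, then play the matching light and sound effects"

def handle_unlock_command(can_execute: bool, failure_reason: str = "") -> str:
    """Illustrative flow for the door unlocking instruction described above."""
    if can_execute:
        return "unlock doors, then play the matching light and sound effects"
    return "broadcast failure reason: " + failure_reason
```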
In the embodiment of the application, in response to the execution instruction of the external voice, when the vehicle is controlled to execute the instruction in the external voice mode, the execution result of the external voice mode is displayed on the external voice interface, including the dynamic effect of the three-dimensional vehicle model, text information, and the like. In the external voice mode the function goes from the available state to the activated state. When the external voice function is in the available state, the dynamic effect and text information of the three-dimensional vehicle model are displayed on the external voice interface; however, the interactive content of the external voice function differs between states, so the dynamic effect and text information displayed on the interface also differ.
When the external voice function is activated, the dynamic effect of the three-dimensional vehicle model displayed on the external voice interface is the dynamic effect of the action executed by the controlled component, for example, the controlled component is a vehicle door, the executed action is to open/close the vehicle door, and the dynamic effect of the opening/closing of the controlled vehicle door in the three-dimensional vehicle model is displayed; the text information displayed on the voice interface outside the vehicle is text information describing the success or failure of execution of the instruction.
In the embodiment of the application, a voice mode switching control is further displayed on the external voice interface, and in the external voice mode, the external voice mode is switched to the in-vehicle voice mode in response to touch operation of the voice mode switching control.
The embodiment of the application describes switching between the welcome mode and the external voice mode. In the off-vehicle scene, both modes have corresponding effective ranges, as shown in fig. 21, where 2101 represents the welcome range, 2102 represents the external voice unlocking range, and 2103 represents the external voice usable range. When the vehicle owner enters the welcome range, the vehicle performs the welcome based on the welcome mode; when the owner enters the external voice unlocking range, the external voice function is unlocked but not yet usable; when the owner enters the external voice usable range, the external voice function becomes usable.
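For illustration only (not part of the disclosure), the nested ranges above could be checked against the owner's distance as follows; the numeric range values are purely assumed, since the disclosure does not specify them.

```python
def off_board_functions(distance_to_vehicle_m: float,
                        welcome_range_m: float = 8.0,   # 2101, assumed value
                        unlock_range_m: float = 5.0,    # 2102, assumed value
                        usable_range_m: float = 3.0) -> dict:
    """Which off-vehicle functions apply at a given owner distance (ranges are assumptions)."""
    return {
        "welcome": distance_to_vehicle_m <= welcome_range_m,
        "voice_unlocked": distance_to_vehicle_m <= unlock_range_m,
        "voice_usable": distance_to_vehicle_m <= usable_range_m,
    }
```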
Referring to fig. 22, a vehicle interaction method is provided in the embodiment of the present application, taking a first external vehicle interaction mode as a sentinel mode and a second external vehicle interaction mode as an external vehicle voice mode as an example, and the method provided in the embodiment of the present application includes:
2201. in response to the sentry instruction, it is determined that the vehicle meets a sentry mode, and a sentry interface is displayed based on the sentry mode.
The sentry mode is an interaction mode that, based on the vehicle body anti-theft alarm and other sensing signals, realizes vehicle monitoring, vehicle warning, remote notification, and event recording after the main driving user leaves the vehicle. The sentinel mode may be turned on in the following ways:
The first way: the sentinel mode is started automatically. When the vehicle is switched to the P gear, if the current position is outside the geographic white list, the sentinel mode is started automatically. The geographic white list can be set by the main driving user; places where the main driving user frequently parks can also be added to the geographic white list according to the driving routes of the vehicle.
The second way: the sentinel mode is started manually. When the vehicle is switched to the P gear, the user calls up the sentinel mode switch and starts the sentinel mode by touching the switch. In the P gear, when the sentinel mode is started, the status bar displays a sentinel icon; when the sentinel mode is not started, the status bar does not display the sentinel icon. The sentinel icon on the status bar cannot be clicked and is for display only. The sentinel mode enable switch defaults to off and is grayed out when the following conditions are not met (a simplified check of these conditions is sketched after the list):
1. The current vehicle is in the P gear;
2. the current account is a legitimate user;
3. software and hardware relied on by the sentry mode are normal in function;
4. Power condition: the vehicle is being charged, or the currently displayed battery level is above 10%.
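For illustration only (not part of the disclosure), the availability check for the sentinel enable switch over the four conditions above might look like this; the parameter names are assumptions.

```python
def sentinel_switch_available(gear: str, legal_user: bool, hw_sw_ok: bool,
                              charging: bool, battery_percent: float) -> bool:
    """Whether the sentinel enable switch is selectable (otherwise it stays grayed out)."""
    power_ok = charging or battery_percent > 10.0
    return gear == "P" and legal_user and hw_sw_ok and power_ok
```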
The third way: when the vehicle is switched to the P gear, the vehicle prompts the user by voice broadcast or on-screen display to start the sentinel mode. Specifically, when the gear is switched to P, it is detected whether the sentinel enable switch and the scene switch are in the off state and whether the battery level meets the condition; when it is determined that the sentinel enable switch and the scene switch are on and the battery level meets the condition, the scene engine judges whether the condition for recommending the sentinel mode is currently met, and if so, the P-gear component pushes a Widget (which can simply be understood as an application widget) asking the main driving user whether to start the sentinel mode. When the main driving user chooses to start the sentinel mode, sentinel enablement is requested.
After the sentinel mode is turned on, the sentinel idle state is activated. When it is detected that the main driving user has left the vehicle, locked it, and powered it down, the sentinel mode begins to run, and it is judged whether the vehicle meets the condition that the battery is being charged or the currently displayed battery level is above 10%. If this condition is not met, the sentinel mode fails to run, the sentinel switch is automatically turned off, and a remote message is sent to remind the main driving user. If the condition is met, it is further judged whether the battery level is above 20%; if it is not above 20%, a remote message is sent to remind the main driving user of the currently supportable sentinel duration. If the battery level is above 20%, the vehicle senses the external environment by calling the ultrasonic radar, the in-vehicle millimeter-wave radar, the AVM (Around View Monitor) cameras and other devices installed on the vehicle body, records video of the external environment, stores event photos and videos to a storage device, and then remotely prompts the user terminal (such as a mobile phone) to check the photos and videos.
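A minimal sketch of the run-time power gating just described, for illustration only; the thresholds follow the 10% and 20% figures above, while the function name and returned phrases are assumptions.

```python
def sentinel_power_check(charging: bool, battery_percent: float) -> str:
    """Illustrative power gating when the sentinel mode starts running."""
    if not (charging or battery_percent > 10.0):
        return "fail: turn off the sentinel switch and send a remote reminder"
    if battery_percent <= 20.0:
        return "run, but send a remote message about the currently supportable sentinel duration"
    return "run: monitor surroundings with ultrasonic radar, millimeter-wave radar and AVM cameras"
```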
When the sentry mode is started, the vehicle can display a sentry pattern on the display screen and broadcast a prompt message by voice indicating that the sentry mode is on. To save electric energy, the sentry pattern is not displayed continuously: when its display duration reaches a certain length (for example, 5 seconds or 10 seconds), the display screen is turned off. Fig. 23 shows a prompt effect diagram of the sentry mode, where 2301 represents an external vehicle lamp, 2302 represents the in-vehicle display screen, and 2303 represents the external speaker of the vehicle; when the sentry mode is on, the display screen shows the sentry pattern and the external speaker broadcasts the prompt message that the sentry mode is on.
When the vehicle exits the sentinel mode, the sentinel enablement is turned off. The sentinel mode can be exited in the following ways: the user unlocks the door; the sentinel mode is turned off remotely; the battery level falls below the level required for sentinel operation; the user manually exits the mode in the vehicle; the user exits by in-vehicle voice; and the like.
When the user gets back into the vehicle, if a sentinel abnormal event was recorded while the user was away and the currently logged-in account is a legitimate account (for example, the user started the sentinel mode for the first time and registered or logged in to the account), the user is reminded of the monitored sentinel event; for example, the sentinel event Widget pushes "an abnormal event was detected while you were away from the vehicle; tap to open Album - Sentinel Video to view it".
In the sentry mode, the vehicle calls the external cameras, radars, and other devices to monitor the external environment in real time. When it detects that a change in the external environment meets the sentry mode alarm condition, the vehicle is controlled, according to the triggering information of the sentry mode, to alarm in different alarm manners.
Specifically, according to triggering information of the sentinel mode, the vehicle is controlled to alarm in different alarm modes, including but not limited to the following cases:
In the first case, when the digital key of the vehicle is not detected and the distance between an external object and the vehicle is detected to be smaller than a first preset distance for longer than a first preset duration, the vehicle lamp is controlled to display the third light effect and alarm information is displayed on the first display interface.
The external object may be a person or a non-person (e.g., an animal), and the first preset distance differs for different external objects. When the external object is a person, the first preset distance may be 0.3 meters, 0.5 meters, 1 meter, etc.; when the external object is not a person, the first preset distance may be 0.2 meters, 0.1 meters, etc. The first preset duration may be 30 seconds, 60 seconds, etc. The third light effect is used to warn the external object that it is too close to the vehicle, and may be a light effect in which the external lamps stay lit and the interior lamps are yellow. Along with the light effect, alarm information is synchronously displayed on the first display interface, which is the sentinel interface; the content of the alarm information may be "video recording" or the like.
Fig. 24 shows a diagram of an alarm effect triggered by a sentinel mode, wherein 2301 represents an external car light, 2302 represents an in-car display screen, 2303 represents an out-car speaker, 2304 represents an internal atmosphere light, and 2305 represents a person outside the car. When the distance between the person 2305 and the vehicle is detected to be smaller than the first preset distance and longer than the first preset time period, the external car lamp 2301 and the internal atmosphere lamp 2304 are turned on, and the internal atmosphere lamp 2304 is yellow in color while the alarm information is displayed on the display screen 2302.
In the second case, when the digital key of the vehicle is not detected, and the duration of the time when the distance between the object outside the vehicle and the vehicle is detected to be smaller than the second preset distance is longer than the second preset time, the target car lamp of the vehicle is controlled to display according to the fourth light effect, the loudspeaker outside the vehicle is controlled to play the alarm sound effect, and the alarm information is displayed on the first display interface.
The second preset distance is smaller than the first preset distance and may be 0.1 meters, 0.05 meters, etc. The second preset duration may be 50 seconds, 60 seconds, etc. The fourth light effect is used to warn the external object to stop approaching, and may be a light effect in which the external lamps stay lit and the interior lamps are red. Along with the light effect, the external speaker plays an alarm sound effect, and alarm prompt information is synchronously displayed on the first display interface; its content may be "video recording" or the like.
Fig. 25 shows another alarm effect diagram triggered by a sentinel mode, in which when a digital key of a vehicle is not detected, when a distance between a person 2305 and the vehicle is detected to be smaller than a second preset distance and longer than a second preset time period, an external car lamp 2301 and an internal atmosphere lamp 2304 are turned on, the internal atmosphere lamp 2304 is red in color, and an external speaker plays an alarm sound effect and alarm information is displayed on a display screen 2302.
In the third case, when the digital key of the vehicle is not detected and an intrusion event by an external object is detected, the vehicle lamp is controlled to display the fifth light effect, the external speaker is controlled to play the alarm sound effect, and the alarm information is displayed on the first display interface.
The fifth light effect is used to warn the external object to stop intruding, and may be a light effect in which the external lamps stay lit and the interior lamps continuously flash red. Along with the light effect, the external speaker plays an alarm sound effect, and alarm prompt information is synchronously displayed on the first display interface; its content may be "video recording" or the like.
Fig. 26 shows another alarm effect diagram triggered by a sentinel mode, in which when an intrusion event of a person 2305 is detected without detecting a digital key of a vehicle, an external car lamp 2301 and an internal atmosphere lamp 2304 are turned on, and the internal atmosphere lamp 2304 continuously blinks with red light, and at the same time, an external speaker plays an alarm sound effect, and alarm information is displayed on a display screen 2302.
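For the three alarm cases above, the escalation logic might be sketched as follows for illustration only; the numeric thresholds are taken from the example values mentioned earlier and are otherwise assumptions.

```python
def select_sentinel_alarm(key_detected: bool, distance_m: float,
                          duration_s: float, intrusion: bool) -> str | None:
    """Escalate over the three cases above; the numeric thresholds are assumptions."""
    if key_detected:
        return None  # the owner's digital key is nearby, so no alarm is raised
    if intrusion:
        return "fifth light effect + alarm sound effect + alarm info on the first display interface"
    if distance_m < 0.1 and duration_s > 60:   # second preset distance / duration (assumed)
        return "fourth light effect + alarm sound effect + alarm info on the first display interface"
    if distance_m < 0.5 and duration_s > 30:   # first preset distance / duration (assumed)
        return "third light effect + alarm info on the first display interface"
    return None
```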
2202. In response to the external voice instruction, it is determined that the vehicle conforms to the external voice mode, and the sentry interface is switched to the external voice interface based on the external voice mode.
In the embodiment of the application, when the vehicle is in the sentry mode and meets the condition for the external voice to take effect, the vehicle generates the query instruction of the external voice. In response to the query instruction, it is determined that the vehicle conforms to the external voice mode; based on the external voice mode, the vehicle turns off the light effect of the sentry mode, controls the target lamp to display the light effect corresponding to the inactive state, and continues to play the alarm sound effect of the sentry mode.
In an embodiment of the application, the vehicle also switches the sentinel interface to the external voice interface based on the external voice mode. The external voice interface is used to indicate the available information of the external voice. When the sentinel mode is switched to the external voice mode, the external voice function is not yet activated and the vehicle continues to execute part of the operations of the sentinel mode, so the external voice interface simultaneously displays part of the execution results of the sentinel mode and part of the execution results of the external voice mode. For example, the external voice interface displays part of the execution results in the three-dimensional vehicle model, and also displays the effect information and text information that mark the available area of the external voice and the position of the microphone with dynamic effects such as flowing and breathing, as well as the light effect of the target lamp.
Further, when the out-of-car voice function is activated based on the available information, the vehicle generates an execution instruction of out-car voice, turns off an alarm sound effect of the sentry mode in response to the execution instruction of out-car voice, and performs a corresponding operation based on the execution instruction. And in the process of executing the execution instruction, displaying the light effect corresponding to the execution result according to the execution result, and simultaneously displaying the execution result of the execution instruction on the external voice interface. For the display mode of the light effect corresponding to the execution result and the display effect of the execution result on the external voice interface, refer to the above steps, which are not described herein in detail.
Because the voice function outside the vehicle is activated, the vehicle does not execute the operation in the sentry mode any more, and at the moment, the execution result of the execution instruction is displayed on the voice interface outside the vehicle.
Referring to fig. 27, a vehicle interaction method is provided in an embodiment of the present application, taking a first external interaction mode as a sentinel mode and a second external interaction mode as a greeting mode as an example, and the method provided in the embodiment of the present application includes:
2701. in response to the sentry instruction, it is determined that the vehicle meets a sentry mode, and a sentry interface is displayed based on the sentry mode.
2702. In response to the welcome instruction, it is determined that the vehicle conforms to the welcome mode, and the sentinel interface is switched to the welcome interface based on the welcome mode.
In an embodiment of the application, when the vehicle is in the sentinel mode and meets the welcome condition, the vehicle generates a welcome instruction. In response to the welcome instruction, it is determined that the vehicle conforms to the welcome mode; based on the welcome mode, the light effect and sound effect of the sentry mode are turned off, and the welcome instruction is executed according to the welcome mode.
In an embodiment of the application, the vehicle also switches the sentinel interface to a greeting interface based on the greeting mode. When the sentry interface is switched to the welcome interface, the sentry interface can be closed, and the welcome interface is displayed on a display area of the sentry interface, wherein the welcome interface can be in the form of an application interface or a popup window; the welcome interface can also be displayed directly on the sentinel interface; and a popup window can be arranged on the sentry interface to display a welcome interface. The welcome interface displays the execution result of the welcome mode. The display mode and the display content of the execution result of the welcome mode are not described in detail here.
Referring to fig. 28, a vehicle interaction method is provided in an embodiment of the present application, taking a first external vehicle interaction mode as a charging mode and a second external vehicle interaction mode as an external vehicle voice mode as an example, and the method provided in the embodiment of the present application includes:
2801. And responding to the charging instruction, determining that the vehicle accords with a charging mode, and displaying a charging interface based on the charging mode.
The charging mode is an off-vehicle interaction mode in which the vehicle is charged by inserting the charging gun into the charging port. For an autonomous vehicle, to make it easy for the user to check the battery level and charge the vehicle in time when the level is low, the remaining battery level and the corresponding driving range are displayed on the central control screen when the vehicle is in the D gear. When the vehicle is not being charged, the user may adjust the target charge amount, i.e., the charge limit value, by voice, for example adjusting the target level to 90%; the user may also adjust the target level by voice while the vehicle is being charged. During charging, the charging-related information is displayed together in the driving information area; to avoid scattering the information, after the adjustment is executed successfully, the vehicle widget displayed on the central control screen can be reused to show the adjustment result of the target charge level.
When the vehicle is in the P gear and the charging port is open, the charging gun can be inserted into the charging port to charge the vehicle. In the charging mode, after the charging gun is plugged in or unplugged, the current charging state of the vehicle can be broadcast by voice. For example, a voice broadcast may state that the vehicle is currently charging; or, when the charging gun is not successfully connected to the charging port, a voice broadcast may state that the connection is abnormal and the charging gun needs to be re-plugged before the vehicle can charge normally. In the charging mode, the user can also query by voice the vehicle's charge level, remaining charging time, driving range, and the like.
In the off-vehicle scene, when insertion of the charging gun is detected, the vehicle generates a charging instruction; based on the charging instruction, it is determined that the vehicle conforms to the charging mode, and the charging interface is then displayed based on the charging mode. The charging interface is used to display the charging result of the vehicle, including the charging state, the remaining charging time, and the like. The charging state includes a flap-open gun-not-inserted state, a gun-inserted not-charging state, a normal charging state, a fast charging state, a charging abnormality/fault state, and so on. To better show these states and strengthen the user's perception of them, elements in the scene such as the charging port, the charging gun, the charging pile, the external vehicle lamps, and the charging parking space can be used to reflect the charging effect of each state.
In the flap-open, gun-not-inserted state, the charging gun is not inserted into the charging port; the charging interface can display the three-dimensional vehicle model positioned in the charging parking space, without any charging gun animation.
For the gun-inserted, not-charging state, the charging gun is inserted into the charging port but the vehicle is not being charged; the connection between the charging gun and the three-dimensional vehicle model can be continuously displayed to reflect the gun-inserted effect.
For the normal charging state, the charging gun is inserted into the charging port and the vehicle is charged at an ordinary rate; the charging effect at this rate can be reflected by elements such as the 3D element colors (for example, the charging pile, charging gun, and charging port are shown in blue), a relatively slow energy flow speed, and the charging state of the external vehicle lamps.
For the fast charging state, the charging gun is inserted into the charging port and the vehicle is charged at a faster rate; the charging effect at this rate can be reflected by elements such as the 3D element colors (for example, the charging pile, charging gun, and charging port are shown in purple), a relatively fast energy flow speed, and the charging state of the external vehicle lamps.
For the abnormal/fault state of charging, the charging gun is inserted into the charging port at this time, but the vehicle cannot be normally charged, the abnormal state of charging caused by various reasons can be represented by the color and the blinking state of the 3D element, for example, the color of the charging pile, the charging gun and the charging port is adjusted to red, the charging gun and the charging port continuously blink, and the like.
In the embodiment of the application, in addition to the charge state and remaining charging time of the vehicle, the charging interface may also display three-dimensional models of other vehicles around the vehicle, a dock, a status bar, and the like.
In the embodiment of the application, in order to enhance the perception of the user on the charging state, when the charging gun is successfully inserted into the charging port, a prompt message of successful charging connection can be played; when the vehicle is successfully charged, a prompt message for starting charging can be played; when the charging is completed, a prompt message of the completion of the charging may be played.
In the embodiment of the application, the external car lamp can be controlled to display by adopting different lamp effects based on different charging states of the vehicle. For example, when the charging flap has been opened but no gun inserted, the exterior vehicle lights may be controlled to display a second preset color, e.g., yellow; when the charging gun is inserted into the charging port, the external car lamp can be controlled to flash at a third preset frequency; when the vehicle starts to be charged at the normal speed, the external vehicle lamp can be controlled to flow at a first preset speed; when the vehicle starts to be charged at a faster speed, the external vehicle lamp can be controlled to flow at a second preset speed, wherein the first preset speed is smaller than the second preset speed; when the charging is abnormal/failed, the external vehicle lamp may be controlled to display a third preset color, for example, red; when the charging is completed, the external vehicle lamp may be controlled to display a fourth preset color, for example, green.
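For illustration only (not part of the disclosure), the mapping from charging state to the external light effects just described could be expressed as a lookup table; the state keys and descriptive strings are assumptions.

```python
# Illustrative mapping of charging states to exterior light effects, as described above.
CHARGING_LIGHT_EFFECTS = {
    "flap_open_no_gun": "steady second preset color (e.g. yellow)",
    "gun_inserted_not_charging": "blink at the third preset frequency",
    "normal_charging": "flowing effect at the first (slower) preset speed",
    "fast_charging": "flowing effect at the second (faster) preset speed",
    "fault": "steady third preset color (e.g. red)",
    "complete": "steady fourth preset color (e.g. green)",
}

def charging_light_effect(state: str) -> str:
    return CHARGING_LIGHT_EFFECTS.get(state, "off")
```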
2802. In response to the external voice instruction, it is determined that the vehicle conforms to the external voice mode, and the charging interface is switched to the external voice interface based on the external voice mode.
When the vehicle is in the charging mode and meets the condition for the external voice to take effect, the vehicle generates the query instruction of the external voice. In response to the query instruction, it is determined that the vehicle conforms to the external voice mode; based on the external voice mode, the vehicle turns off the light effect of the charging mode and displays the light effect of the external voice inactive state.
In this embodiment of the present application, based on the external voice mode, the vehicle switches the charging interface to the external voice interface. The external voice interface displays not only the available information of the external voice but also the charging information of the three-dimensional vehicle model; for example, the three-dimensional vehicle model may be shown in a semi-transparent state, displaying the battery, the current charge level, whether the vehicle is charging, and the target charge level.
When the vehicle-exterior voice function is activated based on the available information, the vehicle generates an execution instruction of the vehicle-exterior voice, responds to the execution instruction of the vehicle-exterior voice, executes corresponding operation based on the execution instruction, further displays an execution result of the execution instruction on the vehicle-exterior voice interface, and displays a lamp effect corresponding to the execution result.
Referring to fig. 29, a vehicle interaction method is provided in an embodiment of the present application, taking a first vehicle external interaction mode as a charging mode and a second vehicle external interaction mode as a sentinel mode as an example, and a method flow provided in the embodiment of the present application includes:
2901. and responding to the charging instruction, determining that the vehicle accords with a charging mode, and displaying a charging interface based on the charging mode.
2902. In response to the sentry alarm instruction, it is determined that the vehicle conforms to the sentry mode, and the charging interface is switched to the sentry interface based on the sentry mode.
When the vehicle is in the charging mode and meets the condition for the sentry mode to take effect, the vehicle generates a sentry alarm instruction. In response to the sentry alarm instruction, it is determined that the vehicle conforms to the sentry mode; based on the sentry mode, the light effect and sound effect of the charging mode are turned off, and the sentry alarm instruction is executed according to the sentry mode.
In the embodiment of the application, the vehicle also switches the charging interface to the sentinel interface based on the sentinel mode. When the charging interface is switched to the sentry interface, the charging interface can be closed, the sentry interface is displayed on a display area of the charging interface, and the sentry interface can be in the form of an application interface or a popup window; the sentinel interface can also be directly displayed on the charging interface; the sentry interface can also be displayed on the charging interface through a popup window. The sentinel interface not only displays the execution result of the sentinel instruction, but also displays the charging information of the three-dimensional vehicle model.
Referring to fig. 30, a vehicle interaction method is provided in the embodiment of the present application, taking a first external vehicle interaction mode as a pet mode and a second external vehicle interaction mode as a sentinel mode as an example, and a specific interaction process is as follows:
3001. and responding to the pet mode starting instruction, determining that the vehicle accords with the pet mode, and displaying a pet mode interface based on the pet mode.
The pet mode is an external interaction mode which is started when the existence of pets in the accompanying passengers is detected under the external scene of the vehicle. The pet mode can operate when the car owner takes the car and after leaving the car, so that a comfortable riding environment can be provided for the pet in the riding process, the safety of the pet can be ensured after the car owner leaves the car, and the pet is prevented from being irritated. After the pet mode is started in the scene outside the vehicle, a pet mode starting prompt message can be played so as to reduce the worry of users carrying pets for riding.
Under the scene outside the vehicle, when the existence of the pet in the accompanying passenger is detected, in order to pacify the pet and ensure the riding safety of the pet, the vehicle generates a pet mode starting instruction, and the vehicle is determined to accord with the pet mode in response to the pet mode starting instruction, so that the vehicle is controlled to execute the pet mode starting instruction according to the pet mode based on the pet mode, and a comfortable riding environment is built for the pet.
Specifically, controlling the vehicle to execute the pet mode start instruction according to the pet mode includes at least one of the following: adjusting the temperature of the air conditioning system to a temperature suitable for the pet; lowering the angle of the rear seats to leave more room for the pet to move; and playing audio/video that can soothe the pet.
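For illustration only (not part of the disclosure), the pet-mode start actions above might be carried out as follows; the `vehicle` interface, method names, and the temperature value are assumptions.

```python
def start_pet_mode(vehicle) -> None:
    """Illustrative pet-mode start actions; the `vehicle` interface and values are assumed."""
    vehicle.air_conditioner.set_temperature(22)   # a temperature suitable for the pet (assumed value)
    vehicle.rear_seats.lower_backrest()           # lower the rear seat angle to free up space
    vehicle.media.play("soothing_pet_playlist")   # play audio/video that can soothe the pet
    vehicle.ambient_lights.set_color("yellow")    # fifth preset color, as described below
    vehicle.speaker.announce("Pet mode is on")    # prompt message after the mode starts
```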
In the embodiment of the application, in response to the pet mode start instruction, the target lamp of the vehicle can also be controlled to display a corresponding light effect; for example, the in-vehicle atmosphere lamp can be controlled to display a fifth preset color such as yellow. After the pet mode is started, a prompt message indicating that the pet mode is on can be played.
In the embodiment of the application, based on the pet mode, the vehicle also displays a pet mode interface for showing the execution results of the pet mode, including the current in-vehicle temperature, the seat angle, the audio/video being played, an image of the pet, and so on. In the pet mode, when the user leaves the vehicle and the pet remains inside, the vehicle does not power down and the air conditioning system keeps working to maintain a suitable in-vehicle environment and keep the pet from becoming uncomfortable or agitated. At this time, the pet mode interface also displays a text prompt indicating that the pet is in a safe state in this mode, for example, "Don't worry about me, my owner will be back soon" or "I am safe and comfortable".
Further, in the pet mode, when it is detected that the user has left the vehicle without taking the pet along, the user can be prompted by voice to pay attention to the pet's safety while it is left in the vehicle.
3002. In response to the sentry alarm instruction, it is determined that the vehicle conforms to the sentry mode, and the pet mode interface is switched to the sentry interface based on the sentry mode.
In the pet mode, when the vehicle meets the sentinel mode effective condition, the vehicle generates a sentinel alarm instruction. In response to the sentry warning instruction, the vehicle is determined to conform to the sentry mode, and based on the sentry mode, the light effect and the sound effect of the pet mode are closed, and then the sentry warning instruction is executed according to the sentry mode.
In the embodiment of the application, the pet mode interface is also switched to the sentinel interface based on the sentinel mode. When the pet mode interface is switched to the sentinel interface, the pet mode interface can be closed, the sentinel interface is displayed on a display area of the pet mode interface, and the sentinel interface can be in the form of an application interface or a popup window; the sentinel interface can also be displayed directly on the pet mode interface; the sentry interface can also be displayed on a popup window on the pet mode interface.
In order to protect the safety of the pet in the vehicle, when the pet mode is switched to the sentinel mode, the vehicle air conditioning system keeps working to prevent the in-vehicle temperature from becoming too high or too low and endangering the pet.
In addition, to prevent an outside person from mistakenly believing that the pet in the vehicle is in danger and breaking into the vehicle, after the pet mode interface is switched to the sentinel interface, the sentinel interface displays not only the execution results of the sentinel mode but also the execution results of the pet mode.
Referring to fig. 31, an embodiment of the present application provides a vehicle interaction method, taking a first vehicle external interaction mode as a pet mode and a second vehicle external interaction mode as a guest mode as an example, and specific interaction processes are as follows:
3101. and responding to the pet mode starting instruction, determining that the vehicle accords with the pet mode, and displaying a pet mode interface based on the pet mode.
3102. In response to the welcome instruction, it is determined that the vehicle conforms to the welcome mode, and the pet mode interface is switched to the welcome interface based on the welcome mode.
In the pet mode, when the vehicle meets the condition for the welcome mode to take effect, the vehicle generates a welcome instruction. In response to the welcome instruction, it is determined that the vehicle conforms to the welcome mode; based on the welcome mode, the light effect and sound effect of the pet mode are turned off, the welcome instruction is executed according to the welcome mode, and the pet mode interface is switched to the welcome interface. When switching from the pet mode interface to the welcome interface, the pet mode interface can be closed and the welcome interface displayed in its display area, in the form of an application interface or a pop-up window; the welcome interface can also be displayed directly on the pet mode interface; or the welcome interface can be displayed as a pop-up window on the pet mode interface.
In order to protect the safety of the pet in the vehicle, when the pet mode is switched to the welcome mode, the vehicle air conditioning system keeps working to prevent the in-vehicle temperature from becoming too high or too low and endangering the pet.
In addition, to prevent an outside person from mistakenly believing that the pet in the vehicle is in danger and breaking into the vehicle, after the pet mode interface is switched to the welcome interface, the welcome interface displays not only the execution results of the welcome mode but also the execution results of the pet mode.
Referring to fig. 32, a vehicle interaction method is provided in an embodiment of the present application, taking a first external vehicle interaction mode as a pet mode and a second external vehicle interaction mode as an external vehicle voice mode as an example, a method flow provided in the embodiment of the present application includes:
3201. and responding to the pet mode starting instruction, determining that the vehicle accords with the pet mode, and displaying a pet mode interface based on the pet mode.
3202. In response to the external voice instruction, it is determined that the vehicle conforms to the external voice mode, and the pet mode interface is switched to the external voice interface based on the external voice mode.
In the pet mode, when the vehicle meets the condition for the external voice to take effect, the vehicle generates the query instruction of the external voice. In response to the query instruction, it is determined that the vehicle conforms to the external voice mode; based on the external voice mode, the vehicle turns off the light effect of the pet mode and displays the light effect of the external voice inactive state. Based on the external voice mode, the vehicle also switches the pet mode interface to the external voice interface. Specifically, the pet mode interface can be closed and the external voice interface displayed in its display area, in the form of an application interface or a pop-up window; the external voice interface can also be displayed directly on the pet mode interface; or the external voice interface can be displayed as a pop-up window on the pet mode interface.
In order to protect the safety of pets in the vehicle, when the mode of the pets is switched to a voice interface outside the vehicle, the air conditioning system of the vehicle continuously works so as to avoid the over-high or over-low temperature in the vehicle and harm the safety of the pets.
In addition, in order to avoid the invasion event of the vehicle caused by the danger of the pet in the vehicle by the mistake of the external object, after the pet mode interface is switched to the external voice interface, the external voice interface displays the available information of the external voice and also displays the execution result in the pet mode.
When the voice function outside the vehicle is activated based on the available information, the vehicle generates an execution instruction of the voice outside the vehicle, responds to the execution instruction of the voice outside the vehicle, executes the execution instruction, and displays the light effect corresponding to the execution result of the execution instruction. At this time, the execution result of the execution instruction and the execution result in the pet mode are displayed on the external voice interface.
Referring to fig. 33, a vehicle interaction method is provided in an embodiment of the present application, taking a first vehicle external interaction mode as a child mode and a second vehicle external interaction mode as a greeting mode as an example, where a method flow provided in the embodiment of the present application includes:
3301. And responding to the child mode starting instruction, determining that the vehicle accords with the child mode, and displaying a child mode interface based on the child mode.
The child mode is an out-of-vehicle interaction mode that is started when a child is detected among the accompanying passengers. The child mode can operate both while the vehicle owner is in the vehicle and after the owner has left, so that a comfortable riding environment is provided for the child during the ride and the child's safety is ensured after the owner leaves the vehicle. After the child mode is started in the out-of-vehicle scene, a child mode start prompt message can be played.
In the out-of-vehicle scene, when a child is detected among the accompanying passengers, in order to ensure the child's riding safety, the vehicle generates a child mode start instruction. In response to the child mode start instruction, the vehicle determines that it conforms to the child mode and, based on the child mode, is controlled to execute the child mode start instruction according to the child mode, thereby creating a comfortable riding environment for the child.
Specifically, controlling the vehicle to execute the child mode start instruction according to the child mode includes at least one of the following: adjusting the temperature of the air conditioning system to a temperature suitable for children; adjusting the angle of the seat; closing the vehicle windows; and playing children's songs or videos that the child likes.
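The following Python sketch illustrates one way such child mode actions could be dispatched; the function names, the `vehicle` setters, and the parameter values are illustrative assumptions, not part of the application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChildModeSettings:
    """Illustrative defaults; real values would come from vehicle calibration."""
    target_temp_c: float = 24.0      # a temperature assumed suitable for children
    seat_recline_deg: float = 20.0   # gentle recline angle for the seat
    media_playlist: str = "children_songs"

def execute_child_mode(vehicle, settings: Optional[ChildModeSettings] = None) -> dict:
    """Apply the child mode actions and return an execution result for the interface.

    `vehicle` is assumed to expose setters such as set_cabin_temperature();
    any subset of these actions may be performed, per the embodiment above.
    """
    settings = settings or ChildModeSettings()
    vehicle.set_cabin_temperature(settings.target_temp_c)  # temperature suitable for children
    vehicle.set_seat_angle(settings.seat_recline_deg)      # adjust the seat angle
    vehicle.close_windows()                                 # close the vehicle windows
    vehicle.play_media(settings.media_playlist)             # songs/videos the child likes
    # The returned dict is what the child mode interface could display.
    return {
        "cabin_temperature_c": settings.target_temp_c,
        "seat_angle_deg": settings.seat_recline_deg,
        "media": settings.media_playlist,
        "status": "child is safe in child mode",
    }
```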
In the embodiment of the application, based on the child mode, the vehicle also displays a child mode interface for showing the execution result of the child mode, where the execution result includes the current in-vehicle temperature, the seat angle, an image of the child, and the like. When the user leaves the vehicle while the child remains inside, the corresponding functions of the child mode are kept running, and the air conditioner keeps working to ensure a suitable in-vehicle environment. At this time, the child mode interface also displays a text prompt indicating that the child is in a safe state in this mode.
In the embodiment of the application, in response to the child mode start instruction, a target car lamp of the vehicle can also be controlled to display a corresponding light effect. For example, the interior ambience lamp is controlled to display a sixth preset color, such as green. After the child mode is turned on, a prompt message indicating that the child mode has been turned on can also be played.
Further, in the child mode, when it is detected that the user leaves the vehicle and the child is left behind unaccompanied, a voice prompt can remind the user to pay attention to the child's safety.
3302. And responding to the welcome instruction, determining that the vehicle accords with a welcome mode, and switching the child mode interface into the welcome interface based on the welcome mode.
In another embodiment of the application, in the child mode, when the vehicle meets the validation condition of the welcome mode, the vehicle generates a welcome instruction. In response to the welcome instruction, the vehicle determines that it conforms to the welcome mode, turns off the light effect and the sound effect of the child mode based on the welcome mode, executes the welcome instruction according to the welcome mode, and at the same time switches the child mode interface to the welcome interface. When the child mode interface is switched to the welcome interface, the child mode interface can be closed and the welcome interface displayed in the display area of the child mode interface, where the welcome interface can take the form of an application interface or a popup window; the welcome interface can also be displayed directly on the child mode interface; or a popup window can be displayed on the child mode interface to present the welcome interface.
In order to protect the safety of the child in the vehicle, when the child mode is switched to the welcome mode, the vehicle air conditioning system keeps working, so that the temperature in the vehicle does not become too high or too low and endanger the child.
In addition, to prevent a person outside the vehicle from mistakenly believing that the child in the vehicle is in danger and breaking into the vehicle, after the child mode interface is switched to the welcome interface, the welcome interface displays not only the execution result of the welcome mode but also the execution result of the child mode.
Referring to fig. 34, a vehicle interaction method is provided in an embodiment of the present application, taking a first vehicle external interaction mode as a child mode and a second vehicle external interaction mode as a vehicle external voice mode as an example, a method flow provided in the embodiment of the present application includes:
3401. And responding to the child mode starting instruction, determining that the vehicle accords with the child mode, and displaying a child mode interface based on the child mode.
3402. And responding to the external voice command, determining that the vehicle accords with the external voice mode, and switching the child mode interface to the external voice interface based on the external voice mode.
In another embodiment of the present application, in the child mode, when the vehicle meets the validation condition of the external voice, the vehicle generates an inquiry command of the external voice. In response to the inquiry command, the vehicle determines that it conforms to the external voice mode and, based on the external voice mode, turns off the light effect of the child mode and displays the light effect of the external voice in the inactive state. Based on the external voice mode, the vehicle also switches the child mode interface to the external voice interface. Specifically, the child mode interface can be closed and the external voice interface displayed in the display area of the child mode interface, where the external voice interface can take the form of an application interface or a popup window; the external voice interface can also be displayed directly on the child mode interface; or a popup window can be displayed on the child mode interface to present the external voice interface.
In order to protect the safety of the child in the vehicle, when the child mode is switched to the external voice mode, the vehicle air conditioning system keeps working, so that the temperature in the vehicle does not become too high or too low and endanger the child.
In addition, to prevent a person outside the vehicle from mistakenly believing that the child in the vehicle is in danger and breaking into the vehicle, after the child mode interface is switched to the external voice interface, the external voice interface displays not only the available information of the external voice but also the execution result of the child mode.
When the external voice function is activated based on the available information, the vehicle generates an execution instruction of the external voice, executes the execution instruction in response to it, and displays the light effect corresponding to the execution result of the execution instruction. At this time, the external voice interface displays both the execution result of the execution instruction and the execution result of the child mode.
Referring to fig. 35, a vehicle interaction method is provided in an embodiment of the present application, taking a first vehicle external interaction mode as a welcome mode and a second vehicle external interaction mode as a parking mode as an example, and the method provided in the embodiment of the present application includes:
3501. In response to the welcome instruction, determining that the vehicle accords with the welcome mode, and displaying a welcome interface based on the welcome mode.
3502. And responding to the parking instruction, determining that the vehicle accords with a parking mode, and switching the welcome interface to the parking interface based on the parking mode.
In this embodiment, the parking mode is an out-of-vehicle interaction mode in which the vehicle automatically drives out of a parking space. The parking mode can be started by two mechanisms: in the first, the primary driver starts the parking mode through the central control screen before leaving the vehicle; in the second, after the primary driver has left the vehicle, the vehicle receives a parking instruction from the primary driver and starts the parking mode. The parking instruction may come from two sources: the primary driver may send the parking instruction to the vehicle through a digital key, or send it to the vehicle through voice information.
In the second case, after the vehicle receives the parking instruction from the primary driver and before the parking mode is started, the identity of the primary driver is verified, and the parking mode is started only when the verification passes. When verifying the identity of the primary driver, the voiceprint features of the primary driver can be extracted, a registered account corresponding to the voiceprint features is obtained based on the extracted voiceprint features, and if this registered account is the same account as the one associated with the digital key, the primary driver is determined to have passed verification, after which the vehicle is parked out of the parking space based on the parking mode.
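A minimal sketch of this voiceprint-based identity check is shown below, assuming hypothetical helpers extract_voiceprint() and lookup_account(); the application does not specify these interfaces.

```python
from typing import Optional

def verify_parking_requester(audio: bytes,
                             digital_key_account: str,
                             extract_voiceprint,
                             lookup_account) -> bool:
    """Return True if the voice that issued the parking instruction belongs to the
    same registered account as the vehicle's digital key.

    `extract_voiceprint` and `lookup_account` are assumed callables:
      extract_voiceprint(audio) -> voiceprint features
      lookup_account(features)  -> registered account (str) or None
    """
    features = extract_voiceprint(audio)               # extract voiceprint features
    account: Optional[str] = lookup_account(features)  # resolve to a registered account
    # Start the parking mode only when both accounts refer to the same user.
    return account is not None and account == digital_key_account
```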
In the parking mode, the parking-out logic of the vehicle differs according to the type of parking space and the direction in which the vehicle nose faces. When the vehicle is in a perpendicular parking space with the nose facing outward, the vehicle drives straight out by half a body length; when the vehicle is in a perpendicular parking space with the nose facing inward, the vehicle parks out to a position where the doors can be opened; when the vehicle is in a parallel parking space, the parking-out direction is determined automatically and the vehicle drives out.
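The following sketch expresses that maneuver selection as a simple decision function; the enum names and maneuver strings are illustrative assumptions rather than terms from the application.

```python
from enum import Enum, auto

class SlotType(Enum):
    PERPENDICULAR = auto()
    PARALLEL = auto()

class NoseDirection(Enum):
    OUTWARD = auto()
    INWARD = auto()

def choose_park_out_maneuver(slot: SlotType, nose: NoseDirection | None = None) -> str:
    """Pick a park-out maneuver based on slot type and vehicle nose direction."""
    if slot is SlotType.PERPENDICULAR:
        if nose is NoseDirection.OUTWARD:
            # Nose faces out of the slot: drive straight out by half a body length.
            return "drive straight out by half a body length"
        # Nose faces into the slot: back out until the doors can be opened.
        return "park out to a position where the doors can open"
    # Parallel slot: the direction to exit is decided automatically.
    return "automatically choose the exit direction and drive out"
```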
In the parking mode, the vehicle can use the off-vehicle voice to remind people and animals outside the vehicle to pay attention to safety, and when the vehicle has successfully parked out of the parking space it can broadcast a parking success message by voice, reminding the primary driver that the vehicle is ready to drive.
In the welcome mode, in response to the parking instruction, the vehicle determines that it conforms to the parking mode, turns off the light effect and the sound effect of the welcome mode based on the parking mode, and executes the parking instruction according to the parking mode. When executing the parking instruction according to the parking mode, the vehicle judges whether the voiceprint corresponding to the parking instruction and the account corresponding to the digital key belong to the same user, and executes the instruction only if they do. In response to the parking instruction, the vehicle judges whether the parking instruction is executable; if it is executable, the vehicle is controlled to park out of the parking space, and the corresponding light effect and sound effect are matched during execution of the parking instruction; if the parking instruction is not executable, for example because a person or pet is detected in the vehicle, the four doors and two covers are not closed, the charging gun is not unplugged, or no parking space has been selected, the reason why the instruction cannot be executed is broadcast by voice.
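A hedged sketch of that executability pre-check is given below; the flag names and message strings are assumptions made for illustration only.

```python
def check_parking_executable(vehicle_state: dict) -> tuple[bool, str]:
    """Return (executable, reason). `vehicle_state` is an assumed dict of sensor flags."""
    checks = [
        ("occupant_present",     "a person or pet is still in the vehicle"),
        ("doors_or_covers_open", "the four doors and two covers are not all closed"),
        ("charging_gun_plugged", "the charging gun has not been unplugged"),
        ("no_slot_selected",     "no parking space has been selected"),
    ]
    for flag, reason in checks:
        if vehicle_state.get(flag, False):
            # Not executable: the reason would be broadcast by the off-vehicle voice.
            return False, reason
    return True, "parking instruction is executable"

# Example: a pet left in the vehicle blocks the maneuver.
ok, reason = check_parking_executable({"occupant_present": True})
assert not ok and "person or pet" in reason
```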
In the embodiment of the application, during execution of the parking instruction according to the parking mode, if a person or animal is detected outside the vehicle, the person or animal is also prompted by voice to pay attention to safety.
In the embodiment of the application, the vehicle also switches the welcome interface to the parking interface based on the parking mode. The parking interface may display the execution result of the parking instruction, where the execution result includes an animation of the parking process, surrounding environment information, and the like, for example a three-dimensional model of the vehicle in the surrounding parking spaces, and surrounding people and animals.
Referring to fig. 36, a vehicle interaction method is provided in the embodiment of the present application, taking a first external vehicle interaction mode as an external vehicle voice mode and a second external vehicle interaction mode as a parking mode as an example, and the method provided in the embodiment of the present application includes:
3601. And responding to the external voice command, determining that the vehicle accords with the external voice mode, and displaying an external voice interface based on the external voice mode.
3602. In response to the parking instruction, determining that the vehicle conforms to a parking mode, and switching the off-vehicle voice interface to the parking interface based on the parking mode.
In this embodiment, the parking mode is an out-of-vehicle interaction mode in which the vehicle automatically parks into a parking space. The parking mode can be started by two mechanisms: in the first, the primary driver starts the parking mode through the central control screen before leaving the vehicle; in the second, after the primary driver has left the vehicle, the vehicle receives a parking instruction from the primary driver and starts the parking mode. The parking instruction may come from two sources: the primary driver may send the parking instruction to the vehicle through a digital key, or send it to the vehicle through voice information.
In the second case, after the vehicle receives the parking instruction from the primary driver and before the parking mode is started, the identity of the primary driver is verified, and the parking mode is started only when the verification passes. When verifying the identity of the primary driver, the voiceprint features of the primary driver can be extracted, a registered account corresponding to the voiceprint features is obtained based on the extracted voiceprint features, and if this registered account is the same account as the one associated with the digital key, the primary driver is determined to have passed verification, after which the vehicle is parked into the parking space based on the parking mode.
In the parking mode, the vehicle can use the off-vehicle voice to remind people and animals outside the vehicle to pay attention to safety, and when the vehicle has successfully parked into the parking space it can broadcast a parking success notification by voice, easing the primary driver's worry about leaving the vehicle.
In the off-vehicle voice mode, in response to the parking instruction, the vehicle determines that it conforms to the parking mode, turns off the light effect and the sound effect of the off-vehicle voice mode based on the parking mode, and executes the parking instruction according to the parking mode. Specifically, the vehicle judges whether the voiceprint corresponding to the parking instruction and the account corresponding to the digital key belong to the same user; if they do, the instruction is executed, and if not, the user is prompted that the off-vehicle voice function cannot be used. In response to the parking instruction, the vehicle judges whether the parking instruction is executable; if it is executable, the vehicle is controlled to park into the parking space using an APA or HAPA system, and the corresponding light effect and sound effect are matched during execution of the parking instruction; if the parking instruction is not executable, for example because a person or pet is detected in the vehicle, the four doors and two covers are not closed, the charging gun is not unplugged, or no parking space has been selected, the reason why the instruction cannot be executed is broadcast by voice.
In the embodiment of the application, the vehicle also switches the off-vehicle voice interface to the parking interface based on the parking mode. The parking interface may display the execution result of the parking instruction, where the execution result includes an animation of the parking process, surrounding environment information, and the like, for example a three-dimensional model of the vehicle in the surrounding parking spaces, and surrounding people and animals.
Referring to fig. 37, a vehicle interaction method is provided in the embodiment of the present application, taking a first external vehicle interaction mode as an external vehicle voice mode and a second external vehicle interaction mode as a parking mode as an example, a method flow provided in the embodiment of the present application includes:
3701. And responding to the external voice command, determining that the vehicle accords with the external voice mode, and displaying an external voice interface based on the external voice mode.
3702. And responding to the parking instruction, determining that the vehicle accords with a parking mode, and switching the voice interface outside the vehicle to the parking interface based on the parking mode.
In the off-vehicle voice mode, in response to the parking instruction, the vehicle determines that it conforms to the parking mode, turns off the light effect and the sound effect of the off-vehicle voice mode based on the parking mode, and executes the parking instruction according to the parking mode. When executing the parking instruction according to the parking mode, the vehicle judges whether the voiceprint corresponding to the parking instruction and the account corresponding to the digital key belong to the same user, and executes the instruction only if they do. In response to the parking instruction, the vehicle judges whether the parking instruction is executable; if it is executable, the vehicle is controlled to park out of the parking space, and the corresponding light effect and sound effect are matched during execution of the parking instruction; if the parking instruction is not executable, for example because a person or pet is detected in the vehicle, the four doors and two covers are not closed, the charging gun is not unplugged, or no parking space has been selected, the reason why the instruction cannot be executed is broadcast by voice.
In the embodiment of the application, the vehicle also switches the off-vehicle voice interface to the parking interface based on the parking mode. The parking interface may display the execution result of the parking instruction, where the execution result includes an animation of the parking process, surrounding environment information, and the like, for example a three-dimensional model of the vehicle in the surrounding parking spaces, and surrounding people and animals.
The above examples merely illustrate switching between out-of-vehicle interaction modes and do not exhaust all possible switches; in practice, mode switching can be performed whenever the mode switching condition is satisfied.
Referring to fig. 38, a schematic structural diagram of a vehicle interaction device according to an embodiment of the present application is provided. The device may be implemented by software, hardware, or a combination of the two, and may be all or part of an electronic device. The device includes:
a first determining module 3801 configured to determine, in response to a first interaction instruction, that the vehicle conforms to a first off-vehicle interaction mode;
the first display module 3802 is configured to display a first display interface based on the first off-vehicle interaction mode, where the first display interface is configured to display an execution result of the first interaction instruction;
A second determining module 3803 configured to determine, in response to a second interaction instruction, that the vehicle conforms to a second out-of-vehicle interaction mode;
the switching module 3804 is configured to switch the first display interface to a second display interface based on the second external interaction mode, where the second display interface is configured to display an execution result of the second interaction instruction.
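To make the module decomposition of fig. 38 concrete, the sketch below mirrors the four modules in Python; the class name VehicleInteractionDevice and the ui dependency are invented for illustration and are not part of the application.

```python
class VehicleInteractionDevice:
    """Toy counterpart of the device in fig. 38: two determining modules,
    a display module, and a switching module, expressed as methods."""

    def __init__(self, ui) -> None:
        self.ui = ui  # assumed object exposing show(interface) / replace(interface)

    def determine_first_mode(self, instruction) -> str:
        # First determining module: map the first interaction instruction to a mode.
        return instruction.target_mode

    def display_first_interface(self, mode: str, result: str) -> None:
        # First display module: show the execution result of the first instruction.
        self.ui.show(f"{mode} interface: {result}")

    def determine_second_mode(self, instruction) -> str:
        # Second determining module: map the second interaction instruction to a mode.
        return instruction.target_mode

    def switch_to_second_interface(self, mode: str, result: str) -> None:
        # Switching module: replace the first interface with the second one.
        self.ui.replace(f"{mode} interface: {result}")
```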
In another embodiment of the application, the first interaction instruction is an inquiry instruction of the voice outside the vehicle, and the first indication information is displayed on the first display interface and is used for indicating available information of the voice outside the vehicle; the apparatus further comprises:
the receiving module is used for receiving a second interaction instruction, and the second interaction instruction is determined based on the available information of the voice outside the vehicle;
the first control module is used for controlling the vehicle to execute the second interaction instruction according to the external voice mode when the second out-of-vehicle interaction mode is the external voice mode;
and the second display module is used for displaying the execution result in the out-of-car voice mode on the second display interface.
In another embodiment of the present application, the first control module is configured to control the speaker outside the vehicle to play listening prompt information when the voice information of the target object is in a listening state, and to control the speaker outside the vehicle to broadcast the execution result of the second interaction instruction when the voice information is in a broadcasting state.
In another embodiment of the present application, the apparatus further comprises:
the first display module is used for displaying, in response to the first interaction instruction, a light effect corresponding to the execution result of the first interaction instruction;
the second display module is used for displaying, in response to the second interaction instruction, a light effect corresponding to the execution result of the second interaction instruction.
In another embodiment of the present application, the second out-of-vehicle interaction mode is the external voice mode, and the second display module is configured to: control a target car lamp of the vehicle to display a first light effect when the execution result of the second interaction instruction is that the voice information of the target object is in a listening state; control the target car lamp to display a second light effect when the execution result of the second interaction instruction is that the voice information is in a recognition state; and, when the execution result of the second interaction instruction is that the voice information is in a feedback state, control the target car lamp to display the light effect corresponding to the type of the execution result of the second interaction instruction.
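As an illustration of this state-to-light-effect mapping, the sketch below uses an enum of voice states; the effect identifiers simply echo the wording above, and the feedback-type mapping and lamp API are assumptions.

```python
from enum import Enum, auto

class VoiceState(Enum):
    LISTENING = auto()
    RECOGNIZING = auto()
    FEEDBACK = auto()

def light_effect_for(state: VoiceState, feedback_type: str | None = None) -> str:
    """Map the off-vehicle voice state to the light effect the target lamp should show."""
    if state is VoiceState.LISTENING:
        return "first light effect"
    if state is VoiceState.RECOGNIZING:
        return "second light effect"
    # Feedback state: the effect depends on the type of the execution result,
    # e.g. success vs. failure; this particular mapping is an assumed example.
    return {"success": "success light effect",
            "failure": "failure light effect"}.get(feedback_type or "",
                                                   "default feedback light effect")
```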
In another embodiment of the application, the first off-vehicle interaction mode is a sentinel mode and the first interaction instruction is a sentinel alarm instruction of the vehicle; the apparatus further comprises:
the second control module is used for controlling the vehicle, in response to the sentinel alarm instruction of the vehicle, to alarm in different alarm modes according to the triggering information of the sentinel mode.
In another embodiment of the present application, the second control module is used for controlling the vehicle lamp of the vehicle to display according to a third light effect and displaying alarm information on the first display interface when the digital key of the vehicle is not detected and the duration for which the distance between an object outside the vehicle and the vehicle is smaller than a first preset distance is longer than a first preset duration;
the second control module is used for controlling a target car lamp of the vehicle to display according to a fourth light effect, controlling the speaker outside the vehicle to play an alarm sound effect, and displaying the alarm information on the first display interface when the digital key of the vehicle is not detected and the duration for which the distance between the object outside the vehicle and the vehicle is smaller than a second preset distance is longer than a second preset duration, the second preset distance being smaller than the first preset distance;
and the second control module is used for controlling the vehicle lamp of the vehicle to display according to a fifth light effect, controlling the speaker outside the vehicle to play the alarm sound effect, and displaying the alarm information on the first display interface when the digital key of the vehicle is not detected and an intrusion event of the object outside the vehicle is detected.
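The escalation logic can be sketched as below; the threshold values and helper names are placeholders, and the three tiers follow the third/fourth/fifth light effects described above.

```python
from dataclasses import dataclass

@dataclass
class SentinelEvent:
    digital_key_detected: bool
    distance_m: float          # distance between the outside object and the vehicle
    dwell_s: float             # how long the object has stayed within that distance
    intrusion: bool = False    # e.g. a door forced open or glass broken

def sentinel_response(ev: SentinelEvent,
                      d1: float = 3.0, t1: float = 10.0,   # first preset distance/duration (assumed values)
                      d2: float = 1.0, t2: float = 5.0):    # second preset distance/duration (assumed values)
    """Return (light_effect, play_alarm_sound, show_alarm_info) for a sentinel event."""
    if ev.digital_key_detected:
        return None, False, False          # owner nearby: no alarm
    if ev.intrusion:
        return "fifth light effect", True, True
    if ev.distance_m < d2 and ev.dwell_s > t2:
        return "fourth light effect", True, True
    if ev.distance_m < d1 and ev.dwell_s > t1:
        return "third light effect", False, True
    return None, False, False
```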
In another embodiment of the present application, the first out-of-vehicle interaction mode is a greeting mode and the second out-of-vehicle interaction mode is an out-of-vehicle voice mode; and the switching module is used for responding to the second interaction instruction and switching the greeting interface into the external voice interface based on the external voice mode.
In summary, when the device provided in the embodiments of the present application performs out-of-vehicle interaction based on an interaction instruction, it displays, according to the out-of-vehicle interaction mode triggered by the interaction instruction, the interaction interface corresponding to that mode, and the interaction interface shows the execution result of the interaction instruction. In the out-of-vehicle interaction scene, switching between different interaction modes is not easy for the user to perceive; displaying the execution results of the interaction instructions in the different interaction modes on the interaction interface not only enriches the interaction modes but also lets the user view the different interaction information, so that the interaction functions can be used better. In addition, in this process the user does not need to switch between interaction modes manually; the interaction modes are switched automatically according to the interaction instruction, so the interaction efficiency is high.
Embodiments of the present application provide a computer readable storage medium having at least one computer program stored therein, which when executed by a processor, enables the vehicle interaction method described above to be implemented.
The methods in this application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described herein are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a user device, a core network device, OAM (Operation Administration and Maintenance) equipment, or another programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium such as a floppy disk, hard disk, or tape; an optical medium such as a digital video disc; or a semiconductor medium such as a solid state disk. The computer readable storage medium may be a volatile or nonvolatile storage medium, or may include both volatile and nonvolatile types of storage media.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
Claims (11)
1. A method of vehicle interaction, the method comprising:
responding to a first interaction instruction, determining that a vehicle accords with a first out-of-vehicle interaction mode, and displaying a first display interface based on the first out-of-vehicle interaction mode, wherein the first display interface is used for displaying an execution result of the first interaction instruction;
and responding to a second interaction instruction, determining that the vehicle accords with a second out-of-vehicle interaction mode, and switching the first display interface into a second display interface based on the second out-of-vehicle interaction mode, wherein the second display interface is used for displaying an execution result of the second interaction instruction.
2. The method of claim 1, wherein the first interactive instruction is an inquiry instruction of an off-board voice, and the first display interface displays first indication information, wherein the first indication information is used for indicating available information of the off-board voice; the method further comprises the steps of:
receiving the second interaction instruction, wherein the second interaction instruction is determined based on the available information of the off-board voice;
when the second out-of-vehicle interaction mode is an off-board voice mode, controlling the vehicle to execute the second interaction instruction according to the off-board voice mode, and displaying an execution result in the off-board voice mode on the second display interface.
3. The method of claim 2, wherein the controlling the vehicle to execute the second interaction instruction according to the off-board voice mode comprises:
when the voice information of the target object is in a listening state, controlling an external loudspeaker of the vehicle to play listening prompt information;
and when the voice information is in a broadcasting state, controlling the speaker outside the vehicle to broadcast the execution result of the second interaction instruction.
4. The method according to claim 1, wherein the method further comprises:
Responding to the first interaction instruction, and displaying a light effect corresponding to an execution result of the first interaction instruction;
and responding to the second interaction instruction, and displaying the light effect corresponding to the execution result of the second interaction instruction.
5. The method of claim 4, wherein the second out-of-vehicle interaction mode is an off-board voice mode, and the displaying the light effect corresponding to the execution result of the second interaction instruction comprises:
when the execution result of the second interaction instruction is that the voice information of the target object is in a listening state, controlling a target car lamp of the car to display a first lamp effect;
when the execution result of the second interaction instruction is that the voice information is in the recognition state, controlling a target car lamp of the vehicle to display a second light effect;
and when the execution result of the second interaction instruction is that the voice information is in a feedback state, controlling the target car lamp to display according to the lamp effect corresponding to the type according to the type of the execution result of the second interaction instruction.
6. The method of any one of claims 1 to 5, wherein the first out-of-vehicle interaction mode is a sentinel mode, and the first interaction instruction is a sentinel alarm instruction of the vehicle; the method further comprises the steps of:
Responding to the sentinel alarm instruction of the vehicle, and controlling the vehicle to alarm in different alarm modes according to the triggering information of the sentinel mode.
7. The method of claim 6, wherein the controlling the vehicle to alarm in different alarm modes according to the triggering information of the sentinel mode comprises:
when the digital key of the vehicle is not detected, and the duration for which the distance between an object outside the vehicle and the vehicle is detected to be smaller than a first preset distance is longer than a first preset duration, controlling a vehicle lamp of the vehicle to display according to a third light effect, and displaying alarm information on the first display interface;
when the digital key of the vehicle is not detected, and the duration for which the distance between the object outside the vehicle and the vehicle is smaller than a second preset distance is longer than a second preset duration, controlling a target car lamp of the vehicle to display according to a fourth light effect, controlling a speaker outside the vehicle to play an alarm sound effect, and displaying the alarm information on the first display interface, wherein the second preset distance is smaller than the first preset distance;
when the digital key of the vehicle is not detected and an intrusion event of an object outside the vehicle is detected, controlling the vehicle lamp of the vehicle to display according to a fifth light effect, controlling the speaker outside the vehicle to play an alarm sound effect, and displaying the alarm information on the first display interface.
8. The method of any one of claims 1 to 5, wherein the first out-of-vehicle interaction mode is a welcome mode and the second out-of-vehicle interaction mode is an off-board voice mode; the switching the first display interface to a second display interface based on the second out-of-vehicle interaction mode comprises:
responding to the second interaction instruction, and switching the welcome interface to the off-board voice interface based on the off-board voice mode.
9. A vehicle interaction device, the device comprising:
the first determining module is used for responding to the first interaction instruction and determining that the vehicle accords with a first out-of-vehicle interaction mode;
the first display module is used for displaying a first display interface based on the first out-of-vehicle interaction mode, and the first display interface is used for displaying an execution result of the first interaction instruction;
the second determining module is used for responding to a second interaction instruction and determining that the vehicle accords with a second out-of-vehicle interaction mode;
the switching module is used for switching the first display interface into a second display interface based on the second out-of-vehicle interaction mode, and the second display interface is used for displaying an execution result of the second interaction instruction.
10. An electronic device comprising a memory and a processor, wherein the memory stores at least one computer program that is loaded and executed by the processor to implement the vehicle interaction method of any of claims 1-8.
11. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, is capable of implementing the vehicle interaction method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211550995.XA CN116160978A (en) | 2022-12-05 | 2022-12-05 | Vehicle interaction method, device, electronic equipment, storage medium and program product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211550995.XA CN116160978A (en) | 2022-12-05 | 2022-12-05 | Vehicle interaction method, device, electronic equipment, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116160978A true CN116160978A (en) | 2023-05-26 |
Family
ID=86410109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211550995.XA (CN116160978A, pending) | Vehicle interaction method, device, electronic equipment, storage medium and program product | 2022-12-05 | 2022-12-05 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116160978A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |