CN115681178A - Cabin interaction method, system, vehicle and computer storage medium - Google Patents

Cabin interaction method, system, vehicle and computer storage medium

Info

Publication number
CN115681178A
Authority
CN
China
Prior art keywords
interaction, instruction, processing module, preset image, holographic fan
Prior art date
2021-07-27
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110848215.9A
Other languages
Chinese (zh)
Inventor
徐平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect and Technology Shanghai Corp
Original Assignee
Pateo Connect and Technology Shanghai Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2021-07-27
Filing date
2021-07-27
Publication date
2023-02-03
Application filed by Pateo Connect and Technology Shanghai Corp
Priority to CN202110848215.9A
Publication of CN115681178A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a cockpit interaction method, a cockpit interaction system, a vehicle and a computer storage medium. The cockpit interaction method comprises the following steps: acquiring a first interaction instruction; determining a first interaction strategy according to the first interaction instruction; and controlling a holographic fan to display a preset image that executes the first interaction strategy. The cabin interaction method, system, vehicle and computer storage medium can generate a travel partner through the holographic fan, which improves the driver's driving experience, makes driving more interesting, and makes the driving process more humanized.

Description

Cabin interaction method and system, vehicle and computer storage medium
Technical Field
The application relates to the technical field of cabin interaction, in particular to a cabin interaction method, a cabin interaction system, a vehicle and a computer storage medium.
Background
At present, intelligent automobiles are used more and more widely. Among them, the intelligent cockpit, as a main representative of the intelligent automobile, is also developing rapidly. The intelligent cockpit is a vehicle-mounted product featuring intelligence and networking; it can interact intelligently with people, roads and vehicles, and it is an important link and key node in the evolution of the human-vehicle relationship from tool to partner. The intelligent cockpit began simply as a riding space that only had to indicate basic driving conditions: a radio for listening to broadcasts and mechanical keys to control the air conditioner; Bluetooth, large touch screens, mobile phone interconnection and the like were added gradually afterwards. However, the development of existing intelligent cockpits focuses on interaction through the vehicle-mounted display screen, which is rather limited and not very engaging.
Disclosure of Invention
The application provides a cabin interaction method, a cabin interaction system, a vehicle and a computer storage medium, which are used to address the problem that drivers easily become bored or fatigued while driving.
In one aspect, the present application provides a cockpit interaction method. Specifically, the cockpit interaction method comprises: acquiring a first interaction instruction; determining a first interaction strategy according to the first interaction instruction; and controlling a holographic fan to display a preset image that executes the first interaction strategy.
Optionally, before the step of controlling the holographic fan to display the preset image that executes the first interaction strategy, the cockpit interaction method comprises: displaying a plurality of candidate images; acquiring a selection instruction; and taking the candidate image corresponding to the selection instruction as the preset image.
Optionally, the step of taking the candidate image corresponding to the selection instruction as the preset image comprises: acquiring user identity information; and storing the preset image in association with the user identity information.
Optionally, the step of determining the first interaction strategy according to the first interaction instruction comprises: acquiring a keyword in the first interaction instruction; and matching the first interaction strategy according to the keyword.
Optionally, the step of controlling the holographic fan to display the preset image that executes the first interaction strategy comprises: determining a target gear of the holographic fan according to the first interaction strategy; and controlling the holographic fan to rotate at the target gear to display the interaction action of the preset image.
Optionally, after the step of controlling the holographic fan to display the preset image that executes the first interaction strategy, the cockpit interaction method comprises: receiving a second interaction instruction; determining a second interaction strategy according to the second interaction instruction; and controlling the holographic fan to display the preset image that executes the second interaction strategy.
Optionally, before the step of acquiring the first interaction instruction, the cockpit interaction method comprises: establishing a communication connection with the holographic fan.
Optionally, the step of controlling the holographic fan to display the preset image that executes the first interaction strategy comprises: acquiring front passenger seat information; and displaying the preset image when no one is seated in the front passenger seat.
Optionally, the step of determining the first interaction strategy according to the first interaction instruction comprises: reading incoming call information from a mobile phone and/or the vehicle head unit; acquiring, according to the incoming call information, the preset image corresponding to the caller's name; and displaying that preset image as the first interaction strategy.
Optionally, the step of acquiring the first interaction instruction comprises: acquiring vehicle driving data; and generating the first interaction instruction according to the vehicle driving data.
In another aspect, the present application further provides a cockpit interaction system. Specifically, the cockpit interaction system comprises an instruction acquisition module, a processing module, and a holographic fan arranged in the cockpit, connected in sequence, wherein: the instruction acquisition module is used to acquire a first interaction instruction and send it to the processing module; and the processing module is used to determine a first interaction strategy according to the first interaction instruction and to control the holographic fan to display a preset image that executes the first interaction strategy.
Optionally, the holographic fan in the cockpit interaction system is further configured to display a plurality of candidate images; the instruction acquisition module is further configured to acquire a selection instruction and send it to the processing module; and the processing module is further configured to take the candidate image corresponding to the selection instruction as the preset image.
Optionally, the cockpit interaction system further comprises a storage module connected to the processing module; the instruction acquisition module is further configured to acquire user identity information and send it to the processing module; and the processing module is configured to control the storage module to store the preset image in association with the user identity information.
Optionally, the processing module in the cockpit interaction system acquires a keyword in the first interaction instruction and matches the first interaction strategy according to the keyword.
Optionally, the processing module in the cockpit interaction system is further configured to determine a target gear of the holographic fan according to the first interaction strategy, so as to control the holographic fan to rotate at the target gear to display the preset image.
Optionally, the instruction acquisition module in the cockpit interaction system is further configured to receive a second interaction instruction and send it to the processing module; and the processing module is configured to determine a second interaction strategy according to the second interaction instruction and to control the holographic fan to display the preset image that executes the second interaction strategy.
Optionally, the cockpit interaction system further comprises a camera and/or a sensor connected to the processing module, and the processing module is further configured to acquire front passenger seat information through the camera and/or the sensor, so as to control the holographic fan to display the preset image when the front passenger seat is unoccupied.
Optionally, the processing module in the cockpit interaction system is further configured to, when a call comes in, read incoming call information from a mobile phone and/or the vehicle head unit and acquire, according to the incoming call information, the preset image corresponding to the caller's name; the processing module then displays that preset image as the first interaction strategy.
Optionally, the processing module in the cockpit interaction system is further configured to acquire vehicle driving data, so as to generate the first interaction instruction according to the vehicle driving data.
Optionally, the vehicle driving data in the cockpit interaction system is selected from at least one of vehicle power-up, start-up, stop, and shut-down.
In another aspect, the present application also provides a vehicle, in particular, the vehicle comprising a vehicle body and a cabin interaction system as described above.
In another aspect, the present application also provides a computer storage medium, in particular, a computer program stored on the computer storage medium, which when executed by a computer, can implement the cockpit interaction method as described above.
As described above, the cabin interaction method, cabin interaction system, vehicle and computer storage medium provided by the application can generate a travel partner through the holographic fan, which improves the driver's driving experience, makes driving more interesting, and makes the driving process more humanized.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a cockpit interaction method according to an embodiment of the present application.
Fig. 2 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 3 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 4 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 5 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 6 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 7 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 8 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Fig. 9 is a block diagram of a cockpit interaction system according to an embodiment of the present application.
Fig. 10 is a block diagram of a cockpit interaction system according to another embodiment of the present application.
The implementation, functional features and advantages of the present application will be further explained with reference to the embodiments and the accompanying drawings. Specific embodiments of the present application have been shown by way of example in the drawings and are described in more detail below. These drawings and the written description are not intended to limit the scope of the inventive concept in any way, but rather to illustrate the inventive concept to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. In addition, elements having the same designation may or may not have the same meaning in different embodiments of the application; the particular meaning is determined by its interpretation in the particular embodiment or by further reference to the context of that embodiment.
It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
In one aspect, the present application provides a cabin interaction method, and fig. 1 is a flowchart of the cabin interaction method according to an embodiment of the present application.
Referring to fig. 1, in an embodiment, a cockpit interaction method includes:
s10: and acquiring a first interactive instruction.
The first command of interaction may be issued by the user or generated directly by the software based on a preset algorithm, by detecting the current cabin environmental conditions.
S20: and determining a first interaction strategy according to the first interaction instruction.
And selecting an interaction strategy by software based on a preset algorithm through different instruction information.
S30: and controlling the holographic fan to display the preset image for executing the first interaction strategy.
The preset image of the driving partner can be 2D picture display, and can also be 3D or even 4D display.
The holographic fan generates a driving partner according to the instruction to interact with the user in the cabin, so that the driving fatigue of the user can be weakened, and the driving interestingness is increased. It should be noted that the interaction between the user and the driving partner may be in various forms, such as touch screen interaction, key interaction, voice interaction, motion sensing interaction (somatosensory technology), visual interaction, multi-channel interaction (i.e., multi-channel fusion interaction such as human face and gesture), emotional interaction, and the like.
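Purely as an illustrative sketch of this S10-S30 pipeline (the Python class, method names and strategy strings below are assumptions for illustration, not interfaces defined by this disclosure):

```python
# Illustrative sketch of the S10-S30 pipeline. The fan interface and the
# strategy mapping are hypothetical, not part of this disclosure.

class HolographicFan:
    """Stand-in for the in-cabin holographic fan (assumed interface)."""
    def display(self, preset_image: str, action: str) -> None:
        print(f"Fan shows '{preset_image}' performing '{action}'")

def determine_strategy(instruction: str) -> str:
    """S20: map an interaction instruction to an interaction strategy (toy rule)."""
    return "singing" if "sing" in instruction.lower() else "smiling"

def handle_interaction(fan: HolographicFan, instruction: str, preset_image: str) -> None:
    strategy = determine_strategy(instruction)   # S20: choose the strategy
    fan.display(preset_image, strategy)          # S30: fan renders the driving partner

# Example: a voice command acquired in S10
handle_interaction(HolographicFan(), "please sing me a song", "cartoon partner")
```

The same three-step shape recurs in the embodiments below: an instruction comes in, a strategy is chosen, and the fan renders the partner.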
Fig. 2 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 2, in an embodiment, before performing S30 (controlling the holographic fan to display the preset image that executes the first interaction strategy), the cockpit interaction method comprises the following steps:
S40: Displaying a plurality of candidate images.
S50: Acquiring a selection instruction.
S60: Taking the candidate image corresponding to the selection instruction as the preset image.
In one embodiment, the user can select the image the holographic fan will use for the partner according to his or her own preference; for example, a human figure, a pet, a monster, a robot, a cartoon character and the like can be selected.
Fig. 3 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 3, in one embodiment, the step S60 of taking the candidate image corresponding to the selection instruction as the preset image comprises the following steps:
S61: Acquiring user identity information.
S62: Storing the preset image in association with the user identity information.
In one embodiment, the preset image selected by a user is stored specifically for that user according to the user's identity information, so that a favorite driving partner can be set individually for each user.
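A minimal sketch of S61-S62, assuming a simple in-memory store (a real system would persist this in the storage module described in the system embodiments):

```python
# Hypothetical per-user storage of the selected preset image (S61-S62).
# A dict stands in for whatever persistent storage the cabin actually uses.
preset_images: dict[str, str] = {}

def store_preset_image(user_id: str, selected_image: str) -> None:
    """S62: save the chosen candidate image for this user identity."""
    preset_images[user_id] = selected_image

def load_preset_image(user_id: str, default: str = "cartoon partner") -> str:
    """Return the user's stored partner image, falling back to a default."""
    return preset_images.get(user_id, default)

store_preset_image("driver_001", "robot partner")
print(load_preset_image("driver_001"))   # -> robot partner
```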
Fig. 4 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 4, in an embodiment, the step S20 of determining the first interaction strategy according to the first interaction instruction comprises:
S21: Acquiring a keyword in the first interaction instruction.
S22: Matching the first interaction strategy according to the keyword.
In one embodiment, there are enough keywords in the first interaction instruction to satisfy the user's daily interaction needs. The system captures keywords from the interaction; these keywords often represent the user's current needs or concerns.
In an embodiment, the first interaction strategy in the cockpit interaction method is selected from at least one of smiling, nodding, blinking, singing, dancing, getting angry, acting coquettish, acting pitiful, crying, being dazed, and being startled.
Through these varied expressions, the driving partner makes its interaction with the user more anthropomorphic, bringing the user a better experience.
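To make S21-S22 concrete, here is a small keyword-matching sketch; the keyword table is invented for illustration and is not the mapping actually used by the method:

```python
# Sketch of S21-S22: extract a keyword from the instruction and match a strategy.
# The table below is an invented example, not the patent's actual mapping.
KEYWORD_TO_STRATEGY = {
    "song": "singing",
    "dance": "dancing",
    "tired": "smiling",
    "hello": "nodding",
}

def match_strategy(instruction: str) -> str:
    """S21-S22: find a keyword in the instruction and return the matched strategy."""
    text = instruction.lower()
    for keyword, strategy in KEYWORD_TO_STRATEGY.items():   # S21: look for a keyword
        if keyword in text:
            return strategy                                  # S22: matched strategy
    return "smiling"                                         # default when nothing matches

print(match_strategy("play me a song"))   # -> singing
```

A production system could replace the table lookup with full speech understanding, but the instruction-to-strategy shape stays the same.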
Fig. 5 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 5, in an embodiment, the step S30 of controlling the holographic fan to display the preset image that executes the first interaction strategy comprises the following steps:
S31: Determining a target gear of the holographic fan according to the first interaction strategy.
S32: Controlling the holographic fan to rotate at the target gear to display the interaction action of the preset image.
In one embodiment, different target gears correspond to different interaction actions of the preset image. For example, when the holographic fan is in the first gear, the interaction actions displayed by the preset image are smiling, greeting and blinking; when the holographic fan is in the second gear, the interaction actions are getting angry, acting coquettish and acting pitiful; when the holographic fan is in the third gear, the interaction actions are crying, being dazed and being startled.
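A minimal sketch of S31-S32, assuming a fan object exposing set_gear and display_action (both hypothetical); the gear-to-action groups follow the example above:

```python
# Sketch of S31-S32: pick the target gear from the strategy, then render the action.
# set_gear / display_action are assumed fan methods, not a documented API.
GEAR_ACTIONS = {
    1: {"smiling", "greeting", "blinking"},
    2: {"getting angry", "acting coquettish", "acting pitiful"},
    3: {"crying", "being dazed", "being startled"},
}

def show_interaction(fan, strategy: str) -> None:
    # S31: find the gear whose action group contains the strategy (default to gear 1)
    gear = next((g for g, actions in GEAR_ACTIONS.items() if strategy in actions), 1)
    fan.set_gear(gear)             # rotate the fan at the target gear
    fan.display_action(strategy)   # S32: display the interaction action of the preset image
```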
In an embodiment, the step S30 of controlling the holographic fan to display the preset image that executes the first interaction strategy further comprises: controlling the holographic fan to turn on its air supply function.
The holographic fan is arranged in the cabin, so it can not only generate the travel partner but also blow hot or cold air when necessary. In another embodiment, the air supply function of the holographic fan may be turned off when blowing is not required; for example, the holographic fan can be run in reverse, or its blades can be set to a variable angle, so that no airflow is produced.
Fig. 6 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 6, in an embodiment, after performing S30 (controlling the holographic fan to display the preset image that executes the first interaction strategy), the cockpit interaction method comprises the following steps:
S33: Receiving a second interaction instruction.
S34: Determining a second interaction strategy according to the second interaction instruction.
S35: Controlling the holographic fan to display the preset image that executes the second interaction strategy.
In an embodiment, after the preset image of the driving partner and the user have interacted for the first time, the user can continue to send instructions for a second interaction; by analogy, the user can interact with the driving partner a third, fourth or even more times, which brings a better experience to the user.
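Sketched as a loop (get_next_instruction and the fan method are assumptions; match_strategy is the keyword sketch shown earlier), the repeated interactions could look like:

```python
# Sketch of repeated interactions (S33-S35): keep handling instructions until the user stops.
# get_next_instruction and fan.display_action are assumed, not defined by this disclosure.
def interaction_loop(fan, get_next_instruction) -> None:
    """Handle follow-up instructions until the user stops interacting."""
    while True:
        instruction = get_next_instruction()     # S33: receive the next interaction instruction
        if instruction is None:                  # user has stopped interacting
            break
        strategy = match_strategy(instruction)   # S34: determine the next interaction strategy
        fan.display_action(strategy)             # S35: show the preset image again
```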
In an embodiment, before performing S10 (acquiring the first interaction instruction), the cockpit interaction method comprises: establishing a communication connection with the holographic fan.
The software based on the preset algorithm establishes a communication connection with the holographic fan, which facilitates sending the subsequent instructions.
Fig. 7 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 7, in an embodiment, the step S30 of controlling the holographic fan to display the preset image that executes the first interaction strategy comprises the following steps:
S36: Acquiring front passenger seat information.
S37: Displaying the preset image when no one is seated in the front passenger seat.
Before projecting, the holographic fan first uses a camera and/or a sensor to analyze and confirm whether there is a person in the area of the vehicle to be projected into. If someone is in the area to be projected into, there is no need to use the partner function, so the holographic fan is not triggered to project even if a first interaction instruction has been received. In general, the area to be projected into is the front passenger area, so as not to affect the driver's line of sight while driving.
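A minimal sketch of this guard, assuming boolean camera and seat-sensor inputs and the fan interface from the earlier sketch:

```python
# Sketch of S36-S37: project the partner only when the front passenger seat is empty.
# camera_sees_person, seat_sensor_occupied and fan.display are assumptions.
def maybe_project(fan, camera_sees_person: bool, seat_sensor_occupied: bool,
                  preset_image: str) -> bool:
    """S36-S37: decide whether to trigger projection of the driving partner."""
    occupied = camera_sees_person or seat_sensor_occupied   # S36: fuse camera/sensor information
    if occupied:
        return False                          # someone is seated: do not trigger projection
    fan.display(preset_image, "smiling")      # S37: show the driving partner
    return True
```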
Fig. 8 is a flowchart of a cockpit interaction method according to another embodiment of the present application.
Referring to fig. 8, in an embodiment, the step S20 of determining the first interaction strategy according to the first interaction instruction comprises:
S23: Reading incoming call information from at least one of a mobile phone and the vehicle head unit.
S24: Acquiring, according to the incoming call information, the preset image corresponding to the caller's name.
S25: Displaying that preset image as the first interaction strategy.
In one embodiment, a caller's name is matched to a preset image; when the holographic fan receives the caller's name carried in the incoming call information, the corresponding preset image is displayed. The preset image may be a photo or a video.
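As an illustrative sketch of S23-S25 (the contact table, file names and fan call are all invented for the example):

```python
# Sketch of S23-S25: map an incoming caller's name to a stored photo or video.
# The contact table and file names are invented; fan.display is an assumed call.
CALLER_IMAGES = {"Alice": "alice_photo.png", "Bob": "bob_clip.mp4"}

def on_incoming_call(fan, caller_name: str) -> None:
    """Show the stored photo/video matching the caller's name, if there is one."""
    image = CALLER_IMAGES.get(caller_name)   # S24: look up the preset image for this caller
    if image is not None:
        fan.display(image, "greeting")       # S25: display it as the first interaction strategy
```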
In an embodiment, the step S10 of acquiring the first interaction instruction comprises:
S11: Acquiring vehicle driving data.
S12: Generating the first interaction instruction according to the vehicle driving data.
In one embodiment, the vehicle driving data in the cockpit interaction method is selected from at least one of vehicle power-up, start-up, stop, and shut-down.
In one embodiment, a first interaction instruction is generated according to the actual vehicle driving data in order to remind the user of driving safety. For example, if the vehicle driving data shows that the vehicle is starting or stopping, the first interaction instruction calls up a specific animation effect; if the vehicle driving data shows that the vehicle is over the speed limit, the first interaction instruction calls up a specific warning effect.
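A small sketch of S11-S12; the event names, the speed check and the returned instruction strings are illustrative assumptions only:

```python
# Sketch of S11-S12: derive a first interaction instruction from vehicle driving data.
# Event names, the speed-limit check and the instruction strings are illustrative.
def instruction_from_driving_data(event: str, speed_kmh: float, speed_limit_kmh: float):
    """Turn vehicle driving data into a first interaction instruction (or None)."""
    if event in ("power_up", "start", "stop", "shut_down"):
        return "play_greeting_animation"   # e.g. a specific animation on start/stop
    if speed_kmh > speed_limit_kmh:
        return "play_overspeed_warning"    # a specific warning effect when over the limit
    return None                            # no interaction instruction needed

print(instruction_from_driving_data("start", 0.0, 60.0))   # -> play_greeting_animation
```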
On the other hand, the present application further provides a cabin interaction system, and fig. 9 is a block diagram of the cabin interaction system according to an embodiment of the present application.
Referring to fig. 9, in an embodiment, the cabin interaction system includes an instruction obtaining module 100, a processing module 200, and a holographic fan 300 disposed in the cabin, which are connected in sequence.
Wherein: the instruction obtaining module 100 is configured to acquire a first interaction instruction and send it to the processing module 200; and the processing module 200 is configured to determine a first interaction strategy according to the first interaction instruction and to control the holographic fan 300 to display a preset image that executes the first interaction strategy.
In an embodiment, a user sends a first interaction instruction to the instruction obtaining module 100; the instruction obtaining module 100 sends the first interaction instruction to the processing module 200; the processing module 200 determines a first interaction strategy according to the first interaction instruction; and the holographic fan 300 displays the preset image according to the first interaction strategy, completing the interaction between the user and the cockpit, which improves the user's driving experience and makes driving more interesting.
It should be noted that the first interaction instruction may be issued by the user, or generated directly by software based on a preset algorithm by detecting the current cabin environment. The interaction between the user and the driving partner may take various forms, such as touch screen interaction, key interaction, voice interaction, motion sensing interaction (somatosensory technology), visual interaction, multi-channel interaction (i.e. fused multi-channel interaction such as face and gesture), emotional interaction, and the like. The system selects an interaction strategy by software based on a preset algorithm according to the different instruction information. The preset image of the driving partner may be displayed as a 2D picture, or even as a 3D or 4D display. The holographic fan 300 generates the driving partner according to the instruction to interact with the user in the cabin, which can relieve the user's driving fatigue and make driving more interesting.
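For orientation, the module chain 100 -> 200 -> 300 can be sketched as follows; the class and method names are assumptions, not the patent's interfaces, and HolographicFan and match_strategy refer to the earlier sketches:

```python
# Sketch of the module chain 100 -> 200 -> 300. Class/method names are assumed;
# HolographicFan and match_strategy come from the earlier illustrative sketches.
class ProcessingModule:                                 # processing module 200
    def __init__(self, fan: HolographicFan):
        self.fan = fan
    def handle(self, instruction: str, preset_image: str) -> None:
        strategy = match_strategy(instruction)          # determine the first interaction strategy
        self.fan.display(preset_image, strategy)        # control the fan to display it

class InstructionModule:                                # instruction acquisition module 100
    def __init__(self, processor: ProcessingModule):
        self.processor = processor
    def submit(self, instruction: str, preset_image: str) -> None:
        self.processor.handle(instruction, preset_image)  # forward to the processing module

# Wiring the chain and sending one instruction
module_100 = InstructionModule(ProcessingModule(HolographicFan()))
module_100.submit("please dance", "robot partner")
```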
In one embodiment, the holographic fan 300 in the cockpit interaction system is also used to present a plurality of candidate images; the instruction obtaining module 100 is further configured to obtain and send a selection instruction to the processing module 200; the processing module 200 is further configured to use the candidate image corresponding to the selection instruction as the preset image.
In one embodiment, the holographic fan 300 stores a plurality of preset images the user likes, so that the processing module 200 can select a preset image according to the interaction instruction; for example, a human figure, a pet, a monster, a robot, a cartoon character and the like can be selected.
Fig. 10 is a block diagram of a cockpit interaction system according to another embodiment of the present application.
Referring to fig. 10, in one embodiment, the cabin interaction system further comprises a storage module 400 connected to the processing module 200.
The instruction obtaining module 100 is further configured to acquire user identity information and send it to the processing module 200; the processing module 200 is configured to control the storage module 400 to store the preset image in association with the user identity information.
In this way, the preset image selected by the user is stored according to the user's identity information, so that a favorite driving partner can be set individually for each user.
In an embodiment, the processing module 200 in the cockpit interaction system obtains a keyword in the first interaction instruction and matches the first interaction policy according to the keyword.
The keywords captured from the interaction often represent the user's current needs or concerns. Matching the interaction strategy to the keywords in the instruction therefore provides the user with a more precise experience.
In an embodiment, the first interaction strategy in the cockpit interaction system is selected from at least one of smiling, nodding, blinking, singing, dancing, warning, getting angry, acting coquettish, acting pitiful, crying, being dazed, and being startled.
Through these varied expressions, the driving partner makes its interaction with the user more anthropomorphic, bringing the user a better experience.
In an embodiment, the processing module 200 in the cockpit interaction system is further configured to determine a target gear of the holographic fan according to the first interaction strategy, so as to control the holographic fan to rotate at the target gear and display the interaction action of the preset image.
Different target gears correspond to different interaction actions of the preset image. For example, when the holographic fan is in the first gear, the interaction actions displayed by the preset image are smiling, greeting and blinking; when the holographic fan is in the second gear, the interaction actions are getting angry, acting coquettish and acting pitiful; when the holographic fan is in the third gear, the interaction actions are crying, being dazed and being startled.
In one embodiment, the processing module 200 in the cockpit interaction system is further configured to control the holographic fan 300 to turn on its air supply function.
The holographic fan 300 is arranged in the cabin, so it can generate the travel partner and also blow hot or cold air when necessary. In another embodiment, the air supply function of the holographic fan 300 may be turned off when blowing is not required; for example, the holographic fan 300 can be run in reverse, or its blades can be set to a variable angle, so that no airflow is produced.
In an embodiment, the instruction obtaining module 100 in the cabin interaction system is further configured to receive and send a second interaction instruction to the processing module 200; the processing module 200 is configured to determine a second interaction policy according to the second interaction instruction, and control the holographic fan 300 to display a preset image for executing the second interaction policy.
In one embodiment, after the user interacts with the preset image of the driving partner for the first time, the user can continue to send a second interaction instruction for the second interaction, and by analogy, the user can interact with the driving partner for the third, fourth or even more times, so that better experience is brought to the user.
In an embodiment, the cockpit interaction system further comprises a camera and/or a sensor connected to the processing module 200, and the processing module 200 is further configured to acquire front passenger seat information through the camera and/or the sensor, so as to control the holographic fan to display the preset image when no one is seated in the front passenger seat.
Before projecting, the holographic fan first needs to confirm, using the camera and/or the sensor, whether there is a person in the area of the vehicle to be projected into. If someone is in that area, there is no need to use the partner function, so the holographic fan is not triggered to project even if a first interaction instruction has been received. In general, the area to be projected into is the front passenger area, so as not to affect the driver's line of sight while driving.
In an embodiment, the processing module 200 in the cockpit interaction system is further configured to, when a call comes in, read incoming call information from at least one of a mobile phone and the vehicle head unit, and to acquire, according to the incoming call information, the preset image corresponding to the caller's name. The processing module 200 then displays that preset image as the first interaction strategy.
The caller's name is matched to a preset image; when the holographic fan receives the caller's name carried in the incoming call information, the corresponding preset image is displayed. The preset image may be a contact photo or a video. In this way, the user has a face-to-face sense of immersion during the call, which effectively improves the user experience.
In an embodiment, the processing module 200 in the cockpit interaction system is further configured to acquire vehicle driving data and generate the first interaction instruction according to the vehicle driving data. The vehicle driving data is selected from at least one of vehicle power-up, start-up, stop, and shut-down.
According to the actual vehicle driving data, the processing module 200 generates a first interaction instruction to remind the user of driving safety. For example, if the vehicle driving data shows that the vehicle is starting or stopping, the first interaction instruction calls up a specific animation effect; if the vehicle driving data shows that the vehicle is over the speed limit, the first interaction instruction calls up a specific warning effect.
In another aspect, the present application further provides a vehicle, in particular, a vehicle comprising a vehicle body and a cabin interaction system as above.
In one embodiment, the cabin interaction of the vehicle proceeds as follows:
(1) A user sends a first interaction instruction to the instruction acquisition module 100 of the cockpit interaction system;
(2) The instruction acquisition module 100 sends the first interaction instruction to the processing module 200;
(3) The processing module 200 extracts a keyword from the first interaction instruction, selects the corresponding first interaction strategy, selects a preset image according to the user's preference, and sends the first interaction strategy to the holographic fan 300;
(4) The holographic fan 300 performs the corresponding actions according to the first interaction strategy to complete the first interaction;
(5) Thereafter, depending on the user's needs, a second, third or even more interactions may be performed, following the same steps as above.
Generating the travel partner through the holographic fan improves the driver's driving experience, makes driving more interesting, and makes the driving process more humanized.
In another aspect, the present application further provides a computer storage medium. Specifically, a computer program is stored on the computer storage medium, and when the computer program is executed by a computer, the cockpit interaction method described above can be implemented. When the computer program implements the cockpit interaction method, the technical solution adopted is the same as in the above embodiments and is not repeated here.
The cabin interaction method, cabin interaction system, vehicle and computer storage medium described above can generate a travel partner through the holographic fan, which improves the driver's driving experience, makes driving more interesting, and makes the driving process more humanized.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (22)

1. A cockpit interaction method, comprising:
acquiring a first interaction instruction;
determining a first interaction strategy according to the first interaction instruction;
and controlling the holographic fan to display a preset image for executing the first interaction strategy.
2. The method of claim 1, wherein, prior to the step of controlling the holographic fan to display the preset image that executes the first interaction strategy, the method comprises:
displaying a plurality of images to be selected;
acquiring a selection instruction;
and taking the image to be selected corresponding to the selection instruction as the preset image.
3. The method of claim 2, wherein the step of taking the image to be selected corresponding to the selection instruction as the preset image comprises:
acquiring user identity information;
and storing the preset image in association with the user identity information.
4. The method of claim 1, wherein the step of determining the first interaction strategy according to the first interaction instruction comprises:
acquiring a keyword in the first interaction instruction;
and matching the first interaction strategy according to the keyword.
5. The method of claim 1, wherein the step of controlling the holographic fan to display the preset image that executes the first interaction strategy comprises:
determining a target gear of the holographic fan according to the first interaction strategy;
and controlling the holographic fan to rotate at the target gear to display the interaction action of the preset image.
6. The method of claim 1, wherein, after the step of controlling the holographic fan to display the preset image that executes the first interaction strategy, the method comprises:
receiving a second interaction instruction;
determining a second interaction strategy according to the second interaction instruction;
and controlling the holographic fan to display the preset image for executing the second interaction strategy.
7. The method of claim 1, wherein, prior to the step of acquiring the first interaction instruction, the method comprises:
and establishing communication connection with the holographic fan.
8. The method of claim 1, wherein the step of controlling the holographic fan to display the preset image that executes the first interaction strategy comprises:
acquiring front passenger seat information;
and displaying the preset image when no one is seated in the front passenger seat.
9. The method of claim 1, wherein the step of determining the first interaction strategy according to the first interaction instruction comprises:
reading incoming call information of a mobile phone and/or a car machine;
acquiring a preset image corresponding to the name of the incoming call according to the incoming call information;
and displaying the preset image as the first interaction strategy.
10. The method of claim 1, wherein the step of acquiring the first interaction instruction comprises:
acquiring vehicle driving data;
and generating the first interactive instruction according to the vehicle driving data.
11. A cockpit interaction system, characterized by comprising an instruction acquisition module, a processing module, and a holographic fan arranged in the cockpit, which are connected in sequence, wherein:
the instruction acquisition module is used for acquiring and sending a first interaction instruction to the processing module;
the processing module is used for determining a first interaction strategy according to the first interaction instruction and controlling the holographic fan to display a preset image for executing the first interaction strategy.
12. The cabin interaction system of claim 11, wherein the holographic fan is further configured to display a plurality of candidate images;
the instruction acquisition module is further configured to acquire a selection instruction and send it to the processing module;
and the processing module is further configured to take the candidate image corresponding to the selection instruction as the preset image.
13. The cabin interaction system of claim 12, further comprising a storage module connected to the processing module, wherein the instruction acquisition module is further configured to acquire user identity information and send it to the processing module; and the processing module is configured to control the storage module to store the preset image in association with the user identity information.
14. The cabin interaction system of claim 11, wherein the processing module obtains a keyword in the first interaction instruction and matches the first interaction strategy based on the keyword.
15. The cabin interaction system of claim 11, wherein the processing module is further configured to determine a target gear of the holographic fan according to the first interaction strategy to control the holographic fan to rotate at the target gear to present the preset image.
16. The cabin interaction system of claim 11, wherein the instruction acquisition module is further configured to receive and send a second interaction instruction to the processing module;
and the processing module is used for determining a second interaction strategy according to the second interaction instruction and controlling the holographic fan to display the preset image for executing the second interaction strategy.
17. The cockpit interaction system of claim 11, further comprising a camera and/or a sensor coupled to said processing module, said processing module being further configured to obtain front passenger seat information so as to control said holographic fan to display said preset image when the front passenger seat is unoccupied.
18. The cabin interaction system of claim 11, wherein the processing module is further configured to, when a call is accessed, read incoming call information of a mobile phone and/or a car machine, and obtain a preset image corresponding to an incoming call name according to the incoming call information; and the processing module is used for displaying the preset image as the first interaction strategy.
19. The cabin interaction system of claim 11, wherein the processing module is further configured to obtain vehicle driving data to generate the first interaction command based on the vehicle driving data.
20. The cabin interaction system of claim 19, wherein the vehicle driving data is selected from at least one of vehicle power-up, start-up, stop, shut-down.
21. A vehicle comprising a body and a cabin interaction system according to any one of claims 11 to 20.
22. A computer storage medium, characterized in that a computer program is stored thereon, which computer program, when executed by a computer, can carry out the cabin interaction method according to any one of claims 1 to 10.
CN202110848215.9A 2021-07-27 2021-07-27 Cabin interaction method, system, vehicle and computer storage medium Pending CN115681178A (en)

Priority Applications (1)

Application Number: CN202110848215.9A; Priority Date: 2021-07-27; Filing Date: 2021-07-27; Title: Cabin interaction method, system, vehicle and computer storage medium

Publications (1)

Publication Number: CN115681178A; Publication Date: 2023-02-03

Family

ID=85058310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110848215.9A Pending CN115681178A (en) 2021-07-27 2021-07-27 Cabin interaction method, system, vehicle and computer storage medium

Country Status (1)

Country Link
CN (1) CN115681178A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination