CN115534850B - Interface display method, electronic device, vehicle and computer program product


Info

Publication number: CN115534850B
Authority: CN (China)
Prior art keywords: vehicle, voice interaction, identifier, display, voice
Legal status: Active
Application number: CN202211498038.7A
Other languages: Chinese (zh)
Other versions: CN115534850A
Inventors: 李青 (Li Qing), 王睿 (Wang Rui), 张茜 (Zhang Qian), 周国歌 (Zhou Guoge)
Current Assignee: Beijing Jidu Technology Co Ltd
Original Assignee: Beijing Jidu Technology Co Ltd
Legal events: application filed by Beijing Jidu Technology Co Ltd; priority claimed from CN202211498038.7A; publication of CN115534850A; application granted; publication of CN115534850B.

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/037: Electric or fluid circuits for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
    • B60R16/0373: Voice control
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/22: Interactive procedures; Man-machine interfaces

Abstract

An embodiment of the present application provides an interface display method, an electronic device, a vehicle and a computer program product. The interface display method is applicable to a vehicle having display and voice interaction functions and comprises the following steps: collecting first driving information of the vehicle; and displaying, on the vehicle and according to the collected first driving information, second driving information and a first identifier of the voice interaction function. The first identifier prompts that the voice interaction function is in a to-be-activated state, and its display state is determined based on the second driving information. With this technical solution, the user can visually perceive that the voice interaction function of the vehicle is waiting to be activated. Because the display state of the identifier is determined based on the second driving information, the variety and visual interest of the identifier display are increased, giving the user a better visual experience.

Description

Interface display method, electronic device, vehicle and computer program product
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to an interface display method, an electronic device, a vehicle, and a computer program product.
Background
With the development of vehicle intelligence, vehicles with a voice interaction function are increasingly popular. After the user wakes up the voice interaction function, the vehicle can operate according to voice instructions issued by the user.
At present, some vehicles display the identifier corresponding to the voice interaction function on a display screen only when the function is woken up or during the voice interaction process, so the user can visually perceive that the function has been started. Before the voice interaction function is woken up, however, the user cannot visually perceive that it is in a waiting-to-be-woken state. In addition, the display style of the identifier corresponding to the voice interaction function in related vehicles is monotonous and not rich enough.
Disclosure of Invention
The present application provides an interface display method, an electronic device, a vehicle and a computer program product that solve, or at least partially solve, the above problems.
In one embodiment of the present application, an interface display method is provided that is suitable for a vehicle having display and voice interaction functions. The method comprises the following steps:
collecting first driving information of a vehicle;
displaying second driving information and a first identifier of the voice interaction function on the vehicle according to the collected first driving information;
wherein the first identifier is used for prompting that the voice interaction function is in a to-be-activated state, and the display state of the first identifier is determined based on the second driving information.
In another embodiment of the present application, an electronic device is provided. The electronic device includes a memory and a processor. The memory is configured to store one or more computer programs; the processor, coupled to the memory, is configured to execute the one or more computer programs stored in the memory to implement the steps of the interface display method provided in the embodiments of the present application.
Still another embodiment of the present application provides a vehicle, including a vehicle body and the electronic device provided in the embodiment of the present application, where the electronic device is disposed on the vehicle body.
In yet another embodiment of the present application, a computer program product is provided. The computer program product comprises computer programs/instructions which, when executed by a processor, implement the steps of the interface display method provided by the embodiments of the present application.
According to the technical solutions provided by the embodiments of the present application, corresponding second driving information and a first identifier of the vehicle's voice interaction function are displayed on the vehicle according to the collected first driving information; the first identifier prompts that the voice interaction function is in a to-be-activated state, and its display state is determined based on the second driving information. The user can therefore visually perceive that the voice interaction function of the vehicle is waiting to be activated, and because the first identifier is displayed in a state derived from the second driving information, the variety and visual interest of its display are increased, bringing the user a better visual experience.
Drawings
FIG. 1 is a flow chart of an interface display method according to an embodiment of the present application;
FIG. 2a is a first example of displaying a first identifier of a voice interaction function according to an embodiment of the present application;
FIG. 2b is an example of displaying a first identifier of a voice interaction function according to another embodiment of the present application;
FIG. 2c is an example of displaying a first identifier of a voice interaction function according to yet another embodiment of the present application;
FIG. 3a is an example of a first identifier of a voice interaction function displayed in association with a scene element according to an embodiment of the present application;
FIG. 3b is an example of a first identifier of a voice interaction function displayed in association with multimedia information according to an embodiment of the present application;
FIG. 3c is an example of a first identifier of a voice interaction function displayed in association with a window element corresponding to push information according to an embodiment of the present application;
FIG. 3d is a schematic diagram of displaying a first identifier of a voice interaction function according to the driving state of the vehicle according to an embodiment of the present application;
FIG. 4 is a schematic illustration of the presentation hierarchy corresponding to content items displayed on a vehicle according to an embodiment of the present application;
FIGS. 5a-5d are schematic illustrations of displaying a second identifier of a voice interaction function according to an embodiment of the present application;
FIGS. 6a-6e are schematic diagrams illustrating display of a second identifier and a text box of a voice interaction function according to an embodiment of the present application;
FIG. 7a is a schematic view illustrating display area division of a vehicle display screen according to an embodiment of the present application;
FIG. 7b is a schematic view illustrating display area division of a vehicle display screen according to another embodiment of the present application;
FIGS. 8a-8c are schematic diagrams of the second identifier when the voice interaction function supports a one-person speaking mode according to an embodiment of the present application;
FIGS. 9a-9f are schematic diagrams illustrating the voice broadcasting principle when the voice interaction function supports a two-person speaking mode according to an embodiment of the present application;
FIG. 10 is a schematic diagram of exiting the voice interaction function according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an interface display device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a computer program product according to an embodiment of the present application.
Detailed Description
The intelligent development of vehicles has driven human-computer interaction functions on vehicles to become more diverse and complex. Common human-computer interaction functions include digital touch screens, voice interaction (Voice User Interface, VUI) and gesture interaction, among which voice interaction is especially popular. After the user issues a wake-up utterance and wakes up the voice interaction function on the vehicle, the user continues to issue voice instructions, and the vehicle can act on them, for example playing music, starting navigation, searching for information, making/receiving calls, opening/closing the sunroof, or turning the air conditioner on/off. The voice interaction function frees the user's hands from controlling the vehicle, letting the user (such as the driver) stay focused on driving and improving driving safety.
A vehicle displays the identifier corresponding to the voice interaction function only when the function is woken up by the user, or during the subsequent voice interaction with the user, so that the user can visually perceive that the function has been started. Before the voice interaction function wakes up, however, the identifier is not displayed on the vehicle, and the user cannot perceive that the voice interaction function is in a waiting-to-be-woken state (i.e., the to-be-activated state described in the present application). In some implementations, the display style of the identifier is also unchanged throughout the voice interaction process, so the user cannot visually tell which stage the interaction is in, for example whether speech is still being picked up, or pickup has finished and the collected user speech is being recognized. The display styles of the identifiers corresponding to the voice interaction functions of such vehicles are therefore monotonous and not rich enough.
To address these problems, the embodiments of the present application provide an interface display solution that lets the user visually perceive that the voice interaction function of the vehicle is in a waiting-to-be-woken state, and makes the display of the identifier corresponding to the voice interaction function rich and varied.
For a better understanding, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and drawings include a plurality of operations occurring in a particular order; those operations may also be performed out of that order or concurrently. Sequence numbers such as 101 and 102 merely distinguish the operations and do not by themselves represent any order of execution. The flows may also include more or fewer operations, which may be performed sequentially or in parallel. It should be noted that the terms "first" and "second" herein distinguish different messages, devices, modules, etc.; they do not imply a sequence, nor do they require that "first" and "second" be of different types.
FIG. 1 shows a flow chart of an interface display method according to an embodiment of the present application. The method is applicable to a vehicle having display and voice interaction functions; the vehicle may be, but is not limited to, a battery electric vehicle, a fuel vehicle or a hybrid electric vehicle. The execution body of the interface display method may be a component/device with logic computing capability on the vehicle, for example a vehicle control unit (VCU) or an electronic control unit (ECU). As shown in FIG. 1, the interface display method includes the following steps:
101. Collecting first driving information of a vehicle;
102. displaying second driving information and a first identifier of the voice interaction function on the vehicle according to the collected first driving information;
wherein the first identifier is used for prompting that the voice interaction function is in a to-be-activated state, and the display state of the first identifier is determined based on the second driving information.
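For concreteness, the following sketch outlines how steps 101 and 102 could fit together. This is only an illustrative Python sketch; all names (FirstDrivingInfo, build_second_driving_info, derive_identifier_state) are hypothetical and not prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class FirstDrivingInfo:
    """Raw signals collected in step 101 (hypothetical structure)."""
    gear: str            # "P" | "D" | "R"
    speed_kmh: float
    driving_mode: str    # "manual" | "assisted" | "autonomous"

def build_second_driving_info(info: FirstDrivingInfo) -> dict:
    # Placeholder: would produce the simulated image, navigation map, etc.
    return {"kind": "simulated_image" if info.gear != "P" else "initial_screen"}

def derive_identifier_state(second_info: dict) -> str:
    # The identifier's display state depends on what is shown (the second
    # driving information), not directly on the raw signals.
    return "dynamic_follow" if second_info["kind"] == "simulated_image" else "static"

def interface_display_step(info: FirstDrivingInfo) -> tuple:
    """Step 102: show second driving info plus the to-be-activated identifier."""
    second_info = build_second_driving_info(info)
    return second_info, derive_identifier_state(second_info)

# Example: vehicle driving forward -> identifier follows the car model.
print(interface_display_step(FirstDrivingInfo("D", 35.0, "manual")))
```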
In one possible implementation, a display screen is mounted at the center console of the vehicle. The display screen may extend from a position directly in front of (or near) the driver's seat to a position directly in front of (or near) the front passenger seat, and the second driving information and the first identifier may be displayed on it. Alternatively, the vehicle may be provided with a projection device that projects the second driving information and the first identifier onto some medium in the vehicle. Or a first display screen is arranged at the center console and a second display screen for rear passengers is also provided; the first display screen shows the second driving information and the first identifier, and the first identifier on the second display screen follows the one on the first display screen. Or the vehicle is provided with a head-up display device that shows the second driving information and the first identifier on the front windshield. Or the vehicle is provided with a wearable device (such as glasses) that displays the second driving information and the first identifier. The embodiments of the present application are not limited in this respect.
After the vehicle is powered on, a control device of the vehicle (such as the VCU) can acquire its driving information in real time through on-board sensors and interaction devices. The driving information may include, but is not limited to: gear, vehicle exterior environment parameters (such as the distance to the vehicle ahead and road information), driving state (such as forward driving or reversing), the current driving mode (manual driving, assisted driving or automatic driving, etc.), driving speed, accelerator opening, brake pedal state, interaction information of the driver and passengers through interaction devices (such as setting the air-conditioning temperature or seat temperature, playing music, invoking multimedia applications), in-vehicle images captured by the on-board camera, and so on. Based on the driving information, the control device can make the vehicle display corresponding content, so that the driver and passengers intuitively understand the actual situation of the vehicle, improving driving safety. For convenience of description, this embodiment calls the actual driving information obtained as above the first driving information, and the content displayed on the vehicle based on the first driving information the second driving information.
In specific implementations, the first driving information includes, but is not limited to, driving state data, driving environment data, user-vehicle interaction data and in-vehicle data. The driving state data includes, but is not limited to, power-on information, driving speed information, driving pose information (such as the position and attitude of the vehicle), pedal information (such as accelerator pedal and brake pedal information) and gear information (which gear the vehicle is currently in, such as the forward gear (D), reverse gear (R) or parking gear (P)). The driving environment data includes, but is not limited to, environment information around the vehicle collected by image acquisition devices (such as cameras) mounted outside the vehicle, lidar, distance sensors and the like; the environment information may include at least one of road information, pedestrians, vehicles, plants, animals, etc. The user-vehicle interaction data includes, but is not limited to, data actively input by the user, such as a destination entered by touch or voice, queries, and instructions (such as instructions for invoking audio/video applications, heating the seats, adjusting air-conditioning temperature, wind speed or outlet angle, or tuning to a broadcast band). The in-vehicle data may include, but is not limited to, in-vehicle images captured by the in-cabin camera, the driver's facial expression, voiceprint information collected by the in-vehicle microphone, and so on. It should be explained that information concerning user privacy can be collected only with the user's authorization; in this embodiment, the devices that collect data concerning the driver inside the vehicle all operate with the driver's authorization.
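Refining the FirstDrivingInfo sketch above, the four categories just listed could be grouped structurally as follows. Again a sketch only; all field names are illustrative, not from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DrivingStateData:
    powered_on: bool = False
    speed_kmh: float = 0.0
    gear: str = "P"                    # "D" | "R" | "P"
    accelerator_opening: float = 0.0   # 0.0 .. 1.0
    brake_pressed: bool = False

@dataclass
class DrivingEnvironmentData:
    road_info: Optional[str] = None
    obstacles: list = field(default_factory=list)   # pedestrians, vehicles, ...

@dataclass
class InteractionData:
    destination: Optional[str] = None
    commands: list = field(default_factory=list)    # e.g. "heat seat", "play music"

@dataclass
class InCabinData:
    # Collected only with the driver's authorization, as the text stresses.
    driver_expression: Optional[str] = None
    voiceprint: Optional[bytes] = None

@dataclass
class FirstDrivingInfoFull:
    state: DrivingStateData
    environment: DrivingEnvironmentData
    interaction: InteractionData
    cabin: InCabinData
```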
Hereinafter, the technical solution provided by this embodiment is described by taking, as an example, the display of information items such as the second driving information and the identifier of the voice interaction function on the vehicle display screen.
The second driving information displayed on the vehicle display screen may include, but is not limited to, a simulated image, navigation data (e.g., a navigation map), and the like. The simulated image reflects the running environment and/or running state of the vehicle and matches its driving mode. For example, if the vehicle is in a forward driving mode (i.e., driving forward), the simulated image displayed in real time is a forward driving image (such as simulated image 1 in FIG. 2b or simulated image 1' in FIG. 3a); if the vehicle is in a parking mode, the simulated image displayed in real time is a parking image; and so on. Specifically, as shown in FIG. 2b, a frame of the simulated image displayed on the vehicle display screen A includes, but is not limited to, at least one of: a vehicle model 12, and a background 11 reflecting the driving environment of the vehicle. The vehicle model 12 is a scaled-down model of the vehicle's actual shape, which can be pre-stored in a storage medium of the vehicle and invoked for display when needed; its display state is determined by the driving state data in the first driving information. The background reflecting the driving environment is generated and displayed based on the driving environment data in the first driving information. The navigation data may be, but is not limited to, a navigation map (such as navigation map 4 in FIG. 3a) displayed according to user input such as a destination address or a query for charging piles, parking spaces and the like.
The display principle of the simulated image is described below by taking the first driving information of the vehicle at a time t as an example, with one frame of the simulated image corresponding to time t displayed on the vehicle display screen.
Assume the vehicle is in a manual forward driving mode (i.e., the driver is manually driving the vehicle forward); the driver is generally concerned with the vehicle's surroundings. Accordingly, the driving environment data in the first driving information may be captured by cameras (such as wide-angle fisheye cameras) arranged around the vehicle. After receiving the surrounding images at time t from the cameras in all directions, an image processing device (such as a processor) on the vehicle can apply distortion correction, edge fusion and other processing to the multiple environment images to obtain the vehicle environment image. It will be appreciated that the resulting vehicle environment image (i.e., the background reflecting the driving environment) represents the surroundings collected by all the cameras. Then the vehicle state can be determined from the driving state data in the first driving information at time t, and the vehicle model updated accordingly; for example, the state of each wheel may be determined from the driving state data, and the display state of the wheels in the model updated (e.g., their steering angle changed). The running direction of the vehicle can also be determined from the driving state data; for example, it can be derived from the inclination of the vehicle relative to the horizontal plane at time t as fed back by a three-axis sensor. Finally, the background reflecting the driving environment and the vehicle model are rendered according to the running direction, producing a three-dimensional panoramic image containing the vehicle model oriented in the running direction. The rendered panorama is then shown on the vehicle display screen, so that the simulated image corresponding to time t (such as simulated image 1 in FIG. 2b) is displayed.
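The per-frame pipeline just described (correct, fuse, update model, orient, render) can be sketched as below. All function names are illustrative stand-ins for real image-processing steps, not an actual API.

```python
import math

def undistort(img):            # stub: distortion correction for a fisheye image
    return img

def blend_edges(images):       # stub: edge fusion into one surround-view background
    return {"stitched": images}

def heading_from_tilt(tilt_xyz):
    # Derive the travel direction from the 3-axis sensor's tilt feedback.
    return math.atan2(tilt_xyz[1], tilt_xyz[0])

def render_simulated_frame(camera_images, steering_angle, tilt_xyz, car_model):
    """One frame of the simulated image at time t (a sketch, not a renderer)."""
    corrected = [undistort(img) for img in camera_images]
    background = blend_edges(corrected)            # background of driving environment
    car_model["wheel_angle"] = steering_angle      # update wheel display state
    heading = heading_from_tilt(tilt_xyz)          # running direction of the vehicle
    # A real implementation would render a 3-D panorama here; we just bundle it.
    return {"background": background, "car_model": car_model, "heading": heading}

frame = render_simulated_frame(["front", "rear"], 0.1, (1.0, 0.2, 0.0), {"model": "ego"})
```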
It should be noted that the rendered three-dimensional panorama containing the vehicle model may further mark a distance identifier for the body safety distance (such as distance identifier 5 in FIG. 2b) and the estimated distance between the vehicle model and each obstacle in the panorama, which is not limited here. For the principle of displaying corresponding simulated images when the vehicle is in other driving modes such as parking, refer to the example above.
The "vehicle model" referred to above and below in this embodiment means the vehicle model image; the different expressions merely reflect different scenes.
Consider that with existing vehicles the user cannot visually perceive that the voice interaction function is in a waiting-to-be-woken state (or to-be-activated state) before it is woken up. In view of this, in this embodiment, after the vehicle is started, besides displaying the corresponding second driving information on the vehicle display screen according to the first driving information acquired in real time, a first identifier corresponding to the voice interaction function is also displayed. The first identifier serves as a visual prompt that the voice interaction function is in a to-be-activated state; in other words, it can be understood as the to-be-activated image of the voice interaction function. In specific implementations, the displayed first identifier may be, but is not limited to, one or a combination of graphic elements, text elements and line elements. Optionally, in this embodiment the first identifier (i.e., the identifier referred to in the other embodiments below) is a graphic element, which may be two-dimensional or three-dimensional in form, for example a polygonal icon, a virtual character, a virtual animal or a virtual plant. FIGS. 2b to 3d show examples in which the first identifier 21 corresponding to the voice interaction function is a square icon.
When the first identifier corresponding to the voice interaction function is displayed, its display state is determined according to the displayed second driving information.
In one possible implementation, the "determining the display state of the first identifier based on the second driving information" may include:
103. target information in the second driving information is determined.
104. And displaying the first identification in association with the target information.
In some cases, the vehicle displays a start-up animation upon power-on and then stays on an initial screen. When neither the driver nor a passenger is operating, the initial screen is the second driving information displayed on the vehicle. In that case, a page element of the initial screen that the driver pays attention to may be used as the target information, or a control to be pushed to the driver or passenger in the initial screen may be used as the target information; and so on.
Of course, the initial screen may contain no target information. Correspondingly, the method provided by the embodiment of the present application may further include the following:
if the second driving information contains no target information, the first identifier may be displayed on the vehicle at a fixed position in a semi-transparent manner.
For example, in the example shown in FIG. 2a, the first identifier is displayed at a central location on the vehicle display; the first identifier may be a semi-transparent, feather-shaped prompt image.
After the vehicle is powered on and switched from P gear to D or R gear, the first driving information becomes richer, and so does the second driving information displayed based on it. The second driving information may then include, but is not limited to, one or more of: driving state information of the vehicle, parking state information, application windows, floating windows of function controls, multimedia information, function controls, and the like. One item of the second driving information may be determined as the target information, and the first identifier displayed in association with it.
In a specific embodiment, "determining the target information in the second driving information" may include the following steps:
S11, acquiring the display level corresponding to each information item in the second driving information, where content items at an upper display level occlude content items at a lower display level;
S12, determining the target information based on the content items at the topmost display level.
FIG. 4 shows an example of presentation levels. As shown in FIG. 4, from bottom to top the levels are: the initial screen (also called the start-up screen or vehicle power-on screen), windows corresponding to push information, application windows, the information stream, the Dock (a functional area of the graphical user interface for starting and switching running applications) and top bar, the activated character and text box corresponding to the voice function, the QC (Quality Control) panel (a control panel, for example for adjusting various attributes of the vehicle display screen), and so on.
Correspondingly, when step S11 is implemented, all information items in the second driving information displayed in the current vehicle display interface, together with their display levels, may be obtained. In step S12, the content item at the topmost presentation level is generally taken to be the information item of interest to the driver or passenger, so the target information may be determined based on it. For example, after the vehicle is powered on and the driver switches from P gear to D gear, a partial area of the initial screen may display the vehicle model image, and the initial screen is the topmost layer; the vehicle model image may then be the target object. After an application window located above the initial screen (as in FIG. 4) is called up by the driver's driving behavior or a passenger's operation, that application window may be used as the target information; and so on. The embodiments of the present application are not limited in this respect. The case of no target information and the cases corresponding to various kinds of target information are exemplified below.
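A minimal sketch of S11/S12, assuming layers are ranked exactly as in FIG. 4 (layer names paraphrased; the selection rule "topmost populated level wins" is the one stated above):

```python
from dataclasses import dataclass

# Bottom-to-top presentation levels, per FIG. 4.
LAYERS = ["initial_screen", "push_window", "app_window", "info_stream",
          "dock_and_top_bar", "voice_character_textbox", "qc_panel"]

@dataclass
class ContentItem:
    name: str
    layer: str

def determine_target_info(items: list) -> ContentItem:
    """S11/S12: the item on the topmost populated display level becomes the target."""
    return max(items, key=lambda it: LAYERS.index(it.layer))

items = [ContentItem("car model image", "initial_screen"),
         ContentItem("music app", "app_window")]
print(determine_target_info(items).name)   # -> "music app"
```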
1. No target information
The second driving information includes the display content corresponding to the vehicle being started or parked. The started state means that the vehicle has just been powered on and started successfully but has not yet been driven; the parked state may be a state in which the vehicle speed is zero, for example while waiting at a red light. Accordingly, it may be determined that the second driving information has no target information. In this case, "displaying the first identifier of the voice interaction function" in 102 above may specifically include:
1021. displaying the first identifier in a first display style, where the first display style is used for presenting the started or parked effect of the vehicle.
The details of 1021 are described below through a few examples.
Example 11: the second driving information includes start-up interface content (the initial screen) while the vehicle is in the started state.
For example, the first driving information includes the power-on information of the vehicle, the gear is P, the accelerator pedal opening is zero, the engine is not started, and so on; the second driving information displayed on the vehicle display screen according to this first driving information may be the power-on start-up interface content. That is, the display state of the second driving information reflects that the vehicle has just been started but has not yet been driven. The display state of the first identifier may then be determined to be a first display state, which includes: the display style of the first identifier is a first style, displayed statically, and displayed at a target position. For example, FIG. 2a shows a feather-like first identifier 21 at a set target position on the vehicle display screen A.
Example 12: the second driving information includes interface content while the vehicle is in the parked state.
For example, the first driving information includes gear information changing from D to P, a vehicle speed of zero, and so on; the second driving information displayed according to it reflects a screen in which the vehicle changes from the running state to the parked state. In this case the display state of the first identifier is determined to be a second display state, which includes: the display style of the first identifier is a second style, displayed statically, and displayed at the target position. For a specific description of the second style, see the related passages below.
2. Target information determined
2.1 The target information is the driving state information of the vehicle
Step 104, "displaying the first identifier in association with the target information", may specifically be: displaying the first identifier dynamically changing based on the driving state information of the vehicle.
The driving state information may be meter information (such as vehicle speed and driving direction) or an image simulating vehicle driving (the simulated image mentioned above). Specifically, "displaying the first identifier dynamically changing based on the driving state information of the vehicle" may include the following steps:
S21, determining a moving direction and a moving speed according to the driving state information of the vehicle;
S22, displaying the first identifier dynamically changing along the moving direction based on the moving speed.
For example, in the example shown in FIG. 2b, the vehicle is currently in an autonomous mode and the simulated image presents the running view from a top-down perspective. The moving direction, i.e., the direction of the vehicle model 12 in the coordinate system of the simulated image, is determined, and the moving speed of the model 12 reflects the actual vehicle speed. Accordingly, the first identifier may be displayed dynamically changing along the moving direction, with the changing speed tied to the moving speed: for example, the changing speed of the first identifier may equal the moving speed, or the ratio of the two may be fixed. FIG. 2b shows the display state of the dynamically changing first identifier for one frame: a plurality of first identifiers 21 are arranged in sequence along the moving direction, and may differ in size and/or transparency, presenting the visual effect of the first identifier following the running vehicle model. In practice, the sizes of the plurality of first identifiers 21 become gradually smaller or larger along the moving direction, and/or their transparencies become gradually smaller or larger along the moving direction.
The example shown in FIG. 2c is similar, except that the view of the simulated image is an over-the-shoulder image of the driver. As shown in FIG. 2c, the moving direction of the dynamically displayed first identifier 21 may be parallel or substantially parallel to the moving direction shown in the simulated image, as long as the dynamically displayed first identifier 21 visually reflects the driving direction and speed of the vehicle.
Further, at least some of the plurality of first identifiers 21 fluctuate within their respective ranges, for example presenting the visual effect of swimming along the moving direction; a sketch of such a layout follows.
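The trail layout described above (sequential identifiers, monotonic size/transparency, slight per-identifier "swimming") could be computed as follows. A sketch only; spacing, decay rates and the wobble amplitude are illustrative parameters, not values from the patent.

```python
import math

def trail_of_identifiers(model_xy, heading_rad, n=5, spacing=24.0, t=0.0):
    """Positions, sizes and alphas for n first identifiers trailing the car model.
    Size and transparency change monotonically along the moving direction; a small
    sinusoidal offset makes each identifier fluctuate within its own range."""
    dx, dy = math.cos(heading_rad), math.sin(heading_rad)
    marks = []
    for i in range(1, n + 1):
        wobble = 2.0 * math.sin(t * 2.0 + i)        # slight per-identifier swim
        marks.append({
            "x": model_xy[0] - dx * spacing * i - dy * wobble,
            "y": model_xy[1] - dy * spacing * i + dx * wobble,
            "scale": max(0.2, 1.0 - 0.15 * i),      # gradually smaller along the trail
            "alpha": max(0.1, 1.0 - 0.18 * i),      # gradually more transparent
        })
    return marks

# Advancing t proportionally to vehicle speed ties the animation rate to movement,
# matching "the changing speed is associated with the moving speed" above.
print(trail_of_identifiers((400.0, 300.0), math.pi / 2)[0])
```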
In a more specific embodiment, the driving state information of the vehicle includes an image simulating vehicle driving. Accordingly, step S22, "displaying the first identifier dynamically changing along the moving direction based on the moving speed", may include:
S221, determining the display position of the first identifier based on the image;
S222, displaying, at that display position, the first identifier dynamically changing along the moving direction according to the moving speed.
Further, the image includes a background reflecting the driving environment of the vehicle and a model simulating the vehicle. Accordingly, step S221, "determining the display position of the first identifier based on the image", may include:
S2211, if no target environment element exists in the background of the image, determining the display position of the first identifier based on the vehicle model.
For example, based on the display position of the vehicle model, the display position of the first identifier is determined according to a preset orientation rule. The preset orientation rule may define the distance between the two display positions and the angle between their connecting line and a reference axis (such as the X or Y axis of the display interface coordinate system).
S2212, if a target environment element exists in the background of the image, determining the display position of the first identifier based on the target environment element;
where the target environment element is a lane, a road sign, a free parking space, a free charging spot or a navigation destination reflecting the driving environment of the vehicle.
One implementation of "determining the display position of the first identifier based on the target environment element" in S2212 is to take a position near the target environment element, such as a point on its contour, as the display position of the first identifier.
Another implementation determines the position of the first identifier based on the distance between the target environment element and the vehicle model. Specifically, "determining the display position of the first identifier based on the target environment element" may include the following steps:
determining the distance between the vehicle model and the target environment element according to the navigation data; and determining the display position of the first identifier based on that distance.
For example, when the distance is smaller than a preset distance, the display position of the first identifier is near the target environment element;
and when the distance is greater than or equal to the preset distance, the display position of the first identifier is far away from the target environment element.
In a specific implementation, the distance may be a vector, i.e., containing direction information in addition to the distance value. Suppose the distance takes a positive value while the vehicle model drives toward the target environment element and a negative value while it drives away. Correspondingly, the preset distance can be zero: if the distance is positive (greater than zero), the first identifier is displayed near the target environment element; if it is negative (smaller than zero), the first identifier is displayed far away from the target environment element. For example, a position in the display interface farther from the target environment element than a set threshold may be considered far away from it; the set threshold may be determined according to the specific dimensions of the display interface and is not limited here.
As shown in FIG. 3a, the first identifier is displayed near the target environment element corresponding to a parking space (environment marking element 32).
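The signed-distance placement rule of S2212 could look like this. A sketch under the assumptions above (preset distance of zero, positive while approaching); the concrete coordinates and the fallback position are illustrative.

```python
def identifier_position(element_xy, fallback_xy, signed_distance_m,
                        preset_distance_m=0.0):
    """Place the identifier near the target environment element only while the
    car model is approaching it (positive signed distance)."""
    if signed_distance_m > preset_distance_m:
        # e.g. a point on or just outside the element's contour
        return (element_xy[0] + 10.0, element_xy[1] - 10.0)
    return fallback_xy   # any position beyond the set on-screen threshold

print(identifier_position((120.0, 80.0), (400.0, 300.0), 35.0))   # approaching
print(identifier_position((120.0, 80.0), (400.0, 300.0), -5.0))   # moving away
```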
2.2 The target information is the parking state information of the vehicle
Step 104, "displaying the first identifier in association with the target information", may specifically be: displaying, on the vehicle, the first identifier guiding the vehicle to park.
For example, a reversing image and a parking guide line are displayed on the vehicle. A first identifier guiding the driver to turn the steering wheel can then be displayed in the reversing image; for example, the first identifier is dynamically displayed as various guidance prompts based on the current vehicle pose, such as a steering-wheel-turn arrow or a straight-reverse arrow.
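One conceivable way to choose among such guidance prompts is from the gap between the current and target steering angles. This is purely an illustrative sketch; the tolerance and glyph names are assumptions, not taken from the patent.

```python
def parking_prompt(current_steering_deg, target_steering_deg, tolerance_deg=5.0):
    """Pick the guidance prompt shown as the first identifier while reversing."""
    error = target_steering_deg - current_steering_deg
    if abs(error) <= tolerance_deg:
        return "straight-reverse arrow"
    return "turn-wheel-right arrow" if error > 0 else "turn-wheel-left arrow"

print(parking_prompt(current_steering_deg=0.0, target_steering_deg=90.0))
```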
2.3 The target information is window information
The window information may be an application window, a popup corresponding to push information, card information and the like, which this embodiment does not limit.
Accordingly, step 104, "displaying the first identifier in association with the target information", may specifically be: displaying the first identifier with an effect that highlights the window information.
For example, as shown in FIG. 3c, the first identifier 21 is displayed at one corner of the window's outline, and a halo visually matching the first identifier may also be displayed outside the window outline to highlight the window information.
2.4 The target information is multimedia information
Accordingly, step 104, "displaying the first identifier in association with the target information", may specifically be: displaying the first identifier interacting with the multimedia information.
For example, as shown in FIG. 3b, the current interface shows, in an audio playback application window, lyrics that scroll dynamically with the audio. The first identifier may be displayed following the scrolling lyrics, i.e., highlighting the lyric line corresponding to the currently playing audio.
Further, according to the driving mode of the vehicle (such as manual driving, automatic driving or assisted driving), elements displayed in association with the first identifier can be added, so that the corresponding driving mode is represented through the added elements. On this basis, the method provided in the embodiment of the present application may further include the following steps, sketched in code after this list:
105. if the vehicle is in the manual driving mode, adding a first element displayed in association with the first identifier, the first element representing that the vehicle is in a manual driving state;
106. if the vehicle is in the assisted driving mode, adding a second element displayed in association with the first identifier, the second element representing that the vehicle is in an assisted driving state;
107. if the vehicle is in the automatic driving mode, adding a third element displayed in association with the first identifier, the third element representing that the vehicle is in an automatic driving state.
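A minimal sketch of steps 105-107, attaching the mode element to the largest identifier in the trail (as the FIG. 2b/2c examples further below describe). The element descriptions echo the icons mentioned in this document; the mapping structure itself is illustrative.

```python
MODE_ELEMENT = {
    "manual":     "hands-on-steering-wheel icon",   # first element, step 105
    "assisted":   "'assisted driving' text label",  # second element, step 106
    "autonomous": "steering-wheel-only icon",       # third element, step 107
}

def attach_mode_element(identifiers: list, driving_mode: str) -> list:
    """Badge the largest identifier with the element for the current mode."""
    largest = max(identifiers, key=lambda m: m["scale"])
    largest["badge"] = MODE_ELEMENT[driving_mode]
    return identifiers

marks = [{"scale": 1.0}, {"scale": 0.8}]
print(attach_mode_element(marks, "manual")[0])
```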
Embodiments of the present application are described below in conjunction with specific scenarios.
Scene 1: the first driving information includes the vehicle being in D or R gear, a non-zero accelerator pedal opening, a non-zero vehicle speed, and so on. The second driving information displayed according to it includes a simulated image whose vehicle model is in a driving state (forward, reversing or parking); that is, the display state of the second driving information reflects that the vehicle is driving. In this case, the display state of the first identifier may be determined to be the second display state, which includes: the display style of the first identifier is the second style (a plurality of first identifiers arranged and displayed in sequence), displayed dynamically, with the display position related to the position of the vehicle model in the simulated image. The second style may refer to, but is not limited to, a style in which a plurality of first identifiers are arranged and displayed in sequence, and the dynamic display refers to, but is not limited to, an effect of following the vehicle model in the simulated image. When the first identifier is displayed in the second display state, the visual effect of the first identifier following the running vehicle model can be presented. More specifically, if the vehicle is determined to be parking according to the first driving information, the first identifier can be displayed in association with the displayed parking route while keeping a certain spatial relation with the vehicle model, so that it has the effect of guiding the vehicle to park; if the vehicle is determined to be driving forward or reversing, the first identifier 21 may be displayed as shown in FIG. 2b or FIG. 2c, presenting the effect of following the vehicle model.
FIG. 2b shows an example of the display state of the first identifier 21 in the second style for the case where the second driving information reflects normal forward driving: along the driving direction of vehicle model 12 in simulated image 1, a plurality of first identifiers 21 are displayed in sequence with gradually increasing size and gradually increasing transparency, presenting the visual effect of the first identifier following the vehicle model; the gradual growth and increasing transparency along the driving direction reflect that the vehicle is driving forward.
When the second driving information includes vehicle parking state information, the display state of the first identifier may also refer to the one shown in FIG. 2b.
For another example, when the second driving information includes vehicle reversing information, the first identifier 21 is displayed in the second style as follows: along the reversing direction of the vehicle model in the simulated image (not shown), a plurality of first identifiers 21 are displayed in sequence with gradually decreasing size and gradually changing transparency, presenting the visual effect of the first identifier following the vehicle model while reversing; the shrinking size and changing transparency along the driving direction reflect that the vehicle is reversing.
In the examples above, a plurality of first identifiers arranged in sequence are presented on the vehicle display screen; they may be arranged in sequence along the driving direction of the vehicle model in the simulated image, based on determined display positions. Specifically, the display position of the first-placed identifier can be determined first based on the position of the vehicle model in the simulated image, and the display positions of the other identifiers then determined from it, so that when the first identifiers following the vehicle model are displayed, they maintain a certain spatial relation with the vehicle model, avoiding the poor visual experience of random placement. In implementation, the display position of the first-placed identifier can be determined from the position of the vehicle model and a preset positional relation of the identifier relative to the vehicle model; this relation can be set flexibly and is not limited in this embodiment. For example, the first-placed identifier may be positioned at a preset distance from the vehicle model in a set direction, under the display interface coordinate system of the vehicle display screen. Further, the display position of the second-placed identifier can be determined from the display position of the first-placed identifier and the size of the second-placed identifier; and so on for the remaining identifiers.
The display positions of two adjacent first identifiers may or may not overlap; this embodiment is not limited in this respect. In the examples shown in FIG. 2b and FIG. 2c, the display positions of adjacent first identifiers 21 overlap.
It should further be noted that, to reflect through the first identifier whether the current driving mode (such as parking or forward driving) is manual, automatic or assisted, an element reflecting manual driving, automatic driving or assisted driving respectively may be added to the first identifier. For example, if the second driving information reflects normal forward driving under manual control, a manual driving icon 210 (such as a hands-on-steering-wheel icon) may be added to the largest first identifier 21' among the plurality of first identifiers 21 shown in FIG. 2b, i.e., the display state of the first identifier 21 may be as shown in block 2' of FIG. 2b. For another example, if the second driving information reflects the vehicle automatically parking into a space, an automatic driving icon 211 (such as a steering-wheel-only icon) may be added to the largest of the first identifiers shown in FIG. 2c, i.e., the display state may be as shown in block 2'' of FIG. 2c. And if the vehicle is driving forward under assisted driving, text such as "assisted driving" may be added to the largest of the plurality of first identifiers 21; the present application is not limited to these examples.
In addition, to achieve a more realistic following effect, at least some of the plurality of first identifiers shown in FIG. 2b or FIG. 2c may also fluctuate slightly within their respective ranges.
Besides following the vehicle model in the simulated image, the first identifier corresponding to the voice interaction function can be displayed in other ways. For example, when the display information shown on the vehicle display screen is recognized to contain focus elements meeting a visual focus requirement, the display mode of the first identifier can be switched so that it is displayed in association with one focus element, visually prompting the user with the position of that element and helping the user quickly find the focus elements of interest.

The visual focus requirement is a preset requirement, which may include, but is not limited to, at least one of the following: the focus element is a multimedia element, such as a window element corresponding to push information (weather, hot news, advertisements, entertainment gossip, etc.) or an interface element in an application interface (such as a song title or lyrics); the focus element is a landmark environment element in the simulated image, for example an environment marking element corresponding to a parking space, a gas station, a service area or a lane (such as elements corresponding to the roadway and the driving direction).

In implementation, when the displayed second driving information contains focus elements meeting the visual focus requirement, there may be one or more of them; if there are several, one can be determined from among them as the target focus element to be displayed in association with the first identifier, the manner of association being adapted to the type of the target focus element.

The focus element is defined here from the perspective of analyzing the driver and/or passengers, e.g., locking the visual focus on the display interface according to their preferences; the target information above is described from the information level. In practice, the focus element and the target information described here may mean the same thing. That is, in the above embodiment the target information is determined based on the content item at the topmost presentation level in the second driving information; it may also be preset by a program (for example, if a window exists among the content items of the topmost display level, the window is the target information; if a simulated image exists there, it is the target information; and so on), or determined by analyzing the preferences of the driver and/or passengers, i.e., by the same technical means as the focus element in this section.
Specifically, if the displayed second driving information includes a plurality of focus elements meeting the visual focus requirement, one of them may be determined as the target focus element in any one of the following manners:
In a first mode, a random manner may be adopted: one focus element is selected from the plurality of focus elements and determined as the target focus element. As shown in fig. 3c, the focus element 31 may be randomly selected from the plurality of focus elements and determined as the target focus element.
In a second mode, the intention of the driver in the vehicle is predicted according to acquired data information, and the focus element among the plurality of focus elements that matches the driver's intention is determined as the target focus element. The acquired data information includes, but is not limited to, at least one of the following: vehicle data (e.g., vehicle positioning information; information collected by sensors on the vehicle, such as environmental information around the vehicle collected by radar or distance sensors; images outside the vehicle collected by a camera; weather identified from the collected images outside the vehicle, e.g., rainy, snowy, or sunny; travel speed; remaining power; remaining fuel; and the like), information actively entered by a user (e.g., music preference, driving preference, destination, etc.), and information generated by the driver's interaction with the vehicle (e.g., navigation data generated from a destination entered by the driver, images of the driver's face, images of limb movements, etc.). It should be noted that the above-mentioned interaction data are obtained only after authorization or confirmation by the driver; data not authorized or confirmed by the driver are not obtained.
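The two selection modes can be summarized in a short sketch. The following Python fragment is illustrative only; the type and function names (FocusElement, predict_driver_intent, and so on) are assumptions, not identifiers from this disclosure, and the heuristic is a toy stand-in for a real intent model.

```python
import random
from dataclasses import dataclass, field

@dataclass
class FocusElement:
    kind: str                               # e.g. "parking_space", "lyrics", "push_window"
    intent_tags: set = field(default_factory=set)

def predict_driver_intent(vehicle_data: dict) -> str:
    # Toy heuristic standing in for the real intent model: a vehicle close
    # to its destination is assumed to be looking for a parking space.
    if vehicle_data.get("distance_to_destination_m", float("inf")) < 400:
        return "find_parking"
    return "none"

def pick_target_random(candidates: list) -> FocusElement:
    # Mode 1: select one qualifying focus element at random.
    return random.choice(candidates)

def pick_target_by_intent(candidates: list, vehicle_data: dict) -> FocusElement:
    # Mode 2: match the predicted intent against each element's tags,
    # falling back to random selection when nothing matches.
    intent = predict_driver_intent(vehicle_data)
    for element in candidates:
        if intent in element.intent_tags:
            return element
    return pick_target_random(candidates)
```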
The following describes, with reference to several examples, specific implementations of determining the target focus element and displaying the first identifier in association with it.
Example 41
Referring to fig. 3a, the second driving information displayed on the vehicle display screen reflects that the vehicle is driving forward. According to the navigation data corresponding to the navigation map 4 in the displayed second driving information, it is determined that the vehicle is 400 m from its destination; at this point, the driver's intention is predicted to be finding a parking space. Accordingly, based on this intention, the simulation image in the displayed second driving information may be analyzed, and when it is identified that the environment image contained in the simulation image has an environment marking element 32 corresponding to a parking space suited to the destination (the distance between the environment marking element 32 and the car model being smaller than a first preset distance), the display state of the first identifier is switched from the first display state (such as the display state of the first identifier 21 shown in fig. 2b) to a third display state, so that the first identifier 21 is displayed near the environment marking element 32 and the driver is prompted of the position of the parking space. Specifically, a plurality of first identifiers 21 arranged in sequence may be displayed in the region 2' near the environment marking element 32, the display style of these identifiers following the second style described above. In addition, to further prompt the driver of the location of the parking space, the environment marking element 32 corresponding to the parking space may be highlighted. This embodiment does not limit the manner of highlighting the environment marking element 32, as long as the highlighting visually prompts the driver of the position of the corresponding parking space.
It should additionally be noted that, when the first identifier 21 is displayed near the environment marking element 32 in the third display state, it may be displayed statically or dynamically. For example, the first identifier 21 may be displayed dynamically with a breathing action, or may be displayed dynamically with a spiral motion above the environment marking element 32 corresponding to the parking space; no limitation is imposed here.
Further, when it is detected that the vehicle starts parking into the parking space corresponding to the environment marking element 32, the display state of the first identifier may be switched again: specifically, from the third display state to the display state of the first identifier 21 shown in fig. 2c (the first display state), thereby reflecting through the first identifier 21 that the vehicle is backing into the parking space. Alternatively, if it is detected that the vehicle arrives at the parking space corresponding to the environment marking element 32 but does not park and instead drives away, then when it is determined that the vehicle has moved, for example, 500 m away from that parking space (at which point the distance between the car model and the environment marking element 32 is greater than or equal to a second preset distance), the display state of the first identifier may again be switched, specifically from the third display state to a fourth display state, so that the first identifier is displayed away from the environment marking element 32. The fourth display state may or may not be the same as the first display state (such as the display state of the first identifier 21 shown in fig. 2b), as long as the first identifier 21 is presented away from the environment marking element 32.
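The attach/detach behaviour around the parking-space element reduces to two distance thresholds. In the sketch below the concrete threshold values and state names are assumptions; the description only requires that the third state engage within a first preset distance and the fourth state engage at or beyond a second preset distance.

```python
FIRST_PRESET_DISTANCE_M = 50.0    # assumed value for the first preset distance
SECOND_PRESET_DISTANCE_M = 500.0  # assumed value for the second preset distance

def first_identifier_state(distance_m: float, current_state: str) -> str:
    # Attach the identifier to the parking-space element when the car model
    # comes within the first preset distance ...
    if distance_m < FIRST_PRESET_DISTANCE_M:
        return "third"
    # ... and detach it again once the car model has moved at least the
    # second preset distance away without parking.
    if current_state == "third" and distance_m >= SECOND_PRESET_DISTANCE_M:
        return "fourth"
    return current_state
```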
Example 42: assume that the display contents shown on the vehicle display screen include multimedia information (such as songs being played), and accordingly the determined target focus element is a media marking element in that multimedia information. In this case the first identifier 21 can be displayed in a fifth display state, presenting an interactive display effect with the multimedia information. Specifically:
Example 421: assume that the target focus element belongs to the multimedia information shown in fig. 3b (the content of the song being played). According to the driver's humming habits, it is determined that the driver intends to sing along with the currently played song, and the media marking element 33 carrying the lyrics corresponding to the current playing progress is determined as the target focus element. Accordingly, the first identifier 21 may be displayed at a corner of the media marking element 33, either statically or dynamically (for example with a breathing effect); alternatively, it may be displayed dynamically around the media marking element 33, for example winding around it, and so on.
Example 422: assume that the determined target focus element is the video playing interface of a video application displayed on the vehicle display screen. The first identifier 21 may then be displayed around the video playing interface in a dynamic manner such as winding. Alternatively, it may be identified whether comment content (in the form of text or graphic primitives) appears in the video being played in that interface; if it does, the effect of the first identifier moving along with the comment content can be shown on the video playing interface, based on the moving direction of the comment content.
Example 43: assume that the display contents shown on the vehicle display screen include push information; see the window elements 30 corresponding to a plurality of pieces of push information shown in fig. 3c. According to historical data related to the driver, the driver's degree of interest in the push information corresponding to each window element 30 can be determined; one window element is then selected from the plurality of window elements 30 as the target focus element based on the interestingness corresponding to each window element. In a specific implementation, the window elements may be ranked by, but not limited to, their corresponding interestingness, and the top-ranked window element used as the target focus element. For example, assuming the target focus element is the window element 31, the first identifier 21 may be displayed at, but not limited to, the upper left corner of the window element 31. In this example, the window element 31 may also be highlighted in addition to displaying the first identifier 21 at its upper left corner; the highlighting may be, but is not limited to, adding a shadow effect or lighting effect to the window element 31, for example a lighting effect of a salient color (e.g., violet) along its peripheral edge.
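A minimal sketch of this ranking step, assuming the interestingness scores have already been computed from the driver's historical data (the identifiers and data model are invented for illustration):

```python
def pick_target_window(window_ids: list, interestingness: dict) -> str:
    # Rank the push-information windows by the driver's interestingness
    # score and take the top-ranked one as the target focus element.
    ranked = sorted(window_ids,
                    key=lambda w: interestingness.get(w, 0.0),
                    reverse=True)
    return ranked[0]

# Usage: window "w31" wins and would receive the first identifier plus highlight.
print(pick_target_window(["w30a", "w31", "w30b"],
                         {"w30a": 0.2, "w31": 0.9, "w30b": 0.4}))
```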
Further, if it is detected that the focus elements contained in the displayed second driving information disappear while the second driving information reflects that the vehicle is in a driving state, the first identifier can resume being displayed following the car model contained in the simulation image in the second driving information.
A supplementary explanation: to ensure driving safety, the precondition for the second driving information displayed on the vehicle display screen in the above example to include multimedia information (such as window elements corresponding to push information) may be that a parking event triggered by the driver has been monitored (such as switching the gear from a forward gear to a parking gear). In a specific implementation, a gear shifting event of the vehicle is determined by monitoring the vehicle's gear information, which may be obtained from a sensor on the vehicle or from the vehicle's network platform; this embodiment imposes no particular limitation, as long as the gear information can be accurately detected. The main purpose of detecting the gear is to determine the state of the vehicle. When the detected gear information is a driving gear, the vehicle is in a driving state, and the second driving information displayed on the vehicle display screen in this state does not contain multimedia information, thereby ensuring the driver's safety. When the detected gear information is a parking gear, the vehicle is in a non-driving state with zero speed, and the second driving information displayed in this state may contain multimedia information to provide entertainment services to the driver and passengers.
The gears covered by the driving gear and the parking gear may vary for different types of vehicles. Taking an automatic-transmission vehicle as an example, the driving gear may include the D gear (forward) and the R gear (reverse), and the parking gear may include the P gear (park) and the N gear (neutral). The display of window elements corresponding to push information is described and illustrated below taking the D gear as the driving gear and the P gear as the parking gear.
For example, referring to fig. 3d, when the vehicle's gear is the forward gear (D gear), the vehicle is in a driving state; at this time, the first identifier 21 following the car model 12 in the simulation image is displayed on the vehicle display screen A, and, to ensure driving safety, the vehicle does not push information (i.e., the displayed second driving information contains no window elements corresponding to push information). When the vehicle is shifted from the D gear to the parking gear (P gear), in response to this first shift event, the vehicle executes an information push program and displays at least one window element 30 corresponding to push information on the vehicle display screen (i.e., the displayed second driving information includes window elements corresponding to push information), while the first identifier 21 stops following and is displayed statically. If none of the displayed window elements 30 requires a visual focus indication, they may mask the first identifier 21; if a window element 31 among them requires a visual focus indication, the first identifier 21 may be displayed at the window edge of that element.
Further, when it is detected that the vehicle shifts from the P gear back to the D gear (as indicated by the dashed arrow), in response to this second shift event a window disappearance procedure is executed, so that the at least one window element 30 originally shown on the vehicle display screen A disappears and, accordingly, the first identifier 21 resumes following.
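The D/P behaviour just described amounts to a small gear-change handler. The sketch below assumes a hypothetical `ui` facade exposing the window and identifier operations; none of these names come from the disclosure.

```python
def on_gear_change(new_gear: str, ui) -> None:
    if new_gear == "P":
        # First shift event (D -> P): run the information push program and
        # freeze the first identifier in a static display.
        ui.show_push_windows()
        ui.first_identifier.stop_following()
    elif new_gear == "D":
        # Second shift event (P -> D): run the window disappearance program
        # and let the first identifier resume following the car model.
        ui.hide_push_windows()
        ui.first_identifier.resume_following()
```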
The above describes, from the perspective of the first identifier corresponding to the voice interaction function, how display is handled while the voice interaction function is in the to-be-activated state. In this scheme, when the voice interaction function transitions from the to-be-activated state to the activated state, it also has different display states for the different voice interaction stages of the voice interaction with the occupants of the vehicle. Specifically, an occupant (the driver or a passenger) issues an activation voice for the voice interaction function of the vehicle while it is in the to-be-activated state, and the vehicle activates the voice interaction function in response. The activation voice (also called wake-up voice) includes an activation word (also called wake-up word) corresponding to the voice interaction function; the activation word may be a default word or a custom word, without limitation here. For example, the activation word may be "Mars number one", "fly", or the like. Once the voice interaction function is in the activated state, the occupant can conduct voice interaction with the vehicle through it.
In order to let the occupant clearly and visually know which voice interaction stage the interaction is in, and thereby improve the usability of voice interaction, this embodiment displays on the vehicle display screen, for each voice interaction stage, a second identifier of the voice interaction function adapted to that stage (such as the second identifier 20 shown in fig. 5a). The second identifier may be two-dimensional or three-dimensional, for example, but not limited to, a two-dimensional or three-dimensional polygon icon. Based on this, the method provided in this embodiment may further include the following steps:
201. In response to an activation voice issued by a first user for the voice interaction function, determining the voice interaction stage currently corresponding to the first user;
202. Displaying a second identifier adapted to the voice interaction stage corresponding to the first user;
wherein the second identifier is used to prompt that the voice interaction function is in the activated state.
For the description of the activation voice, refer to the relevant content above. It should be added that a user as described in this embodiment (e.g., the first user, or the second user described below) refers to an occupant of the vehicle, which may include the persons in the front seats (the primary driver and the secondary driver) as well as passengers sitting in the rear row.
Generally, the voice interaction process includes, but is not limited to, the following stages: a sound-receiving stage, a listening stage, a thinking stage, a voice broadcasting stage, and a full duplex stage. The sound-receiving stage is the stage in which the voice interaction function has been activated but the occupant has not yet spoken; the listening stage is the stage in which the user is speaking and the voice uttered by the user is being collected; the thinking stage is the stage in which the user's voice is recognized and analyzed; the voice broadcasting stage is the stage in which, after a function has been executed according to the voice recognition result, the execution result is broadcast to the occupant; in the full duplex stage, voice interaction can proceed without re-waking, supporting two-channel voice communication, barge-in, and the like; for example, while a voice broadcast is in progress, voice uttered by the occupants can still be collected and recognized. Based on the stages involved in the voice interaction process, the specific forms of the second identifier 20 of the voice interaction function adapted to the different voice interaction stages are described below with reference to fig. 5a and 5b.
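These stages behave like a small state machine. The sketch below encodes them with assumed event names (speech_detected, tts_finished, and so on); the real triggers would come from the voice activity detector, the recognizer, and the TTS engine.

```python
from enum import Enum, auto

class Stage(Enum):
    SOUND_RECEIVING = auto()   # activated, no speech yet
    LISTENING = auto()         # user speaking, audio being collected
    THINKING = auto()          # recognition and semantic analysis
    BROADCASTING = auto()      # execution result being read out
    FULL_DUPLEX = auto()       # wake-free follow-up interaction

TRANSITIONS = {
    (Stage.SOUND_RECEIVING, "speech_detected"): Stage.LISTENING,
    (Stage.LISTENING, "speech_ended"): Stage.THINKING,
    (Stage.THINKING, "analysis_done"): Stage.BROADCASTING,
    (Stage.BROADCASTING, "tts_finished"): Stage.FULL_DUPLEX,
    (Stage.FULL_DUPLEX, "speech_detected"): Stage.LISTENING,
}

def next_stage(stage: Stage, event: str) -> Stage:
    # Unknown events leave the stage unchanged.
    return TRANSITIONS.get((stage, event), stage)
```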
As shown in fig. 5a and 5b, when the voice interaction function of the vehicle is in the to-be-activated state, the first identifiers corresponding to the voice interaction function displayed on the vehicle display screen may be a plurality of first identifiers 21 arranged in sequence. If the primary driver then issues an activation voice for the voice interaction function, the vehicle responds; when the voice interaction function has been activated but no user speech has been detected, it is determined that the current voice interaction stage is the sound-receiving stage. At this time, an identifier 23 (or identifier 23') adapted to the sound-receiving stage is displayed dynamically on the vehicle display screen, prompting the primary driver that the voice interaction function is activated but no speech has yet been detected. Specifically, the dynamic display of the identifier 23 may be, but is not limited to, regular rhythmic motion of the actionable elements within it (such as the plurality of vertical-line-shaped elements 231 of different lengths within the identifier 23), presenting a dynamic sound-receiving effect. When presented, the color transparency of the actionable elements in the identifier 23 is a first value. The dynamic display of the identifier 23' may be, but is not limited to, rotation, such as rotating the identifier 23' left and right.
When the primary driver is detected speaking, for example when the voice "navigation" uttered by the primary driver is detected, it is determined that the current voice interaction stage is the listening stage, in which the user's speech is listened to. At this time, the displayed identifier 23 (or identifier 23') is updated to the identifier 24 (or identifier 24') adapted to the listening stage, displayed dynamically. The dynamic display of the identifier 24 may be, but is not limited to, the actionable elements within it (such as the plurality of vertical-line-shaped elements 241 of different lengths) moving rhythmically along with the speech uttered by the primary driver, presenting a dynamic listening effect; when presented, the color transparency of the identifier 24 is a second value. The dynamic display of the identifier 24' may be, but is not limited to, gradually increasing its size and rotating it once the size increase reaches a preset requirement. In this stage, the voice activity detection (VAD) function corresponding to the voice interaction function is active. VAD is a speech signal processing technique which, in short, separates effective speech signals from useless speech or noise signals, making subsequent speaker recognition, semantic recognition, speech emotion analysis, and similar work more efficient; it is a necessary and key link in the speech processing pipeline.
Further, if the primary driver's speech is not detected within a certain period of time (e.g., 3 seconds), it is determined that the primary driver has finished speaking, and the current voice interaction stage enters the thinking stage (also called the loading stage). At this time, the displayed identifier 24 (or identifier 24') is updated to the identifier 25 (or identifier 25') adapted to the thinking stage, displayed dynamically. The dynamic display of the identifier 25 may be, but is not limited to, having its actionable elements (such as the plurality of diagonal-line-shaped elements 251 of different lengths within the identifier 25) perform a loading action, presenting the impression that speech semantic recognition and analysis is being performed on the collected speech of the primary driver and/or that the related function is being executed according to the recognition result. The technique employed for recognition may be, but is not limited to, natural language understanding (NLU). When presented, the color transparency of the identifier 25 is a third value. The dynamic display of the identifier 25' may be, but is not limited to, having the two arrows within the identifier 25' move in a regular rhythm.
Still further, after the speech semantic recognition and analysis ends, it may be determined that the current voice interaction stage enters the voice broadcasting stage; at this time, the displayed identifier 25 (or identifier 25') is updated to the identifier 26 (or identifier 26') adapted to the voice broadcasting stage, displayed dynamically. Specifically, if it is determined from the voice recognition result and/or the execution result of the related function that recognition and/or execution succeeded, the displayed identifier 25 may be updated to the identifier 261 adapted to the voice broadcasting stage, with the actionable elements within it (such as a plurality of vertical-line-shaped elements of different lengths) moving to the rhythm of the voice broadcast, presenting the dynamic effect of a positive voice broadcasting state; alternatively, the displayed identifier 25' may be updated to the identifier 261' adapted to the voice broadcasting stage and displayed rotating, to the same end. If it is determined from the voice recognition result and/or the execution result that recognition failed, that the function cannot be executed, or that recognition succeeded but the function cannot be executed, the displayed identifier 25 may be updated to the identifier 262 adapted to the voice broadcasting stage, with the actionable elements within it (such as the two-dimensional character avatar 2621) controlled to perform negative actions such as shaking the head or crying, presenting the dynamic effect of a negative voice broadcasting state; alternatively, the displayed identifier 25' may be updated to the identifier 262', with the actionable broken-line arrow within it performing a wave-like action, to the same end. When presented, the color transparency of the identifier 26 is a fourth value. Displaying the identifier 26 corresponding to the voice interaction function in the voice broadcasting stage lets the primary driver perceive, through visuals and animation, the different emotions of the voice interaction function during interaction, which helps establish an emotional connection between the user and the voice interaction function.
After the voice broadcast ends, it is determined that the current round of voice interaction with the primary driver is finished, and the current voice interaction stage enters the full duplex stage. At this time, the displayed identifier 26 (or identifier 26') is updated to the identifier 27 (or identifier 27') adapted to the full duplex stage, where the color transparency of the identifier 27 is a fifth value. The full duplex stage corresponding to the identifier 27' is further divided into a full duplex sound-receiving segment, a full duplex listening segment, and a full duplex loading segment; these three segments are functionally similar to the sound-receiving, listening, and thinking stages, except that the voice interaction is wake-free and supports barge-in, multiple voice channels, and the like. After the current round of voice interaction ends, entry into the full duplex sound-receiving segment is determined first, and the displayed identifier 26' is updated to the identifier 271' adapted to that segment; the identifier 271' may be, but is not limited to being, displayed rotating dynamically. Further, if the primary driver is detected speaking again, entry into the full duplex listening segment is determined, and the displayed identifier 271' is updated to the identifier 272' adapted to that segment, which may likewise be, but is not limited to being, displayed rotating dynamically. After the listening ends, entry into the full duplex loading segment is determined, and the displayed identifier 272' is updated to the identifier 273' adapted to that segment; the dynamic display of the identifier 273' may be, but is not limited to, having the two arrows in different directions within it move in a regular rhythm.
After the voice interaction stage enters the full duplex stage, a full duplex exit countdown program starts executing. If, when the countdown ends, no further voice from the occupants has been detected, it is determined that the voice exit condition is met, and the voice interaction function returns to the to-be-activated state. If an occupant is detected speaking again before the countdown ends, it is determined that the voice exit condition is not met; the full duplex stage is maintained, and the occupant's voice is recognized.
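A sketch of the exit countdown, with an assumed window length; `voice_detected` stands in for whatever callback the voice activity detector provides:

```python
import time

def full_duplex_exit(voice_detected, countdown_s: float = 8.0) -> str:
    # Poll for speech until the countdown elapses. Any detected speech
    # cancels the countdown and keeps the full duplex stage alive.
    deadline = time.monotonic() + countdown_s
    while time.monotonic() < deadline:
        if voice_detected():
            return "full_duplex"       # exit condition not met
        time.sleep(0.05)
    return "to_be_activated"           # silence: exit condition met
```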
The first to fifth values mentioned here may be the same or different. In this embodiment, the first and fifth values are larger than the second through fourth values, so that the corresponding identifiers are visually weakened to reduce disturbance to the user.
Further, besides the stages described above, the voice interaction stages corresponding to the voice interaction function may also include a voice-disabled stage. Accordingly, the display style of the second identifier adapted to the voice-disabled stage may be, but is not limited to, the static display shown in fig. 5c, as long as the static display conveys that the voice interaction function is unavailable. The reasons for the voice interaction function being unavailable are various, such as an ongoing call or the in-car voice channel being otherwise occupied, and are not limited here.
It should be noted that, in the foregoing voice interaction process, after the voice interaction stage is determined to have entered the listening stage, a text box may be displayed near the displayed second identifier adapted to the listening stage, so as to show in the text box the text content corresponding to the heard voice. The text displayed in the text box may be, but is not limited to, text converted from the user's speech using automatic speech recognition (ASR). When displaying text, it may be, but is not limited to being, displayed dynamically in the text box: the text is first displayed statically with a text alignment (such as left or right alignment), and after the text box is full, the text is displayed in a scrolling manner (such as scrolling left or right). For example, fig. 5d shows a text box 28 displayed on the right side of the identifier 24 adapted to the listening stage; text corresponding to the heard voice, such as "navigation" or "navigate to Hongqiao Airport", is displayed left-aligned, and after the text box 28 is full, the display switches to scrolling left. Further, when the listening ends and the thinking stage is entered, only the originally displayed identifier 24 need be updated to the identifier 25 adapted to the thinking stage; the displayed text box and its text content need not be updated and may continue to be displayed dynamically. Still further, after the voice broadcasting stage is entered, the originally displayed identifier 25 may be updated to the identifier 26 adapted to the voice broadcasting stage, and the corresponding broadcast content may be shown in the displayed text box 28 as it is broadcast; alternatively, the text box and its text content may be left unchanged, without limitation here. After the voice broadcasting stage ends, the current round of voice interaction ends and the voice interaction stage enters the full duplex stage; the text box display corresponding to the full duplex stage is described in detail below.
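Before turning to the full duplex text box, the fill-then-scroll rule just described can be sketched as a simple windowing function; measuring width in characters is a simplification, as a real layout would work in pixels:

```python
def render_text_box(text: str, width_chars: int = 16) -> str:
    # Static left-aligned display until the box is full, then a left-scroll
    # effect that keeps only the newest tail of the recognized text.
    if len(text) <= width_chars:
        return text.ljust(width_chars)
    return text[-width_chars:]

# As ASR partial results grow, the view switches from aligned to scrolled:
for partial in ["navigation", "navigate to Hongqiao Airport"]:
    print(repr(render_text_box(partial)))
```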
One way is not to display the text box in real time: that is, while the voice uttered by the occupant is being recognized, the text content corresponding to the recognized voice is not displayed, and the text box is displayed only in the voice broadcasting stage, to show the corresponding broadcast content.
For example, referring to fig. 6a, in the full duplex stage, when the occupant is detected speaking the voice "26 degrees of air conditioning", and it is recognized that its semantic content matches the preset content, the corresponding function is executed according to the semantic recognition result (for example, the air conditioner is successfully set to 26 degrees) and the voice broadcast of the execution result begins; at this moment a text box 28 is displayed, for example on the right side of the identifier 26 adapted to the voice broadcasting stage, with "26 degrees of air conditioning" shown inside it. After the voice broadcast ends, the full duplex stage is entered again; at this point only the identifier 27 is displayed and the text box is not. In addition, the full duplex exit countdown program is re-executed.
Alternatively, as shown in fig. 6b, after the above voice broadcast ends, the listening stage may be entered with the text box 28 still displayed, showing a text prompt that the occupant may continue speaking, such as "I am listening"; after being displayed for a period of time (e.g., 2 s), the text box 28 disappears and the full duplex stage is entered again.
For another example, referring to fig. 6c, in the full duplex stage, when the occupant is detected chatting, the full duplex exit countdown program is interrupted and the voice uttered by the occupant is recognized; during recognition, no text box is displayed near the displayed identifier 27 adapted to the full duplex stage. If it is then recognized that the semantic content of the occupant's voice does not match the preset content, a text box 28 is displayed on the right side of the identifier 27, and the voice is converted to text according to the recognition result (such as telling a joke) and shown in the text box 28; the text may be displayed in a non-salient color, such as gray, to convey that the user's voice was declined. The text box 28 disappears after a preset period of time (e.g., 2 s), the full duplex stage continues, and the interrupted full duplex exit countdown program is executed again.
Another way is to display the text box in real time: that is, as soon as the occupant's speech is detected, the text box is displayed and the text corresponding to the recognized voice is shown inside it.
For example, referring to fig. 6d, in the full duplex stage, when the occupant is detected speaking the voice "air-conditioned 26 degrees", a text box 28 is displayed adjacent to the displayed identifier 27 adapted to the full duplex stage, with "air-conditioned 26 degrees" shown inside it. In the subsequent thinking and voice broadcasting stages, the text box 28 remains displayed, still showing "air-conditioned 26 degrees". After the voice broadcast ends, the text box 28 disappears and the full duplex stage is entered.
For another example, referring to fig. 6e, in the full duplex stage, when the occupant is detected chatting and uttering the voice "get around for a while", a text box 28 is displayed on the right side of the displayed identifier 27 adapted to the full duplex stage, with "get around for a while" shown inside it. Upon entering the thinking stage, the text box 28 remains displayed with the same text. Thereafter, when recognition analysis determines that the semantic content of the occupant's voice does not match the preset content, the full duplex stage is entered while the text box 28 remains displayed for a period of time (such as 2 s), with its text shown in a non-salient color, such as gray, to convey that the user's voice was declined. The text box 28 disappears after, for example, 2 s.
From the above, the solution provided in this embodiment may further include:
203. When the first user utters a voice command, displaying a text box on one side of the second identifier and presenting in the text box the text corresponding to the voice command.
To make it easier for the occupants to visually and clearly understand the current voice interaction stage and the corresponding voice interaction content, in this embodiment the vehicle display screen is divided into multiple display areas, some of which correspond to spatial positions within the vehicle. During voice interaction, this correspondence is combined with the spatial position of the speaking occupant in the vehicle to determine the display position of the second identifier 20 corresponding to the voice interaction function, and the second identifier adapted to the voice interaction stage is displayed at the determined position.
Fig. 7a shows an example in which the vehicle display screen is divided into four display areas (A1 to A4). The display area A1 is mainly used for displaying the simulation image. The display area A2 corresponds to the spatial position of the primary driving seat and displays the interaction information to be shown when the primary driver interacts (e.g., by voice) with the vehicle. The display area A3 corresponds to the spatial position of the co-driver's seat and displays the interaction information to be shown when the co-driver interacts with the vehicle. The display area A4 corresponds to the spatial position of the rear-row seats and displays the interaction information to be shown when rear-row passengers interact with the vehicle. More specifically, taking the display area A2 as an example, as shown in fig. 7b, it is further divided into a voice area a21 and an application area a22: the voice area a21 displays the second identifier, voice interaction results, and the like that need to be shown when the primary driver interacts with the vehicle, and the application area a22 displays application-related content to be presented during the interaction, such as an application window (the human-machine interface of a running application). It should be noted that the voice area a21 and the application area a22 can be translated and scaled, so that when a voice interaction result cannot be fully presented in the current voice area a21, the application area a22 can be shifted left along the horizontal direction to shrink it, and the voice area a21 correspondingly enlarged.
For the further division of the display areas A3 and A4, refer to the further division of the display area A2 described above; details are not repeated here.
The following uses a few examples to describe the display of the second identifier corresponding to the voice interaction function and of the voice interaction results during voice interaction, in combination with the correspondence described above between the multiple display areas on the vehicle display screen and the spatial positions in the vehicle.
Example 51, described with reference to figs. 8a to 8c from the perspective of the vehicle's voice interaction function supporting a one-person talk mode.
The one-person talk mode means that the voice interaction function supports voice interaction with only one occupant at a time.
Example 511: displaying in the display area corresponding to the spatial position of the speaking occupant in the vehicle.
Referring to fig. 8a, assume the primary driver utters an activation voice for the voice interaction function. By analyzing the received activation voice, the primary driver's spatial position within the vehicle can be determined; based on the previously established correspondence between the display areas of the vehicle display screen and the spatial positions in the vehicle, the display area corresponding to that position is determined to be the display area A2. That is, the second identifier corresponding to the voice interaction function may be displayed in the display area A2; for example, but not limited to, the second identifier 20 may be displayed at the upper left corner of the voice area a21 in the display area A2. Thereafter, if the primary driver is detected speaking, a text box may also be displayed, for example on the left side of the second identifier 20, converting the primary driver's speech to text presented within it. Further, if a voice interaction result needs to be presented later, it (such as the seat heating result card 6) can be presented below the second identifier 20.
It should be noted that the activation voice uttered by the primary driver can be acquired through the vehicle-mounted microphone array system. That system may employ a time-delay-estimation technique for sound source localization: specifically, the differences in the times at which the activation voice arrives at the different microphones of the array can be calculated, and the primary driver's spatial position within the vehicle then determined at least from those time differences and the positions of the microphones.
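A minimal sketch of the time-delay-estimation step, assuming just two microphones and a toy left/right decision rule; a real system uses more microphones and a calibrated geometry, and the sign convention below is purely an assumption about where the microphones sit.

```python
import numpy as np

def estimate_delay_s(sig_a: np.ndarray, sig_b: np.ndarray, fs: int) -> float:
    # Cross-correlate the two microphone signals; the lag of the correlation
    # peak estimates the difference in arrival time of the activation voice.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag_samples / fs

def locate_speaker(delay_s: float) -> str:
    # Toy mapping for one microphone near each front seat; which sign maps
    # to which seat depends entirely on the assumed microphone geometry.
    return "primary_driver" if delay_s < 0 else "secondary_driver"
```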
Example 512: when the content of the voice interaction cannot be fully displayed in the corresponding display area.
Referring to fig. 8b, assume the co-driver utters an activation voice for the voice interaction function; accordingly, the content to be displayed for the co-driver's voice interaction with the vehicle is preferably presented in the display area A3. If, during the interaction, the co-driver utters a voice such as "turn on the car controller", and the voice interaction result (i.e., the interface content corresponding to the car controller) cannot be shown in the display area A3 because of the size limits of its application area, the voice interaction result can instead be presented in the display area A2.
Example 513: a later-triggered voice interaction interrupts the preceding voice interaction.
Referring to fig. 8c, if the spatial position of the occupant who uttered the activation voice is determined, from that voice, to correspond to the primary driving seat, then the second identifier 20 corresponding to the voice interaction function, a text box (not shown) for displaying text, and the like are displayed in the display area A2. If that position is determined to correspond to the co-driver's seat, they are displayed in the display area A3; and if it corresponds to a rear-row seat, they are displayed in the display area A4.
Further, if another occupant is subsequently detected uttering an activation voice, the current voice interaction is interrupted and the display position of the second identifier 20 changes. For example, suppose a voice interaction with the primary driver is under way, with the second identifier 20 displayed in the display area A2; if the co-driver is detected uttering an activation voice during that interaction, the voice interaction with the primary driver is stopped, the voice interaction with the co-driver is conducted, and the content to be displayed, such as the second identifier 20 corresponding to the voice interaction function, is shown in the display area A3.
In connection with example 51 above, step 202 of this embodiment, "displaying the second identifier adapted to the voice interaction stage corresponding to the first user", may specifically include:
2021. Determining the position of the first user in the vehicle according to the activation voice uttered by the first user;
2022. Determining a display position of the second identifier based on that position;
2023. Displaying the second identifier at the determined display position.
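Steps 2021 to 2023 map directly onto a few lines of code. The area map below follows the division in fig. 7a; the localization call is a stand-in for the microphone-array technique sketched earlier, and the `ui` facade is hypothetical.

```python
AREA_FOR_POSITION = {
    "primary_driver": "A2",
    "secondary_driver": "A3",
    "rear_passenger": "A4",
}

def locate_speaker_from_audio(activation_audio) -> str:
    # Stand-in for microphone-array sound source localization.
    return "primary_driver"

def show_second_identifier(activation_audio, ui) -> None:
    position = locate_speaker_from_audio(activation_audio)   # step 2021
    area = AREA_FOR_POSITION[position]                        # step 2022
    ui.display_second_identifier(area)                        # step 2023
```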
Example 52, described with reference to figs. 9a to 9f from the perspective of the vehicle's voice interaction function supporting a two-person talk mode.
The two-person talk mode means that the voice interaction function can interact with at most two occupants simultaneously. When the voice interaction function supports a multi-person talk mode (such as the two-person talk mode), the voice interaction content to be shown, such as the second identifier corresponding to the voice interaction function and the text box presenting text, can be displayed in the several corresponding display areas.
Example 521: a later-triggered voice interaction does not interrupt the preceding voice interaction.
For example, referring to fig. 9a, initially the primary driver utters an activation voice for the voice interaction function, and according to the primary driver's spatial position in the vehicle, the second identifier 20 corresponding to the voice interaction function, a text box (not shown) for displaying text, and the like are presented in the display area A2 during the interaction. If the co-driver is detected uttering an activation voice during the voice interaction with the primary driver, the vehicle can respond to the interaction triggered by the co-driver while maintaining the interaction with the primary driver: according to the co-driver's spatial position in the vehicle, the second identifier 20, a text box for displaying text, and other content to be shown for the co-driver's interaction are likewise displayed in the display area A3. The corresponding second identifier 20 and related content are thus shown in the display areas A2 and A3 at the same time, but what is displayed in the two areas may differ or coincide depending on the object, progress, and content of each voice interaction. For example, the styles of the second identifiers 20 displayed in the areas A2 and A3 may be the same or different, depending on the voice interaction stage of each corresponding interaction.
It should be noted here that, when the voice interaction function supports, for example, the two-person talk mode, the voice interaction corresponding to the primary driver has the highest priority and cannot be interrupted. For example, suppose the vehicle is executing a voice navigation task (e.g., navigating to Hongqiao Airport) in the interaction with the primary driver, and the co-driver then utters an activation voice followed by the navigation voice "navigate to Pudong Airport"; the vehicle responds to that voice by having the second identifier 20 displayed in the display area A3 present a negative voice-broadcast action and the text box 28 present a refusal effect. For the co-driver and rear passengers, the later-triggered voice interaction has the higher priority and may interrupt the earlier one. For example, with continued reference to fig. 9a, suppose voice interactions with both the primary driver and the co-driver are currently maintained; if a rear passenger is then detected uttering an activation voice, the voice interaction corresponding to the co-driver is interrupted: the content such as the second identifier 20 presented for the interaction with the primary driver remains in the display area A2, the content presented in the display area A3 for the co-driver's interaction disappears, and the content such as the second identifier 20 to be presented for the rear passenger's interaction is correspondingly shown in the display area A4.
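The interruption rules can be summarized with an assumed priority table: the primary driver is uninterruptible, and among the other seats the newest activation claims one of the (at most two) concurrent sessions. All names and numbers below are illustrative.

```python
PRIORITY = {"primary_driver": 2, "secondary_driver": 1, "rear_passenger": 1}
MAX_SESSIONS = 2  # two-person talk mode

def on_activation(active: list, speaker: str) -> list:
    if speaker in active:
        return active
    if len(active) < MAX_SESSIONS:
        return active + [speaker]
    # Interrupt the oldest interruptible (non-primary) session, if any.
    interruptible = [s for s in active
                     if PRIORITY[s] < PRIORITY["primary_driver"]]
    if interruptible:
        active = [s for s in active if s != interruptible[0]]
        return active + [speaker]
    return active

# Matches the fig. 9a walk-through: the rear passenger displaces the co-driver.
print(on_activation(["primary_driver", "secondary_driver"], "rear_passenger"))
```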
Example 522: when the voice interaction results of two voice interactions point to the same voice function card, the function card is displayed in each corresponding display area, and the contents of the two displayed copies of the card are the same.
Referring to fig. 9c, assume the primary driver triggers a voice interaction with the vehicle first, and the vehicle presents a seat heating card 61 in the display area A2 of the vehicle display screen as the result of that interaction. The co-driver subsequently also triggers a voice interaction and utters the voice "seat heating second gear". In response, it is determined that the function card pointed to by this result is the same as that pointed to by the primary driver's result, namely the seat heating card. At this point the seat heating card is also presented in the display area A3, and the card originally presented in the display area A2 is updated, so that the two seat heating cards shown in the areas A2 and A3 have the same content, namely the content corresponding to the most recently executed seat heating adjustment; for example, the marking element 611 for the primary driver's seat and the marking element 612 for the co-driver's seat are both "2" in both cards.
Example 523: when the voice interaction results of two voice interactions point to the same application, the result of the later-triggered interaction is executed in the already opened application.
Referring to fig. 9d, assume the primary driver triggers a voice interaction with the vehicle first, and the vehicle presents an opened music application in the display area A2 as the result of that interaction. The co-driver subsequently also triggers a voice interaction and utters the voice "play second"; in response, the content played according to the "play second" instruction is displayed in the music application already presented in the display area A2.
Example 524: during the voice broadcast of one voice interaction, if another voice interaction also requires a voice broadcast but has the lower priority, a prompt tone is used instead of a voice broadcast for that other interaction.
Referring to fig. 9e, assume both the primary driver and the co-driver are engaged in voice interactions with the vehicle, the co-driver's interaction having been triggered later. If, during the voice broadcast (TTS) of the result of the interaction with the primary driver, the vehicle also needs to broadcast the result of the interaction with the co-driver, the broadcast for the primary driver is not interrupted; instead, because the co-driver's interaction has the lower priority, a prompt tone is used in place of the voice broadcast of the co-driver's result. Specifically, if the co-driver's interaction result is successful execution, a positive prompt tone (such as one with a happy emotion) is used; if the co-driver's voice failed to execute or could not be understood, a negative prompt tone (such as one with a sad emotion) is used.
Example 525: during the voice broadcast of one voice interaction, if another voice interaction also requires a voice broadcast and has the higher priority, the ongoing broadcast is interrupted and the voice broadcast is executed for the other interaction.
Referring to fig. 9f, assume the co-driver triggers the voice interaction with the vehicle first. If, during the voice broadcast of the result of the interaction with the co-driver, the vehicle responds to a voice interaction triggered by the primary driver and determines that a voice broadcast is also needed for the result of the interaction with the primary driver, then, because the primary driver's interaction has the higher priority, the broadcast of the co-driver's result is interrupted and the voice broadcast is executed for the primary driver's result.
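Examples 524 and 525 together form a small broadcast arbitration rule, sketched below with the same assumed priority table as earlier; `tts` and `tone` are hypothetical playback interfaces, not APIs from this disclosure.

```python
PRIORITY = {"primary_driver": 2, "secondary_driver": 1, "rear_passenger": 1}

def arbitrate_broadcast(current_speaker: str, incoming_speaker: str,
                        incoming_ok: bool, tts, tone) -> None:
    if PRIORITY[incoming_speaker] > PRIORITY[current_speaker]:
        # Example 525: the higher-priority result interrupts the ongoing TTS.
        tts.interrupt()
        tts.speak_result(incoming_speaker)
    else:
        # Example 524: the lower-priority result is replaced by a prompt
        # tone, positive on success and negative on failure/misunderstanding.
        tone.play("positive" if incoming_ok else "negative")
```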
In summary, the method provided in this embodiment may further include:
204. When an activation voice uttered by a second user for the voice interaction function is detected, acquiring the voice interaction mode supported by the voice interaction function;
205. If the voice interaction mode is a one-person interaction mode, displaying the second identifier adapted to the voice interaction stage corresponding to the second user, with the second identifier adapted to the voice interaction stage corresponding to the first user disappearing;
206. If the voice interaction mode is a multi-person interaction mode, keeping the second identifier adapted to the voice interaction stage of the first user displayed while also displaying the second identifier adapted to the voice interaction stage of the second user.
For the description of the one-person and multi-person interaction modes, refer to the one-person and two-person talk modes described in the examples above; they are not repeated here. Likewise, for the specific implementation of displaying the second identifier adapted to the voice interaction stage of the second user, refer to the content above on displaying the second identifier adapted to the voice interaction stage of the first user.
Further, if an occupant wishes to exit the vehicle voice interaction, either of the following modes may be adopted:
Mode one: uttering an exit voice, where the exit voice contains a voice interaction exit word such as "exit voice" or "you may go".
The vehicle responds to the exit voice uttered by the occupant and ends the voice interaction; correspondingly, the second identifier corresponding to the voice interaction function displayed on the vehicle display screen disappears.
Mode two: pressing an exit key provided on the vehicle.
For example, referring to fig. 10, when the voice interaction is in the voice broadcasting stage, the occupant can end the current round of voice interaction by pressing a mute button provided on the vehicle's steering wheel. Whether this exits the voice interaction entirely is determined by the actual settings: it may exit the voice interaction or only exit the full duplex stage, without limitation here.
In summary, the technical scheme provided by the embodiment has the following beneficial effects:
1. According to the collected first driving information of the vehicle, the corresponding second driving information and the first identifier of the vehicle's voice interaction function are displayed on the vehicle; the first identifier prompts that the voice interaction function is in a to-be-activated state, and its display state is determined based on the second driving information. In this way, the user can visually perceive that the voice interaction function of the vehicle is waiting to be activated. Moreover, because the display state of the first identifier is based on the second driving information, the diversity and interest of the identifier's display are increased, bringing the user a better visual experience.
2. The display mode of the first identifier, i.e. the to-be-activated image of the voice interaction function, can be determined according to all display information displayed on the vehicle, and the first identifier is then displayed on the vehicle in the determined display mode; for example, the first identifier may be displayed in association with different types of target information displayed on the vehicle, in different display manners. The first identifier serves as a visual prompt: it can indicate that the voice interaction function on the vehicle is waiting to be activated, and it can also locate the target information. Because the display mode is determined based on all content displayed on the vehicle, different display modes can be adopted for different display content, which improves the diversity and interest of the identifier's display and brings the user a better visual experience.
3. In addition to displaying the first identifier of the voice interaction function on the vehicle (to prompt that the voice interaction function is waiting to be activated), the scheme can also respond to an activation voice sent by the driver for the voice interaction function by displaying a second identifier indicating that the voice interaction function is activated, whose display state is adapted to the current voice interaction stage between the vehicle and the user. Thus, on the one hand, the user can visually perceive that the voice interaction function of the vehicle is waiting to be awakened; on the other hand, the user can clearly determine the current voice interaction stage, which improves the usability of the voice interaction function.
Fig. 11 shows an interface display device provided in an embodiment of the present application, which is provided in a vehicle having display and voice interaction functions. As shown in fig. 11, the interface display device includes: an acquisition module 31 and a display module 32; wherein:
the acquisition module 31 is used for acquiring first driving information of the vehicle;
the display module 32 is configured to display second driving information and a first identifier of the voice interaction function on the vehicle according to the collected first driving information;
the first identifier is used for prompting that the voice interaction function is in a state to be activated, and the display state of the first identifier is determined based on the second driving information.
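As a structural sketch only, the module split of fig. 11 could look as follows; all class and method names are assumptions, since the embodiment names only the two modules and their responsibilities:

class AcquisitionModule:           # module 31
    def acquire_first_driving_info(self, vehicle):
        # e.g. speed, gear, navigation data, camera frames
        return vehicle.get("sensors", {})

class DisplayModule:               # module 32
    def display(self, second_driving_info, first_identifier_state):
        print("render:", second_driving_info, first_identifier_state)

class InterfaceDisplayDevice:
    def __init__(self):
        self.acquisition_module = AcquisitionModule()
        self.display_module = DisplayModule()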
Further, when determining the display state of the first identifier based on the second driving information, the display module 32 is specifically configured to: determine target information in the second driving information; and display the first identifier in association with the target information.
Further, when the target information is driving state information of the vehicle, the display module 32 is specifically configured to: display the first identifier dynamically changing based on the driving state information of the vehicle.
Further, when displaying the first identifier dynamically changing based on the driving state information of the vehicle, the display module 32 is specifically configured to: determine a moving direction and a moving speed according to the driving state information of the vehicle; and display the first identifier dynamically changing along the moving direction based on the moving speed.
Further, the driving state information of the vehicle comprises an image simulating driving of the vehicle; accordingly, when displaying the first identifier dynamically changing along the moving direction based on the moving speed, the display module 32 is specifically configured to: determine a display position of the first identifier based on the image; and display, at that display position, the first identifier dynamically changing along the moving direction at the moving speed.
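The direction/speed coupling can be illustrated as a per-frame update; the names and the pixels-per-metre scale below are assumptions, as the embodiment does not fix units or coordinates:

import math

PX_PER_M = 4.0  # assumed screen scale

def step_identifier(pos, speed_mps, heading_rad, dt_s):
    # Advance the first identifier along the moving direction at the moving
    # speed, so it drifts with the simulated car in the image.
    dx = speed_mps * math.cos(heading_rad) * PX_PER_M * dt_s
    dy = speed_mps * math.sin(heading_rad) * PX_PER_M * dt_s
    return (pos[0] + dx, pos[1] + dy)

pos = (120.0, 300.0)  # e.g. anchored near the car model in the image
pos = step_identifier(pos, speed_mps=15.0, heading_rad=0.0, dt_s=0.016)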
Further, the image comprises a background reflecting the driving environment of the vehicle and a model simulating the vehicle; accordingly, when determining the display position of the first identifier based on the image, the display module 32 is specifically configured to:
if no target environment element exists in the background of the image, determine the display position of the first identifier based on the car model;
if a target environment element exists in the background of the image, determine the display position of the first identifier based on the target environment element;
wherein the target environment element is a lane, a road sign, a vacant parking space, a vacant charging spot or a navigation destination reflecting the driving environment of the vehicle.
Further, when determining the display position of the first identifier based on the target environment element, the display module 32 is specifically configured to:
determine the distance between the car model and the target environment element according to the navigation data; and determine the display position of the first identifier based on the distance.
Further, when determining the display position of the first identifier based on the distance, the display module 32 is specifically configured to:
when the distance is smaller than a preset distance, set the display position of the first identifier near the target environment element;
and when the distance is greater than or equal to the preset distance, set the display position of the first identifier away from the target environment element.
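A sketch of this distance rule follows; the preset distance value and the anchor offsets are assumptions, since the embodiment leaves both open:

PRESET_DISTANCE_M = 100.0  # assumed threshold

def first_identifier_anchor(car_model_xy, element_xy, distance_m):
    if distance_m < PRESET_DISTANCE_M:
        return (element_xy[0], element_xy[1] - 20)   # near the target element
    return (car_model_xy[0], car_model_xy[1] - 20)   # away from it, by the car model

print(first_identifier_anchor((100, 400), (300, 150), distance_m=60.0))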
Further, when the target information is parking state information of the vehicle, the display module 32, in displaying the first identifier in association with the target information, is specifically configured to: display, on the vehicle, the first identifier for guiding the vehicle to park.
Further, when the target information is window information, the display module 32, in displaying the first identifier in association with the target information, is specifically configured to: display the first identifier with an effect of highlighting the window information.
Further, when the target information is multimedia information, the display module 32, in displaying the first identifier in association with the target information, is specifically configured to: display the first identifier interacting with the multimedia information.
Further, the display module 32 is further configured to display the first identifier statically and semi-transparently on the vehicle when the target information is not included in the second driving information.
Further, the device provided in this embodiment further includes an adding module, which is used for:
if the vehicle is in the manual driving mode, adding a first element displayed in association with the first identifier;
if the vehicle is in the assisted driving mode, adding a second element displayed in association with the first identifier;
and if the vehicle is in the automatic driving mode, adding a third element displayed in association with the first identifier.
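The mode-to-element mapping is a straightforward lookup, sketched below; the element names are placeholders, since the embodiment does not describe what the first, second, or third elements look like:

MODE_ELEMENT = {
    "manual": "first_element",
    "assisted": "second_element",
    "automatic": "third_element",
}

def decorate_first_identifier(identifier, driving_mode):
    identifier["associated_element"] = MODE_ELEMENT[driving_mode]
    return identifier

print(decorate_first_identifier({"name": "first_identifier"}, "assisted"))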
Further, the display module 32 is further configured to: in response to an activation voice sent by a first user for the voice interaction function, determine the current voice interaction stage corresponding to the first user; and display a second identifier adapted to that voice interaction stage, wherein the second identifier is used for prompting that the voice interaction function is in an activated state.
Further, when displaying the second identifier adapted to the voice interaction stage corresponding to the first user, the display module 32 is specifically configured to: determine the orientation of the first user within the vehicle according to the activation voice; determine a display position of the second identifier based on the orientation; and display the second identifier at that display position.
Further, the display module 32 is further configured to display a text box on one side of the second identifier when the first user utters a voice command, and present the text corresponding to the voice command in the text box.
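Orientation-based placement plus the command text box can be sketched as below; mapping a sound direction-of-arrival to a screen zone, and the zone coordinates, are assumptions not fixed by the embodiment:

ZONES = {"driver": (80, 40), "front_passenger": (560, 40)}  # assumed anchors

def show_second_identifier(doa_deg):
    # Pick the screen zone from the direction the activation voice came from.
    seat = "driver" if doa_deg < 180 else "front_passenger"
    return {"identifier_at": ZONES[seat], "seat": seat}

def show_command_text(state, command_text):
    x, y = state["identifier_at"]
    state["text_box"] = {"at": (x + 48, y), "text": command_text}  # box beside the mark
    return state

state = show_second_identifier(doa_deg=120)
print(show_command_text(state, "turn on the air conditioner"))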
Further, the display module 32 is also used for:
when an activation voice sent by a second user for the voice interaction function is detected, acquiring the voice interaction mode supported by the voice interaction function;
if the voice interaction mode is a one-person interaction mode, displaying the second identifier of the voice interaction function adapted to the voice interaction stage corresponding to the second user, wherein the second identifier adapted to the voice interaction stage corresponding to the first user disappears;
and if the voice interaction mode is a multi-person interaction mode, keeping displayed the second identifier adapted to the voice interaction stage corresponding to the first user, and displaying the second identifier adapted to the voice interaction stage corresponding to the second user.
It should be noted here that the interface display device provided in the foregoing embodiment may implement the technical solution described in the interface display method embodiment shown in fig. 1; for the specific implementation principle of each module or unit, reference may be made to the corresponding content in that method embodiment, which is not repeated here.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 12, the electronic device includes: a memory 81 and a processor 82. The memory 81 is configured to store one or more computer instructions; the processor 82, coupled to the memory 81, is configured to execute the one or more computer instructions (e.g., computer instructions implementing data storage logic) to implement the steps in the interface display methods provided by the embodiments of the present application.
The memory 81 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
Further, as shown in fig. 12, the electronic device may further include: a communication component 83, a display 84, a power component 85, an audio component 86, and other components. Only some components are schematically shown in fig. 12, which does not mean that the electronic device comprises only the components shown there.
In particular, the electronic device may refer to a vehicle alone, or to a combination of a vehicle and a computing platform, which is not limited herein. Accordingly, the processor 82 described above may refer to a processor in the vehicle, or to a combination of a processor in the vehicle and a corresponding processor of the computing platform, which may include one or more processors. A processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction fetching and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU) (which may be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement a function through the logical relationship of a hardware circuit, where that logical relationship is fixed or reconfigurable, for example an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as a field-programmable gate array (FPGA). In a reconfigurable hardware circuit, the process by which the processor loads a configuration document to configure the hardware circuit may be understood as the processor loading instructions to implement the functions of some or all of the above units. Furthermore, a hardware circuit designed for artificial intelligence may be used, which may be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). In addition, the computing platform may also include a memory for storing instructions, and some or all of the processors may call these instructions from the memory and execute them to implement the corresponding functions.
It should be understood that the related operations of the interface display method provided in this embodiment may be performed by the same processor, or by one or more processors, which is not specifically limited in the embodiments of the present application.
Accordingly, the present application also provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a computer, implements the steps or functions of the interface display method provided in each of the above embodiments.
Fig. 13 schematically shows a block diagram of a computer program product provided by the present application. The computer program product comprises a computer program/instructions 91, and when the computer program/instructions 91 are executed by a processor, such as the processor 82 shown in fig. 12, the steps in the interface display method described in the above embodiments of the present application are implemented.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or certainly by means of hardware. Based on this understanding, the above technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not make the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments.

Claims (16)

1. An interface display method, suitable for a vehicle having display and voice interaction functions, comprising:
collecting first driving information of a vehicle;
displaying second driving information on the vehicle according to the collected first driving information;
determining target information in the second driving information;
when the target information is the driving state information of the vehicle, determining a moving direction and a moving speed according to the driving state information of the vehicle;
displaying a first identifier of the voice interaction function dynamically changing along the moving direction based on the moving speed;
the first identifier is used for prompting that the voice interaction function is in a state to be activated.
2. The method of claim 1, wherein the driving state information of the vehicle comprises an image simulating driving of the vehicle; and
displaying the first identifier dynamically changing along the moving direction based on the moving speed comprises:
determining a display position of the first identifier based on the image;
and displaying, at the display position, the first identifier dynamically changing along the moving direction at the moving speed.
3. The method of claim 2, wherein the image includes a background reflecting a vehicle driving environment and a model simulating the vehicle;
determining a display position of the first identifier based on the image comprises:
if no target environment element exists in the background of the image, determining the display position of the first identifier based on the car model;
if a target environment element exists in the background of the image, determining the display position of the first identifier based on the target environment element;
wherein the target environment element is a lane, a road sign, a vacant parking space, a vacant charging spot or a navigation destination reflecting the driving environment of the vehicle.
4. The method according to claim 3, wherein determining the display position of the first identifier based on the target environment element comprises:
determining the distance between the car model and the target environment element according to the navigation data;
and determining the display position of the first identifier based on the distance.
5. The method of claim 4, wherein determining the display position of the first identifier based on the distance comprises:
when the distance is smaller than a preset distance, setting the display position of the first identifier near the target environment element;
and when the distance is greater than or equal to the preset distance, setting the display position of the first identifier away from the target environment element.
6. The method according to any one of claims 1 to 5, wherein, when the target information is parking state information of the vehicle,
displaying, on the vehicle, the first identifier for guiding the vehicle to park.
7. The method according to any one of claims 1 to 5, wherein when the target information is window information,
displaying the first identifier with an effect of highlighting the window information.
8. The method according to any one of claims 1 to 5, wherein, when the target information is multimedia information,
displaying the first identifier interacting with the multimedia information.
9. The method according to any one of claims 1 to 5, further comprising:
and if the target information is not contained in the second driving information, displaying the first identifier statically and semi-transparently on the vehicle.
10. The method according to any one of claims 1 to 5, further comprising:
if the vehicle is in the manual driving mode, adding a first element displayed in association with the first identifier;
if the vehicle is in the assisted driving mode, adding a second element displayed in association with the first identifier;
and if the vehicle is in the automatic driving mode, adding a third element displayed in association with the first identifier.
11. The method according to any one of claims 1 to 5, further comprising:
in response to an activation voice sent by a first user for the voice interaction function, determining the current voice interaction stage corresponding to the first user;
displaying a second identifier adapted to the voice interaction stage corresponding to the first user;
wherein the second identifier is used for prompting that the voice interaction function is in an activated state.
12. The method of claim 11, wherein displaying the second identifier adapted to the voice interaction stage corresponding to the first user comprises:
determining the orientation of the first user within the vehicle according to the activation voice;
determining a display position of the second identifier based on the orientation;
and displaying the second identifier at the display position.
13. The method as recited in claim 11, further comprising:
when the first user sends out a voice command, a text box is displayed on one side of the second identifier, and a text corresponding to the voice command is presented in the text box.
14. The method as recited in claim 11, further comprising:
when an activation voice sent by a second user for the voice interaction function is detected, acquiring the voice interaction mode supported by the voice interaction function;
if the voice interaction mode is a one-person interaction mode, displaying the second identifier of the voice interaction function adapted to the voice interaction stage corresponding to the second user, wherein the second identifier adapted to the voice interaction stage corresponding to the first user disappears;
if the voice interaction mode is a multi-person interaction mode, the second identifier adapted to the voice interaction stage corresponding to the first user is kept displayed, and the second identifier adapted to the voice interaction stage corresponding to the second user is displayed.
15. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing one or more computer programs;
the processor, coupled to the memory, is configured to execute the one or more computer programs stored in the memory to implement the interface display method of any one of claims 1-14.
16. A vehicle comprising a vehicle body and the electronic device of claim 15, the electronic device being disposed on the vehicle body.
CN202211498038.7A 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product Active CN115534850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211498038.7A CN115534850B (en) 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211498038.7A CN115534850B (en) 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product

Publications (2)

Publication Number Publication Date
CN115534850A CN115534850A (en) 2022-12-30
CN115534850B (en) 2023-05-16

Family

ID=84722464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211498038.7A Active CN115534850B (en) 2022-11-28 2022-11-28 Interface display method, electronic device, vehicle and computer program product

Country Status (1)

Country Link
CN (1) CN115534850B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107921961A (en) * 2015-08-07 2018-04-17 奥迪股份公司 Method and motor vehicle for providing driver assistance in a highly time-critical manner when driving a motor vehicle
CN111824132A (en) * 2020-07-24 2020-10-27 广州小鹏车联网科技有限公司 Parking display method and vehicle

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110231863B (en) * 2018-03-06 2023-03-24 斑马智行网络(香港)有限公司 Voice interaction method and vehicle-mounted equipment
CN108735215A (en) * 2018-06-07 2018-11-02 爱驰汽车有限公司 Vehicle-mounted voice interaction system, method, device and storage medium
CN112309395A (en) * 2020-09-17 2021-02-02 广汽蔚来新能源汽车科技有限公司 Man-machine conversation method, device, robot, computer device and storage medium
CN115148200A (en) * 2021-03-30 2022-10-04 上海擎感智能科技有限公司 Voice interaction method and system for vehicle, electronic equipment and storage medium
CN113104030A (en) * 2021-05-19 2021-07-13 广州小鹏汽车科技有限公司 Interaction method and device based on automatic driving
CN113782020A (en) * 2021-09-14 2021-12-10 合众新能源汽车有限公司 In-vehicle voice interaction method and system
CN113851126A (en) * 2021-09-22 2021-12-28 思必驰科技股份有限公司 In-vehicle voice interaction method and system
CN115273525B (en) * 2022-05-17 2023-11-10 岚图汽车科技有限公司 Parking space mapping display method and system
CN115158340A (en) * 2022-08-09 2022-10-11 中国重汽集团济南动力有限公司 Driving assistance system, and control method, device, and medium therefor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107921961A (en) * 2015-08-07 2018-04-17 奥迪股份公司 Method and motor vehicle for providing driver assistance in a highly time-critical manner when driving a motor vehicle
CN111824132A (en) * 2020-07-24 2020-10-27 广州小鹏车联网科技有限公司 Parking display method and vehicle

Also Published As

Publication number Publication date
CN115534850A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
US11366513B2 (en) Systems and methods for user indication recognition
KR102521834B1 (en) Method of providing image to vehicle, and electronic device therefor
US10466800B2 (en) Vehicle information processing device
US10809720B2 (en) Bi-directional autonomous vehicle
EP3508381A1 (en) Moodroof for augmented media experience in a vehicle cabin
CN107351763A (en) Control device for vehicle
CN109219551A (en) Condition of road surface head up display
CN108099790A Driving assistance system based on an augmented reality head-up display and multi-screen voice interaction
JPH1137766A (en) Agent device
JP7068986B2 (en) Agent system, agent control method, and program
WO2019124158A1 (en) Information processing device, information processing method, program, display system, and moving body
KR20190107286A (en) Advertisement providing apparatus for vehicle and method for operating the same
JPH11250395A (en) Agent device
WO2022062491A1 (en) Vehicle-mounted smart hardware control method based on smart cockpit, and smart cockpit
WO2019097762A1 (en) Superimposed-image display device and computer program
JP7250547B2 (en) Agent system, information processing device, information processing method, and program
KR20190088133A (en) Input output device and vehicle comprising the same
JP2020075720A (en) Experience provision system, experience provision method, and experience provision program
WO2021079975A1 (en) Display system, display device, display method, and moving device
CN113119956A (en) Interaction method and device based on automatic driving
US20230356588A1 (en) Vehicle display device and vehicle display method
CN115534850B (en) Interface display method, electronic device, vehicle and computer program product
CN117141388A (en) Method, device and system for changing vehicle style
CN112297842A (en) Autonomous vehicle with multiple display modes
JPH11272639A (en) Agent device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant