CN110825469A - Voice assistant display method and device - Google Patents


Info

Publication number
CN110825469A
Authority
CN
China
Prior art keywords
voice assistant
display
feedback
indication information
screen state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910883296.9A
Other languages
Chinese (zh)
Inventor
宋平
杨之言
郑美洙
周煜啸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910883296.9A priority Critical patent/CN110825469A/en
Publication of CN110825469A publication Critical patent/CN110825469A/en
Priority to PCT/CN2020/114899 priority patent/WO2021052263A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G06F9/453 - Help systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Abstract

The application discloses a voice assistant display method and device, relates to the field of communication technologies, and defines the display forms of a voice assistant so that the voice assistant can switch to the corresponding form as the actual scene changes, realizing system-level integration of the voice assistant and the electronic device. The method comprises the following steps: the voice assistant is turned on and displays in a first display form, which is a default display form preset for the voice assistant. The display form of the voice assistant is then determined according to the indication information input into the voice assistant and the service indicated by the indication information.

Description

Voice assistant display method and device
Technical Field
The present application relates to the field of electronic devices, and in particular, to a method and an apparatus for displaying a voice assistant.
Background
With the increasing maturity of voice interaction technology, the application scenarios of voice assistants are increasingly broad. A voice assistant can interact intelligently with the user, supporting smart conversations and instant question answering. Moreover, the voice assistant can recognize the user's voice command, so that the mobile phone executes the event corresponding to the voice command. Taking a mobile phone as an example, if the voice assistant receives and recognizes the voice command "call Mr. Lee" input by the user, the mobile phone can automatically place a call to the contact Mr. Lee.
In the prior art, the freeform multi-window technique is generally used to control the form of the voice assistant, so that the voice assistant floats at an arbitrary position on the display interface, which facilitates user operation. However, the voice assistant's form is then independent of the actual scene on the electronic device, resulting in a poor user experience.
Disclosure of Invention
The application provides a voice assistant display method and device that define the display forms of a voice assistant so that the voice assistant can switch to the corresponding form as the actual scene changes, thereby realizing system-level integration of the voice assistant and the electronic device and improving the user experience.
To achieve this, the following technical solutions are adopted:
In a first aspect, the present application provides a voice assistant display method applied to an electronic device, where the display forms of the voice assistant include a half-screen state, a full-screen state, and a floating state. The half-screen state means that the ratio of the voice assistant's display interface to the entire display interface of the electronic device is less than 1; the full-screen state means that this ratio is 1; and the floating state means that the voice assistant is displayed floating over the current display interface of the electronic device. The method comprises the following steps: the voice assistant is turned on and displays in a first display form, which is a default display form preset for the voice assistant. The display form of the voice assistant is then determined according to the indication information input into the voice assistant and the service indicated by the indication information.
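The three display forms and their screen-ratio semantics can be illustrated with a small Python sketch. This is a hedged illustration only, not the claimed implementation; the enum and function names are hypothetical.

```python
from enum import Enum

class DisplayForm(Enum):
    """The three display forms named in the first aspect (labels are hypothetical)."""
    HALF_SCREEN = "half-screen"   # assistant's interface covers a ratio < 1 of the display
    FULL_SCREEN = "full-screen"   # assistant's interface covers the entire display (ratio == 1)
    FLOATING = "floating"         # assistant floats over the current display interface

def classify_by_ratio(ratio: float, floating: bool) -> DisplayForm:
    """Map the assistant's share of the display (and whether it floats) to a form."""
    if floating:
        return DisplayForm.FLOATING
    return DisplayForm.FULL_SCREEN if ratio >= 1.0 else DisplayForm.HALF_SCREEN
```

For example, an assistant panel occupying 40% of the screen would be classified as the half-screen state, and a panel covering the whole screen as the full-screen state.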
Through this process, after the voice assistant is turned on it displays in the preset default display form. The display form is then determined from the indication information input into the voice assistant and the service indicated by that information, so that the voice assistant can detect changes in the actual scene from the indication information and switch to the corresponding form. In this way the voice assistant and the system work as one, realizing system-level integration of the voice assistant and the mobile phone.
In a possible implementation, the first display form is the half-screen state, and turning on the voice assistant and displaying it in the first display form specifically includes: after the voice assistant is opened, the entire current task interface moves down, and the voice assistant and the current task interface are displayed in split-screen mode.
In a possible implementation, determining the display form of the voice assistant according to the indication information input into the voice assistant and the service indicated by the indication information specifically includes: if the indication information lacks keywords, the display form of the voice assistant is the full-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is text feedback or voice feedback, the display form is the half-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is card feedback, the display form is the full-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is split-screen feedback, the display form is the floating state. If the application related to the indicated service is displayed as a card in the voice assistant's display interface, the feedback form of the service is card feedback; if the indicated service involves switching the application interface, the feedback form of the service is split-screen feedback.
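The decision rules above can be sketched as a small Python function. This is an illustrative sketch under the stated rules, not the claimed implementation; the string labels and the function name are hypothetical.

```python
def decide_form(lacks_keywords: bool, feedback: str) -> str:
    """Choose the assistant's display form from whether the indication
    information lacks keywords and from the indicated service's feedback
    form ("text", "voice", "card", or "split-screen")."""
    if lacks_keywords:
        return "full-screen"      # assistant needs the full screen to prompt for the missing keyword
    if feedback in ("text", "voice"):
        return "half-screen"      # short feedback fits beside the current task
    if feedback == "card":
        return "full-screen"      # a result card occupies the whole display interface
    if feedback == "split-screen":
        return "floating"         # assistant floats while the application interface switches
    raise ValueError(f"unknown feedback form: {feedback!r}")
```

For instance, an instruction such as "call Mr. Lee" whose service replies with voice feedback would keep the assistant in the half-screen state, while an incomplete instruction missing a keyword would move it to the full-screen state.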
In a possible implementation, the half-screen state of the voice assistant further includes a small half-screen state and a large half-screen state, where the small half-screen state means that the ratio of the voice assistant's display interface to the entire display interface of the electronic device is less than 0.5, and the large half-screen state means that this ratio is greater than 0.5; the first display form is the small half-screen state.
In a possible implementation, determining the display form of the voice assistant according to the indication information input into the voice assistant and the service indicated by the indication information includes: if the indication information does not lack keywords and the feedback form of the indicated service is text feedback or voice feedback, the display form of the voice assistant is the small half-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is card feedback, the display form of the voice assistant is the large half-screen state.
In a possible implementation, after the voice assistant is turned on and displays in the first display form, the method further includes: the voice assistant enters a sleep state; the voice assistant is then woken up, and its display form is determined.
In a possible implementation, waking up the voice assistant and determining its display form includes: if the display form of the voice assistant was the floating state when it entered the sleep state, the display form after waking is the first display form. If the display form was the half-screen state when it entered the sleep state, the display form after waking is the half-screen state. If the display form was the full-screen state when it entered the sleep state, the display form after waking is the full-screen state.
In a possible implementation, the half-screen state of the voice assistant further includes a small half-screen state and a large half-screen state, where the small half-screen state means that the ratio of the voice assistant's display interface to the entire display interface of the electronic device is less than 0.5, the large half-screen state means that this ratio is greater than 0.5, and the first display form is the small half-screen state. Waking up the voice assistant and determining its display form includes: if the display form was the small half-screen state when the voice assistant entered the sleep state, the display form after waking is the small half-screen state; if the display form was the large half-screen state when it entered the sleep state, the display form after waking is the large half-screen state.
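The wake-from-sleep restore rules above can be illustrated with a short Python sketch. This is a hedged sketch only; the names are hypothetical, and the default first display form is assumed to be the small half-screen state, per the refinement above.

```python
# Assumed preset default ("first") display form, per the small/large half-screen refinement.
DEFAULT_FORM = "small-half-screen"

def form_after_wake(form_at_sleep: str) -> str:
    """Display form restored when the assistant is woken from the sleep state:
    a floating assistant reverts to the default (first) form, while half-screen
    forms (small or large) and the full-screen form are retained."""
    if form_at_sleep == "floating":
        return DEFAULT_FORM
    return form_at_sleep
```

The design choice expressed here is that only the floating state, being tied to a transient scene, is not worth restoring; every other form persists across sleep and wake.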
In a possible implementation, after waking up the voice assistant and determining its display form, the method further includes: determining a new display form of the voice assistant according to new indication information and the service indicated by the new indication information.
In a possible implementation, determining the new display form of the voice assistant according to the new indication information and the service indicated by the new indication information specifically includes: if the current display form is the full-screen state and the feedback form of the service indicated by the new indication information is text feedback, voice feedback, or card feedback, the new display form is the full-screen state. If the current display form is the full-screen state and the feedback form is split-screen feedback, the new display form is the floating state. If the current display form is the half-screen state and the new indication information lacks keywords, the new display form is the full-screen state. If the current display form is the half-screen state, the new indication information does not lack keywords, and the feedback form is text feedback or voice feedback, the new display form is the half-screen state. If the current display form is the half-screen state, the new indication information does not lack keywords, and the feedback form is card feedback, the new display form is the full-screen state. If the current display form is the half-screen state, the new indication information does not lack keywords, and the feedback form is split-screen feedback, the new display form is the floating state.
If the application related to the service indicated by the new indication information is displayed as a card in the voice assistant's display interface, the feedback form of that service is card feedback; if the service indicated by the new indication information involves switching the application interface, its feedback form is split-screen feedback.
In a possible implementation, the half-screen state of the voice assistant further includes a small half-screen state and a large half-screen state, where the small half-screen state means that the ratio of the voice assistant's display interface to the entire display interface of the electronic device is less than 0.5, the large half-screen state means that this ratio is greater than 0.5, and the first display form is the small half-screen state. Determining the new display form of the voice assistant according to the new indication information and the service indicated by it includes: if the current display form is the small half-screen state, the new indication information does not lack keywords, and the feedback form of the indicated service is text feedback or voice feedback, the display form remains the small half-screen state; if the current display form is the small half-screen state, the new indication information does not lack keywords, and the feedback form of the indicated service is card feedback, the display form becomes the large half-screen state.
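Taken together, the transition rules for a new piece of indication information can be sketched as one Python function. This is an illustrative sketch of the stated rules, not the claimed implementation; the string labels are hypothetical, and cases the rules do not cover (e.g. a floating current form) are handled here like the half-screen branch purely as an assumption.

```python
def next_form(current: str, lacks_keywords: bool, feedback: str) -> str:
    """New display form for a new piece of indication information.
    'current' is the assistant's present form; 'feedback' is the feedback form
    ("text", "voice", "card", or "split-screen") of the newly indicated service."""
    if current == "full-screen":
        # Full screen is retained unless the service switches the application interface.
        return "floating" if feedback == "split-screen" else "full-screen"
    # Current form is a half-screen variant (generic, small, or large).
    if lacks_keywords:
        return "full-screen"      # prompt for the missing keyword full-screen
    if feedback in ("text", "voice"):
        return current            # stay in the same half-screen state
    if feedback == "card":
        # A small half-screen assistant grows to the large half-screen state for a
        # card; otherwise a card takes the full screen.
        return "large-half-screen" if current == "small-half-screen" else "full-screen"
    if feedback == "split-screen":
        return "floating"
    raise ValueError(f"unknown feedback form: {feedback!r}")
```

For example, a small half-screen assistant that receives a complete instruction whose service returns a card would grow to the large half-screen state, while a full-screen assistant asked to switch the application interface would shrink to the floating state.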
In a second aspect, an electronic device is provided, comprising: a processor, a memory, and a touchscreen, the memory and the touchscreen being coupled to the processor, the memory storing computer program code comprising computer instructions that, when read by the processor from the memory, cause the electronic device to perform the following operations: the voice assistant is turned on and displays in a first display form, which is a default display form preset for the voice assistant. The display form of the voice assistant is then determined according to the indication information input into the voice assistant and the service indicated by the indication information. The display forms of the voice assistant include a half-screen state, a full-screen state, and a floating state. The half-screen state means that the ratio of the voice assistant's display interface to the entire display interface of the electronic device is less than 1; the full-screen state means that this ratio is 1; and the floating state means that the voice assistant is displayed floating over the current display interface of the electronic device.
In one possible implementation, the first display mode is a half-screen mode, and when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: after the voice assistant is opened, the whole current task interface moves downwards, and the voice assistant and the current task interface are displayed in a split screen mode.
In one possible implementation, when the processor reads the computer instructions from the memory, the electronic device is further caused to perform the following operations: if the indication information lacks keywords, the display form of the voice assistant is the full-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is text feedback or voice feedback, the display form is the half-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is card feedback, the display form is the full-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is split-screen feedback, the display form is the floating state. If the application related to the indicated service is displayed as a card in the voice assistant's display interface, the feedback form of the service is card feedback; if the indicated service involves switching the application interface, the feedback form of the service is split-screen feedback.
In a possible implementation manner, the half-screen state of the voice assistant further includes a small half-screen state and a large half-screen state, wherein the small half-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form.
In one possible implementation, when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: if the indication information does not lack keywords and the feedback form of the indicated service is text feedback or voice feedback, the display form of the voice assistant is the small half-screen state. If the indication information does not lack keywords and the feedback form of the indicated service is card feedback, the display form of the voice assistant is the large half-screen state.
In one possible implementation, when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: the voice assistant enters a sleep state, wakes up the voice assistant and determines the display form of the voice assistant.
In one possible implementation, when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: if the display form of the voice assistant was the floating state when it entered the sleep state, the display form after waking is the first display form. If the display form was the half-screen state when it entered the sleep state, the display form after waking is the half-screen state. If the display form was the full-screen state when it entered the sleep state, the display form after waking is the full-screen state.
In a possible implementation manner, the half-screen state of the voice assistant further includes a small half-screen state and a large half-screen state, wherein the small half-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form; when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: and if the display form of the voice assistant is in the small half screen state when the voice assistant enters the sleep state, the display form of the voice assistant is in the small half screen state after the voice assistant is awakened. If the display form of the voice assistant is in a large half screen state when the voice assistant enters the sleep state, the display form of the voice assistant is in a large half screen state after the voice assistant is awakened.
In one possible implementation, when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: and determining a new display form of the voice assistant according to the new indication information and the service indicated by the new indication information.
In one possible implementation, when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: if the current display form of the voice assistant is the full-screen state and the feedback form of the service indicated by the new indication information is text feedback, voice feedback, or card feedback, the new display form is the full-screen state. If the current display form is the full-screen state and the feedback form is split-screen feedback, the new display form is the floating state. If the current display form is the half-screen state and the new indication information lacks keywords, the new display form is the full-screen state. If the current display form is the half-screen state, the new indication information does not lack keywords, and the feedback form is text feedback or voice feedback, the new display form is the half-screen state. If the current display form is the half-screen state, the new indication information does not lack keywords, and the feedback form is card feedback, the new display form is the full-screen state. If the current display form is the half-screen state, the new indication information does not lack keywords, and the feedback form is split-screen feedback, the new display form is the floating state.
If the application related to the service indicated by the new indication information is displayed as a card in the voice assistant's display interface, the feedback form of that service is card feedback; if the service indicated by the new indication information involves switching the application interface, its feedback form is split-screen feedback.
In a possible implementation manner, the half-screen state of the voice assistant further includes a small half-screen state and a large half-screen state, wherein the small half-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form; when the processor reads the computer instructions from the memory, the electronic device further performs the following operations: if the display form of the voice assistant is in a small half screen state, the new indication information does not lack keywords, and the feedback form of the service indicated by the new indication information is text feedback or voice feedback, the display form of the voice assistant is in a small half screen state; if the display form of the voice assistant is in a small half screen state, the new indication information does not lack keywords, and the feedback form of the service indicated by the new indication information is card feedback, the display form of the voice assistant is in a large half screen state.
A third aspect provides a graphical user interface on an electronic device having a display screen, a camera, a memory, and one or more processors configured to execute one or more computer programs stored in the memory. The graphical user interface comprises the graphical user interface displayed when the electronic device performs the method described in the above aspects and any of their possible implementations.
In a fourth aspect, an apparatus is provided that is included in an electronic device and has the function of implementing the behavior of the electronic device in any of the methods of the foregoing aspects and possible implementations. This function may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above function, for example a receiving module or unit, a display module or unit, and a transmitting module or unit.
A fifth aspect provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the voice assistant display method as described in the above aspect and any one of its possible implementations.
A sixth aspect provides a computer program product which, when run on a computer, causes the computer to perform the voice assistant display method described in the above aspects and any of their possible implementations.
A seventh aspect provides a chip system including a processor; when the processor executes instructions, it performs the voice assistant display method described in the above aspects and any of their possible implementations.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating various aspects of a voice assistant according to an embodiment of the present application;
FIG. 4 is a flowchart of a voice assistant display method according to an embodiment of the present application;
FIG. 5 is a first schematic diagram of a half-screen display form of a voice assistant according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of a half-screen display form of a voice assistant according to an embodiment of the present application;
FIG. 7 is a third schematic diagram of a half-screen display form of a voice assistant according to an embodiment of the present application;
FIG. 8 is a fourth schematic diagram of a half-screen display form of a voice assistant according to an embodiment of the present application;
FIG. 9 is a schematic diagram of form-switching rules of a voice assistant according to an embodiment of the present application;
FIG. 10 is a first schematic diagram of form switching of a voice assistant according to an embodiment of the present application;
FIG. 11 is a second schematic diagram of form switching of a voice assistant according to an embodiment of the present application;
FIG. 12 is a third schematic diagram of form switching of a voice assistant according to an embodiment of the present application;
FIG. 13 is a fourth schematic diagram of form switching of a voice assistant according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiments of the present application provide a voice assistant display method and device, which can be applied to the display of a voice assistant on an electronic device. The voice assistant may be an application (APP) installed in the electronic device. The voice assistant may be an embedded application in the electronic device (i.e., a system application of the electronic device) or a downloadable application. An embedded application is an application provided as part of the implementation of an electronic device such as a mobile phone. For example, the embedded application may be a "Settings" application, a "Messages" application, a "Camera" application, and the like. A downloadable application is an application that may provide its own internet protocol multimedia subsystem (IMS) connection, and may be an application pre-installed in the electronic device or a third-party application that may be downloaded by a user and installed in the electronic device. For example, the downloadable application may be a "WeChat" application, an "Alipay" application, a "Mail" application, and the like.
The electronic device in the embodiments of the present application may be a portable computer (e.g., a mobile phone), a notebook computer, a personal computer (PC), a tablet computer, a wearable electronic device (e.g., a smart watch), a smart home device, an artificial intelligence (AI) terminal (e.g., an intelligent robot), an augmented reality (AR)/virtual reality (VR) device, an in-vehicle computer, and the like. The following embodiments do not particularly limit the specific form of the electronic device.
Referring to fig. 1, a schematic structural diagram of an electronic device 100 provided in the present embodiment is shown. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The DSP can monitor voice data in real time. When the similarity between the voice data monitored by the DSP and a wake-up word registered in the electronic device meets a preset condition, the DSP may deliver the voice data to the AP, which performs text verification and voiceprint verification on it. When the AP determines that the voice data matches the wake-up word registered by the user, the electronic device may start the voice assistant. In the embodiment of the present application, after being woken up, the voice assistant may be displayed on the interface of the electronic device in a small half-screen (H1) configuration. The small half-screen (H1) configuration of the voice assistant is shown in (b) of FIG. 3 and is described in detail below.
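The two-stage wake-up flow above (a loose pre-screen on the low-power DSP, followed by stricter text and voiceprint checks on the AP) can be sketched as follows. This is a minimal illustrative sketch: the threshold values, score inputs, and function names are assumptions, not the implementation described in this application.

```python
# Illustrative sketch of a two-stage wake-word check. All thresholds and
# score inputs are hypothetical; real systems derive them from acoustic models.

DSP_SIMILARITY_THRESHOLD = 0.6   # loose gate on the always-on, low-power DSP
AP_TEXT_THRESHOLD = 0.9          # stricter text check on the AP
AP_VOICEPRINT_THRESHOLD = 0.8    # speaker (voiceprint) check on the AP

def dsp_prescreen(similarity: float) -> bool:
    """Stage 1: the DSP monitors audio continuously and only hands data
    to the AP when it loosely resembles the registered wake-up word."""
    return similarity >= DSP_SIMILARITY_THRESHOLD

def ap_verify(text_score: float, voiceprint_score: float) -> bool:
    """Stage 2: the AP runs the more expensive checks; both must pass
    before the voice assistant is started."""
    return (text_score >= AP_TEXT_THRESHOLD
            and voiceprint_score >= AP_VOICEPRINT_THRESHOLD)

def should_wake(similarity: float, text_score: float,
                voiceprint_score: float) -> bool:
    # The AP stage runs only if the cheap DSP stage passes first.
    return dsp_prescreen(similarity) and ap_verify(text_score, voiceprint_score)
```

Keeping the first gate cheap and loose lets the AP stay asleep for most ambient audio, which is the power-saving rationale for splitting the check between the DSP and the AP.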
The controller may be a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110, and may be called directly from the memory if the processor 110 needs to use the instructions or data again. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a display screen serial interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect an earphone and play audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in the present embodiment is only an exemplary illustration, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate them.
In some embodiments, antenna 1 of electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call, sending a voice message, or triggering the electronic device 100 to perform some function through the voice assistant, the user may speak with his/her mouth near the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many kinds of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of a conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that are applied to the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
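The pressure-dependent dispatch in the example above amounts to a simple threshold rule. The following sketch is purely illustrative: the threshold value and instruction names are hypothetical, not taken from this application.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized intensity threshold

def dispatch_message_icon_touch(intensity: float) -> str:
    """Maps the touch intensity on the short message icon to an operation
    instruction: a light press views the message, a firm press creates one."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_short_message"
```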
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocities of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate for according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flip opening may then be set according to the detected open or closed state of the holster or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor may also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
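As a rough sketch of how stationary gravity readings can drive landscape/portrait switching: this is a hypothetical rule for illustration only; production systems low-pass filter the readings and add hysteresis to avoid flickering between orientations.

```python
def screen_orientation(ax: float, ay: float) -> str:
    """Infers orientation from gravity components (m/s^2) along the device's
    x and y axes while stationary: gravity mostly along y means the device is
    upright (portrait); mostly along x means it lies on its side (landscape)."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```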
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
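The reflected-light decision described above can be sketched as follows. The threshold value and the in-call screen-off rule are illustrative assumptions, not the specific logic of this application.

```python
REFLECTION_THRESHOLD = 0.5  # hypothetical "sufficient reflected light" level

def object_nearby(reflected_light: float) -> bool:
    """The LED emits infrared light; the photodiode reading decides whether
    an object is close (enough reflection) or not."""
    return reflected_light >= REFLECTION_THRESHOLD

def screen_should_turn_off(in_call: bool, reflected_light: float) -> bool:
    """During a call, turn the screen off when the device is held to the ear."""
    return in_call and object_nearby(reflected_light)
```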
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent a low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
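The three thresholds above amount to a small policy table. A minimal sketch follows; the threshold values and action names are hypothetical placeholders, not figures from this application.

```python
HIGH_TEMP_C = 45.0       # above this: throttle the nearby processor
LOW_TEMP_C = 0.0         # below this: heat the battery
VERY_LOW_TEMP_C = -10.0  # below this further threshold: also boost battery voltage
# All three values are illustrative assumptions.

def thermal_actions(temp_c: float) -> list[str]:
    """Returns the protective actions for a reported temperature; several
    low-temperature actions can apply at once."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_nearby_processor")
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")
    if temp_c < VERY_LOW_TEMP_C:
        actions.append("boost_battery_output_voltage")
    return actions or ["normal"]
```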
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into or out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In this embodiment, the software structure of the electronic device 100 is exemplarily illustrated by taking an Android system with a layered architecture as an example.
Please refer to fig. 2, which is a block diagram of the software structure of the electronic device 100 according to the present embodiment. The layered architecture divides the software into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application packages may include applications such as voice assistant, mail, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call status (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, etc. The notification manager may also present notifications in the form of a chart or scroll-bar text in the top status bar of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES, the Open Graphics Library for Embedded Systems), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The technical solutions in the following embodiments can be implemented in the electronic device 100 having the above hardware architecture and software architecture. The voice assistant display method provided by the present embodiment is described in detail below with reference to the accompanying drawings and application scenarios. It should be noted that the following embodiments all use a voice assistant in a mobile phone as an example.
In order to realize system-level fusion of the form of the voice assistant with the actual scene on the mobile phone, the embodiment of the present application provides three forms for the voice assistant, namely a half-screen state (H), a full-screen state (L), and a floating state (F).
According to the proportion of the voice assistant interface in the entire display interface of the mobile phone, the display form can be divided into the half-screen state (H) and the full-screen state (L).
When the voice assistant is in the half-screen state (H), the ratio of the display interface of the voice assistant to the entire display interface of the mobile phone is greater than 0 and less than 1. For example, the ratio may be 1:2, as shown in (a) of fig. 3. As another example, the ratio may be 3:8; since this ratio is less than 1:2, the voice assistant may be said to be in the small half-screen state (H1), as shown in (b) of fig. 3. As yet another example, the ratio may be 5:8; since this ratio is greater than 1:2, the voice assistant may be said to be in the large half-screen state (H2), as shown in (c) of fig. 3. The voice assistant in the half-screen state (H) is suitable for scenes of single-round voice interaction. A scene of single-round voice interaction is an application scene in which the voice assistant can complete the corresponding operation according to indication information input by the user a single time. For example, in a scene of single-round voice interaction, after the user inputs indication information, the voice assistant detects that the indication information does not lack a keyword, and can complete the operation indicated by the indication information without further interacting with the user to acquire a keyword. For instance, if the indication information is "open bluetooth connection", the voice assistant may directly perform the operation of opening the Bluetooth connection.
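The ratio-based classification above can be sketched as a simple threshold check. This is an illustrative model only, not part of the patent text: the function name is invented, and the treatment of a ratio of exactly 1:2 is an assumption, since the description only gives examples on either side of 1:2.

```python
from fractions import Fraction

def half_screen_substate(ratio):
    """Classify a half-screen (H) display ratio into the sub-states
    described above: 'H1' (small half screen) below 1:2, 'H2' (large
    half screen) above 1:2, and plain 'H' at exactly 1:2 (assumed)."""
    if not 0 < ratio < 1:
        raise ValueError("a half-screen ratio must be greater than 0 and less than 1")
    if ratio < Fraction(1, 2):
        return "H1"
    if ratio > Fraction(1, 2):
        return "H2"
    return "H"
```

For example, the 3:8 ratio from the text maps to H1 and the 5:8 ratio to H2.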
When the voice assistant is in the full-screen state (L), the voice assistant is displayed full screen on the entire display interface of the mobile phone, i.e., the ratio of the display interface of the voice assistant to the entire display interface of the mobile phone is 1:1, as shown in (d) of fig. 3. The voice assistant in the full-screen state (L) is suitable for scenes of multi-round voice interaction. A scene of multi-round voice interaction is one in which the voice assistant cannot complete the corresponding operation according to indication information input by the user once, and the user needs to interact with the voice assistant multiple times. That is, after the user inputs the indication information, the voice assistant cannot clearly recognize the user's intention: the voice assistant detects that the indication information lacks a keyword and needs to continue interacting with the user to obtain the keyword, so the voice assistant automatically enters a multi-round voice interaction process and displays in the full-screen state (L). For example, if the indication information is "buy ticket", the indication information does not indicate when and to where the user needs to buy a ticket. The voice assistant therefore continues to ask the user questions such as "When do you need to buy the ticket?" and "Where is your destination?".
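The distinction between single-round and multi-round interaction can be modeled as a keyword-completeness check. The sketch below is an assumption-laden illustration: the service identifiers and the required-keyword table are invented for the example; the patent states only that indication information "lacking a keyword" triggers multi-round interaction.

```python
# Hypothetical table of the keywords each service requires; the patent
# does not define such a schema, only the notion of a missing keyword.
REQUIRED_KEYWORDS = {
    "open_bluetooth": [],                   # "open bluetooth connection"
    "buy_ticket": ["date", "destination"],  # "buy ticket"
}

def interaction_type(service, provided):
    """Return 'single' for single-round voice interaction (no keyword
    missing) and 'multi' when the assistant must keep asking the user."""
    missing = [k for k in REQUIRED_KEYWORDS[service] if k not in provided]
    return "multi" if missing else "single"
```

Under this model, "open bluetooth connection" is handled in one round, while "buy ticket" without a date or destination forces multi-round interaction (and hence the full-screen state).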
When the voice assistant is in the floating state (F), the voice assistant is displayed floating on the display interface of the mobile phone. Illustratively, as shown in (c) of fig. 3, the voice assistant floats on the display interface of the mobile phone in the form of a floating ball. The voice assistant in the floating state (F) occupies little space on the entire display interface of the mobile phone. The voice assistant in the floating state (F) is mainly suitable for scenes in which the voice assistant completes an operation in cooperation with another application, and for immersion scenes in which the user's attention should not be interrupted. A scene in which the voice assistant completes an operation in cooperation with another application is one in which the voice assistant needs a third-party application interface in order to complete the operation corresponding to the indication information. For example, when the voice assistant is displayed in the half-screen state (H) and the "photo" application is displayed in the lower display interface of the mobile phone, if the indication information input by the user is "send the weekend photos", the voice assistant needs to close the interface of the "photo" application and open the interface of the "WeChat" application; at this time, the voice assistant displays in the floating state (F). Immersion scenes that should not interrupt the user's attention illustratively include reading, audio and video playing, and the like.
In practical applications, the half-screen state (H) of the voice assistant may be used as the default form; however, the default form is not limited to the half-screen state (H), and the user may set the default form of the voice assistant to the full-screen state (L) or the floating state (F) according to actual requirements.
The present application also provides a voice assistant display method, which can switch the voice assistant between different forms according to certain rules. By switching among the different forms of the voice assistant, system-level fusion of the form of the voice assistant with the actual scene on the mobile phone can be realized.
Taking the case in which the forms of the voice assistant include the half-screen state (H), the full-screen state (L), and the floating state (F), with the half-screen state (H) being the default form of the voice assistant, as an example, as shown in fig. 4, the method includes steps S401 to S405:
S401, the voice assistant is turned on, and the voice assistant displays in its default form (the half-screen state).
Before the voice assistant is opened, the display interface of the mobile phone may be a task-free interface, a single-task interface, or a multi-task interface. After the voice assistant is turned on, the voice assistant is displayed in the default form, namely the half-screen state (H), and the voice assistant is in the sound-receiving state. The voice assistant may be opened using existing techniques, for example, by long pressing the power key of the mobile phone, or by a voice wake-up word, etc.
The following describes, with reference to fig. 5 to 8, the display interface of the mobile phone after the voice assistant is opened, according to the differences among the display interfaces of the mobile phone before the voice assistant is opened:
1. The display interface of the mobile phone before the voice assistant is opened is a task-free interface.
Before the voice assistant is turned on, the display interface of the mobile phone is a task-free interface (in other words, a homepage interface), and icons of a plurality of applications are displayed on the display interface, as shown in (a) of fig. 5. After the voice assistant is turned on, the current task-free interface moves down as a whole to expose the upper half of the display interface of the mobile phone; the voice assistant is displayed by default in the half-screen state (H) in the upper half of the display interface of the mobile phone, and the voice assistant is in the sound-receiving state. The application icons in the upper half of the original task-free interface are displayed in the lower half of the display interface of the mobile phone, as shown in (b) of fig. 5. Referring to (b) of fig. 5, the voice assistant is in the sound-receiving state, and the prompt information (e.g., "Hi, I'm listening …") and the prompt graphic (e.g., a sound wave graphic) of the voice assistant may be used to prompt the user to input indication information, as shown at 501. Optionally, the voice assistant also displays the voice skill recommendation items "V", "keyword 1", and "keyword 2" on the display interface, as shown at 502. Here, "V" is used to switch the input form of the voice assistant, and "keyword 1" is different from "keyword 2"; illustratively, "keyword 1" and "keyword 2" are "open bluetooth connection" and "change ring tone". A voice skill recommendation item is indication information recommended by the voice assistant for the user, is used to invoke a service in the voice assistant, and is determined by the voice assistant according to the current time, the position of the mobile phone, the currently running application, the user's usage habits, and the like.
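The text above says the recommendation items are determined from the current time, the phone's position, the running application, and the user's usage habits. A toy ranking over a subset of those signals might look as follows; every candidate phrase, signal weighting, and parameter name here is invented for illustration, not taken from the patent:

```python
def recommend_skills(now_hour, running_app, history):
    """Pick up to two recommendation keywords from contextual signals.
    `history` maps past skill phrases to usage counts (a stand-in for
    "usage habits"); contextual candidates are ranked first."""
    candidates = []
    if running_app == "photos":
        candidates += ["photos of last weekend", "share photos"]
    if 6 <= now_hour < 12:
        candidates += ["query today's weather"]
    # Most-used skills from history come last, so contextual items win.
    candidates += sorted(history, key=history.get, reverse=True)
    # Deduplicate while preserving order, then keep two items.
    seen, out = set(), []
    for c in candidates:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out[:2]
```

With the "photo" application in the foreground, the two photo-related phrases displace the habit-based suggestions, matching the behavior described for (c) of fig. 5.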
The user can directly tap a voice skill recommendation item to input indication information, or can input indication information by voice; alternatively, the user can tap "V" to switch the input form of the voice assistant, open the camera, and input indication information by video. When the voice assistant is displayed in the half-screen state (H), the user can operate the lower half of the display interface of the mobile phone, for example, tap an application icon in the lower half of the display interface to open the corresponding application software (e.g., the "photo" application). Taking the split-screen display of the "photo" application and the voice assistant as an example, the display interface of the mobile phone is shown in (c) of fig. 5. Referring to (c) of fig. 5, the prompt graphic shown at 501 changes to a floating ball, indicating that the voice assistant has stopped receiving sound and entered the sleep state, and the updated voice skill recommendation items "V", "keyword 3", and "keyword 4" shown at 502 are displayed on the display interface of the voice assistant; "keyword 3" and "keyword 4" may be the same as or different from "keyword 1" and "keyword 2". Illustratively, "keyword 3" and "keyword 4" are recommendations related to the current application interface, such as "photos of last weekend" and "share photos".
It should be noted that, if the display interface of the mobile phone is large, the user may manually start the one-handed operation mode of the mobile phone, or, after the voice assistant is turned on, the mobile phone may automatically enter the one-handed operation mode, so that the user can conveniently operate the lower half of the display interface of the mobile phone.
2. The display interface of the mobile phone before the voice assistant is opened is a single-task interface.
Before the voice assistant is opened, the display interface of the mobile phone is a single-task interface; illustratively, the task in the interface is the "photo" application, as shown in (a) of fig. 6. After the voice assistant is turned on, the current single-task interface moves down as a whole to expose the upper half of the display interface of the mobile phone; the voice assistant is displayed by default in the half-screen state (H) in the upper half of the display interface of the mobile phone, and the voice assistant is in the sound-receiving state. The upper half of the "photo" application as displayed in full screen is displayed in the lower half of the display interface of the mobile phone, as shown in (b) of fig. 6. Referring to (b) of fig. 6, the voice assistant is in the sound-receiving state, and the prompt information (e.g., "Hi, I'm listening …") and the prompt graphic (e.g., a sound wave graphic) of the voice assistant may be used to prompt the user to input indication information, as shown at 601. Optionally, as shown at 602, the voice skill recommendation items "V", "keyword 1", and "keyword 2" are also displayed on the voice assistant's display interface, where "V" is used to switch the input form of the voice assistant and "keyword 1" is different from "keyword 2"; illustratively, "keyword 1" and "keyword 2" are "query today's weather in Shanghai" and "share photos", respectively. For a detailed description of the voice skill recommendation items and the manner in which the user inputs indication information, reference may be made to the above description, which is not repeated here.
When the voice assistant is displayed in the half-screen state (H), the user can operate the lower half of the display interface of the mobile phone. Illustratively, the voice assistant and the "photo" application are displayed in split screen, and the display interface of the mobile phone is shown in (a) of fig. 7. Referring to (a) of fig. 7, if the user taps a certain picture in the "photo" application, for example the picture shown at 701, the picture is displayed full screen in the lower half of the display interface of the mobile phone, and the "title bar" shown in the figure is the name or number of the picture shown at 701, as shown in (b) of fig. 7. Referring to (b) of fig. 7, a toolbar with the operations available for the tapped picture is located in the invisible area outside the screen. If the user performs a drag operation on the lower half of the display interface of the mobile phone, i.e., the "photo" application interface, as shown at 702, the display interface of the "photo" application moves up as a whole; the content originally located in the invisible area outside the screen is displayed in the lower half of the display interface of the mobile phone, and the content originally located in the visible area within the screen moves up and out of view, as shown in (c) of fig. 7. In (a)-(c) of fig. 7, the dotted line K is the boundary between the visible area and the invisible area.
3. The display interface of the mobile phone before the voice assistant is opened is a multi-task interface.
Before the voice assistant is turned on, the display interface of the mobile phone is a multi-task interface. Illustratively, two tasks are displayed in the multi-task interface in split screen: the "WeChat" application in the upper half of the display interface of the mobile phone, and the "photo" application in the lower half, as shown in (a) of fig. 8. After the voice assistant is turned on, the voice assistant is displayed by default in the half-screen state (H) in the upper half of the display interface of the mobile phone, and the "WeChat" application interface is closed; the voice assistant is in the sound-receiving state, and the "photo" application remains unchanged and is displayed in split screen with the voice assistant, that is, the "photo" application is still displayed in the lower half of the display interface of the mobile phone, as shown in (b) of fig. 8. Referring to (b) of fig. 8, the voice assistant is in the sound-receiving state, and the prompt information (e.g., "Hi, I'm listening …") and the prompt graphic (e.g., a sound wave graphic) of the voice assistant may be used to prompt the user to input indication information, as shown at 801. Optionally, the voice assistant also displays the voice skill recommendation items "V", "keyword 1", and "keyword 2" on the display interface, as shown at 802. "V" is used to switch the input form of the voice assistant, and "keyword 1" is different from "keyword 2"; for example, "keyword 1" and "keyword 2" are "select the weekend photos" and "share photos", respectively. For a detailed description of the voice skill recommendation items and the manner in which the user inputs indication information, reference may be made to the above description, which is not repeated here.
It should be noted that the half-screen state (H) of the voice assistant can be further divided into the small half-screen state (H1) and the large half-screen state (H2). In this case, the default configuration of the voice assistant is generally set to the small half-screen state (H1). Of course, the default configuration of the voice assistant may also be set to the large half-screen state (H2) as desired. Compared with the small half-screen state (H1), the large half-screen state (H2) of the voice assistant has a larger display interface and can display more content.
S402, the voice assistant determines the display form according to the received indication information, and enters a sleep state after executing the operation indicated by the indication information.
After step S401, the user inputs instruction information to the voice assistant by voice or video. After receiving the instruction information input by the user, the voice assistant determines the service related to the instruction information, converts the instruction information into text information and displays the text information. Then, the voice assistant determines whether to switch the display form of the voice assistant according to the current application scene and the feedback form of the service related to the indication information. And after finishing the operation corresponding to the indication information, the voice assistant stops receiving the sound and enters a dormant state.
The services include services provided by applications on the voice assistant's own platform (such as opening Bluetooth or querying the weather) and services provided by the voice assistant calling other applications (such as sending a WeChat message or opening the "Taobao" application). The feedback forms of a service in the voice assistant include text feedback, voice feedback, card feedback (the setting items or application items related to the service are fed back in a card), split-screen feedback (the application interface changes), and the like. The feedback form of a service is determined by the setting items of the application related to the service. For example, if the "weather" application can only be displayed in the form of a card in the display interface of the voice assistant, the voice assistant uses card feedback when calling the service provided by the "weather" application.
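Since the feedback form is determined by the setting items of the application related to the service, it can be modeled as a per-service lookup. The registry below is purely illustrative: only the weather-is-card-only example comes from the text, and all identifiers and the default fallback are assumptions.

```python
# Illustrative registry of the feedback forms each service supports.
# Only the "weather displays only as a card" entry is taken from the
# description; the other entries are invented for the example.
SERVICE_FEEDBACK = {
    "open_bluetooth": {"text", "voice"},
    "query_weather": {"card"},        # the "weather" app displays only as a card
    "send_wechat": {"split_screen"},  # needs the third-party app interface
}

def feedback_forms(service):
    """Look up the feedback forms the service's application allows,
    falling back to plain text feedback for unknown services (assumed)."""
    return SERVICE_FEEDBACK.get(service, {"text"})
```

The assistant would consult this lookup together with the current application scene to decide whether to switch its display form, as described in the following paragraphs.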
The process of switching among the various forms of the voice assistant may be described with reference to fig. 9. After receiving the indication information input by the user, the voice assistant may or may not change its display form, depending on the current application scene and the indication information input by the user. The following describes the two cases in which the display form of the voice assistant does not change and in which it changes:
1. The display form of the voice assistant does not change.
(1) Referring to fig. 9, when the voice assistant is in the half-screen state (H), as shown in (a) of fig. 9, if the indication information input by the user does not lack a keyword, i.e., the interaction between the voice assistant and the user is a single round of voice interaction, and the feedback form of the service related to the indication information is text feedback or voice feedback, the display form of the voice assistant remains the half-screen state (H), as shown in (b) of fig. 9.
Illustratively, after the voice assistant is turned on, the voice assistant displays in its default form, i.e., the half-screen state, in the upper half of the display interface of the mobile phone, and the icons of some applications are displayed in the lower half. At this time, if the user taps "open bluetooth connection" in the voice skill recommendation items, or inputs "open bluetooth connection" by voice or video, the display interface of the voice assistant is as shown in (d) of fig. 5. Referring to (d) of fig. 5, after the voice assistant receives the indication information input by the user, as shown at 501, the prompt information of the voice assistant (e.g., "Hi, I'm listening …") changes to the indication information input by the user, i.e., "open bluetooth connection", and the prompt graphic shown at 501 is a sound wave graphic, with no voice skill recommendation items displayed. Subsequently, the voice assistant determines that the indication information "open bluetooth connection" does not lack a keyword, i.e., the interaction between the voice assistant and the user is a single round of voice interaction, and that the feedback form of the service related to the indication information is text feedback or voice feedback; after the operation corresponding to the indication information is completed, the display form of the voice assistant remains the half-screen state (H), as shown in (e) of fig. 5. In (e) of fig. 5, the prompt graphic shown at 501 changes to a floating ball, indicating that the voice assistant has stopped receiving sound and entered the sleep state; the prompt information shown at 501 changes to the feedback text of the indication information, "OK, Bluetooth is turned on", and the voice assistant updates and displays the voice skill recommendation items, as shown at 502. For example, the updated voice skill recommendation items are "close wireless connection" and "open photos".
If the feedback form of the service related to the indication information is voice feedback, the voice assistant needs to display the feedback text and output the feedback text by voice.
Illustratively, after the voice assistant is turned on, the voice assistant displays in its default form, i.e., the half-screen state, in the upper half of the display interface of the mobile phone, and the "photo" application is displayed in the lower half. At this time, after the user taps "select the weekend photos" in the voice skill recommendation items, or inputs "select the weekend photos" by voice or video, the display interface of the voice assistant is as shown in (c) of fig. 8. Referring to (c) of fig. 8, after the voice assistant receives the indication information input by the user, as shown at 801, the prompt information of the voice assistant (e.g., "Hi, I'm listening …") changes to the indication information input by the user, i.e., "select the weekend photos", and the prompt graphic shown at 801 is a sound wave graphic, with no voice skill recommendation items displayed. Subsequently, the voice assistant determines that the indication information "select the weekend photos" does not lack a keyword, i.e., the interaction between the voice assistant and the user is a single round of voice interaction, and that the feedback form of the service related to the indication information is text feedback or voice feedback; after the operation corresponding to the indication information is completed, the display form of the voice assistant remains the half-screen state (H), and the weekend photos in the "photo" application are selected (a picture with a check mark in its lower right corner is selected), as shown in (d) of fig. 8. Referring to (d) of fig. 8, the prompt graphic shown at 801 changes to a floating ball, indicating that the voice assistant has stopped receiving sound and is in the sleep state; the prompt information shown at 801 changes to the feedback text of the indication information, "the weekend photos are selected", and the voice assistant updates and displays the voice skill recommendation items, as shown at 802.
For example, the updated voice skill recommendation items are "share photos", "delete photos", and the like. If the feedback form of the service related to the indication information is voice feedback, the voice assistant needs to display the feedback text and output the feedback text by voice.
Similarly, when the voice assistant is displayed in the small half-screen state (H1), if the indication information input by the user does not lack a keyword, i.e., the interaction between the voice assistant and the user is a single round of voice interaction, and the feedback form of the service related to the indication information is text feedback or voice feedback, the display form of the voice assistant remains the small half-screen state (H1). When the voice assistant is displayed in the large half-screen state (H2), if the indication information input by the user does not lack a keyword, i.e., the interaction is a single round of voice interaction, and the feedback form of the service related to the indication information is text feedback, voice feedback, or card feedback, the display form of the voice assistant remains the large half-screen state (H2).
(2) As shown in fig. 9, when the voice assistant is in the full-screen state (L), as shown in (c) of fig. 9, regardless of whether the indication information input by the user lacks a keyword, if the feedback form of the service corresponding to the indication information is text feedback, voice feedback, or card feedback, the display form of the voice assistant does not change, as shown in (d) of fig. 9. The case in which the voice assistant receives the indication information and displays in the full-screen state (L) is described below.
2. The display form of the voice assistant changes.
(1) Referring to fig. 9, the voice assistant is displayed in the half-screen state (H), as shown in (b) of fig. 9. If the instruction information input by the user does not lack a keyword, that is, the interaction between the voice assistant and the user is a single round of voice interaction, and the feedback form of the service related to the instruction information is card feedback, or if the instruction information input by the user lacks a keyword, the display form of the voice assistant is switched from the half-screen state (H) to the full-screen state (L), as shown in (c) of fig. 9.
Illustratively, after the voice assistant is turned on, it is displayed by default in the half-screen state (H) on the upper half of the display interface of the mobile phone, and the "photo" application is displayed on the lower half. If the user clicks "query today's Shanghai weather" in the voice skill recommendation items, or inputs "query today's Shanghai weather" by voice or video, the display interface of the voice assistant is as shown in (c) of fig. 6. Referring to (c) of fig. 6, after the voice assistant receives the instruction information input by the user, the prompt text of the voice assistant (e.g., "Hi, I'm listening…") shown at 601 changes to "query today's Shanghai weather", and the prompt graphic shown at 601 is a sound-wave graphic; no voice skill recommendation item is displayed. Subsequently, the voice assistant determines that the instruction information "query today's Shanghai weather" does not lack a keyword, that is, the interaction between the voice assistant and the user is a single round of voice interaction, and that the feedback form of the related service is card feedback. After the operation corresponding to the instruction information is completed, the display form of the voice assistant is switched to the full-screen state (L), as shown in (d) of fig. 6. Referring to (d) of fig. 6, the prompt graphic shown at 601 is a floating ball, indicating that the voice assistant has stopped receiving sound and entered the sleep state, and two buttons "1" and "2" are displayed on the two sides of the floating ball for switching the input form of the voice assistant.
Generally, the input form of the voice assistant is voice input; when the "1" button is clicked, the input form of the instruction information switches to keyboard input (the keyboard is opened), and when the "2" button is clicked, the input form switches to video input (the camera is opened). The voice assistant updates and displays the voice skill recommendation items, such as "last weekend's photos" and "share photos", as shown at 602. As shown at 603, the prompt text of the voice assistant changes to the feedback text of the instruction information, "Today's Shanghai weather is sunny"; the prompt further includes a feedback weather card, which displays detailed weather information for Shanghai today. Optionally, if the feedback form of the service related to the instruction information supports voice feedback, the voice assistant may display the feedback text and also output "Today's Shanghai weather is sunny" by voice.
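The input-form switching described above (voice by default, keyboard via the "1" button, video via the "2" button) can be sketched as a small helper. This is an illustrative sketch only, not the patent's implementation; the names `InputForm` and `on_button_clicked` are assumptions:

```python
from enum import Enum

class InputForm(Enum):
    VOICE = "voice"        # default: microphone listening
    KEYBOARD = "keyboard"  # button "1": the keyboard is opened
    VIDEO = "video"        # button "2": the camera is opened

def on_button_clicked(button: str) -> InputForm:
    """Map the two buttons beside the prompt graphic to input forms."""
    if button == "1":
        return InputForm.KEYBOARD
    if button == "2":
        return InputForm.VIDEO
    return InputForm.VOICE  # anything else keeps the default voice input
```

For example, `on_button_clicked("1")` returns `InputForm.KEYBOARD`, matching the behavior described for the "1" button.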
Illustratively, the voice assistant is displayed in the half-screen state (H) on the upper half of the display interface of the mobile phone, and the "photo" application is displayed on the lower half, as shown in (a) of fig. 10. Referring to (a) of fig. 10, the voice assistant is in the sound-receiving state, and the prompt text (e.g., "Hi, I'm listening…") and the prompt graphic (e.g., a sound-wave graphic) of the voice assistant may be used to prompt the user to input instruction information, as shown at 1001. Optionally, 1002 shows the voice skill recommendation items "share photos" and "buy a ticket". If the user clicks "buy a ticket" in the voice skill recommendation items, or inputs "buy a plane ticket" by voice or video, the display interface of the voice assistant is as shown in (b) of fig. 10. Referring to (b) of fig. 10, after the voice assistant receives the instruction information input by the user, the prompt text (e.g., "Hi, I'm listening…") shown at 1001 changes to "buy a plane ticket", and the prompt graphic shown at 1001 is a sound-wave graphic; no voice skill recommendation item is displayed. Subsequently, the voice assistant determines that the instruction information "buy a plane ticket" lacks a key slot, so the interaction with the user automatically enters a multi-round voice interaction process, and the display form switches to the full-screen state (L), as shown in (c) of fig. 10. In (c) of fig. 10, the prompt graphic shown at 1003 is a sound-wave graphic, indicating that the voice assistant is in the sound-receiving state, and two buttons "1" and "2" are displayed on the two sides of the sound-wave graphic shown at 1003, where "1" and "2" are used to switch the input form of the voice assistant.
Generally, the input form of the voice assistant is voice input; when the "1" button is clicked, the input form of the instruction information switches to keyboard input (the keyboard is opened), and when the "2" button is clicked, the input form switches to video input (the camera is opened). 1004 shows the multi-round voice interaction process between the voice assistant and the user, through which the voice assistant may determine that the user intent is "buy a plane ticket to Shanghai on 8.24". At this time, the voice assistant determines that "buy a plane ticket to Shanghai on 8.24" does not lack a keyword and that the feedback form of the corresponding service is card feedback, so the display form of the voice assistant remains full screen (L). Therefore, after the operation corresponding to the user intent is performed, the display form of the voice assistant is as shown in (d) of fig. 10. Referring to (d) of fig. 10, the prompt graphic shown at 1003 switches to a floating ball, indicating that the voice assistant has stopped receiving sound and entered the sleep state; as shown at 1005, the prompt text changes to the feedback text "The plane ticket to Shanghai on 8.24 has been purchased" together with a corresponding feedback card; and as shown at 1006, the voice assistant updates and displays the voice skill recommendation items, which become "reschedule", "refund", "share itinerary", and the like.
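The automatic entry into a multi-round voice interaction hinges on whether the instruction information still lacks a key slot, as in the "buy a plane ticket" example above. A minimal sketch of this decision, with the skill name and the `REQUIRED_SLOTS` table invented purely for illustration:

```python
# Required key slots per skill -- illustrative values only.
REQUIRED_SLOTS = {
    "buy_ticket": {"date", "destination"},
}

def missing_slots(skill: str, filled: dict) -> set:
    """Return the key slots still absent from the instruction information."""
    return REQUIRED_SLOTS.get(skill, set()) - set(filled)

def interaction_rounds(skill: str, filled: dict) -> str:
    """Single round if no slot is missing, otherwise a multi-round interaction."""
    return "single" if not missing_slots(skill, filled) else "multi"
```

Under this sketch, "buy a plane ticket" with no slots filled yields a multi-round interaction, while "buy a plane ticket to Shanghai on 8.24" (both slots filled) is handled in a single round.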
It should be noted that, when the voice assistant is displayed in the full-screen state (L), the user may access the setting items of the mobile phone, the history of voice conversations between the user and the voice assistant on the mobile phone, and the like. Compared with the half-screen state (H), the available functions are more complete, and the voice interaction process is more immersive, that is, the user's attention is more focused. In addition, when the voice assistant is displayed in the half-screen state (H), its display form can also be switched to the full-screen state (L) by pulling down on the voice assistant interface.
Similarly, when the voice assistant is displayed in the small half-screen state (H1), if the instruction information input by the user does not lack a keyword, that is, the interaction between the voice assistant and the user is a single round of voice interaction, and the feedback form of the service related to the instruction information is card feedback, the display form of the voice assistant is switched to the large half-screen state (H2).
Illustratively, in (a) of fig. 11, the voice assistant is displayed in the small half-screen state (H1) and is in the sound-receiving state. As shown at 1101, a prompt graphic (e.g., a sound-wave graphic) and prompt text (e.g., "Hi, I'm listening…") remind the user to input instruction information; as shown at 1102, the voice skill recommendation items are "V", "last weekend's photos", "share photos", and the like. The voice assistant receives instruction information input by voice or in another form, for example "turn on Bluetooth", and the display interface of the mobile phone is as shown in (b) of fig. 11. Referring to (b) of fig. 11, while the user inputs the instruction information, the prompt graphic shown at 1101 is still the sound-wave graphic, and the prompt text changes to the instruction information "turn on Bluetooth". Then, the voice assistant determines that the instruction information does not lack a keyword and that the feedback form of the corresponding service is text feedback or voice feedback, so the display form of the voice assistant remains the small half-screen state (H1), as shown in (c) of fig. 11. Referring to (c) of fig. 11, after the voice assistant executes the service corresponding to the instruction information "turn on Bluetooth", the prompt text changes to the feedback text of the service, "OK, Bluetooth is on", as shown at 1101. If the feedback form of the service related to the instruction information is voice feedback, the voice assistant displays the feedback text and also outputs it by voice. In addition, the prompt graphic shown at 1101 changes to a floating ball, indicating that the voice assistant has stopped receiving sound and is in the sleep state.
As shown at 1102, the voice assistant updates and displays the recommendation information; the updated voice skill recommendation items are "V", "turn off Bluetooth", "last weekend's photos", and the like. If the feedback form is card feedback, the display interface of the mobile phone switches to the large half-screen state (H2), as shown in (d) of fig. 11; the contents shown at 1101 and 1102 are unchanged, and, as shown at 1103, the display interface of the voice assistant further includes a feedback card whose content is the setting item of the Bluetooth switch (at this time indicating that Bluetooth is on).
It should be noted that the large half-screen state (H2) of the voice assistant is entered automatically after the voice assistant recognizes and processes the received instruction information; it cannot be entered manually.
In addition, when the voice assistant is displayed in the small half-screen state (H1) or the large half-screen state (H2), if the instruction information input by the user lacks a keyword, the interaction between the voice assistant and the user is a multi-round voice interaction, and the display form of the voice assistant is switched to the full-screen state (L). For details, refer to the example of switching the voice assistant from the half-screen state (H) to the full-screen state (L), which is not repeated here.
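The transitions described so far for the half-screen family of states (H, H1, H2) can be summarized as one decision function. This is a sketch of the rules as stated, under the assumption that states and feedback forms are encoded as plain strings:

```python
def next_state_from_half(state: str, lacks_keyword: bool, feedback: str) -> str:
    """Display-form transition when the assistant is in H, H1 or H2.

    Rules summarized from the description:
    - missing keyword -> multi-round interaction -> full screen (L);
    - split-screen feedback -> "simulated click" skill -> floating (F);
    - card feedback -> enlarge (H -> L, H1 -> H2, H2 stays H2);
    - text or voice feedback -> display form unchanged.
    """
    assert state in ("H", "H1", "H2")
    if lacks_keyword:
        return "L"          # multi-round voice interaction goes full screen
    if feedback == "split_screen":
        return "F"          # "simulated click" skill: floating state
    if feedback == "card":
        return {"H": "L", "H1": "H2", "H2": "H2"}[state]
    return state            # text or voice feedback: unchanged
```

For example, card feedback in the small half-screen state (H1) yields `"H2"`, matching the Bluetooth example above.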
(2) Referring to fig. 9, the voice assistant is in a half-screen state (H), and as shown in fig. 9 (a), if the instruction information input by the user does not lack a keyword and the feedback form of the service related to the instruction information is split-screen feedback, the "simulated click" skill is triggered, and the display mode of the voice assistant is switched to a floating state (F), as shown in fig. 9 (e).
Illustratively, the voice assistant is displayed in the half-screen state (H) on the upper half of the display interface of the mobile phone, and the "photo" application is displayed on the lower half. Referring to (a) of fig. 12, after the voice assistant receives the instruction information "send a WeChat message to Duoduo asking about the weekend meal" input by the user, the prompt graphic shown at 1201 is a sound-wave graphic, and the prompt text (e.g., "Hi, I'm listening…") changes to the instruction information "send a WeChat message to Duoduo asking about the weekend meal". Subsequently, the voice assistant determines that the instruction information does not lack a keyword and that the feedback form of the service corresponding to the instruction information is split-screen feedback; the voice assistant needs to cooperate with the third-party "WeChat" application and trigger the "simulated click" (deeplink) skill to complete the operation corresponding to the instruction information. At this time, the voice assistant switches its display form to the floating state (F), as shown in (b) of fig. 12. Referring to (b) of fig. 12, the voice assistant enters the floating state (F); as shown at 1201, the prompt graphic changes to a floating ball, the prompt text changes to the instruction information "send a WeChat message to Duoduo asking about the weekend meal", and the "photo" application resumes full-screen display. Referring to (c) of fig. 12, the "simulated click" skill continues to run: the "photo" application is closed, and the "WeChat" application is opened and displayed in full screen; as shown at 1201, the prompt graphic is a floating ball, and the prompt text is "send a WeChat message to Duoduo asking about the weekend meal". If the floating ball is clicked before the "simulated click" skill completes, the skill is terminated, and the display form of the voice assistant is as shown in (d) of fig. 12. Referring to (d) of fig. 12, as shown at 1201, the prompt graphic is a floating ball with no prompt text, indicating that the voice assistant has stopped receiving sound and entered the sleep state. If the "simulated click" skill executes successfully and the voice assistant completes the operation indicated by the instruction information, the display form of the voice assistant is as shown in (e) of fig. 12. Referring to (e) of fig. 12, as shown at 1201, the prompt graphic is a floating ball, and the prompt text is the feedback text "Sent" for the execution result of the instruction information, indicating that the operation corresponding to the instruction information "send a WeChat message to Duoduo asking about the weekend meal" has been completed. The voice assistant then stops receiving sound and enters the sleep state, as shown in (f) of fig. 12. Referring to (f) of fig. 12, the prompt graphic shown at 1201 is a floating ball with no prompt text. It should be noted that, if there is more than one contact named "Duoduo" in the WeChat contact list, the voice assistant should determine the recipient of the WeChat message before completing the operation corresponding to the instruction information, as shown in (h) of fig. 12. Referring to (h) of fig. 12, the contact list includes "Zhou Duoduo" and "Li Duoduo"; then, as shown at 1202, the user may determine that the recipient of the WeChat message is "Zhou Duoduo" by clicking the row where that contact is located, and the corresponding message is sent to the contact "Zhou Duoduo" according to the instruction information. Alternatively, the user may click the prompt text "the first contact" shown at 1201 to determine that the recipient of the WeChat message is "Zhou Duoduo", and the corresponding message is sent to the contact "Zhou Duoduo" according to the instruction information.
It should be noted that, in the process of switching the display form of the voice assistant to the floating state (F), displaying the application in full screen improves the success rate of the "simulated click" skill. In addition, the floating state (F) of the voice assistant is entered automatically after the voice assistant recognizes and processes the received instruction information; it cannot be entered manually.
Similarly, when the voice assistant is displayed in the small half-screen state (H1) or the large half-screen state (H2), if the instruction information input by the user does not lack a keyword, that is, the interaction between the voice assistant and the user is a single round of voice interaction, and the feedback form of the service related to the instruction information is split-screen feedback, the display form of the voice assistant is switched to the floating state (F). For details, refer to the example of switching the voice assistant from the half-screen state (H) to the floating state (F), which is not repeated here.
(3) Referring to fig. 9, the voice assistant is in a full screen state (L), as shown in fig. 9 (d), if the feedback form of the service related to the indication information input by the user is split screen feedback, the "simulated click" skill is triggered, and the voice assistant enters a floating state (F), as shown in fig. 9 (e).
Optionally, after the voice assistant switches from the half-screen state (H) to the full-screen state (L), the user intent is determined through multiple rounds of voice interaction with the user; the user intent is, for example, "send a WeChat message to Duoduo asking about the weekend meal". If the feedback form of the service corresponding to the user intent is text feedback, voice feedback, or card feedback, the display form of the voice assistant remains the full-screen state (L); if the feedback form of the service corresponding to the user intent is split-screen feedback, the display form of the voice assistant is switched to the floating state (F).
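In the full-screen state (L), the only form change is therefore the one triggered by split-screen feedback, which launches the "simulated click" skill. A sketch of this rule (the string encodings are assumptions, not the patent's implementation):

```python
def next_state_from_full(feedback: str) -> str:
    """Transition out of the full-screen state (L).

    Text, voice and card feedback keep the assistant in L; split-screen
    feedback triggers the "simulated click" skill and enters the
    floating state (F).
    """
    if feedback == "split_screen":
        return "F"
    if feedback in ("text", "voice", "card"):
        return "L"
    raise ValueError(f"unknown feedback form: {feedback}")
```

For example, card feedback for "query today's Shanghai weather" leaves the assistant in `"L"`, while split-screen feedback for the WeChat example returns `"F"`.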
Illustratively, the voice assistant is displayed in the full-screen state (L), as shown in (a) of fig. 13. Referring to (a) of fig. 13, the voice assistant is in the sound-receiving state; as shown at 1301, the prompt text is "What can I help you with?"; as shown at 1303, the prompt graphic is a sound-wave graphic, with two buttons "1" and "2" displayed on its two sides, where "1" and "2" are used to switch the input form of the voice assistant. The prompt text shown at 1301 and the prompt graphic shown at 1303 are used to prompt the user to input instruction information. As shown at 1302, the voice skill recommendation items are "turn off the wireless network", "change the ringtone", and the like. The user inputs the instruction information "send a WeChat message to Duoduo asking about the weekend meal" by voice or in another form, as shown in (b) of fig. 13. Referring to (b) of fig. 13, the "simulated click" skill starts, and the "WeChat" application is opened and displayed in full screen; as shown at 1303, the prompt graphic is a sound-wave graphic with the buttons "1" and "2" displayed on its two sides; the contents shown at 1301 and 1302 are unchanged; and, as shown at 1304, the instruction information "send a WeChat message to Duoduo asking about the weekend meal" input by the user is displayed. Subsequently, the voice assistant determines that "send a WeChat message to Duoduo asking about the weekend meal" does not lack a keyword and that the feedback form of the corresponding operation is split-screen feedback; the voice assistant triggers the "simulated click" skill to cooperate with the third-party "WeChat" application to complete the operation corresponding to the user intent. At this time, the display form of the voice assistant is switched to the floating state (F), as shown in (c) of fig. 13. Referring to (c) of fig. 13, the prompt graphic is a floating ball, and the prompt text "send a WeChat message to Duoduo asking about the weekend meal" is displayed on one side of the floating ball, as shown at 1303. If the "simulated click" skill executes successfully and the voice assistant completes the operation indicated by the instruction information, the display form of the voice assistant is as shown in (c) of fig. 13, and the feedback text "Sent" is displayed on one side of the floating ball shown at 1303. The voice assistant then enters the sleep state; as shown in (d) of fig. 13, the floating ball shown at 1303 indicates that the voice assistant has stopped receiving sound and entered the sleep state.
In addition, when the voice assistant is displayed in the half-screen state (H), the user may also touch the lower half of the display interface of the mobile phone to make the voice assistant stop receiving sound and enter the sleep state. When the voice assistant is displayed in the full-screen state (L), the user may access the setting items of the mobile phone, the history of voice conversations between the user and the voice assistant on the mobile phone, and the like; compared with the half-screen state (H), the available functions are more complete, the voice interaction process is more immersive, and the user's attention is more focused. Moreover, in both the half-screen state (H) and the full-screen state (L), the user may make the voice assistant stop receiving sound and enter the sleep state by clicking the graphic indicating that the voice assistant is in the sound-receiving state, or by voice interaction. While the voice assistant is in the floating state (F), the user may click the floating ball to put the voice assistant into the sleep state before the "simulated click" skill completes, as noted in the example above.
In addition, after completing the operation corresponding to the instruction information, the voice assistant enters the sleep state, and its display form in the sleep state may be the half-screen state (H), the full-screen state (L), or the floating state (F). Subsequently, the user wakes up the voice assistant again by clicking the floating ball, and the display form of the voice assistant may or may not change depending on its display form while in the sleep state, as described below with reference to the drawings:
1. after the voice assistant is awakened again, the display form of the voice assistant changes.
If the display form of the voice assistant is in the floating state (F) when the voice assistant is in the sleep state, as shown in (e) of fig. 9, the display form of the voice assistant is switched to its default form, i.e., the half-screen state (H), as shown in (a) of fig. 9.
Illustratively, when the voice assistant is in the sleep state, the display interface of the mobile phone is as shown in (f) of fig. 12. Referring to (f) of fig. 12, if the floating ball shown at 1201 is clicked, the voice assistant is awakened into its default form, i.e., the half-screen state (H), and the application displayed in the lower half of the display interface of the mobile phone is the "WeChat" application, as shown in (g) of fig. 12. Referring to (g) of fig. 12, the voice assistant is in the sound-receiving state; as shown at 1301, a prompt graphic (e.g., a sound-wave graphic) and prompt text (e.g., "Hi, I'm listening…") prompt the user to input instruction information, and, as shown at 1302, the voice skill recommendation items are, for example, "V", "return photo", "send WeChat", "red packet", and the like.
2. And after the voice assistant is awakened again, the display form of the voice assistant is not changed.
If the display form of the voice assistant in the sleep state is the half-screen state (H) or the full-screen state (L), as shown in (b) or (d) of fig. 9, then after the voice assistant is awakened again, its display form is still the half-screen state (H) or the full-screen state (L), as shown in (a) or (c) of fig. 9.
For example, if the voice assistant is in the sleep state and its display form is the full-screen state (L), the voice assistant is still displayed in the full-screen state (L) after being awakened, as shown in (d) of fig. 6. Referring to (d) of fig. 6, the graphic shown at 601 switches to a floating ball, and two buttons "1" and "2" are displayed on the two sides of the floating ball. 602 shows the updated voice skill recommendation items "keyword 3" and "keyword 4", and the weather card shown at 603 contains the feedback text "Today's Shanghai weather is sunny" and detailed weather information for Shanghai today. "Keyword 3" and "keyword 4" may be the same as or different from "keyword 1" and "keyword 2"; illustratively, "keyword 3" and "keyword 4" are "last weekend's photos" and "share photos", respectively. The buttons "1" and "2" on the two sides of the floating ball shown at 601 are used to switch the input form of the voice assistant. Generally, the input form of the voice assistant is voice input; when the "1" button is clicked, the input form of the instruction information switches to keyboard input (the keyboard is opened), and when the "2" button is clicked, the input form switches to video input (the camera is opened).
For example, if the voice assistant is in the sleep state and its display form is the half-screen state (H), the voice assistant is still displayed in the half-screen state (H) after being awakened, as shown in (b) of fig. 6. Referring to (b) of fig. 6, the voice assistant is in the sound-receiving state, and the graphic and/or text shown at 601 (e.g., "Hi, I'm listening…") may be used to prompt the user to input instruction information. Optionally, as shown at 602, the display interface of the voice assistant further displays the voice skill recommendation items "V", "keyword 1", and "keyword 2", where "V" is used to switch the input form of the voice assistant and "keyword 1" is different from "keyword 2"; illustratively, "keyword 1" and "keyword 2" are "the weekend photos" and "share photos", respectively. For a detailed description of the voice skill recommendation items and of the ways in which the user inputs instruction information, refer to the description above, which is not repeated here.
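The two wake-up cases above reduce to a single rule: only a voice assistant sleeping in the floating state (F) wakes into its default form, while the half-screen and full-screen states are preserved. A minimal sketch (the default form is the half-screen state in this embodiment):

```python
DEFAULT_FORM = "H"  # default display form in this embodiment: half-screen state

def state_after_wakeup(sleep_state: str) -> str:
    """Display form after the floating ball is clicked to wake the assistant.

    Sleeping in the floating state (F) wakes into the default form;
    sleeping in H or L wakes into the same display form.
    """
    return DEFAULT_FORM if sleep_state == "F" else sleep_state
```

So `state_after_wakeup("F")` yields `"H"`, while `"H"` and `"L"` are returned unchanged.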
And S403, waking up the voice assistant, and determining the state of the voice assistant after waking up according to the display state of the voice assistant during sleeping.
After the voice assistant enters the sleep state, the user can wake up the voice assistant by clicking the floating ball. For the change of the display form after the voice assistant wakes up again, refer to the description in step S402 above, and are not described herein again.
S404, the voice assistant receives the indication information again, determines the display form according to the new indication information, and enters the sleep state after executing the operation indicated by the indication information.
For a specific implementation process of step S404, reference may be made to the description in step S402, which is not described herein again.
Generally, in the form-switching process of the voice assistant, the switch from the half-screen state (H) to the full-screen state (L) is irreversible; that is, the voice assistant cannot switch from the full-screen state (L) back to the half-screen state (H), nor from the floating state (F) back to the full-screen state (L). In addition, when the voice assistant is in the half-screen state (H), its display form can be switched to the full-screen state (L) or the floating state (F). When the voice assistant is in the full-screen state (L), its display form can be switched to the floating state (F). When the voice assistant is in the floating state (F), its display form can only be switched to the default form, which in this embodiment is the half-screen state (H).
It should be noted that the form switching of the voice assistant may also be configured to be reversible as required, that is, the voice assistant may switch from the full-screen state (L) back to the half-screen state (H). When the voice assistant is displayed in the large half-screen state (H2), it may switch to the small half-screen state (H1) according to the instruction information input by the user and the current scenario; when the voice assistant is displayed in the full-screen state (L), it may switch to the large half-screen state (H2) according to the instruction information input by the user and the current scenario.
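Under the default (irreversible) configuration described above, the permitted form switches can be written as an adjacency table and checked before any switch. This is a sketch of the rules as stated, not the patent's implementation:

```python
# Allowed display-form switches in the default (irreversible) configuration.
ALLOWED = {
    "H": {"L", "F"},   # half screen may go full screen or floating
    "L": {"F"},        # full screen may only go floating
    "F": {"H"},        # floating may only return to the default form
}

def can_switch(src: str, dst: str) -> bool:
    """Whether a direct display-form switch from src to dst is permitted."""
    return dst in ALLOWED.get(src, set())
```

For example, `can_switch("L", "H")` is false by default, matching the irreversibility of the half-screen to full-screen switch; a reversible configuration would simply add `"H"` to `ALLOWED["L"]`.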
S405, closing and exiting the voice assistant.
The display form of the voice assistant may be the half-screen state (H), the full-screen state (L), or the floating state (F). When the voice assistant is displayed in the half-screen state (H) or the full-screen state (L), the user may close and exit the voice assistant by voice interaction or by swiping up on the voice assistant display interface. When the voice assistant is displayed in the floating state (F), the user may close and exit the voice assistant by sliding the floating ball up or down.
It should be noted that, if the voice assistant displays in the half-screen state (H), and no other application interface is displayed in the lower half of the display interface of the mobile phone, after the voice assistant exits, the display interface of the mobile phone is a task-free interface, as shown in (a) in fig. 5. If the voice assistant displays in a half-screen state, and other application interfaces (taking a "photo" application as an example) are displayed in the lower half of the display interface of the mobile phone, after the voice assistant exits, the application in the lower half of the display interface of the mobile phone is displayed in a full screen mode, and the display interface of the mobile phone is a single-task interface, as shown in (a) in fig. 6.
Through the above process, the voice assistant can switch its form according to the actual scenario on the mobile phone and the instruction information input by the user, thereby realizing system-level fusion between the form of the voice assistant and the actual scenario on the mobile phone and improving user experience.
Through the above process, the present application provides a voice assistant display method: after the voice assistant is turned on, it is displayed in a preset default display form; subsequently, the display form of the voice assistant is determined according to the instruction information input into the voice assistant and the service indicated by that instruction information. In this way, the voice assistant can determine changes in the actual scenario from the instruction information and switch to the corresponding form, so that the voice assistant and the system cooperate as a whole, realizing system-level fusion between the voice assistant and the mobile phone.
Embodiments of the present application further provide a chip system, as shown in fig. 14, where the chip system includes at least one processor 1401 and at least one interface circuit 1402. The processor 1401 and the interface circuit 1402 may be interconnected by lines. For example, the interface circuit 1402 may be used to receive signals from other devices (e.g., a memory of the electronic device 100). Also for example, the interface circuit 1402 may be used to send signals to other devices, such as the processor 1401. Illustratively, the interface circuit 1402 may read instructions stored in memory and send the instructions to the processor 1401. The instructions, when executed by the processor 1401, may cause the electronic device to perform the various steps performed by the electronic device 100 (e.g. a cell phone) in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, the above-described embodiments may take the form, in whole or in part, of a computer program product comprising one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer.
The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state drive (SSD)), among others.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units; that is, they may be located in one place or distributed over a plurality of different places. In actual applications, some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product: the computer software product is stored in a storage medium and includes several instructions for enabling a device (which may be a personal computer, a server, a network device, a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application.

Claims (25)

1. A voice assistant display method, applied to electronic equipment, wherein the display form of the voice assistant comprises a half-screen state, a full-screen state and a suspension state; the half-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is less than 1; the full-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is 1; the suspension state means that the voice assistant is displayed suspended over the current display interface of the electronic equipment; the method comprises the following steps:
turning on the voice assistant, and displaying the voice assistant in a first display form; the first display mode is a default display mode preset by the voice assistant;
and determining the display form of the voice assistant according to the instruction information input into the voice assistant and the service indicated by the instruction information.
2. The voice assistant display method according to claim 1, wherein the first display mode is a half-screen mode;
the opening of the voice assistant and the display of the voice assistant in a first display form specifically comprise:
opening the voice assistant, and moving the current task interface downwards as a whole;
and displaying the voice assistant in the half-screen state in a split screen together with the current task interface.
3. The method according to claim 1 or 2, wherein the determining a display mode of the voice assistant according to the indication information input to the voice assistant and the service indicated by the indication information specifically comprises:
if the indication information lacks keywords, the display form of the voice assistant is a full screen state;
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is text feedback or voice feedback, the display form of the voice assistant is a half-screen state;
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is card feedback, the display form of the voice assistant is a full-screen state; wherein, if the application related to the service indicated by the indication information is displayed in card form in the display interface of the voice assistant, the feedback form of the service indicated by the indication information is card feedback;
and if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is split-screen feedback, the display form of the voice assistant is a suspension state; wherein, if the service indicated by the indication information relates to application interface switching, the feedback form of the service indicated by the indication information is split-screen feedback.
4. The method for displaying the voice assistant according to any one of claims 1 to 3, wherein the half-screen states of the voice assistant further include a small half-screen state and a large half-screen state, wherein the small half-screen state means that the ratio of the display interface of the voice assistant to the overall display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form.
5. The method for displaying a voice assistant according to claim 4, wherein the determining the display form of the voice assistant according to the indication information inputted into the voice assistant and the service indicated by the indication information comprises:
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is text feedback or voice feedback, the display form of the voice assistant is the small half-screen state;
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is card feedback, the display form of the voice assistant is the large half-screen state.
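Under the refinement of claims 4 and 5, the half-screen branch of claim 3 splits into the small and large half-screen variants at the 0.5 threshold. A hypothetical sketch (the function name and string values are illustrative; the keyword and split-screen branches are carried over from claim 3):

```python
def determine_refined_display_form(lacks_keywords: bool, feedback_form: str) -> str:
    """Claim 5's refinement: within claim 3's rules, text or voice feedback
    selects the small half-screen state (ratio < 0.5) and card feedback the
    large half-screen state (ratio > 0.5)."""
    if lacks_keywords:
        return "full-screen"
    if feedback_form in ("text", "voice"):
        return "small-half-screen"
    if feedback_form == "card":
        return "large-half-screen"
    return "suspended"  # split-screen feedback (service switches app interfaces)
```
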
6. The method of any of claims 1-3, wherein after the voice assistant is turned on and the voice assistant is displayed in the first display modality, the method further comprises:
the voice assistant enters a sleep state;
the voice assistant is awakened and the display modality of the voice assistant is determined.
7. The method of claim 6, wherein waking up the voice assistant and determining the display modality of the voice assistant comprises:
if the display form of the voice assistant is in a suspended state when the voice assistant enters the sleep state, the display form of the voice assistant is the first display form after the voice assistant is awakened;
if the display form of the voice assistant is the half-screen state when the voice assistant enters the sleep state, the display form of the voice assistant is the half-screen state after the voice assistant is awakened;
and if the display form of the voice assistant is the full-screen state when the voice assistant enters the sleep state, the display form of the voice assistant is the full-screen state after the voice assistant is awakened.
8. The method for displaying the voice assistant according to claim 7, wherein the half-screen states of the voice assistant further include a small half-screen state and a large half-screen state, wherein the small half-screen state means that the ratio of the display interface of the voice assistant to the overall display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form;
the method for waking up the voice assistant and determining the display form of the voice assistant comprises the following steps:
if the display form of the voice assistant is in the small half screen state when the voice assistant enters the sleep state, the display form of the voice assistant is in the small half screen state after the voice assistant is awakened;
and if the display form of the voice assistant is in a large half screen state when the voice assistant enters the sleep state, the display form of the voice assistant is in a large half screen state after the voice assistant is awakened.
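The wake-up behavior of claims 7 and 8 amounts to restoring the pre-sleep display form, with one exception: a suspended assistant reverts to the default first display form. A minimal sketch, assuming the small half-screen state is the preset default (names are illustrative):

```python
DEFAULT_FORM = "small-half-screen"  # the preset first display form (claim 8)

def display_form_after_wakeup(form_at_sleep: str) -> str:
    """Restore the display form on wake-up (claims 7 and 8): a suspended
    assistant reverts to the default first display form; half-screen
    (small or large) and full-screen assistants resume their prior form."""
    if form_at_sleep == "suspended":
        return DEFAULT_FORM
    return form_at_sleep
```
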
9. The voice assistant display method according to claim 6 or 8, wherein after waking up the voice assistant and determining the display modality of the voice assistant, the method further comprises:
and determining a new display form of the voice assistant according to the new indication information and the service indicated by the new indication information.
10. The method according to claim 9, wherein determining a new display mode of the voice assistant according to the new indication information and the service corresponding to the new indication information comprises:
if the display form of the voice assistant is a full-screen state and the feedback form of the service indicated by the new indication information is text feedback, voice feedback or card feedback, the new display form of the voice assistant is a full-screen state; wherein, if the application related to the service indicated by the new indication information is displayed in card form in the display interface of the voice assistant, the feedback form of the service indicated by the new indication information is card feedback;
if the display form of the voice assistant is a full screen state and the feedback form of the service indicated by the new indication information is split screen feedback, the new display form of the voice assistant is a suspension state; if the service indicated by the new indication information relates to application interface switching, the feedback form of the service indicated by the new indication information is split screen feedback;
if the display form of the voice assistant is a half-screen state and the new indication information lacks keywords, the new display form of the voice assistant is a full-screen state;
if the display form of the voice assistant is a half-screen state, the new indication information does not lack keywords, and the feedback form of the service corresponding to the new indication information is text feedback or voice feedback, the new display form of the voice assistant is a half-screen state;
if the display form of the voice assistant is a half-screen state, the new indication information does not lack keywords, and the feedback form of the service corresponding to the new indication information is card feedback, the new display form of the voice assistant is a full-screen state;
and if the display form of the voice assistant is a half-screen state, the new indication information does not lack keywords, and the feedback form of the service corresponding to the new indication information is split-screen feedback, the new display form of the voice assistant is a suspension state.
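The transitions enumerated in claim 10 form a small state machine keyed on the current display form and the new indication information. A hypothetical sketch (names and string values are illustrative, not part of the patent):

```python
def next_display_form(current_form: str, lacks_keywords: bool, feedback_form: str) -> str:
    """Transition on new indication information (claim 10).

    `current_form` is "full-screen" or "half-screen"; `feedback_form` is
    "text", "voice", "card", or "split".
    """
    if current_form == "full-screen":
        # Full screen absorbs text, voice, and card feedback; split-screen
        # feedback moves the assistant to the suspension state.
        return "suspended" if feedback_form == "split" else "full-screen"
    # current_form == "half-screen"
    if lacks_keywords:
        return "full-screen"
    if feedback_form in ("text", "voice"):
        return "half-screen"
    if feedback_form == "card":
        return "full-screen"
    return "suspended"  # split-screen feedback
```
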
11. The method for displaying the voice assistant according to claim 9 or 10, wherein the half-screen states of the voice assistant further include a small half-screen state and a large half-screen state, wherein the small half-screen state means that the ratio of the display interface of the voice assistant to the overall display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form;
the determining a new display form of the voice assistant according to the new indication information and the service corresponding to the new indication information includes:
if the display form of the voice assistant is in a small half screen state, the new indication information does not lack keywords, and the feedback form of the service indicated by the new indication information is text feedback or voice feedback, the display form of the voice assistant is in a small half screen state;
and if the display form of the voice assistant is in a small half screen state, the new indication information does not lack keywords, and the feedback form of the service indicated by the new indication information is card feedback, the display form of the voice assistant is in a large half screen state.
12. An electronic device, comprising: a processor, a memory, and a touchscreen, the memory and the touchscreen coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to:
turning on the voice assistant, and displaying the voice assistant in a first display form; the first display mode is a default display mode preset by the voice assistant;
determining the display form of the voice assistant according to the instruction information input into the voice assistant and the service indicated by the instruction information;
the display form of the voice assistant comprises a half-screen state, a full-screen state and a suspension state; the half-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic device is less than 1; the full-screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic device is 1; and the suspension state means that the voice assistant is displayed suspended over the current display interface of the electronic device.
13. The electronic device of claim 12, wherein the first display configuration is a half-screen configuration, and wherein the processor reads the computer instructions from the memory to cause the electronic device to further perform the following operations:
opening the voice assistant, and moving the current task interface downwards as a whole;
and displaying the voice assistant in the half-screen state in a split screen together with the current task interface.
14. The electronic device of claim 12 or 13, wherein when the processor reads the computer instructions from the memory, the electronic device further performs the following:
if the indication information lacks keywords, the display form of the voice assistant is a full screen state;
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is text feedback or voice feedback, the display form of the voice assistant is a half-screen state;
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is card feedback, the display form of the voice assistant is a full-screen state; wherein, if the application related to the service indicated by the indication information is displayed in card form in the display interface of the voice assistant, the feedback form of the service indicated by the indication information is card feedback;
and if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is split-screen feedback, the display form of the voice assistant is a suspension state; wherein, if the service indicated by the indication information relates to application interface switching, the feedback form of the service indicated by the indication information is split-screen feedback.
15. The electronic device of any of claims 12-14, wherein the half-screen states of the voice assistant further include a small half-screen state and a large half-screen state, wherein the small half-screen state is a ratio of the display interface of the voice assistant to the overall display interface of the electronic device that is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form.
16. The electronic device of claim 15, wherein when the processor reads the computer instructions from the memory, the electronic device further performs the following:
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is text feedback or voice feedback, the display form of the voice assistant is the small half-screen state;
if the indication information does not lack keywords and the feedback form of the service indicated by the indication information is card feedback, the display form of the voice assistant is the large half-screen state.
17. The electronic device of any of claims 12-14, wherein when the processor reads the computer instructions from the memory, the electronic device further performs the following:
the voice assistant enters a sleep state;
the voice assistant is awakened and the display modality of the voice assistant is determined.
18. The electronic device of claim 17, wherein when the processor reads the computer instructions from the memory, the electronic device further performs the following:
if the display form of the voice assistant is in a suspended state when the voice assistant enters the sleep state, the display form of the voice assistant is the first display form after the voice assistant is awakened;
if the display form of the voice assistant is the half-screen state when the voice assistant enters the sleep state, the display form of the voice assistant is the half-screen state after the voice assistant is awakened;
and if the display form of the voice assistant is the full-screen state when the voice assistant enters the sleep state, the display form of the voice assistant is the full-screen state after the voice assistant is awakened.
19. The electronic device of claim 18, wherein the half-screen states of the voice assistant further include a small half-screen state and a large half-screen state, wherein the small half-screen state is that a ratio of a display interface of the voice assistant to an overall display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form; when the processor reads the computer instructions from the memory, the electronic device is further caused to perform the following operations:
the method for waking up the voice assistant and determining the display form of the voice assistant comprises the following steps:
if the display form of the voice assistant is in the small half screen state when the voice assistant enters the sleep state, the display form of the voice assistant is in the small half screen state after the voice assistant is awakened;
and if the display form of the voice assistant is in a large half screen state when the voice assistant enters the sleep state, the display form of the voice assistant is in a large half screen state after the voice assistant is awakened.
20. The electronic device of claim 17 or 19, wherein when the processor reads the computer instructions from the memory, the electronic device further performs the following:
and determining a new display form of the voice assistant according to the new indication information and the service indicated by the new indication information.
21. The electronic device of claim 20, wherein when the processor reads the computer instructions from the memory, the electronic device further performs the following:
if the display form of the voice assistant is a full-screen state and the feedback form of the service indicated by the new indication information is text feedback, voice feedback or card feedback, the new display form of the voice assistant is a full-screen state; wherein, if the application related to the service indicated by the new indication information is displayed in card form in the display interface of the voice assistant, the feedback form of the service indicated by the new indication information is card feedback;
if the display form of the voice assistant is a full screen state and the feedback form of the service indicated by the new indication information is split screen feedback, the new display form of the voice assistant is a suspension state; if the service indicated by the new indication information relates to application interface switching, the feedback form of the service indicated by the new indication information is split screen feedback;
if the display form of the voice assistant is a half-screen state and the new indication information lacks keywords, the new display form of the voice assistant is a full-screen state;
if the display form of the voice assistant is a half-screen state, the new indication information does not lack keywords, and the feedback form of the service corresponding to the new indication information is text feedback or voice feedback, the new display form of the voice assistant is a half-screen state;
if the display form of the voice assistant is a half-screen state, the new indication information does not lack keywords, and the feedback form of the service corresponding to the new indication information is card feedback, the new display form of the voice assistant is a full-screen state;
and if the display form of the voice assistant is a half-screen state, the new indication information does not lack keywords, and the feedback form of the service corresponding to the new indication information is split-screen feedback, the new display form of the voice assistant is a suspension state.
22. The electronic device of claim 20 or 21, wherein the half-screen states of the voice assistant further include a small half-screen state and a large half-screen state, wherein the small half-screen state is that a ratio of a display interface of the voice assistant to an overall display interface of the electronic device is less than 0.5; the large half screen state means that the proportion of the display interface of the voice assistant to the whole display interface of the electronic equipment is more than 0.5; the first display form is a small half screen form; when the processor reads the computer instructions from the memory, the electronic device is further caused to perform the following operations:
the determining a new display form of the voice assistant according to the new indication information and the service corresponding to the new indication information includes:
if the display form of the voice assistant is in a small half screen state, the new indication information does not lack keywords, and the feedback form of the service indicated by the new indication information is text feedback or voice feedback, the display form of the voice assistant is in a small half screen state;
and if the display form of the voice assistant is in a small half screen state, the new indication information does not lack keywords, and the feedback form of the service indicated by the new indication information is card feedback, the display form of the voice assistant is in a large half screen state.
23. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the voice assistant display method of any of claims 1-11.
24. A chip system comprising one or more processors that when executing instructions perform the voice assistant display method of any of claims 1-11.
25. A graphical user interface on an electronic device with a display screen, a camera, a memory, and one or more processors to execute one or more computer programs stored in the memory, the graphical user interface comprising a graphical user interface displayed when the electronic device performs the voice assistant display method of any of claims 1-11.
CN201910883296.9A 2019-09-18 2019-09-18 Voice assistant display method and device Pending CN110825469A (en)

Priority Applications (2)

- CN201910883296.9A (CN110825469A) — priority date 2019-09-18, filed 2019-09-18 — Voice assistant display method and device
- PCT/CN2020/114899 (WO2021052263A1) — priority date 2019-09-18, filed 2020-09-11 — Voice assistant display method and device

Publications (1)

- CN110825469A — published 2020-02-21

Family ID: 69548053

Country status (2): CN — CN110825469A; WO — WO2021052263A1


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102655554A (en) * 2012-04-19 2012-09-05 Huizhou TCL Mobile Communication Co Ltd Wireless communication device and control method thereof during navigation
CN104731613A (en) * 2015-01-30 2015-06-24 Shenzhen ZTE Mobile Telecom Co Ltd Quick application starting method and system
CN104898952A (en) * 2015-06-16 2015-09-09 Meizu Technology (China) Co Ltd Terminal split-screen implementation method and terminal
CN105302837A (en) * 2014-07-31 2016-02-03 Tencent Technology (Shenzhen) Co Ltd Information query method and terminal
CN107102806A (en) * 2017-01-25 2017-08-29 Vivo Mobile Communication Co Ltd Split-screen input method and mobile terminal
CN107315518A (en) * 2017-06-27 2017-11-03 Nubia Technology Co Ltd Terminal split-screen method and device, and computer-readable storage medium
CN109243462A (en) * 2018-11-20 2019-01-18 Guangdong Genius Technology Co Ltd Voice wake-up method and device
CN109491562A (en) * 2018-10-09 2019-03-19 Gree Electric Appliances Inc of Zhuhai Interface display method and terminal device for a voice assistant application
CN109669754A (en) * 2018-12-25 2019-04-23 AI Speech Co Ltd Dynamic display method for a voice interaction window, and voice interaction method and device with a scalable interaction window

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102450882B1 (en) * 2017-12-22 2022-10-05 Samsung Electronics Co Ltd Electronic device and method for operating a function in accordance with a stroke input to the device
CN109151200A (en) * 2018-08-27 2019-01-04 Vivo Mobile Communication Co Ltd Communication method and mobile terminal
CN109584879B (en) * 2018-11-23 2021-07-06 Huawei Technologies Co Ltd Voice control method and electronic device
CN110018858B (en) * 2019-04-02 2022-03-01 Hangzhou Moran Cognitive Technology Co Ltd Application management method and device based on voice control
CN110825469A (en) * 2019-09-18 2020-02-21 Huawei Technologies Co Ltd Voice assistant display method and device

Cited By (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11671920B2 (en) 2007-04-03 2023-06-06 Apple Inc. Method and system for operating a multifunction portable electronic device using voice-activation
US11348582B2 (en) 2008-10-02 2022-05-31 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US11900936B2 (en) 2008-10-02 2024-02-13 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US11321116B2 (en) 2012-05-15 2022-05-03 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
US11862186B2 (en) 2013-02-07 2024-01-02 Apple Inc. Voice trigger for a digital assistant
US11557310B2 (en) 2013-02-07 2023-01-17 Apple Inc. Voice trigger for a digital assistant
US11636869B2 (en) 2013-02-07 2023-04-25 Apple Inc. Voice trigger for a digital assistant
US11388291B2 (en) 2013-03-14 2022-07-12 Apple Inc. System and method for processing voicemail
US11798547B2 (en) 2013-03-15 2023-10-24 Apple Inc. Voice activated device for use with a voice-based digital assistant
US11727219B2 (en) 2013-06-09 2023-08-15 Apple Inc. System and method for inferring user intent from speech inputs
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US11670289B2 (en) 2014-05-30 2023-06-06 Apple Inc. Multi-command single utterance input method
US11699448B2 (en) 2014-05-30 2023-07-11 Apple Inc. Intelligent assistant for home automation
US11810562B2 (en) 2014-05-30 2023-11-07 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US11516537B2 (en) 2014-06-30 2022-11-29 Apple Inc. Intelligent automated assistant for TV user interactions
US11838579B2 (en) 2014-06-30 2023-12-05 Apple Inc. Intelligent automated assistant for TV user interactions
US11842734B2 (en) 2015-03-08 2023-12-12 Apple Inc. Virtual assistant activation
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US11070949B2 (en) 2015-05-27 2021-07-20 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display
US11947873B2 (en) 2015-06-29 2024-04-02 Apple Inc. Virtual assistant for media playback
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US11853536B2 (en) 2015-09-08 2023-12-26 Apple Inc. Intelligent automated assistant in a media environment
US11126400B2 (en) 2015-09-08 2021-09-21 Apple Inc. Zero latency digital assistant
US11809483B2 (en) 2015-09-08 2023-11-07 Apple Inc. Intelligent automated assistant for media search and playback
US11954405B2 (en) 2015-09-08 2024-04-09 Apple Inc. Zero latency digital assistant
US11550542B2 (en) 2015-09-08 2023-01-10 Apple Inc. Zero latency digital assistant
US11809886B2 (en) 2015-11-06 2023-11-07 Apple Inc. Intelligent automated assistant in a messaging environment
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US11886805B2 (en) 2015-11-09 2024-01-30 Apple Inc. Unconventional virtual assistant interactions
US11853647B2 (en) 2015-12-23 2023-12-26 Apple Inc. Proactive assistance based on dialog communication between devices
US11657820B2 (en) 2016-06-10 2023-05-23 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US11809783B2 (en) 2016-06-11 2023-11-07 Apple Inc. Intelligent device arbitration and control
US11749275B2 (en) 2016-06-11 2023-09-05 Apple Inc. Application integration with a digital assistant
US11467802B2 (en) 2017-05-11 2022-10-11 Apple Inc. Maintaining privacy of personal information
US11599331B2 (en) 2017-05-11 2023-03-07 Apple Inc. Maintaining privacy of personal information
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US11580990B2 (en) 2017-05-12 2023-02-14 Apple Inc. User-specific acoustic models
US11538469B2 (en) 2017-05-12 2022-12-27 Apple Inc. Low-latency intelligent automated assistant
US11380310B2 (en) 2017-05-12 2022-07-05 Apple Inc. Low-latency intelligent automated assistant
US11862151B2 (en) 2017-05-12 2024-01-02 Apple Inc. Low-latency intelligent automated assistant
US11837237B2 (en) 2017-05-12 2023-12-05 Apple Inc. User-specific acoustic models
US11532306B2 (en) 2017-05-16 2022-12-20 Apple Inc. Detecting a trigger of a digital assistant
US11675829B2 (en) 2017-05-16 2023-06-13 Apple Inc. Intelligent automated assistant for media exploration
US11710482B2 (en) 2018-03-26 2023-07-25 Apple Inc. Natural assistant interaction
US11900923B2 (en) 2018-05-07 2024-02-13 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11854539B2 (en) 2018-05-07 2023-12-26 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US11907436B2 (en) 2018-05-07 2024-02-20 Apple Inc. Raise to speak
US11169616B2 (en) 2018-05-07 2021-11-09 Apple Inc. Raise to speak
US11487364B2 (en) 2018-05-07 2022-11-01 Apple Inc. Raise to speak
US11360577B2 (en) 2018-06-01 2022-06-14 Apple Inc. Attention aware virtual assistant dismissal
US11630525B2 (en) 2018-06-01 2023-04-18 Apple Inc. Attention aware virtual assistant dismissal
US11431642B2 (en) 2018-06-01 2022-08-30 Apple Inc. Variable latency device coordination
US10984798B2 (en) 2018-06-01 2021-04-20 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11009970B2 (en) 2018-06-01 2021-05-18 Apple Inc. Attention aware virtual assistant dismissal
US11893992B2 (en) 2018-09-28 2024-02-06 Apple Inc. Multi-modal inputs for voice commands
US11783815B2 (en) 2019-03-18 2023-10-10 Apple Inc. Multimodality in digital assistant systems
US11705130B2 (en) 2019-05-06 2023-07-18 Apple Inc. Spoken notifications
US11675491B2 (en) 2019-05-06 2023-06-13 Apple Inc. User configurable task triggers
US11888791B2 (en) 2019-05-21 2024-01-30 Apple Inc. Providing message response suggestions
US11657813B2 (en) 2019-05-31 2023-05-23 Apple Inc. Voice identification in digital assistant systems
US11237797B2 (en) 2019-05-31 2022-02-01 Apple Inc. User activity shortcut suggestions
US11790914B2 (en) 2019-06-01 2023-10-17 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
WO2021052263A1 (en) * 2019-09-18 2021-03-25 Huawei Technologies Co Ltd Voice assistant display method and device
US11765209B2 (en) 2020-05-11 2023-09-19 Apple Inc. Digital assistant hardware abstraction
US11914848B2 (en) 2020-05-11 2024-02-27 Apple Inc. Providing relevant data items based on context
US11924254B2 (en) 2020-05-11 2024-03-05 Apple Inc. Digital assistant hardware abstraction
US11838734B2 (en) 2020-07-20 2023-12-05 Apple Inc. Multi-device audio adjustment coordination
US11750962B2 (en) 2020-07-21 2023-09-05 Apple Inc. User identification using headphones
US11696060B2 (en) 2020-07-21 2023-07-04 Apple Inc. User identification using headphones
CN111813491A (en) * 2020-08-19 2020-10-23 Guangzhou Automobile Group Co Ltd Anthropomorphic interaction method and device for an in-vehicle assistant, and automobile
CN111813491B (en) * 2020-08-19 2020-12-18 Guangzhou Automobile Group Co Ltd Anthropomorphic interaction method and device for an in-vehicle assistant, and automobile
WO2023005711A1 (en) * 2021-07-28 2023-02-02 Huawei Technologies Co Ltd Service recommendation method and electronic device
CN113805747A (en) * 2021-08-12 2021-12-17 Honor Device Co Ltd Information reminding method, electronic device, and computer-readable storage medium
CN113805747B (en) * 2021-08-12 2023-07-25 Honor Device Co Ltd Information reminding method, electronic device, and computer-readable storage medium
CN114327349A (en) * 2021-12-13 2022-04-12 Qingdao Haier Technology Co Ltd Smart card determination method and device, storage medium, and electronic device
CN114327349B (en) * 2021-12-13 2024-03-22 Qingdao Haier Technology Co Ltd Smart card determination method and device, storage medium, and electronic device

Also Published As

Publication number Publication date
WO2021052263A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
WO2021052263A1 (en) Voice assistant display method and device
RU2766255C1 (en) Voice control method and electronic device
CN110114747B (en) Notification processing method and electronic equipment
CN110138959B (en) Method for displaying prompt of human-computer interaction instruction and electronic equipment
CN110910872B (en) Voice interaction method and device
CN113645351B (en) Application interface interaction method, electronic device and computer-readable storage medium
CN111819533B (en) Method for triggering electronic equipment to execute function and electronic equipment
CN110633043A (en) Split screen processing method and terminal equipment
CN111742539B (en) Voice control command generation method and terminal
CN111602108B (en) Application icon display method and terminal
WO2021052139A1 (en) Gesture input method and electronic device
CN114077365A (en) Split screen display method and electronic equipment
WO2021218429A1 (en) Method for managing application window, and terminal device and computer-readable storage medium
CN111835904A (en) Method for starting application based on context awareness and user portrait and electronic equipment
CN113141483A (en) Screen sharing method based on video call and mobile device
CN115589051B (en) Charging method and terminal equipment
CN114995715B (en) Control method of floating ball and related device
CN113380240B (en) Voice interaction method and electronic equipment
CN115022807A (en) Express delivery information reminding method and electronic equipment
WO2024012346A1 (en) Task migration method, electronic device, and system
WO2022042774A1 (en) Profile picture display method and electronic device
WO2022052767A1 (en) Method for controlling device, electronic device, and system
CN113973152A (en) Unread message quick reply method and electronic equipment
CN115883710A (en) Identification method and device for incoming call number
CN115513571A (en) Control method of battery temperature and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221