CN111190668A - User interface UI event response method and device, electronic equipment and storage medium
- Publication number: CN111190668A
- Application number: CN201911311290.0A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Abstract
The disclosure provides a user interface (UI) event response method and apparatus, an electronic device, and a storage medium. The method includes: acquiring UI event data uploaded by a client; acquiring, based on the UI event data, an event intention corresponding to the triggered UI event; and accessing, based on the event intention, a target skill for responding to the UI event, so as to provide the client with the service requested by the UI event. Embodiments of the disclosure can improve the extensibility of a client in responding to UI events.
Description
Technical Field
The disclosure relates to the field of artificial intelligence, and in particular to a user interface (UI) event response method, a UI event response apparatus, an electronic device, and a storage medium.
Background
With the development of information technology, people increasingly obtain a variety of intelligent services through clients in daily life, for example, quickly jumping to a desired program channel on a smart television, or quickly searching for and playing a song on a smart speaker. In the prior art, for a client to support such intelligent services, the client itself must have a certain ability to understand the event intention when a user interface (UI) event is triggered. This results in low extensibility: only those clients that have event-intention understanding capability can respond to UI events.
Disclosure of Invention
An object of the present disclosure is to provide a user interface (UI) event response method, apparatus, electronic device, and storage medium, which can improve the extensibility of a client in responding to UI events.
According to an aspect of the disclosed embodiments, a method for responding to a user interface UI event is disclosed, the method comprising:
acquiring UI event data uploaded by a client;
acquiring an event intention corresponding to the triggered UI event based on the UI event data;
accessing a target skill for responding to the UI event based on the event intent to provide the service requested by the UI event to the client.
According to an aspect of the disclosed embodiments, a user interface UI event response apparatus is disclosed, the apparatus comprising:
the first acquisition module is configured to acquire UI event data uploaded by the client;
the second acquisition module is configured to acquire an event intention corresponding to the triggered UI event based on the UI event data;
an access module configured to access a target skill for responding to the UI event based on the event intent to provide a service requested by the UI event to the client.
According to an aspect of the disclosed embodiments, there is disclosed a user interface, UI, event response electronic device, comprising: a memory storing computer readable instructions; a processor reading computer readable instructions stored by the memory to perform the method of any of the preceding claims.
According to an aspect of embodiments of the present disclosure, a computer-readable storage medium is disclosed, having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any of the preceding claims.
In the embodiments of the disclosure, when a UI event is to be responded to, the client transparently transmits the UI event data to the cloud server. The cloud server determines the event intention from the received UI event data and, on that basis, accesses the target skill for responding to the UI event, thereby providing the client with the service requested by the UI event. In this way, the understanding of UI event data is concentrated in the cloud server, and there is no requirement that the client itself understand the event intention corresponding to the UI event; the threshold for a client to respond to UI events is therefore lowered, and the extensibility of clients in responding to UI events is improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates an architectural diagram of a user interface UI event response application according to one embodiment of the disclosure.
FIG. 2 illustrates a display interface before and after a client responds to a UI event according to one embodiment of the disclosure.
FIG. 3 illustrates a flow diagram of a user interface UI event response method according to one embodiment of the present disclosure.
FIG. 4 illustrates an overall technical architecture according to one embodiment of the present disclosure.
FIG. 5 illustrates a process of registering, publishing GUI events according to one embodiment of the present disclosure.
FIG. 6 illustrates a process of triggering a VUI event according to one embodiment of the present disclosure.
FIG. 7 illustrates a GUI event triggering process according to one embodiment of the present disclosure.
FIG. 8 illustrates a block diagram of a user interface UI event response device according to an embodiment of the present disclosure.
FIG. 9 illustrates a hardware diagram of a user interface UI event response electronic device, according to one embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
First, the concept related to the embodiments of the present disclosure will be explained.
In the embodiments of the present disclosure, a UI event refers to the process in which an instruction issued through a user interface (UI) is responded to. UI events can be broadly divided into two types. 1. A GUI event is the process in which an instruction issued through a graphical user interface (GUI) is responded to, and is mainly triggered by an action on the client screen. For example, the user triggers a corresponding GUI event by clicking the "next" button on the client screen. 2. A VUI event is the process in which an instruction issued through a voice user interface (VUI) is responded to, and is mainly triggered by voice sent to the client. For example, the user triggers a corresponding VUI event by saying "I would like to listen to Jay Chou's album" to the client.
UI event data refers to the data related to triggering a corresponding UI event, and includes at least the data that directly triggers that UI event, for example: the voice data that triggers a VUI event, or the action data and component data that trigger a GUI event.
The UI event intention refers to the intention indicated by an instruction issued through the user interface UI, and describes the service requested by that instruction. When stored and managed in data form, the event intention includes at least data corresponding to the indicated action. For example, an event intention at the natural-semantic level may be "play the next song"; when stored and managed in data form, it may be expressed as "audio_next" to indicate the "play next" action.
A skill refers to a set of functions, or a particular function, that can provide a particular service; the target skill refers to the skill that responds to a triggered UI event and satisfies the corresponding event intention.
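For illustration only, these concepts might be represented by data structures along the following lines (a minimal sketch; the class and field names are assumptions and do not appear in the disclosure):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class VUIEventData:
    """Data uploaded when a voice user interface (VUI) event is triggered."""
    voice_data: bytes                     # the voice that triggered the VUI event


@dataclass
class GUIEventData:
    """Structured data in a preset format, uploaded when a GUI event is triggered."""
    action: str                           # e.g. "add" - the triggered action
    component: str                        # e.g. "favorite" - the triggered component
    current_page: Optional[str] = None    # page the client is currently on


@dataclass
class EventIntent:
    """Event intention kept in data form, e.g. "audio_next" for "play the next song"."""
    action: str
```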
The architecture for an application of an embodiment of the present disclosure is described below with reference to FIG. 1.
FIG. 1 illustrates the components of the architecture of an embodiment of the present disclosure: a client 10 and a cloud server 20. The user may issue a corresponding instruction to the client 10 by clicking on the client screen or by speaking to the client, thereby triggering a corresponding UI event in order to obtain a corresponding service. Responding to the UI event means providing that service to the user.
Specifically, after receiving the instruction and triggering the UI event, the client 10 uploads the corresponding UI event data to the cloud server 20. The cloud server 20 obtains an event intention corresponding to the UI event based on the UI event data, so as to access a target skill for responding to the UI event, thereby providing the service requested by the UI event to the client 10, and thus enabling the client 10 to provide the corresponding service to the user.
It should be noted that the party triggering the UI event is not necessarily the user; it may also be an intermediate device between the user and the client 10. The architecture components of this example are exemplary only and are not intended as limitations on the scope of use or functionality of the present disclosure.
FIG. 2 illustrates the user interface before and after the client responds to a UI event according to an embodiment of the disclosure. In the left diagram, the client is playing "song 1" and displays, in the UI on the client screen, the information related to song 1 (its song title, artist name, and album information), its playing-state information, and its playing-progress information. After the user clicks the "next" button on the client screen in the left diagram, the client stops playing song 1, plays the next song, "song 2", and refreshes the UI on the client screen to show the information related to song 2 (its song title, artist name, and album information), the playing-state information of song 2, and the playing-progress information of song 2.
Specifically, in this embodiment, when the user clicks the "next" button, the corresponding GUI event is triggered. The client acquires the corresponding GUI event data according to the user's clicking action and uploads the GUI event data to the cloud server. The cloud server obtains, from the received GUI event data, the corresponding event intention at the natural-semantic level, namely "play the next song", and then accesses the target skill. After the target skill is accessed, the cloud server transmits the relevant parameters and instructions for playing the next song to the client. According to the received parameters and instructions, the client plays song 2, the song after song 1, and displays the information about song 2 on the client screen.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Specific implementations of embodiments of the present disclosure are described in detail below.
Referring to FIG. 3, a user interface UI event response method includes:
step 310, acquiring UI event data uploaded by a client;
step 320, acquiring, based on the UI event data, an event intention corresponding to the triggered UI event;
and step 330, accessing, based on the event intention, a target skill for responding to the UI event, so as to provide the client with the service requested by the UI event.
In the embodiments of the disclosure, when a UI event is to be responded to, the client transparently transmits the UI event data to the cloud server. The cloud server determines the event intention from the received UI event data and, on that basis, accesses the target skill for responding to the UI event, thereby providing the client with the service requested by the UI event. In this way, the understanding of UI event data is concentrated in the cloud server, and there is no requirement that the client itself understand the event intention corresponding to the UI event; the threshold for a client to respond to UI events is therefore lowered, and the extensibility of clients in responding to UI events is improved.
The steps of the disclosed embodiments are described in detail below.
In step 310, UI event data uploaded by the client is obtained.
In step 320, based on the UI event data, an event intention corresponding to the triggered UI event is obtained.
In the embodiments of the present disclosure, when a UI event is triggered, the client collects and uploads the UI event data that triggered the UI event. For example, the user says "I want to listen to Jay Chou's album" to the client, issuing a corresponding voice instruction and triggering a corresponding VUI event; the client then uploads the collected VUI event data related to the VUI event to the cloud server. As another example, the user issues a corresponding action instruction by clicking a "play next" button on the client screen, triggering a corresponding GUI event; the client then uploads the collected GUI event data related to the GUI event to the cloud server.
In the embodiment of the disclosure, after the client uploads the UI event data, the cloud server obtains the corresponding event intention on the basis according to the obtained UI event data.
In one embodiment, the UI event data is voice user interface VUI event data, the triggered UI event is a VUI event, and the VUI event data includes voice data triggering the VUI event.
Acquiring, based on the UI event data, the event intention corresponding to the triggered UI event includes: processing the voice data based on a preset natural language processing technology to acquire the event intention corresponding to the VUI event.
In the embodiment, the user sends a voice command to the client through the VUI to trigger a corresponding VUI event. Therefore, the client uploads the VUI event data containing the voice data triggering the VUI event to the cloud server. After receiving the VUI event data, the cloud server processes the voice data based on a preset natural language processing technology, performs semantic analysis on the voice data, and further obtains an event intention corresponding to the VUI event.
For example: the user triggers the corresponding VUI event by saying "i would like to listen to the next song" to the client. The client uploads the VUI event data containing the voice data of 'i want to listen to the next song' to the cloud server. Therefore, the cloud server carries out semantic analysis on the voice data of 'i want to listen to next song' based on a preset natural language processing technology, and further obtains an event intention of a natural semantic layer corresponding to the VUI event- 'play next'. Specifically, the event at the natural semantic level is intended to be stored in the form of data, and may be denoted as "audio _ next".
This embodiment has the advantage that, because the voice data is processed with the natural language processing technology on the server, the client only needs to collect and upload the VUI event data containing the voice data and does not need to understand the event intention, which improves the extensibility of the client in responding to UI events.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, the UI event data is GUI event data, the triggered UI event is a GUI event, and the GUI event data is structured data in a preset format.
Acquiring, based on the UI event data, the event intention corresponding to the triggered UI event includes: acquiring the event intention corresponding to the GUI event by comparing the GUI event data against a preset GUI event mapping table, wherein the GUI event mapping table stores the GUI event data corresponding to pre-registered GUI events and the event intentions to which that GUI event data is mapped.
In this embodiment, a GUI event mapping table is preset in the cloud server. The GUI event mapping table pre-stores the GUI event data corresponding to GUI events and the event intentions to which that data is mapped. The user triggers a corresponding GUI event by issuing an action instruction to the client; the client therefore uploads the structured GUI event data, in the preset format, that triggered the GUI event to the cloud server. After receiving the GUI event data, the cloud server can obtain the event intention corresponding to the GUI event from the GUI event mapping table.
This embodiment has the advantage that, with the GUI event mapping table preset, the client only needs to collect and upload GUI event data and does not need to understand the event intention, which improves the extensibility of the client in responding to UI events.
In one embodiment, the GUI event data includes: action data triggering corresponding GUI events, component data triggering corresponding GUI events.
In this embodiment, the structured GUI event data in the preset format includes two parts: the action data that triggers the corresponding GUI event and the component data that triggers the corresponding GUI event. The component data describes the component that is triggered when the GUI event is triggered; the action data describes the action that is triggered when the GUI event is triggered, that is, the action performed on the triggered component.
For example: when the user plays a song at the client, the user clicks the "favorites" button, thereby triggering a corresponding GUI event. The GUI event data collected by the client is 'add _ favorite', wherein 'favorite' describes the triggered 'favorites' component, and 'add' describes the triggered 'add' action.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Table 1 below shows a GUI event mapping table in an embodiment of the present disclosure.
In this embodiment, the GUI event mapping table stores the GUI event data corresponding to 4 pre-registered GUI events and the event intentions mapped to that data. 1. The GUI event triggered by clicking the "album" button: the corresponding GUI event data is "open_palace", and the event intention mapped to it is "@action_open_palace". 2. The GUI event triggered by clicking the "favorites" button: the corresponding GUI event data is "add_favorite", and the event intention mapped to it is "@action_add_favorite". 3. The GUI event triggered by clicking the "cancel favorites" button: the corresponding GUI event data is "cancel_favorite", and the event intention mapped to it is "@action_delete_favorite". 4. The GUI event triggered by clicking the "next" button while a song is playing: the corresponding GUI event data is "click_audio_next", and the event intention mapped to it is "@action_audio_next". Here "@action" is the syntax prefix of event intentions in this embodiment.
GUI event (event description) | GUI event data | Event intention
Click the "album" button | open_palace | @action_open_palace
Click the "favorites" button | add_favorite | @action_add_favorite
Click the "cancel favorites" button | cancel_favorite | @action_delete_favorite
Click the "next" button | click_audio_next | @action_audio_next
TABLE 1
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
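Held in the cloud server, Table 1 might amount to no more than a key-value mapping, with intent acquisition being a single lookup (a sketch; the storage form is an assumption):

```python
# GUI event mapping table from Table 1: GUI event data -> mapped event intention.
GUI_EVENT_MAPPING = {
    "open_palace":      "@action_open_palace",
    "add_favorite":     "@action_add_favorite",
    "cancel_favorite":  "@action_delete_favorite",
    "click_audio_next": "@action_audio_next",
}


def get_gui_event_intent(gui_event_data: str) -> str | None:
    """Look up the event intention mapped to the uploaded GUI event data."""
    return GUI_EVENT_MAPPING.get(gui_event_data)
```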
In one embodiment, the form of the triggered component includes: buttons, check boxes, lists. The form of the triggered action includes: clicking and sliding.
In this embodiment, the user may trigger a GUI event by operating a button on the client screen, for example by clicking the button; by operating a check box on the client screen, for example by clicking the check box; or by operating a list on the client screen, for example by sliding the list.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, the GUI event is pre-registered by:
acquiring GUI event data corresponding to the GUI event requesting registration;
acquiring an event intention of the request mapped with the GUI event data;
and storing the mapping relation between the GUI event data and the event intention into the GUI event mapping table to realize the pre-registration of the GUI event.
In this embodiment, a client standard UI developer may develop and build new GUI events according to business requirements. Specifically, GUI event data corresponding to a new GUI event and an event intention mapped by the GUI event data are constructed. After the construction is completed, the developer can send the GUI event data and the event intention to the cloud server, and send a registration request for the GUI event to the cloud server to request registration of the GUI event.
After receiving the registration request for the GUI event, the cloud server: obtains the GUI event data corresponding to the GUI event; acquires the event intention mapped to the GUI event data; and stores the mapping relationship between the GUI event data and the event intention into the GUI event mapping table, thereby completing the pre-registration of the GUI event.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, before storing the mapping relationship between the GUI event data and the event intent in the GUI event mapping table, the method further comprises: and checking the mapping relation between the GUI event data and the event intention.
In this embodiment, after receiving the registration request, the cloud server verifies the GUI event requesting registration before deciding to register it. Specifically, the mapping relationship between the GUI event data corresponding to the GUI event and the event intention requested to be mapped to that data is verified, so as to ensure the correctness and validity of the GUI event requested to be registered.
In this embodiment, storing the mapping relationship between the GUI event data and the event intent in the GUI event mapping table includes: and if the mapping relation between the GUI event data and the event intention passes the verification, storing the mapping relation between the GUI event data and the event intention into the GUI event mapping table.
In this embodiment, after the GUI event requested to be registered passes verification, that is, after the mapping relationship between the GUI event data corresponding to the GUI event and the event intention requested to be mapped to that data passes verification, registration of the GUI event is confirmed, and the mapping relationship between the GUI event data and the event intention is stored in the GUI event mapping table.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, the method further comprises: and generating a corresponding unique event identifier for the GUI event requesting registration to locate the GUI event.
In this embodiment, after receiving the registration request for the GUI event requesting registration, the cloud server generates a corresponding unique event identifier for the GUI event, so that the GUI event can be accurately located in subsequent queries, calls, or other processing of the GUI event.
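Putting the registration steps together, the event registration service might be sketched as follows (the verification rule and the identifier format are assumptions made for illustration):

```python
import uuid


def verify_mapping(gui_event_data: str, event_intent: str) -> bool:
    """Placeholder check on the correctness/validity of the requested mapping."""
    return bool(gui_event_data) and event_intent.startswith("@action_")


def register_gui_event(gui_event_data: str, event_intent: str,
                       mapping_table: dict[str, str]) -> str:
    """Pre-register a GUI event: verify the mapping, store it, return a unique event id."""
    if not verify_mapping(gui_event_data, event_intent):
        raise ValueError("mapping between GUI event data and event intention failed verification")
    mapping_table[gui_event_data] = event_intent   # store the mapping in the GUI event mapping table
    return str(uuid.uuid4())                       # unique event identifier used to locate the event
```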
In one embodiment, the method comprises:
integrating the pre-registered GUI event into a UI resource corresponding to the GUI event;
and sending the UI resource to the client, so that the client can upload GUI event data corresponding to the GUI event according to the UI resource when the GUI event is triggered.
In this embodiment, after the pre-registration of the GUI event is completed, the GUI event is integrated into the UI resource corresponding to the GUI event, and the UI resource is sent to the client. After acquiring the UI resource, the client: on receiving a UI replacement instruction, replaces the UI according to the UI resource and displays the replaced UI on the screen; and, when the GUI event is triggered, queries the UI resource to obtain the GUI event data corresponding to the GUI event and uploads that data to the cloud server. Because the GUI event has been registered in the cloud server in advance, and the mapping relationship between the corresponding GUI event data and event intention is stored in the GUI event mapping table, the cloud server can obtain the event intention corresponding to the GUI event from the GUI event data.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, sending the UI resource to the client includes: and responding to a UI resource updating request periodically sent by the client, and sending the UI resource to the client.
In this embodiment, the client periodically sends a UI resource update request to the cloud server to request an update of the UI resource stored by the client. And after receiving the UI resource updating request sent by the client, the cloud server sends the UI resource integrated with the pre-registered GUI event to the client.
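On the client side, the periodic UI-resource update and the upload of GUI event data when an event is triggered might be sketched as below (the endpoint URLs, payload fields, and HTTP transport are assumptions; the disclosure only states that the client requests updates periodically and uploads the event data):

```python
import time

import requests  # assumed transport; the disclosure does not name one

CLOUD_BASE_URL = "https://cloud.example.com"   # hypothetical cloud server endpoint

ui_resource: dict = {}   # locally cached UI resource, including registered GUI events


def update_ui_resource_periodically(interval_s: float = 3600.0) -> None:
    """Periodically ask the cloud WEB service for the latest UI resource."""
    global ui_resource
    while True:
        ui_resource = requests.get(f"{CLOUD_BASE_URL}/ui-resource").json()
        time.sleep(interval_s)


def on_gui_event_triggered(action: str, component: str, current_page: str) -> None:
    """Look up the GUI event data in the cached UI resource and upload it."""
    gui_event_data = ui_resource.get("events", {}).get(f"{action}_{component}")
    if gui_event_data is None:
        return   # event not present in the current UI resource
    requests.post(f"{CLOUD_BASE_URL}/ui-event",
                  json={"event_data": gui_event_data, "page": current_page})
```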
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In step 330, based on the event intent, target skills for responding to the UI event are accessed to provide the client with the service requested by the UI event.
In the embodiments of the disclosure, a skill routing service is preset in the cloud server. The skill routing service has built-in skill routing logic that can route to the target skill for responding to a UI event according to the event intention.
After obtaining the event intention, the cloud server can route to the target skill based on the event intention using the preset skill routing service, and then access the target skill to provide the client with the service requested by the corresponding UI event.
In one embodiment, before accessing the target skills for responding to the UI event based on the event intent, the method further comprises:
performing security filtering on the event intention based on a preset security filtering rule, and determining whether the corresponding UI event is registered;
and if the UI event is not registered, refusing to access the corresponding target skill.
In this embodiment, the cloud server accesses the target skill and provides the subsequent service only for registered UI events. The event intention obtained from the UI event data uploaded by the client may not correspond to a registered UI event. Therefore, after the event intention is acquired and before access to the target skill is determined, the cloud server performs security filtering on the acquired event intention based on a preset security filtering rule, to determine whether the corresponding UI event is registered. If the UI event is not registered, responding to it carries risk, and access to the target skill is refused.
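A minimal sketch of this filtering step, under the assumption that "registered" simply means the intention appears in the set of intentions recorded at registration time:

```python
# Event intentions of pre-registered UI events (illustrative values).
REGISTERED_INTENTS = {
    "@action_open_palace",
    "@action_add_favorite",
    "@action_delete_favorite",
    "@action_audio_next",
}


def security_filter(event_intent: str) -> bool:
    """Return True only if the intention corresponds to a registered UI event."""
    return event_intent in REGISTERED_INTENTS


def handle_intent(event_intent: str) -> None:
    if not security_filter(event_intent):
        # Unregistered UI event: responding would be risky, so refuse skill access.
        raise PermissionError(f"unregistered UI event intention: {event_intent}")
    # ...otherwise continue to skill routing...
```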
This embodiment has the advantage that the security filtering reduces the possibility of harm caused by illegitimate UI events and improves the security of responding to UI events.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, before accessing the target skills for responding to the UI event based on the event intent, the method further comprises: and acquiring an event field corresponding to the UI event based on the UI event data, wherein the event field comprises the unique skill identification of the target skill.
Accessing a corresponding target skill based on the event intent, comprising:
determining a target skill corresponding to the UI event based on the event intention and the event field;
generating a service request aiming at the target skill based on the event intention and the event field;
accessing the target skill and sending the service request to the target skill to provide the service requested by the UI event.
In this embodiment, event fields are divided in advance for the skills. An event field contains the unique skill identification corresponding to a skill, and the skill identification describes the field served by the skill; event intentions are further divided within an event field and mainly describe the actions performed by the skill. For example, the skill "play the next song" corresponds to the event field "music", and its event intention is "audio_next"; the skill "save the current movie" corresponds to the event field "film (movie field)", and its event intention is "add_favorite".
After receiving the UI event data uploaded by the client, the cloud server obtains, based on the UI event data, both the event intention corresponding to the triggered UI event and the event field corresponding to the triggered UI event. It then locates the target skill according to the event intention and the event field, generates a service request for the target skill, accesses the target skill, and sends the service request to the target skill so as to provide the service requested by the UI event.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, the UI event is a VUI event, the UI event data is VUI event data, and the VUI event data includes voice data that triggers the VUI event. Based on the UI event data, acquiring an event field corresponding to the UI event, including: and processing the voice data based on a preset natural language processing technology to acquire an event field corresponding to the VUI event.
In this embodiment, the cloud server processes the voice data in the VUI event data based on a preset natural language processing technique, performs semantic analysis, and further determines the corresponding event field.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, the UI event is a GUI event, the UI event data is GUI event data, and the UI event data includes data of a page where the client is currently located. Based on the UI event data, acquiring an event field corresponding to the UI event, including: and acquiring an event field corresponding to the GUI event based on the current page data.
In this embodiment, after a GUI event is triggered, the GUI event data uploaded by the client includes the data of the page where the client is currently located. This current-page data indicates which page the client is currently on, for example a song selection page, a movie selection page, or another page, so the cloud server can determine the event field corresponding to the GUI event from the client's current-page data.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, before accessing the corresponding target skill based on the event intent, the method further comprises: and acquiring an action object parameter contained in the UI event data, wherein the action object parameter describes an action object aimed by the UI event.
Accessing a corresponding target skill based on the event intent, comprising:
determining a target skill corresponding to the UI event based on the event intention and the event field;
generating a service request aiming at the target skill based on the event intention, the event field and the action object parameter;
accessing the target skill and sending the service request to the target skill to provide the service requested by the UI event for the action object.
In this embodiment, before accessing the target skill, in addition to acquiring the event field from the UI event data, the cloud server also acquires the action object parameter to determine the action object to which the UI event is directed. It then locates the target skill according to the event intention and the event field; generates a service request for the target skill based on the event intention, the event field, and the action object parameter; and accesses the target skill and sends the service request to it, so as to provide the action object with the service requested by the UI event.
For example: the voice data in the obtained VUI event data is 'I want to listen to the album of Zhougelon', so that the condition that the event field is 'music', the event intention is 'play' and the action object is 'Zhougelon album' is determined. After the cloud server accesses the target skill capable of providing album playing, a corresponding service request is generated for the Zhongjilun album, so that the target skill request is used for playing the Zhongjilun album.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
The overall technical architecture in an embodiment of the present disclosure is described below with reference to fig. 4.
In this embodiment, the cloud server is logically divided into: the system comprises a database, a cloud WEB service, a voice service, a semantic service, a skill routing service, a skill service and an event registration service.
In particular, the database serves as a carrier of data, supporting the operation of other services.
The event registration service is responsible for managing the registration of UI events.
The cloud WEB service is responsible for interacting with an event management module in the client, so that the UI resource containing the new registration event is sent to the client.
And after the UI event is triggered, the client uploads the UI event data to the voice service of the cloud. The voice service comprises a voice processing module and a GUI event mapping module. The voice processing module is used for processing the voice data according to a preset natural language processing technology to obtain an event intention; the GUI event mapping module is used for mapping the GUI event data according to a preset GUI event mapping table to obtain an event intention.
The voice service sends the resulting event intent to the semantic service. The semantic service performs security filtering on the event intentions according to preset security filtering rules, determines whether the corresponding UI events are registered, and discards unregistered UI events.
The semantic service sends the event intent that passes the security filter to the skills routing service. The skill routing service routes the location to a target skill in the skill service based on the intent of the event.
And the skill service calls a target skill, responds to the UI event and provides corresponding service. Specifically, parameters generated by providing corresponding services are transmitted layer by layer through a skill routing service, a semantic service and a voice service, and are returned to the client, so that the client can execute corresponding actions according to the received parameters, and corresponding services are provided for the user.
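Read end to end, the services above might cooperate roughly as in the sketch below (every function body is a placeholder; only the ordering of the stages follows the architecture of FIG. 4):

```python
from typing import Optional, Tuple


def voice_service(ui_event_data: dict) -> Tuple[str, str, Optional[str]]:
    """Placeholder: NLP for voice data, mapping-table lookup for GUI event data."""
    return "play", "music", "Jay Chou's album"   # (intent, domain, slot)


def semantic_service_filter(intent: str) -> None:
    """Placeholder: security filtering; unregistered events are discarded here."""


def skill_routing_service(domain: str, intent: str) -> str:
    """Placeholder: route to the target skill for this event field and intention."""
    return "music_playback_skill"


def skill_service(skill: str, intent: str, domain: str, slot: Optional[str]) -> dict:
    """Placeholder: call the target skill and fill in the skill data and base_type parameters."""
    return {"skill": skill, "intent": intent, "domain": domain, "slot": slot}


def handle_ui_event(ui_event_data: dict) -> dict:
    """Cloud-side pipeline in the order of FIG. 4:
    voice service -> semantic service -> skill routing service -> skill service."""
    intent, domain, slot = voice_service(ui_event_data)
    semantic_service_filter(intent)
    target_skill = skill_routing_service(domain, intent)
    return skill_service(target_skill, intent, domain, slot)
```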
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
The following describes a process of registering and publishing GUI events in an embodiment of the present disclosure with reference to fig. 5.
In this embodiment, the registering of the UI event mainly involves a client standard UI developer, an event registration service, a cloud WEB service, and a client. The client standard UI developer can develop GUI events and UI resources.
Specifically, after a client standard UI developer has developed and constructed a new GUI event, the developer requests registration of the event from the event registration service in the cloud server. After the event registration service receives the registration request for the GUI event, it generates a unique event identifier for the GUI event and maps the event intention. It then requests the event manager to integrate the GUI event; if the event manager's review passes, the GUI event is integrated into the UI resource. It next requests the UI manager to publish the UI resource; after the UI manager's review passes, the UI resource is published. Specifically, the cloud WEB service responds to the client's periodic requests to update the UI resource and delivers the UI resource to the client.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment of the present disclosure, an implementation process for providing a service requested by the UI event to the client after accessing the target skill is briefly described below.
In an embodiment, after the target skill is accessed, specifically after the target skill provided by the skill service is accessed (this embodiment can be understood with reference to the flow following access to the skill service shown in FIG. 6 or FIG. 7), the skill service populates the skill data, that is, the data needed for the client to implement the corresponding function, according to the received data, for example the instruction-related data of the function to be executed by the client.
The skill service also fills in a base_type parameter, which includes parameters related to tid, according to preset internal logic. Here, tid is the number of a UI template kept in a mapping table used to maintain UI templates; the UI template is the template the client refers to when displaying the UI. That is, the client can select the corresponding UI template via the tid and then display the UI with reference to the selected UI template.
The base_type parameter filled in by the skill service is transmitted to the skill routing service, and the skill routing service then fills in the following parameters according to the base_type parameter: tid, skillInfo, and templateUrl. The skillInfo stores the parameters of the current event field, so that after receiving the parameters the client does not need to concern itself with the current field; the templateUrl (Uniform Resource Locator) locates the resource address where the UI template resides, so that the client can obtain the corresponding UI template according to the templateUrl.
The skill routing service passes the parameters on layer by layer until they reach the client. During this transmission, the voice service also generates a TTS (Text To Speech) instruction and a play instruction and sends them to the client, enabling the client to execute the corresponding instructions: converting text to speech according to the TTS instruction so as to give the user a corresponding voice response, for example, for the text "Opening Jay Chou's album", the client converts the text into speech after receiving the TTS instruction and utters "Opening Jay Chou's album" in response to the user; and playing the corresponding song according to the play instruction.
After receiving the instruction issued by the voice service and the parameters, the client executes the corresponding instruction; and selecting a UI template according to the tid, and displaying the UI according to the selected UI template so as to re-render the UI.
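The parameters that travel back to the client might be collected into a payload along the following lines (the field names tid, skillInfo, and templateUrl follow the description above; the concrete values and the JSON shape are assumptions):

```python
def build_client_response(skill_data: dict, domain: str) -> dict:
    """Assemble the parameters returned layer by layer to the client."""
    base_type = {"tid": 1001}   # tid selects the UI template the client renders with
    return {
        "skill_data": skill_data,              # data the client needs to execute the function
        "base_type": base_type,
        "tid": base_type["tid"],
        "skillInfo": {"domain": domain},       # current event field, so the client need not track it
        "templateUrl": "https://cdn.example.com/templates/1001.json",  # hypothetical template address
        "instructions": [
            {"type": "TTS", "text": "Opening Jay Chou's album"},   # text-to-speech instruction
            {"type": "PLAY"},                                      # play instruction
        ],
    }
```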
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 6 illustrates a process of triggering a VUI event according to an embodiment of the present disclosure.
In this embodiment, the client receives the voice data that triggers a VUI event: "I want to listen to Jay Chou's album". The client uploads the voice data to the voice service in the cloud server, which passes it on; after a series of intermediate-service processes (such as source determination and data feature collection), the data reaches the semantic service. The semantic service makes the semantics explicit and fills in the relevant parameters: the event field domain is music, the event intention intent is play, and the slot (action object) is Jay Chou's album. Through the routing of the skill routing service, the request is routed to the music field and further to the target skill within the skill service for the music field. The skill service fills in the skill data and the base_type parameter; the template service within the skill routing service then fills in the tid, skillInfo, and templateUrl parameters according to the base_type parameter. During the transmission of these parameters, the voice service issues a TTS instruction and a play instruction to the client so that the client can execute the corresponding instructions. The client also selects a UI template according to the tid in the received data and then displays the UI with reference to the selected UI template, thereby re-rendering the UI.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
FIG. 7 illustrates a GUI event triggering process according to an embodiment of the present disclosure.
In this embodiment, the user clicks the "load more" button on the client. The client calls an SDK (Software Development Kit) interface to send the event data to the voice service. The voice service converts the event data into explicit semantics: the event field domain is music and the event intention intent is planecore. After a series of intermediate-service processes (such as source determination and data feature collection), the data reaches the semantic service, which confirms the semantics, adds the semantic reference, and passes on: the event field domain is music and the event intention intent is planecore. Through the routing of the skill routing service, the request is routed to the music field and further to the target skill within the skill service for the music field. The skill service fills in the skill data and the base_type parameter; the template service within the skill routing service then fills in the tid, skillInfo, and templateUrl parameters according to the base_type parameter. During the transmission of these parameters, the voice service issues a TTS instruction and a play instruction to the client so that the client can execute the corresponding instructions. The client also selects a UI template according to the tid in the received data and then displays the UI with reference to the selected UI template, thereby re-rendering the UI.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
According to an embodiment of the present disclosure, as shown in fig. 8, there is also provided a user interface UI event response apparatus including:
a first obtaining module 410 configured to obtain UI event data uploaded by a client;
a second obtaining module 420 configured to obtain an event intention corresponding to the triggered UI event based on the UI event data;
an accessing module 430 configured to access a target skill for responding to the UI event based on the event intent to provide the service requested by the UI event to the client.
In an exemplary embodiment of the present disclosure, the UI event data is voice user interface VUI event data, the triggered UI event is a VUI event, the VUI event data includes voice data triggering the VUI event, and the second obtaining module 420 is configured to: and processing the voice data based on a preset natural language processing technology to obtain an event intention corresponding to the VUI event.
In an exemplary embodiment of the present disclosure, the UI event data is GUI event data of a graphical user interface, the triggered UI event is a GUI event, and the GUI event data is structured data in a preset format; the second obtaining module 420 is configured to: and acquiring an event intention corresponding to the GUI event based on the comparison between the GUI event data and a preset GUI event mapping table, wherein the GUI event mapping table stores GUI event data corresponding to pre-registered GUI events and the event intention mapped by the GUI event data.
In an exemplary embodiment of the present disclosure, the GUI event data includes: action data triggering corresponding GUI events, component data triggering corresponding GUI events.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
acquiring GUI event data corresponding to the GUI event requesting registration;
obtaining an event intention requesting mapping with the GUI event data;
and storing the mapping relation between the GUI event data and the event intention into the GUI event mapping table to realize the pre-registration of the GUI event.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
verifying a mapping relationship between the GUI event data and the event intent;
and if the mapping relation between the GUI event data and the event intention passes the verification, storing the mapping relation between the GUI event data and the event intention into the GUI event mapping table.
In an exemplary embodiment of the disclosure, the apparatus is configured to: and generating a corresponding unique event identifier for the GUI event requesting registration to locate the GUI event.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
integrating the pre-registered GUI event into a UI resource corresponding to the GUI event;
and sending the UI resource to a client, so that the client can upload GUI event data corresponding to the GUI event according to the UI resource when the GUI event is triggered.
In an exemplary embodiment of the disclosure, the apparatus is configured to: and responding to a UI resource updating request periodically sent by the client, and sending the UI resource to the client.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
based on a preset safety filtering rule, carrying out safety filtering on the event intention, and determining whether the corresponding UI event is registered;
and if the UI event is not registered, refusing to access the corresponding target skill.
In an exemplary embodiment of the disclosure, the apparatus is configured to: acquiring an event field corresponding to the UI event based on the UI event data, wherein the event field comprises a unique skill identifier of the target skill; the access module 430 is configured to:
determining a target skill corresponding to the UI event based on the event intention and the event field;
generating a service request aiming at the target skill based on the event intention and the event field;
accessing the target skill and sending the service request to the target skill to provide the service requested by the UI event.
In an exemplary embodiment of the disclosure, the apparatus is configured to: acquiring action object parameters contained in the UI event data, wherein the action object parameters describe action objects aimed at by the UI events; the access module 430 is configured to:
determining a target skill corresponding to the UI event based on the event intention and the event field;
generating a service request aiming at the target skill based on the event intention, the event field and the action object parameter;
accessing the target skill and sending the service request to the target skill to provide the service requested by the UI event for the action object.
The user interface UI event response electronic device 50 according to an embodiment of the present disclosure is described below with reference to fig. 9. The user interface UI event response electronic device 50 shown in fig. 9 is only an example and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
As shown in fig. 9, the user interface UI event response electronic device 50 is in the form of a general purpose computing device. The components of the user interface UI event response electronic device 50 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 that couples various system components including the memory unit 520 and the processing unit 510.
Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention as described in the description part of the above exemplary methods of the present specification. For example, the processing unit 510 may perform the various steps as shown in fig. 3.
The memory unit 520 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM)5201 and/or a cache memory unit 5202, and may further include a read only memory unit (ROM) 5203.
The user interface UI event response electronic device 50 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the user interface UI event response electronic device 50, and/or with any devices (e.g., router, modem, etc.) that enable the user interface UI event response electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. Also, the user interface UI event response electronic device 50 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 560. As shown, network adapter 560 communicates with the other modules of user interface UI event response electronic device 50 over bus 530. It should be appreciated that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the user interface UI event response electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiments, which may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited in this regard; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into a single step, and/or a single step may be broken down into multiple steps.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (15)
1. A User Interface (UI) event response method is characterized by comprising the following steps:
acquiring UI event data uploaded by a client;
acquiring an event intention corresponding to the triggered UI event based on the UI event data;
accessing a target skill for responding to the UI event based on the event intent to provide the service requested by the UI event to the client.
2. The method of claim 1, wherein the UI event data is Voice User Interface (VUI) event data, the triggered UI event is a VUI event, and the VUI event data comprises voice data triggering the VUI event; and
wherein acquiring the event intention corresponding to the triggered UI event based on the UI event data comprises: processing the voice data based on a preset natural language processing technology to obtain the event intention corresponding to the VUI event.
3. The method according to claim 1, wherein the UI event data is Graphical User Interface (GUI) event data, the triggered UI event is a GUI event, and the GUI event data is structured data in a preset format;
wherein acquiring the event intention corresponding to the triggered UI event based on the UI event data comprises: acquiring the event intention corresponding to the GUI event based on a comparison of the GUI event data with a preset GUI event mapping table, wherein the GUI event mapping table stores GUI event data corresponding to pre-registered GUI events and the event intentions mapped to the GUI event data.
4. The method of claim 3, wherein the GUI event data comprises: action data of the action that triggers the corresponding GUI event, and component data of the component on which the corresponding GUI event is triggered.
5. The method of claim 3, wherein the GUI event is pre-registered by:
acquiring GUI event data corresponding to the GUI event requesting registration;
obtaining an event intention that is requested to be mapped to the GUI event data;
storing the mapping relationship between the GUI event data and the event intention in the GUI event mapping table, so as to complete the pre-registration of the GUI event.
6. The method of claim 5, wherein before storing the mapping relationship between the GUI event data and the event intention in the GUI event mapping table, the method further comprises: verifying the mapping relationship between the GUI event data and the event intention; and
wherein storing the mapping relationship between the GUI event data and the event intention in the GUI event mapping table comprises: storing the mapping relationship between the GUI event data and the event intention in the GUI event mapping table if the mapping relationship passes the verification.
7. The method of claim 5, further comprising: generating a corresponding unique event identifier for the GUI event requesting registration, so as to locate the GUI event.
8. The method of claim 3, further comprising:
integrating the pre-registered GUI event into a UI resource corresponding to the GUI event;
sending the UI resource to a client, so that when the GUI event is triggered, the client uploads GUI event data corresponding to the GUI event according to the UI resource.
9. The method of claim 8, wherein sending the UI resource to the client comprises: sending the UI resource to the client in response to a UI resource update request periodically sent by the client.
10. The method of claim 1, further comprising, prior to accessing a target skill for responding to the UI event based on the event intent:
performing security filtering on the event intention based on a preset security filtering rule, and determining whether the corresponding UI event is registered;
refusing to access the corresponding target skill if the UI event is not registered.
11. The method of claim 1, further comprising, prior to accessing a target skill for responding to the UI event based on the event intent: acquiring an event field corresponding to the UI event based on the UI event data, wherein the event field comprises a unique skill identifier of the target skill;
accessing a corresponding target skill based on the event intent, comprising:
determining a target skill corresponding to the UI event based on the event intention and the event field;
generating a service request for the target skill based on the event intention and the event field;
accessing the target skill and sending the service request to the target skill to provide the service requested by the UI event.
12. The method of claim 11, further comprising, prior to accessing a corresponding target skill based on the event intent: acquiring action object parameters contained in the UI event data, wherein the action object parameters describe the action object targeted by the UI event;
accessing a corresponding target skill based on the event intent, comprising:
determining a target skill corresponding to the UI event based on the event intention and the event field;
generating a service request for the target skill based on the event intention, the event field, and the action object parameters;
accessing the target skill and sending the service request to the target skill to provide the service requested by the UI event for the action object.
13. A user interface, UI, event response apparatus, wherein the apparatus is configured to:
acquire UI event data uploaded by a client;
acquire an event intention corresponding to the triggered UI event based on the UI event data; and
access a target skill for responding to the UI event based on the event intent, to provide the service requested by the UI event to the client.
14. A user interface, UI, event responsive electronic device, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the method of any of claims 1-12.
15. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any of claims 1-12.
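The claims above leave the concrete data structures open. Purely as an illustrative, non-authoritative sketch, the following Java code models one way the GUI event mapping table of claims 3 to 7 could behave: GUI event data (an action plus a component, per claim 4) is mapped to an event intention, the mapping is verified before being stored, a unique event identifier is generated at registration, and a triggered GUI event is later resolved to its intention by comparison against the table. All class and method names are assumptions, not part of the disclosure.

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical structured GUI event data: the triggering action plus the component acted on (claim 4).
record GuiEventData(String action, String component) {}

class GuiEventMappingTable {
    private final Map<GuiEventData, String> intentionByEvent = new ConcurrentHashMap<>();
    private final Map<GuiEventData, String> eventIdByEvent = new ConcurrentHashMap<>();

    // Pre-registration (claims 5-7): verify the mapping, store it, and assign a unique event identifier.
    String register(GuiEventData data, String intention) {
        if (!verify(data, intention)) {
            throw new IllegalArgumentException("mapping failed verification: " + data);
        }
        intentionByEvent.put(data, intention);
        return eventIdByEvent.computeIfAbsent(data, d -> UUID.randomUUID().toString());
    }

    // Runtime lookup (claim 3): compare uploaded GUI event data against the mapping table.
    Optional<String> intentionFor(GuiEventData data) {
        return Optional.ofNullable(intentionByEvent.get(data));
    }

    // Placeholder rule; the claims only state that the mapping is verified, not how.
    private boolean verify(GuiEventData data, String intention) {
        return !data.action().isBlank() && !data.component().isBlank() && !intention.isBlank();
    }

    public static void main(String[] args) {
        GuiEventMappingTable table = new GuiEventMappingTable();
        GuiEventData click = new GuiEventData("click", "play_button");
        System.out.println(table.register(click, "PLAY_MUSIC"));            // unique event identifier
        System.out.println(table.intentionFor(click).orElse("unregistered"));
    }
}
```

Under claims 8 and 9, the registered events would then be packaged into the UI resources that the client fetches through its periodic update requests, so that the client knows which GUI event data to upload when an event is triggered; the concrete resource format is likewise left open by the claims.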
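Similarly, the sketch below, again using hypothetical names, illustrates the security filtering of claim 10 and the field-based skill resolution of claim 11: an event intention that does not belong to a registered UI event is refused before any skill is accessed; otherwise the unique skill identifier carried in the event field selects the target skill.

```java
import java.util.Map;
import java.util.Set;

class EventResponder {
    private final Set<String> registeredIntentions;    // intentions of pre-registered UI events
    private final Map<String, String> skillIdByField;  // event field -> unique skill identifier

    EventResponder(Set<String> registeredIntentions, Map<String, String> skillIdByField) {
        this.registeredIntentions = registeredIntentions;
        this.skillIdByField = skillIdByField;
    }

    // Claim 10: refuse to access any skill when the intention does not map to a registered UI event.
    // Claim 11: otherwise resolve the target skill from the identifier carried in the event field.
    String resolveTargetSkill(String intention, String field) {
        if (!registeredIntentions.contains(intention)) {
            throw new SecurityException("UI event not registered, refusing to access a skill: " + intention);
        }
        String skillId = skillIdByField.get(field);
        if (skillId == null) {
            throw new IllegalArgumentException("unknown event field: " + field);
        }
        return skillId;
    }

    public static void main(String[] args) {
        EventResponder responder = new EventResponder(
                Set.of("PLAY_MUSIC"),
                Map.of("music", "skill-0001"));
        System.out.println(responder.resolveTargetSkill("PLAY_MUSIC", "music")); // skill-0001
    }
}
```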
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911311290.0A CN111190668B (en) | 2019-12-18 | 2019-12-18 | User Interface (UI) event response method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911311290.0A CN111190668B (en) | 2019-12-18 | 2019-12-18 | User Interface (UI) event response method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111190668A true CN111190668A (en) | 2020-05-22 |
CN111190668B CN111190668B (en) | 2024-03-22 |
Family
ID=70707348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911311290.0A Active CN111190668B (en) | 2019-12-18 | 2019-12-18 | User Interface (UI) event response method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111190668B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106603601A (en) * | 2015-10-15 | 2017-04-26 | 阿里巴巴集团控股有限公司 | Service processing method, device and system, and terminal equipment |
CN105931081A (en) * | 2016-05-10 | 2016-09-07 | 腾讯科技(深圳)有限公司 | Method and apparatus for processing events |
CN107342083A (en) * | 2017-07-05 | 2017-11-10 | 百度在线网络技术(北京)有限公司 | Method and apparatus for providing voice service |
Also Published As
Publication number | Publication date |
---|---|
CN111190668B (en) | 2024-03-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |