CN115048161A - Application control method, electronic device, apparatus, and medium - Google Patents

Application control method, electronic device, apparatus, and medium

Info

Publication number
CN115048161A
Authority
CN
China
Prior art keywords
control
runnable
controls
application
identification information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110220351.3A
Other languages
Chinese (zh)
Inventor
李世明
李晓珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110220351.3A priority Critical patent/CN115048161A/en
Publication of CN115048161A publication Critical patent/CN115048161A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Abstract

The application relates to an application control method, and an electronic device, apparatus, and medium thereof. The application control method includes the following steps: among N runnable controls in a first display interface of an application, in a case where M of the runnable controls correspond to a received first voice instruction, displaying second identification information on the M runnable controls; and in response to a received second voice instruction including second identification information, selecting, from the M runnable controls, the runnable control corresponding to the second identification information included in the second voice instruction, and running it. With this technical solution, the electronic device can prompt the user about runnable controls on the screen whose first identification information is duplicated, and at the same time display non-duplicated second identification information on those controls, so that the electronic device can recognize the intended runnable control from the voice uttered by the user.

Description

Application control method, electronic device, apparatus, and medium
Technical Field
The application relates to communication technology in the field of mobile terminals, and more particularly, to a control method of an application, and an electronic device, apparatus, and medium thereof.
Background
In the field of voice control of applications on existing electronic devices, a user can control a runnable control through voice according to the identification information of the runnable controls in the display interface of the application. However, when there are multiple runnable controls in the display interface of the application, the identification information of the multiple runnable controls may contain repeated content. When this repeated content appears in the voice uttered by the user, the electronic device cannot determine which runnable control is intended.
For example, as shown in fig. 1(a), the user 100 starts the photo application 300 on the electronic device 200, four runnable controls are displayed in the display interface of the photo application 300, each runnable control corresponds to one photo, and the identification information corresponding to the four runnable controls is "tortoise", "goldfish", "swan", and "tortoise". Here, the identification information of a photo may be a tag attached to the photo when it was captured and stored in a storage area of the electronic device 200. Since tags are not checked for uniqueness, multiple photos may have duplicate identification information. At this time, as shown in fig. 1(b), after the electronic device 200 collects the "tortoise" voice uttered by the user 100, the electronic device 200 cannot determine which photo with the identification information "tortoise" the user 100 wants to browse; for example, it can only prompt the user 100 that multiple matching objects exist in the display interface and that it cannot decide among them, which degrades the experience of the user 100 in controlling the photo application 300 through voice.
Disclosure of Invention
An object of the present application is to provide a control method of an application, and an electronic device, apparatus, and medium thereof. According to this technical solution, the electronic device can prompt the user about runnable controls of an application on the screen whose first identification information is duplicated, and at the same time display non-duplicated second identification information on those controls, so that the electronic device can recognize the intended runnable control from the voice uttered by the user.
A first aspect of the present application provides a method for controlling an application, which is used for an electronic device, and includes:
displaying a first display interface of an application, wherein the first display interface comprises N runnable controls capable of being controlled by voice, and the runnable controls display first identification information, wherein N is an integer;
receiving a first voice instruction of a user;
in a case where M of the runnable controls correspond to the first voice instruction, displaying second identification information on the M runnable controls to prompt that multiple matching runnable controls exist, where M is an integer greater than or equal to 2 and less than or equal to N;
in response to a received second voice instruction including second identification information, selecting, from the M runnable controls, the runnable control corresponding to the second identification information included in the second voice instruction, and running it.
That is, in the embodiment of the present application, the electronic device may be a smart tv, and the application may be a photo application running on the smart tv. For example, after the smart television starts the photo application, a photo list interface of the photo application is displayed on the screen of the smart television, and this photo list interface may be the first display interface. In the photo list interface, four photo controls are displayed, where four may be N. After the smart television starts the voice recognition function, the four photo controls display their respective labels "tortoise", "goldfish", "swan", and "tortoise", where a label may be the first identification information. After the smart television receives and recognizes the "tortoise" voice uttered by the user, that is, the first voice instruction, the smart television determines that the labels of two photo controls contain "tortoise", where two may be M. The smart television displays "1" and "2" in turn at the upper left corners of the two "tortoise" photo controls, where "1" and "2" may be the second identification information. After the smart television receives and recognizes the "1" voice uttered by the user, that is, the second voice instruction, the smart television determines that the user selects the "tortoise" photo control whose second identification information is "1".
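As a rough illustration of the two-step flow above, the following sketch matches controls by their first identification information and disambiguates them with sequential second identification information. The dictionary shape of the controls and the helper function names are assumptions made for illustration only, not the patent's actual implementation.

```python
def find_matches(controls, first_text):
    # The M runnable controls whose first identification information
    # (their label) contains the recognized text of the first instruction.
    return [c for c in controls if first_text in c["label"]]

def assign_second_ids(matches):
    # Display unique, sequential second identification information
    # ("1", "2", ...) on the matching controls.
    for i, control in enumerate(matches, start=1):
        control["second_id"] = str(i)
    return matches

def select_by_second_id(matches, second_text):
    # Select the single control whose second identification information
    # matches the recognized text of the second instruction.
    for control in matches:
        if control.get("second_id") == second_text:
            return control
    return None

# The "tortoise / goldfish / swan / tortoise" example from the text:
controls = [{"label": name} for name in
            ["tortoise", "goldfish", "swan", "tortoise"]]
matches = assign_second_ids(find_matches(controls, "tortoise"))  # M = 2
chosen = select_by_second_id(matches, "1")  # the user says "1"
```

Here `chosen` is the first "tortoise" control, which the device would then run, for example by opening its photo detail interface.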
In one possible implementation of the first aspect described above, the runnable control is configured to cause the application to switch from the first display interface to the second display interface.
That is, in the embodiment of the present application, for example, the smart television runs the "tortoise" photo control whose second identification information is "1", and enters the photo detail interface corresponding to that photo control, that is, the second display interface.
In one possible implementation of the first aspect, the first identification information of the runnable controls is displayed when the electronic device starts its voice recognition function.
That is, in the embodiment of the present application, for example, after the smart television starts the voice recognition function, the four photo controls in the photo list interface of the photo application display their respective labels "tortoise", "goldfish", "swan", and "tortoise".
In a possible implementation of the first aspect, receiving a first voice instruction of a user includes:
receiving a first voice instruction sent by a user, and identifying first text content from the first voice instruction.
That is, in the embodiment of the present application, for example, the user utters the "tortoise" voice, that is, the first voice instruction; the smart television recognizes the text content "tortoise", that is, the first text content, from that voice.
In a possible implementation of the first aspect, displaying, in a case where the M runnable controls correspond to the first voice instruction, second identification information on the M runnable controls to indicate that multiple runnable controls exist includes:
determining that the M runnable controls correspond to the first voice instruction by determining that the first identification information of each of the M runnable controls contains the first text content, and displaying the second identification information on the M runnable controls.
That is, in the embodiment of the present application, for example, if the smart television determines that the labels of two photo controls in the photo list interface of the photo application contain "tortoise", the smart television displays "1" and "2" in turn at the upper left corners of the two "tortoise" photo controls, where "1" and "2" may be the second identification information.
In one possible implementation of the first aspect, prompting the plurality of runnable controls includes: the plurality of runnable controls are set to a first color and elements other than the plurality of runnable controls are set to a second color.
That is, in the embodiment of the present application, for example, the smart television may set the two photo controls containing "tortoise" to red, that is, the first color, and set the "goldfish" and "swan" photo controls to gray, that is, the second color.
In a possible implementation of the first aspect, the second identification information has uniqueness.
That is, in the embodiment of the present application, the second identification information may be a natural number sequentially arranged, for example, "1", "2", "3", "4".
In one possible implementation of the first aspect, the runnable control displays the second identification information simultaneously with the first identification information.
That is, in the embodiment of the present application, for example, the two "tortoise" photo controls display "1" and "2" in addition to "tortoise".
In one possible implementation of the first aspect, the second identification information is displayed in place of the first identification information.
That is, in the embodiment of the present application, for example, the two "tortoise" photo controls no longer display "tortoise" and instead display "1" and "2".
In one possible implementation of the first aspect, the application includes one of an application software class and an application of an operating system class, and the application includes at least one display interface.
That is, in the embodiment of the present application, an application of the application software class may be an application that implements a specific function of the electronic device, for example, a photo application or a video playing application. An application of the operating system class may be an application that implements management and setting of the electronic device, for example, a menu application or a system settings application of the operating system of the electronic device.
A second aspect of the present application provides an apparatus for developing a runnable control of an application, comprising:
the control design module is used for coding a runnable control and previewing the style of the runnable control after it is coded;
the code analysis module is used for checking whether the code of the runnable controls contains repeated content used for voice control;
and the code prompting module is used for prompting the runnable control, detected by the code analysis module, that contains repeated content used for voice control.
In one possible implementation of the second aspect described above, the code prompting module sets the repeated content used for voice control to be highlighted.
That is, in embodiments of the present application, for example, the code prompting module may set the color of the repeated content used for voice control in a runnable control to be different from that of the other runnable controls, for example, red.
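The duplicate check performed by the code analysis module could, under the assumption that each runnable control is represented as a (control id, voice label) pair, look like this hypothetical sketch; the data shape and function name are illustrative, not taken from the patent:

```python
from collections import defaultdict

def find_duplicate_voice_labels(controls):
    # Group controls by the label used for voice control and keep only
    # the labels shared by more than one control - the repeated content
    # that the code prompting module should highlight.
    groups = defaultdict(list)
    for control_id, label in controls:
        groups[label].append(control_id)
    return {label: ids for label, ids in groups.items() if len(ids) > 1}

duplicates = find_duplicate_voice_labels([
    ("photo1", "tortoise"), ("photo2", "goldfish"),
    ("photo3", "swan"), ("photo4", "tortoise"),
])
# duplicates == {"tortoise": ["photo1", "photo4"]}
```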
A third aspect of the present application provides an electronic device, comprising:
a memory storing instructions;
a processor coupled to the memory, wherein the instructions stored in the memory, when executed by the processor, cause the electronic device to perform the control method of an application as provided in the aforementioned first aspect.
A fourth aspect of the present application provides a readable medium having instructions stored therein, wherein the instructions, when run on an electronic device, cause the electronic device to perform the control method of an application as provided in the first aspect.
Drawings
FIGS. 1(a) and 1(b) illustrate a scenario for controlling an application by way of speech recognition according to an embodiment of the present application;
FIGS. 2(a) to 2(f) illustrate another scenario of controlling an application by means of speech recognition according to an embodiment of the present application;
FIG. 3 shows a block diagram of a hardware configuration of an electronic device according to an embodiment of the application;
FIG. 4 illustrates a flow diagram of a method for controlling an application by way of speech recognition according to an embodiment of the present application;
FIGS. 5(a) and 5(b) illustrate code structure diagrams of first identification information and second identification information in a runnable control according to an embodiment of the application;
FIG. 6 is a flow chart illustrating another method for controlling an application by way of speech recognition according to an embodiment of the present application;
FIGS. 7(a) to 7(c) illustrate another scenario of controlling an application by means of speech recognition according to an embodiment of the present application;
FIG. 8 is a block diagram illustrating a module structure of a development apparatus for developing a runnable control for an application according to an embodiment of the present application;
FIG. 9 illustrates a flowchart of a method for developing a runnable control of an application according to an embodiment of the application;
FIG. 10 illustrates a scene diagram of the development apparatus prompting that a runnable control contains repeated content, according to an embodiment of the application.
Detailed Description
Embodiments of the present application include, but are not limited to, an application control method, and electronic device, apparatus, and medium thereof.
In order to solve the problem that an electronic device cannot identify the intended runnable control of an application when identification information contains repeated content, the present application provides an application control method. A runnable control is a control in the display interface of an application that can be triggered by a gesture operation or by voice recognition to change the display content of that interface.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 2(a) to 2(f) illustrate a scenario of application control in an embodiment of the present application. As shown in fig. 2(a) and 2(b), the user 100 starts the photo application 300 on the electronic device 200, and the current display interface of the photo application 300 is displayed on the screen of the electronic device 200. After the user 100 starts the voice control function of the electronic device 200, first identification information is displayed on the runnable controls of the current display interface of the photo application 300, where the first identification information characterizes one attribute of a runnable control; for example, in fig. 2(c), the runnable controls may be photo controls, and the first identification information may be the labels of the photos. The user 100 utters a voice instruction to the electronic device 200, and after the electronic device 200 recognizes the text content in the voice, it searches the current display interface of the photo application 300 for runnable controls whose first identification information contains the text content. When the electronic device 200 finds multiple such runnable controls in the current display interface, the electronic device 200 prompts the user about these runnable controls in the current display interface of the photo application 300 and adds second identification information to them. For example, as shown in fig. 2(d), the electronic device 200 may dim the current display interface of the photo application 300 and highlight the multiple runnable controls to prompt the user. Meanwhile, the electronic device 200 adds a serial number to each of the multiple runnable controls as the second identification information.
Then, the user 100 utters a voice to the electronic device 200 again according to the second identification information of the multiple runnable controls, and after the electronic device 200 recognizes the text content in the voice, it searches, among the runnable controls prompted to the user 100, for the control whose second identification information contains the text content. When a single runnable control is found, the electronic device 200 triggers that runnable control.
In the application control method described above, the electronic device 200 may be any of various terminal devices, including but not limited to a smart television, a laptop computer, a desktop computer, a tablet computer, a mobile phone, a server, a wearable device, a head-mounted display, a mobile email device, a portable game console, a portable music player, a reader device, or other terminal devices capable of accessing a network. In some embodiments, embodiments of the present application may also be applied to wearable devices worn by a user, for example, a smart watch, a bracelet, a piece of jewelry (e.g., a device made as a decorative item such as an earring or a bracelet), or glasses, or a part of a watch, bracelet, piece of jewelry, or glasses. For simplicity of description, the present application will be described below with a smart tv 200 as an example of the electronic device 200.
Fig. 3 shows a schematic structural diagram of a smart tv 200 according to an embodiment of the present application. Specifically, as shown in fig. 3, the smart tv 200 includes a processor 201, a memory 202, a microphone 203, a voice recognition module 204, an interface management module 205, a power module 206, a communication processing module 207, a display screen 208, a bus 209, and the like.
The processor 201 is operable to read and execute computer readable instructions. In a particular implementation, the processor 201 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for decoding instructions and sending out control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, logic operations, and the like, and can also perform address operations and conversions. The registers are mainly responsible for temporarily storing register operands, intermediate operation results, and the like during instruction execution. In a specific implementation, the hardware architecture of the processor 201 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
A memory 202 is coupled to the processor 201 for storing various software programs and/or sets of instructions. In particular implementations, memory 202 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 202 may store an operating system, such as an embedded operating system like uCOS, VxWorks, RTLinux, etc. Memory 202 may also store a communication program that may be used to communicate with a cell phone, one or more servers, or additional devices.
The microphone 203 is used for collecting voice uttered by the user.
The voice recognition module 204 is configured to obtain the voice uttered by the user and recognize text content from the voice. The voice recognition module 204 may generate feature vectors describing the voice uttered by the user through a speech feature extraction algorithm, and then arrange the feature vectors in time order to obtain a feature vector sequence. The speech may be characterized by linear predictive coding (LPC) features, linear predictive cepstral coefficient (LPCC) features, mel-frequency cepstral coefficient (MFCC) features, or linear predictive mel-frequency cepstral coefficient (LBPMFCC) features of the voice uttered by the user.
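The time-ordered feature sequence can be illustrated with a simple framing sketch. The frame length, hop size, and the per-frame "feature" (plain frame energy) are placeholder assumptions; a real implementation would compute LPC, LPCC, or MFCC features for each frame.

```python
def frame_signal(samples, frame_len=400, hop=160):
    # Split the sampled speech into overlapping frames, kept in time order
    # (e.g. 25 ms frames with a 10 ms hop at 16 kHz).
    return [samples[start:start + frame_len]
            for start in range(0, len(samples) - frame_len + 1, hop)]

def feature_sequence(samples):
    # One feature vector per frame, arranged in time order; frame energy
    # stands in here for a real feature vector such as MFCCs.
    return [[sum(x * x for x in frame)] for frame in frame_signal(samples)]
```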
The interface management module 205 is configured to manage a display interface of an application running on the smart tv 200. For example, the interface management module 205 may construct a display interface of an application, and add a control on the display interface, such as a control for displaying text, a control for displaying pictures, and the like.
The power module 206 may include a power supply, power management components, and the like. The power source may be a battery. The power management component is used for managing the charging of the power supply and the power supply of the power supply to other modules.
The communication processing module 207 may include a wireless communication processing module (not shown) and a wired Local Area Network (LAN) communication processing module (not shown).
The wireless communication processing module may also include a cellular mobile communication processing module (not shown). The cellular mobile communication processing module may communicate with other devices, such as servers, via cellular mobile communication technology.
The wired LAN communication processing module can be used for communicating with other devices in the same LAN through a wired LAN, and can also be used for connecting to a Wide Area Network (WAN) through the wired LAN and communicating with devices in the WAN.
The display screen 208 may be used to display images, video, and the like. The display screen 208 may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED) display screen, an active-matrix organic light-emitting diode (AMOLED) display screen, a flexible light-emitting diode (FLED) display screen, a quantum dot light-emitting diode (QLED) display screen, or the like.
Bus 209 is a common communication trunk that carries information between the above-described functional components and modules in smart television 200.
It is understood that the structure illustrated in fig. 3 does not constitute a specific limitation to the smart tv 200. In other embodiments of the present application, the smart tv 200 may include more or fewer components than those shown, or combine some components, or split some components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The application control method of the present application will be described in detail below with reference to fig. 2 to 5, taking an example that a user controls an executable control in a current display interface of the photo application 300 running on the smart tv 200 through voice. Specifically, as shown in fig. 4, the application control method provided by the present application includes the following steps:
s401: the photo application 300 is launched.
For example, as shown in fig. 2(a), after the user 100 turns on the smart tv 200, the user 100 may open the photo application 300 by selecting the "Photos" icon in the user interface (UI) of the smart tv 200 through the remote controller of the smart tv 200. After the smart tv 200 responds to the instruction, the photo application 300 is opened, and the current display interface of the photo application 300 is displayed on the screen of the smart tv 200. As shown in fig. 2(b), the current display interface of the photo application 300 is the photo list interface 3001 of the photo application 300. The photo list interface 3001 includes four runnable controls, whose type may be photo controls; a photo control corresponds to one photo in the photo list interface 3001. A user may trigger a photo control by gesture or by voice, and after a photo control is triggered, the smart tv 200 enters the photo preview interface from the photo list interface 3001.
S402: and starting a voice control function.
For example, the smart tv 200 may receive an instruction from the user 100 pressing the voice control button provided on the remote controller of the smart tv 200, so as to turn on the voice control function of the smart tv 200. At this time, as shown in fig. 2(c), the interface management module 205 of the smart tv 200 displays first identification information within each photo control in the photo list interface 3001 of the photo application 300, where the first identification information of a photo control may be the label of the photo. For example, the photo list interface 3001 includes four photo controls, and the first identification information of the four photo controls is "tortoise", "goldfish", "swan", and "tortoise", respectively; the first identification information may be a property of each photo control. Fig. 5(a) shows the code format of these four photo controls. The first identification information of the four photo controls may be the names of the photo controls, that is, the values corresponding to the "app:name" attribute in the code of the photo controls described in fig. 5(a). After the smart television 200 starts the voice control function, the interface management module 205 displays the corresponding first identification information in the four photo controls.
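Reading the first identification information out of control definitions like those in fig. 5(a) might look as follows. The XML layout, the `PhotoControl` element name, and the `app` namespace URI are hypothetical; only the `app:name` attribute comes from the text.

```python
import xml.etree.ElementTree as ET

# Hypothetical layout in the spirit of fig. 5(a); the namespace URI
# and element names are made up for illustration.
LAYOUT = """
<PhotoList xmlns:app="http://example.com/apk/res-auto">
  <PhotoControl app:name="tortoise"/>
  <PhotoControl app:name="goldfish"/>
  <PhotoControl app:name="swan"/>
  <PhotoControl app:name="tortoise"/>
</PhotoList>
"""

def first_identification_info(layout_xml):
    # Collect the value of each control's app:name attribute, i.e. the
    # first identification information displayed on the photo controls.
    root = ET.fromstring(layout_xml)
    ns = "{http://example.com/apk/res-auto}"
    return [c.attrib[ns + "name"] for c in root.iter("PhotoControl")]
```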
In another embodiment of the present application, the smart tv 200 may also enter the smart tv 200 into a speech recognition mode by interacting with the user 100. For example, after the smart tv 200 receives that the user speaks "turn on the voice control function" towards the smart tv 200, the smart tv 200 turns on the voice control function.
S403: and acquiring the voice sent by the user, and identifying the text content corresponding to the voice.
With the voice control function turned on, the voice recognition module 204 of the smart tv 200 monitors, through the microphone 203, the voice uttered by the user toward the smart tv 200. For example, the user utters the "tortoise" voice to the smart television 200; the smart television 200 collects the voice through the microphone 203 and stores it in the memory 202, and then recognizes the text content corresponding to the voice through the voice recognition module 204. In other embodiments of the present application, instead of the "tortoise" voice, the user may also utter, for example, the "goldfish" voice to the smart tv 200.
It is understood that in other embodiments, the smart tv 200 may also collect the voice uttered by the user through an audio collecting device including a microphone, and the audio collecting device may be a part of the smart tv 200 or a separate device. When the audio acquisition device is an external independent device of the smart tv 200, the acquired audio signal can be transmitted to the smart tv 200 by performing communication connection with the smart tv 200, for example: the audio acquisition device may be a sound pickup, a recording microphone, or the like.
The speech recognition module 204 of the smart television 200 may input the stored speech into a speech neural network model to compute the text content corresponding to the speech. For example, the speech neural network model here may be a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), a Long Short-Term Memory network (LSTM), or the like. In the case that the speech neural network model is a convolutional neural network, the network may include a plurality of convolutional layers, pooling layers, and an output layer, and its input data is the user's speech collected by the smart television 200. For example, after the smart tv 200 inputs the voice "tortoise" uttered by the user into the neural network, the corresponding text content "tortoise" is obtained.
S404: determining whether exactly one runnable control whose first identification information contains the text content exists in the current display interface of the photo application 300.
After obtaining the text content corresponding to the voice uttered by the user, the interface management module 205 of the smart television 200 may traverse the runnable controls in the current display interface of the photo application 300, obtain the first identification information corresponding to each runnable control, and determine which runnable controls have first identification information containing the text content. If exactly one such runnable control exists in the current display interface of the photo application 300, S406 is executed, and the smart television 200 triggers that runnable control. If a plurality of such runnable controls exist, S405 is executed: the smart television 200 prompts the user with the plurality of runnable controls in the current display interface of the photo application 300 and displays second identification information on them. If no such runnable control exists, the smart tv 200 may prompt the user that no runnable control can be found, for example: "no runnable control can be found, please speak again".
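The three-way branch of S404 can be sketched as follows. The control names and the dictionary representation are illustrative placeholders; the patent does not specify a data structure for the traversal:

```python
def match_controls(controls, text):
    # controls: list of dicts whose "name" key holds the first
    # identification information of each runnable control.
    return [c for c in controls if text in c["name"]]

# The four photo controls of the photo list interface 3001.
controls = [{"name": n} for n in ("tortoise", "goldfish", "swan", "tortoise")]

hits = match_controls(controls, "tortoise")
if len(hits) == 1:
    action = "S406: trigger the runnable control"
elif len(hits) >= 2:
    action = "S405: display second identification information"
else:
    action = "prompt: no runnable control can be found, please speak again"

print(len(hits), action)
```

With the voice "tortoise", two controls match, so the sketch falls through to the S405 branch, mirroring the example in the next paragraph.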
Taking the photo list interface 3001 of the photo application 300 shown in fig. 2(c) as an example, the first identification information of the four photo controls of the photo list interface 3001 is "tortoise", "goldfish", "swan", and "tortoise", respectively. The interface management module 205 of the smart television 200 traverses the first identification information of the four photo controls and determines that the first identification information of two photo controls contains the text content "tortoise" corresponding to the voice uttered by the user.
In other embodiments of the present application, after the interface management module 205 of the smart television 200 traverses the first identification information of the four runnable controls and determines that the first identification information of only one runnable control contains the text content "goldfish" corresponding to the voice uttered by the user, the smart television 200 executes S406.
S405: in the current display interface of the photo application 300, the user is prompted for a plurality of runnable controls and second identifying information is displayed for the runnable controls.
When the interface management module 205 of the smart television 200 determines that the first identification information of a plurality of runnable controls in the current display interface of the photo application 300 contains the text content corresponding to the voice uttered by the user, the interface management module 205 prompts the user with those runnable controls in a highlighted manner on the current display interface of the photo application 300 and displays second identification information for them. The smart tv 200 then returns to step S403; the user utters a voice to the smart tv 200 again according to the second identification information, so that the smart tv 200 can acquire the voice again, recognize the corresponding text content, and select and trigger one runnable control from the plurality of runnable controls.
For example, in the embodiment shown in fig. 2(d), after the interface management module 205 of the smart television 200 determines that the first identification information of two photo controls in the photo list interface 3001 of the photo application 300 contains the text content "tortoise" corresponding to the voice uttered by the user, the interface management module 205 thickens the borders of the two photo controls and dims the brightness of the other controls, so that the two runnable controls are prominently displayed in the photo list interface 3001 of the photo application 300. At the same time, the interface management module 205 displays second identification information on the two photo controls. Fig. 5(b) shows the code formats of the two "tortoise" photo controls; their second identification information may be the value corresponding to the "voice_event" attribute in the code of the photo control shown in fig. 5(b). It can be seen that, in step S402, when the interface management module 205 uses the first identification information to identify a photo control, the value corresponding to the "voice_event" attribute in the code of the photo control is null. When the interface management module 205 determines that a plurality of matching photo controls exist in the photo list interface 3001 of the photo application 300, the interface management module 205 may number the photo controls in sequence and set the numbers as the values corresponding to the "voice_event" attribute, that is, as the second identification information, so that the interface management module 205 can prompt the user 100 to perform voice control by displaying the second identification information in the photo controls.
In other embodiments of the present application, the interface management module 205 may, for example, set the two "tortoise" photo controls to red and the other controls to gray, so as to prompt the user with the two "tortoise" photo controls.
For example, after the interface management module 205 numbers the two "tortoise" photo controls "1" and "2" in order from left to right and top to bottom, according to their positions in the photo list interface 3001, the "voice_event" attributes of the two photo controls are set to "1" and "2", respectively. Meanwhile, in the photo list interface 3001, the upper left corners of the two "tortoise" photo controls display "1" and "2" in sequence, prompting the user 100 to select and trigger one of the two runnable controls by uttering the voice "1" or "2".
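The numbering step of S405 — writing sequential numbers into the previously empty "voice_event" attribute — can be sketched as follows, again using a hypothetical dictionary representation of the controls rather than the patent's actual code format:

```python
def assign_second_identification(matched_controls):
    # Number the matched controls in display order (left to right, top to
    # bottom) and write each number into the empty voice_event attribute,
    # per the scheme described around fig. 5(b).
    for i, control in enumerate(matched_controls, start=1):
        control["voice_event"] = str(i)
    return matched_controls

tortoises = [{"name": "tortoise", "voice_event": ""},
             {"name": "tortoise", "voice_event": ""}]
assign_second_identification(tortoises)
print([c["voice_event"] for c in tortoises])  # ['1', '2']
```

The values "1" and "2" are then rendered at the upper left corner of each control, so the user can disambiguate by speaking a number.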
S406: triggering the runnable control.
For example, in the embodiment shown in fig. 2(e), after the user 100 utters the voice "1" again and the voice is recognized by the smart tv 200, the smart tv 200 determines that the user has selected the "tortoise" photo control whose second identification information is "1". The smart tv 200 then enters the photo detail interface 3002 shown in fig. 2(f) and displays the photo of the "tortoise" on the photo detail interface 3002.
In other embodiments of the present application, if the voice uttered by the user is "goldfish," the interface management module 205 of the smart tv 200 may uniquely determine the photo control of "goldfish" through the first identification information. After that, the smart tv 200 enters the photo detail interface 3002, and displays a photo of "goldfish" on the photo detail interface 3002.
Having introduced, with reference to fig. 2 to fig. 5, the technical solutions in which the user 100 controls the photo application 300 of the smart tv 200 by voice, another application control method of the present application is described in detail below with reference to fig. 6, taking the smart tv 200 and the video playing application 400 as examples.
The scheme of fig. 6 differs from the schemes described in fig. 2 to 5 in that the first identification information of a runnable control of the video playback application 400 may be a property of the runnable control, for example the name of the runnable control, and the first identification information of the runnable controls is already displayed in the display interface 401 of the video playback application 400 before the user starts the voice control function of the smart tv 200.
Specifically, the application control method related to fig. 6 includes the following steps:
S601: the video playback application 400 is launched.
S601 here is similar to S401 described above. For example, after the user 100 opens the smart tv 200, the user 100 may open the video playing application 400 by selecting and clicking a "video playing" icon in the user interface in the smart tv 200 through the remote controller of the smart tv 200.
In response to the instruction, the smart tv 200 opens the video playing application 400. S601 differs from S401 described above in that the current display interface 401 of the video playback application 400 is displayed on the screen of the smart tv 200. As shown in fig. 7(a), the current display interface 401 of the video playback application 400 includes: a first video bar 4001, a second video bar 4002, and a title bar 4003. The first video bar 4001, the second video bar 4002, and the title bar 4003 each include a plurality of runnable controls having first identification information. For example, the first video bar 4001 includes four runnable controls, the respective first identification information of which is "current news", "breaking news", "today's comment", and "king of comedy". The second video bar 4002 includes five runnable controls, the respective first identification information of which is "recommend movie", "rock", "jingchuan", "justice alliance", and "you youth". The title bar 4003 includes six runnable controls: "recommendations", "movies", "drama", "art", "kids", and "my".
The runnable control in the first video bar 4001 and the second video bar 4002 may be a video control, and after the video control is triggered, the video playback application 400 may enter a video playback interface, and the first identification information of the video control may be a video name, a video introduction, and the like. The runnable control in the title bar 4003 may be a title selection control, and after the title selection control is triggered, the video playback application 400 may switch to a display interface of a title corresponding to the title selection control. The first identification information of the title selection control may be a title name.
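The two control types just described differ only in what triggering them does. A minimal sketch, with hypothetical class names and behaviour strings standing in for the interface switches the patent describes:

```python
from dataclasses import dataclass

@dataclass
class VideoControl:
    # First identification information: the video name (or introduction).
    name: str
    def trigger(self) -> str:
        # Triggering a video control enters the video playback interface.
        return f"enter playback interface for '{self.name}'"

@dataclass
class TitleControl:
    # First identification information: the title name.
    title: str
    def trigger(self) -> str:
        # Triggering a title selection control switches the display interface.
        return f"switch to display interface of title '{self.title}'"

print(VideoControl("breaking news").trigger())
print(TitleControl("movies").trigger())
```

Both classes expose the same `trigger` interface, which is what allows S606 below to trigger whichever matching control the user selects without caring about its type.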
S602: starting the voice control function.
S602 here is similar to S402 described above. The smart television 200 may receive an instruction sent by a user pressing a button of the voice control function provided on a remote controller of the smart television 200, so as to start the voice control function of the smart television 200.
S603: acquiring the voice uttered by the user, and recognizing the text content corresponding to the voice.
S603 here is similar to S403 described above. In the state that the voice control function is turned on, the smart tv 200 monitors the voice sent by the user to the smart tv 200, for example: the user sends a voice of "news" to the smart tv 200, and the smart tv 200 can collect and store the voice sent by the user in the memory 202 of the smart tv through the microphone 203.
It is understood that the smart tv 200 may obtain the corresponding text content "news" by passing the voice "news" uttered by the user through the speech recognition neural network deployed on the smart tv 200.
S604: determining whether a runnable control whose first identification information contains the text content exists in the current display interface 401 of the video playing application 400.
S604 here is similar to S404 described above. After obtaining the text content corresponding to the voice uttered by the user, the smart television 200 traverses the runnable controls in the current display interface of the video playback application 400, obtains the first identification information corresponding to each runnable control, and determines which runnable controls have first identification information containing the text content. If exactly one such runnable control exists in the current display interface of the video playing application 400, S606 is executed and the smart television 200 triggers that runnable control; otherwise, S605 is executed. If a plurality of such runnable controls exist, the smart television 200 prompts the user with the plurality of runnable controls within the current display interface of the video playback application 400; if no such runnable control exists, the user is prompted that no runnable control can be found.
For example, as shown in fig. 7(a), the four runnable controls of the first video bar 4001 have the first identification information "current news", "breaking news", "today's comment", and "king of comedy", respectively. The smart television 200 traverses the first identification information of these four runnable controls and determines that the first identification information of the two runnable controls "current news" and "breaking news" contains the text content "news" corresponding to the voice uttered by the user. Next, the smart tv 200 may perform the same operations on the second video bar 4002 and the title bar 4003 to determine whether they include a runnable control whose first identification information contains the text content "news".
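Applying the same containment check bar by bar, as just described, can be sketched as follows. The bar keys are illustrative; the patent does not name a data structure for the interface:

```python
# The three bars of display interface 401 with the first identification
# information of their runnable controls, per fig. 7(a).
interface = {
    "first_video_bar":  ["current news", "breaking news",
                         "today's comment", "king of comedy"],
    "second_video_bar": ["recommend movie", "rock", "jingchuan",
                         "justice alliance", "you youth"],
    "title_bar":        ["recommendations", "movies", "drama",
                         "art", "kids", "my"],
}

def matches_by_bar(interface, text):
    # Apply the same containment check to every bar of the interface.
    return {bar: [name for name in names if text in name]
            for bar, names in interface.items()}

print(matches_by_bar(interface, "news"))
```

For the voice "news", only the first video bar yields matches ("current news" and "breaking news"), so two controls in total carry the text content and the method proceeds to S605.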
S605: prompting, in the current display interface of the video playback application 400, the user with the plurality of runnable controls, and displaying second identification information for the runnable controls.
S605 here is similar to S405 described above. After the smart television 200 determines that the first identification information of a plurality of runnable controls in the current display interface of the video playback application 400 contains the text content corresponding to the voice uttered by the user, the smart television 200 prompts the user with the plurality of runnable controls in a highlighted manner on the current display interface, displays second identification information for them, and returns to S603 to prompt the user to utter a voice again according to the second identification information. The smart television 200 can thus acquire the voice uttered by the user again, recognize the corresponding text content, and select one runnable control from the plurality of runnable controls to trigger.
For example, as shown in fig. 7(b), after the smart television 200 determines that the first identification information of the two runnable controls "current news" and "breaking news" contains the text content "news" corresponding to the voice uttered by the user, the smart television 200 thickens the borders of these two runnable controls and dims the brightness of the other controls, so that the two runnable controls are highlighted in the current display interface of the video playback application 400. Meanwhile, the smart television 200 numbers the two runnable controls, identifying "current news" as "1" and "breaking news" as "2" — that is, it sets the second identification information "1" and "2" for the two runnable controls and displays "1" and "2" at their upper left corners — prompting the user 100 to select and trigger one of the two runnable controls by uttering the voice "1" or "2".
S606: triggering the runnable control.
S606 here is similar to S406 described above. As shown in fig. 7(c), after the user utters the voice "1" and the voice is recognized by the smart tv 200, the smart tv 200 determines that the user has selected the runnable control "current news", triggers that runnable control, and displays the display interface 402 of "current news" in the video playing application 400.
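Resolving the second voice instruction against the assigned second identification information is a simple lookup. The dictionary shape is the same hypothetical one used above:

```python
def resolve_second_instruction(matched, spoken):
    # matched: the runnable controls that already carry second
    # identification information in their voice_event attribute.
    for control in matched:
        if control["voice_event"] == spoken:
            return control
    return None  # spoken text matches no second identification information

matched = [{"name": "current news", "voice_event": "1"},
           {"name": "breaking news", "voice_event": "2"}]

chosen = resolve_second_instruction(matched, "1")
print(chosen["name"])  # current news
```

Because the second identification information is unique among the matched controls (claim 7), the lookup can return at most one control to trigger.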
In addition to the photo application 300 and the video playing application 400 described above, the application control method in the embodiments of the present application may also be applied to other types of applications with a display interface running on the smart tv 200, for example, a menu application of the operating system of the smart tv 200 or a system setting application.
In addition to the application control method described above, an embodiment of the present application also discloses an apparatus for developing runnable controls of an application. The development apparatus may be adapted to develop runnable controls in the photo application 300 shown in fig. 2 and the video playback application 400 shown in fig. 7. A runnable control of an application is developed by a developer using the development apparatus 500 running on a development device and is stored in the storage area of the development device. Development devices may include, but are not limited to, laptop computers, desktop computers, tablet computers, servers, and other computer devices capable of being used for software development. As shown in fig. 8, the development apparatus 500 may include:
Control design module 501: used by the developer to encode runnable controls. For example, where a runnable control belongs to the visible controls within a display interface of an application, the developer may preview the runnable control through the control design module after completing its encoding. The control design module may also pre-store a plurality of templates of runnable controls in the memory of the development device, so that the developer can directly select one of the templates and encode on its basis.
Code analysis module 502: used for checking the code of the runnable controls, verifying whether the code contains duplicated content used for voice control, and, if so, prompting the developer through the code prompting module. The content used for voice control here may be pre-configured in the memory of the development device by the developer. For example, for the runnable controls of the photo application described in fig. 5, the content used for voice control is the value corresponding to the "app:name" attribute; the developer may configure the "app:name" attribute in the memory of the development device in advance, so that the code analysis module can check whether the "app:name" attributes contain duplicated content when checking the runnable controls of the photo application.
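The duplicate check performed by the code analysis module amounts to grouping the voice-control values and reporting any that occur more than once. A minimal sketch, using illustrative (line number, name) pairs that echo the fig. 10 example rather than actual extracted code:

```python
from collections import defaultdict

def find_duplicate_voice_names(controls):
    # controls: (line_number, app_name) pairs extracted from the control
    # code; group by name and keep only the names seen on 2+ lines.
    seen = defaultdict(list)
    for line_no, name in controls:
        seen[name].append(line_no)
    return {name: lines for name, lines in seen.items() if len(lines) > 1}

# Hypothetical extraction result for the four photo controls; the line
# numbers 4 and 22 mirror the duplicate reported in the fig. 10 example.
controls = [(4, "tortoise"), (10, "goldfish"), (16, "swan"), (22, "tortoise")]
print(find_duplicate_voice_names(controls))  # {'tortoise': [4, 22]}
```

The returned mapping gives the code prompting module exactly what it needs: the duplicated value and the lines on which it appears.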
Code prompting module 503: used for prompting the developer with the code of the runnable controls detected by the code analysis module. For example, when the code of a runnable control has an error, the code prompting module may underline the code where the error occurs, highlight it, and mark it in red. Meanwhile, the code prompting module may also display an error prompt on the console of the development device.
The following describes a process in which a developer develops runnable controls of the photo application 300 using the development apparatus of the present application. As shown in fig. 9, the process includes:
S901: receiving and saving the code of the runnable controls of the display interface of the application developed by the developer.
The developer uses the control design module to complete the encoding of the four photo controls of the photo application 300 and saves the code in the storage area of the development device.
S902: checking the code of the runnable controls.
The code analysis module checks the code of the photo controls stored on the development device for duplicates. For example, with continued reference to fig. 9, the code analysis module may obtain the respective "app:name" attributes from the code of the four runnable controls according to the "app:name" attribute preset by the developer for voice control, and check whether the values corresponding to the "app:name" attribute of each runnable control contain duplicated content.
S903: determining whether two or more runnable controls containing the same content used for voice control exist in the code.
If the code analysis module determines that two or more runnable controls in the code contain the same content used for voice control, S904 is executed: the code analysis module sends the runnable controls with the same content used for voice control to the code prompting module, and the code prompting module prompts the developer with these runnable controls on the development device. If not, S905 is executed: the code analysis module determines that the code of the runnable controls is normal.
S904: prompting the developer with the runnable controls that contain the same content used for voice control.
The code prompting module may underline the duplicated content in the code of the runnable controls, highlight it, and mark it in red for prompting; meanwhile, the code prompting module may prompt, in the prompt information area of the development device, the runnable controls containing the same content used for voice control. For example, as shown in fig. 10, the code analysis module determines that the values corresponding to the "app:name" attributes of two of the four runnable controls contain the duplicated content "tortoise". The code analysis module notifies the code prompting module of the duplicated content "tortoise" appearing in the "app:name" attributes of the two runnable controls. The code prompting module may then prompt, in the prompt information area of the development device, that the "app:name" attributes on lines 4 and 22 of the code contain the same content. In this case, the process returns to S901, and the development device receives again the developer's re-encoding of the runnable controls with the same content.
S905: completing the compilation of the code of the runnable controls.
After the code analysis module confirms that the code of the runnable controls is normal, the development device may compile the code of the runnable controls and execute it.
It will be understood that, although the terms "first", "second", etc. may be used herein to describe various features, these features should not be limited by these terms. These terms are used merely for distinguishing and are not intended to indicate or imply relative importance. For example, a first feature may be termed a second feature, and similarly, a second feature may be termed a first feature, without departing from the scope of example embodiments.
Moreover, various operations will be described as multiple operations separate from one another in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent, and that many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when the described operations are completed, but may have additional operations not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
References in the specification to "one embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature is described in connection with a particular embodiment, those of ordinary skill in the art will be able to effect such feature in connection with other embodiments whether or not such embodiments are explicitly described.
The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrase "A/B" means "A or B". The phrase "A and/or B" means "(A), (B) or (A and B)".
As used herein, the term "module" may refer to, be a part of, or include: memory (shared, dedicated, or group) for executing one or more software or firmware programs, an Application Specific Integrated Circuit (ASIC), an electronic circuit and/or processor (shared, dedicated, or group), a combinational logic circuit, and/or other suitable components that provide the described functionality.
In the drawings, some features of structures or methods may be shown in a particular arrangement and/or order. However, it should be understood that such specific arrangement and/or ordering is not required. Rather, in some embodiments, the features may be described in a manner and/or order different from that shown in the illustrative figures. Additionally, the inclusion of a structural or methodological feature in a particular figure does not imply that all embodiments need to include such feature, and in some embodiments may not include such feature, or may be combined with other features.
While the embodiments of the present application have been described in detail with reference to the accompanying drawings, the application of the present application is not limited to the various applications mentioned in the embodiments of the present application, and various structures and modifications can be easily implemented with reference to the embodiments of the present application to achieve various beneficial effects mentioned herein. Variations that do not depart from the gist of the invention are intended to be within the scope of the invention.

Claims (14)

1. A control method of an application for an electronic device, comprising:
displaying a first display interface of the application, wherein the first display interface comprises N executable controls capable of being controlled by voice, and the executable controls display first identification information, wherein N is an integer;
receiving a first voice instruction of a user;
in a case where M of said runnable controls correspond to said first voice instruction, displaying second identification information for the M runnable controls to indicate that a plurality of said runnable controls exist, where M is an integer greater than or equal to 2 and less than or equal to N;
in response to a received second voice instruction including the second identification information, selecting and triggering, from among the M runnable controls, the runnable control corresponding to the second identification information included in the second voice instruction.
2. The method of claim 1, wherein the runnable control is to cause the application to switch from the first display interface to a second display interface.
3. The method of claim 1, wherein the first identification information of the runnable control is displayed based on the electronic device initiating a speech recognition function.
4. The method of claim 1, wherein receiving a first voice instruction of the user comprises:
and receiving the first voice instruction sent by the user, and identifying first text content from the first voice instruction.
5. The method of claim 4, wherein, in the case that M of the runnable controls correspond to the first voice instruction, displaying second identification information for a plurality of the runnable controls to indicate that a plurality of the runnable controls exist comprises:
determining that the M runnable controls correspond to the first voice instruction by determining that the first identification information of the M runnable controls includes the first text content, and
displaying second identification information for the M runnable controls.
6. The method of claim 1, wherein prompting a plurality of the runnable controls comprises: setting a plurality of said runnable controls to a first color and elements other than said runnable controls to a second color.
7. The method of claim 1, wherein the second identification information is unique.
8. The method of claim 7, wherein the runnable control displays the second identifying information simultaneously with the first identifying information.
9. The method of claim 7, wherein the second identification information is displayed in place of the first identification information.
10. The method of claim 1, wherein the application comprises one of an application of the application software class and an application of the operating system class, and wherein the application comprises at least one display interface.
11. An apparatus for developing a runnable control for an application, comprising:
the control design module is used for coding the runnable control and previewing the style of the runnable control after the coding of the runnable control is finished;
the code analysis module is used for verifying whether repeated content for voice control is contained in the code of the executable control;
a code prompt module to prompt the runnable control containing repeated content for voice control detected by the code analysis module.
12. The apparatus of claim 11, wherein the code prompt module sets the repeated content for voice control to be highlighted.
13. An electronic device, comprising:
a memory storing program instructions;
a processor coupled to the memory, wherein the program instructions, when executed by the processor, cause the electronic device to perform the control method of an application of any one of claims 1 to 10.
14. A readable medium having instructions stored thereon which, when run on an electronic device, cause the electronic device to perform the control method of an application according to any one of claims 1 to 10.
CN112286485A (en) * 2020-12-30 2021-01-29 智道网联科技(北京)有限公司 Method and device for controlling application through voice, electronic equipment and storage medium
CN112393725A (en) * 2019-08-16 2021-02-23 上海博泰悦臻网络技术服务有限公司 Object processing method based on multi-round voice, vehicle machine and computer storage medium


Similar Documents

Publication Publication Date Title
CN102144209B (en) Multi-tiered voice feedback in an electronic device
CN109979465B (en) Electronic device, server and control method thereof
US20190027147A1 (en) Automatic integration of image capture and recognition in a voice-based query to understand intent
US11551682B2 (en) Method of performing function of electronic device and electronic device using same
US20150179170A1 (en) Discriminative Policy Training for Dialog Systems
US11468881B2 (en) Method and system for semantic intelligent task learning and adaptive execution
CN103529934A (en) Method and apparatus for processing multiple inputs
US11150870B2 (en) Method for providing natural language expression and electronic device supporting same
CN104516709B (en) Voice householder method and system based on running software scene and voice assistant
CN112000820A (en) Media asset recommendation method and display device
CN116415594A (en) Question-answer pair generation method and electronic equipment
CN112286485B (en) Method and device for controlling application through voice, electronic equipment and storage medium
CN111866568B (en) Display device, server and video collection acquisition method based on voice
KR20190122457A (en) Electronic device for performing speech recognition and the method for the same
CN112182196A (en) Service equipment applied to multi-turn conversation and multi-turn conversation method
WO2019005387A1 (en) Command input using robust input parameters
CN103207726A (en) Apparatus And Method For Providing Shortcut Service In Portable Terminal
CN115048161A (en) Application control method, electronic device, apparatus, and medium
CN111950288B (en) Entity labeling method in named entity recognition and intelligent device
CN111344664B (en) Electronic apparatus and control method thereof
KR20220091085A (en) Electronic device and method for sharing execution information on command having continuity
CN112380871A (en) Semantic recognition method, apparatus, and medium
US11756575B2 (en) Electronic device and method for speech recognition processing of electronic device
US20230154463A1 (en) Method of reorganizing quick command based on utterance and electronic device therefor
KR20240038523A (en) Method for judging false-rejection and electronic device performing the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination