CN111243580B - Voice control method, device and computer readable storage medium - Google Patents

Voice control method, device and computer readable storage medium

Info

Publication number
CN111243580B
Authority
CN
China
Prior art keywords
interface
control
name
application program
names
Prior art date
Legal status
Active
Application number
CN201811433203.4A
Other languages
Chinese (zh)
Other versions
CN111243580A (en)
Inventor
孙向作
Current Assignee
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd
Priority to CN201811433203.4A
Publication of CN111243580A
Application granted
Publication of CN111243580B

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention is applicable to the technical field of artificial intelligence, and provides a voice control method, a voice control device and a computer readable storage medium. The method comprises the following steps: acquiring voice content input by a user and parsing keywords from the voice content; searching a preset database for control display text matching the keywords, and extracting the corresponding interface package name from the preset database according to the control display text as the target interface package name, wherein the preset database contains the correspondence between control display text and interface package names; and starting the interface corresponding to the target interface package name. This improves the flexibility of voice control and avoids the coupling problem caused by interfacing directly with third-party application programs.

Description

Voice control method, device and computer readable storage medium
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a voice control method, a voice control device and a computer readable storage medium.
Background
In order to control various terminal devices more conveniently so that they execute corresponding operations, voice control technology is often used. For speech recognition and semantic understanding to preferentially hit the control functions on the current interface of a terminal device while the device is being controlled by voice, it is necessary to know which application program and which interface the user is currently using, as well as important information displayed on that interface, such as the text displayed on the interface's controls.
However, in the prior art, accomplishing this requires the voice recognition module to be interfaced with every third-party application program. The workload and difficulty of such interfacing are very large, which makes it very difficult for the voice recognition module to control all application programs. Moreover, because the voice recognition module becomes tightly coupled to each application program, subsequent updates of an application program may affect the control performed by the voice recognition module, so later maintenance is also relatively difficult.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a voice control method, a voice control device and a computer readable storage medium, so as to solve the problem that, in existing voice control methods, the voice recognition module has difficulty controlling some third-party application programs.
A first aspect of the embodiments of the present invention provides a voice control method, including: parsing keywords from voice content input by a user; searching a preset database for control display text matching the keywords, and extracting the interface package name corresponding to the control display text from the preset database as the target interface package name, wherein the preset database contains the correspondence between control display text and interface package names; and starting the interface corresponding to the target interface package name.
A second aspect of the embodiments of the present invention provides a voice control device, including: a first acquisition module, configured to parse keywords from voice content input by a user; a searching module, configured to search a preset database for control display text matching the keywords and to extract the interface package name corresponding to the control display text from the preset database as the target interface package name, the preset database containing the correspondence between control display text and interface package names; and a starting module, configured to start the interface corresponding to the target interface package name.
A third aspect of the embodiments of the present invention provides a voice control device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method provided by the first aspect of the embodiments of the present invention when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method provided by the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: keywords are parsed from the voice content input by a user, control display text matching the keywords is searched for in a database generated from the interface data of application programs, the interface package name corresponding to that control display text is extracted to determine which interface the user wants to start, and the interface corresponding to that interface package name is started. The voice control module can thus control the relevant interfaces of application programs without being interfaced with the third-party application programs, which improves the flexibility of voice control and avoids the coupling problem caused by interfacing directly with third-party application programs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a voice control method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of an interface provided by an embodiment of the present invention;
FIG. 3 is a flow chart of generating a preset database according to an embodiment of the present invention;
FIG. 4 is a flowchart of a specific implementation of step S302 of the preset database generation method according to an embodiment of the present invention;
FIG. 5 is a block diagram of a voice control apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a voice control apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical solution of the present invention, a description is given below through specific embodiments.
FIG. 1 shows the implementation flow of a voice control method provided by an embodiment of the present invention, detailed as follows:
In S101, voice content input by a user is acquired, and keywords are parsed from the voice content.
In the embodiment of the present invention, the voice content input by the user can be converted into the corresponding text through the voice recognition module. By way of example, the embodiment of the present invention can adopt the Android RecognizerIntent speech recognizer to recognize the voice content input by the user and generate keywords.
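As an illustration only (not part of the original disclosure), the following minimal Java sketch shows how such recognition could be wired up with Android's RecognizerIntent; the Activity name, the request code and the hand-off comment are assumptions added for illustration.

// Minimal sketch (assumption): launching Android's built-in speech recognizer
// from an Activity and reading the recognized text in onActivityResult.
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceInputActivity extends Activity {
    private static final int REQ_SPEECH = 1001; // illustrative request code

    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, REQ_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                String keyword = results.get(0);
                // hand the keyword to the database lookup step (S102)
            }
        }
    }
}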
In S102, a control display text matching with the keyword is searched in a preset database, and a corresponding interface package name is extracted from the preset database according to the control display text to serve as a target interface package name, wherein the preset database contains a corresponding relation between the control display text and the interface package name.
In the embodiment of the present invention, the voice recognition module does not need to be interfaced directly with third-party application programs, and the keywords parsed by the voice recognition module do not need to be fed into a specific application program; instead, data matching the keywords is searched for in a preset database, and the corresponding interface package name is found based on that data. It will be understood that the interface package name is the name of an interface package; among the files stored on the terminal device, an interface package name corresponds to a unique interface package, and the interface package contains the code for starting an application program and jumping to the corresponding interface, so the interface corresponding to an interface package name can be started by executing the code in that interface package.
Notably, the data in the preset database in the embodiment of the present invention is generated in advance by analyzing the relevant files of the installed application programs; the specific analysis process will be described in detail in the following embodiments. It should be emphasized here that the preset database contains at least two types of data, one being control display text and the other being interface package names, and that the correspondence between the two is stored in the preset database.
The control display text refers to the text displayed on a control of an interface. For example, as shown in FIG. 2, the current interface is a "television guard" interface, which contains at least 5 controls, each with text displayed on it, such as "one-key acceleration", "garbage cleaning" and "application management", making it convenient for the user to select the corresponding control. It will be understood that when the user clicks the control displaying the text "one-key acceleration", the current interface may jump to another interface to perform the one-key acceleration function, or the function may be performed without any interface jump.
Each interface corresponds to an interface package name; for example, the "television guard" interface mentioned above has a corresponding interface package name of its own.
In the embodiment of the present invention, because the preset database contains the correspondence between control display text and interface package names, once the keywords are known, the control display text matching the keywords can be searched for in the preset database, and the interface package name corresponding to that control display text can then be found and used as the target interface package name. For example, if the keyword "one-key acceleration" is parsed from the voice input by the user, the control display text matching "one-key acceleration" is searched for in the preset database, and the interface package name corresponding to that control display text is then determined.
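For illustration, here is a minimal sketch of how the S102 lookup might be performed if the preset database were an SQLite database; the table name control_map and the column names control_text / interface_package are assumptions, not names given in the patent.

// Sketch (assumption): the preset database as a SQLite table mapping control
// display text to an interface package name; table/column names are illustrative.
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class PresetDatabase {
    private final SQLiteDatabase db;

    public PresetDatabase(SQLiteDatabase db) {
        this.db = db;
    }

    /** Returns the interface package name whose control text matches the keyword, or null. */
    public String findTargetInterface(String keyword) {
        Cursor c = db.rawQuery(
                "SELECT interface_package FROM control_map WHERE control_text LIKE ?",
                new String[]{"%" + keyword + "%"});
        try {
            return c.moveToFirst() ? c.getString(0) : null;
        } finally {
            c.close();
        }
    }
}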
In S103, an interface corresponding to the name of the target interface is started.
It will be appreciated that, as described above, because the interface package includes the code that launches an application program and enters the corresponding interface, the interface corresponding to the target interface package name may be started by executing the code in that interface package.
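As a hedged illustration, assuming the stored target interface package name can be resolved to an Android application package plus an Activity class name, the interface could be started with an explicit Intent along the following lines; the class name and parameters are illustrative.

// Sketch (assumption): starting the target interface with an explicit Intent,
// given an application package and an Activity class; names are illustrative.
import android.content.Context;
import android.content.Intent;

public final class InterfaceLauncher {
    private InterfaceLauncher() {}

    public static void launch(Context context, String appPackage, String activityClass) {
        Intent intent = new Intent();
        intent.setClassName(appPackage, activityClass);   // e.g. "com.example.guard", "com.example.guard.SpeedUpActivity"
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);   // needed when starting from a non-Activity context
        context.startActivity(intent);
    }
}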
In the embodiment of the present invention, keywords are parsed from the voice content input by the user, the control display text matching the keywords is searched for in a database generated from the interface data of application programs, and the interface package name corresponding to that control display text is extracted to determine which interface the user wants to start; the interface corresponding to that interface package name is then started. The voice control module can thus control the relevant interfaces of application programs without being interfaced with the third-party application programs, which improves the flexibility of voice control and avoids the coupling problem caused by interfacing directly with third-party application programs.
The foregoing embodiments mention that the preset database plays an important role in the smooth implementation of the voice control method. This embodiment of the present invention describes the process of generating the preset database. FIG. 3 shows the process of generating the preset database provided by an embodiment of the present invention, detailed as follows:
In S301, a layout file of each interface in the installed application program is obtained, where the layout file of each interface includes control data of each control in the interface, and the control data includes control display text.
As is well known, in the Android system each interface corresponds to a layout file, and the layout file includes data for the various elements in the interface, including the control data of each control in the interface. The control data comprises the control type (such as a text-display control, a button control, a picture-display control and the like), the control ID, the control width, the control height, the control display text and so on. Taking a piece of pseudo code from a layout file as an example, assume the pseudo code is:
<Button
android:id="@+id/btnPre"
android:layout_width="124dip"
android:layout_height="37dip"
android:text="one-key acceleration"/>
This represents one control whose type is a button control, whose control ID is btnPre, whose control width is 124 units, whose control height is 37 units, and whose control display text is "one-key acceleration".
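For illustration, here is a sketch of how control display text might be read out of such a layout file with XmlPullParser; this assumes the plain-text (decompiled) form of the layout described below, since layouts inside an APK are compiled binary XML, and the class name is illustrative.

// Sketch (assumption): reading control display text from a decompiled (plain-text)
// layout file with XmlPullParser; the "android:text" attribute holds the display text.
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserFactory;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

public class LayoutScanner {
    static final String ANDROID_NS = "http://schemas.android.com/apk/res/android";

    public static List<String> extractControlTexts(String layoutPath) throws Exception {
        List<String> texts = new ArrayList<>();
        XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
        parser.setFeature(XmlPullParser.FEATURE_PROCESS_NAMESPACES, true);
        parser.setInput(new FileReader(layoutPath));
        for (int event = parser.getEventType(); event != XmlPullParser.END_DOCUMENT;
             event = parser.next()) {
            if (event == XmlPullParser.START_TAG) {
                // parser.getName() is the control type (e.g. Button)
                String text = parser.getAttributeValue(ANDROID_NS, "text");
                if (text != null) {
                    texts.add(text);   // control display text, e.g. "one-key acceleration"
                }
            }
        }
        return texts;
    }
}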
Specifically, the application program file of the application program is found under a preset directory and decompiled to generate an application decompilation file. The application decompilation file includes the interface files of all interfaces of the application program, and each interface file contains the file ID of the layout file of its interface; the layout file corresponding to that file ID is then retrieved.
In the embodiment of the present invention, during startup of the Android system an application management service, namely PackageManagerService, is started. This service scans preset directories in the system and thereby finds the application files of all installed application programs, i.e. the files with the apk suffix.
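As an illustrative sketch, application code can obtain a comparable list of installed application programs and their interface (Activity) names through the public PackageManager API; the use of PackageManager here, rather than PackageManagerService itself, is an assumption for illustration.

// Sketch (assumption): enumerating installed applications and the interface
// (Activity) names they declare, via the public PackageManager API.
import android.content.Context;
import android.content.pm.ActivityInfo;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import java.util.List;

public class InstalledAppScanner {
    public static void listInterfaces(Context context) {
        PackageManager pm = context.getPackageManager();
        List<PackageInfo> packages = pm.getInstalledPackages(PackageManager.GET_ACTIVITIES);
        for (PackageInfo pkg : packages) {
            if (pkg.activities == null) continue;
            for (ActivityInfo activity : pkg.activities) {
                // pkg.packageName  -> application package name
                // activity.name    -> interface (Activity) class name
            }
        }
    }
}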
On the one hand, the embodiment of the present invention obtains the interface names and interface package names of all interfaces contained in an application program through the application management service; on the other hand, the application program file is decompiled with the apktool tool of the Android system to generate an application decompilation file, which is a file containing the Smali code of the application program. In the process of generating the application decompilation file, a corresponding Smali directory is generated according to the hierarchical structure of the application program file, and every class in the application program file has a corresponding independent Smali file under that directory. Notably, the application decompilation file includes the interface files of all interfaces of the application program, and each interface file contains the file ID of the layout file of its interface.
Illustratively, assume that the name of one interface is com.sunxz.test.MainActivity; a Smali directory with the structure com\sunxz\test\ is then generated, and under this directory a file named MainActivity.smali is produced. Assume that the content of this application decompilation file is as follows:
.class public Lcom/sunxz/test/MainActivity;
.super Landroid/app/Activity;
.source "MainActivity.java"
# virtual methods
.method protected onCreate(Landroid/os/Bundle;)V
.locals 3
.parameter "savedInstanceState"
.prologue
.line 14
invoke-super {p0, p1}, Landroid/app/Activity;->onCreate(Landroid/os/Bundle;)V
.line 15
const/high16 v2, 0x7f03
invoke-virtual {p0, v2}, Lcom/sunxz/test/MainActivity;->setContentView(I)V
The first line, the ".class" directive, specifies the class name of the current class. The second line, the ".super" directive, specifies the parent class of the current class. The third line, the ".source" directive, specifies the source file name of the current class. "# virtual methods" is the method declaration marker, ".parameter" is the parameter directive, ".prologue" marks the start of the code, and invoke-virtual is a method call instruction. The last line of code completes the setting of the view of the MainActivity activity, loading the layout indicated by the method parameter via setContentView(I)V. invoke-virtual is an opcode representing a method call, and {p0, v2} are the registers holding the parameters. Lcom/sunxz/test/MainActivity; is the object type on which the method is invoked, and setContentView(I)V is the specific method invoked, where I indicates that the parameter type is int and V indicates that the return type is void. In this line of disassembled code, the two registers p0 and v2 respectively hold the MainActivity object and an int value; that int value is defined in the penultimate line, const/high16 v2, 0x7f03, which assigns the value 0x7f03 to register v2. From this value it can be determined that the MainActivity activity loads the layout file whose file ID is 0x7f03.
Through the above example, the file ID of the layout file of any interface of an application program can be found from that application program's decompilation file.
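For illustration, here is a sketch of how the layout file ID could be recovered from a decompiled .smali file by pairing a const / const/high16 assignment with the register passed to setContentView(I)V; the regular expressions only cover the simple pattern in the example above and are assumptions.

// Sketch (assumption): recovering the layout file ID from a .smali file by pairing
// a const/high16 (or const) assignment with the register passed to setContentView(I)V.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SmaliLayoutIdFinder {
    private static final Pattern CONST =
            Pattern.compile("const(?:/high16|/16)?\\s+(v\\d+),\\s*(0x[0-9a-fA-F]+)");
    private static final Pattern SET_CONTENT_VIEW =
            Pattern.compile("invoke-virtual\\s*\\{p0,\\s*(v\\d+)\\}.*->setContentView\\(I\\)V");

    public static String findLayoutId(String smaliPath) throws Exception {
        Map<String, String> lastConst = new HashMap<>();   // register -> last constant assigned
        List<String> lines = Files.readAllLines(Paths.get(smaliPath));
        for (String line : lines) {
            Matcher c = CONST.matcher(line);
            if (c.find()) {
                lastConst.put(c.group(1), c.group(2));
            }
            Matcher s = SET_CONTENT_VIEW.matcher(line);
            if (s.find()) {
                return lastConst.get(s.group(1));           // e.g. "0x7f03"
            }
        }
        return null;
    }
}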
In S302, according to the interface names and the layout files of the interfaces in the installed application program, the corresponding relation between the control display text and the interface names is stored in the preset database.
It may be appreciated that, as described above, on the one hand the application management service can obtain the interface names and interface package names of the interfaces included in an application program, which means it can obtain the interface package name of the interface currently being displayed by the terminal device; on the other hand, the control display text of each control on an interface can be parsed from the layout file obtained in the previous step. Therefore, the correspondence between control display text and interface package names can be established.
Notably, in the embodiment of the present invention, the interface package name corresponding to a piece of control display text is the interface package name of the interface displayed by the terminal device after the user clicks the control on which that text is displayed.
Specifically, the installed application programs are started one by one; the operations shown in FIG. 4 are performed on a started application program until all controls of all interfaces in that application program have been selected, after which the operations shown in FIG. 4 are performed on another installed application program.
FIG. 4 shows a flowchart of a specific implementation of step S302 of the preset database generation method according to an embodiment of the present invention, detailed as follows:
In S3021, the interface package name of the current interface is obtained as the first interface package name, and the layout file of the current interface is extracted, so as to determine whether all the controls in the layout file are selected.
It will be understood that the "current interface" is the interface being displayed by the terminal device. For example: when an application is started, a main interface of the application is first entered, the current interface is the main interface, if some controls of the main interface are operated, another interface may be entered, and at this time, the current interface is no longer the main interface.
Since the layout file of each interface can be obtained by the method in the above embodiment, the layout file of the current interface can be obtained naturally. As described above, the layout file of one interface includes control data of a plurality of controls, and in the embodiment of the present invention, each control in the layout file is selected one by one as a selected control for subsequent calculation, so that the control in one layout file may be selected or unselected.
In S3022, if all the controls in the layout file are selected, returning to the previous interface of the current interface in the application program, and re-executing S3021;
It will be appreciated that, since a subsequent step simulates clicking on controls, the current interface may not be the main interface of the application program but rather a sub-interface several layers below the main interface. If there is no previous interface to return to within the application program, the current interface is the main interface; according to the overall logic of FIG. 4, if all the controls in the main interface of the application program have already been selected, then all the controls in all interfaces of the application program have been selected, and, as described above, the logic of FIG. 4 is exited and the operations shown in FIG. 4 are performed on another installed application program.
Notably, after returning to the previous interface of the current interface, what counts as the "current interface" naturally changes as well.
In S3023, if all the controls in the layout file are not selected, selecting one unselected control in the layout file as the selected control.
As described above, in the embodiment of the present invention, each control in the layout file is selected one by one as a selected control, so that in this step, one unselected control is selected from the layout file as a selected control.
In S3024, the control display text of the selected control is extracted from the layout file, and after the selected control is clicked in a simulated manner, the interface package name of the current interface is obtained again as the second interface package name.
From the description in the above embodiment, it can be known that the control display text of the control included in one interface can be extracted from the layout file of the interface.
In the embodiment of the present invention, after the selected control is clicked in a simulated manner, one possibility is that the current interface jumps to another, new interface, in which case the first interface package name and the second interface package name differ; the other possibility is that the display stays on the current interface, in which case the first interface package name and the second interface package name are the same.
Optionally, the operation of clicking the selected control is simulated by setting the current focus on the display screen onto the selected control and then sending a simulated click command.
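As a hedged illustration, one way to realize "set focus, then send a click command" is to inject D-pad key events through android.app.Instrumentation, which requires an instrumentation or suitably privileged context; the patent does not mandate this particular mechanism.

// Sketch (assumption): simulating "focus the control, then click" by injecting key
// events through android.app.Instrumentation; requires a privileged/instrumentation context.
import android.app.Instrumentation;
import android.view.KeyEvent;

public class ClickSimulator {
    private final Instrumentation instrumentation = new Instrumentation();

    /** Move focus by the given number of DPAD_DOWN presses, then "click" the focused control. */
    public void focusAndClick(int downPresses) {
        for (int i = 0; i < downPresses; i++) {
            instrumentation.sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_DOWN);  // move focus
        }
        instrumentation.sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_CENTER);    // simulated click
    }
}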
In S3025, it is determined whether the first interface package name is the same as the second interface package name.
In S3026, if the first interface package name is the same as the second interface package name, the process returns to S3021.
In S3027, if the first interface name is different from the second interface name, storing the corresponding relationship between the control display text of the selected control and the second interface name in the preset database, and executing the S3021.
It can be appreciated that by the method, the corresponding relation between the control display text and the interface package name can be established, and specifically, the interface package name is the interface package name of the interface which is entered after clicking the control corresponding to the control display text.
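For illustration, the traversal loop of FIG. 4 (S3021 to S3027) can be sketched as follows; the abstract helpers stand in for the platform-specific operations described above (reading the current interface package name, loading its layout, simulating the click, returning to the previous interface, and writing to the preset database) and are assumptions, not part of the patent text.

// Sketch (assumption): the FIG. 4 traversal for one started application program.
public abstract class InterfaceTraverser {

    public interface Control {
        boolean isSelected();
        void markSelected();
        String displayText();
    }

    protected abstract String currentPackageName();                          // interface package name of the current interface
    protected abstract java.util.List<Control> controlsOf(String pkgName);   // controls from its layout file
    protected abstract void simulateClick(Control control);                  // focus the control and send a click
    protected abstract boolean goBack();                                     // false when already at the main interface
    protected abstract void store(String controlText, String interfacePackageName);

    public void traverse() {
        while (true) {
            String firstPkg = currentPackageName();            // S3021: first interface package name
            Control next = null;
            for (Control c : controlsOf(firstPkg)) {
                if (!c.isSelected()) { next = c; break; }
            }
            if (next == null) {                                // all controls of this interface selected
                if (!goBack()) break;                          // main interface reached: this application is finished
                continue;
            }
            next.markSelected();
            String text = next.displayText();                  // S3024: control display text
            simulateClick(next);
            String secondPkg = currentPackageName();           // second interface package name
            if (!secondPkg.equals(firstPkg)) {                 // S3027: a different interface was opened
                store(text, secondPkg);
            }
            // in either case the loop returns to S3021 and re-reads the current interface
        }
    }
}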
Optionally, storing the control display text, the control type, the control width and the control height of the selected control into the preset database. It can be appreciated that these data are used to specify the display effect of the control on the screen, where the "control displays text" can prompt the user to click on the corresponding control by causing the control on the interface to display the corresponding text; "control type" is used to specify the type to which the control belongs, for example: displaying a text class control, a button class control, a picture class control and the like; "control width" and "control height" are used to indicate the size of the display area of the control on the screen.
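For illustration, here is a sketch of a table definition that also stores these optional attributes, using Android's SQLiteOpenHelper; the database, table and column names are assumptions chosen to match the earlier lookup sketch.

// Sketch (assumption): a SQLiteOpenHelper whose table also stores the optional
// control attributes (type, width, height); names match the lookup sketch above.
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class PresetDbHelper extends SQLiteOpenHelper {
    public PresetDbHelper(Context context) {
        super(context, "preset.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE control_map ("
                + "control_text TEXT NOT NULL,"        // control display text
                + "interface_package TEXT NOT NULL,"   // interface package name opened by the control
                + "control_type TEXT,"                 // e.g. Button
                + "control_width TEXT,"                // e.g. 124dip
                + "control_height TEXT)");             // e.g. 37dip
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS control_map");
        onCreate(db);
    }
}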
Further, in another embodiment of the present invention, a monitoring function is provided in the terminal device. On the one hand, if it is detected that an application program has been uninstalled, the interface package names of all interfaces contained in the uninstalled application program are taken as selected interface package names, and the data in the preset database containing those selected interface package names are deleted.
On the other hand, if it is detected that a new application program has been installed, the correspondence between the control display text of each interface in the new application program and the interface package name is generated through the method described in the above embodiments and added to the preset database.
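As an illustrative sketch, such monitoring could be implemented with a BroadcastReceiver registered for package-added and package-removed broadcasts; the receiver class and the comments describing the database maintenance are assumptions.

// Sketch (assumption): monitoring install/uninstall events with a BroadcastReceiver
// registered for ACTION_PACKAGE_ADDED / ACTION_PACKAGE_REMOVED.
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class PackageChangeReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String pkg = intent.getData() != null ? intent.getData().getSchemeSpecificPart() : null;
        if (pkg == null) return;
        if (Intent.ACTION_PACKAGE_REMOVED.equals(intent.getAction())) {
            // delete every preset-database row whose interface package belongs to pkg
        } else if (Intent.ACTION_PACKAGE_ADDED.equals(intent.getAction())) {
            // rebuild control-text / interface-package entries for the new application
        }
    }
}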
Corresponding to the voice control method described in the above embodiments, FIG. 5 shows a block diagram of a voice control device according to an embodiment of the present invention; for convenience of explanation, only the parts related to the embodiment of the present invention are shown.
Referring to fig. 5, the apparatus includes:
a first obtaining module 501, configured to obtain voice content input by a user, and parse keywords from the voice content;
the searching module 502 is configured to search a preset database for a control display text matching with the keyword, and extract, according to the control display text, a corresponding interface package name from the preset database as a target interface package name, where the preset database includes a corresponding relationship between the control display text and the interface package name;
and the starting module 503 is configured to start an interface corresponding to the name of the target interface.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a layout file of each interface in the installed application program, wherein the layout file of each interface comprises control data of each control in the interface, and the control data comprises control display characters;
and the storage module is used for storing the corresponding relation between the control display text and the interface names into the preset database according to the interface names and the layout files of all the interfaces in the installed application program.
Optionally, the second obtaining module includes:
the decompilation sub-module is used for searching out application program files of the application program under a preset catalog, decompiling the application program files to generate application decompilation files, wherein the application decompilation files comprise interface files of all interfaces of the application program, and the interface files comprise file IDs of layout files of the interfaces;
and the calling sub-module is used for calling the layout file corresponding to the file ID.
Optionally, the storage module is specifically configured to:
starting the installed application programs one by one, and executing the following operations on the started application programs until all controls of all interfaces in the application programs are selected:
s1: acquiring an interface package name of a current interface as a first interface package name, extracting a layout file of the current interface, judging whether all controls in the layout file are selected, if not, selecting one unselected control in the layout file as a selected control, and executing step S2; if all the controls in the layout file are selected, returning to the previous interface of the current interface in the application program, re-executing the operation of obtaining the interface package name of the current interface as a first interface package name, extracting the layout file of the current interface, and judging whether all the controls in the layout file are selected;
s2: extracting control display characters of the selected control from the layout file, and re-acquiring an interface package name of a current interface as a second interface package name after the selected control is simulated to be clicked;
s3: if the first interface package name is the same as the second interface package name, returning to execute the S1;
s4: and if the first interface name is different from the second interface name, storing the corresponding relation between the control display text of the selected control and the second interface name into the preset database, and returning to execute the S1.
Optionally, the apparatus further comprises:
and the monitoring execution module is used for taking the interface package names of all interfaces contained in the unloaded application program as the selected interface package names if the application program is unloaded, and deleting the data containing the selected interface names in the preset database.
FIG. 6 is a schematic diagram of a voice control device according to an embodiment of the present invention. As shown in FIG. 6, the voice control device of this embodiment includes: a processor 60, a memory 61, and a computer program 62, such as a voice control program, stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps of the voice control method embodiments described above, such as steps S101 to S103 shown in FIG. 1. Alternatively, when executing the computer program 62, the processor 60 implements the functions of the modules/units of the device embodiments described above, such as the functions of modules 501 to 503 shown in FIG. 5.
The voice control device 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The voice control device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that FIG. 6 is merely an example of the voice control device 6 and does not limit it; the voice control device 6 may include more or fewer components than illustrated, may combine certain components, or may use different components. For example, the voice control device may also include input and output devices, network access devices, buses and so on.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the voice control device 6, such as a hard disk or memory of the voice control device 6. The memory 61 may also be an external storage device of the voice control device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card equipped on the voice control device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the voice control device 6. The memory 61 is used to store the computer program and other programs and data required by the voice control device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the device may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be adjusted as appropriate according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A voice control method, comprising:
analyzing keywords from voice content input by a user;
searching control display characters matched with the keywords in a preset database, extracting corresponding interface package names from the preset database according to the control display characters to serve as target interface package names, wherein the preset database contains the corresponding relation between the control display characters and the interface package names, the interface package names correspond to the interface packages, and the interface packages contain codes for starting an application program and entering corresponding interfaces;
starting an interface corresponding to the name of the target interface;
before acquiring the voice content input by the user, the method further comprises the following steps:
acquiring a layout file of each interface in an installed application program, wherein the layout file of each interface comprises control data of each control in the interface, and the control data comprises control display characters;
storing the corresponding relation between the control display text and the interface names into the preset database according to the interface names and the layout files of all interfaces in the installed application program;
storing the corresponding relation between the control display text and the interface names in the preset database according to the interface names and the layout files of all interfaces in the installed application program, wherein the method comprises the following steps:
starting the installed application programs one by one, and executing the following operations on the started application programs until all controls of all interfaces in the application programs are selected:
s1: acquiring an interface package name of a current interface as a first interface package name, extracting a layout file of the current interface, and judging whether all controls in the layout file are selected;
s2: if all the controls in the layout file are not selected, selecting one unselected control in the layout file as a selected control, extracting control display characters of the selected control from the layout file, and re-acquiring an interface package name of a current interface as a second interface package name after the selected control is clicked in a simulation mode;
s3: and if the first interface name is different from the second interface name, storing the corresponding relation between the control display text of the selected control and the second interface name into the preset database, and returning to execute the S1.
2. The voice control method according to claim 1, wherein the obtaining a layout file of each interface in the installed application program includes:
searching out application program files of the application program under a preset catalog, decompiling the application program files to generate application decompilation files, wherein the application decompilation files comprise interface files of all interfaces of the application program, and the interface files comprise file IDs of layout files of the interfaces;
and calling the layout file corresponding to the file ID.
3. The voice control method of claim 1, further comprising:
if all the controls in the layout file are selected, returning to the previous interface of the current interface in the application program, re-executing the operation of obtaining the interface package name of the current interface as the first interface package name, extracting the layout file of the current interface, and judging whether all the controls in the layout file are selected.
4. The voice control method of claim 1, further comprising:
and if the first interface package name is the same as the second interface package name, returning to execute the S1.
5. The voice control method of claim 1, further comprising:
if the fact that the application program is unloaded is monitored, the interface package names of all interfaces contained in the unloaded application program are used as the selected interface package names, and data containing the selected interface names in the preset database are deleted.
6. The voice control method of claim 1, further comprising:
and if the first interface name is different from the second interface name, storing the control display text, the control type, the control width and the control height of the selected control into the preset database.
7. A voice control apparatus, comprising:
the first acquisition module is used for analyzing keywords from voice content input by a user;
the searching module is used for searching control display characters matched with the keywords in a preset database, extracting corresponding interface package names from the preset database according to the control display characters to serve as target interface package names, wherein the preset database contains the corresponding relation between the control display characters and the interface package names, the interface package names correspond to the interface packages, and the interface packages contain codes for starting an application program and entering corresponding interfaces;
the starting module is used for starting the interface corresponding to the name of the target interface;
the second acquisition module is used for acquiring a layout file of each interface in the installed application program, wherein the layout file of each interface comprises control data of each control in the interface, and the control data comprises control display characters;
the storage module is used for storing the corresponding relation between the control display characters and the interface names into the preset database according to the interface names and the layout files of all the interfaces in the installed application program;
the storage module is specifically used for:
starting the installed application programs one by one, and executing the following operations on the started application programs until all controls of all interfaces in the application programs are selected:
s1: acquiring an interface package name of a current interface as a first interface package name, extracting a layout file of the current interface, and judging whether all controls in the layout file are selected;
s2: if all the controls in the layout file are not selected, selecting one unselected control in the layout file as a selected control, extracting control display characters of the selected control from the layout file, and re-acquiring an interface package name of a current interface as a second interface package name after the selected control is clicked in a simulation mode;
s3: and if the first interface name is different from the second interface name, storing the corresponding relation between the control display text of the selected control and the second interface name into the preset database, and returning to execute the S1.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the speech control method according to any one of claims 1 to 6.
CN201811433203.4A 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium Active CN111243580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811433203.4A CN111243580B (en) 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811433203.4A CN111243580B (en) 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111243580A CN111243580A (en) 2020-06-05
CN111243580B true CN111243580B (en) 2023-06-09

Family

ID=70879177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811433203.4A Active CN111243580B (en) 2018-11-28 2018-11-28 Voice control method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111243580B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859900B (en) * 2020-07-14 2023-09-08 维沃移动通信有限公司 Message display method and device and electronic equipment
CN112347277A (en) * 2020-10-28 2021-02-09 同辉佳视(北京)信息技术股份有限公司 Menu generation method and device, electronic equipment and readable storage medium
CN112783550A (en) * 2021-01-25 2021-05-11 维沃软件技术有限公司 Application program management method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200910201A (en) * 2007-08-29 2009-03-01 Inventec Corp System and method thereof for switching a display interface
JP2014146260A (en) * 2013-01-30 2014-08-14 Fujitsu Ltd Voice input/output database search method, program and device
CN107948698A (en) * 2017-12-14 2018-04-20 深圳市雷鸟信息科技有限公司 Sound control method, system and the smart television of smart television
CN108109618A (en) * 2016-11-25 2018-06-01 宇龙计算机通信科技(深圳)有限公司 voice interactive method, system and terminal device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970514B (en) * 2013-01-28 2018-04-06 腾讯科技(深圳)有限公司 The information acquisition method and device of Android application program installation kit
CN103645906B (en) * 2013-12-25 2018-04-10 上海斐讯数据通信技术有限公司 The method and system that interface is laid out again are realized based on fixed interface layout files
CN103885783A (en) * 2014-04-03 2014-06-25 深圳市三脚蛙科技有限公司 Voice control method and device of application program
CN104599669A (en) * 2014-12-31 2015-05-06 乐视致新电子科技(天津)有限公司 Voice control method and device
KR20170014353A (en) * 2015-07-29 2017-02-08 삼성전자주식회사 Apparatus and method for screen navigation based on voice
CN105138357B (en) * 2015-08-11 2018-05-01 中山大学 A kind of implementation method and its device of mobile application operation assistant
CN105957530B (en) * 2016-04-28 2020-01-03 海信集团有限公司 Voice control method and device and terminal equipment
CN106293600A (en) * 2016-08-05 2017-01-04 三星电子(中国)研发中心 A kind of sound control method and system
CN107871501A (en) * 2016-09-27 2018-04-03 Fmr有限责任公司 The automated software identified using intelligent sound performs method
CN108009078B (en) * 2016-11-01 2021-04-27 腾讯科技(深圳)有限公司 Application interface traversal method, system and test equipment
US10498858B2 (en) * 2016-12-14 2019-12-03 Dell Products, Lp System and method for automated on-demand creation of and execution of a customized data integration software application
CN108364644A (en) * 2018-01-17 2018-08-03 深圳市金立通信设备有限公司 A kind of voice interactive method, terminal and computer-readable medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200910201A (en) * 2007-08-29 2009-03-01 Inventec Corp System and method thereof for switching a display interface
JP2014146260A (en) * 2013-01-30 2014-08-14 Fujitsu Ltd Voice input/output database search method, program and device
CN108109618A (en) * 2016-11-25 2018-06-01 宇龙计算机通信科技(深圳)有限公司 voice interactive method, system and terminal device
CN107948698A (en) * 2017-12-14 2018-04-20 深圳市雷鸟信息科技有限公司 Sound control method, system and the smart television of smart television

Also Published As

Publication number Publication date
CN111243580A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN109376166B (en) Script conversion method, script conversion device, computer equipment and storage medium
CN109002510B (en) Dialogue processing method, device, equipment and medium
CN111243580B (en) Voice control method, device and computer readable storage medium
CN108459964B (en) Test case selection method, device, equipment and computer readable storage medium
CN111385633B (en) Resource searching method based on voice, intelligent terminal and storage medium
US20130081002A1 (en) Selective data flow analysis of bounded regions of computer software applications
CN111367531A (en) Code processing method and device
KR20180129623A (en) Apparatus for statically analyzing assembly code including assoxiated multi files
CN111385661B (en) Method, device, terminal and storage medium for voice control of full screen playing
CN111158667B (en) Code injection method and device, electronic equipment and storage medium
CN112671878A (en) Block chain information subscription method, device, server and storage medium
CN110348226B (en) Engineering file scanning method and device, electronic equipment and storage medium
CN107071553B (en) Method, device and computer readable storage medium for modifying video and voice
JP7231664B2 (en) Vulnerability feature acquisition method, device and electronic device
CN110674491B (en) Method and device for real-time evidence obtaining of android application and electronic equipment
CN109857481B (en) Data acquisition method and device, readable medium and electronic equipment
CN108959646B (en) Method, system, device and storage medium for automatically verifying communication number
CN113935847A (en) Online process risk processing method, device, server and medium
CN109299960B (en) Method and device for monitoring advertisement, computer readable storage medium and terminal equipment
CN111124627B (en) Method and device for determining call initiator of application program, terminal and storage medium
RU2595763C2 (en) Method and apparatus for managing load on basis of android browser
JP2010191483A (en) Operation support device, operation support method and program
CN111151008A (en) Game operation data verification method, device, configuration background and medium
CN110780983A (en) Task exception handling method and device, computer equipment and storage medium
CN112256252A (en) Interface generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

GR01 Patent grant