CN112614486A - Voice control execution function method and device applied to sweeper and electronic equipment - Google Patents


Info

Publication number
CN112614486A
CN112614486A (publication) · CN202011192214.5A (application)
Authority
CN
China
Prior art keywords
function
keyword
execution
text
retrieval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011192214.5A
Other languages
Chinese (zh)
Inventor
檀冲
沈荻
张书新
李贝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaogou Electric Internet Technology Beijing Co Ltd
Original Assignee
Xiaogou Electric Internet Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaogou Electric Internet Technology Beijing Co Ltd filed Critical Xiaogou Electric Internet Technology Beijing Co Ltd
Priority to CN202011192214.5A
Publication of CN112614486A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26: Speech to text systems


Abstract

Embodiments of the present disclosure provide a voice control execution function method, a device, and electronic equipment applied to a sweeper. In one embodiment, the method comprises: performing keyword retrieval on received user speech to obtain a keyword set; determining the execution level of each keyword in the keyword set to obtain an execution level set; generating an execution function text based on the execution level set and the keyword set; and controlling functions to start based on the execution function text. By retrieving keywords from the user's speech and assigning each keyword an execution level, the embodiment generates an execution function text that governs which functions are started. This allows every function of the sweeper to be started under the control of the user's voice alone, improving the user experience.

Description

Voice control execution function method and device applied to sweeper and electronic equipment
Technical Field
Embodiments of the present disclosure relate to the technical field of voice control for sweepers, and in particular to a voice control execution function method, a device, and electronic equipment applied to a sweeper.
Background
With the rapid development of voice control technology, a large number of voice interaction devices have appeared on the market. Such a device connects to various household appliances (for example, a sweeping robot) so that the user can interact with it by voice, and the device in turn controls the appliances.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure provide a voice control execution function method, a device, and electronic equipment applied to a sweeper, so as to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a voice control execution function method applied to a sweeper, the method including: performing keyword retrieval on received user speech to obtain a keyword set; determining the execution level of each keyword in the keyword set to obtain an execution level set; generating an execution function text based on the execution level set and the keyword set; and controlling functions to start based on the execution function text.
In a second aspect, some embodiments of the present disclosure provide a voice control execution function device applied to a sweeper, the device including: a retrieval unit configured to perform keyword retrieval on received user speech to obtain a keyword set; a determining unit configured to determine the execution level of each keyword in the keyword set, obtaining an execution level set; a generating unit configured to generate an execution function text based on the execution level set and the keyword set; and a control unit configured to control functions to start based on the execution function text.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
One of the various embodiments described above has the following beneficial effects: keyword retrieval on the received user speech quickly yields the keywords of the user's instructions for the target device. Determining the execution level of each keyword then captures the importance of each instruction. Finally, the execution function text generated from the execution levels and the keyword set controls which sweeper functions are started. The sweeper thus executes the functions the user specifies, giving priority to functions with a higher execution level; this matches the differing urgency the user attaches to each function and improves the user experience.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of one application scenario of a voice control execution function method applied to a sweeper, in accordance with some embodiments of the present disclosure;
Fig. 2 is a flow chart of some embodiments of a voice control execution function method applied to a sweeper according to the present disclosure;
Fig. 3 is a flow chart of further embodiments of a voice control execution function method applied to a sweeper according to the present disclosure;
Fig. 4 is a schematic structural diagram of some embodiments of a voice control execution function device applied to a sweeper according to the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of the voice control execution function method applied to a sweeper, according to some embodiments of the present disclosure. The method can be applied to a sweeper, an intelligent mobile device, and the like.
In the application scenario of fig. 1, first, the computing device 101 may perform keyword search on the received user speech 102 to obtain a keyword set 103. The computing device 101 may then determine an execution level for each keyword in the set of keywords 103, resulting in a set of execution levels 104. Thereafter, the computing device 101 may generate an execution function text 105 based on the set of execution levels 104 and the set of keywords 103. Finally, the computing device 101 can control the sweeper function 106 to turn on based on the execute function text 105.
The computing device 101 may be hardware or software. As hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices (e.g., sweepers), or as a single server or terminal device. As software, it may be installed in any of the hardware devices listed above and implemented, for example, as multiple pieces of software or software modules providing a distributed service, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
Continuing to refer to fig. 2, a flow 200 of some embodiments of a voice control executive function method applied to a sweeper in accordance with the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The voice control execution function method applied to the sweeper comprises the following steps:
step 201, performing keyword retrieval on the received user voice to obtain a keyword set.
In some embodiments, an execution body of the voice control execution function method (e.g., the computing device 101 shown in Fig. 1) may first perform speech recognition on the user speech to obtain a recognized text. The execution body may then perform keyword retrieval on the recognized text. Here, the user speech is an instruction spoken by the user to the target device, which may be a multi-function robot; for example, the user speech may be "turn on the sweeping function". A keyword is a word describing a function, for example "sweep the floor" or "play music".
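The retrieval step above can be sketched as a simple substring scan. This is a minimal illustration, not the patent's implementation; the keyword vocabulary and function names are assumptions.

```python
# Hypothetical sketch of step 201: scan the recognized text for known function
# keywords and return them in order of first appearance in the utterance.
FUNCTION_KEYWORDS = ["sweep the floor", "play music", "stop playing music"]

def retrieve_keywords(recognized_text):
    """Return every function keyword found in the recognized text."""
    hits = [(recognized_text.find(kw), kw)
            for kw in FUNCTION_KEYWORDS if kw in recognized_text]
    return [kw for _, kw in sorted(hits)]

print(retrieve_keywords("first play music, then sweep the floor"))
# → ['play music', 'sweep the floor']
```

A real system would use the speech recognizer's tokenized output rather than raw substring matching, but the keyword-set result is the same shape.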
Step 202, determining the execution level of each keyword in the keyword set to obtain an execution level set.
In some embodiments, the execution body may determine the execution level of each keyword as follows. First, it determines the word frequency of each keyword in the keyword set within the recognized text, obtaining a word-frequency set; the word frequency is the number of times the keyword appears in the recognized text. Then, it determines the execution level of each keyword from the word-frequency set. As an example, if the keywords and frequencies are "sweep the floor, 2 times; play music, 1 time", the execution body may assign "sweep the floor, level one; play music, level two".
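The frequency-to-level mapping can be sketched as follows, under the assumption (consistent with the example above) that the most frequent keyword receives level 1; the function name is illustrative.

```python
def execution_levels(keywords, recognized_text):
    # Count how often each keyword occurs in the recognized text (its word
    # frequency), then rank: the most frequent keyword gets execution level 1.
    freq = {kw: recognized_text.count(kw) for kw in keywords}
    ranked = sorted(freq, key=lambda kw: -freq[kw])
    return {kw: level for level, kw in enumerate(ranked, start=1)}

print(execution_levels(
    ["sweep the floor", "play music"],
    "sweep the floor now, then sweep the floor again, then play music"))
# → {'sweep the floor': 1, 'play music': 2}
```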
Step 203, generating an execution function text based on the execution level set and the keyword set.
In some embodiments, the execution body may generate the execution function text as follows: first, based on the execution level set, sort the keywords in the keyword set in ascending order of execution level to obtain a keyword sequence; second, number the keywords in the keyword sequence to obtain a numbered keyword sequence; third, take the numbered keyword sequence as the execution function text.
As an example, the keyword set may be "play music; sweep the floor; stop playing music", and the execution level set "play music, level two; sweep the floor, level one; stop playing music, level three". Sorting the keyword set yields the keyword sequence "sweep the floor; play music; stop playing music"; numbering yields "1. sweep the floor; 2. play music; 3. stop playing music", which the execution body takes as the execution function text.
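The sort-and-number example above can be reproduced with a few lines of code; this is an illustrative sketch with an assumed dict representation of the execution level set.

```python
def build_execution_text(levels):
    """Sort keywords by ascending execution level, number them,
    and join them into a single execution function text."""
    ordered = sorted(levels, key=levels.get)
    return "; ".join(f"{i}. {kw}" for i, kw in enumerate(ordered, start=1))

levels = {"play music": 2, "sweep the floor": 1, "stop playing music": 3}
print(build_execution_text(levels))
# → 1. sweep the floor; 2. play music; 3. stop playing music
```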
Step 204, controlling functions to start based on the execution function text.
In some embodiments, the execution body may start the functions listed in the execution function text one by one, based on that text.
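Steps 201 through 204 can be combined into one end-to-end sketch. Everything here is an illustrative assumption: the vocabulary, the frequency-based ranking from step 202, and the `start_function` callback standing in for the sweeper's actual control interface.

```python
def run_voice_pipeline(recognized_text, vocabulary, start_function):
    """End-to-end sketch of steps 201-204: retrieve keywords, rank them by
    word frequency (level 1 = most frequent), and start each function in
    level order via the supplied callback."""
    found = [kw for kw in vocabulary if kw in recognized_text]   # step 201
    freq = {kw: recognized_text.count(kw) for kw in found}       # step 202
    ordered = sorted(found, key=lambda kw: -freq[kw])            # step 203
    for kw in ordered:                                           # step 204
        start_function(kw)

started = []
run_voice_pipeline("please sweep the floor, sweep the floor first, then play music",
                   ["sweep the floor", "play music"], started.append)
print(started)
# → ['sweep the floor', 'play music']
```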
One of the various embodiments described above has the following beneficial effects: keyword retrieval on the received user speech quickly yields the keywords of the user's instructions for the target device. Determining the execution level of each keyword then captures the importance of each instruction. Finally, the execution function text generated from the execution levels and the keyword set controls which sweeper functions are started. The sweeper thus executes the functions the user specifies, giving priority to functions with a higher execution level; this matches the differing urgency the user attaches to each function and improves the user experience.
With continued reference to fig. 3, a flow chart 300 of further embodiments of voice control executive function methods applied to a sweeper according to the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The voice control execution function method applied to the sweeper comprises the following steps:
step 301, performing voice recognition on the user voice to obtain a recognition text.
In some embodiments, an executing body of the speech control executive function method (e.g., the computing device 101 shown in FIG. 1) may perform speech recognition on the user speech to obtain recognized text.
Step 302, performing function keyword retrieval on the recognized text based on a preset function keyword set, to obtain a function retrieval result set.
In some embodiments, the execution body may perform function keyword retrieval on the recognized text based on a preset function keyword set, obtaining a function retrieval result set. Here, a preset function keyword is a pre-set instruction word describing a function that may appear in user speech directed at the target device; for example, "sweep the floor" or "play music".
Step 303, determining the function search result set as a keyword set.
In some embodiments, the execution subject may determine the set of function search results as a set of keywords.
In some optional implementations of some embodiments, the execution body may obtain a new keyword set to serve as the keyword set as follows: first, based on a preset sequential keyword set, determine whether the recognized text contains any sequential keyword from that set; second, in response to determining that it does, extract the sequential keywords to obtain a sequential retrieval result set; third, combine the function retrieval result set and the sequential retrieval result set into a new keyword set. Here, a preset sequential keyword is a pre-set instruction word describing order that may appear in user speech directed at the target device; for example, an order keyword may be "first", "then", or "after".
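The optional implementation above can be sketched by recording the position of each hit so the two result sets merge back into utterance order. The helper names and the position-based merge are illustrative assumptions (the patent itself merges by retrieval time, shown in the next flow).

```python
import re

ORDER_KEYWORDS = ["first", "then", "after"]

def extract_order_keywords(recognized_text):
    """Collect each order word with its character position in the text."""
    return [(m.start(), m.group())
            for kw in ORDER_KEYWORDS
            for m in re.finditer(r"\b" + re.escape(kw) + r"\b", recognized_text)]

def merge_results(function_hits, order_hits):
    """Combine both retrieval result sets and restore utterance order."""
    return [word for _, word in sorted(function_hits + order_hits)]

text = "first play music then sweep the floor"
func_hits = [(text.find("play music"), "play music"),
             (text.find("sweep the floor"), "sweep the floor")]
print(merge_results(func_hits, extract_order_keywords(text)))
# → ['first', 'play music', 'then', 'sweep the floor']
```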
And 304, generating an execution function text based on the execution level set and the keyword set.
In some embodiments, the execution body may generate the execution function text as follows: first, record the retrieval time of each function keyword in the keyword set, obtaining a function retrieval time series; second, record the retrieval time of each sequential keyword in the keyword set, obtaining a sequential retrieval time series; third, combine the two series into a function time series; fourth, sort the function keywords and sequential keywords in the new keyword set based on the function time series and the execution level set, and take the sorting result as the execution function text.
As an example, the retrieval times of the function keywords may be "play music (13:03:46)", "start sweeping (13:03:49)", and "play music (13:03:47)", and those of the sequential keywords "first (13:03:45)" and "then (13:03:48)". Merging gives the function time series "first (13:03:45), play music (13:03:46), play music (13:03:47), then (13:03:48), start sweeping (13:03:49)". With the execution level set "play music, level one; sweep the floor, level two", the execution body obtains the sorting result "play music, sweep the floor" and hence the execution function text "play music, sweep the floor".
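The timestamped example can be sketched as follows. The `(time, word)` tuple encoding and the deduplication of repeated function keywords are assumptions chosen to reproduce the example's result.

```python
def build_time_ordered_text(function_hits, order_hits):
    """function_hits / order_hits are lists of (retrieval_time, word) records.
    Merge them into one time series, then keep each function keyword once,
    in retrieval-time order, as the execution function text."""
    order_words = {word for _, word in order_hits}
    merged = sorted(function_hits + order_hits)  # the combined function time series
    result = []
    for _, word in merged:
        if word not in order_words and word not in result:
            result.append(word)
    return result

func_hits = [("13:03:46", "play music"), ("13:03:49", "start sweeping"),
             ("13:03:47", "play music")]
order_hits = [("13:03:45", "first"), ("13:03:48", "then")]
print(build_time_ordered_text(func_hits, order_hits))
# → ['play music', 'start sweeping']
```

Lexicographic comparison of the `HH:MM:SS` strings matches chronological order here, which keeps the sketch free of datetime parsing.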
Step 305, performing function analysis on the execution function text to obtain a set of names of functions to be executed.
In some embodiments, the execution body may perform function analysis on the execution function text, obtaining a set of names of functions to be executed. Here, function analysis means extracting the function keywords from the execution function text.
Step 306, selecting, from a target function set, the functions whose names match names in the to-be-executed set, as target to-be-executed functions, to obtain a target to-be-executed function set.
In some embodiments, the execution body may select, from a target function set, the functions whose names match names in the to-be-executed set. Here, the target function set is the set of names of functions supported by the target device.
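The matching in step 306 is a simple intersection that preserves the requested order; this sketch and its function names are illustrative assumptions.

```python
def select_target_functions(names_to_execute, supported_functions):
    """Keep only the requested function names that the device actually
    supports, preserving the order of the execution function text."""
    supported = set(supported_functions)
    return [name for name in names_to_execute if name in supported]

print(select_target_functions(
    ["play music", "sweep the floor", "mop"],
    ["sweep the floor", "play music", "return to dock"]))
# → ['play music', 'sweep the floor']
```

Filtering against the supported set means an unrecognized request (here, the assumed "mop") is silently dropped rather than sent to the device.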
Step 307, controlling each target to-be-executed function in the target to-be-executed function set to be started based on the execution function text.
In some embodiments, the execution body may control each target to-be-executed function in the target to-be-executed function set to start, based on the execution function text.
As can be seen from Fig. 3, compared with the embodiments corresponding to Fig. 2, the flow 300 in Fig. 3 expands on how the keyword set is obtained, how the execution function text is generated, and how functions are controlled to start. By retrieving preset sequential keywords alongside preset function keywords, these embodiments obtain a more complete keyword set, which facilitates controlling the sweeper's functions in the order the user requires.
With further reference to fig. 4, as an implementation of the above methods for the above drawings, the present disclosure provides some embodiments of a voice control execution function device applied to a sweeper, which correspond to the above method embodiments of fig. 2, and which can be applied to various electronic devices.
As shown in fig. 4, the voice control execution function device 400 applied to the sweeper of some embodiments includes: a retrieval unit 401, a determination unit 402, a generation unit 403, and a control unit 404. The retrieval unit 401 is configured to perform keyword retrieval on the received user voice to obtain a keyword set; a determining unit 402 configured to determine an execution level of each keyword in the keyword set, resulting in an execution level set; a generating unit 403 configured to generate an execution function text based on the execution level set and the keyword set; a control unit 404 configured to control a function to be turned on based on the execution function text.
In some optional implementations of some embodiments, the retrieving unit 401 of the voice control executive function device 400 applied to the sweeper is further configured to: carrying out voice recognition on the user voice to obtain a recognition text; based on a preset function keyword set, performing function keyword retrieval on the identification text to obtain a function retrieval result set; and determining the function retrieval result set as the keyword set.
In some optional implementations of some embodiments, the retrieving unit 401 of the voice control executive function device 400 applied to the sweeper is further configured to: determining whether the recognition text contains sequential keywords in a preset sequential keyword set or not based on a preset sequential keyword set; in response to the fact that the recognition text contains the sequential keywords in the preset sequential keyword set, extracting the sequential keywords to obtain a sequential retrieval result set; and combining the function retrieval result set and the sequence retrieval result set to obtain a new keyword set.
In some optional implementations of some embodiments, the determining unit 402 of the voice control executive function device 400 applied to the sweeper is further configured to: determining the word frequency of each keyword in the keyword set in the identification text to obtain a word frequency set; and determining the execution level of each keyword in the keyword set based on the word frequency set to obtain an execution level set.
In some optional implementations of some embodiments, the generating unit 403 of the voice control executive function device 400 applied to the sweeper is further configured to: based on the execution level set, sorting the keywords in the keyword set according to the sequence of the execution levels from small to large to obtain a keyword sequence; numbering the keywords in the keyword sequence to obtain a numbered keyword sequence; and determining the numbered keyword sequence as an execution function text.
In some optional implementations of some embodiments, the generating unit 403 of the voice control executive function device 400 applied to the sweeper is further configured to: recording the retrieval time of each functional keyword in the new keyword set to obtain a functional retrieval time sequence; recording the retrieval time of each sequential keyword in the new keyword set to obtain a sequential retrieval time sequence; combining the function retrieval time sequence and the sequence retrieval time sequence to obtain a function time sequence; and sequencing the function keywords and the sequence keywords in the new keyword set based on the function time sequence and the execution level set to obtain a sequencing result, and determining the sequencing result as an execution function text.
In some optional implementations of some embodiments, the control unit 404 of the voice control execution function apparatus 400 applied to the sweeper is further configured to: performing function analysis on the execution function text to obtain a name set of the function to be executed; selecting a function with the same name as the name of the function to be executed from a target function set as a target function to be executed based on the name set of the function to be executed to obtain a target function set to be executed; and controlling each target to-be-executed function in the target to-be-executed function set to be started based on the execution function text.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to Fig. 5, a block diagram of an electronic device 500 (e.g., the computing device 101 of Fig. 1) suitable for implementing some embodiments of the present disclosure is shown. The device shown in Fig. 5 is only an example and should not impose any limitation on the functions or scope of use of embodiments of the present disclosure.
As shown in Fig. 5, the electronic device 500 may include a processing means 501 (e.g., a central processing unit or graphics processor) that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing means 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device described above, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: carry out keyword retrieval on the received user voice to obtain a keyword set; determine the execution level of each keyword in the keyword set to obtain an execution level set; generate an execution function text based on the execution level set and the keyword set; and control a function to be started based on the execution function text.
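The four steps this program carries out can be sketched end to end as follows. This is a minimal illustration only: the keyword vocabulary, the frequency-based level assignment, and the `number:keyword` text format are assumptions for the example, not details fixed by the disclosure.

```python
# Sketch of the disclosed pipeline: keyword retrieval -> execution
# levels -> execution function text -> function control.
# FUNCTION_KEYWORDS is a hypothetical preset vocabulary.

FUNCTION_KEYWORDS = {"sweep", "mop", "recharge"}

def retrieve_keywords(recognition_text: str) -> list:
    """Step 1: keyword retrieval, deduplicated in order of appearance."""
    hits = [w for w in recognition_text.split() if w in FUNCTION_KEYWORDS]
    return list(dict.fromkeys(hits))

def execution_levels(keywords, recognition_text: str) -> dict:
    """Step 2: one execution level per keyword (here: its word frequency)."""
    words = recognition_text.split()
    return {kw: words.count(kw) for kw in keywords}

def build_execution_text(levels: dict) -> str:
    """Step 3: sort keywords by level, ascending, then number them."""
    ordered = sorted(levels, key=levels.get)
    return ";".join(f"{i}:{kw}" for i, kw in enumerate(ordered, start=1))

def control_functions(execution_text: str) -> list:
    """Step 4: parse the text and report each function to be started."""
    return [entry.split(":", 1)[1] for entry in execution_text.split(";")]

text = "sweep the floor then recharge and sweep again"
levels = execution_levels(retrieve_keywords(text), text)
exec_text = build_execution_text(levels)
started = control_functions(exec_text)
```

With the sample utterance, "recharge" (frequency 1) is numbered before "sweep" (frequency 2), and the control step starts the functions in that order.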
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a retrieval unit, a determination unit, a generation unit, and a control unit. In some cases, the names of these units do not limit the units themselves; for example, the retrieval unit may also be described as a unit for performing keyword retrieval on received user speech to obtain a keyword set.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept — for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A voice control execution function method applied to a sweeper comprises the following steps:
carrying out keyword retrieval on the received user voice to obtain a keyword set;
determining the execution level of each keyword in the keyword set to obtain an execution level set;
generating an execution function text based on the execution level set and the keyword set;
and controlling the function to be started based on the execution function text.
2. The method of claim 1, wherein the performing keyword retrieval on the received user voice to obtain a keyword set comprises:
carrying out voice recognition on the user voice to obtain a recognition text;
performing, based on a preset function keyword set, function keyword retrieval on the recognition text to obtain a function retrieval result set;
and determining the function retrieval result set as the keyword set.
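A minimal sketch of this retrieval step, in which `recognize` is a stand-in for a real speech recognition engine (here it simply returns its input as text) and the preset function keyword set is a hypothetical example:

```python
# Claim 2 sketch: speech recognition, then function keyword retrieval
# against a preset function keyword set. The preset set is illustrative.

PRESET_FUNCTION_KEYWORDS = {"sweep", "mop", "recharge", "pause"}

def recognize(audio) -> str:
    # Placeholder for an ASR call; the "audio" here is already text.
    return audio

def function_keyword_retrieval(recognition_text: str) -> set:
    """Collect every word of the recognition text found in the preset set."""
    words = recognition_text.lower().split()
    return {w for w in words if w in PRESET_FUNCTION_KEYWORDS}

keyword_set = function_keyword_retrieval(recognize("please sweep then recharge"))
```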
3. The method of claim 2, wherein the performing keyword retrieval on the received user voice to obtain a keyword set further comprises:
determining, based on a preset sequential keyword set, whether the recognition text contains a sequential keyword in the preset sequential keyword set;
in response to determining that the recognition text contains a sequential keyword in the preset sequential keyword set, extracting the sequential keyword to obtain a sequential retrieval result set;
and combining the function retrieval result set and the sequence retrieval result set to obtain a new keyword set.
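The combination of the two retrieval result sets can be sketched as a set union; the keyword sets below are hypothetical examples, and the sequential vocabulary ("first", "then", ...) is an assumption:

```python
# Claim 3 sketch: extract sequential keywords from the recognition text
# and merge them with the function retrieval result set.

SEQUENTIAL_KEYWORDS = {"first", "then", "finally"}  # hypothetical preset set
FUNCTION_RESULTS = {"sweep", "recharge"}            # as produced by claim 2

def sequential_retrieval(recognition_text: str) -> set:
    """Extract the sequential keywords present in the recognition text."""
    return {w for w in recognition_text.split() if w in SEQUENTIAL_KEYWORDS}

def merge(function_results: set, sequential_results: set) -> set:
    """Combine both result sets into the new keyword set."""
    return function_results | sequential_results

new_keyword_set = merge(FUNCTION_RESULTS,
                        sequential_retrieval("first sweep then recharge"))
```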
4. The method of claim 2, wherein the determining the execution level of each keyword in the set of keywords, resulting in a set of execution levels, comprises:
determining the word frequency of each keyword in the keyword set in the recognition text to obtain a word frequency set;
and determining the execution level of each keyword in the keyword set based on the word frequency set to obtain an execution level set.
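One way to realize the frequency-to-level mapping is sketched below. The disclosure does not fix the exact rule; the assumption here is that a more frequent keyword gets a lower (earlier) execution level.

```python
# Claim 4 sketch: word frequencies in the recognition text determine
# each keyword's execution level. The mapping rule is an assumption.
from collections import Counter

def word_frequencies(keywords, recognition_text: str) -> dict:
    """Count how often each keyword occurs in the recognition text."""
    counts = Counter(recognition_text.split())
    return {kw: counts[kw] for kw in keywords}

def execution_levels(freqs: dict) -> dict:
    """Assumed rule: higher frequency -> lower (earlier) level number."""
    ranked = sorted(freqs, key=freqs.get, reverse=True)
    return {kw: level for level, kw in enumerate(ranked, start=1)}

freqs = word_frequencies({"sweep", "recharge"}, "sweep sweep then recharge")
levels = execution_levels(freqs)
```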
5. The method of claim 4, wherein generating an execution function text based on the set of execution levels and the set of keywords comprises:
based on the execution level set, sorting the keywords in the keyword set in ascending order of execution level to obtain a keyword sequence;
numbering the keywords in the keyword sequence to obtain a numbered keyword sequence;
and determining the numbered keyword sequence as an execution function text.
6. The method of claim 3, wherein generating an execution function text based on the set of execution levels and the set of keywords comprises:
recording the retrieval time of each function keyword in the new keyword set to obtain a function retrieval time sequence;
recording the retrieval time of each sequential keyword in the new keyword set to obtain a sequential retrieval time sequence;
combining the function retrieval time sequence and the sequence retrieval time sequence to obtain a function time sequence;
and sorting the function keywords and the sequential keywords in the new keyword set based on the function time sequence and the execution level set to obtain a sorting result, and determining the sorting result as an execution function text.
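A sketch of this time-based ordering follows. A monotonically increasing counter stands in for real retrieval timestamps, and the tie-breaking rule (earlier retrieval time first, then lower execution level) is an assumption not fixed by the disclosure.

```python
# Claim 6 sketch: record a retrieval "time" for each keyword, merge the
# function and sequential time sequences, and sort the keywords.
from itertools import count

_clock = count()  # stand-in for real retrieval timestamps

def timed_retrieval(words, keyword_set) -> list:
    """Record a (keyword, retrieval time) pair for each hit, in order."""
    return [(w, next(_clock)) for w in words if w in keyword_set]

def order_keywords(function_hits, sequential_hits, levels: dict) -> str:
    merged = function_hits + sequential_hits  # combined time sequence
    # Earlier retrieval time first; ties broken by execution level.
    merged.sort(key=lambda pair: (pair[1], levels.get(pair[0], 0)))
    return ";".join(kw for kw, _ in merged)

words = "first sweep then recharge".split()
func_hits = timed_retrieval(words, {"sweep", "recharge"})
seq_hits = timed_retrieval(words, {"first", "then"})
execution_text = order_keywords(func_hits, seq_hits,
                                {"sweep": 1, "recharge": 2})
```

Because the function retrieval pass runs before the sequential pass in this sketch, its hits carry earlier timestamps and therefore sort first.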
7. The method of any of claims 1-6, wherein said controlling a function to turn on based on said execute function text comprises:
performing function analysis on the execution function text to obtain a name set of functions to be executed;
selecting, based on the name set of functions to be executed, each function in a target function set whose name matches a name in the name set as a target function to be executed, to obtain a target to-be-executed function set;
and controlling each target to-be-executed function in the target to-be-executed function set to be started based on the execution function text.
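A sketch of this final control step, with a hypothetical function registry standing in for the sweeper's target function set and an execution function text in the illustrative `number:name` format used earlier:

```python
# Claim 7 sketch: parse the execution function text into function names,
# select matching functions from a registry, and start each one.
# The registry contents and text format are illustrative assumptions.

REGISTRY = {
    "sweep": lambda: "sweeping started",
    "recharge": lambda: "returning to dock",
    "mop": lambda: "mopping started",
}

def parse_names(execution_text: str) -> list:
    """Function analysis: recover the ordered names to be executed."""
    return [entry.split(":", 1)[1] for entry in execution_text.split(";") if entry]

def start_functions(execution_text: str) -> list:
    """Select matching target functions and start each in order."""
    names = parse_names(execution_text)
    targets = [REGISTRY[name] for name in names if name in REGISTRY]
    return [fn() for fn in targets]

results = start_functions("1:sweep;2:recharge")
```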
8. A voice control function execution device applied to a sweeper, comprising:
the retrieval unit is configured to perform keyword retrieval on the received user voice to obtain a keyword set;
a determining unit configured to determine an execution level of each keyword in the keyword set, resulting in an execution level set;
a generating unit configured to generate an execution function text based on the execution level set and the keyword set;
a control unit configured to control a function to be turned on based on the execution function text.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202011192214.5A 2020-10-30 2020-10-30 Voice control execution function method and device applied to sweeper and electronic equipment Pending CN112614486A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011192214.5A CN112614486A (en) 2020-10-30 2020-10-30 Voice control execution function method and device applied to sweeper and electronic equipment

Publications (1)

Publication Number Publication Date
CN112614486A true CN112614486A (en) 2021-04-06

Family

ID=75225734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011192214.5A Pending CN112614486A (en) 2020-10-30 2020-10-30 Voice control execution function method and device applied to sweeper and electronic equipment

Country Status (1)

Country Link
CN (1) CN112614486A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399901A (en) * 2013-07-25 2013-11-20 三星电子(中国)研发中心 Keyword extraction method
US20150025883A1 (en) * 2013-07-16 2015-01-22 Samsung Electronics Co., Ltd. Method and apparatus for recognizing voice in portable device
CN105161099A (en) * 2015-08-12 2015-12-16 恬家(上海)信息科技有限公司 Voice-controlled remote control device and realization method thereof
CN107120791A (en) * 2017-04-27 2017-09-01 珠海格力电器股份有限公司 A kind of air conditioning control method, device and air conditioner
CN107193973A (en) * 2017-05-25 2017-09-22 百度在线网络技术(北京)有限公司 The field recognition methods of semanteme parsing information and device, equipment and computer-readable recording medium
CN108320747A (en) * 2018-02-08 2018-07-24 广东美的厨房电器制造有限公司 Appliances equipment control method, equipment, terminal and computer readable storage medium
CN108885614A (en) * 2017-02-06 2018-11-23 华为技术有限公司 A kind of processing method and terminal of text and voice messaging
CN109147801A (en) * 2018-09-30 2019-01-04 深圳市元征科技股份有限公司 voice interactive method, system, terminal and storage medium
CN109255064A (en) * 2018-08-30 2019-01-22 Oppo广东移动通信有限公司 Information search method, device, intelligent glasses and storage medium
CN110265014A (en) * 2019-06-24 2019-09-20 付金龙 A kind of method, apparatus and translator of voice control
CN111145744A (en) * 2019-12-20 2020-05-12 长兴博泰电子科技股份有限公司 Ad-hoc network-based intelligent household voice control recognition method

Similar Documents

Publication Publication Date Title
US11409425B2 (en) Transactional conversation-based computing system
US11164574B2 (en) Conversational agent generation
CN110807515A (en) Model generation method and device
CN109829164B (en) Method and device for generating text
US10395658B2 (en) Pre-processing partial inputs for accelerating automatic dialog response
WO2022151915A1 (en) Text generation method and apparatus, and electronic device and computer-readable medium
CN111340220A (en) Method and apparatus for training a predictive model
CN111353601A (en) Method and apparatus for predicting delay of model structure
CN115129878B (en) Conversation service execution method, device, storage medium and electronic equipment
CN113468344B (en) Entity relationship extraction method and device, electronic equipment and computer readable medium
CN112102836B (en) Voice control screen display method and device, electronic equipment and medium
WO2021047209A1 (en) Optimization for a call that waits in queue
CN111160002B (en) Method and device for analyzing abnormal information in output spoken language understanding
CN112614486A (en) Voice control execution function method and device applied to sweeper and electronic equipment
JP2024508412A (en) Generating natural language interfaces from graphical user interfaces
CN111754984B (en) Text selection method, apparatus, device and computer readable medium
CN116848580A (en) Structural self-aware model for utterance parsing for multiparty conversations
CN113709506A (en) Multimedia playing method, device, medium and program product based on cloud mobile phone
CN112581951A (en) Voice control execution function method and device applied to sweeper and electronic equipment
CN111131354B (en) Method and apparatus for generating information
CN113064704A (en) Task processing method and device, electronic equipment and computer readable medium
CN110688529A (en) Method and device for retrieving video and electronic equipment
CN110990528A (en) Question answering method and device and electronic equipment
CN110781234A (en) TRS database retrieval method, device, equipment and storage medium
CN111292766B (en) Method, apparatus, electronic device and medium for generating voice samples

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 7-605, 6th floor, building 1, yard a, Guanghua Road, Chaoyang District, Beijing 100026

Applicant after: Beijing dog vacuum cleaner Group Co.,Ltd.

Address before: 7-605, 6th floor, building 1, yard a, Guanghua Road, Chaoyang District, Beijing 100026

Applicant before: PUPPY ELECTRONIC APPLIANCES INTERNET TECHNOLOGY (BEIJING) Co.,Ltd.

Country or region before: China
