CN115631752A - Intelligent equipment AI voice control method and system supporting machine learning

Intelligent equipment AI voice control method and system supporting machine learning

Info

Publication number
CN115631752A
Authority
CN
China
Prior art keywords
mode
voice control
function
voice
functional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211628957.1A
Other languages
Chinese (zh)
Other versions
CN115631752B (en)
Inventor
石劲磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Manyun Intelligent Technology Co ltd
Original Assignee
Shenzhen Manyun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Manyun Intelligent Technology Co ltd filed Critical Shenzhen Manyun Intelligent Technology Co ltd
Priority to CN202211628957.1A priority Critical patent/CN115631752B/en
Publication of CN115631752A publication Critical patent/CN115631752A/en
Application granted granted Critical
Publication of CN115631752B publication Critical patent/CN115631752B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L2015/0638 Interactive procedures
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention is applicable to the technical field of voice control and provides an AI voice control method and system for intelligent devices supporting machine learning. The method comprises: performing learning analysis on historical voice control information of a user to obtain a plurality of functional modes, and assigning each functional mode a mode name; collecting a voice control instruction issued by the user and determining whether the instruction exists in any functional mode; when it does, calling all functional modes that contain the instruction, displaying each functional mode together with its voice control instructions, and broadcasting the mode names of the functional modes by voice; and collecting mode name voice information spoken by the user, determining the corresponding functional mode from the mode name it contains, and executing all voice control instructions in that functional mode. In this way the user can make the intelligent device execute a plurality of voice control instructions by speaking only one voice control instruction, which is fast and convenient.

Description

Intelligent equipment AI voice control method and system supporting machine learning
Technical Field
The invention relates to the technical field of voice control, in particular to an AI voice control method and system for intelligent equipment supporting machine learning.
Background
Voice control is a common means of controlling intelligent devices and is widely used in smartphones, smart factories and smart homes. When it is inconvenient for people to interact by touch, voice control is an effective and convenient control means. However, current voice control issues one command per utterance, and in many cases a user needs to issue several voice commands within a short time, so current voice control is not fast and convenient enough. Therefore, it is desirable to provide an AI voice control method and system for intelligent devices supporting machine learning that solves the above problems.
Disclosure of Invention
In view of the defects in the prior art, the present invention aims to provide an AI voice control method and system for an intelligent device supporting machine learning, so as to solve the problems in the background art.
The invention is realized as follows. An AI voice control method for an intelligent device supporting machine learning comprises the following steps:
performing learning analysis on historical voice control information of a user, the historical voice control information comprising voice control instructions and instruction execution times, to obtain a plurality of function modes, wherein each function mode comprises a plurality of voice control instructions and time intervals are set between the voice control instructions;
generating function mode standby name information, and receiving a function mode naming instruction input by the user so that each function mode corresponds to a mode name;
collecting a voice control instruction issued by the user and determining whether the voice control instruction exists in any function mode; when it does not exist in a function mode, directly executing the voice control instruction; when it exists in a function mode, executing the next step;
calling all function modes that contain the voice control instruction, displaying the function modes and the corresponding voice control instructions, and broadcasting the mode names of the function modes by voice;
collecting mode name voice information spoken by the user, determining the corresponding function mode according to the mode name in the mode name voice information, and executing all voice control instructions in that function mode.
As a further scheme of the invention: the step of performing learning analysis on the historical voice control information of the user, wherein the historical voice control information comprises voice control instructions and instruction execution time, and obtaining a plurality of function modes specifically comprises the following steps:
obtaining a plurality of function groups according to all voice control instructions and corresponding instruction execution time, wherein the interval value between the instruction execution time of all the voice control instructions in each function group is smaller than a set interval value;
classifying the function groups, wherein the function groups in each class are completely the same, and when the number of the function groups in a certain class reaches a set number value, marking the function groups as function modes;
and determining the time interval between the voice control instructions according to the interval value between the instruction execution times, wherein the time interval reflects the sequence time relationship between the voice control instructions.
As a further scheme of the invention: the step of determining a corresponding functional mode according to a mode name in the mode name voice information and executing all voice control instructions in the functional mode specifically includes:
identifying a mode name in the mode name voice information, and determining a corresponding functional mode;
calling a time interval in the functional mode, and broadcasting the time interval in a voice mode;
and acquiring a time interval adjusting command sent by a user, adjusting the time interval, and executing all voice control instructions in the functional mode.
As a further scheme of the invention: the method further comprises the following steps:
editing and setting the time interval in the functional mode to obtain a default time interval;
and when the word "default" is recognized in the collected mode name voice information, no longer broadcasting the time interval by voice.
As a further scheme of the invention: the method further comprises receiving a function mode uploaded by the user in a user-defined manner, wherein the function mode comprises a plurality of voice control instructions, time intervals are set between the voice control instructions, and the time intervals are edited by the user.
Another object of the present invention is to provide a smart device AI voice control system supporting machine learning, the system comprising:
the function mode generating module is used for learning and analyzing historical voice control information of a user, wherein the historical voice control information comprises voice control instructions and instruction execution time to obtain a plurality of function modes, each function mode comprises a plurality of voice control instructions, and time intervals are arranged among the voice control instructions;
the function mode naming module is used for generating function mode standby name information and receiving a function mode naming instruction input by a user so that each function mode corresponds to a mode name;
the user voice acquisition module is used for acquiring a voice control instruction sent by a user, determining whether the voice control instruction exists in a functional mode or not, and directly executing the voice control instruction when the voice control instruction does not exist in any functional mode; when the voice control instruction exists in a functional mode, executing the steps in the mode name broadcasting module;
the mode name broadcasting module is used for calling all the functional modes containing the voice control instruction, displaying the functional modes and the corresponding voice control instruction and broadcasting the mode names of the functional modes in a voice mode;
and the functional mode execution module is used for acquiring mode name voice information sent by a user, determining a corresponding functional mode according to a mode name in the mode name voice information, and executing all voice control instructions in the functional mode.
As a further scheme of the invention: the functional pattern generation module includes:
the function group determining unit is used for obtaining a plurality of function groups according to all the voice control instructions and the corresponding instruction execution time, and the interval value between the instruction execution times of all the voice control instructions in each function group is smaller than the set interval value;
the function group classification unit is used for classifying the function groups, the function groups in each class are completely the same, and when the number of the function groups in a certain class reaches a set number value, the function groups are marked as function modes;
and the time interval determining unit is used for determining the time interval between the voice control instructions according to the interval value between the instruction execution times, and the time interval reflects the sequence time relationship between the voice control instructions.
As a further scheme of the invention: the function mode execution module includes:
the function mode determining unit is used for identifying the mode name in the mode name voice information and determining a corresponding function mode;
the time interval broadcasting unit is used for calling the time interval in the functional mode and broadcasting the time interval in a voice mode;
and the time interval adjusting unit is used for acquiring a time interval adjusting command sent by a user, adjusting the time interval and then executing all the voice control instructions in the functional mode.
As a further scheme of the invention: the system further comprises a time interval editing module, wherein the time interval editing module specifically comprises:
the time interval editing unit is used for editing and setting the time interval in the functional mode to obtain a default time interval;
and the interval broadcast prohibiting unit is used for, when the word "default" is recognized in the collected mode name voice information, no longer broadcasting the time interval by voice.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, a plurality of functional modes are obtained by learning and analyzing the historical voice control information of the user, and each functional mode corresponds to a mode name; collecting a voice control instruction sent by a user, determining whether the voice control instruction exists in a functional mode, calling all functional modes including the voice control instruction when the voice control instruction exists in the functional mode, displaying the functional mode and the corresponding voice control instruction, and broadcasting a mode name of the functional mode by voice; the method comprises the steps of collecting mode name voice information sent by a user, determining a corresponding functional mode according to a mode name in the mode name voice information, and executing all voice control instructions in the functional mode, so that the user only speaks one voice control instruction, the intelligent equipment can execute a plurality of voice control instructions, and the method is fast and convenient.
Drawings
Fig. 1 is a flowchart of an AI voice control method for an intelligent device supporting machine learning.
Fig. 2 is a flowchart for learning and analyzing historical voice control information of a user in an intelligent device AI voice control method supporting machine learning, where the historical voice control information includes a voice control instruction and an instruction execution time, and a plurality of function modes are obtained.
Fig. 3 is a flowchart of determining a corresponding functional mode according to a mode name in a mode name voice message and executing all voice control commands in the functional mode in an intelligent device AI voice control method supporting machine learning.
Fig. 4 is a flowchart of editing and setting a time interval in a functional mode in an AI voice control method of an intelligent device supporting machine learning.
Fig. 5 is a schematic structural diagram of an AI voice control system of an intelligent device supporting machine learning.
Fig. 6 is a schematic structural diagram of a functional mode generation module in an AI voice control system of an intelligent device supporting machine learning.
Fig. 7 is a schematic structural diagram of a functional mode execution module in an AI voice control system of an intelligent device supporting machine learning.
Fig. 8 is a schematic structural diagram of a time interval editing module in an AI voice control system of an intelligent device supporting machine learning.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear, the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Specific implementations of the present invention are described in detail below with reference to specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides an AI voice control method for an intelligent device supporting machine learning, where the method includes the following steps:
S100, performing learning analysis on historical voice control information of a user, the historical voice control information comprising voice control instructions and instruction execution times, to obtain a plurality of function modes, wherein each function mode comprises a plurality of voice control instructions and time intervals are set between the voice control instructions;
S200, generating function mode standby name information, and receiving a function mode naming instruction input by the user so that each function mode corresponds to a mode name;
S300, collecting a voice control instruction issued by the user and determining whether the voice control instruction exists in any function mode; when it does not exist in a function mode, directly executing the voice control instruction; when it exists in a function mode, executing the next step;
S400, calling all function modes that contain the voice control instruction, displaying the function modes and the corresponding voice control instructions, and broadcasting the mode names of the function modes by voice;
S500, collecting mode name voice information spoken by the user, determining the corresponding function mode according to the mode name in the mode name voice information, and executing all voice control instructions in that function mode.
It should be noted that voice control is an effective and convenient control means. However, current voice control issues one command per utterance, and in many cases a user needs to issue several voice commands within a short time, so current voice control is not fast and convenient enough.
In the embodiment of the invention, learning analysis can be performed on the user's historical voice control information automatically and periodically. The historical voice control information comprises voice control instructions and instruction execution times, and the learning analysis yields a plurality of function modes; each function mode comprises a plurality of voice control instructions, with time intervals set between them that reflect their sequential time relationship. After the function modes are generated, function mode standby name information is produced automatically to remind the user to name each function mode; the user can check the specific content of each function mode while naming it and then inputs a function mode naming instruction, so that each function mode corresponds to a mode name. The user can then simply speak one mode name to make the intelligent device execute all voice control instructions corresponding to that mode name, which is fast and convenient. It is easy to understand that if there are too many function modes, the user will find it difficult to remember each of them. Therefore, when a voice control instruction issued by the user is collected, it is determined whether the instruction exists in any function mode. When it does not, the voice control instruction is executed directly; when it does, all function modes containing the voice control instruction are called and displayed together with their corresponding voice control instructions for the user to check, and the mode names of these function modes are broadcast by voice for the user to choose from. When the user hears the desired mode name, the user can speak the mode name voice information, as sketched in the code below.
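The overall flow of S100-S500 can be illustrated with a minimal Python sketch. It assumes, purely for illustration, that a function mode is represented as a dictionary with "name", "commands" and "intervals" fields, and it stubs out device control, text-to-speech broadcasting and listening; none of these identifiers come from the patent itself.

    from typing import Callable, Dict, List

    # Assumed representation of a function mode: the user-given name, the ordered
    # voice control instructions it contains, and the gaps (in minutes) between them.
    Mode = Dict[str, object]

    def execute(command: str) -> None:
        print(f"executing: {command}")        # placeholder for real device control

    def speak(text: str) -> None:
        print(f"[voice broadcast] {text}")    # placeholder for text-to-speech output

    def handle_command(command: str, modes: List[Mode], listen: Callable[[], str]) -> None:
        """Steps S300-S500: run the command directly, or offer the matching function modes."""
        matching = [m for m in modes if command in m["commands"]]
        if not matching:                       # S300: not contained in any function mode
            execute(command)
            return
        for m in matching:                     # S400: display candidates and broadcast their names
            speak(f"mode '{m['name']}' contains: {', '.join(m['commands'])}")
        chosen = listen()                      # S500: the user speaks a mode name
        for m in matching:
            if m["name"] == chosen:
                for c in m["commands"]:
                    execute(c)                 # interval handling is sketched with S501-S503 below
                return

    # Example: one learned mode and a stubbed listener that "hears" its name
    modes = [{"name": "good morning",
              "commands": ["open the curtains", "play the news"],
              "intervals": [0.0]}]
    handle_command("open the curtains", modes, listen=lambda: "good morning")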
As shown in fig. 2, as a preferred embodiment of the present invention, the step of performing learning analysis on historical voice control information of a user, where the historical voice control information includes a voice control instruction and an instruction execution time, to obtain a plurality of function modes specifically includes:
S101, obtaining a plurality of function groups according to all voice control instructions and corresponding instruction execution times, wherein the interval value between the instruction execution times of all the voice control instructions in each function group is smaller than a set interval value;
S102, classifying the function groups, wherein the function groups in each class are completely the same, and when the number of the function groups in a certain class reaches a set number value, marking the function groups as function modes;
S103, determining time intervals between the voice control instructions according to the interval values between the instruction execution times, wherein the time intervals reflect the sequential time relationship between the voice control instructions.
In the embodiment of the invention, in order to obtain function modes automatically, a plurality of function groups first need to be obtained according to all the voice control instructions and their corresponding instruction execution times, where the interval between the instruction execution times of the voice control instructions in each function group is smaller than a set interval value. The set interval value is a fixed value set in advance; in this way it is ensured that all voice control instructions in a function group were issued within a limited period of time and are therefore correlated. The function groups are then classified, and the function groups in each class are completely the same, meaning that all voice control instructions in each function group are the same even though their execution times differ. For example, if function group one includes a first voice control instruction and a second voice control instruction, and function group two also includes the first voice control instruction and the second voice control instruction, then function group one and function group two belong to the same class. When the number of function groups in a certain class reaches a set number value, this indicates that the group is used frequently, and the function group is marked as a function mode. The time interval between the voice control instructions is then determined from the interval values between their execution times, for example as an average value: if the function group corresponding to the first and second voice control instructions is marked as a function mode and the intervals between the first and second voice control instructions in the corresponding class are 8 minutes, 9 minutes and 10 minutes, the time interval between the first and second voice control instructions is determined to be 9 minutes. In addition, if the time interval is less than a certain value (for example 1 minute), the time interval defaults to 0. This learning analysis is sketched in code below.
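The grouping, classification and averaging just described can be sketched as follows. The sketch assumes the history is a chronological list of (voice control instruction, execution time in minutes) pairs, and uses illustrative values for the set interval value (max_gap) and the set number value (min_count); the function name, parameter names and thresholds are assumptions, not values fixed by the patent.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def build_function_modes(history: List[Tuple[str, float]],
                             max_gap: float = 15.0,
                             min_count: int = 3) -> List[Dict]:
        """S101-S103: derive function modes from (instruction, execution time in minutes) history."""
        if not history:
            return []

        # S101: split the history into function groups wherever the gap between
        # consecutive execution times reaches the set interval value (max_gap)
        groups, current = [], [history[0]]
        for prev, cur in zip(history, history[1:]):
            if cur[1] - prev[1] < max_gap:
                current.append(cur)
            else:
                groups.append(current)
                current = [cur]
        groups.append(current)

        # S102: classify the groups by their instruction sequence and count repetitions
        by_commands = defaultdict(list)
        for group in groups:
            by_commands[tuple(cmd for cmd, _ in group)].append(group)

        modes = []
        for commands, occurrences in by_commands.items():
            # a function mode needs several instructions and enough repetitions (set number value)
            if len(commands) < 2 or len(occurrences) < min_count:
                continue
            # S103: average the gaps between consecutive instructions over all occurrences;
            # gaps averaging under 1 minute default to 0, as in the example above
            intervals = []
            for i in range(len(commands) - 1):
                gaps = [occ[i + 1][1] - occ[i][1] for occ in occurrences]
                avg = sum(gaps) / len(gaps)
                intervals.append(0.0 if avg < 1.0 else avg)
            modes.append({"name": "", "commands": list(commands), "intervals": intervals})
        return modes

    # Example matching the 8 / 9 / 10 minute illustration in the text
    history = [("first instruction", 0), ("second instruction", 8),
               ("first instruction", 100), ("second instruction", 109),
               ("first instruction", 200), ("second instruction", 210)]
    print(build_function_modes(history))
    # -> [{'name': '', 'commands': ['first instruction', 'second instruction'], 'intervals': [9.0]}]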
As shown in fig. 3, as a preferred embodiment of the present invention, the step of determining a corresponding functional mode according to a mode name in the mode name voice information and executing all voice control instructions in the functional mode specifically includes:
S501, recognizing the mode name in the mode name voice information, and determining the corresponding function mode;
S502, calling the time interval in the function mode, and broadcasting the time interval by voice;
and S503, acquiring a time interval adjusting command sent by the user, adjusting the time interval, and executing all voice control instructions in the function mode.
In the embodiment of the invention, when the user speaks a mode name, the corresponding function mode is determined automatically, the time intervals in that function mode are called, and the time intervals are broadcast by voice. For example, the broadcast may state that the time interval between the first voice control instruction and the second voice control instruction is 9 minutes, reminding the user that the second voice control instruction will be executed 9 minutes after the first. If the user feels that 9 minutes is not suitable, the user can issue a time interval adjusting command, for example changing the interval from 9 minutes to 12 minutes. After the time interval is adjusted, all voice control instructions in the function mode are executed, which better meets the user's needs in this situation. A sketch of this execution step follows.
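A minimal sketch of S501-S503, reusing the speak() and execute() stubs from the earlier sketch. It assumes, for illustration only, that the adjustment is spoken simply as a new number of minutes that replaces every gap; the patent leaves the exact form of the adjusting command open.

    import time

    def run_mode(mode: dict, listen) -> None:
        """S501-S503: broadcast the intervals, accept an optional adjustment, then run the mode."""
        intervals = list(mode["intervals"])
        # S502: broadcast the current time intervals by voice
        for i, gap in enumerate(intervals):
            speak(f"interval between instruction {i + 1} and instruction {i + 2}: {gap} minutes")
        # S503: an adjustment spoken as a number of minutes is applied to every gap here
        reply = listen()
        if reply and reply.replace(".", "", 1).isdigit():
            intervals = [float(reply)] * len(intervals)
        for i, command in enumerate(mode["commands"]):
            execute(command)
            if i < len(intervals):
                time.sleep(intervals[i] * 60)   # wait out the (possibly adjusted) gap in minutes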
As shown in fig. 4, as a preferred embodiment of the present invention, the method further includes:
S601, editing and setting the time intervals in the function mode to obtain default time intervals;
S602, when the word "default" is recognized in the collected mode name voice information, no longer broadcasting the time intervals by voice.
In the embodiment of the invention, the time interval reflects the sequential time relationship between the voice control instructions, and in many cases the time interval between two actions of the intelligent device is very important. The user can therefore edit and set the time intervals in a function mode in advance to obtain default time intervals. When the word "default" is recognized in the collected mode name voice information, the time intervals are no longer broadcast by voice and the default time intervals are used directly, which makes the execution of the actions more accurate and more convenient. This shortcut is sketched below.
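The "default" shortcut could then sit in front of the routine above, for example as below. The keyword constant is an assumption (taken to be the spoken two-character word for "default"), and execute() and run_mode() are the helpers from the earlier sketches.

    import time

    DEFAULT_KEYWORD = "默认"   # assumed spelling of the two-character word "default" mentioned above

    def run_mode_from_utterance(utterance: str, mode: dict, listen) -> None:
        """Use the pre-edited default intervals silently when the default keyword is heard."""
        if DEFAULT_KEYWORD in utterance:
            for i, command in enumerate(mode["commands"]):
                execute(command)                             # no interval broadcast in this branch
                if i < len(mode["intervals"]):
                    time.sleep(mode["intervals"][i] * 60)    # default interval edited in advance
        else:
            run_mode(mode, listen)                           # broadcast and allow adjustment as above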
As a preferred embodiment of the present invention, the method further includes receiving a function mode uploaded by the user in a user-defined manner, where the function mode includes a plurality of voice control instructions with time intervals set between them, the time intervals being edited by the user. That is to say, function modes can be obtained through machine learning, and the user can also customize a desired function mode, which makes the use more flexible. An example of such a user-defined mode is shown below.
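Purely for illustration, a user-defined function mode uploaded in this way could take the same dictionary shape used in the sketches above; the mode name, instructions and gaps below are invented examples, not taken from the patent.

    custom_mode = {
        "name": "movie night",                    # mode name chosen by the user
        "commands": ["dim the lights", "close the curtains", "turn on the TV"],
        "intervals": [0.0, 0.5],                  # user-edited gaps in minutes
    }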
As shown in fig. 5, an embodiment of the present invention further provides an intelligent device AI voice control system supporting machine learning, where the system includes:
the function mode generating module 100 is configured to perform learning analysis on historical voice control information of a user, where the historical voice control information includes voice control instructions and instruction execution time to obtain a plurality of function modes, each function mode includes a plurality of voice control instructions, and a time interval is set between the voice control instructions;
a function mode naming module 200, configured to generate function mode standby name information, and receive a function mode naming instruction input by a user, so that each function mode corresponds to a mode name;
a user voice collecting module 300, configured to collect a voice control instruction sent by a user, determine whether the voice control instruction exists in a functional mode, and directly execute the voice control instruction when the voice control instruction does not exist in the functional mode; when existing in the functional mode, the steps in the mode name broadcasting module 400 are executed;
the mode name broadcasting module 400 is used for calling all the functional modes containing the voice control instruction, displaying the functional modes and the corresponding voice control instruction and broadcasting the mode names of the functional modes in a voice mode;
the functional mode execution module 500 is configured to collect mode name voice information sent by a user, determine a corresponding functional mode according to a mode name in the mode name voice information, and execute all voice control instructions in the functional mode.
As shown in fig. 6, as a preferred embodiment of the present invention, the function mode generating module 100 includes:
a function group determining unit 101, configured to obtain a plurality of function groups according to all the voice control instructions and corresponding instruction execution times, where an interval value between the instruction execution times of all the voice control instructions in each function group is smaller than a set interval value;
a function group classification unit 102, configured to classify function groups, where the function groups in each class are completely the same, and when the number of function groups in a certain class reaches a set number value, the function groups are marked as function modes;
and the time interval determining unit 103 is configured to determine a time interval between the voice control instructions according to an interval value between the instruction execution times, where the time interval reflects the sequential time relationship between the voice control instructions.
As shown in fig. 7, as a preferred embodiment of the present invention, the functional mode execution module 500 includes:
a functional mode determining unit 501, configured to recognize a mode name in the mode name voice information, and determine a corresponding functional mode;
a time interval broadcasting unit 502, configured to call a time interval in the functional mode, and broadcast the time interval in a voice;
the time interval adjusting unit 503 is configured to collect a time interval adjusting command sent by a user, adjust the time interval, and then execute all the voice control instructions in the functional mode.
As shown in fig. 8, as a preferred embodiment of the present invention, the system further includes a time interval editing module 600, where the time interval editing module 600 specifically includes:
a time interval editing unit 601, configured to edit and set a time interval in the functional mode to obtain a default time interval;
and the interval broadcast prohibiting unit 602 is configured to, when the word "default" is recognized in the collected mode name voice information, no longer broadcast the time interval by voice. A skeletal code mapping of the above modules onto classes is sketched below.
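As a purely illustrative mapping of modules 100-600 onto code, the system side could wrap the method-level helpers sketched earlier in thin classes such as the following; the class and method names are assumptions, not an API defined by the patent.

    from typing import Callable, Dict, List

    class FunctionModeGenerator:                      # module 100 (units 101-103)
        def learn(self, history: List[tuple]) -> List[Dict]:
            return build_function_modes(history)      # grouping, classification, averaging

    class FunctionModeNamer:                          # module 200
        def name_modes(self, modes: List[Dict], ask_user: Callable[[Dict], str]) -> None:
            for mode in modes:
                mode["name"] = ask_user(mode)         # the user supplies a name for each mode

    class UserVoiceCollector:                         # module 300
        def handle(self, command: str, modes: List[Dict], listen: Callable[[], str]) -> None:
            handle_command(command, modes, listen)    # direct execution or hand-off to modes

    class ModeNameBroadcaster:                        # module 400
        def broadcast(self, matching: List[Dict]) -> None:
            for mode in matching:
                speak(f"mode '{mode['name']}': {', '.join(mode['commands'])}")

    class FunctionModeExecutor:                       # modules 500 and 600 (units 501-503, 601-602)
        def run(self, utterance: str, mode: Dict, listen: Callable[[], str]) -> None:
            run_mode_from_utterance(utterance, mode, listen)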
The present invention has been described in detail with reference to the preferred embodiments thereof, and it should be understood that the invention is not limited thereto, but is intended to cover modifications, equivalents, and improvements within the spirit and scope of the present invention.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least a portion of steps in various embodiments may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least a portion of sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (9)

1. An AI voice control method for an intelligent device supporting machine learning, the method comprising the steps of:
performing learning analysis on historical voice control information of a user, the historical voice control information comprising voice control instructions and instruction execution times, to obtain a plurality of function modes, wherein each function mode comprises a plurality of voice control instructions and time intervals are set between the voice control instructions;
generating function mode standby name information, and receiving a function mode naming instruction input by the user so that each function mode corresponds to a mode name;
collecting a voice control instruction issued by the user and determining whether the voice control instruction exists in any function mode; when it does not exist in a function mode, directly executing the voice control instruction; when it exists in a function mode, executing the next step;
calling all function modes containing the voice control instruction, displaying the function modes and the corresponding voice control instructions, and broadcasting the mode names of the function modes by voice;
and collecting mode name voice information spoken by the user, determining the corresponding function mode according to the mode name in the mode name voice information, and executing all voice control instructions in that function mode.
2. The AI voice control method for intelligent devices supporting machine learning according to claim 1, wherein the step of performing learning analysis on historical voice control information of the user, the historical voice control information including voice control commands and command execution time to obtain a plurality of function modes specifically includes:
obtaining a plurality of function groups according to all voice control instructions and corresponding instruction execution time, wherein the interval value between the instruction execution time of all the voice control instructions in each function group is smaller than a set interval value;
classifying the function groups, wherein the function groups in each class are completely the same, and when the number of the function groups in a certain class reaches a set number value, marking the function groups as function modes;
and determining the time interval between the voice control instructions according to the interval value between the instruction execution times, wherein the time interval reflects the sequence time relationship between the voice control instructions.
3. The AI voice control method for intelligent devices supporting machine learning according to claim 1, wherein the step of determining a corresponding functional mode according to a mode name in the mode name voice information and executing all voice control commands in the functional mode specifically includes:
identifying a mode name in the mode name voice information, and determining a corresponding function mode;
calling a time interval in the functional mode, and broadcasting the time interval in a voice mode;
and acquiring a time interval adjusting command sent by a user, adjusting the time interval, and executing all voice control instructions in the functional mode.
4. The method for intelligent device AI voice control supporting machine learning according to claim 1, further comprising:
editing and setting the time interval in the functional mode to obtain a default time interval;
and when the word "default" is recognized in the collected mode name voice information, no longer broadcasting the time interval by voice.
5. The AI voice control method of claim 1, further comprising receiving a user defined uploaded function mode, the function mode including a plurality of voice control commands with time intervals set therebetween, the time intervals being edited by the user.
6. A smart device AI voice control system that supports machine learning, the system comprising:
the function mode generating module is used for learning and analyzing historical voice control information of a user, wherein the historical voice control information comprises voice control instructions and instruction execution time to obtain a plurality of function modes, each function mode comprises a plurality of voice control instructions, and time intervals are arranged among the voice control instructions;
the function mode naming module is used for generating function mode standby name information and receiving a function mode naming instruction input by a user so that each function mode corresponds to a mode name;
the user voice acquisition module is used for acquiring a voice control instruction sent by a user, determining whether the voice control instruction exists in a functional mode or not, and directly executing the voice control instruction when the voice control instruction does not exist in any functional mode; when the voice control instruction exists in a functional mode, executing the steps in the mode name broadcasting module;
the mode name broadcasting module is used for calling all the functional modes containing the voice control instruction, displaying the functional modes and the corresponding voice control instruction and broadcasting the mode names of the functional modes in a voice mode;
and the functional mode execution module is used for acquiring mode name voice information sent by a user, determining a corresponding functional mode according to a mode name in the mode name voice information, and executing all voice control instructions in the functional mode.
7. The machine-learning enabled smart device AI voice control system of claim 6, wherein the function mode generating module comprises:
the function group determining unit is used for obtaining a plurality of function groups according to all the voice control instructions and the corresponding instruction execution time, and the interval value between the instruction execution times of all the voice control instructions in each function group is smaller than the set interval value;
the function group classification unit is used for classifying function groups, the function groups in each class are completely the same, and when the number of the function groups in a certain class reaches a set number value, the function groups are marked as function modes;
and the time interval determining unit is used for determining the time interval between the voice control instructions according to the interval value between the instruction execution times, and the time interval reflects the sequence time relationship between the voice control instructions.
8. The machine-learning enabled smart device AI voice control system of claim 6, the functional mode execution module comprising:
the function mode determining unit is used for identifying the mode name in the mode name voice information and determining a corresponding function mode;
the time interval broadcasting unit is used for calling the time interval in the functional mode and broadcasting the time interval in a voice mode;
and the time interval adjusting unit is used for acquiring a time interval adjusting command sent by a user, adjusting the time interval and then executing all voice control instructions in the functional mode.
9. The AI voice control system of claim 6 for intelligent device supporting machine learning, further comprising a time interval editing module, the time interval editing module comprising:
the time interval editing unit is used for editing and setting the time interval in the functional mode to obtain a default time interval;
and the interval broadcast prohibiting unit is used for, when the word "default" is recognized in the collected mode name voice information, no longer broadcasting the time interval by voice.
CN202211628957.1A 2022-12-19 2022-12-19 Intelligent equipment AI voice control method and system supporting machine learning Active CN115631752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211628957.1A CN115631752B (en) 2022-12-19 2022-12-19 Intelligent equipment AI voice control method and system supporting machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211628957.1A CN115631752B (en) 2022-12-19 2022-12-19 Intelligent equipment AI voice control method and system supporting machine learning

Publications (2)

Publication Number Publication Date
CN115631752A true CN115631752A (en) 2023-01-20
CN115631752B CN115631752B (en) 2023-02-28

Family

ID=84909734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211628957.1A Active CN115631752B (en) 2022-12-19 2022-12-19 Intelligent equipment AI voice control method and system supporting machine learning

Country Status (1)

Country Link
CN (1) CN115631752B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10803392B1 (en) * 2017-03-10 2020-10-13 Amazon Technologies, Inc Deploying machine learning-based models
CN108469966A (en) * 2018-03-21 2018-08-31 北京金山安全软件有限公司 Voice broadcast control method and device, intelligent device and medium
CN109830235A (en) * 2019-03-19 2019-05-31 东软睿驰汽车技术(沈阳)有限公司 Sound control method, device, onboard control device and vehicle
US20210335357A1 (en) * 2020-04-28 2021-10-28 Baidu Online Network Technology (Beijing) Co., Ltd. Method for controlling intelligent speech apparatus, electronic device and storage medium
CN112820290A (en) * 2020-12-31 2021-05-18 广东美的制冷设备有限公司 Household appliance and voice control method, voice device and computer storage medium thereof
CN114926306A (en) * 2022-07-22 2022-08-19 深圳慢云智能科技有限公司 Apartment house scene mode artificial intelligence interaction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NIU Fei: "Design and Implementation of a Home System Protocol for Intelligent Voice Devices" (面向智能语音设备的家居系统协议设计与实现) *

Also Published As

Publication number Publication date
CN115631752B (en) 2023-02-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant