CN114863927B - Numerical control machine tool interaction control method and system based on voice recognition - Google Patents

Numerical control machine tool interaction control method and system based on voice recognition

Info

Publication number
CN114863927B
CN114863927B (Application CN202210786763.8A)
Authority
CN
China
Prior art keywords
information
machine tool
numerical control
determining
sentence
Prior art date
Legal status
Active
Application number
CN202210786763.8A
Other languages
Chinese (zh)
Other versions
CN114863927A
Inventor
吴承科
饶建波
蒋锐
胡天宇
刘占省
刘祥飞
李骁
郭媛君
Current Assignee
Zhongke Hangmai CNC Software Shenzhen Co Ltd
Original Assignee
Zhongke Hangmai CNC Software Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Hangmai CNC Software Shenzhen Co Ltd
Priority to CN202210786763.8A
Publication of CN114863927A
Application granted
Publication of CN114863927B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
      • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L 15/00 - Speech recognition
            • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
              • G10L 15/063 - Training
            • G10L 15/08 - Speech classification or search
              • G10L 15/16 - Speech classification or search using artificial neural networks
            • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L 2015/223 - Execution procedure of a spoken command
            • G10L 15/26 - Speech to text systems
    • B - PERFORMING OPERATIONS; TRANSPORTING
      • B23 - MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
        • B23Q - DETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
          • B23Q 1/00 - Members which are comprised in the general build-up of a form of machine, particularly relatively large fixed members
            • B23Q 1/0009 - Energy-transferring means or control lines for movable machine parts; Control panels or boxes; Control parts
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
          • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
            • Y02P 90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Mechanical Engineering (AREA)
  • Numerical Control (AREA)

Abstract

The invention discloses a numerical control machine tool interaction control method and system based on voice recognition, wherein the method comprises the following steps: acquiring a voice signal, recognizing the voice signal, and determining a trigger word corresponding to the voice signal; after the trigger word is determined, acquiring continuous sound signals that are consecutive in time with the trigger word, recognizing the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals; determining category information corresponding to the sentence text according to the sentence text, and determining intention information according to the category information; and generating a control instruction corresponding to the intention information, and executing the corresponding interactive operation according to the control instruction. The invention recognizes voice signals based on voice recognition technology and controls the actions of the numerical control machine tool by voice, so no manual operation by the user is required, which provides convenience for the user. Moreover, the trigger words recognized by the invention are dedicated to the numerical control machine tool, so the machine tool can be controlled more accurately and efficiency is improved.

Description

Numerical control machine tool interaction control method and system based on voice recognition
Technical Field
The invention relates to the technical field of numerical control machine tool control, in particular to a numerical control machine tool interaction control method and system based on voice recognition.
Background
In the traditional mode, interaction between an operator and a numerical control machine tool (for example, commanding the machine to perform the next operation, or retrieving specific information from the machine's historical machining records) must be carried out through a series of physical buttons or a touch screen. This is inefficient, and an operator who is unfamiliar with the machine's controls cannot operate it correctly, which results in repetitive and tedious work.
Thus, there is a need for improvements and enhancements in the art.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and a system for interaction control of a numerical control machine tool based on voice recognition, aiming at solving the problems in the prior art that operating a machine tool is inefficient and that a worker unfamiliar with the machine's controls cannot operate it correctly, resulting in repetitive and tedious work.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
in a first aspect, the invention provides a method for controlling interaction of a numerical control machine tool based on voice recognition, wherein the method comprises the following steps:
acquiring a voice signal, identifying the voice signal, and determining a trigger word corresponding to the voice signal, wherein the trigger word is specially used for triggering the control of a numerical control machine tool;
after the trigger word is determined, acquiring continuous sound signals which are consecutive in time with the trigger word, identifying the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals;
determining category information corresponding to the sentence text according to the sentence text, and determining intention information according to the category information;
and generating a control instruction corresponding to the intention information according to the intention information, and executing corresponding interactive operation according to the control instruction.
In an implementation manner, the acquiring a voice signal, processing and recognizing the voice signal, and determining a trigger word corresponding to the voice signal includes:
acquiring the voice signal, identifying the voice signal and acquiring character information corresponding to the voice signal;
matching the character information with a typical database preset in the numerical control machine tool, and determining a trigger word corresponding to the character information, wherein the typical database is provided with a plurality of trigger words of which the use frequency exceeds a preset frequency threshold, and each trigger word corresponds to a different interactive operation.
In one implementation manner, the recognizing the voice signal and acquiring text information corresponding to the voice signal include:
performing voice noise reduction processing on the sound signal by using a least mean square algorithm filter to obtain a noise-reduced sound signal;
performing Fourier transform on the sound signal subjected to the noise reduction processing to obtain an amplitude spectrum and a phase spectrum in the sound signal, wherein the amplitude spectrum and the phase spectrum are used for reflecting fluctuation information of the sound signal along with time change;
and inputting the amplitude spectrum and the phase spectrum to a preset residual error neural network to obtain the character information, wherein the residual error neural network is obtained by training in advance based on the amplitude spectrum and the phase spectrum of the sound signal corresponding to a plurality of different character information.
In one implementation, after the trigger word is determined, acquiring a continuous sound signal consecutive to the trigger word in time, recognizing the continuous sound signal, and determining a sentence text corresponding to the continuous sound signal includes:
after the trigger word is determined, determining time information corresponding to the trigger word, and acquiring the continuous sound signal behind the time information;
carrying out noise reduction processing on the continuous sound signal, and inputting the continuous sound signal subjected to noise reduction processing into a pre-trained sentence recognition model;
and outputting sentence texts corresponding to the continuous sound signals according to the sentence recognition model.
In one implementation, the training of the sentence recognition model includes:
pre-constructing a typical scene information word list of the numerical control machine tool, wherein a plurality of vocabularies used for the numerical control machine tool are arranged in the typical scene information word list, and the vocabularies used for the numerical control machine tool are used for reflecting control information and operation parameter information of the numerical control machine tool;
labeling a plurality of sample sound signals by using the typical scene information word list, and constructing a mapping relation between the sample sound signals and words in the typical scene information word list;
and training a preset neural network model according to the mapping relation to obtain the sentence recognition model.
In one implementation, the determining, according to the sentence text, category information corresponding to the sentence text, and determining intention information according to the category information includes:
performing word segmentation processing on the sentence text according to the sentence text to obtain word segmentation information, and screening out key words for reflecting control intentions or query intentions from the word segmentation information;
inputting the keyword into a pre-trained BERT model, and determining category information corresponding to the keyword, wherein the category information comprises a machine tool working control category or a machine tool information query category;
and determining intention information corresponding to the keyword according to the category information corresponding to the keyword.
In one implementation manner, the generating, according to the intention information, a control instruction corresponding to the intention information, and executing, according to the control instruction, a corresponding interactive operation includes:
inputting the intention information and the keywords into an instruction generation template to generate the control instruction, wherein the control instruction comprises an instruction for controlling the numerical control machine tool to work or an instruction for inquiring the information of the machine tool;
and analyzing the control instruction to obtain the intention information, and executing interactive operation corresponding to the intention information.
In a second aspect, an embodiment of the present invention further provides a system for controlling interaction of a numerical control machine tool based on voice recognition, where the system includes:
the trigger word determining module is used for acquiring a voice signal, identifying the voice signal and determining a trigger word corresponding to the voice signal, wherein the trigger word is specially used for triggering the control of the numerical control machine tool;
the sentence recognition module is used for acquiring continuous sound signals which are consecutive to the trigger words in time after the trigger words are determined, recognizing the continuous sound signals and determining sentence texts corresponding to the continuous sound signals;
the intention determining module is used for determining the category information corresponding to the statement text according to the statement text and determining the intention information according to the category information;
and the interaction control module is used for generating a control instruction corresponding to the intention information according to the intention information and executing corresponding interaction operation according to the control instruction.
In a third aspect, an embodiment of the present invention further provides a numerically controlled machine tool, where the numerically controlled machine tool includes a memory, a processor, and a numerically controlled machine tool interaction control program based on voice recognition, where the numerically controlled machine tool interaction control program based on voice recognition is stored in the memory and is capable of running on the processor, and when the processor executes the numerically controlled machine tool interaction control program based on voice recognition, the steps of the numerically controlled machine tool interaction control method based on voice recognition according to any of the above schemes are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a voice recognition-based numerical control machine tool interaction control program is stored on the computer-readable storage medium, and when the voice recognition-based numerical control machine tool interaction control program is executed by a processor, the steps of the voice recognition-based numerical control machine tool interaction control method according to any one of the above schemes are implemented.
Advantageous effects: compared with the prior art, the invention provides a numerical control machine tool interaction control method based on voice recognition. A voice signal is first acquired and recognized to determine a trigger word corresponding to the voice signal. Then, after the trigger word is determined, continuous sound signals that are consecutive in time with the trigger word are acquired and recognized, and a sentence text corresponding to the continuous sound signals is determined. Next, category information corresponding to the sentence text is determined according to the sentence text, and intention information is determined according to the category information. Finally, a control instruction corresponding to the intention information is generated, and the corresponding interactive operation is executed according to the control instruction. The invention recognizes voice signals based on voice recognition technology and controls the actions of the numerical control machine tool by voice, so no manual operation by the user is required, which provides convenience for the user. In addition, the trigger words recognized by the invention are dedicated to the numerical control machine tool, so the machine tool can be controlled more accurately and efficiency is improved.
Drawings
Fig. 1 is a flowchart of a specific implementation of a method for controlling interaction of a numerical control machine based on voice recognition according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of a voice recognition-based interaction control system for a numerically-controlled machine tool according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a numerically controlled machine tool according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it.
The embodiment provides a numerical control machine tool interaction control method based on voice recognition, and the numerical control machine tool can be controlled by voice based on the method, so that the control efficiency of the numerical control machine tool is effectively improved, and convenience is brought to the use of a user. Specifically, in this embodiment, a voice signal is first obtained, the voice signal is recognized, and a trigger word corresponding to the voice signal is determined, where the trigger word is dedicated to triggering control over a numerical control machine tool. And then, after the trigger word is determined, acquiring continuous sound signals which are consecutive to the trigger word in time, identifying the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals. Then, according to the sentence text, determining the category information corresponding to the sentence text, and determining the intention information according to the category information. And finally, generating a control instruction corresponding to the intention information according to the intention information, and executing corresponding interactive operation according to the control instruction. The embodiment does not need a user to manually operate the machine tool, and provides convenience for the user to use. In addition, the trigger word identified by the invention is specially used for the numerical control machine tool, so that the numerical control machine tool can be controlled more accurately, and the efficiency is improved.
Exemplary method
The method for controlling interaction of the numerical control machine tool based on voice recognition in the embodiment can be applied to the numerical control machine tool, the numerical control machine tool is provided with a main control device, the main control device can be a terminal platform such as an intelligent computer, and as shown in fig. 1, when the numerical control machine tool executes the method for controlling interaction of the numerical control machine tool based on voice recognition, the method comprises the following steps:
step S100, acquiring a voice signal, recognizing the voice signal, and determining a trigger word corresponding to the voice signal, wherein the trigger word is specially used for triggering the control of the numerical control machine tool.
The numerical control machine tool of the embodiment first collects a voice signal, the voice signal can be a sentence spoken by a user to the numerical control machine tool, after the numerical control machine tool obtains the voice signal, the numerical control machine tool can recognize the voice signal, and then determines a trigger word corresponding to the voice signal. In this embodiment, the trigger word may be set based on a commonly used function or commonly used control information of the numerical control machine tool in the daily use process, so that the trigger word is specially used for triggering the control of the numerical control machine tool, and after the numerical control machine tool recognizes the trigger word, it indicates that the numerical control machine tool is woken up by voice at this time.
In one implementation manner, the embodiment includes the following steps when identifying the trigger word:
step S101, acquiring the voice signal, identifying the voice signal, and acquiring character information corresponding to the voice signal;
step S102, matching the text information with a typical database preset in the numerical control machine tool, and determining a trigger word corresponding to the text information, wherein the typical database is provided with a plurality of trigger words of which the use frequency exceeds a preset frequency threshold, and each trigger word corresponds to a different interactive operation.
Specifically, the numerical control machine tool in this embodiment may be provided with a sound collection device, such as a microphone, which collects the user's voice signal in real time; the voice signal is then converted into text information based on voice recognition technology. In a specific application, after the voice signal is collected, this embodiment may perform voice noise reduction on the sound signal using a least mean square (LMS) algorithm filter to obtain a noise-reduced sound signal. A Fourier transform is then applied to the noise-reduced sound signal to obtain an amplitude spectrum and a phase spectrum of the sound signal, where the amplitude spectrum and the phase spectrum reflect the fluctuation of the sound signal over time. The amplitude spectrum and the phase spectrum are input to a preset residual neural network to obtain the text information, where the residual neural network is trained in advance on the amplitude spectra and phase spectra of the sound signals corresponding to a plurality of different pieces of text information. That is, this embodiment establishes the correspondence between the sound signal corresponding to the text information and its amplitude and phase spectra, and trains the neural network model according to this correspondence to obtain the residual neural network. After the text information corresponding to the sound signal is recognized, the text information is matched against a typical database preset in the numerical control machine tool to determine the trigger word corresponding to the text information. In this embodiment, the typical database contains a plurality of trigger words whose usage frequency exceeds a preset frequency threshold; each trigger word corresponds to a different interactive operation, and the interactive operations are common operations summarized from the historical use of the numerical control machine tool, so the trigger words are dedicated to triggering the numerical control machine tool. Therefore, when the text information is matched against the typical database, the trigger word contained in the text information can be found. For example, the trigger words may be "next step", "acquire", or "start execution", all of which correspond to operations of the numerical control machine tool that are frequently used in historical operation. Because the trigger words in this embodiment are words from the field of numerical control machine tools, recognition accuracy is better, which further facilitates triggering the numerical control machine tool.
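As an illustration of the front-end processing described above, the following Python sketch combines a basic LMS adaptive filter, the Fourier-transform extraction of amplitude and phase spectra, and the matching of recognized text against a typical database of trigger words. The residual neural network itself is omitted, and the trigger-word entries and function names are illustrative assumptions rather than the patent's own definitions.

```python
import numpy as np

def lms_denoise(signal, noise_ref, mu=0.01, taps=32):
    """Least mean square (LMS) adaptive filter: estimates the noise from a
    reference channel and subtracts it from the recorded voice signal."""
    w = np.zeros(taps)
    out = np.zeros(len(signal))
    for n in range(taps, len(signal)):
        x = noise_ref[n - taps:n][::-1]   # most recent reference samples
        y = np.dot(w, x)                  # estimated noise component
        e = signal[n] - y                 # error signal = denoised sample
        w += 2.0 * mu * e * x             # LMS weight update
        out[n] = e
    return out

def amplitude_phase_spectra(frame):
    """Fourier transform of one windowed frame, returning the amplitude and
    phase spectra; computed frame by frame, they reflect how the sound
    signal fluctuates over time."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    return np.abs(spectrum), np.angle(spectrum)

# Hypothetical "typical database" of trigger words; each entry maps to an
# interactive operation of the numerical control machine tool.
TYPICAL_DATABASE = {
    "next step": "advance_program",
    "start execution": "start_cycle",
    "acquire": "query_information",
}

def match_trigger_word(text_information):
    """Matches recognized text information against the typical database."""
    for trigger_word, operation in TYPICAL_DATABASE.items():
        if trigger_word in text_information:
            return trigger_word, operation
    return None, None
```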
And S200, after the trigger word is determined, acquiring continuous sound signals which are consecutive in time with the trigger word, identifying the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals.
After the trigger word is determined, the embodiment may continue to collect the sound signal and collect the continuous sound signal after the trigger word, and since the numerical control machine tool may start to prepare to receive the voice control instruction after recognizing the trigger word, it is necessary to collect the continuous sound signal after the trigger word and recognize the continuous sound signal, so as to determine the sentence text corresponding to the continuous sound signal.
In an implementation manner, when recognizing the sentence text corresponding to the continuous sound signal, the embodiment includes the following steps:
step S201, after the trigger word is determined, determining time information corresponding to the trigger word, and acquiring the continuous sound signal behind the time information;
step S202, carrying out noise reduction processing on the continuous sound signal, and inputting the continuous sound signal subjected to noise reduction processing into a pre-trained sentence recognition model;
step S203, outputting a sentence text corresponding to the continuous sound signal according to the sentence recognition model.
Specifically, in this embodiment, the time information corresponding to the trigger word is first determined, and then, based on that time information, the continuous sound signals located after it are acquired. The acquired continuous sound signals are what the user speaks after uttering the trigger word, and the continuous sound signals following the trigger word are the sound signals that control the numerical control machine tool to perform specific operations. Therefore, in this embodiment, the time information corresponding to the trigger word serves as a dividing node, and the continuous sound signal after that time is the sound signal used to control the numerical control machine tool. In another implementation, the continuous sound signal may instead be acquired based on a pause duration after the trigger word. For example, after the user utters the sound signal and the numerical control machine tool recognizes the trigger word, if the pause before the user speaks again exceeds 2 seconds, the sound signal received after those 2 seconds is taken as the continuous sound signal. In a specific application, after the numerical control machine tool receives the sound signal containing the trigger word and the pause exceeds 2 seconds, the continuous sound signal received thereafter is recognized.
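The pause-duration variant can be illustrated with a simple energy-based sketch: once the trigger word has been recognized, the incoming audio is scanned for a silence of at least 2 seconds, and everything after that pause is treated as the continuous sound signal. The sampling rate, frame size, and energy threshold below are assumptions for illustration only.

```python
import numpy as np

def continuous_signal_after_pause(stream, sample_rate=16000, pause_s=2.0,
                                  frame_ms=20, energy_threshold=1e-4):
    """Scans the audio received after the trigger word for a pause of at
    least `pause_s` seconds and returns everything after that pause as the
    continuous sound signal."""
    frame_len = int(sample_rate * frame_ms / 1000)
    frames_needed = int(pause_s * 1000 / frame_ms)   # e.g. 100 silent frames for 2 s
    silent_frames = 0
    for start in range(0, len(stream) - frame_len, frame_len):
        energy = float(np.mean(stream[start:start + frame_len] ** 2))
        silent_frames = silent_frames + 1 if energy < energy_threshold else 0
        if silent_frames >= frames_needed:           # pause of >= 2 seconds found
            return stream[start + frame_len:]        # continuous sound signal
    return np.array([])                              # no sufficiently long pause yet
```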
Specifically, after the continuous sound signals are acquired, this embodiment performs noise reduction on them; as before, a least mean square algorithm filter may be used to obtain the noise-reduced continuous sound signals. After noise reduction, the noise-reduced continuous sound signal is input into a pre-trained sentence recognition model, and the sentence text corresponding to the continuous sound signal is output by the sentence recognition model. In this embodiment, a typical scene information vocabulary of the numerical control machine tool can be pre-constructed, which contains a plurality of vocabulary entries for the numerical control machine tool; these entries reflect the control information and operation parameter information of the numerical control machine tool. The sample sound signals are then labeled with the typical scene information vocabulary to construct the mapping relationship between the sample sound signals and the entries in the typical scene information vocabulary. Finally, a preset neural network model is trained according to this mapping relationship to obtain the sentence recognition model. In the present application, the typical scene information vocabulary is set according to the machining procedure the numerical control machine tool follows when executing one or more specific workpieces; the typical scene information includes control information and operation parameter information, and the control information and operation parameter information can be associated with corresponding vocabulary entries. Therefore, after a sample sound signal is labeled with the typical scene information vocabulary, the portions of the sample sound signal corresponding to vocabulary entries can be annotated, the mapping relationship between the sample sound signals and the vocabulary entries is constructed, and the sentence recognition model can then be built on this mapping relationship. After the numerical control machine tool collects the continuous sound signals, it can input them into the sentence recognition model, which recognizes the words corresponding to the continuous sound signals and forms the sentence text. In another implementation, this embodiment may recognize the continuous sound signal directly based on voice recognition technology, recognizing the words corresponding to the continuous sound signal to form the sentence text.
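A minimal sketch of preparing the training data for the sentence recognition model is shown below: a typical scene information vocabulary is defined, and each sample sound signal is labeled by mapping its transcript onto vocabulary entries. The vocabulary entries, data layout, and helper names are hypothetical, and the actual neural network training is not shown.

```python
# Hypothetical typical scene information vocabulary: control information and
# operation-parameter terms used by the numerical control machine tool.
TYPICAL_SCENE_VOCAB = ["spindle speed", "feed rate", "tool change",
                       "start machining", "query machining history"]

def build_mapping(samples):
    """samples: list of (audio_array, transcript) pairs.
    Labels each sample sound signal with the vocabulary entries that occur
    in its transcript, producing the mapping used to train the model."""
    mapping = []
    for audio, transcript in samples:
        labels = [entry for entry in TYPICAL_SCENE_VOCAB if entry in transcript]
        if labels:                          # keep samples that hit the vocabulary
            mapping.append({"audio": audio, "labels": labels})
    return mapping

# A preset neural network model would then be trained on this mapping
# (audio features in, vocabulary entries out) to obtain the sentence
# recognition model; the training loop itself is omitted from this sketch.
```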
Step S300, according to the statement text, determining category information corresponding to the statement text, and according to the category information, determining intention information.
In this embodiment, after the sentence text is obtained, the sentence text is analyzed to determine the category information of the sentence text, where the category information in this embodiment is used to distinguish whether the sentence text corresponds to a control category or a query category, and when the category information of the sentence text is the control category, it indicates that the user wants to control the numerical control machine to perform a corresponding action. When the category information of the sentence text is the query category, it indicates that the user wants to query the numerical control machine tool for the related information, so the embodiment may determine the intention information based on the category information.
In one implementation, the determining the intention information in this embodiment includes the following steps:
step S301, performing word segmentation processing on the sentence text according to the sentence text to obtain word segmentation information, and screening out keywords for reflecting a control intention or a query intention from the word segmentation information;
step S302, inputting the keywords into a pre-trained BERT model, and determining category information corresponding to the keywords, wherein the category information comprises a machine tool working control category or a machine tool information query category;
step S303, determining intention information corresponding to the keyword according to the category information corresponding to the keyword.
Specifically, in this embodiment, word segmentation processing is performed on the sentence text to obtain word segmentation information in the sentence text, and then keywords for reflecting a control intention or a query intention are screened from the word segmentation information, for example, the keywords may be "execute", "start", "obtain", and the like. Then, in this embodiment, the keyword is input into a BERT model trained in advance, and category information corresponding to the keyword is determined, where the category information includes a machine tool operation control category or a machine tool information query category. The BERT model in this embodiment is obtained by training in advance based on the sample vocabulary and the category information corresponding to the sample vocabulary, and therefore the category information corresponding to each keyword can be directly determined based on the BERT model. And when the category information of the keyword is a control category, indicating that the intention of the user at the moment is to control the numerical control machine tool to execute corresponding actions. And when the category information of the keyword is the query category, the intention of the user at the moment is to query the relevant information from the numerical control machine tool. Therefore, the present embodiment can determine the intention information based on the category information.
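The intent-determination step can be sketched as follows, assuming a Chinese word-segmentation library (jieba) and a Hugging Face BERT sequence classifier; the keyword list, label mapping, and fine-tuned model weights are assumptions, since the patent only specifies word segmentation, keyword screening, and a pre-trained BERT model with two categories.

```python
import jieba                      # Chinese word segmentation (assumed choice)
import torch
from transformers import BertTokenizer, BertForSequenceClassification

KEYWORDS = {"执行", "开始", "获取", "查询"}        # illustrative control/query keywords
LABELS = {0: "machine_control", 1: "information_query"}

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
# In practice the classification head would be fine-tuned on sample vocabulary
# labelled with its category information; untuned weights are only a placeholder.
model = BertForSequenceClassification.from_pretrained("bert-base-chinese",
                                                      num_labels=2)

def determine_intention(sentence_text):
    """Word segmentation -> keyword screening -> BERT category -> intention."""
    words = jieba.lcut(sentence_text)                     # word segmentation
    keywords = [w for w in words if w in KEYWORDS]        # screen keywords
    inputs = tokenizer(" ".join(keywords) or sentence_text,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    category = LABELS[int(logits.argmax(dim=-1))]         # category information
    return category, keywords
```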
And S400, generating a control instruction corresponding to the intention information according to the intention information, and executing corresponding interactive operation according to the control instruction.
After the intention information is determined, it is clear what action the user wants the numerical control machine to perform at the moment, so the numerical control machine can generate a corresponding control instruction based on the intention information, and then can perform corresponding interactive operation according to the control instruction.
In one implementation manner, when performing the interactive operation, the present embodiment includes the following steps:
step S401, inputting the intention information and the keywords into an instruction generation template to generate the control instruction, wherein the control instruction comprises an instruction for controlling the numerical control machine tool to work or an instruction for inquiring information of the machine tool;
and S402, analyzing the control instruction to obtain the intention information, and executing interactive operation corresponding to the intention information.
In this embodiment, an instruction generation template is preset; the instruction generation template generates a control instruction from the intention information and the keywords, since the intention information may indicate either controlling the numerical control machine tool to execute a corresponding action or querying the numerical control machine tool for related information. Therefore, the instruction generation template can generate different control instructions according to the intention information, where the control instructions include instructions for controlling the operation of the numerical control machine tool or instructions for querying machine tool information. In addition, in another implementation, this embodiment may automatically generate a PLC control instruction from the intention information according to heuristic rules, and the numerical control machine tool then performs the corresponding interactive operation according to the PLC control instruction; for example, the PLC control instruction controls the numerical control machine tool to change a tool. This embodiment may also automatically generate an SQL query instruction based on the key-value pairs recorded in the internal database structure of the numerical control machine tool, and the machine tool obtains the required information according to the SQL query instruction; for example, the SQL query instruction is used to query the historical machining information of the numerical control machine tool. In this way, the embodiment automatically controls the numerically controlled machine tool to perform the corresponding interactive operation based on voice recognition technology.
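A sketch of the instruction-generation template is given below: a control intention is mapped through a heuristic rule table to a PLC-style command, while a query intention is turned into an SQL statement over the machine tool's internal database. The command codes, table name, and column name are illustrative assumptions, not the patent's own definitions.

```python
def generate_instruction(category, keywords):
    """Instruction generation template (sketch): builds either a machine-tool
    control instruction or a machine-tool information query instruction."""
    if category == "machine_control":
        # Heuristic rules mapping control keywords to hypothetical PLC commands
        # ("换刀" = tool change, "开始" = start).
        plc_rules = {"换刀": "M06", "开始": "CYCLE_START"}
        for keyword in keywords:
            if keyword in plc_rules:
                return {"type": "plc", "command": plc_rules[keyword]}
        return {"type": "plc", "command": "NOP"}          # no matching rule
    # Information-query intention: build an SQL statement over a hypothetical
    # machining-history table inside the machine tool's database.
    values = ",".join("'{}'".format(k) for k in keywords) or "''"
    return {"type": "sql",
            "command": "SELECT * FROM machining_history "
                       "WHERE keyword IN ({})".format(values)}

def execute_instruction(instruction):
    """Parses the control instruction and performs the interactive operation."""
    if instruction["type"] == "plc":
        print("send to PLC:", instruction["command"])     # e.g. trigger a tool change
    else:
        print("run query:", instruction["command"])       # e.g. fetch machining history
```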
In summary, in this embodiment, a voice signal is first obtained, the voice signal is identified, and a trigger word corresponding to the voice signal is determined, where the trigger word is specially used for triggering control over a numerical control machine. And then, after the trigger word is determined, acquiring continuous sound signals which are consecutive with the trigger word in time, identifying the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals. Then, according to the sentence text, determining the category information corresponding to the sentence text, and determining the intention information according to the category information. And finally, generating a control instruction corresponding to the intention information according to the intention information, and executing corresponding interactive operation according to the control instruction. The embodiment does not need a user to manually operate the machine tool, and provides convenience for the user to use. Moreover, the trigger word identified by the invention is specially used for the numerical control machine tool, so that the numerical control machine tool can be controlled more accurately, and the efficiency is improved.
Exemplary System
Based on the above embodiment, the present invention further provides a system for controlling interaction of a numerical control machine tool based on voice recognition, as shown in fig. 2, the system includes: a trigger word determination module 10, a sentence recognition module 20, an intent determination module 30, and an interaction control module 40. Specifically, the trigger word determining module is configured to acquire a voice signal, recognize the voice signal, and determine a trigger word corresponding to the voice signal, where the trigger word is dedicated to triggering control over a numerical control machine tool. And the sentence recognition module is used for acquiring continuous sound signals which are consecutive to the trigger words in time after the trigger words are determined, recognizing the continuous sound signals and determining the sentence texts corresponding to the continuous sound signals. The intention determining module is used for determining the category information corresponding to the sentence text according to the sentence text and determining the intention information according to the category information. And the interaction control module is used for generating a control instruction corresponding to the intention information according to the intention information and executing corresponding interaction operation according to the control instruction.
In one implementation, the trigger determining module 10 includes:
the signal identification unit is used for acquiring the voice signal, identifying the voice signal and acquiring character information corresponding to the voice signal;
and the trigger word matching unit is used for matching the character information with a typical database preset in the numerical control machine tool and determining a trigger word corresponding to the character information, wherein a plurality of trigger words with the use frequency exceeding a preset frequency threshold value are arranged in the typical database, and each trigger word corresponds to a different interactive operation.
In one implementation, the signal identification unit includes:
the noise reduction processing subunit is configured to perform speech noise reduction processing on the sound signal by using a least mean square algorithm filter to obtain a noise-reduced sound signal;
the signal processing subunit is configured to perform fourier transform on the noise-reduced sound signal to obtain an amplitude spectrum and a phase spectrum in the sound signal, where the amplitude spectrum and the phase spectrum are used to reflect fluctuation information of the sound signal along with time change;
and the character recognition subunit is used for inputting the amplitude spectrum and the phase spectrum to a preset residual error neural network to obtain the character information, wherein the residual error neural network is obtained by training in advance based on the amplitude spectrum and the phase spectrum of the sound signal corresponding to a plurality of different character information.
In one implementation, the sentence recognition module 20 includes:
the time information determining unit is used for determining the time information corresponding to the trigger word after the trigger word is determined, and acquiring the continuous sound signal behind the time information;
the signal noise reduction processing unit is used for carrying out noise reduction processing on the continuous sound signal and inputting the continuous sound signal subjected to noise reduction processing into a pre-trained sentence recognition model;
and the sentence recognition unit is used for outputting a sentence text corresponding to the continuous sound signal according to the sentence recognition model.
In this embodiment, the system further includes a sentence recognition model training module, where the sentence recognition model training module includes:
the system comprises an information word list construction unit, a control unit and a processing unit, wherein the information word list construction unit is used for constructing a typical scene information word list of the numerical control machine tool in advance, a plurality of vocabularies used for the numerical control machine tool are arranged in the typical scene information word list, and the vocabularies used for the numerical control machine tool are used for reflecting control information and operation parameter information of the numerical control machine tool;
the mapping relation construction unit is used for labeling the sample sound signals by utilizing the typical scene information word list and constructing the mapping relation between the sample sound signals and the words in the typical scene information word list;
and the sentence model training unit is used for training a preset neural network model according to the mapping relation to obtain the sentence recognition model.
In one implementation, the intent determination module 30 includes:
the word segmentation processing unit is used for carrying out word segmentation processing on the sentence text according to the sentence text to obtain word segmentation information, and screening out keywords for reflecting a control intention or a query intention from the word segmentation information;
the category determining unit is used for inputting the keywords into a pre-trained BERT model and determining category information corresponding to the keywords, wherein the category information comprises a machine tool working control category or a machine tool information query category;
and the intention information unit is used for determining intention information corresponding to the keyword according to the category information corresponding to the keyword.
In one implementation, the interaction control module 40 includes:
the instruction generation unit is used for inputting the intention information and the keywords into an instruction generation template for the sentence according to the sentence text and generating the control instruction, and the control instruction comprises an instruction for controlling the operation of the numerical control machine tool or an instruction for inquiring the information of the machine tool;
and the instruction execution unit is used for analyzing the control instruction to obtain the intention information and executing the interactive operation corresponding to the intention information.
The working principle of each module in the voice recognition-based numerical control machine interactive control system of the embodiment is the same as that of each step in the above method embodiments, and details are not repeated here.
Based on the above embodiment, the present invention further provides a numerical control machine tool, where the numerical control machine tool includes a main control device, the main control device may be a terminal platform such as an intelligent computer, and a functional block diagram of the numerical control machine tool may be as shown in fig. 3. The numerical control machine tool comprises a processor and a memory which are connected through a system bus, wherein the processor and the memory are arranged in a host. Wherein, the processor of the numerical control machine tool is used for providing calculation and control capability. The memory of the numerical control machine tool comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the numerical control machine tool is used for being connected and communicated with an external terminal through network communication. The computer program is executed by a processor to realize a numerical control machine interactive control method based on voice recognition.
It will be understood by those skilled in the art that the schematic block diagram shown in figure 3 is only a block diagram of a portion of the structure associated with the inventive solution and does not constitute a limitation of the numerically controlled machine tool to which the inventive solution is applied; a particular numerically controlled machine tool may include more or fewer components than those shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a numerically-controlled machine tool is provided, the numerically-controlled machine tool comprises a memory, a processor and a program of a numerical control machine interaction control method based on voice recognition, wherein the program of the numerical control machine interaction control method based on voice recognition is stored in the memory and can run on the processor, and when the processor executes the program of the numerical control machine interaction control method based on voice recognition, the following operation instructions are realized:
acquiring a voice signal, identifying the voice signal, and determining a trigger word corresponding to the voice signal, wherein the trigger word is specially used for triggering the control of a numerical control machine tool;
after the trigger word is determined, acquiring continuous sound signals which are consecutive in time with the trigger word, identifying the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals;
determining category information corresponding to the statement text according to the statement text, and determining intention information according to the category information;
and generating a control instruction corresponding to the intention information according to the intention information, and executing corresponding interactive operation according to the control instruction.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the invention discloses a numerical control machine tool interaction control method and system based on voice recognition. The method comprises: acquiring a voice signal, recognizing the voice signal, and determining a trigger word corresponding to the voice signal; after the trigger word is determined, acquiring continuous sound signals that are consecutive in time with the trigger word, recognizing the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals; determining category information corresponding to the sentence text according to the sentence text, and determining intention information according to the category information; and generating a control instruction corresponding to the intention information, and executing the corresponding interactive operation according to the control instruction. The invention recognizes voice signals based on voice recognition technology and controls the actions of the numerical control machine tool by voice, so no manual operation by the user is required, which provides convenience for the user. In addition, the trigger words recognized by the invention are dedicated to the numerical control machine tool, so the machine tool can be controlled more accurately and efficiency is improved.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. A numerical control machine tool interaction control method based on voice recognition is characterized by comprising the following steps:
acquiring a voice signal, identifying the voice signal, and determining a trigger word corresponding to the voice signal, wherein the trigger word is specially used for triggering the control of a numerical control machine tool;
after the trigger word is determined, acquiring continuous sound signals which are consecutive in time with the trigger word, identifying the continuous sound signals, and determining a sentence text corresponding to the continuous sound signals;
determining category information corresponding to the sentence text according to the sentence text, and determining intention information according to the category information;
generating a control instruction corresponding to the intention information according to the intention information, and executing corresponding interactive operation according to the control instruction;
the acquiring a voice signal, processing and identifying the voice signal, and determining a trigger word corresponding to the voice signal include:
acquiring the voice signal, identifying the voice signal and acquiring character information corresponding to the voice signal;
matching the text information with a typical database preset in the numerical control machine tool, and determining a trigger word corresponding to the text information, wherein the trigger word is set based on common functions or common control information of the numerical control machine tool in the daily use process; the typical database is provided with a plurality of trigger words of which the use frequency exceeds a preset frequency threshold, each trigger word corresponds to a different interactive operation, and the interactive operation is a common operation summarized according to historical use of the numerical control machine;
the recognizing the voice signal to obtain the text information corresponding to the voice signal includes:
performing voice noise reduction processing on the sound signal by using a least mean square algorithm filter to obtain a noise-reduced sound signal;
performing Fourier transform on the sound signal subjected to the noise reduction processing to obtain an amplitude spectrum and a phase spectrum in the sound signal, wherein the amplitude spectrum and the phase spectrum are used for reflecting fluctuation information of the sound signal along with time change;
inputting the amplitude spectrum and the phase spectrum to a preset residual error neural network to obtain the character information, wherein the residual error neural network is obtained by training in advance based on the amplitude spectrum and the phase spectrum of the sound signal corresponding to a plurality of different character information;
after the trigger word is determined, acquiring a continuous sound signal which is consecutive to the trigger word in time, identifying the continuous sound signal, and determining a sentence text corresponding to the continuous sound signal, including:
after the trigger word is determined, determining time information corresponding to the trigger word, and acquiring the continuous sound signal behind the time information;
carrying out noise reduction processing on the continuous sound signal, and inputting the continuous sound signal subjected to noise reduction processing into a pre-trained sentence recognition model;
outputting sentence texts corresponding to the continuous sound signals according to the sentence recognition model;
the acquiring of the continuous sound signal temporally consecutive to the trigger word comprises:
obtaining based on a pause duration after the trigger word;
the training mode of the sentence recognition model comprises the following steps:
the method comprises the steps that a typical scene information word list of the numerical control machine tool is constructed in advance, the typical scene information word list is set according to the machining process of the numerical control machine tool when a certain workpiece or a certain workpiece is executed specifically, a plurality of vocabularies used for the numerical control machine tool are arranged in the typical scene information word list, and the vocabularies used for the numerical control machine tool are used for reflecting control information and operation parameter information of the numerical control machine tool;
labeling a plurality of sample sound signals by using the typical scene information word list, and constructing a mapping relation between the sample sound signals and words in the typical scene information word list;
training a preset neural network model according to the mapping relation to obtain the sentence recognition model;
the obtaining control information corresponding to the sentence text from a database of the numerical control machine tool according to the sentence text and determining a control intention according to the control information includes:
performing word segmentation processing on the sentence text to obtain word segmentation information, and screening out, from the word segmentation information, keywords that reflect a control intention or a query intention;
inputting the keyword into a pre-trained BERT model, and determining category information corresponding to the keyword, wherein the category information comprises a machine tool working control category or a machine tool information query category, and the BERT model is trained in advance on sample vocabularies and the category information corresponding to the sample vocabularies;
determining intention information corresponding to the keyword according to the category information corresponding to the keyword;
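A minimal sketch of the keyword-to-category-to-intention step, assuming the Hugging Face transformers package; the checkpoint name, the two category labels, and the keyword-to-intention table are illustrative assumptions, and the classification head would need to be fine-tuned on labelled sample vocabularies before real use.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

CATEGORIES = ["machine_tool_work_control", "machine_tool_info_query"]
INTENTS = {  # hypothetical mapping: category -> keyword fragment -> intention information
    "machine_tool_work_control": {"spindle": "SET_SPINDLE_SPEED"},
    "machine_tool_info_query": {"temperature": "QUERY_SPINDLE_TEMPERATURE"},
}

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(CATEGORIES))  # fine-tune on sample vocabularies first

def classify_keyword(keyword):
    """Return the category information predicted for one keyword."""
    inputs = tokenizer(keyword, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return CATEGORIES[int(logits.argmax(dim=-1))]

def keyword_to_intention(keyword):
    """Look up the intention information for the keyword within its predicted category."""
    category = classify_keyword(keyword)
    for fragment, intention in INTENTS[category].items():
        if fragment in keyword.lower():
            return intention
    return None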
the generating a control instruction corresponding to the intention information according to the intention information, and executing the corresponding interactive operation according to the control instruction includes:
inputting the intention information and the keywords into an instruction generation template to generate the control instruction, wherein the control instruction comprises an instruction for controlling the numerical control machine tool to work or an instruction for querying machine tool information;
parsing the control instruction to obtain the intention information, and executing the interactive operation corresponding to the intention information;
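For illustration, the instruction-generation template and the parse-and-execute step could look like the sketch below; the instruction structure, intention names, and handler table are assumptions, not the patent's actual template.

from dataclasses import dataclass

@dataclass
class ControlInstruction:
    intention: str     # intention information, e.g. "SET_SPINDLE_SPEED"
    parameters: dict   # keywords and values filled into the template

def generate_instruction(intention, keywords):
    """Fill the (hypothetical) instruction generation template with intention and keywords."""
    return ControlInstruction(intention=intention, parameters=keywords)

def execute(instruction):
    """Parse the control instruction and dispatch the corresponding interactive operation."""
    handlers = {
        "SET_SPINDLE_SPEED": lambda p: f"spindle speed set to {p['rpm']} rpm",
        "QUERY_SPINDLE_TEMPERATURE": lambda p: "spindle temperature report requested",
    }
    handler = handlers.get(instruction.intention)
    return handler(instruction.parameters) if handler else "unknown instruction"

instruction = generate_instruction("SET_SPINDLE_SPEED", {"rpm": 3000})
print(execute(instruction))  # -> spindle speed set to 3000 rpm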
the generating a control instruction corresponding to the intention information according to the intention information, and executing the corresponding interactive operation according to the control instruction further includes:
automatically generating a PLC control instruction from the intention information according to heuristic rules, wherein the numerical control machine tool executes the corresponding interactive operation according to the PLC control instruction.
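A minimal sketch of heuristic rules that turn intention information into a PLC write command; the rule table, register addresses, and value encodings are invented for illustration, since a real deployment would follow the machine builder's PLC interface.

# Hypothetical heuristic rules: (intention, parameter key, PLC register, value transform).
PLC_RULES = [
    ("SET_SPINDLE_SPEED", "rpm", "D100", lambda v: int(v)),
    ("PAUSE", None, "M10", lambda _: 1),
]

def intention_to_plc(intention, parameters):
    """Apply the first matching heuristic rule and emit a PLC write command."""
    for rule_intention, key, register, transform in PLC_RULES:
        if rule_intention == intention:
            value = transform(parameters.get(key)) if key else transform(None)
            return {"register": register, "value": value}
    return None

print(intention_to_plc("SET_SPINDLE_SPEED", {"rpm": 3000}))
# -> {'register': 'D100', 'value': 3000}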
2. A numerical control machine tool interactive control system based on voice recognition, wherein the system comprises:
the trigger word determining module is used for acquiring a voice signal, identifying the voice signal and determining a trigger word corresponding to the voice signal, wherein the trigger word is specially used for triggering the control of the numerical control machine tool;
the sentence recognition module is used for acquiring, after the trigger word is determined, a continuous sound signal that is temporally consecutive to the trigger word, recognizing the continuous sound signal, and determining a sentence text corresponding to the continuous sound signal;
the intention determining module is used for determining category information corresponding to the sentence text, and determining intention information according to the category information;
the interaction control module is used for generating a control instruction corresponding to the intention information according to the intention information and executing corresponding interaction operation according to the control instruction;
the trigger word determination module comprises:
the signal identification unit is used for acquiring the voice signal, identifying the voice signal, and obtaining text information corresponding to the voice signal;
the trigger word matching unit is used for matching the text information against a typical database preset in the numerical control machine tool and determining a trigger word corresponding to the text information, wherein the trigger word is set based on common functions or common control information of the numerical control machine tool during daily use; the typical database stores a plurality of trigger words whose use frequency exceeds a preset frequency threshold, each trigger word corresponds to a different interactive operation, and each interactive operation is a common operation summarized from the historical use of the numerical control machine tool;
the signal identification unit includes:
the noise reduction processing subunit is configured to perform voice noise reduction processing on the sound signal by using a least mean square algorithm filter to obtain a noise-reduced sound signal;
the signal processing subunit is configured to perform Fourier transform on the noise-reduced sound signal to obtain an amplitude spectrum and a phase spectrum of the sound signal, where the amplitude spectrum and the phase spectrum reflect how the sound signal fluctuates over time;
the character recognition subunit is used for inputting the amplitude spectrum and the phase spectrum into a preset residual neural network to obtain the text information, wherein the residual neural network is trained in advance on amplitude spectra and phase spectra of sound signals corresponding to a plurality of different pieces of text information;
the sentence recognition module comprises:
the time information determining unit is used for determining, after the trigger word is determined, time information corresponding to the trigger word, and acquiring the continuous sound signal following the time indicated by the time information;
the signal noise reduction processing unit is used for carrying out noise reduction processing on the continuous sound signal and inputting the continuous sound signal subjected to the noise reduction processing into a pre-trained sentence recognition model;
the sentence recognition unit is used for outputting a sentence text corresponding to the continuous sound signal according to the sentence recognition model;
the acquiring of the continuous sound signal temporally consecutive to the trigger word comprises:
acquiring the continuous sound signal based on a pause duration detected after the trigger word;
the system further comprises a sentence recognition model training module, wherein the sentence recognition model training module comprises:
the information word list construction unit is used for constructing in advance a typical scene information word list of the numerical control machine tool, wherein the typical scene information word list is set according to the machining process carried out when the numerical control machine tool machines a certain workpiece or a certain type of workpiece, the typical scene information word list contains a plurality of vocabularies used for the numerical control machine tool, and the vocabularies are used to reflect control information and operation parameter information of the numerical control machine tool;
the mapping relation construction unit is used for labeling the sample sound signals by utilizing the typical scene information word list and constructing the mapping relation between the sample sound signals and the words in the typical scene information word list;
the sentence model training unit is used for training a preset neural network model according to the mapping relation to obtain the sentence recognition model;
the intent determination module comprising:
the word segmentation processing unit is used for performing word segmentation processing on the sentence text to obtain word segmentation information, and screening out, from the word segmentation information, keywords that reflect a control intention or a query intention;
the category determining unit is used for inputting the keywords into a pre-trained BERT model and determining category information corresponding to the keywords, wherein the category information comprises a machine tool working control category or a machine tool information query category, and the BERT model is trained in advance on sample vocabularies and the category information corresponding to the sample vocabularies;
the intention information unit is used for determining intention information corresponding to the keyword according to the category information corresponding to the keyword;
the interaction control module comprises:
the instruction generation unit is used for inputting the intention information and the keywords into an instruction generation template to generate the control instruction, wherein the control instruction comprises an instruction for controlling the operation of the numerical control machine tool or an instruction for querying machine tool information;
the instruction execution unit is used for parsing the control instruction to obtain the intention information, and executing the interactive operation corresponding to the intention information;
the interaction control module is further used for automatically generating a PLC control instruction from the intention information according to heuristic rules, wherein the numerical control machine tool executes the corresponding interactive operation according to the PLC control instruction.
3. A numerical control machine tool, wherein the numerical control machine tool comprises a memory, a processor, and a numerical control machine tool interaction control program based on voice recognition that is stored in the memory and operable on the processor, wherein the processor implements the steps of the numerical control machine tool interaction control method based on voice recognition according to claim 1 when executing the numerical control machine tool interaction control program based on voice recognition.
4. A computer-readable storage medium, wherein the computer-readable storage medium stores a numerical control machine tool interaction control program based on voice recognition, and when the numerical control machine tool interaction control program based on voice recognition is executed by a processor, the steps of the numerical control machine tool interaction control method based on voice recognition according to claim 1 are implemented.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210786763.8A CN114863927B (en) 2022-07-06 2022-07-06 Numerical control machine tool interaction control method and system based on voice recognition

Publications (2)

Publication Number Publication Date
CN114863927A (en) 2022-08-05
CN114863927B (en) 2022-09-30

Family

ID=82626077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210786763.8A Active CN114863927B (en) 2022-07-06 2022-07-06 Numerical control machine tool interaction control method and system based on voice recognition

Country Status (1)

Country Link
CN (1) CN114863927B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409133B (en) * 2022-10-31 2023-02-03 中科航迈数控软件(深圳)有限公司 Cross-modal data fusion-based numerical control machine tool operation intention identification method and system
CN115964115B (en) * 2023-03-17 2023-06-02 中科航迈数控软件(深圳)有限公司 Numerical control machine tool interaction method based on pre-training reinforcement learning and related equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108335695A (en) * 2017-06-27 2018-07-27 腾讯科技(深圳)有限公司 Sound control method, device, computer equipment and storage medium
CN109801629A (en) * 2019-03-01 2019-05-24 珠海格力电器股份有限公司 A kind of sound control method, device, storage medium and air-conditioning
CN111415654A (en) * 2019-01-07 2020-07-14 北京嘀嘀无限科技发展有限公司 Audio recognition method and device, and acoustic model training method and device
CN113094481A (en) * 2021-03-03 2021-07-09 北京智齿博创科技有限公司 Intention recognition method and device, electronic equipment and computer readable storage medium
CN113486661A (en) * 2021-06-30 2021-10-08 东莞市小精灵教育软件有限公司 Text understanding method, system, terminal equipment and storage medium
CN114117009A (en) * 2021-11-30 2022-03-01 深圳壹账通智能科技有限公司 Method, device, equipment and medium for configuring sub-processes based on conversation robot

Similar Documents

Publication Publication Date Title
CN114863927B (en) Numerical control machine tool interaction control method and system based on voice recognition
EP0852051B1 (en) Process for automatic control of one or more devices by voice commands or by real-time voice dialog and apparatus for carrying out this process
CN108182937B (en) Keyword recognition method, device, equipment and storage medium
EP2309489B1 (en) Methods and systems for considering information about an expected response when performing speech recognition
CN107644638B (en) Audio recognition method, device, terminal and computer readable storage medium
US6615178B1 (en) Speech translator, speech translating method, and recorded medium on which speech translation control program is recorded
DE69827988T2 (en) Speech models for speech recognition
US7966177B2 (en) Method and device for recognising a phonetic sound sequence or character sequence
US5689617A (en) Speech recognition system which returns recognition results as a reconstructed language model with attached data values
US20020184024A1 (en) Speech recognition for recognizing speaker-independent, continuous speech
JPWO2007013521A1 (en) Apparatus, method, and program for performing user-machine interaction
CN110853628A (en) Model training method and device, electronic equipment and storage medium
CN111128192A (en) Voice recognition noise reduction method, system, mobile terminal and storage medium
CN111968645B (en) Personalized voice control system
CN103778915A (en) Speech recognition method and mobile terminal
CN106845628A (en) The method and apparatus that robot generates new command by internet autonomous learning
CN113593565A (en) Intelligent home device management and control method and system
CN111192573B (en) Intelligent control method for equipment based on voice recognition
CN109346099B (en) Iterative denoising method and chip based on voice recognition
Supriya et al. Speech recognition using HTK toolkit for Marathi language
CN113643700B (en) Control method and system of intelligent voice switch
Ranzenberger et al. Integration of a Kaldi speech recognizer into a speech dialog system for automotive infotainment applications
CN109410928B (en) Denoising method and chip based on voice recognition
CN109602333B (en) Voice denoising method and chip based on cleaning robot
Lauria Talking to machines: introducing robot perception to resolve speech recognition uncertainties

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant