CN110619877A - Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium - Google Patents


Info

Publication number
CN110619877A
CN110619877A
Authority
CN
China
Prior art keywords
laser pen
intention
voice
word
natural language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910923593.1A
Other languages
Chinese (zh)
Inventor
冯海洪
毛德平
许成亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Mic Technology Co Ltd
Original Assignee
Anhui Mic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Mic Technology Co Ltd filed Critical Anhui Mic Technology Co Ltd
Priority to CN201910923593.1A priority Critical patent/CN110619877A/en
Publication of CN110619877A publication Critical patent/CN110619877A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to the field of voice signal processing, in particular to a voice recognition man-machine interaction method, device, system and storage medium applied to a laser pen. The method comprises the following steps: firstly, voice information is collected by the laser pen; the computer software end then acquires the collected data over a local area network and performs natural language processing on it; after word segmentation is finished, feature extraction is performed with BiLSTM+CRF; sklearn is then used as a judging tool to determine the intention of the sentence instruction formed by the tagged words and to generate a result; finally, the corresponding operation is executed according to the intention. The invention designs a voice recognition man-machine interaction method applied to a laser pen: application software installed on the computer is developed for the hardware, the user's voice instruction is picked up by pressing a specific key on the laser pen, the acquired voice information is analyzed by natural language processing technology, the user's intention is recognized, and the corresponding operation is executed, so that when the laser pen is used, a PPT presentation can be controlled through voice interaction.

Description

Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium
Technical Field
The invention relates to the field of voice signal processing, in particular to a voice recognition man-machine interaction method, device and system applied to a laser pen and a storage medium.
Background
At present, most laser pens on the market have a single function and are inconvenient to operate. Their operation mode is the same as that of a remote control, namely transmitting invisible signals using infrared or wireless communication technology. Such transmitter-receiver control modes often require the transmitter to be aimed at the receiver for reliable control; if the laser pen is not pointed at the receiver, control fails, and all control must be performed manually.
Intelligent Voice Interaction is currently a very promising direction in the field of artificial intelligence. Voice interaction enables brand-new interaction scenarios and has many advantages over traditional control modes; the more complex the environment, the more these advantages come into play. The voice recognition man-machine interaction method applied to the laser pen enables the laser pen device to receive the user's voice instructions, process them and transmit them to computer-side software connected over a local area network; after natural language processing of the received voice instruction, the software judges the intention and executes the corresponding operation.
The invention designs a voice recognition man-machine interaction method applied to a laser pen. Application software installed on the computer is developed for the hardware; the user picks up a voice instruction by pressing a specific key on the laser pen, the acquired voice information is analyzed by natural language processing technology, the user's intention is recognized, and the corresponding operation is executed. This provides a more intelligent operation mode for the laser pen and brings greater convenience to the user.
Disclosure of Invention
Aiming at the existing problems, the invention aims to develop a voice interaction method applied to a laser pen, which can effectively improve people's control over the laser pen; through voice interaction, the laser pen and the corresponding software system can be operated more intelligently. To solve the problems in the prior art, the invention provides a voice recognition human-computer interaction method applied to a laser pen, comprising the following steps:
step S1: collecting voice information through a laser pen;
step S2: the computer software end acquires the data acquired by the laser pen through the local area network and performs natural language processing on the data;
step S3: after word segmentation is finished, performing feature extraction by adopting BiLSTM+CRF;
step S4: using sklearn as a judging tool, judging the intention of the sentence instruction formed by the tagged words and generating a result;
step S5: executing corresponding operation according to the intention;
preferably, the natural language processing in step S2 includes the steps of:
step S21: chinese word segmentation is carried out on the obtained voice command, the HMM model, the average perceptron and CRF + + are used for extracting the characteristics of the vocabulary,
step S22: performing corpus training and model pruning,
step S23: and storing the trained model.
In order to achieve the above object, the present invention further provides a voice recognition human-computer interaction device applied to a laser pen, comprising
a pickup module, used for collecting voice information via the laser pen;
The acquisition and preprocessing module is used for acquiring data acquired by the laser pen and performing natural language processing on the collected data;
the characteristic extraction module is used for extracting the characteristics of the obtained voice information;
the intention judging module is used for judging, with sklearn as the judging tool, the intention of the sentence instruction formed by the tagged words and generating a result;
and the execution module is used for executing corresponding operation according to the intention.
In order to achieve the above object, the present invention further provides a voice recognition human-computer interaction system applied to a laser pen, comprising a laser pen, a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor implements the steps of the method when executing the computer program.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, performs the steps of the above method.
The invention has the beneficial effects that:
(1) The problem of control failure caused by a traditional laser pen transmitter not being aimed at the receiver is solved.
(2) The latest natural language processing technology in the field of artificial intelligence is applied to accurately analyze the user's intention.
(3) A laser pen device with the voice interaction function can provide customers with a high-quality experience of natural language processing technology at a very reasonable price.
Drawings
Fig. 1 is an overall flowchart of a voice recognition human-computer interaction method applied to a laser pointer according to embodiment 1 of the present invention.
Fig. 2 is a block diagram of a voice recognition human-computer interaction device applied to a laser pointer in embodiment 2 of the present invention.
FIG. 3 is a schematic diagram of a classical structure of a recurrent neural network.
Fig. 4 shows the structure of the recurrent neural network after being expanded in time.
Fig. 5 is an example of a recurrent neural network after being expanded in time.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Fig. 1 is an overall flowchart of a voice recognition human-computer interaction method applied to a laser pointer according to embodiment 1 of the present invention. As shown in fig. 1, the voice recognition human-computer interaction method applied to a laser pointer includes the following steps:
step S1: and voice information is collected through the laser pen.
Step S2: and the computer software end acquires the data acquired by the laser pen through the local area network and performs natural language processing on the data.
In this step, Chinese word segmentation is first carried out on the obtained voice instruction: the leftmost m characters of the sentence to be segmented are taken as the matching field, where m is the length of the longest entry in the machine dictionary.
And (3) searching a large machine dictionary and matching:
if the matching is successful, the matching field is segmented as a word.
If the matching is unsuccessful, the last character of the matching field is removed and the remaining string is used as a new matching field for matching again; this process is repeated until all words are cut out.
For example:
Suppose the sentence "南京市长江大桥" (Nanjing Yangtze River Bridge) is to be segmented according to the principle of forward maximum matching:
First, the first 5 characters "南京市长江" are taken from the sentence and matched against the dictionary. No word is found, so the number of characters taken is shortened; the first four characters are tried next, and so on, until "南京市" (Nanjing city) is found in the word bank and cut as a word.
Forward maximum matching is then applied again to the remaining characters "长江大桥", from which "长江" (Yangtze River) and "大桥" (bridge) are cut.
The segmentation of the whole sentence is thus: 南京市 / 长江 / 大桥. The features of the vocabulary are then extracted using an HMM model, an averaged perceptron and CRF++. Traditional word segmentation methods, whether rule-based or statistics-based, generally depend on a word list (dictionary) compiled in advance, and the automatic word segmentation process makes segmentation decisions through the word list and related information.
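The forward maximum matching procedure described above can be sketched in a few lines of Python. This is a minimal illustration: the toy dictionary and the maximum entry length are assumptions for the demo, not the patent's actual lexicon.

```python
def forward_max_match(sentence, dictionary, max_len=5):
    """Forward maximum matching: repeatedly take the longest dictionary
    word from the left edge of the sentence, shrinking the matching field
    one character at a time; an unmatched single character is cut alone."""
    words = []
    i = 0
    while i < len(sentence):
        n = min(max_len, len(sentence) - i)
        # shrink the matching field until it hits a dictionary entry
        while n > 1 and sentence[i:i + n] not in dictionary:
            n -= 1
        words.append(sentence[i:i + n])
        i += n
    return words

# The patent's example sentence: 南京市长江大桥 (Nanjing Yangtze River Bridge)
toy_dict = {"南京市", "南京", "市长", "长江", "大桥"}
print(forward_max_match("南京市长江大桥", toy_dict))  # ['南京市', '长江', '大桥']
```

Note that even though "市长" (mayor) is in the dictionary, the left-to-right longest-first rule correctly prefers "南京市", matching the segmentation given above.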
Since each character occupies a certain position when forming a specific word (i.e., a lexeme), if it is specified that each character has at most four word-formation positions, namely B (word beginning), M (word middle), E (word end) and S (single-character word), then a segmentation result such as (A) below can be directly expressed in the character-tagged form shown in (B):
(A) Segmentation result: 上海 / 计划 / 到 / 本 / 世纪 / 末 / 实现 / 人均 / 国内 / 生产 / 总值 / 五千美元 ("Shanghai plans to reach a per-capita GDP of five thousand dollars by the end of this century")
(B) Character-tagged form: 上/B 海/E 计/B 划/E 到/S 本/S 世/B 纪/E 末/S 实/B 现/E 人/B 均/E 国/B 内/E 生/B 产/E 总/B 值/E 五/B 千/M 美/M 元/E 。/S
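The mapping from a segmented word list to the four-position character tags can be sketched as follows; this is a minimal illustration of the B/M/E/S convention, not the patent's training code.

```python
def words_to_bmes(words):
    """Tag every character of each word with its word-formation position:
    B = word beginning, M = word middle, E = word end, S = single-character word."""
    tagged = []
    for w in words:
        if len(w) == 1:
            tagged.append(w + "/S")
        else:
            tagged.append(w[0] + "/B")
            tagged.extend(c + "/M" for c in w[1:-1])
            tagged.append(w[-1] + "/E")
    return tagged

print(words_to_bmes(["上海", "到", "五千美元"]))
# ['上/B', '海/E', '到/S', '五/B', '千/M', '美/M', '元/E']
```

Running the inverse direction (characters plus predicted tags back to words) is what turns a tagging model's output into a segmentation, i.e. the "word reorganization" described below.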
First, it should be noted that the words mentioned herein are not limited to Chinese characters. Considering that the real Chinese text inevitably contains a certain number of non-Chinese characters, the word is also included in the characters such as foreign language letters, Arabic numerals and punctuation marks. All these characters are the basic units of word formation. Of course, Chinese characters remain the most numerous character types in this set of units.
One important advantage of treating the word segmentation process as a character tagging problem is that it handles the recognition of in-vocabulary words and unknown words in a unified way.
In this word segmentation technique, both in-vocabulary words and unknown words in the text are recognized by a single, unified character tagging process. In the learning architecture, the information of in-vocabulary words need not be specially emphasized, and no dedicated recognition module for unknown words (such as person, place and organization names) needs to be designed, which greatly simplifies the design of the word segmentation system. During character tagging, all characters undergo lexeme feature learning according to predefined features to obtain a probability model. Then, on the string to be segmented, a lexeme tagging result is obtained according to how tightly adjacent characters combine, and the final segmentation result is obtained directly from the lexeme definitions. In short, in such a process word segmentation becomes a simple process of character reorganization.
In 2001, Lafferty proposed an undirected graph model, the Conditional Random Field (CRF), building on the Maximum Entropy Model (MEM) and the Hidden Markov Model (HMM). Given an observation sequence to be tagged, a CRF maximizes the conditional probability of the tag sequence, and it is commonly used as a statistical model for segmenting and tagging sequence data. Corpus training and model pruning are then performed, and finally the trained model is stored.
The recurrent neural network (RNN) model is mainly used for processing and predicting sequence data. In a traditional neural network, the layers are fully connected from the input layer through the hidden layer to the output layer, while the nodes within each layer are unconnected. Such an ordinary neural network is powerless for many problems: for example, to predict the next word of a sentence, the current word and the preceding words are generally needed, because the words in a sentence are not independent. For instance, if the current word is "very" and the previous word is "sky", the next word is most likely "blue". Recurrent neural networks arose to describe the relation between the current output of a sequence and the preceding information. In terms of network structure, a recurrent neural network memorizes the preceding information and uses it to influence the output of later nodes. That is, the nodes of the hidden layer of a recurrent neural network are connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous time step. In theory, RNNs can process sequence data of any length; in practice, to reduce complexity it is often assumed that the current state is related only to a few previous states.
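As an illustration of how a trained HMM (mentioned in step S21) assigns B/M/E/S tags to characters, the standard Viterbi decoding step can be sketched as follows. The probabilities here are toy numbers chosen for a two-character input, not parameters learned from a real corpus.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for the observations."""
    # probabilities after the first observation
    V = [{s: start_p.get(s, 0.0) * emit_p[s].get(obs[0], 1e-8) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # best previous state leading into s
            prob, prev = max(
                (V[t - 1][p] * trans_p[p].get(s, 1e-8) * emit_p[s].get(obs[t], 1e-8), p)
                for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy model: tag the two characters of 长江 as one word (B then E).
states = ["B", "M", "E", "S"]
start_p = {"B": 0.6, "S": 0.4}
trans_p = {"B": {"M": 0.3, "E": 0.7}, "M": {"M": 0.3, "E": 0.7},
           "E": {"B": 0.6, "S": 0.4}, "S": {"B": 0.6, "S": 0.4}}
emit_p = {"B": {"长": 0.5}, "M": {"长": 0.1, "江": 0.1},
          "E": {"江": 0.5}, "S": {"长": 0.2, "江": 0.2}}
print(viterbi(["长", "江"], states, start_p, trans_p, emit_p))  # ['B', 'E']
```

A CRF replaces these generative start/transition/emission probabilities with feature-weighted potentials, but the decoding over the tag lattice works the same way.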
A typical recurrent neural network is illustrated in fig. 3. For a recurrent neural network, a very important concept is the time step: the network gives an output for the input at each time step in combination with the current state of the model. As can be seen from fig. 3, the input to the main structure A of the recurrent neural network comes not only from the input layer Xt but also from a recurrent edge that provides the state at the current time. At each time step t, module A reads the input Xt and outputs a value ht, while the state of A is passed from the current step to the next. Thus, a recurrent neural network can theoretically be seen as the result of replicating the same neural network structure infinitely. For optimization, however, a truly infinite loop cannot be realized, so in practice the recurrent neural network is unrolled, giving the structure shown in fig. 4.
As can be seen more clearly in fig. 4, the recurrent neural network has an input Xt at each time step and provides an output Ht according to its current state At. The current state At is jointly determined by the state At-1 at the previous time step and the current input Xt. From these structural features it is easy to see that the problems the recurrent neural network is best suited to solve are those with time-series dependencies; it is the most natural neural network structure for such problems. For a sequence of data, the values at successive times can be passed in turn into the input layer of the recurrent neural network, and the output can be a prediction of the next element of the sequence. A recurrent neural network requires an input at every time step, but does not necessarily require an output at every time step. In recent years, recurrent neural networks have been widely and successfully used in speech recognition, language modeling, machine translation and time series analysis.
Taking machine translation as an example of how a recurrent neural network solves a practical problem: the input at each time step is one word of the sentence to be translated. As shown in fig. 5, the sentence to be translated is ABCD; the inputs at the successive time steps of the first stage of the recurrent neural network are A, B, C and D, followed by an end-of-sentence symbol. In this first stage the recurrent neural network produces no output. Starting from the end symbol, the recurrent neural network enters the translation stage, in which the input at each time step is the output of the previous time step, and the resulting outputs form the translation of the sentence ABCD. As can be seen from fig. 5, XYZ is the translation corresponding to ABCD, and Q is an end symbol indicating that the translation is finished. As described earlier, a recurrent neural network can be regarded as the same neural network structure replicated along the time sequence; this replicated structure is called the recurrent cell. Designing the network structure of the recurrent cell is the key to solving practical problems with recurrent neural networks. Similar to the way the parameters of a convolutional neural network's filters are shared, in a recurrent neural network the parameters of the recurrent cell are shared across all time steps.
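The state update that lets a recurrent cell carry information forward in time, with the same parameters shared at every step, can be sketched with scalar weights. The weight values here are arbitrary illustrative assumptions, not trained parameters.

```python
import math

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    """One recurrent-cell step: h_t = tanh(w_x * x_t + w_h * h_prev + b).
    The same weights are reused at every time step (parameter sharing)."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(inputs, h0=0.0):
    """Unroll the cell over a sequence, threading the state through time."""
    h, states = h0, []
    for x in inputs:
        h = rnn_step(x, h)
        states.append(h)
    return states

states = run_rnn([1.0, 0.0, 0.0])
# The later outputs stay non-zero only because the state is carried over:
print(states)
```

Even though the second and third inputs are zero, their outputs are non-zero and decay gradually, which is exactly the "memory of preceding information" described above.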
Step S3: and performing feature extraction by using Bilstm + crf after word segmentation is finished.
Step S4: the sklern is used as a tool for judgment, and the marked word is judged to finally generate the intention of the sentence command, so that a result is generated.
Step S5: and executing corresponding operation according to the intention.
Example 2
FIG. 2 is a block diagram of a voice recognition human-computer interaction device applied to a laser pointer according to embodiment 2 of the present invention. As shown in FIG. 2, this embodiment provides a voice recognition human-computer interaction device applied to a laser pointer, comprising:
a pickup module, used for collecting voice information via the laser pen;
The acquisition and preprocessing module is used for acquiring data acquired by the laser pen and performing natural language processing on the collected data;
the characteristic extraction module is used for extracting the characteristics of the obtained voice information;
the intention judging module is used for judging, with sklearn as the judging tool, the intention of the sentence instruction formed by the tagged words and generating a result;
and the execution module is used for executing corresponding operation according to the intention.
Example 3
The embodiment provides a voice recognition man-machine interaction system applied to a laser pen, which comprises the laser pen, a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the steps of the method when executing the computer program.
Example 4
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the above-mentioned method.
In summary, the voice recognition human-computer interaction method, device, system and storage medium applied to the laser pen disclosed in the embodiments of the present invention use the latest natural language processing technology in the field of artificial intelligence to accurately analyze the user's intention, avoiding the control failure caused when a conventional laser pen transmitter is not aimed at the receiver. Moreover, a laser pen device with the voice interaction function can provide the client with a high-quality experience of natural language processing technology at a very reasonable price.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can understand that the changes or modifications within the technical scope of the present invention are included in the scope of the present invention, and therefore, the scope of the present invention should be subject to the protection scope of the claims.

Claims (5)

1. A voice recognition man-machine interaction method applied to a laser pen is characterized by comprising the following steps:
step S1: collecting voice information through a laser pen;
step S2: the computer software end acquires the data acquired by the laser pen through the local area network and performs natural language processing on the data;
step S3: after word segmentation is finished, performing feature extraction by adopting BiLSTM+CRF;
step S4: using sklearn as a judging tool, judging the intention of the sentence instruction formed by the tagged words and generating a result;
step S5: and executing corresponding operation according to the intention.
2. The voice recognition man-machine interaction method applied to a laser pen of claim 1, wherein: the natural language processing in step S2 includes the steps of:
step S21: chinese word segmentation is carried out on the obtained voice command, the HMM model, the average perceptron and CRF + + are used for extracting the characteristics of the vocabulary,
step S22: performing corpus training and model pruning,
step S23: and storing the trained model.
3. A voice recognition human-computer interaction device applied to a laser pen, characterized by comprising:
a pickup module, used for collecting voice information via the laser pen;
The acquisition and preprocessing module is used for acquiring data acquired by the laser pen and performing natural language processing on the collected data;
the characteristic extraction module is used for extracting the characteristics of the obtained voice information;
the intention judging module is used for judging, with sklearn as the judging tool, the intention of the sentence instruction formed by the tagged words and generating a result;
and the execution module is used for executing corresponding operation according to the intention.
4. A speech recognition human-computer interaction system for a laser pointer, comprising a laser pointer, a memory, a processor and a computer program stored on the memory and executable on the processor, wherein: the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 2.
5. A computer-readable storage medium having stored thereon a computer program, characterized in that: the program when executed by a processor implements the steps of the method of any of claims 1 to 2.
CN201910923593.1A 2019-09-27 2019-09-27 Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium Pending CN110619877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910923593.1A CN110619877A (en) 2019-09-27 2019-09-27 Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910923593.1A CN110619877A (en) 2019-09-27 2019-09-27 Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium

Publications (1)

Publication Number Publication Date
CN110619877A true CN110619877A (en) 2019-12-27

Family

ID=68924434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910923593.1A Pending CN110619877A (en) 2019-09-27 2019-09-27 Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium

Country Status (1)

Country Link
CN (1) CN110619877A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114546134A (en) * 2022-02-24 2022-05-27 深圳市启望科文技术有限公司 Interaction system and method of multifunctional page turning pen and host computer end
CN114783439A (en) * 2022-06-20 2022-07-22 清华大学 Command injection method and system based on intelligent voice control system

Citations (11)

Publication number Priority date Publication date Assignee Title
CN103632531A (en) * 2013-12-10 2014-03-12 青岛歌尔声学科技有限公司 Intelligent remote controller with laser pointer function and remote control method thereof
CN203630952U (en) * 2013-12-10 2014-06-04 青岛歌尔声学科技有限公司 Intelligent remote controller with function of laser pen
CN108491179A (en) * 2018-03-13 2018-09-04 黄玉玲 A kind of method and system of word input
CN108536302A (en) * 2018-04-17 2018-09-14 中国矿业大学 A kind of teaching method and system based on human body gesture and voice
CN108769638A (en) * 2018-07-25 2018-11-06 京东方科技集团股份有限公司 A kind of control method of projection, device, projection device and storage medium
CN109165384A (en) * 2018-08-23 2019-01-08 成都四方伟业软件股份有限公司 A kind of name entity recognition method and device
CN109255119A (en) * 2018-07-18 2019-01-22 五邑大学 A kind of sentence trunk analysis method and system based on the multitask deep neural network for segmenting and naming Entity recognition
CN208507021U (en) * 2018-03-21 2019-02-15 合肥师范学院 A kind of multimedia teaching laser pen with speech identifying function
CN109785840A (en) * 2019-03-05 2019-05-21 湖北亿咖通科技有限公司 The method, apparatus and vehicle mounted multimedia host, computer readable storage medium of natural language recognition
CN109857327A (en) * 2017-03-27 2019-06-07 三角兽(北京)科技有限公司 Information processing unit, information processing method and storage medium
CN110111646A (en) * 2019-06-11 2019-08-09 深圳市美朵科技有限公司 A kind of multifunction electronic laser pen with magnetic charge port

Patent Citations (11)

Publication number Priority date Publication date Assignee Title
CN103632531A (en) * 2013-12-10 2014-03-12 青岛歌尔声学科技有限公司 Intelligent remote controller with laser pointer function and remote control method thereof
CN203630952U (en) * 2013-12-10 2014-06-04 青岛歌尔声学科技有限公司 Intelligent remote controller with function of laser pen
CN109857327A (en) * 2017-03-27 2019-06-07 三角兽(北京)科技有限公司 Information processing apparatus, information processing method and storage medium
CN108491179A (en) * 2018-03-13 2018-09-04 黄玉玲 Word input method and system
CN208507021U (en) * 2018-03-21 2019-02-15 合肥师范学院 Multimedia teaching laser pen with speech recognition function
CN108536302A (en) * 2018-04-17 2018-09-14 中国矿业大学 Teaching method and system based on human gesture and voice
CN109255119A (en) * 2018-07-18 2019-01-22 五邑大学 Sentence-trunk analysis method and system based on a multitask deep neural network for word segmentation and named entity recognition
CN108769638A (en) * 2018-07-25 2018-11-06 京东方科技集团股份有限公司 Projection control method and apparatus, projection device and storage medium
CN109165384A (en) * 2018-08-23 2019-01-08 成都四方伟业软件股份有限公司 Named entity recognition method and device
CN109785840A (en) * 2019-03-05 2019-05-21 湖北亿咖通科技有限公司 Natural language recognition method and apparatus, vehicle-mounted multimedia host, and computer-readable storage medium
CN110111646A (en) * 2019-06-11 2019-08-09 深圳市美朵科技有限公司 Multifunctional electronic laser pen with magnetic charging port

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
朱频频: "Intelligent Customer Service Technology and Applications" (《智能客户服务技术与应用》), 31 January 2019, Beijing: China Railway Publishing House *
陈世梅 et al.: "Recognition of Chinese Negation Information Based on a BiLSTM-CRF Model" (基于BiLSTM-CRF模型的汉语否定信息识别), Journal of Chinese Information Processing (《中文信息学报》) *
陈小荷 et al.: "Information Processing of Pre-Qin Documents" (《先秦文献信息处理》), 31 January 2013, Beijing: World Publishing Beijing Corporation *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114546134A (en) * 2022-02-24 2022-05-27 深圳市启望科文技术有限公司 Interaction system and method between a multifunctional page-turning pen and a host computer
CN114783439A (en) * 2022-06-20 2022-07-22 清华大学 Command injection method and system based on intelligent voice control system
CN114783439B (en) * 2022-06-20 2022-09-13 清华大学 Command injection method and system based on intelligent voice control system

Similar Documents

Publication Publication Date Title
CN112100349B (en) Multi-round dialogue method and device, electronic equipment and storage medium
CN107291783B (en) Semantic matching method and intelligent equipment
CN108287858A (en) Semantic extraction method and device for natural language
CN110968660B (en) Information extraction method and system based on joint training model
CN111159990B (en) Method and system for identifying general special words based on pattern expansion
CN110895932A (en) Multi-language voice recognition method based on language type and voice content collaborative classification
JP6838161B2 (en) End-to-end modeling methods and systems
CN114022882B (en) Text recognition model training method, text recognition device, text recognition equipment and medium
CN113420296A (en) C source code vulnerability detection method based on Bert model and BiLSTM
CN109285111A (en) Font conversion method, apparatus, device and computer-readable storage medium
CN103577548B (en) Method and device for matching characters with close pronunciation
CN110516240B (en) Semantic similarity calculation model DSSM (direct sequence spread spectrum) technology based on Transformer
CN111967267B (en) XLNET-based news text region extraction method and system
CN111914555B (en) Automatic relation extraction system based on Transformer structure
CN112463942A (en) Text processing method and device, electronic equipment and computer readable storage medium
CN114860942B (en) Text intention classification method, device, equipment and storage medium
CN112149386A (en) Event extraction method, storage medium and server
CN110196963A (en) Model generation and semantic recognition method, system, device and storage medium
CN110516035A (en) Man-machine interaction method and system based on hybrid modules
CN110084323A (en) End-to-end semantic parsing system and training method
CN110619877A (en) Voice recognition man-machine interaction method, device and system applied to laser pen and storage medium
CN108509539B (en) Information processing method and electronic device
CN111813923A (en) Text summarization method, electronic device and storage medium
CN115129892A (en) Power distribution network fault disposal knowledge graph construction method and device
CN117746078B (en) Object detection method and system based on user-defined category

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191227