CN103593048A - Voice navigation system and method of animal robot system - Google Patents

Voice navigation system and method of animal robot system

Info

Publication number
CN103593048A
CN103593048A
Authority
CN
China
Prior art keywords
stimulation
animal robot
animal
stimulation parameter
phonetic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310517773.2A
Other languages
Chinese (zh)
Other versions
CN103593048B (en)
Inventor
吴朝晖
杨莹春
夏兵朝
郑能干
潘纲
郑筱祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201310517773.2A
Publication of CN103593048A
Application granted
Publication of CN103593048B
Active legal status (current)
Anticipated expiration legal status

Abstract

The invention discloses a voice navigation system for an animal robot. The voice navigation system comprises a microphone, a master control device with a voice recognizer and a stimulation instruction switch, and a micro stimulator. The invention further discloses a voice navigation method for the animal robot. The method includes the steps of: S01, the microphone receives and forwards a voice instruction; S02, the voice instruction recognition module recognizes the voice instruction and makes a decision on the recognition result: if the result is noise, nothing is done; if the result is a stimulation parameter adjustment instruction, the stimulation instruction switch modifies the current stimulation parameters according to the content of the instruction and stores them; if the result is a control instruction, the stimulation instruction switch generates stimulation parameters according to the control instruction and the method proceeds to step S03; S03, the micro stimulator applies electrical pulse stimulation to the animal robot according to the stimulation parameters and thereby navigates it. The movements of the animal robot are thus flexibly controlled by voice.

Description

Voice navigation system and method of an animal robot system
Technical field
The present invention relates to control methods for animal robots, and in particular to a voice navigation system and method for an animal robot system.
Background art
"Animal robots", also referred to as "bio-robots", are animals whose behavior or movement is under external control. Control is generally achieved by acting on the animal's afferent sensory nerves, so that the animal's motion and certain behaviors can be directed artificially. Traditional mechanical robots have many shortcomings in natural environments, such as limited stability and balance-keeping ability, poor adaptability to changing surroundings, and the need to consume and constantly replenish large amounts of electrical energy. Animal robots overcome these shortcomings well and have therefore been studied extensively and intensively over the past decade or so.
In 1997, Holzer, Shimoyama and colleagues jointly developed the world's first animal robot system, which used a cockroach as the carrier and was based on electrical stimulation. In 2002, Professor Chapin's group at the State University of New York medical center implanted electrodes in the somatosensory cortex and the medial forebrain bundle of the rat brain and, with suitable electrical stimulation, controlled experimental animals to walk along paths planned by the experimenter. In 2006, Professor Su Xuecheng's team at Shandong University of Science and Technology used electrical stimulation to realize a prototype rat robot, and subsequently used pigeons as carriers to achieve flight control of bird robots. In 2007, Professor Zheng Xiaoxiang and colleagues at the Qiushi Academy for Advanced Studies of Zhejiang University, again using electrical stimulation, further controlled rats to perform complex actions such as crossing three-dimensional obstacles; they later also found a method for controlling an animal robot to stop by implanting electrodes in the periaqueductal gray.
At present, animal robots are mostly controlled by electrically stimulating brain nerves. For a rat, for example, stimulating electrodes are implanted in the reward area of the brain (controlling forward movement), in the somatosensory cortex of both sides (controlling left and right turns) and in the periaqueductal gray (controlling stopping). Although this electrical-stimulation approach requires only a short training period, prolonged electrical stimulation during training and operation causes neural fatigue and adaptation in the animal, which requires the navigation efficiency of each experiment to be improved.
At present there are mainly two kinds of animal robot navigation experiments: manual navigation and automatic navigation.
In manual navigation, all instruction decisions during the navigation process are made through the observation and judgment of a person, which places certain demands on the experimenter's experience and judgment. Although manual navigation is rather subjective and requires the experimenter to stay focused and react quickly, its efficiency is generally higher: for a task of the same complexity, manual navigation can be completed in a shorter time.
In automatic navigation, the animal robot control system obtains the current state of the animal through a camera and machine vision, then generates instruction decisions with some algorithm or rule and controls the motion of the animal to complete a complex task; the whole process requires no manual intervention and is completed automatically by the control program in the computer. Because the state of the animal differs from experiment to experiment, its excitatory response to reward stimulation differs and the optimal stimulation parameters also differ, so the stimulation parameters often need to be adjusted repeatedly during an experiment to improve the precision and efficiency of each stimulation. In existing navigation systems this adjustment is done manually offline, which is very inconvenient and makes dynamic adjustment of the stimulation parameters during navigation impossible. In addition, when the state of the animal (currently mainly the animal's position, head position and head orientation) is obtained by machine vision, recognition errors can occur because of changes in lighting, environmental complexity and other factors, causing the subsequent automatic control to fail. Automatic navigation therefore needs a mechanism for feedback control and manual correction, so as to improve the efficiency of the whole system and minimize the electrical-stimulation injury suffered by the animal.
Summary of the invention
To provide an animal robot system with more natural interaction, adjustable electrical stimulation parameters, and a mechanism for feedback control and manual correction, thereby remedying the low efficiency and poor interaction friendliness of current animal robot systems, and to use voice control and correction to improve the efficiency of automatic navigation so that a navigation task of the same complexity is completed in less time and with less stimulation intensity, the invention provides a voice navigation system for an animal robot, comprising a micro stimulator that applies electrical pulse stimulation to the animal robot to navigate it and a master control device that outputs control instructions to the micro stimulator, and further comprising a microphone for receiving voice instructions.
The master control device comprises:
a voice instruction recognition module for recognizing voice instructions and making decisions on the recognition results; and
a stimulation instruction switching module for processing and outputting stimulation parameters according to the decision results.
Through real-time voice instruction recognition, the present invention turns the mouse-and-keyboard interface used to interact with the animal robot into a more natural voice interface. It also newly introduces voice instructions that adjust stimulation parameters to change the animal's movement speed, and allows manual correction of the automatic navigation process.
The voice instructions in the present invention are spoken instructions that a person can understand. They include motion control instructions that directly control the animal's motion, such as "advance", "turn left", "turn right" and "stop"; stimulation parameter adjustment instructions such as "raise/lower the forward/left/right/stop voltage" and "increase/decrease the forward/left/right/stop pulse count"; and speed control instructions such as "speed up/slow down forward/left turn/right turn" (which switch between predefined stimulation intensity levels) that adjust parameters while controlling the animal's motion. In the current embodiment, a "raise/lower the forward/left/right/stop voltage" instruction raises or lowers the corresponding stimulation parameter by 0.5 volts; in other application scenarios, the voltage raised or lowered each time can be set to other values. Correspondingly, for constant-current stimulation the default is to raise or lower the current by 5 mA, and in other scenarios the current raised or lowered each time can be another value.
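As a minimal sketch of how a parameter adjustment instruction might be applied in code, the following Python fragment uses the default step sizes given above (0.5 V, 5 mA, 1 pulse); the dictionary layout and function name are illustrative assumptions, not the patented implementation.

```python
# Default adjustment steps from the text: 0.5 V per voltage step, 5 mA per current
# step (constant-current mode), 1 pulse per pulse-count step.
DEFAULT_STEPS = {"voltage_v": 0.5, "current_ma": 5, "pulses": 1}

def adjust_parameter(params: dict, field: str, direction: int) -> dict:
    """Apply one spoken adjustment instruction, e.g. 'raise forward voltage'.

    params    -- current stimulation parameters, e.g. {"voltage_v": 4.0, "pulses": 10}
    field     -- which parameter the spoken instruction targets
    direction -- +1 for "raise"/"increase", -1 for "lower"/"decrease"
    """
    new_params = dict(params)
    new_params[field] = params[field] + direction * DEFAULT_STEPS[field]
    return new_params

# Example: "raise forward voltage" on a 4.0 V / 10-pulse setting -> 4.5 V / 10 pulses.
forward = adjust_parameter({"voltage_v": 4.0, "pulses": 10}, "voltage_v", +1)
```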
The master control device sends the stimulation parameters to the micro stimulator by wireless communication, so that voice navigation of the animal robot can be carried out remotely.
Using the voice navigation system of the animal robot of the present invention, the present invention also provides a voice navigation method for an animal robot, comprising the steps of:
S01, receiving a voice instruction;
S02, recognizing the voice instruction and making a decision on the recognition result, and processing and outputting stimulation parameters according to the decision result, specifically:
if the recognition result is noise, doing nothing;
if the recognition result is a stimulation parameter adjustment instruction, modifying and saving the current stimulation parameters according to the content of the instruction;
if the recognition result is a motion control instruction, generating the corresponding stimulation parameters according to the number of the motion control instruction and proceeding to step S03;
if the recognition result is a speed control instruction, adjusting and generating the corresponding new stimulation parameters according to the number of the speed control instruction and proceeding to step S03;
S03, sending stimulation electrical pulses to the corresponding brain area of the animal robot according to the received stimulation parameters, thereby navigating it.
Voice control improves the interaction friendliness of the animal robot system on the one hand, and on the other hand allows manual intervention in and correction of automatic navigation, so that complex navigation tasks are completed better and faster, the total electrical stimulation applied to the animal is minimized, the injury to the animal is reduced to a certain extent, and the service life of the animal robot is extended.
In step S02, the voice instruction recognition proceeds as follows:
201, preprocessing the voice instruction, where the preprocessing includes endpoint detection;
202, extracting characteristic parameters from the preprocessing result;
203, performing pattern recognition on the extracted characteristic parameters to obtain the recognition result.
By recognizing the voice instructions, spoken control commands are converted into the corresponding stimulation parameters and applied to the brain of the animal robot, so that the animal robot acts according to human speech.
In step 201, endpoint detection is carried out using the short-time average amplitude.
Because the object controlled by the voice instructions, the animal, has a strong will of its own and its motion state changes quickly, some optimizations are made in the endpoint detection method and in the choice of the speech recognition model. In real-time endpoint detection, the short-time average amplitude is used as the criterion, and one amplitude threshold and two time thresholds define the start and end times of a speech segment. According to the statistical properties of speech, a speech segment can be divided into three classes: unvoiced, voiced, and silence (including background noise). Short-time average amplitude detection distinguishes voiced speech from silence well and therefore supports good speech recognition. In addition, the short-time average amplitude is simple to compute and insensitive to instantaneous high-level signals.
In step 202, the extracted characteristic parameters are Mel-frequency cepstral coefficients. Mel-frequency cepstral coefficients (MFCC) model the auditory response of the human ear well, and speech recognized with this parameter achieves a good recognition rate.
In step 203, the voice instruction recognition method uses the DTW algorithm for pattern recognition.
Because the voice navigation system of the animal robot of the present invention has high real-time requirements and the instruction set is fixed with a small number of instructions, the embodiment of the present invention adopts the simplest method, the DTW algorithm.
The DTW algorithm includes a training stage. In the training stage, the training sample that satisfies the following condition is selected as the instruction template for the test stage: the variance of the DTW distances between the MFCC feature matrix of this sample and those of voice instructions with the same content is minimal.
Selecting training samples by this criterion gives the voice instructions a good recognition rate.
The parameters adjusted by the stimulation parameter adjustment instructions in step S02 are the stimulation voltage/current and the number of pulses per stimulation. Regulating these two parameters provides good motion control of the animal robot.
Each animal robot carries an individual number. The stimulation instruction switch generates the corresponding stimulation parameters according to the mapping between, on the one hand, the individual number of the animal robot and the number of the control instruction corresponding to the recognition result in step S02 and, on the other hand, the stimulation parameters, where the control instructions comprise motion control instructions and speed control instructions.
Because different animal robot individuals and different motion states need different stimulation parameter values, these factors must be taken into account when the electrical stimulation parameters are generated. Here, artificial rules are used: the mapping between the individual number plus the control instruction number and the stimulation parameters realizes the generation of the electrical stimulation parameters. The electrical stimulation parameters required by each animal robot individual in the different motion states (advance, turn left, turn right, stop) are determined experimentally, and these parameter sets are stored, grouped by robot individual, in a file named after the corresponding individual number; each control instruction number corresponds to the electrical stimulation parameters required for the corresponding motion state. For example, the individual numbers of the animal robots are 01, 02 and 03, and the control instruction numbers are 91, 92, 93 and 94, corresponding to advance, turn left, turn right and stop respectively. For the animal robot with individual number 01, the stimulation parameters corresponding to the advance instruction numbered 91 are a voltage of 0.5 volts and 2 pulses. When the control command issued by the stimulation instruction switch is 0191, the micro stimulator sends 2 electrical stimulation pulses of 0.5 volts to the corresponding brain area of the animal, making it advance. In this way, the stimulation instruction switch determines the corresponding electrical stimulation parameters from the animal's individual number and the control instruction number and sends them to the micro stimulator by wireless communication, and the micro stimulator applies different brain-stimulation pulses to the animal robot according to the received electrical stimulation parameters, realizing voice navigation.
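A minimal Python sketch of this lookup is given below. Only the 01/91 example (0.5 V, 2 pulses) and the numbering scheme come from the text; the table structure and function name are illustrative assumptions, and the remaining entries would be filled with experimentally determined values.

```python
# (animal number, control-instruction number) -> stored stimulation parameters.
STIM_TABLE = {
    ("01", "91"): {"voltage_v": 0.5, "pulses": 2},   # animal 01, "advance" (from the text)
    # ("01", "92"): {...},  # "turn left"  -- determined experimentally per animal
    # ("01", "93"): {...},  # "turn right"
    # ("01", "94"): {...},  # "stop"
}

def build_command(animal_id: str, instruction_id: str) -> tuple[str, dict]:
    """Return the 4-digit command string (e.g. "0191") and its stimulation parameters."""
    return animal_id + instruction_id, STIM_TABLE[(animal_id, instruction_id)]

# Example: build_command("01", "91") -> ("0191", {"voltage_v": 0.5, "pulses": 2})
```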
Description of the drawings
Fig. 1 is a structural diagram of the voice navigation system of the animal robot in one embodiment of the invention;
Fig. 2 is a diagram of the voice instruction recognition method used in the current embodiment of the invention;
Fig. 3 is a flow chart of the voice navigation method of the animal robot in the current embodiment of the invention;
Fig. 4 is a decision flow chart of the fusion of voice navigation and automatic navigation in the current embodiment of the invention;
Fig. 5 is a diagram of the dynamic time warping (DTW) algorithm.
Embodiment
The present invention realizes natural voice interaction between humans and animals: by recognizing voice instructions efficiently in real time, spoken control commands are converted into the corresponding electrical stimulation instructions and applied to the animal's brain, so that the animal moves at the sound of a voice. In this way the animal robot is controlled flexibly by voice and the efficiency of the whole system is improved.
Fig. 1 is a structural diagram of the whole system of the present invention. In the current embodiment, the master control device is a PC with a sound card; the PC contains the voice instruction recognition module and the stimulation instruction switching module, both of which are implemented by computer programs. The microphone is connected to the PC. Ordinary microphones and sound cards are suitable for the purpose of the invention; the current embodiment uses an SM-010 microphone and an IDT High Definition Audio CODEC sound card to capture the external voice signal.
Step S01, receiving the voice instruction. The experimenter observes the state of the animal robot, makes an instruction decision and speaks the voice instruction; the microphone picks up the voice instruction; the sound card in the PC samples, quantizes and encodes the analog voice signal captured by the microphone into a discrete digital signal corresponding to the voice instruction; and the voice instruction recognition module in the PC obtains this digital signal through the system's audio I/O interface.
Step S02, recognizing the voice instruction. The whole voice instruction recognition procedure is shown in Fig. 2 and specifically comprises the following steps:
201, preprocessing the voice instruction. The voice instruction recognition module splits the obtained discrete digital signal into frames, computes the short-time average amplitude, and performs endpoint detection on the processed signal to determine the start and end points of the speech segment.
Endpoint detection uses the short-time average amplitude. The sampling rate is 16 kHz, the coding depth is 16 bits, the frame length is 32 ms and the frame shift is 16 ms. The short-time average amplitude A_m of each frame is calculated as

A_m = (1/N) · Σ_{i=0}^{N-1} |x(i)|

where x(i) is the sequence of signal samples and N is the frame length. When A_m exceeds the amplitude threshold A_thres and the duration exceeds the lower time threshold t_min (e.g. 256 ms), the start of the speech is marked; when the duration exceeds the upper time threshold t_max (e.g. 512 ms), or A_m stays below the amplitude threshold A_thres for longer than a continuous t_min/2, the end of the speech is marked. The speech from this start point to this end point is taken as the valid speech segment.
Reasonable settings of the two time thresholds t_min and t_max are crucial. If t_min is set too small, the detector is sensitive to short bursts of noise and does much unnecessary computation, and it may also be shorter than the silent gap between two words of the same instruction and wrongly split the voice instruction. If t_max is set too small, the resulting speech segment is not long enough to distinguish different voice instructions (instructions whose first one or two words coincide); if it is too large, real-time operation is no longer possible. In practice the values can be determined from the length of the instruction words in the instruction set and the experimenter's speaking rate: for example, when the longest instruction in the set has 1 to 5 words, t_max can be between 256 ms and 1024 ms, and t_min can be between 1/4 and 1/2 of t_max.
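The following Python sketch illustrates endpoint detection with one amplitude threshold and the two time thresholds described above. The frame sizes (16 kHz, 32 ms frames, 16 ms shift) and the example thresholds come from the text; the exact state machine and function names are assumptions.

```python
import numpy as np

SR, FRAME, SHIFT = 16000, 512, 256        # 16 kHz, 32 ms frames, 16 ms shift

def short_time_amplitude(x: np.ndarray) -> np.ndarray:
    """A_m = (1/N) * sum(|x(i)|) over each frame m."""
    n_frames = 1 + (len(x) - FRAME) // SHIFT
    return np.array([np.mean(np.abs(x[m*SHIFT : m*SHIFT+FRAME])) for m in range(n_frames)])

def detect_endpoints(x, a_thres, t_min_ms=256, t_max_ms=512):
    """Return (start_frame, end_frame) of the first detected speech segment, or None."""
    amp = short_time_amplitude(x)
    t_min = int(t_min_ms / 16)            # thresholds in frames (16 ms per frame shift)
    t_max = int(t_max_ms / 16)
    above = amp > a_thres
    start = None
    for m, is_speech in enumerate(above):
        if start is None and is_speech and np.all(above[m:m + t_min]):
            start = m                     # loud for at least t_min: mark the start
        elif start is not None:
            too_long = m - start >= t_max                     # reached upper time limit
            gone_quiet = not np.any(above[m:m + t_min // 2])  # quiet for t_min/2
            if too_long or gone_quiet:
                return start, m           # mark the end of the valid speech segment
    return None
```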
The data of the speech segment obtained by endpoint detection is then amplitude-normalized: the maximum amplitude A_max of the valid speech segment is normalized to 12000, and all other samples are scaled by the same factor C/A_max (with C = 12000) relative to the maximum-amplitude point. The pre-emphasis coefficient (which boosts the high-frequency part of the signal) is 0.95, and a Hamming window is used for windowing.
After the preprocessing steps of endpoint detection, amplitude normalization, pre-emphasis, framing and windowing, the speech signal is obtained and the method proceeds to step 202.
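As a minimal sketch of these remaining preprocessing steps, the fragment below applies the constants given above (peak normalization to 12000, pre-emphasis coefficient 0.95, Hamming window over 512-sample frames); the function layout is an assumption.

```python
import numpy as np

def preprocess_segment(segment: np.ndarray, frame_len: int = 512, shift: int = 256):
    # Amplitude normalization: scale so the largest |sample| becomes 12000.
    segment = segment * (12000.0 / np.max(np.abs(segment)))
    # Pre-emphasis: y[n] = x[n] - 0.95 * x[n-1], boosting high frequencies.
    segment = np.append(segment[0], segment[1:] - 0.95 * segment[:-1])
    # Framing + Hamming windowing.
    window = np.hamming(frame_len)
    n_frames = 1 + (len(segment) - frame_len) // shift
    return np.stack([segment[m*shift : m*shift + frame_len] * window
                     for m in range(n_frames)])
```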
202, extracting the characteristic parameters of the preprocessed speech signal frame by frame.
The characteristic parameters commonly used in speech recognition include Mel-frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPCC) and perceptual linear prediction coefficients (PLP); the characteristic parameters extracted in the embodiment of the present invention are MFCC features.
The feature extraction procedure is: first apply a fast Fourier transform (FFT) to each windowed frame; pass the resulting frequency-domain signal of each frame through a Mel filter bank (a set of triangular bandpass filters); compute the logarithmic energy of the output; and finally apply a discrete cosine transform (DCT) to obtain the Mel-frequency cepstral coefficients (MFCC). The embodiment of the present invention uses a 512-point FFT, a Mel filter bank with 24 triangular bandpass filters, and 13-dimensional MFCC features.
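A minimal sketch with these settings is shown below. The patent does not name a software library; librosa is assumed purely for illustration, and it performs its own framing and windowing internally (with a Hann rather than a Hamming window), so it replaces rather than consumes the framing step above.

```python
import numpy as np
import librosa  # assumed for illustration; the patent does not name a library

def extract_mfcc(signal: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Return a (n_frames, 13) MFCC matrix: 512-point FFT, 24 Mel filters, 16 ms shift."""
    mfcc = librosa.feature.mfcc(
        y=signal.astype(float), sr=sr,
        n_mfcc=13, n_fft=512, hop_length=256, n_mels=24,
    )
    return mfcc.T  # one 13-dimensional feature vector per frame
```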
203, performing pattern recognition on the extracted characteristic parameters to obtain the recognition result.
After feature extraction, model construction, pattern matching and recognition are carried out. There are many models and methods for speech recognition; the more common ones include dynamic time warping (DTW), hidden Markov models (HMM), support vector machines (SVM), artificial neural networks (ANN) and Gaussian mixture models (GMM). Because the voice navigation system of the animal robot of the present invention has high real-time requirements and the instruction set is fixed with a small number of instructions, the present invention adopts the simplest algorithm, dynamic time warping (DTW). When using DTW, a path-constrained variant is adopted, limiting the search range to a certain region such as a band or a parallelogram, thereby reducing the time complexity.
The embodiment of the present invention uses the DTW algorithm shown in Fig. 5. The DTW problem can be reduced to finding a path on a finite grid, as shown in Fig. 5. Let R(n) and T(m) be the template sequence and the test sequence respectively, where n = 1, 2, ..., N with N the template sequence length, and m = 1, 2, ..., M with M the test sequence length. According to Fig. 5, the DTW problem is to find an optimal path m = w(n) in the (m, n) plane that minimizes the overall distance; this minimum distance is called the DTW distance dtw(R(n), T(m)) between the two sequences, given by

dtw(R(n), T(m)) = min D = min Σ_{n=1}^{N} d(R(n), T(w(n)))

where d(R(n), T(w(n))) is the local Euclidean distance between frame n of the template sequence and frame m = w(n) of the test sequence.
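A minimal sketch of this DTW distance is given below: the minimum cumulative local Euclidean distance over all warping paths between a template MFCC matrix R (N frames x 13) and a test MFCC matrix T (M frames x 13). This is the basic unconstrained recursion; the band/parallelogram path constraint mentioned above is omitted for brevity, and the function name is an assumption.

```python
import numpy as np

def dtw_distance(R: np.ndarray, T: np.ndarray) -> float:
    """dtw(R, T) = min over warping paths of the summed frame-to-frame distances."""
    N, M = len(R), len(T)
    D = np.full((N + 1, M + 1), np.inf)   # cumulative-distance grid
    D[0, 0] = 0.0
    for n in range(1, N + 1):
        for m in range(1, M + 1):
            cost = np.linalg.norm(R[n - 1] - T[m - 1])            # local Euclidean distance
            D[n, m] = cost + min(D[n - 1, m], D[n, m - 1], D[n - 1, m - 1])
    return D[N, M]
```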
In the training stage, 16 utterances of each instruction are selected as training samples, and the training sample that satisfies the following condition is selected as the instruction template for the test stage: the variance of the DTW distances between the MFCC feature matrix of this sample and the MFCC feature matrices of the voice instructions with the same content is minimal.
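A minimal sketch of this selection criterion, reusing the dtw_distance() sketch above, is shown below; the code is an assumed reading of the criterion (smallest variance of DTW distances to the other same-content samples), not the patented implementation.

```python
import numpy as np

def select_template(samples: list) -> np.ndarray:
    """Pick, among the 16 training MFCC matrices of one instruction, the one whose
    DTW distances to the other same-content samples have the smallest variance."""
    variances = []
    for i, cand in enumerate(samples):
        d = [dtw_distance(cand, s) for j, s in enumerate(samples) if j != i]
        variances.append(np.var(d))
    return samples[int(np.argmin(variances))]
```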
In the test (recognition) stage, the dynamic time warping distance between the MFCC feature matrix of the current tested voice instruction and the MFCC feature matrix of each voice instruction template is calculated. If the minimum DTW distance d_min is less than a preset distance threshold d_threshold, the current recognition result is the number i corresponding to that instruction template; if it is greater than or equal to the threshold, the input is labelled as noise and the result is 0.
The minimum value d_min is expressed as

d_min = min{dtw(Tx, T1), dtw(Tx, T2), ..., dtw(Tx, Tn)}
The recognition result is obtained as

result = i if d_min = dtw(Tx, Ti) < d_threshold, and result = 0 (noise) otherwise.
The threshold d_threshold is chosen, through repeated tests, as the value that gives the highest instruction recognition rate; in the implementation example of the invention this threshold is set to 1350. In the embodiment of the present invention the recognition rate of most instructions is above 95%.
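A minimal sketch of this recognition decision, again reusing dtw_distance(), is shown below. The decision rule (nearest template if below the threshold, otherwise noise) and the example threshold of 1350 come from the text; the templates dictionary and variable names are assumptions.

```python
def recognize(test_mfcc, templates: dict, d_threshold: float = 1350.0) -> int:
    """templates maps instruction number i -> template MFCC matrix; 0 means noise."""
    distances = {i: dtw_distance(tmpl, test_mfcc) for i, tmpl in templates.items()}
    best_i = min(distances, key=distances.get)   # template with the smallest DTW distance
    return best_i if distances[best_i] < d_threshold else 0
```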
The voice instruction recognition of step S02 determines the type of the voice instruction, and a decision is made on the recognition result before entering step S03; this decision process of voice navigation is shown in Fig. 3.
If the voice instruction is noise, the procedure goes directly to judging whether voice navigation has finished: if it has, it exits, otherwise it continues. There are many ways to judge whether voice navigation has finished, for example marking navigation as finished when a time limit is exceeded, using an "end voice navigation" voice instruction to mark the completion of navigation, or marking navigation as finished when the images captured by the camera show that the animal robot has reached the target location.
The control instructions are divided into motion control instructions and speed control instructions, the latter adjusting parameters while controlling the animal's motion; each control instruction has a corresponding number.
The stimulation instruction switch in the PC generates suitable electrical stimulation parameters according to the control instruction. It stores the experimentally determined electrical stimulation parameters that each animal robot individual requires in the different motion states (advance, turn left, turn right, stop); these parameter sets are stored, grouped by robot individual, in a file named after the corresponding individual number, and each control instruction number corresponds to the electrical stimulation parameters required for the corresponding motion state.
If the recognition result is a motion control instruction, it is handed to the stimulation instruction switching program. This program generates suitable electrical stimulation parameters for the current control instruction; in the current embodiment, following an artificial rule, the stored stimulation parameters are looked up using only two pieces of information, the current animal number and the number of the current motion control instruction, and used as the generated stimulation parameters. The program then first checks whether the serial COM port is open, obtains the stimulation parameters for the current assignment and writes them to the COM port. The stimulation parameters are transferred through a USB/serial converter to a wireless transmitter, which sends them out as radio waves; the wireless receiver in the backpack carried by the animal receives the signal and passes the stimulation parameters to the micro stimulator in the backpack; the pulse generator in the micro stimulator produces electrical pulses with the corresponding parameters, and these pulses, with the aid of a counter, are routed through a switching selector to the corresponding microelectrode pair, electrically stimulating the corresponding brain area of the animal robot (an SD rat in the present embodiment) so that it produces the expected motor behavior. Finally, as before, it is judged whether voice navigation has finished, and voice navigation is continued or ended accordingly. If the recognition result is a speed control instruction, it is likewise handed to the stimulation instruction switching program. The program first adjusts the current stimulation parameters according to the instruction content following artificial rules: using the current animal number and the number of the current speed control instruction, the current stimulation parameters are adjusted and new stimulation parameters are generated. In this implementation, "advance" has five parameter levels (stimulation voltage and pulse count per stimulation of 4 V, 10; 5 V, 10; 5 V, 12; 5 V, 14; and 6 V, 12 respectively); "speed up" moves one level up from the current level and "slow down" moves one level down. The stimulation parameters are then written to the COM port and the process continues exactly as for a motion control instruction.
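A minimal sketch of this level switching is given below. The five "advance" levels come from the text; the function names, the clamping behavior at the lowest/highest level, and the serial-write comment (pyserial and encode_stimulation are assumed purely for illustration) are not from the patent.

```python
ADVANCE_LEVELS = [(4.0, 10), (5.0, 10), (5.0, 12), (5.0, 14), (6.0, 12)]  # (volts, pulses)

def shift_level(current_level: int, instruction: str) -> int:
    """'speed_up' moves one level higher, 'slow_down' one level lower, clamped to range."""
    step = 1 if instruction == "speed_up" else -1
    return max(0, min(len(ADVANCE_LEVELS) - 1, current_level + step))

# The resulting (voltage, pulses) pair would then be written to the serial COM port
# exactly as for a motion-control instruction, e.g. (illustrative only, pyserial assumed):
#   import serial
#   with serial.Serial("COM3", 9600) as port:
#       port.write(encode_stimulation(ADVANCE_LEVELS[new_level]))  # encode_stimulation: hypothetical
```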
If the voice instruction is a stimulation parameter adjustment instruction: these instructions mainly comprise "raise/lower the forward/left/right/stop voltage" (by default raising or lowering the voltage by 0.5 volts, or, for constant-current stimulation, raising or lowering the current by 5 mA) and "increase/decrease the forward/left/right/stop pulse count" (by default increasing or decreasing the pulse count by 1), and are mainly used to modify the stimulation parameters of the four control instructions "advance", "turn left", "turn right" and "stop". Instructions of this class do not stimulate the animal directly; they modify the stimulation parameters by the default step sizes, and the default adjustment step of each parameter can be set in the program according to the sensitivity of the particular animal. After the stimulation parameters of the corresponding instruction have been modified, they are saved to a text file; when the program is restarted, the initial parameters are the modified ones. Finally, it is again judged whether to end voice navigation.
The complexity of the real environment, changes in lighting and other factors can cause problems in the machine-vision-based automatic navigation process. The voice control of the invention can correct the automatic navigation process, improving its efficiency so that a navigation task of the same complexity is completed in less time and with less stimulation intensity. In addition, in current automatic navigation the stimulation parameters generally remain unchanged throughout the whole process; adding voice navigation to automatic navigation makes it possible to change the electrical stimulation parameters in real time and thus to control the animal's movement speed to a certain extent. As shown in Fig. 4, adding the voice navigation mechanism to automatic navigation fuses manual navigation with automatic navigation, combining the advantages of both, improving efficiency and strengthening the practicality of the system. When automatic navigation goes wrong or parameters need to be adjusted, the voice navigation system of the invention can be used for manual navigation, improving the flexibility of animal robot navigation.
In the preferred embodiment of the present invention, SD rats are selected as the carrier animal, because research on their brains started earlier and is more thorough, the brain areas are easier to locate precisely, and the surgical success rate is higher. A pair of stimulating electrodes is implanted in each of the rat's medial forebrain bundle (MFB), left primary sensory cortex (S1), right primary sensory cortex (S1) and periaqueductal gray nucleus, used respectively to control the animal robot to advance, turn left, turn right and stop. Electrical stimulation of the medial forebrain bundle (MFB) activates the nucleus accumbens and gives the rat a pleasurable "virtual reward", which can be used to control the animal to advance. The left and right primary sensory cortices receive sensations from the outside world, and electrically stimulating these regions makes the animal feel a "virtual touch", as if it had bumped into an object on its left or right side, which can be used to control the animal to turn left or right. Stimulating the periaqueductal gray induces a defensive response in the animal, manifested as freezing of movement, accelerated heart rate, raised blood pressure and increased muscle tone, producing a sensation of "virtual fear" that can be used to control the animal to stop moving.
After about one week of postoperative recovery, the rat is screened and trained to advance, turn left, turn right and stop, and the optimal stimulation parameters are determined. Voice instructions can also be used for control during training and during the determination of the optimal parameters. After training and after the optimal stimulation for each instruction has been determined, navigation experiments can be carried out.
The present invention is applicable not only to the control of animal robot systems that use rats as carriers, but also to the control of other types of animal robot systems.

Claims (10)

1. A voice navigation system for an animal robot, comprising a micro stimulator that applies electrical pulse stimulation to the animal robot so as to navigate it and a master control device for outputting control instructions to the micro stimulator, characterized in that a microphone for receiving voice instructions is also provided;
the master control device comprises:
a voice instruction recognition module for recognizing voice instructions and making decisions on the recognition results; and
a stimulation instruction switching module for processing and outputting stimulation parameters according to the decision results.
2. The voice navigation system of the animal robot as claimed in claim 1, characterized in that the master control device sends the stimulation parameters to the micro stimulator by wireless communication.
3. A voice navigation method for an animal robot, characterized by comprising the steps of:
S01, receiving a voice instruction;
S02, recognizing the voice instruction and making a decision on the recognition result, and processing and outputting stimulation parameters according to the decision result, specifically:
if the recognition result is noise, doing nothing;
if the recognition result is a stimulation parameter adjustment instruction, modifying and saving the current stimulation parameters according to the content of the instruction;
if the recognition result is a motion control instruction, generating the corresponding stimulation parameters according to the number of the motion control instruction and proceeding to step S03;
if the recognition result is a speed control instruction, adjusting and generating the corresponding new stimulation parameters according to the number of the speed control instruction and proceeding to step S03;
S03, sending stimulation electrical pulses to the corresponding brain area of the animal robot according to the received stimulation parameters, thereby navigating it.
4. The voice navigation method of the animal robot as claimed in claim 3, characterized in that in step S02 the voice instruction recognition comprises:
201, preprocessing the voice instruction, where the preprocessing includes endpoint detection;
202, extracting characteristic parameters from the preprocessing result;
203, performing pattern recognition on the extracted characteristic parameters to obtain the recognition result.
5. The voice navigation method of the animal robot as claimed in claim 4, characterized in that in step 201 the endpoint detection is carried out using the short-time average amplitude.
6. The voice navigation method of the animal robot as claimed in claim 4, characterized in that in step 202 the extracted characteristic parameters are Mel-frequency cepstral coefficients.
7. The voice navigation method of the animal robot as claimed in claim 4, characterized in that in step 203 the voice instruction recognition method uses the DTW algorithm for pattern recognition.
8. The voice navigation method of the animal robot as claimed in claim 7, characterized in that the DTW algorithm includes a training stage, and in the training stage the training sample that satisfies the following condition is selected as the voice instruction template for the test stage: the variance of the DTW distances between the MFCC feature matrix of this sample and the MFCC feature matrices of voice instructions with the same content is minimal.
9. The voice navigation method of the animal robot as claimed in claim 3, characterized in that the parameters adjusted by the stimulation parameter adjustment instructions in step S02 are the stimulation voltage/current and the number of pulses per stimulation.
10. The voice navigation method of the animal robot as claimed in claim 9, characterized in that each animal robot carries an individual number, and the stimulation instruction switch generates the corresponding stimulation parameters according to the mapping between, on the one hand, the individual number of the animal robot and the number of the control instruction corresponding to the recognition result in step S02 and, on the other hand, the stimulation parameters, wherein the control instructions comprise motion control instructions and speed control instructions.
CN201310517773.2A 2013-10-28 2013-10-28 Voice navigation system and method of animal robot system Active CN103593048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310517773.2A CN103593048B (en) 2013-10-28 2013-10-28 Voice navigation system and method of animal robot system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310517773.2A CN103593048B (en) 2013-10-28 2013-10-28 Voice navigation system and method of animal robot system

Publications (2)

Publication Number Publication Date
CN103593048A (en) 2014-02-19
CN103593048B CN103593048B (en) 2017-01-11

Family

ID=50083232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310517773.2A Active CN103593048B (en) 2013-10-28 2013-10-28 Voice navigation system and method of animal robot system

Country Status (1)

Country Link
CN (1) CN103593048B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101023737A (en) * 2007-03-23 2007-08-29 浙江大学 BCI animal experiment system
CN101861836A (en) * 2010-04-30 2010-10-20 重庆大学 Method for controlling movement of woundless rat robot
CN101881940A (en) * 2010-05-25 2010-11-10 浙江大学 Method for controlling stop of animal robot
CN102982811A (en) * 2012-11-24 2013-03-20 安徽科大讯飞信息科技股份有限公司 Voice endpoint detection method based on real-time decoding

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106325112A (en) * 2015-06-25 2017-01-11 联想(北京)有限公司 Information processing method and electronic equipment
US10482898B2 (en) 2015-06-30 2019-11-19 Yutou Technology (Hangzhou) Co., Ltd. System for robot to eliminate own sound source
WO2017000774A1 (en) * 2015-06-30 2017-01-05 芋头科技(杭州)有限公司 System for robot to eliminate own sound source
CN105943077A (en) * 2015-09-29 2016-09-21 刘伟锋 Stethoscope
CN106653015A (en) * 2016-10-28 2017-05-10 海南双猴科技有限公司 Speech recognition method by and apparatus for robot
CN108927815B (en) * 2017-06-15 2020-12-04 北京猎户星空科技有限公司 Robot brake control method and device and robot
CN108927815A (en) * 2017-06-15 2018-12-04 北京猎户星空科技有限公司 Robot brake control method, device and robot
CN107351080A (en) * 2017-06-16 2017-11-17 浙江大学 A kind of hybrid intelligent research system and control method based on array of camera units
CN107833582A (en) * 2017-11-20 2018-03-23 南京财经大学 The Method of Speech Endpoint Detection based on arc length
CN107833582B (en) * 2017-11-20 2021-02-09 南京财经大学 Arc length-based voice signal endpoint detection method
CN108254067A (en) * 2018-01-10 2018-07-06 上海展扬通信技术有限公司 Test the system and method for sound pressure level
CN108320750A (en) * 2018-01-23 2018-07-24 东南大学—无锡集成电路技术研究所 A kind of implementation method based on modified dynamic time warping speech recognition algorithm
CN109946055A (en) * 2019-03-22 2019-06-28 武汉源海博创科技有限公司 A kind of sliding rail of automobile seat abnormal sound detection method and system
CN109946055B (en) * 2019-03-22 2021-01-12 宁波慧声智创科技有限公司 Method and system for detecting abnormal sound of automobile seat slide rail
CN111408044A (en) * 2020-03-31 2020-07-14 北京百科康讯科技有限公司 Controller, voice recognition method thereof and spinal cord epidural stimulation system
CN113799156A (en) * 2021-10-21 2021-12-17 哈尔滨工业大学(深圳) Motion ganglion electrical stimulation circuit and electrical stimulation method for biological robot
CN113799156B (en) * 2021-10-21 2024-03-29 哈尔滨工业大学(深圳) Motor nerve electricity-saving stimulation circuit and electric stimulation method for biological robot

Also Published As

Publication number Publication date
CN103593048B (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN103593048A (en) Voice navigation system and method of animal robot system
US20230017367A1 (en) User interface system for movement skill analysis and skill augmentation
CN110998696B (en) System and method for data-driven mobile skill training
EP3801743A1 (en) Methods and apparatus for providing sub-muscular control
Kelso et al. Motor control: Which themes do we orchestrate?
CN107463780A (en) A kind of virtual self-closing disease treatment system of 3D and treatment method
Wand et al. Domain-Adversarial Training for Session Independent EMG-based Speech Recognition.
CN101467515B (en) Method for controlling and guiding mammalian robot
CN104825256A (en) Artificial limb system with perception feedback function
Philippsen et al. Goal babbling of acoustic-articulatory models with adaptive exploration noise
CN102880906B (en) Chinese vowel pronunciation method based on DIVA nerve network model
CN202161439U (en) Control system capable of controlling movement of upper artificial limbs through eye movement signals
Hoffer Central control and reflex regulation of mechanical impedance: The basis for a unified motor-control scheme
Kröger et al. Phonemic, sensory, and motor representations in an action-based neurocomputational model of speech production
Wand Advancing electromyographic continuous speech recognition: Signal preprocessing and modeling
Hogan Moving with control: Using control theory to understand motor behavior
Gottlieb et al. Control theoretic concepts and motor control
CN204636626U (en) A kind of artificial limb system with perceptible feedback function
Chen et al. Research on EEG classification with neural networks based on the Levenberg-Marquardt algorithm
CN106156842A (en) A kind of integration provides the method for optimally controlling of neuron models
CN111408038A (en) Portable hand function rehabilitation system based on electrode array
Pond The importance of connective tissue within and between muscles
Abbs A speech-motor-system perspective on nervous-system-control variables
Kearney et al. Systems analysis in the study of the motor-control system: Control theory alone is insufficient
Guenther DIVA: A Self-organizing Neural Network Model for Motor Equivalent Speech Production

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant