CN1506936A - Sound analyser and sound conversion control method - Google Patents

Sound analyser and sound conversion control method

Info

Publication number
CN1506936A
CN1506936A CNA2003101172844A CN200310117284A
Authority
CN
China
Prior art keywords
sound
animal
input
language
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2003101172844A
Other languages
Chinese (zh)
Other versions
CN1261922C (en)
Inventor
黑木保雄
殿村敬介
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casio Computer Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Computer Co Ltd filed Critical Casio Computer Co Ltd
Publication of CN1506936A publication Critical patent/CN1506936A/en
Application granted granted Critical
Publication of CN1261922C publication Critical patent/CN1261922C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Abstract

To support two-way transmission of feelings between a human and an animal, a wristwatch-type speech analyzing device 100 picks up the voice of a dog 2 (animal) with a microphone unit 102 and analyzes it to determine the emotion contained in the voice. The analysis result is displayed on a monitor unit 106 as human-language text. Conversely, the voice of a user 4 (human) is input through the microphone unit 102 and analyzed to determine the emotion it contains, and the analysis result is output from a speaker unit 104 as animal-language sound.

Description

Sound analysis device and sound conversion control method
Technical field
The present invention relates to a sound analysis device and a sound conversion control method for analyzing animal sounds.
Background art
People who keep animals such as dogs and cats as pets generally wish for the pet to become a member of the family, able to exchange emotions and intentions with humans just as people do.
In recent years, advances in voice analysis technology, especially voiceprint analysis, have made it possible to judge the emotions and intentions contained in an animal's cry (hereinafter simply called "emotion").
For example, the sound emitted by an animal such as a pet or livestock is analyzed to obtain a pattern (for example, a sonogram) of its extracted features. By comparing this pattern with standard voice patterns prepared in advance through ethological analysis, the animal's emotion can be judged.
Based on this voice analysis technology, a technique has been disclosed in which, for example, an animal's cry and an image of its behavior are input, compared against sound and behavior data analyzed ethologically in advance to judge the animal's emotion, and the result is displayed as human-readable text and images.
With this technique, an owner can understand an animal's emotion to some extent and respond when the animal makes a demand. However, existing techniques only convey emotion one way, from animal to human; they cannot convey emotion from human to animal. True two-way communication between human and animal has therefore hardly been realized.
Summary of the invention
The present invention was made in view of the above problems, and its purpose is to support two-way emotional communication between humans and animals.
To achieve this purpose, the sound analysis device of the present invention operates as follows: when an animal's sound is input, it analyzes the input sound and outputs the analysis result in human language; when a user's sound is subsequently input, it analyzes that sound and outputs the analysis result in animal language.
With the present invention, because the analysis result of an input animal sound is output in human language, the user can understand the meaning of the animal's sound. Likewise, because the analysis result of the user's sound is output in animal language, the animal can understand the meaning of the user's sound. Two-way emotional communication between human and animal can thus be supported.
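To make this two-way flow concrete, here is a minimal Python sketch of the routing logic. All function names and the trivial classification rule are hypothetical illustrations, not taken from the patent:

```python
# Minimal sketch of the two-way conversion, assuming a classifier and an
# emotion analyzer exist; both are trivial stand-ins here.

def is_animal_sound(sound: bytes) -> bool:
    # Stand-in: the device actually matches input against animal
    # standard voice patterns first (see Fig. 13).
    return sound.startswith(b"BARK")

def analyze_emotion(sound: bytes) -> str:
    # Stand-in for pattern matching against stored standard patterns.
    return "happy"

def convert(sound: bytes) -> str:
    emotion = analyze_emotion(sound)
    if is_animal_sound(sound):
        # Animal input -> analysis result shown as human-language text.
        return f"[display] text for animal emotion '{emotion}'"
    # Human input -> analysis result output as an animal-language cry.
    return f"[speaker] cry for human emotion '{emotion}'"

print(convert(b"BARK..."))    # dog's cry -> human-readable text
print(convert(b"good boy"))   # user's speech -> synthesized cry
```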
In another aspect of the invention, multiple sets of an animal's emotion, human language, and an animal image are stored in advance. When an animal's sound is input, the emotion in the sound is analyzed, the human language and animal image corresponding to that emotion are read out, and both are displayed.
With this aspect, when an animal sound is input, the emotion of the input sound is analyzed, the human language and animal image corresponding to the analyzed emotion are read out, and both are displayed. The user can therefore confirm, on a display device, the human language corresponding to the emotion of the input animal sound (for example, text information) together with an animal image (for example, an illustration of the animal, or a photograph of the very animal whose sound was input).
Description of drawings
Figure 1A and Figure 1B show an example of the appearance of a wristwatch-type sound analysis device according to the first embodiment;
Fig. 2 is a schematic diagram showing an example of how the wristwatch-type sound analysis device is used;
Fig. 3 is a functional block diagram showing an example of the functional structure;
Fig. 4 shows an example of the contents stored in the voice-analysis ROM;
Fig. 5 shows an example of the data structure of the animal standard voice patterns;
Fig. 6 shows an example of the data structure of the human standard voice patterns;
Fig. 7 shows an example of the contents stored in the RAM;
Fig. 8 shows an example of the contents stored in the ROM;
Fig. 9 shows an example of the data structure of the animal-language-to-human-language conversion TBL;
Fig. 10 shows an example of the data structure of the human-language-to-animal-language conversion TBL;
Fig. 11 shows an example of the data structure of the vibration pattern TBL;
Fig. 12 is a flowchart illustrating the main processing flow;
Fig. 13 is a flowchart illustrating the voice analysis processing flow;
Fig. 14 is a flowchart illustrating the human-language output processing flow;
Fig. 15 is a flowchart illustrating the animal-language output processing flow;
Fig. 16 is a flowchart illustrating the key input processing flow;
Fig. 17 is a flowchart illustrating the mode switching processing flow;
Fig. 18 shows an example of the screen during human-language output processing;
Fig. 19 shows an example of the screen during animal-language output processing;
Fig. 20 shows an example of the screen during key input processing;
Fig. 21 shows an example of the screen during mode switching processing;
Fig. 22 shows an example of the screen during history display processing;
Fig. 23 shows an example of the clock display screen;
Figure 24A and Figure 24B show an example of the appearance of a leash-type sound analysis device according to the second embodiment.
Embodiment
[first embodiment]
A first embodiment of a sound analysis device according to the present invention is described below with reference to Figures 1A-21. In this embodiment, a dog is used as the representative animal, but the invention is not limited to dogs; other animals such as cats, dolphins, and parrots are also possible.
[structure explanation]
Figure 1A shows an example of the appearance of a wristwatch-type sound analysis device according to the present invention. As shown in Figure 1A, the wristwatch-type sound analysis device 100 has the same overall appearance as an ordinary wristwatch. It comprises: a microphone 102 for inputting animal and human sounds; a speaker 104 for outputting sound; a display 106 for displaying output text and images; a key operation unit 108 for inputting various operations; a wristband 110 for wearing the device on the body when carrying it; a vibrator 112; a data communication unit 114 for wireless communication with external devices; a control unit 120 that performs overall control of the wristwatch-type sound analysis device 100; and a power supply (not shown).
The microphone 102 is a sound-collecting device, realized by, for example, a microphone element. Although shown here as a single unit, a plurality may be provided; the microphone may also be detachable from the body, connected by a wire, and attached with a clip or the like.
The speaker 104 is a sound output device, realized by, for example, a loudspeaker. In this embodiment, because sound in the high-frequency range outside human hearing may be output, the speaker 104 is configured to be capable of outputting such high-frequency sound.
The display 106 is a display output device realized by, for example, a display element such as an LCD (Liquid Crystal Display) or ELD (Electronic Luminescent Display) together with a backlight and drive circuits. Under the control of the control unit 120, the display 106 can show text, figures, images, and so on. Although a single display 106 is shown, a plurality may be provided.
The key operation unit 108 is an input device realized by, for example, push-button switches, levers, or dials. As shown in Figure 1B, this embodiment has an up key 108a, a select key 108b, a down key 108c, and a cancel key 108d. By combining which keys are pressed, for how long, and in what order, the user can, for example, select from menus, confirm or cancel, and call predetermined functions. The number of keys is not limited to the above and may be set as appropriate.
The wristband 110 is a fitting by which the user wears the device on the body or attaches it to belongings when carrying it; besides a watch-style band, it may be, for example, a clip, button, chain, hook-and-loop fastener, or magnet.
The vibrator 112 is a small vibration-generating device. In this embodiment, under the control of the control unit 120, it vibrates in a pattern corresponding to the emotion contained in the sound of the dog 2. Because the user 4 senses the vibration pattern through the body, the emotion and intention of the dog 2 can be understood without looking at the display 106, which also makes the device usable by visually impaired and deaf users.
The data communication unit 114 exchanges data with external devices such as computers by wireless communication, and is realized by, for example, a communication module conforming to a specification such as Bluetooth (registered trademark) or IrDA, or a socket terminal for wired communication.
The control unit 120 comprises a CPU (Central Processing Unit), various IC memories, a crystal oscillator, and so on. The CPU reads programs stored in the IC memory and executes them, thereby performing overall control of the wristwatch-type sound analysis device 100. Using, for example, the crystal oscillator, the wristwatch-type sound analysis device 100 can also function as a wristwatch.
Fig. 2 is a schematic diagram showing an example of how the device is used in this embodiment. As shown, the user 4 wears the wristwatch-type sound analysis device 100 on, for example, the wrist using the wristband 110 and carries it around. By carrying it like a wristwatch, the user avoids the inconvenience of carrying a separate device or taking it out of a bag each time it is used.
The wristwatch-type sound analysis device 100 captures (detects) the sounds of the user 4 and the user's pet dog 2, and supports two-way communication between them. That is, when the sound of the dog 2 is captured through the microphone 102, the sound is analyzed to judge the emotion of the dog 2, and text and figures understandable to the user 4 (human language) are shown on the display 106. Conversely, when the sound of the user 4 is captured, it is analyzed to judge the emotion of the user 4, and sound understandable to the dog 2 (animal language) is output from the speaker 104. Here, "animal sound" means the cry of an animal; "human language" means human speech, text (writing) by which a human can understand the meaning, images, and the like; and "animal language" means animal sounds by which members of the same species or group can communicate.
[explanation of functional-block diagram]
Fig. 3 is a functional block diagram showing an example of the functional structure in this embodiment.
As shown, the wristwatch-type sound analysis device 100 comprises a sound input unit 10, a voice analysis unit 12, a key input unit 14, a voice-analysis ROM (Read Only Memory) 16, a CPU 20, a RAM (Random Access Memory) 30, a ROM 40, a sound output unit 50, a display unit 52, a vibration generating unit 54, a communication unit 60, and a system bus 90.
The sound input unit 10 inputs the sounds of the dog 2 and the user 4 and outputs the sound signal to the voice analysis unit 12. It corresponds to the microphone 102 in Figure 1A.
The voice analysis unit 12 analyzes the sound signal input from the sound input unit 10. More specifically, it performs, for example, removal of noise components contained in the sound signal, A/D conversion of the sound signal into sound data of a prescribed format, patterning processing for extracting features of the sound data, and comparison with pre-registered standard voice patterns. These processes are realized by integrated circuits such as an A/D converter, filter circuits, and a DSP (Digital Signal Processor); part or all of the functions may instead be realized in software by reading and executing programs and data stored in the voice-analysis ROM 16. The voice analysis unit 12 is mounted in the control unit 120 of Figure 1A.
The voice-analysis ROM 16 stores the programs and data for the various processes of the voice analysis unit 12 and is referenced by the voice analysis unit 12. In Figure 1A, the voice-analysis ROM 16 is mounted in the control unit 120.
Fig. 4 shows an example of the contents stored in the voice-analysis ROM 16 in this embodiment. As shown, it stores, for example, a sound analysis program 162 for realizing the various processes of the voice analysis unit 12 through arithmetic processing, and animal standard voice patterns 164 and human standard voice patterns 166 serving as reference data for comparison with sounds input from the sound input unit 10.
Fig. 5 shows an example of the data structure of the animal standard voice patterns 164 in this embodiment. As shown, each animal standard voice pattern 164 stores, in correspondence: an animal type code 164a prepared in advance for each applicable animal type; an emotion recognition code 164b, which is information classifying the animal's emotion; and a standard voice pattern 164c of the sound (cry) matching the animal language used to convey that emotion. The standard voice pattern 164c is, for example, sonogram data. "Animal language" here means sound patterns by which members of the same species or group can communicate.
The animal standard voice patterns 164 are obtained by statistical methods combined with ethological analysis. Based on the animal type code 164a, the animal standard voice patterns 164 matching the target animal are retrieved, and the patterned sound data input from the sound input unit 10 is match-judged against the standard voice patterns 164c, whereby the animal emotion contained in the sound can be judged.
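As a rough illustration, the following sketch models one record of the table in Fig. 5 and the matching judgment. The feature vectors and the negative-distance similarity are invented stand-ins for the sonogram comparison, which the patent does not specify in detail:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AnimalStandardPattern:
    animal_type_code: str   # 164a, e.g. "dog"
    emotion_code: str       # 164b, e.g. "lonely"
    features: List[float]   # 164c, stand-in for sonogram data

PATTERNS = [
    AnimalStandardPattern("dog", "happy",  [0.9, 0.1, 0.3]),
    AnimalStandardPattern("dog", "lonely", [0.2, 0.8, 0.5]),
]

def similarity(a: List[float], b: List[float]) -> float:
    # Naive negative squared distance; the real device compares sonograms.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def judge_animal_emotion(animal_type: str, features: List[float],
                         threshold: float = -0.5) -> Optional[str]:
    # Retrieve patterns for the target animal type (164a), then pick the
    # best match; None means "no matching portion" (step S206; NO).
    candidates = [p for p in PATTERNS if p.animal_type_code == animal_type]
    if not candidates:
        return None
    best = max(candidates, key=lambda p: similarity(features, p.features))
    return best.emotion_code if similarity(features, best.features) >= threshold else None

print(judge_animal_emotion("dog", [0.85, 0.15, 0.3]))  # -> happy
```

Matching against the human standard voice patterns 166 (described next) would work analogously, keyed by human attribute code instead of animal type code.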
The human standard voice patterns 166 are reference information for judging the emotion contained in the sound of the user 4, prepared in advance for each applicable human attribute. "Human attribute" here means a classification whose parameters are, for example, language, sex, and age.
As shown for example in Fig. 6, each human standard voice pattern 166 comprises a human attribute code 166a indicating the applicable attribute, an emotion recognition code 166b classifying the human emotion, and a corresponding standard voice pattern 166c of human speech.
The standard voice pattern 166c is a characteristic voice pattern obtained and analyzed statistically, such as the voice pattern of a short sentence uttered while expressing the emotion, stored for example as sonogram data. Therefore, by retrieving the human standard voice patterns 166 matching the human attribute code 166a of the user 4, patterning the sound data input from the sound input unit 10, and match-judging it against the standard voice patterns 166c, the emotion of the user 4 contained in the sound can be judged. The data included in the human standard voice patterns 166 are not limited to the above; data needed for judgment, such as decision values for speech rate and sound intensity, may be included as appropriate and used in the matching judgment.
The key input unit 14 is realized by, for example, push-button switches, a joystick, a dial, a touch membrane, or a track pad, and outputs operation signals to the CPU 20. It corresponds to the key operation unit 108 in Figure 1A.
The CPU 20 is mounted in the control unit 120 of Figure 1A and, through arithmetic processing, controls the functional blocks and executes the various processes in an integrated manner.
The RAM 30 is an IC memory in which the CPU 20 and the voice analysis unit 12 temporarily store programs and data; it is mounted in the control unit 120 of Figure 1A.
Fig. 7 shows an example of the contents stored in the RAM 30 in this embodiment. As shown, it stores, for example: an animal name 302 holding the name of the dog 2; an animal type code 304; a human attribute code 306; clock data 308; sound data 310; sound input time data 312; a voice recognition flag 314; an emotion recognition code 316; a high-frequency mode flag 318; a body-sensing mode flag 320; and history data 322.
The animal name 302 is information indicating the name of the dog 2, and the animal type code 304 is information indicating the type of the dog 2. Both must be registered by the user before use. The animal name 302 is displayed on the display 106 in the human-language output processing described later, and serves to enhance the sense of intimacy between the dog 2 and the user 4.
The human attribute code 306 is information indicating attributes of the user 4 (for example, language, sex, age), registered by the user 4 before use.
The clock data 308 is information indicating date and time. By referring to the clock data 308, the wristwatch-type sound analysis device 100 can also be used as a clock or timer.
The sound data 310 is digital data converted by the voice analysis unit 12 from the sound input from the sound input unit 10. In this embodiment it is stored as waveform data, but other data forms such as a sonogram are also possible. The time at which the source sound of the sound data 310 was input is stored in the sound input time data 312.
The voice recognition flag 314 and the emotion recognition code 316 store the results of analysis of the sound data 310 by the voice analysis unit 12. The voice recognition flag 314 indicates whether the sound data is an animal sound or a human sound. The emotion recognition code 316 stores the emotion recognition code 164b or 166b judged by matching against the standard voice pattern 164c or 166c.
The high-frequency mode flag 318 sets whether, when the emotion of the user 4 has been judged in the animal-language output processing described later and sound is output in animal language from the speaker 104, high-frequency sound that humans cannot hear but the dog 2 can is output. For a dog, for example, such high-frequency sound corresponds to the range emitted by a so-called "dog whistle".
The body-sensing mode flag 320 sets whether, when the emotion contained in the sound of the dog 2 has been judged in the human-language output processing described later and text and figures understandable to the user 4 are shown on the display 106, vibration is also generated through the vibrator 112.
The history data 322 is a history of sound input and output, storing in correspondence a sound input time 322a, a voice recognition flag 322b, and an emotion recognition code 322c. By referring to the history data 322, one can see when and who (the dog 2 or the user 4) expressed what emotion.
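A compact sketch of how these RAM fields might be grouped in code; all field names and values are invented to mirror Fig. 7, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HistoryEntry:        # one row of the history data 322
    input_time: str        # 322a
    is_animal: bool        # 322b, voice recognition flag
    emotion_code: str      # 322c

@dataclass
class DeviceState:                     # working data held in RAM 30
    animal_name: str = "Taro"          # 302, registered before use
    animal_type_code: str = "dog"      # 304
    human_attribute_code: str = ""     # 306 (language/sex/age class)
    voice_recognition_flag: str = "0"  # 314: "1" animal, "0" human
    emotion_code: str = "0"            # 316: "0" means undetermined
    high_freq_mode: bool = False       # 318
    body_sense_mode: bool = False      # 320
    history: List[HistoryEntry] = field(default_factory=list)

state = DeviceState()
state.history.append(HistoryEntry("2003-12-01 09:30", True, "lonely"))
print(len(state.history), "history entry stored")
```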
The ROM 40 stores the programs and data by which the CPU 20 realizes the various functions through arithmetic processing.
Fig. 8 shows an example of the contents stored in the ROM 40 in this embodiment. As shown, the programs include: a system program 400; a human-language output program 402 that executes human-language output processing, which outputs text and figures understandable to the user 4 (human language) according to the analysis result of the sound of the dog 2 (animal); an animal-language output program 404 that executes animal-language output processing, which outputs sound understandable to the dog 2 according to the analysis result of the sound of the user 4; a mode switching program 406 for executing the various mode switching processes; and a history output program 408 for executing history display processing based on the history data 322.
The data stored include: voiceprint data 410 used to verify the identity of the user 4; clock display data 412 used for the clock display; screen frame data 414 storing the information necessary for the various screen displays on the display 106; an animal-language-to-human-language conversion TBL (table) 416; a human-language-to-animal-language conversion TBL (table) 418; and a vibration pattern TBL (table) 420.
The voiceprint data 410 is the voiceprint of a person the dog 2 is familiar with in daily life, for example the owner's voiceprint, captured and stored in advance, for example at the manufacturer of the wristwatch-type sound analysis device. The voiceprint data 410 need not be stored in the ROM 40; it may of course be registered by the user 4 in the RAM 30.
The animal-language-to-human-language conversion TBL 416 stores the emotions of the dog 2 in correspondence with human language, and the human-language-to-animal-language conversion TBL 418 stores the emotions of the user 4 in correspondence with animal language; both are dictionary-like data.
Fig. 9 shows an example of the data structure of the animal-language-to-human-language conversion TBL 416 in this embodiment. As shown, the TBL 416 stores, in correspondence: the emotion recognition code 416a judged by the voice analysis unit 12 from the sound of the dog 2; corresponding human-readable text data 416b; and image data 416c for displaying an animal image. The image data 416c may be still-image information or animation information for displaying an animation.
Fig. 10 shows an example of the data structure of the human-language-to-animal-language conversion TBL 418 in this embodiment. As shown, the TBL 418 stores, in correspondence: the emotion recognition code 418a judged by the voice analysis unit 12 from the sound of the user 4; corresponding human-readable text data 418b; image data 418c for displaying a human image; synthesized sound data 418d of a synthetic animal (here, dog) cry; high-frequency sound data 418e outside the range humans can hear; and registered sound data 418f, the pre-registered voice of the user 4. The image data 418c may be still-image information or animation information for displaying an animation.
The vibration pattern TBL 420, as shown for example in Fig. 11, stores emotion recognition codes 420a in correspondence with vibration patterns 420b. By referring to the vibration pattern TBL 420, the vibrator 112 can be vibrated in the vibration pattern 420b corresponding to an emotion recognition code 420a.
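The three tables of Figs. 9-11 could be sketched as plain dictionaries keyed by emotion recognition code; every entry below is an invented example, not content from the patent:

```python
ANIMAL_TO_HUMAN_TBL = {  # TBL 416: dog emotion -> human-readable output
    "lonely": {"text": "I feel lonely...", "image": "dog_lonely.png"},
    "happy":  {"text": "I'm so happy!",    "image": "dog_happy.png"},
}

HUMAN_TO_ANIMAL_TBL = {  # TBL 418: user emotion -> animal-language output
    "praise": {
        "text": "Good boy!",                      # 418b
        "image": "owner_smile.png",               # 418c
        "synth_cry": "synth_praise.wav",          # 418d
        "high_freq": "hf_praise.wav",             # 418e
        "registered_voice": "owner_praise.wav",   # 418f
    },
}

VIBRATION_TBL = {  # TBL 420: emotion code -> (on_ms, off_ms) pulse pattern
    "lonely": [(200, 100), (200, 400)],
    "happy":  [(80, 40)] * 4,
}

print(VIBRATION_TBL["happy"])  # pattern the vibrator 112 would play
```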
The sound output unit 50 is realized by, for example, a speaker, and outputs sound. It corresponds to the speaker 104 of Figure 1A.
The display unit 52 is realized by, for example, a display element such as an LCD, ELD, or PDP, and displays output images. It corresponds to the display 106 of Figure 1A.
The vibration generating unit 54 is realized by a vibrating device such as an oscillator and generates vibration. It corresponds to the vibrator 112 of Figure 1A.
The communication unit 60 is a transmitting/receiving device for wireless communication with external devices, realized by, for example, a module conforming to Bluetooth (registered trademark) or IrDA, or a wired socket for a communication cable together with its control circuit. It corresponds to the data communication unit 114 of Figure 1A. Information such as the protocol stack used during communication is recorded in the ROM 40 (not shown) and read out as appropriate.
[Explanation of processing]
Next, the processing flow in this embodiment is described with reference to Figures 12-23.
Fig. 12 is a flowchart illustrating the main processing flow in this embodiment. As shown, when the sound input unit 10 detects an input sound (step S102), the voice analysis unit 12 performs A/D conversion and filtering on the sound signal input from the sound input unit 10 and converts it into sound data 310 in a format suitable for voice analysis (step S104).
Next, the clock data 308 at that moment is stored as sound input time data 312 in correspondence with the sound data 310 (step S106), and voice analysis processing of the sound data 310 is executed (step S108).
Fig. 13 is a flowchart illustrating the voice analysis processing flow in this embodiment. As shown, the voice analysis unit 12 first reads the stored sound data 310 (step S202) and performs matching against the animal standard voice patterns 164 (step S204). That is, the sound data 310 is patterned into a sonogram and compared with the standard voice patterns 164c; if a portion with similar pattern features exists, it is judged to be a matching portion.
When a matching portion exists in the animal standard voice patterns 164 (step S206; YES), the voice analysis unit 12 stores "1", indicating an animal sound, in the voice recognition flag 314, and stores the emotion recognition code 164b corresponding to the matched standard voice pattern 164c in the emotion recognition code 316 of the RAM 30 (step S208); the voice analysis processing then ends and control returns to the flow of Figure 12.
When no matching portion exists in the animal standard voice patterns 164 (step S206; NO), matching against the human standard voice patterns 166 is executed (step S210).
When a matching portion exists in the human standard voice patterns 166 (step S212; YES), the voice analysis unit 12 stores "0", indicating a human sound, in the voice recognition flag 314, and stores the emotion recognition code 166b corresponding to the matched standard voice pattern 166c in the emotion recognition code 316 of the RAM 30 (step S214); the voice analysis processing then ends and control returns to the flow of Figure 12.
When no matching portion exists in the human standard voice patterns 166 either (step S212; NO), the voice analysis unit 12 stores "0" in the voice recognition flag 314 and also stores "0" in the emotion recognition code 316 of the RAM 30 (step S216); the voice analysis processing then ends and control returns to the flow of Figure 12.
When the voice analysis processing ends and control returns to the flow of Figure 12, the CPU 20 refers to the voice recognition flag 314 and the emotion recognition code 316.
When the voice recognition flag 314 is "1", that is, when the sound of the dog 2 has been input (step S110; YES), human-language output processing is executed (step S112). When the voice recognition flag 314 is "0", that is, when the sound of the human user 4 has been input (step S114; YES), animal-language output processing is executed (step S116). When the voice recognition flag 314 is "0" and the emotion recognition code 316 is also "0", that is, when the sound could be judged neither an animal sound nor a human sound (step S114; NO), neither human-language output processing nor animal-language output processing is entered.
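Under the assumption (following Figs. 12 and 13) that animal patterns are matched before human ones, this branch structure could be sketched as below; all function bodies are trivial stand-ins, not the patent's logic:

```python
# Stand-ins for the pattern matching and the two output processes.
def judge_animal(sound):  return "lonely" if sound == "whine" else None
def judge_human(sound):   return "praise" if sound == "good boy" else None
def human_language_output(code):  print(f"[display] dog emotion: {code}")
def animal_language_output(code): print(f"[speaker] cry for: {code}")

def analyze(sound):
    """Fig. 13: try animal patterns first, then human; ('0','0') if neither."""
    code = judge_animal(sound)
    if code is not None:
        return "1", code          # S208: animal sound
    code = judge_human(sound)
    if code is not None:
        return "0", code          # S214: human sound
    return "0", "0"               # S216: undetermined

def on_sound(sound):
    """Fig. 12, steps S110-S116."""
    flag, code = analyze(sound)
    if flag == "1":
        human_language_output(code)       # S112
    elif code != "0":
        animal_language_output(code)      # S116
    # otherwise: neither judged; the device falls through to clock display

on_sound("whine")      # -> display text for the dog's emotion
on_sound("good boy")   # -> output a cry for the user's emotion
```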
Fig. 14 is a flowchart illustrating the human-language output processing flow in this embodiment. As shown, the CPU 20 first refers to the screen frame data 414 and displays the frame for human-language output on the display unit 52 (step S302).
Next, it refers to the emotion recognition code 316 of the RAM 30 (step S304), reads the text data 416b and image data 416c corresponding to the emotion recognition code 316 from the animal-language-to-human-language conversion TBL 416, and displays them at the prescribed positions in the human-language output screen (step S306).
Next, it reads the sound data 310 and displays it at the prescribed position in the human-language output screen (step S308), then reads the sound input time data 312 and displays the date and time the sound was input (step S310).
Next, the CPU 20 refers to the body-sensing mode flag 320. When the flag is "1", that is, when the body-sensing mode is set to "ON" (step S312; YES), it reads the vibration pattern 420b corresponding to the previously read emotion recognition code 316 from the vibration pattern TBL 420, and controls the vibration generating unit 54 according to the read vibration pattern 420b to generate vibration (step S314); the human-language output processing then ends and control returns to the flow of Figure 12, whereupon the CPU 20 updates the history data 322 (step S117).
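A sketch of this output path, with table contents invented (consistent with the earlier table sketch) and print statements standing in for the display and vibrator:

```python
ANIMAL_TO_HUMAN_TBL = {  # TBL 416, invented entry
    "lonely": {"text": "I feel lonely...", "image": "dog_lonely.png"},
}
VIBRATION_TBL = {  # TBL 420: (on_ms, off_ms) pulses, invented
    "lonely": [(200, 100), (200, 400)],
}

def human_language_output(emotion, input_time, body_sense_mode=False):
    entry = ANIMAL_TO_HUMAN_TBL[emotion]            # S306: TBL 416 lookup
    print(f"[display] {entry['text']} ({entry['image']})")
    print(f"[display] input at {input_time}")       # S310
    if body_sense_mode:                             # S312: flag 320
        for on_ms, off_ms in VIBRATION_TBL[emotion]:
            print(f"[vibrator] {on_ms}ms on / {off_ms}ms off")  # S314

human_language_output("lonely", "2003-12-01 09:30", body_sense_mode=True)
```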
Fig. 18 shows an example of the screen during human-language output processing in this embodiment. In the human-language output screen 5, a display title 5a shows that this is information conveyed from the dog 2 to the user 4. A display such as "A message from Taro has arrived", which includes the animal name 302 (the pet's name), gives the user 4 a further sense of closeness.
The text data 416b and the image data 416c corresponding to the emotion recognition code 316, read from the animal-language-to-human-language conversion TBL 416, are displayed in the text display portion 5b and the image display portion 5c, respectively. It is even better if the text display portion 5b can be displayed within the image shown in the image display portion 5c, for example.
The sound data 310 is displayed in graph form in the sound data display portion 5d. It may be shown as waveform data or in other forms such as a sonogram. By having the sound data 310 displayed, the user 4 can gradually develop a feel for reading the features of this display (the graph shape and so on), eventually understanding the emotion and meaning of the dog 2 just by watching the graph of the sound data 310, without reading the text in the text display portion 5b. The graph shape contains subtler emotion and meaning; if the user 4 acquires the knack of reading the characteristic graph, the dog 2 can be understood in finer detail than the classification by emotion recognition code allows.
The time the sound was input is shown in the date-and-time display portion 5e, for example at the bottom of the screen.
Fig. 15 is a flowchart illustrating the animal-language output processing flow in this embodiment. As shown, the CPU 20 first refers to the screen frame data 414 and displays the frame for animal-language output on the display unit 52 (step S402).
Next, it refers to the emotion recognition code 316 of the RAM 30 (step S404), reads the text data 418b and image data 418c corresponding to the emotion recognition code 316 from the human-language-to-animal-language conversion TBL 418, and displays them at the prescribed positions in the animal-language output screen (step S406).
Next, it reads the sound data 310 and displays it in graph form at the prescribed position in the animal-language output screen (step S408), then reads the sound input time data 312 and displays the date and time the sound was input (step S410).
Next, the CPU 20 refers to the high-frequency mode flag 318. When the flag is "1", that is, when the high-frequency mode is set to "ON" (step S412; YES), it reads the high-frequency sound data 418e corresponding to the previously referenced emotion recognition code 316 from the human-language-to-animal-language conversion TBL 418 and outputs it from the sound output unit 50 (step S414).
Next, the sound data 310 is verified against the voiceprint data 410 (step S416) to judge whether they match (step S418).
When the sound data 310 matches the voiceprint data 410 and is judged to be the registered person's own voice (step S418; YES), the synthesized sound data 418d corresponding to the emotion recognition code 316 is read from the human-language-to-animal-language conversion TBL 418 (step S422) and output from the sound output unit 50 (step S424).
When the sound data 310 is judged not to match the voiceprint data 410 (step S418; NO), the registered sound data 418f corresponding to the emotion recognition code 316 is read from the human-language-to-animal-language conversion TBL 418 (step S420) and output from the sound output unit 50 (step S424). By outputting the registered sound data 418f when the user 4 is not the owner, the dog 2 hears the voice of a person it is familiar with in daily life, easing its anxiety and wariness; even when the dog 2 is not yet used to the user 4, communication can proceed more smoothly.
When the synthesized sound data 418d or the registered sound data 418f has been output from the sound output unit 50, the animal-language output processing ends and control returns to the flow of Figure 12, whereupon the CPU 20 updates the history data 322 (step S117).
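A sketch of this path: optional high-frequency output, then a voiceprint check deciding between synthesized and pre-registered voice. All names, the feature vectors, and the tolerance test are invented stand-ins for the patent's unspecified voiceprint verification:

```python
TBL_418 = {  # invented entry of the human-language-to-animal-language TBL
    "praise": {
        "synth_cry": "synth_praise.wav",          # 418d
        "high_freq": "hf_praise.wav",             # 418e
        "registered_voice": "owner_praise.wav",   # 418f
    },
}

OWNER_VOICEPRINT = [0.4, 0.7, 0.2]  # 410: stand-in feature vector

def voiceprint_matches(sound_features, tolerance=0.1):
    # Stand-in for the verification at steps S416-S418.
    return all(abs(a - b) <= tolerance
               for a, b in zip(sound_features, OWNER_VOICEPRINT))

def animal_language_output(emotion, sound_features, high_freq_mode=False):
    entry = TBL_418[emotion]
    if high_freq_mode:                            # flag 318 (S412-S414)
        print(f"[speaker] {entry['high_freq']}")
    if voiceprint_matches(sound_features):        # registered person's voice
        print(f"[speaker] {entry['synth_cry']}")         # S422, S424
    else:                                         # someone else speaking
        print(f"[speaker] {entry['registered_voice']}")  # S420, S424

animal_language_output("praise", [0.41, 0.69, 0.2], high_freq_mode=True)
```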
Fig. 19 shows an example of the screen during animal-language output processing in this embodiment. In the animal-language output screen 6, a display title 6a shows that this is information conveyed to the dog 2. A display such as "A message for Taro has arrived", which includes the animal name 302 (the pet's name), gives the user 4 a further sense of closeness.
The text data 418b and the image data 418c corresponding to the emotion recognition code 316, read from the human-language-to-animal-language conversion TBL 418, are displayed in the text display portion 6b and the image display portion 6c, respectively. As shown in the figure, it is even better if the text display portion 6b can be displayed within the image shown in the image display portion 6c, for example.
The sound data 310 is displayed in graph form in the sound data display portion 6d, and the time the sound was input is shown in the date-and-time display portion 6e at the bottom of the screen.
In the flow process of Figure 12, on for example, move button 108a or move down button 108c (the step S118 under the situation of long period that is pressed at the appointed time; YES), CPU20 just carries out the button input and handles (step S120).
Figure 16 is the process flow diagram that is used for illustrating the button input treatment scheme of present embodiment.As shown in this figure, CPU20 is reference picture frame data 414 at first, and the frame of then button being imported usefulness is presented at (step S502) on the display part 52.Reference man's speech like sound animal language conversion TBL418 for example on the picture of button input usefulness is with the button form content of videotex data 418b (step S116) selectively.
User 4 by on move button 108a or move down the content button that button 108c select to wish, press the selection key 108b decision (step S504) that makes one's options.
Select decision if imported, then CPU20 selects and has changed the corresponding emotion cognizance code 418a of content that has selected the TBL418 from the human language animal language, and is stored in (step S506) among the RAM30.Then, the end key input is handled, and turns back in the flow process of Figure 12.If turn back in the flow process of Figure 12, next CPU20 carries out animal language output processing.
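A sketch of this selection path, with the sentence list and emotion codes invented: the user picks a pre-stored human sentence, and its emotion code is handed to the animal-language output processing.

```python
SENTENCES = [                 # text data 418b -> emotion code 418a
    ("Good boy!", "praise"),
    ("No! Stop it!", "scold"),
    ("Let's go for a walk", "invite"),
]

def key_input_process(select_fn=None):
    for i, (text, _) in enumerate(SENTENCES):
        print(f"[{i}] {text}")                 # selectable buttons (7b)
    choice = select_fn() if select_fn else 0   # stand-in for key presses
    _, emotion = SENTENCES[choice]
    return emotion                             # stored in RAM 30 (S506)

emotion = key_input_process()
print(f"-> animal-language output for '{emotion}'")
```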
Fig. 20 shows an example of the screen during key input processing in this embodiment. In the key input screen 7, a display title 7a shows that this is information to be conveyed to the dog 2.
The text data 418b read from the human-language-to-animal-language conversion TBL 418 is displayed as selectable content buttons 7b. When not all selection buttons 7b can be shown at once, the display can be scrolled. The selection button 7b currently in the selected state is shown, for example, in inverted display.
In addition, a select button 7c and a cancel button 7d are shown on the screen 7; pressing the select key 108b or the cancel key 108d inverts the corresponding button's display, visually informing the user 4 that the key input has been received.
In the flow process of Figure 12, selection key 108b (the step S122 under the situation of long period that is pressed at the appointed time for example; YES), CPU20 carries out mode switch and handles (step S124).
Figure 17 is the process flow diagram that is used for illustrating the mode switch treatment scheme of present embodiment.As shown in this figure, CPU20 is reference picture frame data 414 at first, and mode switch is presented at display part 52 (step S602) with frame.
Next, (step S604 under the situation of the blocked operation of having imported high frequency mode; YES), CPU20 switches high frequency mode mark 318 (step S606).(step S608 under the situation of the blocked operation of having imported the health inductive mode; YES), CPU20 switches health inductive mode mark 320 (step S610).Then, if imported end operation (the step S612 of regulation; YES), end mode hand-off process then, and turn back in the flow process of Figure 12.
Figure 21 represents an example of the picture of the mode switch in the present embodiment in handling.In the picture 7 that mode switch is used, show that with title 8a represents to carry out mode switch and handles.The ON/OFF that shows high frequency mode on the picture that mode switch is used shows that the ON/OFF of 8b, health inductive mode shows 8c.ON/ OFF show 8b and 8c by on the input that moves button 108a or move down button 108c enter the state of selecting successively.Under selection mode, by input selection key 108b, import the hand-off process of this pattern, CPU20 just switches ON and OFF.If press cancel key 108d, end operation that then can the input pattern hand-off process.
In the flow process of Figure 12, (step S126 under the situation of the long period that for example cancel key 108d is pressed at the appointed time; YES), CPU20 carries out history display and handles (step S128).
Figure 22 is an example of the picture during the history display in the expression present embodiment is handled.As shown in this figure, in history display is handled, with reference to historical data 322 and show the 9a of history display portion.For example, the icon 9c, the content 9d that show which side sound among time 9b, expression dog 2 and the user.Icon 9c shows according to voice recognition mark 322b.Content 9d reads out text data 416b or 418b, this demonstration of the style of writing of going forward side by side according to voice recognition mark 322b and emotion cognizance code 322c from animal language human language conversion table 416 or human language animal language conversion TBL418.
In addition, can not be once in picture, showing under the situation of the history display 9a of portion, by on the input that moves key 108a and move down key 108c can rollably show.At this moment, be preferably in show bar 9e and show that upward the history that shows at present is suitable with which section period within one day (24 hours).
User 4 is by observing this history display, can understand the conversion of the personality of dog 2 for example and custom, health etc.
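A sketch of the history display, resolving each stored entry to the text the device would have shown or spoken; all data are invented examples:

```python
TEXTS = {  # (is_animal, emotion) -> display text, per TBL 416 / TBL 418
    (True,  "lonely"): "I feel lonely...",
    (False, "praise"): "Good boy!",
}

HISTORY = [  # history data 322: (time 322a, flag 322b, code 322c)
    ("09:30", True,  "lonely"),
    ("09:31", False, "praise"),
]

for time, is_animal, code in HISTORY:
    who = "dog" if is_animal else "user"   # icon 9c
    print(f"{time}  [{who}]  {TEXTS[(is_animal, code)]}")  # content 9d
```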
In the flow of Figure 12, when no sound is input (step S102; NO), when a sound was input but could be judged neither an animal sound nor a human sound (step S114; NO), and when no specific key operation is input (NO at steps S118, S122, and S126), the CPU 20 displays the clock screen 3 on the display unit 52 (step S130), as shown for example in Figure 23.
The clock screen 3 shows, for example, an analog clock 3a, the date 3b, and the day of the week 3c. The user 4 can therefore use the wristwatch-type sound analysis device 100 as a wristwatch even while using it as a medium of communication with the dog 2.
[second embodiment]
Next, a second embodiment of a sound analysis device according to the present invention is described. This embodiment can basically be realized with the same structure as the first embodiment; identical components carry the same reference signs and their description is omitted.
Figure 24A and Figure 24B show an example of the appearance of a leash-type sound analysis device 200 in this embodiment. As shown, the leash-type sound analysis device 200 has a lead 202 used when walking the dog 2, which can be freely drawn out and rewound by a reel 204. At the tip of the lead 202 are a fitting 206 that detachably connects the lead 202 to the collar 207 of the dog 2, and the microphone 102. The user 4 holds the body 208, or attaches it to a belt or the like with a clip 212.
The microphone 102 is connected to the control unit 120 and the power supply built into the body 208 by a signal line 210 routed inside the lead 202. By placing the microphone 102 at the tip of the lead 202, sound can be gathered efficiently even under conditions, such as outdoors, where sound disperses easily.
The leash-type sound analysis device 200 may also transmit the sound signal gathered by the microphone 102 to the wristwatch-type sound analysis device 100 worn by the user via the data communication unit 114. In that case, the leash-type sound analysis device 200 can omit the voice analysis unit 12, the voice-analysis ROM 16, the display unit 52, and the vibration generating unit 54, and instead use those of the wristwatch-type sound analysis device 100 worn by the user 4.
Although embodiments of the present invention have been described above, application of the present invention is not limited to them; components may be changed, added, or removed as appropriate without departing from the concept of the invention.
For example, the sound analysis device may be realized as a computer, a PDA (Personal Digital Assistant), or a multifunction mobile phone.
The voice analysis unit 12 may be realized by the arithmetic processing of the CPU 20, and the voice-analysis ROM 16 may be unified with the ROM 40. As the key input unit 14, a touch panel may be provided on the display surface of the display 106.
In the animal-language output processing, the synthesized sound data 418d may be output regardless of the result of verifying the user 4's sound data against the voiceprint data 410, with an additional flow that also outputs the registered sound data 418f when the sound data does not match the voiceprint data 410.
As described above, the sound analysis device of this embodiment is characterized by comprising: first sound input means for inputting an animal sound (for example, the microphone 102 of Figure 1A, the sound input unit 10 of Fig. 3, step S102 of Figure 12); first sound analysis means for analyzing the sound input by the first sound input means (for example, the control unit 120 of Figure 1A, the voice analysis unit 12 of Fig. 3, the voice analysis processing of Figure 13); first output means for outputting the analysis result obtained by the first sound analysis means in human language (for example, the display 106 of Figure 1A, the display unit 52 of Fig. 3, the text display portion 5b of Figure 18); second sound input means for inputting the user's sound after a sound has been input by the first sound input means (for example, the microphone 102 of Figure 1A, the sound input unit 10 of Fig. 3, step S102 of Figure 12); second sound analysis means for analyzing the sound input by the second sound input means (for example, the control unit 120 of Figure 1A, the voice analysis unit 12 of Fig. 3, the voice analysis processing of Figure 13); and second output means for outputting the analysis result obtained by the second sound analysis means in animal language (for example, the speaker 104 of Figure 1A, the sound output unit 50 of Fig. 3, the animal-language output processing of Figure 15).
The sound conversion control method of this embodiment is characterized by comprising: a first sound input step of inputting an animal sound (for example, step S102 of Figure 12); a first sound analysis step of analyzing the sound input in the first sound input step (for example, steps S104 and S108 of Figure 12); a first output step of outputting the analysis result of the first sound analysis step in human language (for example, step S112 of Figure 12); a second sound input step of inputting the user's sound in reply to the content output in the first output step (for example, step S102 of Figure 12); a second sound analysis step of analyzing the sound input in the second sound input step (for example, steps S104 and S108 of Figure 12); and a second output step of outputting the analysis result of the second sound analysis step in animal language (for example, step S116 of Figure 12).
Here, "animal sound" means the cry of an animal; "human language" means human speech and text (or images) by which a human can understand the meaning; and "animal language" means animal sounds by which members of the same species or community can communicate.
With this embodiment, because the analysis result is output in human language after an animal sound is input and analyzed, the user can understand the meaning of the animal sound. Likewise, because the analysis result is output in animal language after the user's sound is input and analyzed, the animal can understand the meaning of the user's sound. Two-way emotional communication between human and animal can thus be supported.
In addition, the sound analysis device of this embodiment is characterized by comprising: first sound input means for inputting an animal sound; first sound analysis means for analyzing the sound input by the first sound input means; first output means for outputting the analysis result obtained by the first sound analysis means in human language; selection means for selecting arbitrarily from pre-stored human sentences (for example, the display 106 and key operation unit 108 of Figure 1A, the key input unit 14 and display unit 52 of Fig. 3, step S120 of Figure 12); and third output means for outputting the sentence selected by the selection means in animal language (for example, the speaker 104 of Figure 1A, the sound output unit 50 of Fig. 3, steps S422-S424 of Figure 15).
The corresponding sound conversion control method of this embodiment is characterized by comprising: a first sound input step of inputting an animal sound; a first sound analysis step of analyzing the sound input in the first sound input step; a first output step of outputting the analysis result of the first sound analysis step in human language; a selection step in which, to reply to the content output in the first output step, the user selects arbitrarily from pre-stored human sentences (for example, step S120 of Figure 12); and a third output step of outputting the sentence selected in the selection step in animal language (for example, step S116 of Figure 12).
That is, as in the present embodiment, the sound analysis device of claim 1 may further comprise selection means for selecting arbitrarily from pre-stored human language, and third output means for outputting the language selected by the selection means in animal language.
With this configuration, because the analysis result is output in human language after an animal sound is input and analyzed, the user can understand the meaning of the animal sound. Furthermore, when the user selects arbitrarily from the pre-stored human language, the selected sentence is output in animal language, so the animal can understand the meaning of the user's intent. Two-way emotional communication between human and animal can thus be supported.
Preferably, as described in the present embodiment, the first sound analysis means compares the pattern of the sound input by the first sound input means with pre-stored standard voice patterns and judges the emotion contained in the sound, and the second sound analysis means likewise compares the pattern of the sound input by the second sound input means with pre-stored standard voice patterns and judges the emotion contained in that sound.
This embodiment is characterised in that, above-mentioned second output unit has to form by output humanly can hear that the high-frequency sound outside the field realizes the device of the voice output of animal language (the step S412 of high-frequency sound data 418e, Figure 15 of the loudspeaker 104 of Figure 1A, Figure 10~S414) for example.Adopt this embodiment, the sound that can export the frequency domain that the mankind can not hear is as animal language.Therefore, even for example pass through under the situation of human language output content to animal, also can realize exchanging in scruple.
This embodiment is characterised in that to possess: first entering device (for example voiceprint data 410 of the ROM40 of Fig. 3, Fig. 8) of login user voiceprint; Second entering device (for example login voice data 418f of the conversion of the human language animal language of the ROM40 of Fig. 3, Fig. 8 TBL418, Figure 10) of the sound that login is obtained by the human language of regulation meaning content; Judge the whether consistent judgment means of the voiceprint of the sound of above-mentioned second acoustic input dephonoprojectoscope input and the login of above-mentioned first entering device (the step S416 of CPU20, Figure 15 of the control module 120 of Figure 1A, Fig. 3~S418) for example; Be judged as under the inconsistent situation in this judgment means, export the 4th output unit (for example step S418~S420, the S424 of the CPU20 of the loudspeaker 104 of Figure 1A, Fig. 3, audio output unit 50, Figure 15) of the sound of above-mentioned second entering device login by human language.
Adopt present embodiment, under user's the sound situation different, can export by the human sound that human language will be logined in advance with the voiceprint of login in advance.Therefore, make animal hear the human sound corresponding, the training effect that obtains taming with user's emotion.In addition, at the voiceprint that voiceprint is made as the most familiar personage of animal (for example owner), and then under the situation of the sound login of this personage being sent by the second sound entering device, when the user is different with this personage, can hear that familiar character sound obtains pacifying the effect of animal by making animal.
This embodiment is characterised in that to possess: the memory storage (Fig. 8, Fig. 9 416) that emotion, human language, the animal painting of animal is carried out many group storages; The acoustic input dephonoprojectoscope (the sound input part 10 of Fig. 3) of input animal sounds; The sound analysis device (the phonetic analysis portion 12 of Fig. 3) that the emotion of the sound of this first acoustic input dephonoprojectoscope input is analyzed; From above-mentioned memory storage, read out corresponding human language, the animal painting of analyzing with this sound analysis device of emotion, and show the human language that this reads out and the display device (display part 52 of Fig. 3) of animal painting.The sound of input animal adopts this embodiment, if then can be analyzed the emotion of the sound of this input, and read the human language corresponding with the emotion of this analysis, animal painting, and show human language and the animal painting (with reference to Figure 18) that this reads out.Therefore, can confirm the human language corresponding (for example literal form information) and animal painting (for example the illustration of animal or carry out the photographs of the animal that this sound imports itself) by display device with the emotion of animal sounds (for example sobbing sound) of input.
This embodiment is characterised in that to have: corresponding and store the memory storage (for example historical data 322 of the CPU20 of Fig. 3, RAM30, Fig. 7, the step S117 of Figure 12) in the moment of the sound of tut input media input and the above-mentioned first acoustic input dephonoprojectoscope sound import.
This embodiment is characterized by comprising a vibration generating device that produces vibration (for example, the oscillator 112 of Figure 1A, the CPU 20 and vibration generating unit 54 of Figure 3, and the vibration pattern TBL 420 of Figure 11). With this embodiment, not only able-bodied users but also visually impaired and hearing-impaired users can grasp the analysis result through vibration.
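A sketch of the vibration notification, assuming each analysis result maps to a preset pattern of pulse durations; the patterns and driver callbacks are invented placeholders, not the contents of the vibration pattern TBL 420.

VIBRATION_PATTERNS = {
    "joy":     [100, 100],       # two short pulses (ms)
    "warning": [400],            # one long pulse
    "sadness": [100, 100, 100],  # three short pulses
}

def vibrate_for(emotion, pulse_ms, pause_ms):
    # Drive the vibrator once per pulse, with a short gap between pulses.
    for duration in VIBRATION_PATTERNS.get(emotion, [200]):
        pulse_ms(duration)
        pause_ms(50)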
According to the present invention, the user can input and analyze an animal's sound and then understand its meaning through the analysis result output in human language. Conversely, when the user's sound is input and analyzed, the analysis result is output in animal language, so the animal can understand the meaning of the user's sound. Further, if the user selects a sentence from pre-stored human-language sentences, the selected sentence is output in animal language, so the animal can likewise understand it.
Therefore, two-way transmission of emotion between humans and animals can be supported, and communication realized.
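The analysis referred to here judges emotion by comparing the input acoustic pattern against pre-stored standard voice patterns (see claim 3 below). A minimal sketch, assuming both are reduced to fixed-length feature vectors and the nearest standard pattern decides; the vectors are toy values for illustration only.

import numpy as np

STANDARD_PATTERNS = {
    "joy":     np.array([0.8, 0.1, 0.3]),
    "sadness": np.array([0.2, 0.7, 0.1]),
    "anger":   np.array([0.1, 0.2, 0.9]),
}

def judge_emotion(features: np.ndarray) -> str:
    # Return the emotion whose standard voice pattern lies nearest
    # (Euclidean distance) to the input sound's feature pattern.
    return min(STANDARD_PATTERNS,
               key=lambda e: np.linalg.norm(features - STANDARD_PATTERNS[e]))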
In addition, when outputting animal language, sound in a frequency band that humans cannot hear can be output, so communication is possible even in situations where the user would hesitate to output the content to the animal in human language.
In addition, the user's voice is checked against the pre-registered voiceprint, and when they differ, the pre-registered human-language sound is output. Therefore, by outputting to the animal a human voice corresponding to the user's emotion, a taming training effect is obtained. Moreover, if the registered voiceprint is that of the person the animal is most accustomed to (for example, its owner), a soothing effect is obtained even when the user is a different person.
In addition, the sound analysis device can be carried and used by wearing it on the body. Therefore, it need not be taken out of a bag each time it is used, which improves convenience.
In addition, by storing the history of communication between the animal and the human, that history can later be analyzed and put to use.
In addition, by generating a predetermined vibration corresponding to the analysis result of the animal's sound and informing the user of the result through bodily sensation, the user need not read the analysis result as human-language text, which improves convenience and makes communication smoother. Moreover, even visually impaired and hearing-impaired users can grasp the analysis result.
Furthermore, according to the present invention, when an animal sound is input, the emotion in the input sound can be analyzed, the human language and animal image corresponding to the analyzed emotion read out, and both displayed. Therefore, the human language corresponding to the emotion in the input animal sound (for example, a cry) can be confirmed on the display device as text, together with an animal image (for example, an illustration of the animal or a photograph of the very animal whose sound was input).

Claims (14)

1. A sound analysis device, characterized by comprising:
a first sound input device for inputting an animal's sound;
a first sound analysis device that analyzes the sound input by the first sound input device;
a first output device that outputs the analysis result obtained by the first sound analysis device in human language;
a second sound input device for inputting the user's sound after sound has been input by the first sound input device;
a second sound analysis device that analyzes the sound input by the second sound input device; and
a second output device that outputs the analysis result obtained by the second sound analysis device in animal language.
2. The sound analysis device according to claim 1, characterized by comprising:
a selection device for arbitrarily selecting from pre-stored human-language sentences; and
a third output device that outputs the sentence selected by the selection device in animal language.
3. The sound analysis device according to claim 1, characterized in that
the first sound analysis device is a device that analyzes the sound by comparing the acoustic pattern input by the first sound input device with pre-stored standard voice patterns and judging the emotion contained in the sound, and
the second sound analysis device is a device that analyzes the sound by comparing the acoustic pattern input by the second sound input device with pre-stored standard voice patterns and judging the emotion contained in the sound.
4. The sound analysis device according to claim 1, characterized in that the second output device includes a device that realizes voice output in animal language by outputting high-frequency sound outside the range of human hearing.
5. The sound analysis device according to claim 1, characterized by comprising:
a first registration device for registering the user's voiceprint;
a second registration device for registering sound obtained from human language of predetermined meaning;
a judgment device for judging whether the voiceprint of the sound input by the second sound input device matches the voiceprint registered by the first registration device; and
a fourth output device that, when the judgment device judges that the voiceprints do not match, outputs the sound registered by the second registration device in human language.
6. A sound analysis device, characterized by comprising:
a storage device that stores many groups of animal emotions, human language, and animal images;
a first sound input device for inputting an animal's sound;
a first sound analysis device that analyzes the emotion in the sound input by the first sound input device; and
a display device that reads out from the storage device the human language and animal image corresponding to the emotion analyzed by the first sound analysis device and displays the read human language and animal image.
7. The sound analysis device according to claim 6, characterized in that the animal image is an illustration of the animal or an image captured of the animal whose sound was input to the first sound input device.
8. The sound analysis device according to claim 6, characterized in that the human language is text corresponding to the emotion or text whose content describes the emotion.
9. The sound analysis device according to claim 6, characterized by comprising:
a storage device that stores the sound input by the first sound input device in association with the moment the first sound input device captured the sound.
10. The sound analysis device according to claim 6, characterized by comprising a vibration generating device that produces a preset vibration when the display device displays the human language and animal image.
11. The sound analysis device according to claim 6, characterized in that the sound analysis device is a wrist-worn device worn on the user's body or a portable phone carried by the user.
12. A sound conversion control method, characterized by comprising:
a first sound input step of inputting an animal's sound;
a first sound analysis step of analyzing the sound input in the first sound input step;
a first output step of outputting the analysis result of the first sound analysis step in human language;
a second sound input step of inputting the user's sound in reply to the content output in the first output step;
a second sound analysis step of analyzing the sound input in the second sound input step; and
a second output step of outputting the analysis result of the second sound analysis step in animal language.
13. A sound conversion control method, characterized by comprising:
a first sound input step of inputting an animal's sound;
a first sound analysis step of analyzing the sound input in the first sound input step;
a first output step of outputting the analysis result of the first sound analysis step in human language;
a selection step in which the user, in order to reply to the content output in the first output step, arbitrarily selects from pre-stored human-language sentences; and
a third output step of outputting the sentence selected in the selection step in animal language.
14. A sound conversion control method used in a portable electronic device having a first sound input part for inputting animal sounds, characterized by comprising:
a sound analysis step of analyzing the emotion in the sound input to the first sound input part; and
a step of accessing a storage device that stores many groups of animal emotions, human language, and animal images, reading out from the storage device the human language and animal image corresponding to the emotion analyzed in the sound analysis step, and displaying the read human language and animal image.
CN200310117284.4A 2002-12-13 2003-12-10 Sound analyser and sound conversion control method Expired - Fee Related CN1261922C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2002362855 2002-12-13
JP2002362855A JP4284989B2 (en) 2002-12-13 2002-12-13 Speech analyzer

Publications (2)

Publication Number Publication Date
CN1506936A true CN1506936A (en) 2004-06-23
CN1261922C CN1261922C (en) 2006-06-28

Family

ID=32761186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200310117284.4A Expired - Fee Related CN1261922C (en) 2002-12-13 2003-12-10 Sound analyser and sound conversion control method

Country Status (2)

Country Link
JP (1) JP4284989B2 (en)
CN (1) CN1261922C (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101112940B (en) * 2006-07-25 2011-04-20 京元电子股份有限公司 Container conversion device for semi-conductor packaging element
US8838260B2 (en) * 2009-10-07 2014-09-16 Sony Corporation Animal-machine audio interaction system
JP2016146070A (en) * 2015-02-06 2016-08-12 ソニー株式会社 Information processor, information processing method and information processing system
CN105941306B (en) * 2016-04-29 2017-10-24 深圳市沃特沃德股份有限公司 A kind of method and its device for detecting the action of animal tail
CN106531173A (en) * 2016-11-11 2017-03-22 努比亚技术有限公司 Terminal-based animal data processing method and terminal
CN112447170A (en) * 2019-08-29 2021-03-05 北京声智科技有限公司 Security method and device based on sound information and electronic equipment
CN111508469A (en) * 2020-04-26 2020-08-07 北京声智科技有限公司 Text-to-speech conversion method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0293861A (en) * 1988-09-30 1990-04-04 Toshiba Corp Interaction system with animal
JPH11169009A (en) * 1997-12-12 1999-06-29 Main Kk Training apparatus for pet
JP2002278583A (en) * 2001-03-14 2002-09-27 Teruo Ueno Translation device for interpretation of voices of pets
JP3083915U (en) * 2001-08-06 2002-02-22 株式会社インデックス Dog emotion discrimination device based on phonetic feature analysis of call

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101960994A (en) * 2009-07-23 2011-02-02 卡西欧计算机株式会社 Animal emotion display system and animal emotion display packing
US9019105B2 (en) 2009-07-23 2015-04-28 Casio Computer Co., Ltd. Animal emotion display system and method
CN102246705A (en) * 2010-05-19 2011-11-23 张彩凤 Method and system for analyzing animal behaviours by using signal processing technology
CN106537471A (en) * 2014-03-27 2017-03-22 飞利浦灯具控股公司 Detection and notification of pressure waves by lighting units
WO2018039934A1 (en) * 2016-08-30 2018-03-08 深圳市沃特沃德股份有限公司 Pet placation method, apparatus and system
CN107423821A (en) * 2017-07-11 2017-12-01 李家宝 The intelligence system of human and animal's interaction

Also Published As

Publication number Publication date
CN1261922C (en) 2006-06-28
JP4284989B2 (en) 2009-06-24
JP2004191872A (en) 2004-07-08

Similar Documents

Publication Publication Date Title
CN1261922C (en) Sound analyser and sound conversion control method
CN1253811C (en) Information processing apparatus and information processing method
CN1258285C (en) Multi-channel information processor
CN1392827A (en) Device for controlling robot behavior and method for controlling it
CN1324501C (en) Information processing terminal, server, information processing program and computer readable medium thereof
CN100352255C (en) Imaging method and system for healthy monitoring and personal safety
CN100336415C (en) Method and apparatus for performing bringup simulation in a mobile terminal
CN1761265A (en) Method and apparatus for multi-sensory speech enhancement on a mobile device
CN110134316A (en) Model training method, Emotion identification method and relevant apparatus and equipment
CN1516112A (en) Speed identification conversation device
CN1638313A (en) Alarm notification system and device having voice communication capability
CN1553845A (en) Robot system and robot apparatus control method
CN1124191C (en) Edit device, edit method and recorded medium
CN1369834A (en) Voice converter, Voice converting method, its program and medium
CN1547829A (en) Radio communication apparatus and method therefor ,wireless radio system, and record medium, as well as program
CN1691693A (en) Communication terminal and method for selecting and presenting content
CN1684485A (en) Portable terminal, response message transmitting method and server
CN1545692A (en) Information transmission apparatus, information transmission method, and monitoring apparatus
CN1914666A (en) Voice synthesis device
CN1460051A (en) Robot device and control method therefor and storage medium
CN1144005A (en) Image processing method and electronic apparatus
CN1291765C (en) System for interaction with exercise deivce
CN1817311A (en) Judgment ability evaluation apparatus, robot, judgment ability evaluation method, program, and medium
CN1257460C (en) Super text display device and supertext display program
CN106878390A (en) Electronic pet interaction control method, device and wearable device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20060628
Termination date: 20161210