CN103186227A - Man-machine interaction system and method - Google Patents
Abstract
The invention relates to a man-machine interaction system and method. The system comprises a sound capture device, a video capture device, and a control device. The sound capture device picks up sounds and outputs the audio signals of the picked-up sounds; the video capture device captures images in real time and outputs image information; the control device, connected with both capture devices, receives the audio signals and the image information, generates control commands according to them, and executes the control commands. The method comprises: picking up sounds with the sound capture device to obtain the audio signals of the picked-up sounds; capturing images with the video capture device to generate image information; and generating control commands according to the audio signals and the image information and executing them. By adopting the technical scheme provided by the invention, both voice control and motion-sensing control can be realized, improving the diversity and appeal of the control method.
Description
Technical field
The present invention relates to human-computer interaction technology, and in particular to a human-computer interaction system and method.
Background technology
With the progress of science and technology, electronic devices have become increasingly intelligent, and controlling electronic devices by sound is an important direction in their development toward greater intelligence.
At present, voice control of electronic devices is usually built on the basis of speech recognition. Specifically, the electronic device performs speech recognition on the sound uttered by the operator, determines from the recognition result which control command the operator wishes the device to execute, and then executes that command automatically, thereby realizing voice control of the device.
In the course of realizing the present invention, the inventor found that existing techniques for controlling electronic devices are rather monotonous and inflexible.
In view of the above defects of existing control modes for electronic devices, the inventor, drawing on many years of practical experience and professional knowledge in designing and manufacturing such products, and applying scientific principles, actively pursued research and innovation in the hope of creating a new human-computer interaction system and method that overcomes these problems and is more practical. After continuous research, design, repeated prototyping, and improvement, the present invention of practical value was finally created.
Summary of the invention
The object of the present invention is to overcome the defects of existing control implementations for electronic devices by providing a human-computer interaction system of new structure and a new human-computer interaction method. The technical problem to be solved is to give the control of electronic devices diversity and appeal, making it highly practical.
The object of the present invention and the solution of its technical problem can be realized by the following technical scheme.
A human-computer interaction system proposed according to the present invention comprises: a sound capture device, a video capture device, and a control device. The sound capture device is used to pick up sound and to output the audio signal of the picked-up sound. The video capture device is used to capture images in real time and to output image information. The control device is connected with the sound capture device and the video capture device; it receives the audio signal and the image information, generates a control command according to them, and executes that command.
The object of the present invention and the solution of its technical problem can be further achieved by the following technical measures.
Preferably, in the aforesaid human-computer interaction system, the control device comprises: a speech recognition module for performing speech recognition on the audio signal output by the sound capture device; a keyword module for extracting a keyword from the speech recognition result of the speech recognition module and outputting the keyword; a first control command conversion module for determining the control command corresponding to the keyword and the control command corresponding to the image information; and a first execution module for executing the control command when the command corresponding to the keyword and the command corresponding to the image information are identical, and otherwise not executing it.
Preferably, in the aforesaid human-computer interaction system, the control device comprises: a voice attribute detection module for detecting a voice attribute of the audio signal output by the sound capture device; a second control command conversion module for determining the control command corresponding to the image information and the execution effect corresponding to the voice attribute, taking the execution effect as an input parameter of the control command; and a second execution module for executing the control command carrying the input parameter.
Preferably, in the aforesaid human-computer interaction system, the control device comprises: a speech recognition module for performing speech recognition on the audio signal output by the sound capture device; a keyword module for extracting a keyword from the speech recognition result of the speech recognition module and outputting the keyword; a first control command conversion module for determining the control command corresponding to the keyword and the control command corresponding to the image information; and a third execution module for selecting at least one control command to execute from among the control command corresponding to the keyword and the control command corresponding to the image information.
The present invention also provides a human-computer interaction method, comprising: picking up sound with a sound capture device to obtain the audio signal of the picked-up sound; capturing images in real time with a video capture device to obtain image information of the captured images; and generating a control command according to the audio signal and the image information, and executing that command.
Preferably, in the aforesaid method, generating the control command according to the audio signal and image information and executing it comprises: performing speech recognition on the audio signal of the picked-up sound; extracting a keyword from the speech recognition result; determining the control command corresponding to the keyword and the control command corresponding to the image information; and executing the control command when the two commands are identical, and otherwise not executing it.
Preferably, in the aforesaid method, generating the control command according to the audio signal and image information and executing it comprises: detecting a voice attribute of the audio signal of the picked-up sound; determining the control command corresponding to the image information and the execution effect corresponding to the voice attribute, taking the execution effect as an input parameter of the control command; and executing the control command carrying that input parameter.
Preferably, in the aforesaid method, generating the control command according to the audio signal and image information and executing it comprises: performing speech recognition on the audio signal of the picked-up sound; extracting a keyword from the speech recognition result; determining the control command corresponding to the keyword and the control command corresponding to the image information; and selecting at least one of those two control commands to execute.
Through the above technical scheme, the human-computer interaction system and method of the present invention have at least the following advantages and beneficial effects: by picking up sound with the sound capture device and generating image information with the video capture device, the control device produces control commands from both the user's voice and the actions the user makes, realizing human-computer interaction based on motion-sensing control and voice. This diversifies the implementation of human-computer interaction, improves its appeal, and is highly practical.
In summary, the present invention represents an obvious technical improvement with significant positive effects, and is truly a novel, progressive, and practical new design.
The above description is only an overview of the technical solution of the present invention. So that the technical means of the present invention may be understood more clearly and implemented according to the content of the specification, and so that the above and other objects, features, and advantages of the present invention may become more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a schematic diagram of the human-computer interaction system of the present invention;
Fig. 2 is a schematic diagram of a first specific example of the control device of the present invention;
Fig. 3 is a schematic diagram of a second specific example of the control device of the present invention;
Fig. 4 is a schematic diagram of a third specific example of the control device of the present invention;
Fig. 5 is a flow chart of the human-computer interaction method of the present invention.
Embodiment
To further explain the technical means and effects adopted by the present invention to achieve its intended objects, the embodiments, structures, features, flows, and effects of the human-computer interaction system and method proposed according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Embodiment one: a human-computer interaction system. The system is shown in Fig. 1.
The human-computer interaction system shown in Fig. 1 comprises: a sound capture device 1, a video capture device 2, and a control device 3. The control device 3 may comprise, as shown in Fig. 2: a speech recognition module 31, a keyword module 32, a first control command conversion module 33, and a first execution module 34; or, as shown in Fig. 3: a voice attribute detection module 35, a second control command conversion module 36, and a second execution module 37; or, as shown in Fig. 4: a speech recognition module 31, a keyword module 32, a first control command conversion module 33, and a third execution module 38.
The sound capture device 1 is connected with the control device 3. It is mainly used to pick up sound, namely the sound uttered by the user, and to output the audio signal of the picked-up sound to the control device 3; for example, the sound capture device 1 outputs the audio signal to the speech recognition module 31 or to the voice attribute detection module 35. The sound capture device 1 may take the form of a microphone, a headset, or the like.
The video capture device 2 is connected with the control device 3. It is mainly used to generate image information and to output that image information to the control device 3. Real-time capture here means, for example, that the video capture device 2 samples images at a predetermined sampling frequency.
The video capture device 2 may adopt existing cameras or video cameras, such as an RGB color camera or a 3D depth sensor. The present invention does not limit the particular type of the video capture device 2.
The control device 3 is connected with the sound capture device 1 and the video capture device 2 respectively. It receives the audio signal output by the sound capture device 1 and the image information output by the video capture device 2, generates a control command according to them, and then executes that command.
The control device 3 may generate and execute the control command from the audio signal and image information in several ways. For example, both may first be converted to control commands separately, and the command executed only if the two commands are judged identical. As another example, the image information may first be converted to a control command, the input parameter of that command then determined from a voice attribute of the audio signal, and the command executed with that parameter. As yet another example, both may first be converted to control commands separately, and at least one of the two commands then selected for execution.
The specific implementations of the control device 3 are elaborated below.
Implementation one: the control device 3 comprises a speech recognition module 31, a keyword module 32, a first control command conversion module 33, and a first execution module 34.
The speech recognition module 31 is connected with both the sound capture device 1 and the keyword module 32. It is mainly used to perform speech recognition on the audio signal output by the sound capture device 1 and to output the recognition result to the keyword module 32. The module may adopt existing speech recognition technology; the recognition process it adopts is illustrated by the following examples.
Example one: predefined audio signals are stored in the speech recognition module 31, each corresponding to a control command. A stored audio signal may be one recorded in advance through the sound capture device 1 as the user uttered the control command.
In example one, at least one segment of audio signal is stored in the speech recognition module 31, and one segment corresponds to one or more control commands; generally, one segment corresponds to one control command.
Each stored segment may correspond to an audio signal identifier used to distinguish the different segments; as a concrete example, the speech recognition module 31 stores correspondence information between audio signal identifiers and audio signals. The module compares the audio signal transmitted from the sound capture device 1 against its stored signals to determine the matching one, then determines the identifier corresponding to that matching signal and outputs the identifier to the keyword module 32.
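The matching of a captured signal against stored segments can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the similarity measure (a normalized correlation over equal-length sample lists), the threshold, and the identifier names are all assumptions.

```python
def correlate(a, b):
    # normalized dot product of two equal-length sample sequences, in [-1, 1]
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def match_identifier(signal, stored, threshold=0.8):
    """Return the identifier of the best-matching stored segment, or None."""
    best_id, best_score = None, threshold
    for ident, segment in stored.items():
        score = correlate(signal, segment)
        if score > best_score:
            best_id, best_score = ident, score
    return best_id

# assumed identifier-to-signal correspondence information
stored = {"CMD_JUMP": [0.0, 1.0, 0.5, -0.5], "CMD_SQUAT": [1.0, -1.0, 1.0, -1.0]}
```

A real system would compare feature vectors (e.g. spectral frames) rather than raw samples, but the lookup structure is the same.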
If the audio signal transmitted from the sound capture device 1 also needs processing such as denoising or format conversion, the speech recognition module 31 may perform that processing first and then compare the processed signal against its stored signals. It may use existing audio correlation techniques to carry out the comparison and thereby determine which of its stored segments matches the received signal. The present invention does not limit the specific implementation of the audio correlation technique adopted by the speech recognition module 31.
Example two: no operator audio is prerecorded in the speech recognition module 31. Instead, the module performs speech recognition directly on the audio signal output by the sound capture device 1, converting the acquired signal to text, and then provides the text to the keyword module 32. Existing speech recognition technology may be used to convert the audio signal directly to text.
The keyword module 32 is connected with the speech recognition module 31 and the first control command conversion module 33 respectively. It is mainly used to extract keywords from the speech recognition result transmitted by the speech recognition module 31 and to output the extracted keywords to the first control command conversion module 33. The extracted keywords may be digits, words, or the like. The keyword module 32 may extract keywords according to a predetermined extraction strategy; for example, it may ignore modal particles and pronouns such as "you, I, he" during extraction. It may adopt an existing extraction strategy, and the present invention does not limit the specific extraction process of the keyword module 32.
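A minimal sketch of such an extraction strategy, assuming the recognition result is a plain string of tokens; the particular stop list and whitespace tokenizer are illustrative assumptions, not the patent's strategy:

```python
# assumed stop list of particles and pronouns to ignore during extraction
STOP_WORDS = {"you", "i", "he", "the", "a", "an", "please"}

def extract_keywords(recognition_result):
    """Keep tokens that are not stop words, in order of appearance."""
    return [tok for tok in recognition_result.lower().split()
            if tok not in STOP_WORDS]
```

Real extraction would likely involve proper tokenization and part-of-speech filtering, but the filtering idea is the same.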
The first control command modular converter 33 is connected respectively with capture device 2, keyword module 32 and first execution module.The first control command modular converter 33 is mainly used in its keyword that receives is converted to control command, and its image information that receives also is converted to control command, the first control command modular converter 33 is all exported to first execution module 34 with these two control commands afterwards.
The first control command conversion module 33 may convert keywords and image information to control commands in several ways. For example, it may store correspondence information between keywords and control commands, look up the received keyword in that information, and obtain from the matching record the control command corresponding to the received keyword. For image information, it may adopt techniques from existing motion-sensing games (such as Kinect technology) to determine the user's action in the received images, and then obtain the corresponding control command from a preset correspondence between actions and control commands. The present invention does not limit the specific conversion process of the first control command conversion module 33.
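The two lookup tables just described can be sketched as plain dictionaries; all entries and command names here are illustrative assumptions, with the action recognizer itself left abstract:

```python
# assumed keyword-to-command and action-to-command correspondence information
KEYWORD_TO_COMMAND = {"jump": "CMD_JUMP", "squat": "CMD_SQUAT"}
ACTION_TO_COMMAND = {"jump_up": "CMD_JUMP", "squat_down": "CMD_SQUAT"}

def keyword_command(keyword):
    """Control command corresponding to a keyword, or None if unmapped."""
    return KEYWORD_TO_COMMAND.get(keyword)

def action_command(action):
    """Control command corresponding to a recognized user action, or None."""
    return ACTION_TO_COMMAND.get(action)
```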
The first execution module 34 is connected with the first control command conversion module 33. It is mainly used to execute the control command when the command corresponding to the received keyword and the command corresponding to the image information are judged identical; otherwise the first execution module 34 does not execute the command.
A specific application of implementation one: when the user action in the image information collected by the video capture device 2 represents "jumping up from bottom to top" and the audio signal captured by the sound capture device 1 represents "jump", the first execution module 34 executes the jump control command; otherwise the first execution module 34 does not execute it.
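The agreement check of implementation one can be sketched in a few lines; the command names and the string return values are illustrative assumptions, standing in for actual command execution:

```python
def execute_if_identical(keyword_cmd, image_cmd):
    """Implementation one: run the command only when both sources agree."""
    if keyword_cmd is not None and keyword_cmd == image_cmd:
        return f"executed:{keyword_cmd}"
    return "no-op"
```

This gating acts as a simple confirmation mechanism: a spoken "jump" without the matching jumping motion (or vice versa) triggers nothing.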
Implementation two: the control device 3 comprises a voice attribute detection module 35, a second control command conversion module 36, and a second execution module 37.
The voice attribute detection module 35 is connected with the sound capture device 1 and the second control command conversion module 36 respectively. It is mainly used to detect a voice attribute of the audio signal output by the sound capture device 1.
A voice attribute in the present invention may specifically include at least one of timbre, volume, duration, and pitch. Timbre refers to the perceived quality of a sound; different sound sources can be distinguished by differences in timbre. Volume, also called loudness, refers to the human ear's subjective impression of how strong a sound is; its objective measure is the amplitude of the sound. Duration refers to how long the sound lasts, determined by how long the sound source vibrates. Pitch refers to the frequency of the sound. Timbre, volume, and pitch may be called the three main subjective attributes of sound, while duration may be called an objective (physical) attribute.
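Of these attributes, volume is the most direct to measure from raw samples. A common objective proxy, not specified by the patent but standard in signal processing, is the RMS amplitude expressed in decibels relative to full scale, assuming PCM samples normalized to [-1, 1]:

```python
import math

def volume_db(samples):
    """RMS amplitude of the signal, in dB relative to full scale (dBFS)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")
```

A full-scale square wave measures 0 dBFS; quieter signals are negative, and silence is negative infinity.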
It should be particularly noted for implementation two that the voice attribute detected by the voice attribute detection module 35 can determine the parameter carried by the control command: the second control command conversion module 36 may determine a control command from the image information and then determine the parameter carried by that command from the detected voice attribute, thereby forming a complete control command.
The second control command conversion module 36 is connected with the voice attribute detection module 35 and the second execution module 37 respectively. It is mainly used to determine the control command corresponding to the image information and the execution effect corresponding to the received voice attribute, to take that execution effect as an input parameter of the control command, and to output the command carrying the parameter to the second execution module 37.
The second execution module 37 is connected with the second control command conversion module 36. It is mainly used to execute the received control command carrying the input parameter.
A specific application of implementation two: when the user action in the image information collected by the video capture device 2 represents "jumping up from bottom to top", the second execution module 37 executes a high-jump control command if the volume in the voice attribute of the audio signal captured by the sound capture device 1 exceeds a predetermined decibel level, and executes a low-jump control command if the volume does not exceed that level.
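The jump-height example can be sketched as a parameterized command; the threshold value, command name, and parameter shape are all assumptions for illustration:

```python
def build_parameterized_command(action_cmd, volume_db, threshold_db=-10.0):
    """Implementation two: the action picks the command, the volume its parameter."""
    height = "high" if volume_db > threshold_db else "low"
    return (action_cmd, {"height": height})
```

The same pattern generalizes to other attributes, e.g. pitch selecting direction or duration selecting how long an action is sustained.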
Implementation three: the control device 3 comprises a speech recognition module 31, a keyword module 32, a first control command conversion module 33, and a third execution module 38.
The speech recognition module 31 is connected with both the sound capture device 1 and the keyword module 32. It is mainly used to perform speech recognition on the audio signal output by the sound capture device 1 and to output the recognition result to the keyword module 32. The module may adopt existing speech recognition technology; the recognition process it adopts is illustrated by the description in implementation one above and is not repeated here.
The keyword module 32 is connected with the speech recognition module 31 and the first control command conversion module 33 respectively. It is mainly used to extract keywords from the speech recognition result transmitted by the speech recognition module 31 and to output the extracted keywords to the first control command conversion module 33. The extracted keywords may be digits, words, or the like. The keyword module 32 may extract keywords according to a predetermined extraction strategy; for example, it may ignore modal particles and pronouns such as "you, I, he" during extraction. It may adopt an existing extraction strategy, and the present invention does not limit the specific extraction process of the keyword module 32.
The first control command conversion module 33 is connected with the video capture device 2, the keyword module 32, and the third execution module 38 respectively. It is mainly used to convert the keyword it receives to a control command and the image information it receives to another control command, and then to output both commands to the third execution module 38.
The first control command conversion module 33 may convert keywords and image information to control commands in several ways. For example, it may store correspondence information between keywords and control commands, look up the received keyword in that information, and obtain the corresponding control command from the matching record. For image information, it may adopt techniques from existing motion-sensing games to identify the user's action in the received images and convert that action to a corresponding control command. The present invention does not limit the specific conversion process of the first control command conversion module 33.
The third execution module 38 is connected with the first control command conversion module 33. It is mainly used to select at least one control command from the command corresponding to the received keyword and the command corresponding to the image information, and to execute it. The third execution module 38 may make the selection according to a preset strategy; for example, the command received first may be executed first, or one command may be executed first and the other executed after it completes.
A specific application of implementation three: the user action in the image information collected by the video capture device 2 represents "squatting down", while the audio signal captured by the sound capture device 1 represents "jump". The third execution module 38 receives the control command corresponding to "jump" first and the control command corresponding to "squatting down" afterwards; it therefore executes the jump command first and then executes the squat command.
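The selection strategies of implementation three can be sketched as follows; the strategy names and the convention that commands are passed in arrival order are assumptions for illustration:

```python
def select_and_execute(first_cmd, second_cmd, strategy="first_received"):
    """Implementation three: pick at least one of the two commands to run.

    Commands are given in the order they were received.
    """
    if strategy == "first_received":
        return [first_cmd]                # run only the earlier command
    if strategy == "both_in_order":
        return [first_cmd, second_cmd]    # run both, in arrival order
    raise ValueError(f"unknown strategy: {strategy}")
```

With "both_in_order", the "jump then squat" example above falls out directly when the jump command arrives first.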
Embodiment two: a human-computer interaction method. The flow of the method is shown in Fig. 5.
The human-computer interaction method shown in Fig. 5 comprises the following steps.
S500: pick up sound with the sound capture device to obtain the audio signal of the picked-up sound.
S510: generate image information with the video capture device.
Specifically, the present invention may sample images in real time with existing cameras or video cameras to obtain the image information.
S520: generate a control command according to the above audio signal and image information, and execute that command.
Specifically, the control command may be generated and executed from the audio signal and image information in several ways. For example, both may first be converted to control commands separately, and the command executed only if the two commands are judged identical. As another example, the image information may first be converted to a control command, the input parameter of that command then determined from a voice attribute of the audio signal, and the command executed with that parameter. As yet another example, both may first be converted to control commands separately, and at least one of the two commands then selected for execution.
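The three variants of S520 can be captured in one small dispatcher; the mode names, command names, and list-of-commands return convention are all illustrative assumptions, with the upstream speech and action recognition left abstract:

```python
def s520(keyword_cmd, image_cmd, voice_param=None, mode="match"):
    """Sketch of S520's three variants; returns the commands to execute."""
    if mode == "match":       # variant 1: execute only when both commands agree
        return [keyword_cmd] if keyword_cmd == image_cmd else []
    if mode == "parameter":   # variant 2: image picks the command, voice its parameter
        return [(image_cmd, voice_param)]
    if mode == "select":      # variant 3: run at least one of the two, here both in order
        return [keyword_cmd, image_cmd]
    raise ValueError(f"unknown mode: {mode}")
```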
It should be noted for embodiment two that although S500-S520 are described in order, there is in fact no ordering restriction between S500 and S510 in the present invention; that is, the sound pick-up process and the image information capture process may proceed in parallel.
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make slight changes or modifications yielding equivalent embodiments; any simple modification, equivalent change, or refinement made to the above embodiments according to the technical spirit of the present invention, insofar as it does not depart from the content of the technical solution, still falls within the scope of the technical solution of the present invention.
Claims (8)
1. A human-computer interaction system, characterized in that it comprises a sound capture device, a video capture device, and a control device;
the sound capture device is configured to pick up sound and to output an audio signal of the picked-up sound;
the video capture device is configured to capture images in real time and to output image information;
the control device is connected to the sound capture device and to the video capture device; the control device receives the audio signal and the image information, generates a control command from the audio signal and the image information, and executes that control command.
2. The human-computer interaction system of claim 1, characterized in that the control device comprises:
a voice recognition module, configured to perform voice recognition processing on the audio signal output by the sound capture device;
a keyword module, configured to extract a keyword from the voice recognition result of the voice recognition module and to output the keyword;
a first control command conversion module, configured to determine the control command corresponding to the keyword and the control command corresponding to the image information;
a first execution module, configured to execute the control command when the control command corresponding to the keyword and the control command corresponding to the image information are identical, and otherwise not to execute it.
3. The human-computer interaction system of claim 1, characterized in that the control device comprises:
a voice attribute detection module, configured to detect a voice attribute of the audio signal output by the sound capture device;
a second control command conversion module, configured to determine the control command corresponding to the image information, to determine the execution effect corresponding to the voice attribute, and to use the execution effect as an input parameter of the control command;
a second execution module, configured to execute the control command carrying the input parameter.
4. The human-computer interaction system of claim 1, characterized in that the control device comprises:
a voice recognition module, configured to perform voice recognition processing on the audio signal output by the sound capture device;
a keyword module, configured to extract a keyword from the voice recognition result of the voice recognition module and to output the keyword;
a first control command conversion module, configured to determine the control command corresponding to the keyword and the control command corresponding to the image information;
a third execution module, configured to select, from the control command corresponding to the keyword and the control command corresponding to the image information, at least one control command to execute.
5. A human-computer interaction method, characterized in that the method comprises:
picking up sound with a sound capture device to obtain an audio signal of the picked-up sound;
capturing images in real time with a video capture device to obtain image information of the captured images;
generating a control command from the audio signal and the image information, and executing that control command.
6. The human-computer interaction method of claim 5, characterized in that generating a control command from the audio signal and the image information and executing that control command comprises:
performing voice recognition processing on the audio signal of the picked-up sound;
extracting a keyword from the result of the voice recognition processing;
determining the control command corresponding to the keyword and the control command corresponding to the image information;
executing the control command when the control command corresponding to the keyword and the control command corresponding to the image information are identical, and otherwise not executing it.
7. The human-computer interaction method of claim 5, characterized in that generating a control command from the audio signal and the image information and executing that control command comprises:
detecting a voice attribute of the audio signal of the picked-up sound;
determining the control command corresponding to the image information, determining the execution effect corresponding to the voice attribute, and using the execution effect as an input parameter of the control command;
executing the control command carrying the input parameter.
8. The human-computer interaction method of claim 5, characterized in that generating a control command from the audio signal and the image information and executing that control command comprises:
performing voice recognition processing on the audio signal of the picked-up sound;
extracting a keyword from the result of the voice recognition processing;
determining the control command corresponding to the keyword and the control command corresponding to the image information;
selecting, from the control command corresponding to the keyword and the control command corresponding to the image information, at least one control command to execute.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011104528272A CN103186227A (en) | 2011-12-28 | 2011-12-28 | Man-machine interaction system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011104528272A CN103186227A (en) | 2011-12-28 | 2011-12-28 | Man-machine interaction system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103186227A true CN103186227A (en) | 2013-07-03 |
Family
ID=48677427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011104528272A Pending CN103186227A (en) | 2011-12-28 | 2011-12-28 | Man-machine interaction system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103186227A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593047A (en) * | 2013-10-11 | 2014-02-19 | 北京三星通信技术研究有限公司 | Mobile terminal and control method thereof |
CN104505091A (en) * | 2014-12-26 | 2015-04-08 | 湖南华凯文化创意股份有限公司 | Human-machine voice interaction method and human-machine voice interaction system |
CN105843378A (en) * | 2016-03-17 | 2016-08-10 | 中国农业大学 | Service terminal based on somatosensory interaction control and control method of the service terminal |
CN107077847A (en) * | 2014-11-03 | 2017-08-18 | 微软技术许可有限责任公司 | The enhancing of key phrase user's identification |
CN107197327A (en) * | 2017-06-26 | 2017-09-22 | 广州天翌云信息科技有限公司 | A kind of Digital Media preparation method |
CN108469772A (en) * | 2018-05-18 | 2018-08-31 | 阿里巴巴集团控股有限公司 | A kind of control method and device of smart machine |
CN111261159A (en) * | 2020-01-19 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Information indication method and device |
WO2020155020A1 (en) * | 2019-01-31 | 2020-08-06 | 深圳市大疆创新科技有限公司 | Environment perception method and device, control method and device, and vehicle |
CN111918453A (en) * | 2020-08-18 | 2020-11-10 | 深圳市秀骑士科技有限公司 | LED light scene control system and control method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101079996A (en) * | 2006-05-22 | 2007-11-28 | 北京盛开交互娱乐科技有限公司 | An interactive digital multimedia making method based on video and audio |
US20090079813A1 (en) * | 2007-09-24 | 2009-03-26 | Gesturetek, Inc. | Enhanced Interface for Voice and Video Communications |
CN101472066A (en) * | 2007-12-27 | 2009-07-01 | 华晶科技股份有限公司 | Near-end control method of image viewfinding device and image viewfinding device applying the method |
CN101786272A (en) * | 2010-01-05 | 2010-07-28 | 深圳先进技术研究院 | Multisensory robot used for family intelligent monitoring service |
CN102074232A (en) * | 2009-11-25 | 2011-05-25 | 财团法人资讯工业策进会 | Behavior identification system and identification method combined with audio and video |
- 2011
- 2011-12-28 CN CN2011104528272A patent/CN103186227A/en active Pending
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593047A (en) * | 2013-10-11 | 2014-02-19 | 北京三星通信技术研究有限公司 | Mobile terminal and control method thereof |
CN107077847A (en) * | 2014-11-03 | 2017-08-18 | 微软技术许可有限责任公司 | The enhancing of key phrase user's identification |
CN107077847B (en) * | 2014-11-03 | 2020-11-10 | 微软技术许可有限责任公司 | Enhancement of key phrase user identification |
US11270695B2 (en) | 2014-11-03 | 2022-03-08 | Microsoft Technology Licensing, Llc | Augmentation of key phrase user recognition |
CN104505091A (en) * | 2014-12-26 | 2015-04-08 | 湖南华凯文化创意股份有限公司 | Human-machine voice interaction method and human-machine voice interaction system |
CN105843378A (en) * | 2016-03-17 | 2016-08-10 | 中国农业大学 | Service terminal based on somatosensory interaction control and control method of the service terminal |
CN107197327A (en) * | 2017-06-26 | 2017-09-22 | 广州天翌云信息科技有限公司 | A kind of Digital Media preparation method |
CN107197327B (en) * | 2017-06-26 | 2020-11-13 | 广州天翌云信息科技有限公司 | Digital media manufacturing method |
CN108469772B (en) * | 2018-05-18 | 2021-07-20 | 创新先进技术有限公司 | Control method and device of intelligent equipment |
CN108469772A (en) * | 2018-05-18 | 2018-08-31 | 阿里巴巴集团控股有限公司 | A kind of control method and device of smart machine |
WO2020155020A1 (en) * | 2019-01-31 | 2020-08-06 | 深圳市大疆创新科技有限公司 | Environment perception method and device, control method and device, and vehicle |
CN111261159A (en) * | 2020-01-19 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Information indication method and device |
CN111261159B (en) * | 2020-01-19 | 2022-12-13 | 百度在线网络技术(北京)有限公司 | Information indication method and device |
CN111918453A (en) * | 2020-08-18 | 2020-11-10 | 深圳市秀骑士科技有限公司 | LED light scene control system and control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103186227A (en) | Man-machine interaction system and method | |
CN105788610B (en) | Audio-frequency processing method and device | |
CN101859562B (en) | Method for matching conventional images with karaoke melodies in real time | |
US20200186912A1 (en) | Audio headset device | |
CN107316651B (en) | Audio processing method and device based on microphone | |
CN103918284B (en) | Voice control device, voice control method, and program | |
CN106095384B (en) | A kind of effect adjusting method and user terminal | |
CN104185132A (en) | Audio track configuration method, intelligent terminal and corresponding system | |
CN103137125A (en) | Intelligent electronic device based on voice control and voice control method | |
CN113676592B (en) | Recording method, recording device, electronic equipment and computer readable medium | |
CN103873919B (en) | A kind of information processing method and electronic equipment | |
CN106067996A (en) | Voice reproduction method, voice dialogue device | |
CN110223677A (en) | Spatial audio signal filtering | |
CN103137126A (en) | Intelligent electronic device based on voice control and voice control method | |
CN111508531A (en) | Audio processing method and device | |
CN104092809A (en) | Communication sound recording method and recorded communication sound playing method and device | |
CN102671383A (en) | Game implementing device and method based on acoustic control | |
CN103127718A (en) | Game achieving device and method based on voice control | |
CN107197431A (en) | A kind of multi-medium play method and device | |
CN202661996U (en) | Multimedia terminal | |
CN109743658A (en) | A kind of information processing method and electronic equipment | |
CN105718174B (en) | Interface switching method and system | |
WO2017061278A1 (en) | Signal processing device, signal processing method, and computer program | |
CN103135751A (en) | Intelligent electronic device and voice control method based on voice control | |
CN103186226A (en) | Man-machine interaction system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130703 |