CN110120219A - Intelligent voice interaction method, system and device - Google Patents
Intelligent voice interaction method, system and device
- Publication number
- CN110120219A CN110120219A CN201910367946.4A CN201910367946A CN110120219A CN 110120219 A CN110120219 A CN 110120219A CN 201910367946 A CN201910367946 A CN 201910367946A CN 110120219 A CN110120219 A CN 110120219A
- Authority
- CN
- China
- Prior art keywords
- user
- man
- voice
- image
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 56
- 230000002452 interceptive effect Effects 0.000 claims description 29
- 230000008859 change Effects 0.000 claims description 3
- 230000003993 interaction Effects 0.000 abstract description 12
- 230000008569 process Effects 0.000 abstract description 9
- 238000004891 communication Methods 0.000 abstract description 3
- 238000005516 engineering process Methods 0.000 abstract description 2
- 238000012545 processing Methods 0.000 description 5
- 230000004927 fusion Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 230000006698 induction Effects 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000000465 moulding Methods 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000000513 principal component analysis Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Telephonic Communication Services (AREA)
Abstract
The invention discloses an intelligent voice interaction method, system, device and computer-readable storage medium, relating to the field of communication technology. The method of the invention comprises the following steps: S1, acquiring an image of the user; S2, identifying attributes of the user from the acquired image; S3, playing a customized voice according to the attributes of the user; S4, interacting with the user through the customized voice. By identifying user attributes, the invention provides different customized voices to interact with the user, improving the flexibility and appeal of the human-computer interaction process.
Description
Technical field
The invention belongs to the field of communication technology, and more particularly relates to an intelligent voice interaction method, system and device.
Background technique
Intelligent voice interaction is a new generation of interaction mode based on voice input, in which a spoken request yields a feedback result. Existing voice interaction modes all use a fixed pronunciation pattern: for all different users, the feedback is spoken in the same mode, and although the content differs, the timbre and quality of the voice are always the same, which feels rather monotonous to the user.
This problem arises mainly because the voice interaction device itself has difficulty identifying attributes of the user, such as gender and age, so the device cannot provide a voice interaction capability customized to the user.
Summary of the invention
The purpose of the present invention is to provide an intelligent voice interaction method, system and device that, by identifying user attributes, provide different customized voices to interact with the user, improving the flexibility and appeal of the human-computer interaction process.
In order to solve the above technical problems, the present invention is achieved by the following technical solutions:
The present invention provides an intelligent voice interaction method, the method comprising at least the following steps:
S1: acquiring an image of the user;
S2: identifying attributes of the user from the acquired image;
S3: playing a customized voice according to the attributes of the user;
S4: interacting with the user through the customized voice.
As a further improvement of the foregoing solution, the voice interaction method further comprises the following step:
S0: before acquiring the user image, interacting with the user using a standard voice type.
Further, in step S2, identifying the user attributes comprises the following step:
S21: judging whether the user is a man; if so, determining that the user is a man, otherwise determining that the user is a woman.
Further, identifying the user attributes in step S2 additionally comprises the following steps:
if the user is a man, executing the following steps:
S22: judging whether the man is young; if so, determining that the user is a young man, otherwise executing S23;
S23: judging whether the man is middle-aged; if so, determining that the user is a middle-aged man, otherwise determining that the user is an elderly man.
Further, identifying the user attributes in step S2 additionally comprises the following steps:
if the user is a woman, executing the following steps:
S24: judging whether the woman is young; if so, determining that the user is a young woman, otherwise executing S25;
S25: judging whether the woman is middle-aged; if so, determining that the user is a middle-aged woman, otherwise determining that the user is an elderly woman.
Further, step S21 comprises the following steps:
comparing the acquired user image with pre-stored sample points, computing which sample point is nearest to the acquired user image, and assigning the gender of the nearest sample point to the acquired user image.
Further, any one of steps S22 to S25 comprises the following step: extracting local statistical features of the face that are closely related to age.
The present invention also provides an intelligent voice interaction system, the system comprising:
an image acquisition module for acquiring an image of the user;
a user identification module for identifying attributes of the user from the image collected by the image acquisition module;
a voice control module for playing a customized voice according to the attributes of the user;
an interaction module for interacting with the user through the customized voice.
The present invention also provides an intelligent voice interaction device, comprising: a processor, a memory, and a system bus;
the processor and the memory are connected by the system bus;
the memory is configured to store one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the steps of the voice interaction method described above.
By using the recognition capability of the image acquisition device, the present invention quickly identifies the attributes of the user and, based on those attributes, provides a customized interaction voice, greatly improving the flexibility and appeal of the human-computer interaction process. The deployment of the method of the invention is straightforward, the algorithms are concise, no complicated processing pipeline needs to be built, computation is fast and real-time performance is high. At the same time, the device of the invention has simple components, is easy to deploy and extend, is compatible with many different scenarios, and has a wide range of applications.
Of course, any product implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of the steps of an intelligent voice interaction method;
Fig. 2 is a schematic diagram of the composition of an intelligent voice interaction system;
Fig. 3 is a schematic diagram of the hardware structure of an intelligent voice interaction device.
Specific embodiment
Existing voice interaction modes all use a fixed pronunciation pattern: for all different users, the feedback is spoken in the same mode, and although the content differs, the timbre and quality of the voice are always the same, which feels rather monotonous to the user. This problem arises mainly because the voice interaction device itself has difficulty identifying attributes of the user, such as gender and age, so the device cannot provide a voice interaction capability customized to the user.
In order to overcome the above drawbacks, the embodiments of the present invention provide an intelligent voice interaction method, system and device which, by identifying the attributes of the visiting user, provide different customized voices to interact with that user, improving the flexibility and appeal of the human-computer interaction process.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, the present invention provides an intelligent voice interaction method comprising at least the following steps. S0: a standard voice type is first used to interact with the user. S1: an image of the user is acquired by an image acquisition device. S2: attributes of the user are identified from the acquired image based on facial and clothing features; the attributes mainly include gender and age, where the gender value is "man" or "woman" and the age value can be "young", "middle-aged" or "elderly". S21: in the course of identifying the user attributes, it is first judged whether the user is a man; if so, the user is determined to be a man, otherwise the user is determined to be a woman.
The gender identification described above uses computer vision to distinguish and analyse the facial gender attribute in the image. The eigenface-based gender recognizer mainly uses principal component analysis. By eliminating correlations in the data during computation, high-dimensional images are reduced to a lower-dimensional space, and each sample in the training set is mapped to a point in that lower-dimensional space. When the gender of an acquired user image needs to be judged, the acquired user image is first mapped into the lower-dimensional space, the sample point nearest to the user image is then computed, and the gender of that nearest sample point is assigned to the acquired user image.
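As a concrete illustration of the nearest-sample eigenface approach just described, the following is a minimal sketch assuming a set of aligned, flattened face images with gender labels; the use of scikit-learn, the number of components and the function names are illustrative choices, not taken from the patent.

```python
# Minimal sketch of the eigenface-style gender classifier described above:
# PCA maps face images into a low-dimensional space, and the gender of the
# nearest training sample point is assigned to the query image.
import numpy as np
from sklearn.decomposition import PCA

def train_eigenface_gender(face_vectors, genders, n_components=50):
    """face_vectors: (n_samples, n_pixels) flattened, aligned face images.
    genders: labels such as "man" / "woman". n_components is illustrative."""
    pca = PCA(n_components=n_components, whiten=True)
    projected = pca.fit_transform(face_vectors)   # map training set into low-dim space
    return pca, projected, np.asarray(genders)

def predict_gender(pca, projected, genders, query_face):
    """Map the acquired image into the same space and copy the gender
    of the nearest stored sample point."""
    q = pca.transform(query_face.reshape(1, -1))
    nearest = np.argmin(np.linalg.norm(projected - q, axis=1))
    return genders[nearest]
```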
If the user is a man, S22 further judges whether the man is young; if so, the user is determined to be a young man, otherwise S23 is executed. S23 judges whether the man is middle-aged; if so, the user is determined to be a middle-aged man, otherwise the user is determined to be an elderly man. If the user is a woman, S24 further judges whether the woman is young; if so, the user is determined to be a young woman, otherwise S25 is executed. S25 judges whether the woman is middle-aged; if so, the user is determined to be a middle-aged woman, otherwise the user is determined to be an elderly woman.
Age estimation is roughly divided into two stages: coarse estimation and detailed assessment. In the coarse estimation stage, skin texture features of the face in the acquired user image are extracted and a rough assessment of the age range is made, yielding a specific age bracket. In the detailed assessment stage, support vector machine methods are used to build several model classifiers corresponding to the different age brackets, and a suitable model is selected for matching. For example, the age is judged by a face age estimation algorithm that fuses local binary pattern (LBP) features and histogram of oriented gradients (HOG) features. This algorithm extracts local statistical features of the face that are closely related to ageing; the LBP features and HOG features are fused by the method of canonical correlation analysis, and the model is finally trained and tested on a face database by support vector regression.
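The following sketch illustrates the feature pipeline just described: LBP and HOG features extracted from a grayscale face image, fused by canonical correlation analysis, and regressed to an age with support vector regression. The libraries (scikit-image, scikit-learn) and all parameter values are assumptions for illustration, not specifics from the patent.

```python
# Illustrative sketch of the age-estimation features described above.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

def lbp_histogram(face, points=8, radius=1):
    # uniform LBP codes summarised as a normalised histogram (points + 2 bins)
    lbp = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def hog_vector(face):
    # histogram of oriented gradients over the face image
    return hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_age_regressor(faces, ages, n_components=8):
    """faces: list of grayscale face images; ages: numeric ages."""
    X_lbp = np.array([lbp_histogram(f) for f in faces])
    X_hog = np.array([hog_vector(f) for f in faces])
    cca = CCA(n_components=n_components).fit(X_lbp, X_hog)   # fuse the two feature sets
    fused = np.hstack(cca.transform(X_lbp, X_hog))
    svr = SVR().fit(fused, ages)                              # support vector regression
    return cca, svr
```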
Referring to Fig. 1, in S3 a customized voice is played according to the user attributes obtained by the above method. A matching voice is preset in advance for users of each attribute, and once the attributes of the user have been identified by the above steps, the corresponding voice is played.
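A minimal sketch of the preset attribute-to-voice lookup used in S3 is given below, taking the pairings from the exhibition-hall embodiment described further on; the dictionary structure, the default-voice handling and the function name are illustrative.

```python
# Preset voices keyed by (gender, age); pairings follow the exhibition-hall example below.
PRESET_VOICES = {
    ("man", "young"):         "campus girl voice",
    ("woman", "young"):       "campus boy voice",
    ("man", "middle-aged"):   "standard woman voice",
    ("woman", "middle-aged"): "standard man voice",
    ("man", "elderly"):       "mature woman voice",
    ("woman", "elderly"):     "mature man voice",
}

# S0 greeting voice used before any attributes are known (assumed default).
DEFAULT_VOICE = "standard woman voice"

def select_voice(gender, age):
    return PRESET_VOICES.get((gender, age), DEFAULT_VOICE)
```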
Referring to Fig. 1, in S4 interaction with the user is carried out through the customized voice. The specific steps include at least: first, sound signals from different locations are received by a sound signal receiving apparatus such as a microphone; voice signals other than that of the user are filtered out of these sound signals to generate a first voice signal; the command information in the first voice signal is identified; the command information is received, and the corresponding operation is executed using the customized voice.
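Structurally, the S4 loop can be summarised as in the sketch below; the filtering and command-recognition functions are stubs standing in for components the patent does not specify, and only the control flow mirrors the description above.

```python
# Structural sketch of the S4 interaction loop (all signal processing is stubbed).
def isolate_user_voice(raw_signals):
    """Filter out voices other than the user's to form the first voice signal (stub)."""
    return raw_signals[0] if raw_signals else None

def recognize_command(first_voice_signal):
    """Identify the command information carried by the first voice signal (stub)."""
    return "introduce_exhibit"

def respond(command, voice):
    print(f"[{voice}] executing command: {command}")

def interaction_step(raw_signals, customized_voice):
    first = isolate_user_voice(raw_signals)   # keep only the user's voice signal
    if first is None:
        return
    command = recognize_command(first)        # extract the command information
    respond(command, customized_voice)        # execute the operation in the customized voice

interaction_step(["<user audio frame>"], "campus girl voice")
```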
Referring to Fig. 2, the present invention also provides an intelligent voice interaction system comprising an infrared sensing module 8, an image acquisition module 1, a user identification module 2, a voice control module 3 and an interaction module 4.
Referring to Fig. 2, the infrared sensing module 8 senses the approach of a user; the image acquisition module 1 acquires an image of the user; the user identification module 2 identifies the attributes of the user from the image collected by the image acquisition module; the voice control module 3 plays a customized voice according to the attributes of the user; and the interaction module 4 interacts with the user through the customized voice.
Referring to Fig. 3, the present invention also provides an intelligent voice interaction device comprising a processor 5, a memory 6 and a system bus 7. The processor 5 and the memory 6 are connected by the system bus 7. The processor 5 is the control centre of the voice interaction device: it uses various interfaces and lines to connect the various parts of the voice interaction device, and by running or executing the instructions stored in the memory 6 and calling the data stored in the memory 6 it realizes the acquisition of the user image, the identification of the user, and the corresponding voice control for users of different attributes. The processor 5 may comprise one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and so on, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 5. In some embodiments, the processor 5 and the memory 6 may be implemented on the same chip; in other embodiments, they may be implemented on separate chips. The processor 5 may be a general-purpose processor such as a central processing unit, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be carried out directly by the hardware of the processor 5, or by a combination of hardware and software modules in the processor 5.
Referring to Fig. 3, the memory 6, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs and non-volatile computer-executable programs and modules. The memory 6 may comprise at least one type of storage medium, for example flash memory, a hard disk, a multimedia card, a card-type memory, random access memory, static random-access memory, programmable read-only memory, read-only memory, electrically erasable programmable read-only memory, magnetic storage, a magnetic disk, an optical disc and so on. The memory 6 can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited to this. The memory 6 in the embodiments of the present application may also be a circuit or any other device capable of realizing a storage function, for storing program instructions and/or data.
Referring to Figs. 1 to 3, in one application scenario of the present invention, for example introducing exhibits in an exhibition hall, when a user wants to learn about an exhibit, the user can tap the interface of the voice interaction device. The voice interaction device first calls the voice control module 3 to greet the visiting user using a standard woman voice, while the image acquisition module 1 acquires an image of the user, for example by filming or photographing the user with a camera. The collected user image is transmitted to the user identification module 2, and the processor 5 in the user identification module 2 identifies the attributes of the user from facial and clothing features. In the course of identifying the user attributes, it first judges whether the user is a man; if so, the user is determined to be a man, otherwise the user is determined to be a woman. If the user is a man, it further judges whether he is a young, middle-aged or elderly man; if the user is a woman, it further judges whether she is a young, middle-aged or elderly woman. The determined user attributes are then transmitted to the voice control module 3, which plays the corresponding voice according to the voice pattern preset for users of each attribute, for example: if the user is identified as a "young man", a "campus girl voice" is played; if a "young woman", a "campus boy voice"; if a "middle-aged man", a "standard woman voice"; if a "middle-aged woman", a "standard man voice"; if an "elderly man", a "mature woman voice"; if an "elderly woman", a "mature man voice". The voice control module 3 transmits the voice pattern assigned to the user to the interaction module 4; the interaction module 4 receives the user's speech through the sound signal receiving apparatus, identifies the command information in the first voice signal, receives the command information, and executes the corresponding operation in the voice pattern assigned to the user, introducing the exhibit.
Referring to Figs. 1 to 3, in another application scenario of the present invention, for example introducing human geography in a cultural centre, when a user wants to learn about a piece of human geography knowledge, the infrared sensing module 8 on the voice interaction device senses the approach of the user. The voice interaction device then first calls the voice control module 3 to greet the visiting user using a standard woman voice, while the image acquisition module 1 acquires an image of the user, for example through an infrared image acquisition device. The collected infrared image of the user is transmitted to the user identification module 2, and the processor 5 in the user identification module 2 identifies the attributes of the user from facial and clothing features. In the course of identifying the user attributes, it first judges whether the user is a man, a woman or a child. If the user is a man, it further judges whether he is a young, middle-aged or elderly man; if the user is a woman, it further judges whether she is a young, middle-aged or elderly woman. The determined user attributes are then transmitted to the voice control module 3, which plays the corresponding voice according to the voice pattern preset for users of each attribute, for example: if the user is identified as a "young man", a "campus girl voice" is played; if a "young woman", a "campus boy voice"; if a "middle-aged man", a "standard woman voice"; if a "middle-aged woman", a "standard man voice"; if an "elderly man", a "mature woman voice"; if an "elderly woman", a "mature man voice"; if a "child", a "cartoon animation voice". The voice control module 3 transmits the voice pattern assigned to the user to the interaction module 4; the interaction module 4 receives the user's speech through the sound signal receiving apparatus, identifies the command information in the first voice signal, receives the command information, and executes the corresponding operation in the voice pattern assigned to the user, introducing the piece of human geography knowledge.
Referring to Figs. 1 to 3, in yet another application scenario of the present invention, for example introducing how to play the machines in a video arcade, when a user wants to learn how to play a game machine, the voice interaction device first calls the voice control module 3 to greet the visiting user using a standard robot voice, while the image acquisition module 1 acquires an image of the user, for example through an infrared image acquisition device. The collected infrared image of the user is transmitted to the user identification module 2, and the processor 5 in the user identification module 2 identifies the attributes of the user from facial and clothing features. In the course of identifying the user attributes, it first judges whether the user is an adult, a girl or a boy. If the user is a child, it further judges whether the user is a boy or a girl; if the user is an adult, it further judges whether the adult is young, middle-aged or elderly. The determined user attributes are then transmitted to the voice control module 3, which plays the corresponding voice according to the voice pattern preset for users of each attribute, for example: if the user is identified as a "boy", a "Doraemon voice" is played; if a "girl", a "Peppa Pig voice"; if a "young adult", a "young woman voice"; if a "middle-aged adult", a "middle-aged woman voice"; if an "elderly adult", a "mature woman voice". The voice control module 3 transmits the voice pattern assigned to the user to the interaction module 4; the interaction module 4 receives the user's speech through the sound signal receiving apparatus, identifies the command information in the first voice signal, receives the command information, and executes the corresponding operation in the voice pattern assigned to the user, introducing how to play the game machine.
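The three embodiments above differ only in the greeting voice and the attribute-to-voice table they preconfigure; the sketch below restates them as per-scenario configuration. All keys and names are illustrative.

```python
# Per-scenario greeting voice and attribute-to-voice table, as in the embodiments above.
EXHIBITION_VOICES = {
    ("man", "young"): "campus girl voice",
    ("woman", "young"): "campus boy voice",
    ("man", "middle-aged"): "standard woman voice",
    ("woman", "middle-aged"): "standard man voice",
    ("man", "elderly"): "mature woman voice",
    ("woman", "elderly"): "mature man voice",
}

SCENARIOS = {
    "exhibition_hall": {"greeting_voice": "standard woman voice",
                        "voices": EXHIBITION_VOICES},
    "cultural_centre": {"greeting_voice": "standard woman voice",
                        "voices": {**EXHIBITION_VOICES,
                                   ("child", "any"): "cartoon animation voice"}},
    "video_arcade": {"greeting_voice": "standard robot voice",
                     "voices": {("boy", "child"): "Doraemon voice",
                                ("girl", "child"): "Peppa Pig voice",
                                ("adult", "young"): "young woman voice",
                                ("adult", "middle-aged"): "middle-aged woman voice",
                                ("adult", "elderly"): "mature woman voice"}},
}

def voice_for(scenario, gender, age):
    cfg = SCENARIOS[scenario]
    return cfg["voices"].get((gender, age), cfg["greeting_voice"])
```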
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or system that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or system that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product stored in a computer-readable storage medium as described above (such as ROM/RAM, a magnetic disk or an optical disc) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner or a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (9)
1. An intelligent voice interaction method, characterized in that the method comprises at least the following steps:
S1: acquiring an image of the user;
S2: identifying attributes of the user from the acquired image;
S3: playing a customized voice according to the attributes of the user;
S4: interacting with the user through the customized voice.
2. The intelligent voice interaction method according to claim 1, characterized in that the intelligent voice interaction method further comprises the following step:
S0: before acquiring the user image, interacting with the user using a standard voice type.
3. The intelligent voice interaction method according to claim 1, characterized in that identifying the user attributes in step S2 comprises the following step:
S21: judging whether the user is a man; if so, determining that the user is a man, otherwise determining that the user is a woman.
4. The intelligent voice interaction method according to claim 3, characterized in that identifying the user attributes in step S2 further comprises the following steps:
if the user is a man, executing the following steps:
S22: judging whether the man is young; if so, determining that the user is a young man, otherwise executing S23;
S23: judging whether the man is middle-aged; if so, determining that the user is a middle-aged man, otherwise determining that the user is an elderly man.
5. The intelligent voice interaction method according to claim 4, characterized in that identifying the user attributes in step S2 further comprises the following steps:
if the user is a woman, executing the following steps:
S24: judging whether the woman is young; if so, determining that the user is a young woman, otherwise executing S25;
S25: judging whether the woman is middle-aged; if so, determining that the user is a middle-aged woman, otherwise determining that the user is an elderly woman.
6. The intelligent voice interaction method according to claim 3, characterized in that step S21 comprises the following steps:
comparing the acquired user image with pre-stored sample points, computing which sample point is nearest to the acquired user image, and assigning the gender of the nearest sample point to the acquired user image.
7. The intelligent voice interaction method according to claim 4 or 5, characterized in that any one of steps S22 to S25 comprises the following step: extracting local statistical features of the face that are closely related to age.
8. An intelligent voice interaction system, characterized in that the system comprises:
an image acquisition module for acquiring an image of the user;
a user identification module for identifying attributes of the user from the image collected by the image acquisition module;
a voice control module for playing a customized voice according to the attributes of the user;
an interaction module for interacting with the user through the customized voice.
9. An intelligent voice interaction device, characterized in that the device comprises: a processor, a memory, and a system bus;
the processor and the memory are connected by the system bus;
the memory stores one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the steps of the voice interaction method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910367946.4A CN110120219A (en) | 2019-05-05 | 2019-05-05 | Intelligent voice interaction method, system and device
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910367946.4A CN110120219A (en) | 2019-05-05 | 2019-05-05 | Intelligent voice interaction method, system and device
Publications (1)
Publication Number | Publication Date |
---|---|
CN110120219A true CN110120219A (en) | 2019-08-13 |
Family
ID=67520293
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910367946.4A Pending CN110120219A (en) | 2019-05-05 | 2019-05-05 | A kind of intelligent sound exchange method, system and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110120219A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113091221A (en) * | 2021-03-08 | 2021-07-09 | 珠海格力电器股份有限公司 | Air conditioner and control method thereof |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542252A (en) * | 2011-11-18 | 2012-07-04 | 江西财经大学 | Intelligent advertisement delivery system |
CN103945104A (en) * | 2013-01-21 | 2014-07-23 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104143079A (en) * | 2013-05-10 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Method and system for face attribute recognition |
CN106648082A (en) * | 2016-12-09 | 2017-05-10 | 厦门快商通科技股份有限公司 | Intelligent service device capable of simulating human interactions and method |
US20180040046A1 (en) * | 2015-04-07 | 2018-02-08 | Panasonic Intellectual Property Management Co., Ltd. | Sales management device, sales management system, and sales management method |
CN108182714A (en) * | 2018-01-02 | 2018-06-19 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium |
CN108647662A (en) * | 2018-05-17 | 2018-10-12 | 四川斐讯信息技术有限公司 | A kind of method and system of automatic detection face |
CN108898093A (en) * | 2018-02-11 | 2018-11-27 | 陈佳盛 | A kind of face identification method and the electronic health record login system using this method |
CN109189980A (en) * | 2018-09-26 | 2019-01-11 | 三星电子(中国)研发中心 | The method and electronic equipment of interactive voice are carried out with user |
-
2019
- 2019-05-05 CN CN201910367946.4A patent/CN110120219A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542252A (en) * | 2011-11-18 | 2012-07-04 | 江西财经大学 | Intelligent advertisement delivery system |
CN103945104A (en) * | 2013-01-21 | 2014-07-23 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104143079A (en) * | 2013-05-10 | 2014-11-12 | 腾讯科技(深圳)有限公司 | Method and system for face attribute recognition |
US20180040046A1 (en) * | 2015-04-07 | 2018-02-08 | Panasonic Intellectual Property Management Co., Ltd. | Sales management device, sales management system, and sales management method |
CN106648082A (en) * | 2016-12-09 | 2017-05-10 | 厦门快商通科技股份有限公司 | Intelligent service device capable of simulating human interactions and method |
CN108182714A (en) * | 2018-01-02 | 2018-06-19 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium |
CN108898093A (en) * | 2018-02-11 | 2018-11-27 | 陈佳盛 | A kind of face identification method and the electronic health record login system using this method |
CN108647662A (en) * | 2018-05-17 | 2018-10-12 | 四川斐讯信息技术有限公司 | A kind of method and system of automatic detection face |
CN109189980A (en) * | 2018-09-26 | 2019-01-11 | 三星电子(中国)研发中心 | The method and electronic equipment of interactive voice are carried out with user |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113091221A (en) * | 2021-03-08 | 2021-07-09 | 珠海格力电器股份有限公司 | Air conditioner and control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107742107B (en) | Facial image classification method, device and server | |
CN108769823B (en) | Direct broadcasting room display methods, device, equipment | |
CN104795067B (en) | Voice interactive method and device | |
CN112560605B (en) | Interaction method, device, terminal, server and storage medium | |
CN111241340B (en) | Video tag determining method, device, terminal and storage medium | |
CN110475069A (en) | The image pickup method and device of image | |
CN108322788A (en) | Advertisement demonstration method and device in a kind of net cast | |
CN108345385A (en) | Virtual accompany runs the method and device that personage establishes and interacts | |
CN110364146A (en) | Audio recognition method, device, speech recognition apparatus and storage medium | |
CN108903521A (en) | A kind of man-machine interaction method applied to intelligent picture frame, intelligent picture frame | |
CN107040746B (en) | Multi-video chat method and device based on voice control | |
CN113411517B (en) | Video template generation method and device, electronic equipment and storage medium | |
CN107968890A (en) | theme setting method, device, terminal device and storage medium | |
CN109286848B (en) | Terminal video information interaction method and device and storage medium | |
CN111491123A (en) | Video background processing method and device and electronic equipment | |
CN108510917A (en) | Event-handling method based on explaining device and explaining device | |
CN111583415A (en) | Information processing method and device and electronic equipment | |
CN110030704A (en) | Control method and device of air conditioner, storage medium and air conditioner | |
CN108830980A (en) | Security protection integral intelligent robot is received in Study of Intelligent Robot Control method, apparatus and attendance | |
CN110120219A (en) | A kind of intelligent sound exchange method, system and device | |
CN114007064A (en) | Special effect synchronous evaluation method, device, equipment, storage medium and program product | |
CN105797375A (en) | Method and terminal for changing role model expressions along with user facial expressions | |
CN111768729A (en) | VR scene automatic explanation method, system and storage medium | |
CN115579023A (en) | Video processing method, video processing device and electronic equipment | |
CN108040284A (en) | Radio station control method for playing back, device, terminal device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190813 |