CN107680599A - User attribute recognition method, apparatus and electronic device - Google Patents

User attribute recognition method, apparatus and electronic device

Info

Publication number
CN107680599A
CN107680599A (application CN201710898676.0A)
Authority
CN
China
Prior art keywords
user
far field
voiceprint feature
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710898676.0A
Other languages
Chinese (zh)
Inventor
高聪 (Gao Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710898676.0A
Publication of CN107680599A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification
    • G10L17/04 - Training, enrolment or model building
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Abstract

The present invention proposes a user attribute recognition method, apparatus and electronic device. The method includes: receiving speech data input by a user; determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and determining an attribute of the user according to the score. The present invention can improve the accuracy of user attribute recognition.

Description

User attribute recognition method, apparatus and electronic device
Technical field
The present invention relates to the technical field of electronic devices, and in particular to a user attribute recognition method, an apparatus and an electronic device.
Background technology
In the related art, typically only the semantics of a user's speech data are recognized.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present invention is to propose a user attribute recognition method that can improve the accuracy of user attribute recognition.
Another object of the present invention is to propose a user attribute recognition apparatus.
Another object of the present invention is to propose a non-transitory computer-readable storage medium.
Another object of the present invention is to propose a computer program product.
To achieve the above objects, an embodiment of a first aspect of the present invention proposes a user attribute recognition method, including: receiving speech data input by a user; determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and determining an attribute of the user according to the score.
In the user attribute recognition method proposed by the embodiment of the first aspect of the present invention, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
To achieve the above objects, an embodiment of a second aspect of the present invention proposes a user attribute recognition apparatus, including: a receiving module configured to receive speech data input by a user; a first determining module configured to determine a far-field voiceprint feature of the speech data and determine a score of the far-field voiceprint feature based on a preset model; and a second determining module configured to determine an attribute of the user according to the score.
In the user attribute recognition apparatus proposed by the embodiment of the second aspect of the present invention, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
To achieve the above objects, an embodiment of a third aspect of the present invention proposes a non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a user attribute recognition method, the method including: receiving speech data input by a user; determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and determining an attribute of the user according to the score.
In the non-transitory computer-readable storage medium proposed by the embodiment of the third aspect of the present invention, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
To achieve the above objects, an embodiment of a fourth aspect of the present invention proposes a computer program product. When instructions in the computer program product are executed by a processor, a user attribute recognition method is performed, the method including: receiving speech data input by a user; determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and determining an attribute of the user according to the score.
In the computer program product proposed by the embodiment of the fourth aspect of the present invention, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will become apparent in part from the following description, or may be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a user attribute recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a user attribute recognition method according to another embodiment of the present invention;
Fig. 3 is a schematic flowchart of a user attribute recognition method according to yet another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a user attribute recognition apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a user attribute recognition apparatus according to another embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are only intended to explain the present invention, and should not be construed as limiting the present invention. On the contrary, the embodiments of the present invention include all changes, modifications and equivalents falling within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of a user attribute recognition method according to an embodiment of the present invention.
In this embodiment, the user attribute recognition method is described as being configured in a user attribute recognition apparatus.
The user attribute recognition apparatus in this embodiment may be provided in a server, or may alternatively be provided in a mobile device, which is not limited in the embodiments of the present invention.
The user attribute recognition method in the embodiments of the present invention is used to recognize an attribute of a user in an application program. The attribute of the user may be, for example, gender or age, which is not limited here.
An application program here may refer to a software program running on an electronic device. The electronic device is, for example, a personal computer (PC), a cloud device or a mobile device, and the mobile device is, for example, a smartphone or a tablet computer.
It should be noted that the execution subject of the embodiments of the present invention may be, in terms of hardware, for example, a central processing unit (CPU) in a server or a mobile device, and in terms of software, for example, a background management service of an application program in a server or a mobile device, which is not limited here.
The embodiments of the present invention are illustrated by taking a voice-service application program as the application program, which is not limited here.
Referring to Fig. 1, this method includes:
S101: Receive speech data input by a user.
For example, the user may record a segment of speech data in the voice-service application program.
In the related art, typically only the semantics of a user's speech data are recognized. In the embodiments of the present invention, since the voiceprint features of different users' voices differ, the attribute of the user to whom the speech data belongs can be identified according to the far-field voiceprint feature, so that the user attribute can be recognized accurately.
The speech data may be, for example, "What is the weather like today?".
S102: Determine a far-field voiceprint feature of the speech data, and determine a score of the far-field voiceprint feature based on a preset model.
The preset model here is a preset Gaussian mixture model (GMM), and the preset model is established in advance.
The number of preset Gaussian mixture models is at least two, and the user attributes corresponding to different preset Gaussian mixture models may be the same or different.
The far-field voiceprint feature here may specifically be a Mel-frequency cepstral coefficient (MFCC) feature, which is not limited here.
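As an illustration only (not part of the original disclosure), the following minimal sketch shows one common way to extract frame-level MFCC features from a recorded utterance, assuming the librosa library is available; the file name, sampling rate and number of coefficients are illustrative assumptions.

```python
# Minimal sketch (not from the patent): extract frame-level MFCC features
# that could serve as the far-field voiceprint feature. The file name and
# parameters below are illustrative assumptions.
import librosa

def extract_voiceprint_features(wav_path, n_mfcc=20):
    # Load the recording; 16 kHz is a common sampling rate for speech.
    signal, sr = librosa.load(wav_path, sr=16000)
    # librosa returns an (n_mfcc, n_frames) matrix; transpose it so that
    # each row is one frame's MFCC vector.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

features = extract_voiceprint_features("utterance.wav")  # hypothetical file
```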
Optionally, referring to Fig. 2, determining the score of the far-field voiceprint feature based on the preset model includes:
S201: Determine a conditional probability function of the far-field voiceprint feature based on each preset model.
S202: Score the far-field voiceprint feature according to each conditional probability function to obtain a score corresponding to each preset model.
When the number of preset Gaussian mixture models is at least two, the conditional probability function of the far-field voiceprint feature based on each preset model may be determined, and then the far-field voiceprint feature is scored according to each conditional probability function to obtain a score corresponding to each preset model, that is, the score of the far-field voiceprint feature on the basis of each preset model. Since the preset models are established in advance according to the far-field voiceprint features of users' sample speech data, determining the score of the far-field voiceprint feature on the basis of each preset model allows the attribute of the user to be determined subsequently according to the score, which can improve the accuracy of user attribute recognition. Moreover, since far-field voiceprint features are relatively easy to collect, the method is simple to implement and saves hardware deployment cost.
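As an illustration only (not part of the original disclosure), the sketch below scores a feature matrix against each preset model using scikit-learn's GaussianMixture, whose score() method returns the average per-frame log-likelihood; the dictionary of fitted models and its attribute labels are assumptions rather than anything defined in the patent.

```python
# Minimal sketch (not from the patent): score far-field voiceprint features
# against each preset GMM. `models` maps an attribute label (e.g. "male",
# "female") to a fitted sklearn.mixture.GaussianMixture; names are illustrative.

def score_against_models(features, models):
    # GaussianMixture.score() returns the average per-frame log-likelihood
    # of the features under that mixture, i.e. a conditional-probability-based
    # score for the corresponding preset model.
    return {label: gmm.score(features) for label, gmm in models.items()}
```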
S103: Determine the attribute of the user according to the score.
The attribute here is gender and/or age, or an intonation labeled in the user's speech data, which is not limited here.
Optionally, among the scores corresponding to the preset models, the attribute to which the preset model corresponding to the maximum score belongs is taken as the attribute of the user.
Alternatively, among the scores corresponding to the preset models, the attributes of the preset models whose scores exceed a predetermined threshold may first be taken as candidate attributes of the user; the matching between the scores exceeding the predetermined threshold and the multiple preset models is then further evaluated, a target score is determined from them, and the attribute corresponding to the target score is taken as the attribute of the user. This is not limited here.
By directly taking, among the scores corresponding to the preset models, the attribute of the preset model corresponding to the maximum score as the attribute of the user, the method is simple and easy to implement, and the recognition effect is good.
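Continuing the hypothetical score dictionary from the sketch above, the maximum-score selection can be illustrated as follows (again an assumption-laden sketch, not the patented implementation).

```python
# Minimal sketch (not from the patent): take the attribute whose preset
# model yields the highest score as the attribute of the user.
def identify_attribute(scores):
    return max(scores, key=scores.get)

# e.g. identify_attribute({"male": -41.2, "female": -38.7}) returns "female"
```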
In this embodiment, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
Fig. 3 is a schematic flowchart of a user attribute recognition method according to yet another embodiment of the present invention.
Before the above S101, the preset model may also be established through the following steps:
S301: Collect data of users with different attributes as sample data.
S302: Determine a far-field voiceprint feature corresponding to each piece of sample data.
S303: Perform model training on Gaussian mixture models according to the corresponding far-field voiceprint features to obtain Gaussian mixture models corresponding to the different attributes, as the preset Gaussian mixture models.
Optionally, performing model training on the Gaussian mixture models according to the corresponding far-field voiceprint features includes: performing model training on the Gaussian mixture models using an expectation-maximization (EM) algorithm according to the corresponding far-field voiceprint features.
In the embodiments of the present invention, the number of Gaussian mixture models corresponding to different attributes may be more than one; further, the number of Gaussian mixture models corresponding to the same attribute may also be more than one. By training multiple preset Gaussian mixture models, feature matching can be performed based on the multiple preset Gaussian mixture models when determining the user attribute, which guarantees attribute recognition accuracy from another dimension.
By performing model training on the Gaussian mixture models using the expectation-maximization algorithm, preset models that match better can be established, thereby improving the accuracy of user attribute recognition.
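As an illustration only (not part of the original disclosure), the sketch below trains one Gaussian mixture model per attribute with scikit-learn, whose fit() method runs the expectation-maximization algorithm internally; the per-attribute feature arrays, component count and covariance type are illustrative assumptions rather than values fixed by the patent.

```python
# Minimal sketch (not from the patent): train one GMM per attribute via EM.
# `samples_by_attribute` maps an attribute label to an array of frame-level
# far-field voiceprint features of shape (n_frames, n_mfcc), pooled from that
# attribute's sample utterances; all names are illustrative assumptions.
from sklearn.mixture import GaussianMixture

def train_attribute_models(samples_by_attribute, n_components=16):
    models = {}
    for attribute, features in samples_by_attribute.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmm.fit(features)  # fit() runs expectation-maximization internally
        models[attribute] = gmm
    return models
```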
In this embodiment, data of users with different attributes are collected as sample data, a far-field voiceprint feature corresponding to each piece of sample data is determined, and model training is performed on Gaussian mixture models according to the corresponding far-field voiceprint features to obtain Gaussian mixture models corresponding to the different attributes as the preset Gaussian mixture models. By training multiple preset Gaussian mixture models, feature matching can be performed based on the multiple preset Gaussian mixture models when determining the user attribute, which guarantees attribute recognition accuracy from another dimension.
Fig. 4 is a schematic structural diagram of a user attribute recognition apparatus according to an embodiment of the present invention.
Referring to Fig. 4, the apparatus 400 includes:
a receiving module 401, configured to receive speech data input by a user;
a first determining module 402, configured to determine a far-field voiceprint feature of the speech data, and determine a score of the far-field voiceprint feature based on a preset model; and
a second determining module 403, configured to determine an attribute of the user according to the score.
Optionally, in some embodiments, referring to Fig. 5, the preset model is a preset Gaussian mixture model, and the apparatus further includes a third determining module 404, wherein
the third determining module 404 is configured to collect data of users with different attributes as sample data, determine a far-field voiceprint feature corresponding to each piece of sample data, and perform model training on Gaussian mixture models according to the corresponding far-field voiceprint features to obtain Gaussian mixture models corresponding to the different attributes as the preset Gaussian mixture models.
Optionally, in some embodiments, the third determining module 404 is specifically configured to:
perform model training on the Gaussian mixture models using an expectation-maximization algorithm according to the corresponding far-field voiceprint features.
Optionally, in some embodiments, the first determining module 402 is specifically configured to:
determine a conditional probability function of the far-field voiceprint feature based on each preset model; and
score the far-field voiceprint feature according to each conditional probability function to obtain a score corresponding to each preset model.
Optionally, in some embodiments, the second determining module 403 is specifically configured to:
take, among the scores corresponding to the preset models, the attribute to which the preset model corresponding to the maximum score belongs as the attribute of the user.
Optionally, in some embodiments, the attribute is gender and/or age.
It should be noted that the explanations of the user attribute recognition method embodiments in the foregoing Fig. 1 to Fig. 3 also apply to the user attribute recognition apparatus 400 of this embodiment; the implementation principle is similar and is not repeated here.
In this embodiment, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
In order to implement the above embodiments, the present invention further proposes a non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a user attribute recognition method, the method including:
receiving speech data input by a user;
determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and
determining an attribute of the user according to the score.
In the non-transitory computer-readable storage medium of this embodiment, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
In order to implement the above embodiments, the present invention further proposes a computer program product. When instructions in the computer program product are executed by a processor, a user attribute recognition method is performed, the method including:
receiving speech data input by a user;
determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and
determining an attribute of the user according to the score.
In the computer program product of this embodiment, speech data input by a user is received, a far-field voiceprint feature of the speech data is determined, a score of the far-field voiceprint feature is determined based on a preset model, and an attribute of the user is determined according to the score, so that the accuracy of user attribute recognition can be improved.
It should be noted that in the description of the present invention, the terms "first", "second" and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "a plurality of" means two or more.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
It should be understood that each part of the present invention may be implemented by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented by hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those skilled in the art can understand that all or part of the steps carried by the above method embodiments may be implemented by instructing relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program includes one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, descriptions with reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples" and the like mean that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (14)

1. A user attribute recognition method, characterized by comprising the following steps:
receiving speech data input by a user;
determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and
determining an attribute of the user according to the score.
2. The user attribute recognition method according to claim 1, wherein the preset model is a preset Gaussian mixture model, and the preset Gaussian mixture model is determined in the following manner:
collecting data of users with different attributes as sample data;
determining a far-field voiceprint feature corresponding to each piece of sample data; and
performing model training on Gaussian mixture models according to the corresponding far-field voiceprint features to obtain Gaussian mixture models corresponding to the different attributes, as the preset Gaussian mixture models.
3. The user attribute recognition method according to claim 2, wherein performing model training on the Gaussian mixture models according to the corresponding far-field voiceprint features comprises:
performing model training on the Gaussian mixture models using an expectation-maximization algorithm according to the corresponding far-field voiceprint features.
4. The user attribute recognition method according to claim 1, wherein determining the score of the far-field voiceprint feature based on the preset model comprises:
determining a conditional probability function of the far-field voiceprint feature based on each preset model; and
scoring the far-field voiceprint feature according to each conditional probability function to obtain a score corresponding to each preset model.
5. The user attribute recognition method according to claim 4, wherein determining the attribute of the user according to the score comprises:
taking, among the scores corresponding to the preset models, the attribute to which the preset model corresponding to the maximum score belongs as the attribute of the user.
6. The user attribute recognition method according to any one of claims 1 to 5, wherein the attribute is gender and/or age.
7. A user attribute recognition apparatus, characterized by comprising:
a receiving module, configured to receive speech data input by a user;
a first determining module, configured to determine a far-field voiceprint feature of the speech data, and determine a score of the far-field voiceprint feature based on a preset model; and
a second determining module, configured to determine an attribute of the user according to the score.
8. The user attribute recognition apparatus according to claim 7, wherein the preset model is a preset Gaussian mixture model, and the apparatus further comprises a third determining module, wherein
the third determining module is configured to collect data of users with different attributes as sample data, determine a far-field voiceprint feature corresponding to each piece of sample data, and perform model training on Gaussian mixture models according to the corresponding far-field voiceprint features to obtain Gaussian mixture models corresponding to the different attributes as the preset Gaussian mixture models.
9. The user attribute recognition apparatus according to claim 8, wherein the third determining module is specifically configured to:
perform model training on the Gaussian mixture models using an expectation-maximization algorithm according to the corresponding far-field voiceprint features.
10. The user attribute recognition apparatus according to claim 7, wherein the first determining module is specifically configured to:
determine a conditional probability function of the far-field voiceprint feature based on each preset model; and
score the far-field voiceprint feature according to each conditional probability function to obtain a score corresponding to each preset model.
11. The user attribute recognition apparatus according to claim 10, wherein the second determining module is specifically configured to:
take, among the scores corresponding to the preset models, the attribute to which the preset model corresponding to the maximum score belongs as the attribute of the user.
12. The user attribute recognition apparatus according to any one of claims 7 to 11, wherein the attribute is gender and/or age.
13. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the user attribute recognition method according to any one of claims 1 to 6 is implemented.
14. A computer program product, wherein when instructions in the computer program product are executed by a processor, a user attribute recognition method is performed, the method comprising:
receiving speech data input by a user;
determining a far-field voiceprint feature of the speech data, and determining a score of the far-field voiceprint feature based on a preset model; and
determining an attribute of the user according to the score.
CN201710898676.0A 2017-09-28 2017-09-28 User property recognition methods, device and electronic equipment Pending CN107680599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710898676.0A CN107680599A (en) 2017-09-28 2017-09-28 User property recognition methods, device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710898676.0A CN107680599A (en) 2017-09-28 2017-09-28 User property recognition methods, device and electronic equipment

Publications (1)

Publication Number Publication Date
CN107680599A true CN107680599A (en) 2018-02-09

Family

ID=61138299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710898676.0A Pending CN107680599A (en) 2017-09-28 2017-09-28 User property recognition methods, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107680599A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109841218A (en) * 2019-01-31 2019-06-04 北京声智科技有限公司 A kind of voiceprint registration method and device for far field environment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543641A * 2001-06-19 2004-11-03 Speaker recognition systems
CN101178897A (en) * 2007-12-05 2008-05-14 浙江大学 Speaking man recognizing method using base frequency envelope to eliminate emotion voice
CN102262440A (en) * 2010-06-11 2011-11-30 微软公司 Multi-modal gender recognition
CN102834842A (en) * 2010-03-23 2012-12-19 诺基亚公司 Method and apparatus for determining a user age range
CN103310788A (en) * 2013-05-23 2013-09-18 北京云知声信息技术有限公司 Voice information identification method and system
CN103943104A (en) * 2014-04-15 2014-07-23 海信集团有限公司 Voice information recognition method and terminal equipment
CN104700843A (en) * 2015-02-05 2015-06-10 海信集团有限公司 Method and device for identifying ages
CN105700682A (en) * 2016-01-08 2016-06-22 北京乐驾科技有限公司 Intelligent gender and emotion recognition detection system and method based on vision and voice
CN105761720A (en) * 2016-04-19 2016-07-13 北京地平线机器人技术研发有限公司 Interaction system based on voice attribute classification, and method thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109841218A (en) * 2019-01-31 2019-06-04 北京声智科技有限公司 A kind of voiceprint registration method and device for far field environment
CN109841218B (en) * 2019-01-31 2020-10-27 北京声智科技有限公司 Voiceprint registration method and device for far-field environment

Similar Documents

Publication Publication Date Title
CN109658938A (en) The method, apparatus of voice and text matches, equipment and computer-readable medium
CN109145204B (en) Portrait label generation and use method and system
CN111078887B (en) Text classification method and device
CN103970743B (en) A kind of recommendation method for personalized information, system and search engine in the search
CN106997342B (en) Intention identification method and device based on multi-round interaction
CN110222330B (en) Semantic recognition method and device, storage medium and computer equipment
CN110196948A (en) Content recommendation method and device, computer equipment and storage medium
CN107679564A (en) Sample data recommends method and its device
CN109308895A (en) Acoustic training model method, apparatus, equipment and computer-readable medium
US20210383793A1 (en) Information presentation device, and information presentation method
CN110070140A (en) Method and device is determined based on user's similitude of multi-class information
CN102741861B (en) Cross the image identification system of complete dictionary based on cascade
CN111243604B (en) Training method for speaker recognition neural network model supporting multiple awakening words, speaker recognition method and system
CN107844531B (en) Answer output method and device and computer equipment
CN113947693A (en) Method and device for obtaining target object recognition model and electronic equipment
CN114626380A (en) Entity identification method and device, electronic equipment and storage medium
CN111210840A (en) Age prediction method, device and equipment
CN107680599A (en) User property recognition methods, device and electronic equipment
CN111949793B (en) User intention recognition method and device and terminal equipment
CN111931491B (en) Domain dictionary construction method and device
CN109543187B (en) Method and device for generating electronic medical record characteristics and storage medium
CN111261196A (en) Age estimation method, device and equipment
CN109726726B (en) Event detection method and device in video
CN110619090A (en) Regional attraction assessment method and device
WO2022141867A1 (en) Speech recognition method and apparatus, and electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2018-02-09)