WO2019000991A1 - Voiceprint recognition method and apparatus - Google Patents
Voiceprint recognition method and apparatus
- Publication number
- WO2019000991A1 (PCT/CN2018/077359)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- command
- category
- voice
- voiceprint
- Prior art date
Classifications
- G10L15/07: Speech recognition; creation of reference templates / training; adaptation to the speaker
- G10L15/063: Speech recognition; training
- G10L15/22: Speech recognition; procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/26: Speech recognition; speech-to-text systems
- G10L17/02: Speaker identification or verification; preprocessing operations, pattern representation or modelling, feature selection or extraction
- G10L17/04: Speaker identification or verification; training, enrolment or model building
- G10L17/22: Speaker identification or verification; interactive procedures, man-machine interfaces
- G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; recognition of animal voices
- G10L2015/223: Execution procedure of a spoken command
- G10L2015/226: Procedures used during a speech recognition process using non-speech characteristics
- G10L2015/227: Procedures used during a speech recognition process using non-speech characteristics of the speaker; human-factor methodology
Definitions
- the present application relates to the field of artificial intelligence applications, and in particular, to a voiceprint recognition method and apparatus.
- Artificial Intelligence (AI) is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that responds in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. One aspect of artificial intelligence is voiceprint recognition technology.
- An advantage of voice dialogue is that it can record the user's voice.
- Everyone's voice is unique, just like a fingerprint, so each person's voice is also called a voiceprint. Through a speaker's voiceprint, it is possible to determine who the speaker is, identify the user, and extract the user's data to provide a personalized service.
- The invention is based on voiceprint technology and, in cooperation with a series of product strategies, proposes a solution to the following problems in the prior art:
- The degree of productization is low: due to single strategies and insufficient technical capability, product design is limited.
- Voiceprints are used only for extremely basic functions: even where productized, they are applied only to very narrow scenarios, for example only for waking a device with a specific voice, rather than for providing personalized services. Voiceprint technology has not been deeply productized.
- The user needs to participate in voiceprint recognition: a voiceprint training process is needed to identify the user ID, so user satisfaction is not high.
- aspects of the present application provide a voiceprint recognition method and apparatus for providing personalized service to a user.
- a voiceprint recognition method including:
- according to the acquired command voice, a voiceprint recognition method is used to identify the user category that issued the command voice;
- resources are provided according to the user category and the command.
- the user category includes user gender and user age segment.
- before the voiceprint recognition method is used to identify the user category that issued the command voice, the method further includes:
- performing model training according to the sound characteristics of different user categories, to establish voiceprint processing models for different user categories.
- the method further includes:
- collecting corpora with the colloquial features of different user types to form a corpus, performing speech recognition model training with the corpus, and obtaining a speech recognition model corresponding to each user type.
- searching, according to the user category, for a recommended interest category that matches the command; searching for a target resource that matches the recommended interest category, and presenting the target resource to the user.
- the searching, according to the user category, for a recommended interest category that matches the command includes:
- determining the current vertical class according to the command;
- obtaining, according to the current vertical class and the attribute information of the user, a recommended interest category that matches the command by using a pre-established user interest model.
- the attribute information includes at least one of a user age group and a user gender.
- the method further includes:
- Obtaining a user history log, where the user history log includes at least: a user identifier, user attribute information, and user historical behavior data;
- classifying and aggregating the user historical behavior data according to user category and vertical class, to obtain the user interest model.
- a voiceprint recognition apparatus comprising:
- a voiceprint recognition module configured to identify, according to the acquired command voice and by using a voiceprint recognition method, the user category that issued the command voice;
- a voice recognition module configured to perform voice recognition on the command voice according to the user category, by using a corresponding voice recognition model, to obtain a command described by the command voice;
- a providing module configured to provide resources according to the user category and the command.
- the user category includes user gender and user age segment.
- the voiceprint recognition module further includes:
- a voiceprint processing model establishing submodule, configured to perform model training according to the sound characteristics of different user categories and establish voiceprint processing models for different user categories.
- the voice recognition module further includes:
- a speech recognition model establishing submodule, configured to collect corpora with the colloquial features of different user types to form a corpus, perform speech recognition model training with the corpus, and obtain a speech recognition model corresponding to each user type.
- the providing module includes:
- a searching submodule configured to search, according to the user category, for a recommended interest category that matches the command;
- a presentation submodule configured to search for a target resource that matches the recommended interest category and present the target resource to the user.
- the searching submodule further includes:
- a vertical class determining submodule configured to determine the current vertical class according to the command;
- a content acquisition submodule configured to obtain a recommended interest category that matches the command by using a pre-established user interest model, according to the current vertical class and the attribute information of the user.
- With reference to the aspect described above and any possible implementation manner thereof, a further implementation manner is provided, where the attribute information includes at least one of a user age segment and a user gender.
- the searching submodule further includes a user interest model establishing submodule, configured to:
- Obtaining a user history log, where the user history log includes at least: a user identifier, user attribute information, and user historical behavior data;
- classifying and aggregating the user historical behavior data according to user category and vertical class, to obtain the user interest model.
- an apparatus comprising:
- one or more processors;
- a storage device for storing one or more programs;
- the one or more programs are executed by the one or more processors such that the one or more processors implement any of the methods described above.
- a computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements any of the above methods.
- the recommendation strategy is more complete and the recommendations are more accurate, improving user satisfaction; even when a recognition error occasionally produces a wrong recommendation, the user does not obviously perceive it; and the technical requirements for productization are lowered.
- FIG. 1 is a schematic flow chart of a voiceprint recognition method according to an embodiment of the present application.
- FIG. 2 is a schematic flowchart of the step of searching, according to the user category, for a recommended interest category that matches the command according to an embodiment of the present invention;
- FIG. 3 is a schematic structural diagram of a voiceprint recognition apparatus according to an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a search module of a voiceprint recognition apparatus according to an embodiment of the present disclosure
- FIG. 5 is a block diagram of an exemplary computer system/server suitable for use in implementing embodiments of the present invention.
- FIG. 1 is a schematic flowchart of a method for identifying a voiceprint according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
- the voiceprint recognition method is used to identify the user category that issued the command voice.
- the user category includes a user gender and a user age group.
- Model training can be performed according to the sound characteristics of different user categories, and voiceprint processing models for different user categories can be established, to achieve voiceprint analysis targeting user groups of different user categories.
- The voiceprint recognition method can then be used to identify the gender and age information of the user who issued the command voice.
- The voiceprint of the speaker first needs to be modeled, that is, "trained" or "learned". Specifically, a first feature vector of each voice in a training set is extracted by applying a deep neural network (DNN) voiceprint baseline system; a gender classifier and an age classifier are then trained according to the first feature vector of each voice and pre-labeled gender and age-segment labels, thus establishing a voiceprint processing model that distinguishes gender and age.
- The gender classifier and the age classifier analyze the first feature information of a command voice to obtain its gender tag and age-segment tag, that is, the gender tag and age-segment tag of the command voice.
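- As a rough illustration of the classifier-training step above, the following sketch trains the two classifiers on DNN-extracted feature vectors. The embedding dimension, label scheme, and choice of logistic regression are assumptions for illustration (the patent does not fix a classifier type), and random stand-in data replaces a real training set:

```python
# Sketch: train gender and age-segment classifiers on DNN feature vectors.
# Random stand-in data; in practice X holds d-vectors from the DNN baseline
# and the labels come from the pre-labeled training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))          # stand-in d-vectors (200 voices, 256-dim)
gender_y = rng.integers(0, 2, size=200)  # 0 = female, 1 = male (pre-labeled)
age_y = rng.integers(0, 4, size=200)     # 0=child, 1=youth, 2=adult, 3=elderly

gender_clf = LogisticRegression(max_iter=1000).fit(X, gender_y)
age_clf = LogisticRegression(max_iter=1000).fit(X, age_y)

# At recognition time, the command voice's feature vector receives both tags:
cmd_vec = rng.normal(size=(1, 256))
user_category = (gender_clf.predict(cmd_vec)[0], age_clf.predict(cmd_vec)[0])
print(user_category)
```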
- Alternatively, the fundamental frequency feature and the Mel-frequency cepstral coefficient (MFCC) feature can first be extracted from the speech request, and a posterior probability value can then be calculated for these features based on a Gaussian mixture model, with the gender of the user determined according to the calculation result. For example, if the Gaussian mixture model is a male Gaussian mixture model and the calculated posterior probability value is high (e.g., greater than a certain threshold), the gender of the user may be determined to be male; when the posterior probability value is small (e.g., less than a certain threshold), the gender of the user may be determined to be female.
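- A minimal sketch of this pitch-plus-MFCC decision follows. Feature extraction uses librosa; instead of thresholding a posterior under a single male GMM as described above, it compares average log-likelihoods under male and female GMMs, a common equivalent realization. Both models are assumed to be pre-trained, and the sampling rate and pitch range are illustrative:

```python
# Sketch: GMM-based gender decision from fundamental frequency + MFCC features.
# male_gmm / female_gmm are assumed pre-trained sklearn GaussianMixture models.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_features(wav_path: str) -> np.ndarray:
    """Per-frame fundamental frequency (F0) stacked with 13 MFCCs."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0 = librosa.yin(y, fmin=50, fmax=500, sr=sr)       # pitch track, (frames,)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames)
    n = min(len(f0), mfcc.shape[1])
    return np.vstack([f0[:n], mfcc[:, :n]]).T           # (frames, 14)

def classify_gender(feats: np.ndarray,
                    male_gmm: GaussianMixture,
                    female_gmm: GaussianMixture) -> str:
    """Higher average log-likelihood wins; equivalent to a posterior
    threshold when both genders are equally likely a priori."""
    return "male" if male_gmm.score(feats) > female_gmm.score(feats) else "female"
```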
- The user voiceprint ID of the user who issued the command voice can be further identified.
- Each user's voice will have a unique voiceprint ID that records personal data such as the user's name, gender, age, and hobbies.
- The voiceprint feature of the user's command voice is extracted and matched one by one against the registered voiceprint models pre-stored in the cloud. If the matching value is greater than a threshold, the user voiceprint ID of the user is determined; if the matching value is less than the threshold, it is determined that the user is not registered.
- The voiceprint feature is a d-vector feature, that is, a feature extracted by a deep neural network (DNN), specifically the output of the last hidden layer of the DNN.
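- The one-by-one matching might look like the following sketch. Cosine similarity is a common choice for comparing d-vectors, though the patent does not name the scoring function, and the 0.7 threshold is illustrative only:

```python
# Sketch: match a command voice's d-vector against registered voiceprints.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_voiceprint(d_vector: np.ndarray,
                     registry: dict[str, np.ndarray],
                     threshold: float = 0.7) -> str | None:
    """Return the best-matching user voiceprint ID, or None if unregistered."""
    best_id, best_score = None, -1.0
    for user_id, enrolled in registry.items():
        score = cosine(d_vector, enrolled)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score > threshold else None
```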
- a voice recognition model corresponding to the user category is used to perform voice recognition on the command voice to obtain a command described by the command voice.
- Specifically, the voice information of the command voice can be recognized as text information, and corresponding operations can then be performed according to the text information.
- Corpora with the colloquial features of different user types are collected to form corpora, and the corpora are used to train speech recognition models, obtaining a speech recognition model corresponding to each user type.
- For example, speech with the colloquial characteristics of children can be collected to form a corpus, and the corpus can then be used for model training to obtain a child speech recognition model.
- the child's colloquial characteristics mentioned here may specifically include repeated words, unclear words, and broken sentences.
- For the case where the user category is a child user, the child mode is automatically turned on, and a voice interaction mode suited to children's habitual conversation patterns can be used, with content screened and optimized for children.
- Interaction in child mode should be specially designed to match children's dialogue habits.
- The TTS broadcast voice can be that of a child or a young woman, to bring the device closer to the child, and a warmer, more expressive delivery can be used so that broadcasts sound more comfortable to the child. Children's chat content can be designed around the topics children often raise, so that the device grows up with the child.
- Child mode requires that all content resources be carefully screened to remove pornographic and violent content. All content such as music, audio, movies, and television must be precisely tailored to children's needs: music should be mostly children's songs, audio mostly children's stories, movies mostly animated films, and television mostly cartoons.
- Resources are provided according to the user category and the command.
- Specifically, a recommended interest category matching the command is searched for according to the user category; then a target resource matching the recommended interest category is searched for, and the target resource is presented to the user.
- The searching, according to the user category, for a recommended interest category that matches the command, as shown in FIG. 2, includes the following sub-steps:
- the current vertical class is determined according to the command, where the current vertical class includes music, audiobooks, radio stations, broadcasts, video, movies, food, chat, and the like (a toy sketch of this step appears after this list);
- a recommended interest category that matches the command is obtained according to the current vertical class and the attribute information of the user, by using a pre-established user interest model.
- the attribute information includes at least one of an age group, a gender, and interest information.
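- The patent does not specify how the vertical class is derived from the command, so the following stand-in uses toy keyword rules; the keyword lists and the chat fallback are illustrative assumptions:

```python
# Sketch: toy vertical-class determination from recognized command text.
VERTICAL_KEYWORDS = {
    "music": ["song", "music", "sing"],
    "movie": ["movie", "film"],
    "audiobook": ["story", "audiobook"],
}

def determine_vertical(command_text: str) -> str:
    text = command_text.lower()
    for vertical, keywords in VERTICAL_KEYWORDS.items():
        if any(k in text for k in keywords):
            return vertical
    return "chat"  # fallback vertical for unmatched commands

print(determine_vertical("play a song"))  # -> "music"
```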
- the user interest model is pre-established, including:
- Obtaining a user history log, where the user history log includes at least: a user identifier, user attribute information, and user historical behavior data;
- The user historical behavior data is classified and aggregated according to user category and vertical class to obtain the user interest model.
- Specifically, user history logs of a large number of users at a preset time granularity (for example, two months, four months, or half a year) can be obtained. Because user behavior is habitual, a large number of user history logs can be used to derive the specific behaviors of different user categories under specific vertical classes, that is, user interest preferences. In other words, the user historical behavior data is classified and aggregated according to user category and vertical class to obtain the user interest model.
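- This classify-and-aggregate step can be pictured as a group-by over the log, as in the sketch below; the column names and toy rows are assumptions, since the patent only names the three log fields:

```python
# Sketch: build a user interest model by aggregating historical behavior
# per (user category, vertical class). Toy data for illustration.
import pandas as pd

logs = pd.DataFrame({
    "user_id":        ["u1", "u2", "u3", "u4"],
    "age_segment":    ["child", "child", "adult", "elderly"],
    "gender":         ["M", "F", "M", "F"],
    "vertical_class": ["video", "video", "movie", "music"],
    "item_category":  ["animation", "animation", "action", "opera"],
})

# The most frequent item category per group becomes the recommended
# interest category for that (age segment, gender, vertical class).
interest_model = (
    logs.groupby(["age_segment", "gender", "vertical_class"])["item_category"]
        .agg(lambda s: s.mode().iloc[0])
        .to_dict()
)
print(interest_model[("child", "M", "video")])  # -> "animation"
```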
- The user interest model can be used to determine the recommendation strategy; the recommendation strategies for the different vertical classes of music, audiobooks, radio stations, broadcasts, video, movies, food, chat, and so on include the user age segment and gender dimensions. That is, according to the current user category and the current vertical class, the recommended interest category associated with them is determined using the user interest model.
- For example, videos watched by users in the children's age segment in the video vertical class include animated videos such as My Little Pony, Dora the Explorer, and Peppa Pig; by mining the historical behavior of users in this age segment, it can be determined that the recommended interest category of this age segment in the video vertical class is animated video.
- Alternatively, the recommended content associated with the current user and the current vertical class is determined according to the current user category, using the user interest model corresponding to the user voiceprint ID.
- Specifically, the user historical behavior data corresponding to the user voiceprint ID is obtained according to the user voiceprint ID, and that data is classified and aggregated by vertical class to obtain the user interest model.
- a target resource matching the recommended interest category is searched in a multimedia resource library, and the target resource is presented to the user.
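- Presenting a matching target resource then reduces to a filtered lookup over the multimedia resource library, as in this sketch (the library contents and field names are illustrative, not from the patent):

```python
# Sketch: find target resources matching a recommended interest category.
MEDIA_LIBRARY = [
    {"title": "Children's Song Medley", "vertical": "music", "category": "children's songs"},
    {"title": "Classic Opera Hour",     "vertical": "music", "category": "opera"},
    {"title": "Animated Feature",       "vertical": "movie", "category": "animation"},
]

def find_target_resources(vertical: str, interest_category: str) -> list[dict]:
    return [r for r in MEDIA_LIBRARY
            if r["vertical"] == vertical and r["category"] == interest_category]

print(find_target_resources("music", "opera"))  # -> [{'title': 'Classic Opera Hour', ...}]
```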
- In the music vertical class, when the user makes a generalized request such as "play a song": if the user is recognized as an elderly person, music such as opera is recommended; if recognized as a child, children's songs can be played; age and gender can also be combined, recommending different types of children's songs for little boys and little girls.
- In the movie vertical class, when the user makes a generalized request such as "play a movie": if the user is recognized as male, the latest and most popular action movies and the like are recommended; if recognized as female, romance movies are recommended; if recognized as a child, animated movies are recommended.
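- Taken together, the two examples above amount to a small lookup from (vertical class, user category) to a recommended category; a toy rendering, with illustrative keys and fallback:

```python
# Sketch: toy recommendation strategy keyed by (vertical class, user category).
RECOMMENDATION_STRATEGY = {
    ("music", "elderly"): "opera",
    ("music", "child"):   "children's songs",
    ("movie", "male"):    "action movies",
    ("movie", "female"):  "romance movies",
    ("movie", "child"):   "animated movies",
}

def recommend(vertical: str, user_category: str) -> str:
    # Fall back to a generic category when no rule matches.
    return RECOMMENDATION_STRATEGY.get((vertical, user_category), "popular picks")

print(recommend("music", "elderly"))  # -> "opera"
```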
- The voiceprint recognition process is implicit: there is no voiceprint enrollment or user identification procedure that the user must explicitly take part in. The user's natural speech is processed as ordinary command voice, while the work of voiceprint recognition is completed in the background.
- Age and gender are added to the recommendation strategy, the recommendation strategy is more complete, and the recommendation is more precise, thus improving user satisfaction.
- FIG. 3 is a schematic structural diagram of a voiceprint recognition apparatus according to another embodiment of the present invention. As shown in FIG. 3, the apparatus includes a voiceprint recognition module 301, a voice recognition module 302, and a providing module 303.
- the voiceprint recognition module 301 is configured to identify a user category for issuing a command voice according to the acquired command voice by using a voiceprint recognition method.
- the user category includes a user gender and a user age group.
- The voiceprint recognition module 301 further includes a voiceprint processing model establishing submodule, configured to perform model training according to the sound characteristics of different user categories and establish voiceprint processing models for different user categories, to achieve voiceprint analysis for user groups of different user categories.
- The voiceprint recognition method can be used to identify the gender and age information of the user who issued the command voice.
- The voiceprint of the speaker first needs to be modeled, that is, "trained" or "learned". Specifically, a first feature vector of each voice in a training set is extracted by applying a deep neural network (DNN) voiceprint baseline system; a gender classifier and an age classifier are then trained according to the first feature vector of each voice and pre-labeled gender and age-segment labels. Thus, a voiceprint processing model that distinguishes gender and age is established.
- The gender classifier and the age classifier analyze the first feature information of a command voice to obtain its gender tag and age-segment tag, that is, the gender tag and age-segment tag of the command voice.
- Alternatively, the fundamental frequency feature and the Mel-frequency cepstral coefficient (MFCC) feature can first be extracted from the speech request, and a posterior probability value can then be calculated for these features based on a Gaussian mixture model, with the gender of the user determined according to the calculation result. For example, if the Gaussian mixture model is a male Gaussian mixture model and the calculated posterior probability value is high (e.g., greater than a certain threshold), the gender of the user may be determined to be male; when the posterior probability value is small (e.g., less than a certain threshold), the gender of the user may be determined to be female.
- The user voiceprint ID of the user who issued the command voice can be further identified.
- Each user's voice will have a unique voiceprint ID that records personal data such as the user's name, gender, age, and hobbies.
- The voiceprint feature of the user's command voice is extracted and matched one by one against the registered voiceprint models pre-stored in the cloud. If the matching value is greater than a threshold, the user voiceprint ID of the user is determined; if the matching value is less than the threshold, it is determined that the user is not registered.
- The voiceprint feature is a d-vector feature, that is, a feature extracted by a deep neural network (DNN), specifically the output of the last hidden layer of the DNN.
- the voice recognition module 302 is configured to perform voice recognition on the command voice by using a voice recognition model corresponding to the user category according to the user category, to obtain a command described by the command voice.
- The speech recognition module 302 further includes a speech recognition model establishing submodule for pre-establishing speech recognition models for different user categories.
- Corpora with the colloquial features of different user types are collected to form corpora, and the corpora are used to train speech recognition models, obtaining a speech recognition model corresponding to each user type.
- For example, speech with the colloquial characteristics of children can be collected to form a corpus, and the corpus can then be used for model training to obtain a child speech recognition model.
- the child's colloquial characteristics mentioned here may specifically include repeated words, unclear words, and broken sentences.
- A child guidance module is further configured to automatically turn on child mode when the user category is a child user; a voice interaction mode suited to children's conversation patterns can then be used, with content interaction screened and optimized for children.
- Interaction in child mode should be specially designed to match children's dialogue habits.
- The TTS broadcast voice can be that of a child or a young woman, to bring the device closer to the child, and a warmer, more expressive delivery can be used so that broadcasts sound more comfortable to the child. Children's chat content can be designed around the topics children often raise, so that the device grows up with the child.
- Child mode requires that all content resources be carefully screened to remove pornographic and violent content. All content such as music, audio, movies, and television must be precisely tailored to children's needs: music should be mostly children's songs, audio mostly children's stories, movies mostly animated films, and television mostly cartoons.
- the providing module 303 is configured to provide resources according to the user category and the command; specifically, the method includes:
- a presentation submodule configured to search for a target resource that matches the recommended interest category and present the target resource to the user.
- the searching submodule is configured to search for a recommended interest category that matches the command according to the user category.
- a vertical class determining submodule 401 configured to determine the current vertical class according to the command, where the current vertical class includes music, audiobooks, radio stations, broadcasts, video, movies, food, chat, and the like;
- a content acquisition submodule 402 configured to obtain a recommended interest category that matches the command by using a pre-established user interest model, according to the current vertical class and the attribute information of the user.
- the attribute information includes at least one of an age group, a gender, and interest information.
- A user interest model establishing submodule 403 is further configured to pre-establish the user interest model, including:
- Obtaining a user history log, where the user history log includes at least: a user identifier, user attribute information, and user historical behavior data;
- classifying and aggregating the user historical behavior data according to user category and vertical class, to obtain the user interest model.
- User history logs of a large number of users at a preset time granularity (for example, two months, four months, or half a year) can be obtained.
- The user historical behavior data is classified and aggregated according to user category and vertical class to obtain the user interest model.
- The user interest model can be used to determine the recommendation strategy; the recommendation strategies for the different vertical classes of music, audiobooks, radio stations, broadcasts, video, movies, food, chat, and so on include the user age segment and gender dimensions. That is, according to the current user category and the current vertical class, the recommended interest category associated with them is determined using the user interest model.
- For example, videos watched by users in the children's age segment in the video vertical class include animated videos such as My Little Pony, Dora the Explorer, and Peppa Pig; by mining the historical behavior of users in this age segment, it can be determined that the recommended interest category of this age segment in the video vertical class is animated video.
- Alternatively, the recommended content associated with the current user and the current vertical class is determined according to the current user category, using the user interest model corresponding to the user voiceprint ID.
- Specifically, the user historical behavior data corresponding to the user voiceprint ID is obtained according to the user voiceprint ID, and that data is classified and aggregated by vertical class to obtain the user interest model.
- the presentation sub-module is configured to search a multimedia resource library for a target resource that matches the recommended interest category, and present the target resource to the user.
- In the music vertical class, when the user makes a generalized request such as "play a song": if the user is recognized as an elderly person, music such as opera is recommended; if recognized as a child, children's songs can be played; age and gender can also be combined, recommending different types of children's songs for little boys and little girls.
- In the movie vertical class, when the user makes a generalized request such as "play a movie": if the user is recognized as male, the latest and most popular action movies and the like are recommended; if recognized as female, romance movies are recommended; if recognized as a child, animated movies are recommended.
- The voiceprint recognition process is implicit: there is no voiceprint enrollment or user identification procedure that the user must explicitly take part in. The user's natural speech is processed as ordinary command voice, while the work of voiceprint recognition is completed in the background.
- Age and gender are added to the recommendation strategy, the recommendation strategy is more complete, and the recommendation is more precise, thus improving user satisfaction.
- the disclosed methods and apparatus may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the unit is only a logical function division.
- there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
- FIG. 5 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the present invention.
- the computer system/server 012 shown in FIG. 5 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
- computer system/server 012 is represented in the form of a general purpose computing device.
- Components of computer system/server 012 may include, but are not limited to, one or more processors or processing units 016, system memory 028, and bus 018 that connects different system components, including system memory 028 and processing unit 016.
- Bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
- These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
- Computer system/server 012 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by computer system/server 012, including volatile and non-volatile media, removable and non-removable media.
- System memory 028 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 030 and/or cache memory 032.
- Computer system/server 012 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
- storage system 034 can be used to read and write non-removable, non-volatile magnetic media (not shown in Figure 5, commonly referred to as a "hard disk drive").
- Although not shown in FIG. 5, a disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g., a CD-ROM or DVD-ROM), may also be provided.
- each drive can be coupled to bus 018 via one or more data medium interfaces.
- Memory 028 can include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of various embodiments of the present invention.
- A program/utility 040 having a set (at least one) of program modules 042 may be stored, for example, in memory 028. Such program modules 042 include, but are not limited to, an operating system, one or more applications, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
- Program module 042 typically performs the functions and/or methods of the embodiments described herein.
- The computer system/server 012 can also communicate with one or more external devices 014 (e.g., a keyboard, a pointing device, a display 024, etc.); in the present invention, the computer system/server 012 may communicate with an external radar device, with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication can take place via an input/output (I/O) interface 022.
- computer system/server 012 can also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) via network adapter 020.
- network adapter 020 communicates with other modules of computer system/server 012 via bus 018.
- Other hardware and/or software modules may be utilized in conjunction with computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
- Processing unit 016 performs the functions and/or methods of the described embodiments of the present invention by running a program stored in system memory 028.
- The computer program described above may be provided in a computer storage medium encoded with a computer program that, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations of the embodiments of the invention described above.
- With the passage of time and the development of technology, the transmission of computer programs is no longer limited to tangible media; they can also be downloaded directly from a network. Any combination of one or more computer readable media can be utilized.
- the computer readable medium can be a computer readable signal medium or a computer readable storage medium.
- the computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
- a computer readable storage medium can be any tangible medium that can contain or store a program, which can be used by or in connection with an instruction execution system, apparatus or device.
- a computer readable signal medium may include a data signal that is propagated in the baseband or as part of a carrier, carrying computer readable program code. Such propagated data signals can take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- A computer readable signal medium can also be any computer readable medium other than a computer readable storage medium, which can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium can be transmitted by any suitable medium, including but not limited to wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for performing the operations of the present invention may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
Claims (18)
- 1. A voiceprint recognition method, characterized by comprising: identifying, according to an acquired command voice and by using a voiceprint recognition method, the user category that issued the command voice; performing, according to the user category, voice recognition on the command voice by using a corresponding voice recognition model, to obtain the command described by the command voice; and providing resources according to the user category and the command.
- 2. The voiceprint recognition method according to claim 1, characterized in that the user category includes user gender and user age segment.
- 3. The voiceprint recognition method according to claim 1 or 2, characterized in that before identifying, according to the acquired command voice and by using a voiceprint recognition method, the user category that issued the command voice, the method further comprises: performing model training according to the sound characteristics of different user categories, to establish voiceprint processing models for different user categories.
- 4. The voiceprint recognition method according to claim 1, 2 or 3, characterized in that before performing, according to the user category, voice recognition on the command voice by using a corresponding voice recognition model to obtain the command described by the command voice, the method further comprises: collecting corpora with the colloquial features of different user types to form a corpus, and using the corpus to train speech recognition models, to obtain a speech recognition model corresponding to each user type.
- 5. The voiceprint recognition method according to any one of claims 1 to 4, characterized in that providing resources according to the user category and the command comprises: searching, according to the user category, for a recommended interest category matching the command; and searching for a target resource matching the recommended interest category and presenting the target resource to the user.
- 6. The voiceprint recognition method according to claim 5, characterized in that searching, according to the user category, for a recommended interest category matching the command comprises: determining the current vertical class according to the command; and obtaining, according to the current vertical class and attribute information of the user, the recommended interest category matching the command by using a pre-established user interest model.
- 7. The voiceprint recognition method according to claim 6, characterized in that the attribute information includes at least one of a user age segment and a user gender.
- 8. The voiceprint recognition method according to claim 5, characterized in that before searching, according to the user category, for a recommended interest category matching the command, the method further comprises: obtaining a user history log, where the user history log includes at least a user identifier, user attribute information, and user historical behavior data; and classifying and aggregating the user historical behavior data according to user category and vertical class, to obtain the user interest model.
- 9. A voiceprint recognition apparatus, characterized by comprising: a voiceprint recognition module configured to identify, according to an acquired command voice and by using a voiceprint recognition method, the user category that issued the command voice; a voice recognition module configured to perform, according to the user category, voice recognition on the command voice by using a corresponding voice recognition model, to obtain the command described by the command voice; and a providing module configured to provide resources according to the user category and the command.
- 10. The voiceprint recognition apparatus according to claim 9, characterized in that the user category includes user gender and user age segment.
- 11. The voiceprint recognition apparatus according to claim 9 or 10, characterized in that the voiceprint recognition module further comprises: a voiceprint processing model establishing submodule configured to perform model training according to the sound characteristics of different user categories, to establish voiceprint processing models for different user categories.
- 12. The voiceprint recognition apparatus according to claim 9, 10 or 11, characterized in that the voice recognition module further comprises: a speech recognition model establishing submodule configured to collect corpora with the colloquial features of different user types to form a corpus, and to use the corpus to train speech recognition models, to obtain a speech recognition model corresponding to each user type.
- 13. The voiceprint recognition apparatus according to any one of claims 9 to 12, characterized in that the providing module comprises: a searching submodule configured to search, according to the user category, for a recommended interest category matching the command; and a presentation submodule configured to search for a target resource matching the interest category and present the target resource to the user.
- 14. The voiceprint recognition apparatus according to claim 13, characterized in that the searching submodule comprises: a vertical class determining submodule configured to determine the current vertical class according to the command; and a content acquisition submodule configured to obtain, according to the current vertical class and attribute information of the user, the recommended interest category matching the command by using a pre-established user interest model.
- 15. The voiceprint recognition apparatus according to claim 14, characterized in that the attribute information includes at least one of a user age segment and a user gender.
- 16. The voiceprint recognition apparatus according to claim 14, characterized in that the searching submodule further comprises a user interest model establishing submodule configured to: obtain a user history log, where the user history log includes at least a user identifier, user attribute information, and user historical behavior data; and classify and aggregate the user historical behavior data according to user category and vertical class, to obtain the user interest model.
- 17. A device, characterized in that the device comprises: one or more processors; and a storage apparatus for storing one or more programs, where when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-8.
- 18. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018546525A JP6711500B2 (ja) | 2017-06-30 | 2018-02-27 | Voiceprint identification method and apparatus |
US16/300,444 US11302337B2 (en) | 2017-06-30 | 2018-02-27 | Voiceprint recognition method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710525251.5A CN107507612B (zh) | 2017-06-30 | 2017-06-30 | Voiceprint recognition method and apparatus |
CN201710525251.5 | 2017-06-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019000991A1 true WO2019000991A1 (zh) | 2019-01-03 |
Family
ID=60679818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/077359 WO2019000991A1 (zh) | 2017-06-30 | 2018-02-27 | 一种声纹识别方法及装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11302337B2 (zh) |
JP (1) | JP6711500B2 (zh) |
CN (1) | CN107507612B (zh) |
WO (1) | WO2019000991A1 (zh) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN107507612B (zh) | 2017-06-30 | 2020-08-28 | 百度在线网络技术(北京)有限公司 | Voiceprint recognition method and apparatus |
- CN108305623A (zh) | 2018-01-15 | 2018-07-20 | 珠海格力电器股份有限公司 | Electrical appliance control method and apparatus |
- CN110046898B (zh) | 2018-01-17 | 2022-01-25 | 苏州君林智能科技有限公司 | Account information grouping method and apparatus, and payment method and apparatus |
- CN108492836A (zh) | 2018-03-29 | 2018-09-04 | 努比亚技术有限公司 | Voice-based search method, mobile terminal, and storage medium |
- CN108899033B (zh) | 2018-05-23 | 2021-09-10 | 出门问问信息科技有限公司 | Method and apparatus for determining speaker characteristics |
- CN110619870B (zh) | 2018-06-04 | 2022-05-06 | 佛山市顺德区美的电热电器制造有限公司 | Human-machine dialogue method and apparatus, household appliance, and computer storage medium |
- CN108881649B (zh) | 2018-06-08 | 2020-11-13 | 百度在线网络技术(北京)有限公司 | Method and apparatus for providing voice services |
- CN108737872A (zh) | 2018-06-08 | 2018-11-02 | 百度在线网络技术(北京)有限公司 | Method and apparatus for outputting information |
- CN108882014A (zh) | 2018-06-13 | 2018-11-23 | 成都市极米科技有限公司 | Management method and apparatus for a smart TV children's desktop, and readable storage medium |
- CN108962223A (zh) | 2018-06-25 | 2018-12-07 | 厦门快商通信息技术有限公司 | Deep-learning-based voice gender recognition method, device, and medium |
- CN108831487B (zh) | 2018-06-28 | 2020-08-18 | 深圳大学 | Voiceprint recognition method, electronic apparatus, and computer-readable storage medium |
- CN108924218B (zh) | 2018-06-29 | 2020-02-18 | 百度在线网络技术(北京)有限公司 | Method and apparatus for pushing information |
- CN108933730A (zh) | 2018-06-29 | 2018-12-04 | 百度在线网络技术(北京)有限公司 | Information pushing method and apparatus |
- CN109271585B (zh) | 2018-08-30 | 2021-06-01 | 广东小天才科技有限公司 | Information pushing method and tutoring device |
- CN109119071A (zh) | 2018-09-26 | 2019-01-01 | 珠海格力电器股份有限公司 | Training method and apparatus for a speech recognition model |
- CN118503532A (zh) | 2018-10-02 | 2024-08-16 | 松下电器(美国)知识产权公司 | Information providing method |
- CN109582822A (zh) | 2018-10-19 | 2019-04-05 | 百度在线网络技术(北京)有限公司 | Music recommendation method and apparatus based on user voice |
- CN111290570A (zh) | 2018-12-10 | 2020-06-16 | 中国移动通信集团终端有限公司 | Control method, apparatus, device, and medium for an artificial intelligence device |
- CN109462603A (zh) | 2018-12-14 | 2019-03-12 | 平安城市建设科技(深圳)有限公司 | Blind-detection-based voiceprint authentication method, device, storage medium, and apparatus |
- CN109412405A (zh) | 2018-12-24 | 2019-03-01 | 珠海格力电器股份有限公司 | Electromagnetic radiation adjustment method, apparatus, and system, and home appliance |
- CN109671438A (zh) | 2019-01-28 | 2019-04-23 | 武汉恩特拉信息技术有限公司 | Apparatus and method for providing auxiliary services by voice |
- CN111724797A (zh) | 2019-03-22 | 2020-09-29 | 比亚迪股份有限公司 | Voice control method and system based on image and voiceprint recognition, and vehicle |
- CN111859008B (zh) | 2019-04-29 | 2023-11-10 | 深圳市冠旭电子股份有限公司 | Music recommendation method and terminal |
- CN110166560B (zh) | 2019-05-24 | 2021-08-20 | 北京百度网讯科技有限公司 | Service configuration method, apparatus, device, and storage medium |
- CN110570843B (zh) | 2019-06-28 | 2021-03-05 | 北京蓦然认知科技有限公司 | User speech recognition method and apparatus |
- US11257493B2 (en) | 2019-07-11 | 2022-02-22 | Soundhound, Inc. | Vision-assisted speech processing |
- CN112331193B (zh) | 2019-07-17 | 2024-08-09 | 华为技术有限公司 | Voice interaction method and related apparatus |
- CN110336723A (zh) | 2019-07-23 | 2019-10-15 | 珠海格力电器股份有限公司 | Control method and apparatus for a smart home appliance, and smart home appliance device |
- JP6977004B2 (ja) | 2019-08-23 | 2021-12-08 | サウンドハウンド,インコーポレイテッド | In-vehicle device, and method and program for processing utterances |
- CN110600033B (zh) | 2019-08-26 | 2022-04-05 | 北京大米科技有限公司 | Learning status evaluation method and apparatus, storage medium, and electronic device |
- CN110534099B (zh) | 2019-09-03 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Voice wake-up processing method and apparatus, storage medium, and electronic device |
- CN110689886B (zh) | 2019-09-18 | 2021-11-23 | 深圳云知声信息技术有限公司 | Device control method and apparatus |
- CN112581950A (zh) | 2019-09-29 | 2021-03-30 | 广东美的制冷设备有限公司 | Voice control method and apparatus for an air conditioner, and storage medium |
- CN112735398B (zh) | 2019-10-28 | 2022-09-06 | 思必驰科技股份有限公司 | Human-machine dialogue mode switching method and system |
- CN110753254A (zh) | 2019-10-30 | 2020-02-04 | 四川长虹电器股份有限公司 | Voiceprint registration method for voiceprint payment on smart voice TVs |
- CN110660393B (zh) | 2019-10-31 | 2021-12-03 | 广东美的制冷设备有限公司 | Voice interaction method, apparatus, device, and storage medium |
- CN111023470A (zh) | 2019-12-06 | 2020-04-17 | 厦门快商通科技股份有限公司 | Air conditioner temperature adjustment method, medium, device, and apparatus |
- CN111081249A (zh) | 2019-12-30 | 2020-04-28 | 腾讯科技(深圳)有限公司 | Mode selection method and apparatus, and computer-readable storage medium |
- CN111274819A (zh) | 2020-02-13 | 2020-06-12 | 北京声智科技有限公司 | Resource acquisition method and apparatus |
- CN111489756B (zh) | 2020-03-31 | 2024-03-01 | 中国工商银行股份有限公司 | Voiceprint recognition method and apparatus |
- CN112002346A (zh) | 2020-08-20 | 2020-11-27 | 深圳市卡牛科技有限公司 | Voice-based gender and age recognition method, apparatus, device, and storage medium |
- CN112163081B (zh) | 2020-10-14 | 2024-08-27 | 网易(杭州)网络有限公司 | Label determination method, apparatus, medium, and electronic device |
- CN114449312A (zh) | 2020-11-04 | 2022-05-06 | 深圳Tcl新技术有限公司 | Video playback control method and apparatus, terminal device, and storage medium |
- CN112584238A (zh) | 2020-12-09 | 2021-03-30 | 深圳创维-Rgb电子有限公司 | Film and television resource matching method and apparatus, and smart TV |
- CN113938755A (zh) | 2021-09-18 | 2022-01-14 | 海信视像科技股份有限公司 | Server, terminal device, and resource recommendation method |
- CN113948084A (zh) | 2021-12-06 | 2022-01-18 | 北京声智科技有限公司 | Voice data processing method, apparatus, electronic device, storage medium, and product |
- CN114155845A (zh) | 2021-12-13 | 2022-03-08 | 中国农业银行股份有限公司 | Service determination method and apparatus, electronic device, and storage medium |
- CN116994565B (zh) | 2023-09-26 | 2023-12-15 | 深圳琪乐科技有限公司 | Intelligent voice assistant and voice control method therefor |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040190688A1 (en) * | 2003-03-31 | 2004-09-30 | Timmins Timothy A. | Communications methods and systems using voiceprints |
JP2003115951A (ja) | 2001-10-09 | 2003-04-18 | Casio Comput Co Ltd | Topic information providing system and topic information providing method |
KR100755678B1 (ko) * | 2005-10-28 | 2007-09-05 | 삼성전자주식회사 | Apparatus and method for detecting named entities |
ATE439665T1 (de) * | 2005-11-25 | 2009-08-15 | Swisscom Ag | Method for personalizing a service |
US20110060587A1 (en) * | 2007-03-07 | 2011-03-10 | Phillips Michael S | Command and control utilizing ancillary information in a mobile voice-to-speech application |
JP2009271785A (ja) | 2008-05-08 | 2009-11-19 | Nippon Telegr & Teleph Corp <Ntt> | Information providing method and apparatus, and computer-readable recording medium |
US20120042020A1 (en) * | 2010-08-16 | 2012-02-16 | Yahoo! Inc. | Micro-blog message filtering |
US8930187B2 (en) * | 2012-01-03 | 2015-01-06 | Nokia Corporation | Methods, apparatuses and computer program products for implementing automatic speech recognition and sentiment detection on a device |
JP2013164642A (ja) | 2012-02-09 | 2013-08-22 | Nikon Corp | Search means control device, search result output device, and program |
JP6221253B2 (ja) | 2013-02-25 | 2017-11-01 | セイコーエプソン株式会社 | Speech recognition apparatus and method, and semiconductor integrated circuit device |
JP6522503B2 (ja) * | 2013-08-29 | 2019-05-29 | Panasonic Intellectual Property Corporation of America | Device control method, display control method, and purchase settlement method |
JP5777178B2 (ja) * | 2013-11-27 | 2015-09-09 | 国立研究開発法人情報通信研究機構 | Statistical acoustic model adaptation method, acoustic model learning method suitable for statistical acoustic model adaptation, storage medium storing deep neural network construction parameters, and computer program for statistical acoustic model adaptation |
JP6129134B2 (ja) | 2014-09-29 | 2017-05-17 | シャープ株式会社 | Voice dialogue device, voice dialogue system, terminal, voice dialogue method, and program for causing a computer to function as a voice dialogue device |
CN105045889B (zh) | 2015-07-29 | 2018-04-20 | 百度在线网络技术(北京)有限公司 | Information pushing method and apparatus |
US11113714B2 (en) * | 2015-12-30 | 2021-09-07 | Verizon Media Inc. | Filtering machine for sponsored content |
US9812151B1 (en) * | 2016-11-18 | 2017-11-07 | IPsoft Incorporated | Generating communicative behaviors for anthropomorphic virtual agents based on user's affect |
- 2017-06-30: CN CN201710525251.5A, patent CN107507612B (zh), active
- 2018-02-27: JP JP2018546525A, patent JP6711500B2 (ja), active
- 2018-02-27: US US16/300,444, patent US11302337B2 (en), active
- 2018-02-27: WO PCT/CN2018/077359, publication WO2019000991A1 (zh), application filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441869A (zh) * | 2007-11-21 | 2009-05-27 | 联想(北京)有限公司 | Method and terminal for recognizing a terminal user's identity by voice |
CN102142254A (zh) * | 2011-03-25 | 2011-08-03 | 北京得意音通技术有限责任公司 | Identity confirmation method based on voiceprint recognition and speech recognition for preventing impersonation by recordings |
CN105068661A (zh) * | 2015-09-07 | 2015-11-18 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based human-computer interaction method and system |
CN105426436A (zh) * | 2015-11-05 | 2016-03-23 | 百度在线网络技术(北京)有限公司 | Information providing method and apparatus based on an artificial intelligence robot |
CN106557410A (zh) * | 2016-10-25 | 2017-04-05 | 北京百度网讯科技有限公司 | Artificial-intelligence-based user behavior analysis method and apparatus |
CN106548773A (zh) * | 2016-11-04 | 2017-03-29 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based child user search method and apparatus |
CN107507612A (zh) * | 2017-06-30 | 2017-12-22 | 百度在线网络技术(北京)有限公司 | Voiceprint recognition method and apparatus |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11495217B2 (en) | 2018-04-16 | 2022-11-08 | Google Llc | Automated assistants that accommodate multiple age groups and/or vocabulary levels |
US11756537B2 (en) | 2018-04-16 | 2023-09-12 | Google Llc | Automated assistants that accommodate multiple age groups and/or vocabulary levels |
CN110188171A (zh) * | 2019-05-30 | 2019-08-30 | 上海联影医疗科技有限公司 | Voice search method and apparatus, electronic device, and storage medium |
CN110335626A (zh) * | 2019-07-09 | 2019-10-15 | 北京字节跳动网络技术有限公司 | Audio-based age recognition method and apparatus, and storage medium |
CN112530418A (zh) * | 2019-08-28 | 2021-03-19 | 北京声智科技有限公司 | Voice wake-up method and apparatus, and related device |
CN110503961A (zh) * | 2019-09-03 | 2019-11-26 | 北京字节跳动网络技术有限公司 | Audio recognition method and apparatus, storage medium, and electronic device |
CN110990685A (zh) * | 2019-10-12 | 2020-04-10 | 中国平安财产保险股份有限公司 | Voiceprint-based voice search method, device, storage medium, and apparatus |
CN110990685B (zh) * | 2019-10-12 | 2023-05-26 | 中国平安财产保险股份有限公司 | Voiceprint-based voice search method, device, storage medium, and apparatus |
CN111326163A (zh) * | 2020-04-15 | 2020-06-23 | 厦门快商通科技股份有限公司 | Voiceprint recognition method, apparatus, and device |
WO2022048786A1 (en) | 2020-09-07 | 2022-03-10 | Kiwip Technologies Sas | Secure communication system with speaker recognition by voice biometrics for user groups such as family groups |
CN112733025A (zh) * | 2021-01-06 | 2021-04-30 | 天津五八到家货运服务有限公司 | User data service system, user data processing method, device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107507612B (zh) | 2020-08-28 |
US11302337B2 (en) | 2022-04-12 |
JP6711500B2 (ja) | 2020-06-17 |
CN107507612A (zh) | 2017-12-22 |
US20210225380A1 (en) | 2021-07-22 |
JP2019527371A (ja) | 2019-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019000991A1 (zh) | Voiceprint recognition method and apparatus | |
CN107481720B (zh) | Explicit voiceprint recognition method and apparatus | |
US10977452B2 (en) | Multi-lingual virtual personal assistant | |
CN107492379B (zh) | Voiceprint creation and registration method and apparatus | |
US11417343B2 (en) | Automatic speaker identification in calls using multiple speaker-identification parameters | |
US11475897B2 (en) | Method and apparatus for response using voice matching user category | |
CN105895087B (zh) | Speech recognition method and apparatus | |
US10679063B2 (en) | Recognizing salient video events through learning-based multimodal analysis of visual features and audio-based analytics | |
KR102333505B1 (ko) | 소셜 대화형 입력들에 대한 컴퓨터 응답 생성 | |
US11494612B2 (en) | Systems and methods for domain adaptation in neural networks using domain classifier | |
US20210158790A1 (en) | Autonomous generation of melody | |
US20230325663A1 (en) | Systems and methods for domain adaptation in neural networks | |
US8972265B1 (en) | Multiple voices in audio content | |
JP7108144B2 (ja) | クロスドメインバッチ正規化を使用したニューラルネットワークにおけるドメイン適応のためのシステム及び方法 | |
CN111415677A (zh) | 用于生成视频的方法、装置、设备和介质 | |
CN109582822A (zh) | Music recommendation method and apparatus based on user voice | |
US9684908B2 (en) | Automatically generated comparison polls | |
TW202022851A (zh) | Voice interaction method and device | |
US11943181B2 (en) | Personality reply for digital content | |
KR102226427B1 (ko) | 호칭 결정 장치, 이를 포함하는 대화 서비스 제공 시스템, 호칭 결정을 위한 단말 장치 및 호칭 결정 방법 | |
WO2020154883A1 (zh) | Speech information processing method and apparatus, storage medium, and electronic device | |
WO2023005580A1 (zh) | Display device | |
RBB et al. | Deliverable 5.1 |
Legal Events
Code | Title | Description
---|---|---
ENP | Entry into the national phase | Ref document number: 2018546525; Country of ref document: JP; Kind code of ref document: A
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18822840; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 06.04.2020)
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.05.2020)
122 | Ep: pct application non-entry in european phase | Ref document number: 18822840; Country of ref document: EP; Kind code of ref document: A1