WO2019000991A1 - Voiceprint recognition method and apparatus - Google Patents

Voiceprint recognition method and apparatus

Info

Publication number
WO2019000991A1
WO2019000991A1 (PCT/CN2018/077359)
Authority
WO
WIPO (PCT)
Prior art keywords
user
command
category
voice
voiceprint
Prior art date
Application number
PCT/CN2018/077359
Other languages
English (en)
French (fr)
Inventor
王文宇
胡媛
Original Assignee
Baidu Online Network Technology (Beijing) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology (Beijing) Co., Ltd.
Priority to JP2018546525A (JP6711500B2)
Priority to US16/300,444 (US11302337B2)
Publication of WO2019000991A1

Classifications

    • G PHYSICS → G10 MUSICAL INSTRUMENTS; ACOUSTICS → G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L15/065 Adaptation
    • G10L15/07 Adaptation to the speaker
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 Speech to text systems
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L17/04 Training, enrolment or model building
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • G10L17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G10L2015/223 Execution procedure of a spoken command
    • G10L2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics
    • G10L2015/227 Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics of the speaker; Human-factor methodology

Definitions

  • the present application relates to the field of artificial intelligence applications, and in particular, to a voiceprint recognition method and apparatus.
  • Artificial intelligence (AI) is a new technical science that studies and develops theories, methods, techniques, and applications for simulating, extending, and expanding human intelligence. As a branch of computer science, it attempts to understand the essence of intelligence and to produce new intelligent machines that respond in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Voiceprint recognition technology is one aspect of artificial intelligence.
  • A characteristic of voice dialogue is that it can record the user's voice.
  • Everyone's voice is unique, just like a fingerprint, which is why a person's voice is also called a voiceprint. From a speaker's voiceprint, it is possible to determine who the speaker is, retrieve that user's data, and provide a personalized service.
  • The present invention builds on voiceprint technology and, in cooperation with a series of product strategies, proposes a solution to the above problems.
  • In existing solutions, the degree of productization is low: with a single strategy and insufficient technical capability, product design is limited.
  • The voiceprint is used only for extremely basic functions. Even where it has been productized, it is applied only to very narrow scenarios, for example waking a device with a specific voice, rather than providing personalized services; voiceprint technology has not been deeply productized.
  • The user needs to participate in voiceprint enrollment, and a voiceprint training process is required before the user ID can be identified, so user satisfaction is not high.
  • aspects of the present application provide a voiceprint recognition method and apparatus for providing personalized service to a user.
  • a voiceprint recognition method including:
  • according to the acquired command voice, a voiceprint recognition method is used to identify the user category that issued the command voice;
  • according to the user category, a speech recognition model corresponding to the user category is used to perform speech recognition on the command voice to obtain the command it describes;
  • resources are provided according to the user category and the command.
  • the user category includes user gender and user age range.
  • using the voiceprint recognition method to identify the user category that issued the command voice further includes:
  • performing model training according to the sound characteristics of different user categories, to establish voiceprint processing models for the different user categories.
  • the method further includes:
  • corpora with the colloquial features of different user categories are collected to form a corpus, and the corpus is used to train speech recognition models, obtaining a speech recognition model corresponding to each user category.
  • searching for a target resource that matches the recommended interest category, and presenting the target resource to the user.
  • searching for a recommended interest category that matches the command includes:
  • determining the current vertical class according to the command;
  • obtaining the recommended interest category that matches the command by using a pre-established user interest model, according to the current vertical class and the attribute information of the user.
  • the attribute information includes at least one of a user age group and a user gender.
  • the method further includes:
  • obtaining a user history log, where the user history log includes at least a user identifier, user attribute information, and user historical behavior data;
  • classifying and counting the user historical behavior data according to the user category and the vertical class, to obtain the user interest model.
  • a voiceprint recognition apparatus comprising:
  • a voiceprint recognition module configured to identify a user category that issues a command voice according to the acquired command voice, using a voiceprint recognition method
  • a voice recognition module configured to perform voice recognition on the command voice according to the user category, by using a corresponding voice recognition model, to obtain a command described by the command voice;
  • a providing module, configured to provide resources according to the user category and the command.
  • the user category includes user gender and user age segment.
  • the voiceprint recognition module further includes:
  • a voiceprint processing model establishing sub-module, configured to perform model training according to the sound characteristics of different user categories and establish voiceprint processing models for the different user categories.
  • the voice recognition module further includes:
  • a speech recognition model establishing sub-module, configured to collect corpora with the colloquial features of different user categories to form a corpus, and to use the corpus to train speech recognition models, obtaining a speech recognition model corresponding to each user category.
  • the providing module includes:
  • a presentation submodule configured to search for a target resource that matches the interest category, and present the target resource to the user.
  • the search sub-module further includes:
  • a vertical class determining sub-module, configured to determine a current vertical class according to the command;
  • a content acquisition sub-module, configured to obtain a recommended interest category that matches the command by using a pre-established user interest model, according to the current vertical class and the attribute information of the user.
  • the aspect as described above and any possible implementation manner further provide an implementation manner, where the attribute information includes at least one of a user age group and a user gender.
  • the search sub-module further includes a user interest model establishing sub-module, configured to:
  • obtain a user history log, where the user history log includes at least a user identifier, user attribute information, and user historical behavior data; and
  • classify and count the user historical behavior data according to the user category and the vertical class, to obtain the user interest model.
  • an apparatus comprising:
  • one or more processors;
  • a storage device for storing one or more programs;
  • the one or more programs are executed by the one or more processors such that the one or more processors implement any of the methods described above.
  • a computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements any of the above methods.
  • The recommendation strategy is more complete and the recommendations more accurate, improving user satisfaction; even if a misrecognition occasionally leads to a wrong recommendation, the user does not obviously perceive it; and the technical requirements for productization are lowered.
  • FIG. 1 is a schematic flow chart of a voiceprint recognition method according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of the step of searching, according to the user category, for a recommended interest category that matches the command, according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a voiceprint recognition apparatus according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a search module of a voiceprint recognition apparatus according to an embodiment of the present disclosure
  • FIG. 5 is a block diagram of an exemplary computer system/server suitable for use in implementing embodiments of the present invention.
  • FIG. 1 is a schematic flowchart of a method for identifying a voiceprint according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • According to the acquired command voice, the voiceprint recognition method is used to identify the user category that issued the command voice.
  • the user category includes a user gender and a user age group.
  • Model training can be performed according to the sound characteristics of different user categories to establish voiceprint processing models for the different categories, so as to achieve voiceprint analysis targeted at user groups of different user categories.
  • The voiceprint recognition method can then be used to identify the gender and age information of the user who issued the command voice.
  • Specifically, the speaker's voiceprint must first be modeled, that is, "trained" or "learned". A deep neural network (DNN) voiceprint baseline system is applied to extract a first feature vector for each voice in the training set; a gender classifier and an age classifier are then trained on the first feature vectors of the voices together with their pre-labeled gender and age-segment labels, thereby establishing a voiceprint processing model that distinguishes gender and age.
  • the gender classifier and the age classifier analyze the first feature information, and obtain the gender tag and the age segment tag of the first feature information, that is, the gender tag and the age segment tag of the command voice.
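The classifier stage described above can be sketched as follows. This is a minimal toy illustration, not the patent's implementation: a nearest-centroid rule stands in for the trained gender and age classifiers, and the "d-vectors", labels, and dimensions are all fabricated for the example.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the d-vector extractor: in the patent, the
# embedding is the output of the last hidden layer of a DNN baseline system.
def fake_dvector(center, dim=8):
    return [center + random.gauss(0, 0.1) for _ in range(dim)]

# Toy labeled training set (gender tags are pre-annotated, as in the text).
train = [("male", fake_dvector(+1.0)) for _ in range(30)] + \
        [("female", fake_dvector(-1.0)) for _ in range(30)]

def train_centroids(samples):
    """Nearest-centroid classifier: average the embedding vectors per label."""
    sums, counts = {}, {}
    for label, vec in samples:
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + v for s, v in
                       zip(sums.get(label, [0.0] * len(vec)), vec)]
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(model, vec):
    """Assign the label whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda lab: dist(model[lab], vec))

gender_model = train_centroids(train)
print(classify(gender_model, [1.0] * 8))   # male
```

An age-segment classifier would be trained the same way on the same vectors with age labels instead of gender labels.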
  • Alternatively, for the speech request, the fundamental frequency feature and the Mel-frequency cepstral coefficient (MFCC) feature can first be extracted, and a posterior probability value can then be computed for these features under a Gaussian mixture model, the user's gender being determined from the result. For example, if the Gaussian mixture model is a male Gaussian mixture model, a high posterior probability value (greater than a certain threshold) indicates that the user's gender is male, while a low posterior probability value (less than the threshold) indicates that the user's gender is female.
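The threshold decision just described can be sketched with a single 1-D Gaussian standing in for the male Gaussian mixture model; the mean, variance, and threshold below are illustrative assumptions, not values from the patent.

```python
import math

# Assumed parameters of a stand-in "male" model over fundamental frequency (Hz).
MALE_MEAN, MALE_VAR = 120.0, 15.0 ** 2
THRESHOLD = -5.0   # assumed decision threshold on the average log-likelihood

def log_likelihood(frames, mean, var):
    """Average per-frame log-likelihood under a 1-D Gaussian."""
    const = -0.5 * math.log(2 * math.pi * var)
    return sum(const - (f - mean) ** 2 / (2 * var) for f in frames) / len(frames)

def classify_gender(f0_frames):
    """High likelihood under the male model -> male; otherwise female."""
    score = log_likelihood(f0_frames, MALE_MEAN, MALE_VAR)
    return "male" if score > THRESHOLD else "female"

print(classify_gender([118, 122, 125, 119]))   # pitch in a typical male range
print(classify_gender([210, 220, 215, 205]))   # pitch in a typical female range
```

A real system would score MFCC-plus-pitch feature vectors under a full mixture of Gaussians rather than a single component.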
  • Further, the voiceprint ID of the user who issued the command voice is identified.
  • Each user's voice will have a unique voiceprint ID that records personal data such as the user's name, gender, age, and hobbies.
  • Specifically, the voiceprint feature of the user's command voice is extracted and matched one by one against the registered voiceprint models pre-stored in the cloud. If the matching value is greater than a threshold, the user's voiceprint ID is determined; if the matching value is less than the threshold, the user is determined to be unregistered.
  • The voiceprint feature is a d-vector feature, i.e., a feature extracted by a deep neural network (DNN), specifically the output of the last hidden layer of the DNN.
  • a voice recognition model corresponding to the user category is used to perform voice recognition on the command voice to obtain a command described by the command voice.
  • The voice information of the command voice can be recognized as text information, and the corresponding operation can then be performed according to the text information.
  • Corpora with the colloquial features of different user categories are collected to form a corpus, and the corpus is used to train speech recognition models, obtaining a speech recognition model corresponding to each user category.
  • For example, corpora with children's colloquial characteristics can be collected to form a corpus, and the corpus can then be used for model training to obtain a child speech recognition model.
  • the child's colloquial characteristics mentioned here may specifically include repeated words, unclear words, and broken sentences.
  • If the user category is a child, child mode is automatically turned on: a voice interaction mode matching children's conversational habits is used, and content screening and optimization are performed for the child.
  • Interaction in child mode should be specially designed to match children's dialogue habits.
  • The TTS broadcast voice can be that of a child or a young woman, which brings it closer to the child, and the broadcast can favor sounds that children find more comfortable. Children's chat content can be designed around the chat data children commonly express, so that the system grows up together with the child.
  • Child mode requires that all content resources be carefully screened to remove pornographic and violent content. All content such as music, audio, movies, and television must be precisely tailored to children's needs: music should be mostly children's songs, audio mostly children's stories, movies mostly animated films, and television mostly cartoons.
  • Resources are provided in accordance with the user category and the command.
  • Searching for a target resource that matches the recommended interest category, and presenting the target resource to the user.
  • Searching, according to the user category, for a recommended interest category that matches the command includes the following sub-steps, as shown in FIG. 2:
  • the current vertical class is determined according to the command; the current vertical class may include music, audiobooks, radio, video, movies, food, chat, and the like;
  • a recommended interest category that matches the command is obtained using a pre-established user interest model, based on the current vertical class and the attribute information of the user.
  • the attribute information includes at least one of an age group, a gender, and interest information.
  • the user interest model is pre-established, including:
  • obtaining a user history log, where the user history log includes at least a user identifier, user attribute information, and user historical behavior data;
  • classifying and counting the user historical behavior data according to the user category and the vertical class, to obtain the user interest model.
  • Specifically, the user history logs of a large number of users over a preset time granularity (for example, 2 months, 4 months, or half a year) can be obtained. Because users have stable behavioral habits, a large number of user history logs reveal the specific behaviors of different user categories under specific vertical classes, namely user interest preferences. In other words, the user historical behavior data is classified and counted according to the user category and the vertical class to obtain the user interest model.
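The classify-and-count step above can be sketched as a grouped frequency table over history-log rows. The log schema and tags below are invented for illustration; only the grouping by (user category, vertical class) follows the text.

```python
from collections import Counter, defaultdict

# Hypothetical user history log rows: (user_id, user_category, vertical, item_tag)
history_log = [
    ("u1", "child", "video", "animation"),
    ("u2", "child", "video", "animation"),
    ("u3", "child", "video", "variety"),
    ("u4", "adult_male", "movie", "action"),
    ("u5", "adult_male", "movie", "action"),
    ("u6", "adult_female", "movie", "romance"),
]

def build_interest_model(log):
    """Classify and count historical behavior per (user category, vertical)."""
    model = defaultdict(Counter)
    for _uid, category, vertical, tag in log:
        model[(category, vertical)][tag] += 1
    return model

def recommend(model, category, vertical):
    """Most frequent tag for this (category, vertical) = recommended interest."""
    counter = model.get((category, vertical))
    return counter.most_common(1)[0][0] if counter else None

model = build_interest_model(history_log)
print(recommend(model, "child", "video"))       # animation
print(recommend(model, "adult_male", "movie"))  # action
```

A production system would aggregate months of logs and keep a ranked distribution per cell rather than a single top tag.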
  • The user interest model can be used to determine the recommendation strategy; the recommendation strategies for the different verticals, such as music, audiobooks, radio, video, movies, food, and chat, incorporate the user age-range and gender dimensions. That is, according to the current user category and the current vertical class, the recommended interest category associated with them is determined using the user interest model.
  • For example, children in the child age group mostly watch animated videos such as My Little Pony, Dora the Explorer, and Peppa Pig in the video vertical; by mining the historical behavior of users in this age group, it can be determined that the recommended interest category of this age group in the video vertical is animated video.
  • If a user voiceprint ID has been identified, the recommended content associated with the current user and the current vertical class is determined using the user interest model corresponding to that user voiceprint ID.
  • The user historical behavior data corresponding to the user voiceprint ID is obtained according to the user voiceprint ID, and is classified and counted by vertical class to obtain the user interest model.
  • a target resource matching the recommended interest category is searched in a multimedia resource library, and the target resource is presented to the user.
  • In the music vertical, for generalized requests such as "play a song": if the user is recognized as elderly, music such as traditional opera is recommended; if recognized as a child, children's songs can be played. Age and gender can also be combined, recommending different types of children's songs for little boys and little girls.
  • In the movie vertical, when the user makes a generalized request such as "play a movie": if the user is recognized as male, the latest and most popular action movies and the like are recommended; if recognized as female, romance movies are recommended; if recognized as a child, animated movies are recommended.
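The category-conditioned defaults just listed amount to a dispatch table keyed by (user category, vertical class). The mapping below is illustrative data restating the examples in the text, not the patent's actual strategy table.

```python
# Illustrative (user category, vertical) -> default content-type mapping,
# restating the examples in the surrounding text.
RECOMMENDATION = {
    ("elderly", "music"): "traditional opera",
    ("child", "music"): "children's songs",
    ("male", "movie"): "action movies",
    ("female", "movie"): "romance movies",
    ("child", "movie"): "animated movies",
}

def provide_resource(user_category, vertical):
    """Pick a content type for a generalized request like 'play a song';
    fall back to a generic default when no category-specific rule exists."""
    return RECOMMENDATION.get((user_category, vertical), "general chart hits")

print(provide_resource("child", "music"))   # children's songs
print(provide_resource("male", "movie"))    # action movies
```

In the full system this lookup would be backed by the mined user interest model rather than a static table.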
  • The voiceprint recognition process is implicit: the user perceives no explicit voiceprint enrollment or identification step. The user's natural speech is processed as an ordinary voice command, and the voiceprint recognition work is completed in the background.
  • Age and gender are added to the recommendation strategy, the recommendation strategy is more complete, and the recommendation is more precise, thus improving user satisfaction.
  • FIG. 3 is a schematic structural diagram of a voiceprint recognition apparatus according to another embodiment of the present invention. As shown in FIG. 3, the apparatus includes a voiceprint recognition module 301, a voice recognition module 302, and a providing module 303.
  • the voiceprint recognition module 301 is configured to identify a user category for issuing a command voice according to the acquired command voice by using a voiceprint recognition method.
  • the user category includes a user gender and a user age group.
  • The voiceprint recognition module 301 further includes a voiceprint processing model establishing sub-module, which performs model training according to the sound characteristics of different user categories and establishes voiceprint processing models for the different categories, so as to achieve voiceprint analysis targeted at user groups of different user categories.
  • The voiceprint recognition method can be used to identify the gender and age information of the user who issued the command voice.
  • Specifically, the speaker's voiceprint must first be modeled, that is, "trained" or "learned". A deep neural network (DNN) voiceprint baseline system is applied to extract a first feature vector for each voice in the training set; a gender classifier and an age classifier are then trained on the first feature vectors of the voices together with their pre-labeled gender and age-segment labels, thereby establishing a voiceprint processing model that distinguishes gender and age.
  • the gender classifier and the age classifier analyze the first feature information, and obtain the gender tag and the age segment tag of the first feature information, that is, the gender tag and the age segment tag of the command voice.
  • Alternatively, for the speech request, the fundamental frequency feature and the Mel-frequency cepstral coefficient (MFCC) feature can first be extracted, and a posterior probability value can then be computed for these features under a Gaussian mixture model, the user's gender being determined from the result. For example, if the Gaussian mixture model is a male Gaussian mixture model, a high posterior probability value (greater than a certain threshold) indicates that the user's gender is male, while a low posterior probability value (less than the threshold) indicates that the user's gender is female.
  • Further, the voiceprint ID of the user who issued the command voice is identified.
  • Each user's voice will have a unique voiceprint ID that records personal data such as the user's name, gender, age, and hobbies.
  • Specifically, the voiceprint feature of the user's command voice is extracted and matched one by one against the registered voiceprint models pre-stored in the cloud. If the matching value is greater than a threshold, the user's voiceprint ID is determined; if the matching value is less than the threshold, the user is determined to be unregistered.
  • the voiceprint feature is a d-vector feature, which is a feature extracted by a Deep Neural Network (DNN), specifically an output of a last layer of the hidden layer in the DNN.
  • the voice recognition module 302 is configured to perform voice recognition on the command voice by using a voice recognition model corresponding to the user category according to the user category, to obtain a command described by the command voice.
  • the speech recognition module 302 further includes a speech recognition model modeling sub-module for pre-establishing a speech recognition model for different user categories.
  • Corpora with the colloquial features of different user categories are collected to form a corpus, and the corpus is used to train speech recognition models, obtaining a speech recognition model corresponding to each user category.
  • For example, corpora with children's colloquial characteristics can be collected to form a corpus, and the corpus can then be used for model training to obtain a child speech recognition model.
  • the child's colloquial characteristics mentioned here may specifically include repeated words, unclear words, and broken sentences.
  • A child guidance module is further included, configured to automatically turn on child mode when the user category is a child user, using a voice interaction mode matching children's conversational habits and performing content screening and optimization for the child.
  • Interaction in child mode should be specially designed to match children's dialogue habits.
  • The TTS broadcast voice can be that of a child or a young woman, which brings it closer to the child, and the broadcast can favor sounds that children find more comfortable. Children's chat content can be designed around the chat data children commonly express, so that the system grows up together with the child.
  • Child mode requires that all content resources be carefully screened to remove pornographic and violent content. All content such as music, audio, movies, and television must be precisely tailored to children's needs: music should be mostly children's songs, audio mostly children's stories, movies mostly animated films, and television mostly cartoons.
  • The providing module 303 is configured to provide resources according to the user category and the command; specifically, it includes:
  • a presentation submodule configured to search for a target resource that matches the interest category, and present the target resource to the user.
  • the searching submodule is configured to search for a recommended interest category that matches the command according to the user category.
  • a vertical class determining sub-module 401, configured to determine a current vertical class according to the command, where the current vertical class may include music, audiobooks, radio, video, movies, food, chat, and the like;
  • a content acquisition sub-module 402, configured to obtain a recommended interest category that matches the command by using a pre-established user interest model, according to the current vertical class and the attribute information of the user.
  • the attribute information includes at least one of an age group, a gender, and interest information.
  • the user interest model modeling module 403 is further configured to pre-establish a user interest model, including:
  • obtaining a user history log, where the user history log includes at least a user identifier, user attribute information, and user historical behavior data;
  • classifying and counting the user historical behavior data according to the user category and the vertical class, to obtain the user interest model.
  • a user history log of a large number of users at a preset time granularity (for example, 2 months, 4 months, or half a year, etc.) can be obtained.
  • The user historical behavior data is classified and counted according to the user category and the vertical class to obtain the user interest model.
  • The user interest model can be used to determine the recommendation strategy; the recommendation strategies for the different verticals, such as music, audiobooks, radio, video, movies, food, and chat, incorporate the user age-range and gender dimensions. That is, according to the current user category and the current vertical class, the recommended interest category associated with them is determined using the user interest model.
  • For example, children in the child age group mostly watch animated videos such as My Little Pony, Dora the Explorer, and Peppa Pig in the video vertical; by mining the historical behavior of users in this age group, it can be determined that the recommended interest category of this age group in the video vertical is animated video.
  • If a user voiceprint ID has been identified, the recommended content associated with the current user and the current vertical class is determined using the user interest model corresponding to that user voiceprint ID.
  • The user historical behavior data corresponding to the user voiceprint ID is obtained according to the user voiceprint ID, and is classified and counted by vertical class to obtain the user interest model.
  • the presentation sub-module is configured to search a multimedia resource library for a target resource that matches the recommended interest category, and present the target resource to the user.
  • In the music vertical, for generalized requests such as "play a song": if the user is recognized as elderly, music such as traditional opera is recommended; if recognized as a child, children's songs can be played. Age and gender can also be combined, recommending different types of children's songs for little boys and little girls.
  • In the movie vertical, when the user makes a generalized request such as "play a movie": if the user is recognized as male, the latest and most popular action movies and the like are recommended; if recognized as female, romance movies are recommended; if recognized as a child, animated movies are recommended.
  • The voiceprint recognition process is implicit: the user perceives no explicit voiceprint enrollment or identification step. The user's natural speech is processed as an ordinary voice command, and the voiceprint recognition work is completed in the background.
  • age and gender are added to the recommendation strategy, making the strategy more complete and the recommendations more precise, thus improving user satisfaction.
  • the disclosed methods and apparatus may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a division by logical function; in actual implementation there may be other division manners.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • FIG. 5 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the present invention.
  • the computer system/server 012 shown in FIG. 5 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
  • computer system/server 012 is represented in the form of a general purpose computing device.
  • Components of computer system/server 012 may include, but are not limited to, one or more processors or processing units 016, system memory 028, and bus 018 that connects different system components, including system memory 028 and processing unit 016.
  • Bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Computer system/server 012 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by computer system/server 012, including volatile and non-volatile media, removable and non-removable media.
  • System memory 028 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 030 and/or cache memory 032.
  • Computer system/server 012 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 034 can be used to read and write non-removable, non-volatile magnetic media (not shown in Figure 5, commonly referred to as a "hard disk drive").
  • although not shown in Figure 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media) may be provided.
  • each drive can be coupled to bus 018 via one or more data medium interfaces.
  • Memory 028 can include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of various embodiments of the present invention.
  • Program/utility 040, having a set (at least one) of program modules 042, may be stored, for example, in memory 028. Such program modules 042 include, but are not limited to, an operating system, one or more applications, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
  • Program module 042 typically performs the functions and/or methods of the embodiments described herein.
  • the computer system/server 012 can also communicate with one or more external devices 014 (e.g., a keyboard, a pointing device, a display 024, etc.); in the present invention, the computer system/server 012 communicates with an external radar device, and can also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. This communication can take place via an input/output (I/O) interface 022.
  • computer system/server 012 can also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) via network adapter 020.
  • network adapter 020 communicates with other modules of computer system/server 012 via bus 018.
  • other hardware and/or software modules may be utilized in conjunction with computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • Processing unit 016 performs the functions and/or methods of the described embodiments of the present invention by running a program stored in system memory 028.
  • the computer program described above may be provided in a computer storage medium, i.e., the computer storage medium is encoded with a computer program that, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above embodiments of the invention.
  • the transmission route of computer programs is no longer limited to tangible media; programs can also be downloaded directly from a network. Any combination of one or more computer readable media can be utilized.
  • the computer readable medium can be a computer readable signal medium or a computer readable storage medium.
  • the computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • a computer readable storage medium can be any tangible medium that can contain or store a program, which can be used by or in connection with an instruction execution system, apparatus or device.
  • a computer readable signal medium may include a data signal that is propagated in the baseband or as part of a carrier, carrying computer readable program code. Such propagated data signals can take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer readable signal medium can also be any computer readable medium other than a computer readable storage medium, which can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium can be transmitted by any suitable medium, including but not limited to wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present invention may be written in one or more programming languages, or a combination thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer, partly on the remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (e.g., through the Internet using an Internet service provider).


Abstract

A voiceprint recognition method and apparatus, comprising: recognizing, by voiceprint recognition on an acquired command utterance, the user category of the user who issued the command utterance (101); performing speech recognition on the command utterance with the corresponding speech recognition model according to the user category, to obtain the command described by the command utterance (102); and providing resources according to the user category and the command (103). This avoids the problem in conventional voiceprint recognition methods that the user must take part in voiceprint recognition and go through a voiceprint training process so that the user ID can be further identified, leading to low user satisfaction. While the user speaks naturally, these very "ordinary" utterances are processed, and voiceprint recognition is completed at the same time.

Description

Voiceprint recognition method and apparatus
This application claims priority to Chinese patent application No. 201710525251.5, filed on June 30, 2017, entitled "Voiceprint Recognition Method and Apparatus".
Technical Field
The present application relates to the field of artificial intelligence applications, and in particular to a voiceprint recognition method and apparatus.
Background
Artificial Intelligence (AI) is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. AI is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Voiceprint recognition technology is one very important aspect of artificial intelligence.
In recent years, artificial intelligence technology has developed profoundly and has gradually been productized. Intelligent voice dialogue products in particular, with the rise of the Amazon Echo and Google Home smart speakers abroad, have set off a wave of popularity for smart home products, especially smart speakers, that use dialogue as the main mode of interaction.
A typical usage scenario for intelligent voice dialogue products, including smart speakers, is the home, where it is very natural for users to interact with a machine by voice. A household usually has multiple users, and each user inevitably has different needs, but current products offer only crude service: the same set of services is provided to all users, and the same universal standard is used to answer every user request, so users' individual needs cannot be satisfied.
The advantage of voice dialogue is that it captures the user's voice. Everyone has their own voice, which, like a fingerprint, is unique; we therefore also call each person's voice a voiceprint. From the speaker's voiceprint, it can be determined which user is speaking, and that user's data can be retrieved to provide personalized service. Based on voiceprint technology, combined with a series of product strategies, the present invention proposes an optimal solution to the above problems.
At present, voiceprint technology in the industry is immature and can hardly meet the requirements of productization. The main problems of existing methods are:
(1) Extremely strong dependence on technology: productization often has to wait until the technology reaches a very high accuracy, while technological progress is an extremely slow process.
(2) A single strategy: for products that already use voiceprint technology, the voiceprint usage strategy is too simple, and no strategy is used to compensate for technical deficiencies.
(3) A low degree of productization: constrained by single strategies and insufficient technical capability, product design is limited and voiceprints are used only for extremely basic functions. Even where productized, they are applied only in very narrow scenarios, for example waking a device with a specific voice, but not for providing personalized services; voiceprint technology has not been deeply productized.
In conventional voiceprint recognition methods, the user must take part in voiceprint recognition and go through a voiceprint training process so that the user ID can be further identified. User satisfaction is low.
Summary of the Invention
Aspects of the present application provide a voiceprint recognition method and apparatus for providing personalized services to users.
One aspect of the present application provides a voiceprint recognition method, comprising:
recognizing, by voiceprint recognition on an acquired command utterance, the user category of the user who issued the command utterance;
performing speech recognition on the command utterance with the corresponding speech recognition model according to the user category, to obtain the command described by the command utterance;
providing resources according to the user category and the command.
The user category includes user gender and user age group.
According to the above aspect and any possible implementation, an implementation is further provided in which,
before recognizing, by voiceprint recognition on the acquired command utterance, the user category of the user who issued the command utterance, the method further comprises:
performing model training according to the voice characteristics of different user categories, to build voiceprint processing models for the different user categories.
According to the above aspect and any possible implementation, an implementation is further provided in which,
before performing speech recognition on the command utterance with the corresponding speech recognition model according to the user category to obtain the command described by the command utterance, the method further comprises:
collecting corpora with the colloquial speech characteristics of different user types to form a corpus, and training speech recognition models with the corpus to obtain a speech recognition model for each user type.
According to the above aspect and any possible implementation, an implementation is further provided in which resources are provided by:
searching, according to the user category, for a recommended interest category that matches the command;
searching for a target resource that matches the recommended interest category, and presenting the target resource to the user.
According to the above aspect and any possible implementation, an implementation is further provided in which
searching, according to the user category, for the recommended interest category that matches the command comprises:
determining the current vertical category according to the command;
obtaining recommended content according to the current vertical category and the attribute information of the user, using a pre-built user interest model.
According to the above aspect and any possible implementation, an implementation is further provided in which
the attribute information includes at least one of user age group and user gender.
According to the above aspect and any possible implementation, an implementation is further provided in which,
before searching, according to the user category, for the recommended interest category that matches the command, the method further comprises:
obtaining user history logs, wherein the user history logs include at least: a user identifier, user attribute information, and user historical behavior data;
classifying and aggregating the user historical behavior data by user category and vertical category, to obtain the user interest model.
Another aspect of the present invention provides a voiceprint recognition apparatus, comprising:
a voiceprint recognition module, configured to recognize, by voiceprint recognition on an acquired command utterance, the user category of the user who issued the command utterance;
a speech recognition module, configured to perform speech recognition on the command utterance with the corresponding speech recognition model according to the user category, to obtain the command described by the command utterance;
a providing module, configured to provide resources according to the user category and the command.
According to the above aspect and any possible implementation, an implementation is further provided in which the user category includes user gender and user age group.
According to the above aspect and any possible implementation, an implementation is further provided in which the voiceprint recognition module further comprises:
a voiceprint processing model building sub-module, configured to perform model training according to the voice characteristics of different user categories, to build voiceprint processing models for the different user categories.
According to the above aspect and any possible implementation, an implementation is further provided in which the speech recognition module further comprises:
a speech recognition model building sub-module, configured to collect corpora with the colloquial speech characteristics of different user types to form a corpus, and to train speech recognition models with the corpus to obtain a speech recognition model for each user type.
According to the above aspect and any possible implementation, an implementation is further provided in which the providing module comprises:
a search sub-module, configured to search, according to the user category, for a recommended interest category that matches the command;
a presentation sub-module, configured to search for a target resource that matches the interest category, and to present the target resource to the user.
According to the above aspect and any possible implementation, an implementation is further provided in which the search sub-module further comprises:
a vertical category determination sub-module, configured to determine the current vertical category according to the command;
a content acquisition sub-module, configured to obtain, according to the current vertical category and the attribute information of the user, a recommended interest category that matches the command, using a pre-built user interest model.
According to the above aspect and any possible implementation, an implementation is further provided in which the attribute information includes at least one of user age group and user gender.
According to the above aspect and any possible implementation, an implementation is further provided in which the search sub-module further comprises a user interest model building sub-module, configured to:
obtain user history logs, wherein the user history logs include at least: a user identifier, user attribute information, and user historical behavior data;
classify and aggregate the user historical behavior data by user category and vertical category, to obtain the user interest model.
Another aspect of the present application provides a device, characterized in that the device comprises:
one or more processors;
a storage apparatus for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement any of the above methods.
Another aspect of the present application provides a computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements any of the above methods.
As can be seen from the above description, with the solution of the present invention the recommendation strategy is more complete and the recommendations are more precise, thereby improving user satisfaction; even if an occasional recognition error leads to a wrong recommendation, the user will not clearly perceive it; and the technical requirements for productization are lowered.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a voiceprint recognition method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of the step of searching, according to the user category, for a recommended interest category matching the command, in the voiceprint recognition method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a voiceprint recognition apparatus according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of the search module of the voiceprint recognition apparatus according to an embodiment of the present application;
Fig. 5 is a block diagram of an exemplary computer system/server suitable for implementing embodiments of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic flowchart of a voiceprint recognition method according to an embodiment of the present application; as shown in Fig. 1, it comprises the following steps:
In 101, the user category of the user who issued a command utterance is recognized by voiceprint recognition on the acquired command utterance.
Specifically, the user category includes user gender and user age group.
Since different user categories, i.e. user groups of different genders and age groups, have distinctive voiceprint characteristics, model training can be performed, before voiceprint recognition, according to the voice characteristics of different user categories to build voiceprint processing models for the different user categories, so that voiceprint analysis can be carried out for user groups of different categories. When a user initiates a voice search, the gender and age group of the user who issued the command utterance can be recognized from the command utterance by voiceprint recognition.
Before voiceprint recognition, the speaker's voiceprint must first be modeled, i.e. "trained" or "learned". Specifically, a deep neural network (DNN) voiceprint baseline system is applied to extract a first feature vector for each utterance in the training set; a gender classifier and an age classifier are then trained from the first feature vector of each utterance together with pre-annotated gender and age-group labels, thereby building voiceprint processing models that distinguish gender and age group.
From the acquired command utterance, first feature information of the command utterance is extracted and sent to the pre-generated gender classifier and age-group classifier respectively. The gender classifier and the age-group classifier analyze the first feature information to obtain its gender label and age-group label, i.e. the gender label and age-group label of the command utterance.
For example, taking a Gaussian mixture model as the gender classifier, fundamental-frequency features and Mel-frequency cepstral coefficient (MFCC) features can first be extracted from the voice request; posterior probability values can then be computed for the fundamental-frequency and MFCC features with the Gaussian mixture model, and the user's gender can be determined from the result. For example, assuming the Gaussian mixture model is a male Gaussian mixture model, the user's gender can be determined as male when the computed posterior probability value is very high, e.g. above a certain threshold, and as female when the posterior probability value is very small, e.g. below a certain threshold.
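The Gaussian-mixture gender decision above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes MFCC feature frames have already been extracted (replaced here by synthetic 2-D features), trains one scikit-learn `GaussianMixture` per gender, and compares average log-likelihoods instead of thresholding a single model's posterior.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for MFCC feature frames of the two training sets.
male_train = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
female_train = rng.normal(loc=2.0, scale=1.0, size=(500, 2))

# One GMM per gender, trained on that gender's feature frames.
male_gmm = GaussianMixture(n_components=4, random_state=0).fit(male_train)
female_gmm = GaussianMixture(n_components=4, random_state=0).fit(female_train)

def classify_gender(frames: np.ndarray) -> str:
    """Compare the utterance's mean log-likelihood under each model."""
    male_score = male_gmm.score(frames)      # mean log-likelihood per frame
    female_score = female_gmm.score(frames)
    return "male" if male_score > female_score else "female"

test_frames = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
print(classify_gender(test_frames))
```

In a real system the frames would come from an MFCC front end, and the models would be trained on labeled speech corpora rather than synthetic points.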
Preferably, after the age group and gender of the user who issued the command utterance are recognized, the user voiceprint ID of that user is further recognized.
Each user's voice has a unique voiceprint ID, under which personal data such as the user's name, gender, age, and hobbies are recorded.
Preferably, the voiceprint features of the user's command utterance are extracted and matched one by one against registered voiceprint models pre-stored in the cloud; if the match value is greater than a threshold, the user voiceprint ID of the user is determined. If the match value is smaller than the threshold, it is determined that the user has not registered.
Preferably, the voiceprint feature is a d-vector, a feature extracted with a deep neural network (DNN), specifically the output of the last hidden layer of the DNN.
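A minimal sketch of the one-by-one threshold matching step, assuming d-vectors are already extracted as fixed-length arrays; cosine similarity as the match value and the 0.7 threshold are illustrative assumptions, since the text does not specify either.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_voiceprint(d_vector, registered, threshold=0.7):
    """Match a d-vector one by one against registered voiceprint models.

    Returns the voiceprint ID of the best match above the threshold,
    or None if the user is not registered.
    """
    best_id, best_score = None, threshold
    for voiceprint_id, model_vector in registered.items():
        score = cosine_similarity(d_vector, np.asarray(model_vector))
        if score > best_score:
            best_id, best_score = voiceprint_id, score
    return best_id

# Hypothetical registered voiceprint models keyed by voiceprint ID.
registered = {"user_001": [0.9, 0.1, 0.2], "user_002": [0.1, 0.95, 0.3]}
print(match_voiceprint(np.array([0.88, 0.12, 0.19]), registered))
```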
In 102, speech recognition is performed on the command utterance with the speech recognition model corresponding to the user category, to obtain the command described by the command utterance.
With speech recognition technology, the voice information of the command utterance can be recognized as text information, and corresponding operations can then be performed based on the text information.
To improve recognition accuracy, speech recognition models for the different user categories need to be built in advance.
Specifically, corpora with the colloquial speech characteristics of different user types are collected to form a corpus, and speech recognition models are trained with the corpus to obtain a speech recognition model for each user type.
For example, where the age group in the user category is children, a corpus with children's colloquial speech characteristics can be collected, and the corpus can then be used for model training to obtain a children's speech recognition model.
Children's colloquial speech characteristics here may specifically include word repetition, unclear articulation, and sentence-break errors.
Further, when the user category is child users, a children's mode is switched on automatically; a voice-interaction style using the dialogue patterns children are used to can be adopted, and content can be filtered and optimized for children.
The interaction of the children's mode should be specially designed to match children's dialogue habits. For example, the TTS broadcast voice can be that of a child or a young woman, to feel closer to children, and the broadcast voice can use more reduplicated words so that it sounds more comfortable to children. Children's chat can be designed around the chat topics children often express, accompanying children as they grow.
The children's mode requires all content resources to be carefully screened to remove pornographic and violent content. All content, including music, audiobooks, movies, and television, must precisely meet children's needs; for example, music should mostly be children's songs, audio content mostly children's stories, movies mostly animated films, and television mostly cartoons.
In 103, resources are provided according to the user category and the command.
Specifically, this includes the following sub-steps:
searching, according to the user category, for a recommended interest category that matches the command;
searching for a target resource that matches the recommended interest category, and presenting the target resource to the user.
Searching, according to the user category, for the recommended interest category that matches the command comprises, as shown in Fig. 2, the following sub-steps:
In 201, the current vertical category is determined according to the command; the current vertical category includes music, audiobooks, broadcast, radio, video, movies, food, chat, and the like.
For example, when the user command is "play a song", the current vertical category is determined to be music;
for example, when the user command is "play a movie", the current vertical category is determined to be movies;
for example, when the user command is "anything good to eat", the current vertical category is determined to be food.
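The vertical-category determination in 201 can be sketched as keyword matching on the recognized command text; the keyword table below is a hypothetical illustration, not the patent's actual mapping.

```python
# Hypothetical keyword table mapping command text to a vertical category.
VERTICAL_KEYWORDS = {
    "song": "music",
    "music": "music",
    "movie": "movies",
    "film": "movies",
    "eat": "food",
    "story": "audiobooks",
}

def determine_vertical(command_text: str, default: str = "chat") -> str:
    """Return the first vertical whose keyword appears in the command."""
    text = command_text.lower()
    for keyword, vertical in VERTICAL_KEYWORDS.items():
        if keyword in text:
            return vertical
    return default

print(determine_vertical("play a song"))           # music
print(determine_vertical("anything good to eat"))  # food
```

A production system would more likely use an intent classifier over the recognized text, but the lookup captures the command-to-vertical step described here.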
In 202, a recommended interest category that matches the command is obtained according to the current vertical category and the attribute information of the user, using a pre-built user interest model.
The attribute information includes at least one of age group, gender, and interest information.
Preferably, building the user interest model in advance comprises:
obtaining user history logs, wherein the user history logs include at least: a user identifier, user attribute information, and user historical behavior data;
classifying and aggregating the user historical behavior data by user category and vertical category, to obtain the user interest model.
User history logs of a large number of users over a preset time granularity (for example, 2 months, 4 months, or half a year) can be obtained. Because of users' behavior habits, from a large number of user history logs one can derive the specific behaviors of different user categories under specific vertical categories, i.e. user interest preferences. In other words, the user historical behavior data is classified and aggregated by user category and vertical category to obtain the user interest model. The user interest model can be used to determine recommendation strategies; the vertical-category recommendation strategies for different scenarios such as music, audiobooks, broadcast, radio, video, movies, food, and chat include the user age-group and gender dimensions. That is, according to the current user category and vertical category, the user interest model is used to determine the recommended interest category associated with the current user category and the current vertical category.
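The classify-and-aggregate step above can be sketched as counting historical behaviors grouped by (user category, vertical category); the log fields and category names below are illustrative assumptions, not the patent's schema.

```python
from collections import Counter, defaultdict

# Hypothetical history-log records: (user category, vertical, content category).
history_logs = [
    ("child", "video", "animation"),
    ("child", "video", "animation"),
    ("child", "video", "documentary"),
    ("male", "music", "rock"),
    ("male", "music", "rock"),
]

def build_interest_model(logs):
    """Count behaviors per (user category, vertical) to form the interest model."""
    model = defaultdict(Counter)
    for user_category, vertical, content_category in logs:
        model[(user_category, vertical)][content_category] += 1
    return model

def recommended_interest(model, user_category, vertical):
    """Most frequent content category for this user category and vertical."""
    counts = model.get((user_category, vertical))
    return counts.most_common(1)[0][0] if counts else None

model = build_interest_model(history_logs)
print(recommended_interest(model, "child", "video"))  # animation
```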
For example, the videos watched by users in the child age group in the video vertical include animated videos such as My Little Pony, Dora the Explorer, and Peppa Pig; by mining the historical behavior of users in this age group, it can be derived that the recommended interest category of child-age users in the video vertical is animated video.
Preferably, if the user voiceprint ID of the user has been determined, the recommended content associated with the current user and the current vertical category is determined, according to the current vertical category, using the user interest model corresponding to that user voiceprint ID. Here, the user historical behavior data corresponding to the user voiceprint ID is obtained according to the user voiceprint ID, and the user historical behavior data is classified and aggregated by vertical category to obtain the user interest model.
In 104, a multimedia resource library is searched for a target resource that matches the recommended interest category, and the target resource is presented to the user.
For example,
in the music vertical, when the user utters a generalized request such as "play a song", soothing or romantic music is recommended if the user is recognized as female, and rock or other energetic music if male; opera and similar music is recommended if an elderly person is recognized; children's songs and similar music are played if a child is recognized. Age and gender can also be combined, so that different types of children's songs can be recommended for little boys and little girls.
In the movie vertical, when the user utters a generalized request such as "play a movie", the latest and hottest action movies and the like are recommended if the user is recognized as male; romance movies are recommended if female; animated movies are recommended if the user is recognized as a child.
In the food vertical, when the user asks for a recommendation such as "anything good to eat", desserts and similar food are recommended if a child is recognized; if a female user is recognized, desserts may also be promoted, or restaurants with a romantic dining atmosphere may be recommended.
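The per-vertical strategies above amount to a lookup keyed on (vertical category, user category); a hypothetical sketch with an assumed fallback for unmatched pairs:

```python
# Hypothetical recommendation-strategy table: (vertical, user category) -> genre.
STRATEGY = {
    ("music", "female"): "soothing and romantic music",
    ("music", "male"): "rock and energetic music",
    ("music", "elderly"): "opera",
    ("music", "child"): "children's songs",
    ("movies", "male"): "latest action movies",
    ("movies", "female"): "romance movies",
    ("movies", "child"): "animated movies",
    ("food", "child"): "desserts",
}

def recommend(vertical: str, user_category: str, default: str = "popular picks") -> str:
    """Fall back to a generic recommendation when no rule matches."""
    return STRATEGY.get((vertical, user_category), default)

print(recommend("movies", "child"))  # animated movies
```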
In the technical solution of this embodiment, the voiceprint recognition process is implicit recommendation recognition: there is no explicit process of identifying a specific voiceprint or who the user is; instead, these very "ordinary" command utterances are processed while the user speaks naturally, and the work of voiceprint recognition is completed at the same time.
Because the recognition is implicit, even if an occasional recognition error leads to a wrong recommendation, the user will not clearly perceive it.
Entering the children's mode through intelligent recognition makes full use of the interaction advantages of voice dialogue products. The children's mode can be entered intelligently without actively asking the user's age, giving a better user experience.
Age and gender are added to the recommendation strategy, making the strategy more complete and the recommendations more precise, thereby improving user satisfaction.
The technical requirements for productization are lowered: the technology can be productized even before it reaches extremely high accuracy, letting users experience the improved satisfaction it brings. At the same time, more data becomes available after productization; for voiceprint recognition, which is based on machine learning, more data accelerates the iteration of the technology. The product thus feeds the technology back, the technology can be productized more deeply, and a positive cycle is entered.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Fig. 3 is a schematic structural diagram of a voiceprint recognition apparatus according to another embodiment of the present application; as shown in Fig. 3, it comprises a voiceprint recognition module 301, a speech recognition module 302, and a providing module 303, wherein
the voiceprint recognition module 301 is configured to recognize, by voiceprint recognition on an acquired command utterance, the user category of the user who issued the command utterance.
Specifically, the user category includes user gender and user age group.
Since different user categories, i.e. user groups of different genders and age groups, have distinctive voiceprint characteristics, the voiceprint recognition module 301 further includes a voiceprint processing model building sub-module, configured to perform model training according to the voice characteristics of different user categories to build voiceprint processing models for the different user categories, so that voiceprint analysis can be carried out for user groups of different categories. When a user initiates a voice search, the gender and age group of the user who issued the command utterance can be recognized from the command utterance by voiceprint recognition.
Before voiceprint recognition, the speaker's voiceprint must first be modeled, i.e. "trained" or "learned". Specifically, a deep neural network (DNN) voiceprint baseline system is applied to extract a first feature vector for each utterance in the training set; a gender classifier and an age classifier are then trained from the first feature vector of each utterance together with pre-annotated gender and age-group labels, thereby building voiceprint processing models that distinguish gender and age group.
From the acquired command utterance, first feature information of the command utterance is extracted and sent to the pre-generated gender classifier and age-group classifier respectively. The gender classifier and the age-group classifier analyze the first feature information to obtain its gender label and age-group label, i.e. the gender label and age-group label of the command utterance.
For example, taking a Gaussian mixture model as the gender classifier, fundamental-frequency features and Mel-frequency cepstral coefficient (MFCC) features can first be extracted from the voice request; posterior probability values can then be computed for the fundamental-frequency and MFCC features with the Gaussian mixture model, and the user's gender can be determined from the result. For example, assuming the Gaussian mixture model is a male Gaussian mixture model, the user's gender can be determined as male when the computed posterior probability value is very high, e.g. above a certain threshold, and as female when the posterior probability value is very small, e.g. below a certain threshold.
Preferably, after the age group and gender of the user who issued the command utterance are recognized, the user voiceprint ID of that user is further recognized.
Each user's voice has a unique voiceprint ID, under which personal data such as the user's name, gender, age, and hobbies are recorded.
Preferably, the voiceprint features of the user's command utterance are extracted and matched one by one against registered voiceprint models pre-stored in the cloud; if the match value is greater than a threshold, the user voiceprint ID of the user is determined. If the match value is smaller than the threshold, it is determined that the user has not registered.
Preferably, the voiceprint feature is a d-vector, a feature extracted with a deep neural network (DNN), specifically the output of the last hidden layer of the DNN.
The speech recognition module 302 is configured to perform speech recognition on the command utterance with the speech recognition model corresponding to the user category, to obtain the command described by the command utterance.
To improve recognition accuracy, the speech recognition module 302 further includes a speech recognition model building sub-module, configured to build, in advance, speech recognition models for the different user categories.
Specifically, corpora with the colloquial speech characteristics of different user types are collected to form a corpus, and speech recognition models are trained with the corpus to obtain a speech recognition model for each user type.
For example, where the age group in the user category is children, a corpus with children's colloquial speech characteristics can be collected, and the corpus can then be used for model training to obtain a children's speech recognition model.
Children's colloquial speech characteristics here may specifically include word repetition, unclear articulation, and sentence-break errors.
Further, a children's guidance module is also included, configured to automatically switch on a children's mode when the user category is child users; a voice-interaction style using the dialogue patterns children are used to can be adopted, and content can be filtered and optimized for children.
The interaction of the children's mode should be specially designed to match children's dialogue habits. For example, the TTS broadcast voice can be that of a child or a young woman, to feel closer to children, and the broadcast voice can use more reduplicated words so that it sounds more comfortable to children. Children's chat can be designed around the chat topics children often express, accompanying children as they grow.
The children's mode requires all content resources to be carefully screened to remove pornographic and violent content. All content, including music, audiobooks, movies, and television, must precisely meet children's needs; for example, music should mostly be children's songs, audio content mostly children's stories, movies mostly animated films, and television mostly cartoons.
The providing module 303 is configured to provide resources according to the user category and the command; specifically, it comprises:
a search sub-module, configured to search, according to the user category, for a recommended interest category that matches the command;
a presentation sub-module, configured to search for a target resource that matches the interest category, and to present the target resource to the user.
The search sub-module is configured to search, according to the user category, for the recommended interest category that matches the command.
Specifically, as shown in Fig. 4, it comprises the following sub-modules:
a vertical category determination sub-module 401, configured to determine the current vertical category according to the command; the current vertical category includes music, audiobooks, broadcast, radio, video, movies, food, chat, and the like;
for example, when the user command is "play a song", the current vertical category is determined to be music;
for example, when the user command is "play a movie", the current vertical category is determined to be movies;
for example, when the user command is "anything good to eat", the current vertical category is determined to be food;
a content acquisition sub-module 402, configured to obtain, according to the current vertical category and the attribute information of the user, a recommended interest category that matches the command, using a pre-built user interest model.
The attribute information includes at least one of age group, gender, and interest information.
Preferably, a user interest model building module 403 is also included, configured to build the user interest model in advance, including:
obtaining user history logs, wherein the user history logs include at least: a user identifier, user attribute information, and user historical behavior data;
classifying and aggregating the user historical behavior data by user category and vertical category, to obtain the user interest model.
User history logs of a large number of users over a preset time granularity (for example, 2 months, 4 months, or half a year) can be obtained.
Because of users' behavior habits, from a large number of user history logs one can derive the specific behaviors of different user categories under specific vertical categories, i.e. user interest preferences. In other words, the user historical behavior data is classified and aggregated by user category and vertical category to obtain the user interest model. The user interest model can be used to determine recommendation strategies; the vertical-category recommendation strategies for different scenarios such as music, audiobooks, broadcast, radio, video, movies, food, and chat include the user age-group and gender dimensions. That is, according to the current user category and vertical category, the user interest model is used to determine the recommended interest category associated with the current user category and the current vertical category.
For example, the videos watched by users in the child age group in the video vertical include animated videos such as My Little Pony, Dora the Explorer, and Peppa Pig; by mining the historical behavior of users in this age group, it can be derived that the recommended interest category of child-age users in the video vertical is animated video.
Preferably, if the user voiceprint ID of the user has been determined, the recommended content associated with the current user and the current vertical category is determined, according to the current vertical category, using the user interest model corresponding to that user voiceprint ID. Here, the user historical behavior data corresponding to the user voiceprint ID is obtained according to the user voiceprint ID, and the user historical behavior data is classified and aggregated by vertical category to obtain the user interest model.
The presentation sub-module is configured to search a multimedia resource library for a target resource that matches the recommended interest category, and to present the target resource to the user.
For example,
in the music vertical, when the user utters a generalized request such as "play a song", soothing or romantic music is recommended if the user is recognized as female, and rock or other energetic music if male; opera and similar music is recommended if an elderly person is recognized; children's songs and similar music are played if a child is recognized. Age and gender can also be combined, so that different types of children's songs can be recommended for little boys and little girls.
In the movie vertical, when the user utters a generalized request such as "play a movie", the latest and hottest action movies and the like are recommended if the user is recognized as male; romance movies are recommended if female; animated movies are recommended if the user is recognized as a child.
In the food vertical, when the user asks for a recommendation such as "anything good to eat", desserts and similar food are recommended if a child is recognized; if a female user is recognized, desserts may also be promoted, or restaurants with a romantic dining atmosphere may be recommended.
In the technical solution of this embodiment, the voiceprint recognition process is implicit recommendation recognition: there is no explicit process of identifying a specific voiceprint or who the user is; instead, these very "ordinary" command utterances are processed while the user speaks naturally, and the work of voiceprint recognition is completed at the same time.
Because the recognition is implicit, even if an occasional recognition error leads to a wrong recommendation, the user will not clearly perceive it.
Entering the children's mode through intelligent recognition makes full use of the interaction advantages of voice dialogue products. The children's mode can be entered intelligently without actively asking the user's age, giving a better user experience.
Age and gender are added to the recommendation strategy, making the strategy more complete and the recommendations more precise, thereby improving user satisfaction.
The technical requirements for productization are lowered: the technology can be productized even before it reaches extremely high accuracy, letting users experience the improved satisfaction it brings. At the same time, more data becomes available after productization; for voiceprint recognition, which is based on machine learning, more data accelerates the iteration of the technology. The product thus feeds the technology back, the technology can be productized more deeply, and a positive cycle is entered.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the terminals and servers described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Fig. 5 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention. The computer system/server 012 shown in Fig. 5 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
As shown in Fig. 5, the computer system/server 012 is embodied in the form of a general-purpose computing device. The components of the computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 connecting the different system components (including the system memory 028 and the processing units 016).
The bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 012 typically includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer system/server 012, including volatile and non-volatile media, removable and non-removable media.
The system memory 028 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 030 and/or a cache memory 032. The computer system/server 012 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 034 may be used for reading from and writing to a non-removable, non-volatile magnetic medium (not shown in Fig. 5, commonly referred to as a "hard disk drive"). Although not shown in Fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g. a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g. a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 018 through one or more data media interfaces. The memory 028 may include at least one program product having a set (e.g. at least one) of program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 040, having a set (at least one) of program modules 042, may be stored, for example, in the memory 028. Such program modules 042 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment. The program modules 042 generally perform the functions and/or methods of the embodiments described in the present invention.
The computer system/server 012 may also communicate with one or more external devices 014 (e.g. a keyboard, a pointing device, a display 024, etc.); in the present invention, the computer system/server 012 communicates with an external radar device, and may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g. a network card, a modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 022. Moreover, the computer system/server 012 may also communicate with one or more networks (e.g. a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 020. As shown in Fig. 5, the network adapter 020 communicates with the other modules of the computer system/server 012 through the bus 018. It should be understood that, although not shown in Fig. 5, other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 016 executes the functions and/or methods of the embodiments described in the present invention by running programs stored in the system memory 028.
The above computer program may be provided in a computer storage medium, i.e. the computer storage medium is encoded with a computer program which, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above embodiments of the present invention.
With the development of time and technology, the meaning of "medium" has become increasingly broad; the propagation path of a computer program is no longer limited to tangible media, and a program can also be downloaded directly from a network, etc. Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted by any suitable medium, including, but not limited to, wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for performing the operations of the present invention may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (18)

  1. A voiceprint recognition method, characterized by comprising:
    recognizing, by voiceprint recognition on an acquired command utterance, the user category of the user who issued the command utterance;
    performing speech recognition on the command utterance with the corresponding speech recognition model according to the user category, to obtain the command described by the command utterance;
    providing resources according to the user category and the command.
  2. The voiceprint recognition method according to claim 1, characterized in that
    the user category includes user gender and user age group.
  3. The voiceprint recognition method according to claim 1 or 2, characterized in that, before recognizing, by voiceprint recognition on the acquired command utterance, the user category of the user who issued the command utterance, the method further comprises:
    performing model training according to the voice characteristics of different user categories, to build voiceprint processing models for the different user categories.
  4. The voiceprint recognition method according to claim 1, 2, or 3, characterized in that, before performing speech recognition on the command utterance with the corresponding speech recognition model according to the user category to obtain the command described by the command utterance, the method further comprises:
    collecting corpora with the colloquial speech characteristics of different user types to form a corpus, and training speech recognition models with the corpus to obtain a speech recognition model for each user type.
  5. The voiceprint recognition method according to any one of claims 1 to 4, characterized in that providing resources according to the user category and the command comprises:
    searching, according to the user category, for a recommended interest category that matches the command;
    searching for a target resource that matches the recommended interest category, and presenting the target resource to the user.
  6. The voiceprint recognition method according to claim 5, characterized in that searching, according to the user category, for the recommended interest category that matches the command comprises:
    determining the current vertical category according to the command;
    obtaining, according to the current vertical category and the attribute information of the user, a recommended interest category that matches the command, using a pre-built user interest model.
  7. The voiceprint recognition method according to claim 6, characterized in that
    the attribute information includes at least one of user age group and user gender.
  8. The voiceprint recognition method according to claim 5, characterized in that, before searching, according to the user category, for the recommended interest category that matches the command, the method further comprises:
    obtaining user history logs, wherein the user history logs include at least: a user identifier, user attribute information, and user historical behavior data;
    classifying and aggregating the user historical behavior data by user category and vertical category, to obtain the user interest model.
  9. A voiceprint recognition apparatus, characterized by comprising:
    a voiceprint recognition module, configured to recognize, by voiceprint recognition on an acquired command utterance, the user category of the user who issued the command utterance;
    a speech recognition module, configured to perform speech recognition on the command utterance with the corresponding speech recognition model according to the user category, to obtain the command described by the command utterance;
    a providing module, configured to provide resources according to the user category and the command.
  10. The voiceprint recognition apparatus according to claim 9, characterized in that
    the user category includes user gender and user age group.
  11. The voiceprint recognition apparatus according to claim 9 or 10, characterized in that the voiceprint recognition module further comprises:
    a voiceprint processing model building sub-module, configured to perform model training according to the voice characteristics of different user categories, to build voiceprint processing models for the different user categories.
  12. The voiceprint recognition apparatus according to claim 9, 10, or 11, characterized in that the speech recognition module further comprises:
    a speech recognition model building sub-module, configured to collect corpora with the colloquial speech characteristics of different user types to form a corpus, and to train speech recognition models with the corpus to obtain a speech recognition model for each user type.
  13. The voiceprint recognition apparatus according to any one of claims 9 to 12, characterized in that the providing module comprises:
    a search sub-module, configured to search, according to the user category, for a recommended interest category that matches the command;
    a presentation sub-module, configured to search for a target resource that matches the interest category, and to present the target resource to the user.
  14. The voiceprint recognition apparatus according to claim 13, characterized in that the search sub-module comprises:
    a vertical category determination sub-module, configured to determine the current vertical category according to the command;
    a content acquisition sub-module, configured to obtain, according to the current vertical category and the attribute information of the user, a recommended interest category that matches the command, using a pre-built user interest model.
  15. The voiceprint recognition apparatus according to claim 14, characterized in that
    the attribute information includes at least one of user age group and user gender.
  16. The voiceprint recognition apparatus according to claim 14, characterized in that the search sub-module further comprises a user interest model building sub-module, configured to:
    obtain user history logs, wherein the user history logs include at least: a user identifier, user attribute information, and user historical behavior data;
    classify and aggregate the user historical behavior data by user category and vertical category, to obtain the user interest model.
  17. A device, characterized in that the device comprises:
    one or more processors;
    a storage apparatus for storing one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-8.
  18. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-8.
PCT/CN2018/077359 2017-06-30 2018-02-27 一种声纹识别方法及装置 WO2019000991A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018546525A JP6711500B2 (ja) 2017-06-30 2018-02-27 声紋識別方法及び装置
US16/300,444 US11302337B2 (en) 2017-06-30 2018-02-27 Voiceprint recognition method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710525251.5A CN107507612B (zh) 2017-06-30 2017-06-30 一种声纹识别方法及装置
CN201710525251.5 2017-06-30

Publications (1)

Publication Number Publication Date
WO2019000991A1 true WO2019000991A1 (zh) 2019-01-03

Family

ID=60679818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077359 WO2019000991A1 (zh) 2017-06-30 2018-02-27 一种声纹识别方法及装置

Country Status (4)

Country Link
US (1) US11302337B2 (zh)
JP (1) JP6711500B2 (zh)
CN (1) CN107507612B (zh)
WO (1) WO2019000991A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188171A (zh) * 2019-05-30 2019-08-30 上海联影医疗科技有限公司 一种语音搜索方法、装置、电子设备及存储介质
CN110335626A (zh) * 2019-07-09 2019-10-15 北京字节跳动网络技术有限公司 基于音频的年龄识别方法及装置、存储介质
CN110503961A (zh) * 2019-09-03 2019-11-26 北京字节跳动网络技术有限公司 音频识别方法、装置、存储介质及电子设备
CN110990685A (zh) * 2019-10-12 2020-04-10 中国平安财产保险股份有限公司 基于声纹的语音搜索方法、设备、存储介质及装置
CN111326163A (zh) * 2020-04-15 2020-06-23 厦门快商通科技股份有限公司 一种声纹识别方法和装置以及设备
CN112530418A (zh) * 2019-08-28 2021-03-19 北京声智科技有限公司 一种语音唤醒方法、装置及相关设备
CN112733025A (zh) * 2021-01-06 2021-04-30 天津五八到家货运服务有限公司 用户数据服务系统、用户数据处理方法、设备和存储介质
WO2022048786A1 (en) 2020-09-07 2022-03-10 Kiwip Technologies Sas Secure communication system with speaker recognition by voice biometrics for user groups such as family groups
US11495217B2 (en) 2018-04-16 2022-11-08 Google Llc Automated assistants that accommodate multiple age groups and/or vocabulary levels

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507612B (zh) * 2017-06-30 2020-08-28 百度在线网络技术(北京)有限公司 一种声纹识别方法及装置
CN108305623A (zh) * 2018-01-15 2018-07-20 珠海格力电器股份有限公司 电器控制方法及装置
CN110046898B (zh) * 2018-01-17 2022-01-25 苏州君林智能科技有限公司 账户信息的分组方法、装置及支付方法、装置
CN108492836A (zh) * 2018-03-29 2018-09-04 努比亚技术有限公司 一种基于语音的搜索方法、移动终端及存储介质
CN108899033B (zh) * 2018-05-23 2021-09-10 出门问问信息科技有限公司 一种确定说话人特征的方法及装置
CN110619870B (zh) * 2018-06-04 2022-05-06 佛山市顺德区美的电热电器制造有限公司 一种人机对话方法、装置、家用电器和计算机存储介质
CN108881649B (zh) * 2018-06-08 2020-11-13 百度在线网络技术(北京)有限公司 用于提供语音服务的方法和装置
CN108737872A (zh) 2018-06-08 2018-11-02 百度在线网络技术(北京)有限公司 用于输出信息的方法和装置
CN108882014A (zh) * 2018-06-13 2018-11-23 成都市极米科技有限公司 智能电视儿童桌面的管理方法、管理装置和可读存储介质
CN108962223A (zh) * 2018-06-25 2018-12-07 厦门快商通信息技术有限公司 一种基于深度学习的语音性别识别方法、设备及介质
CN108831487B (zh) * 2018-06-28 2020-08-18 深圳大学 声纹识别方法、电子装置及计算机可读存储介质
CN108924218B (zh) * 2018-06-29 2020-02-18 百度在线网络技术(北京)有限公司 用于推送信息的方法和装置
CN108933730A (zh) * 2018-06-29 2018-12-04 百度在线网络技术(北京)有限公司 信息推送方法和装置
CN109271585B (zh) * 2018-08-30 2021-06-01 广东小天才科技有限公司 一种信息推送方法及家教设备
CN109119071A (zh) * 2018-09-26 2019-01-01 珠海格力电器股份有限公司 一种语音识别模型的训练方法及装置
CN118503532A (zh) * 2018-10-02 2024-08-16 松下电器(美国)知识产权公司 信息提供方法
CN109582822A (zh) * 2018-10-19 2019-04-05 百度在线网络技术(北京)有限公司 一种基于用户语音的音乐推荐方法及装置
CN111290570A (zh) * 2018-12-10 2020-06-16 中国移动通信集团终端有限公司 人工智能设备的控制方法、装置、设备及介质
CN109462603A (zh) * 2018-12-14 2019-03-12 平安城市建设科技(深圳)有限公司 基于盲检测的声纹认证方法、设备、存储介质及装置
CN109412405A (zh) * 2018-12-24 2019-03-01 珠海格力电器股份有限公司 电磁辐射调节方法、装置、系统及家电设备
CN109671438A (zh) * 2019-01-28 2019-04-23 武汉恩特拉信息技术有限公司 一种利用语音提供辅助服务的装置及方法
CN111724797A (zh) * 2019-03-22 2020-09-29 比亚迪股份有限公司 基于图像和声纹识别的语音控制方法、系统和车辆
CN111859008B (zh) * 2019-04-29 2023-11-10 深圳市冠旭电子股份有限公司 一种推荐音乐的方法及终端
CN110166560B (zh) * 2019-05-24 2021-08-20 北京百度网讯科技有限公司 一种服务配置方法、装置、设备及存储介质
CN110570843B (zh) * 2019-06-28 2021-03-05 北京蓦然认知科技有限公司 一种用户语音识别方法和装置
US11257493B2 (en) 2019-07-11 2022-02-22 Soundhound, Inc. Vision-assisted speech processing
CN112331193B (zh) * 2019-07-17 2024-08-09 华为技术有限公司 语音交互方法及相关装置
CN110336723A (zh) * 2019-07-23 2019-10-15 珠海格力电器股份有限公司 智能家电的控制方法及装置、智能家电设备
JP6977004B2 (ja) * 2019-08-23 2021-12-08 サウンドハウンド,インコーポレイテッド 車載装置、発声を処理する方法およびプログラム
CN110600033B (zh) * 2019-08-26 2022-04-05 北京大米科技有限公司 学习情况的评估方法、装置、存储介质及电子设备
CN110534099B (zh) * 2019-09-03 2021-12-14 腾讯科技(深圳)有限公司 语音唤醒处理方法、装置、存储介质及电子设备
CN110689886B (zh) * 2019-09-18 2021-11-23 深圳云知声信息技术有限公司 设备控制方法及装置
CN112581950A (zh) * 2019-09-29 2021-03-30 广东美的制冷设备有限公司 空调器的语音控制方法、装置及存储介质
CN112735398B (zh) * 2019-10-28 2022-09-06 思必驰科技股份有限公司 人机对话模式切换方法及系统
CN110753254A (zh) * 2019-10-30 2020-02-04 四川长虹电器股份有限公司 应用于智能语音电视声纹支付的声纹注册方法
CN110660393B (zh) * 2019-10-31 2021-12-03 广东美的制冷设备有限公司 语音交互方法、装置、设备及存储介质
CN111023470A (zh) * 2019-12-06 2020-04-17 厦门快商通科技股份有限公司 空调温度调节方法、介质、设备及装置
CN111081249A (zh) * 2019-12-30 2020-04-28 腾讯科技(深圳)有限公司 一种模式选择方法、装置及计算机可读存储介质
CN111274819A (zh) * 2020-02-13 2020-06-12 北京声智科技有限公司 资源获取方法及装置
CN111489756B (zh) * 2020-03-31 2024-03-01 中国工商银行股份有限公司 一种声纹识别方法及装置
CN112002346A (zh) * 2020-08-20 2020-11-27 深圳市卡牛科技有限公司 基于语音的性别年龄识别方法、装置、设备和存储介质
CN112163081B (zh) * 2020-10-14 2024-08-27 网易(杭州)网络有限公司 标签确定方法、装置、介质及电子设备
CN114449312A (zh) * 2020-11-04 2022-05-06 深圳Tcl新技术有限公司 一种视频播放控制方法、装置、终端设备及存储介质
CN112584238A (zh) * 2020-12-09 2021-03-30 深圳创维-Rgb电子有限公司 影视资源匹配方法、装置及智能电视
CN113938755A (zh) * 2021-09-18 2022-01-14 海信视像科技股份有限公司 服务器、终端设备以及资源推荐方法
CN113948084A (zh) * 2021-12-06 2022-01-18 北京声智科技有限公司 语音数据的处理方法、装置、电子设备、存储介质及产品
CN114155845A (zh) * 2021-12-13 2022-03-08 中国农业银行股份有限公司 服务确定方法、装置、电子设备及存储介质
CN116994565B (zh) * 2023-09-26 2023-12-15 深圳琪乐科技有限公司 一种智能语音助手及其语音控制方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441869A (zh) * 2007-11-21 2009-05-27 联想(北京)有限公司 语音识别终端用户身份的方法及终端
CN102142254A (zh) * 2011-03-25 2011-08-03 北京得意音通技术有限责任公司 基于声纹识别和语音识别的防录音假冒的身份确认方法
CN105068661A (zh) * 2015-09-07 2015-11-18 百度在线网络技术(北京)有限公司 基于人工智能的人机交互方法和系统
CN105426436A (zh) * 2015-11-05 2016-03-23 百度在线网络技术(北京)有限公司 基于人工智能机器人的信息提供方法和装置
CN106548773A (zh) * 2016-11-04 2017-03-29 百度在线网络技术(北京)有限公司 基于人工智能的儿童用户搜索方法及装置
CN106557410A (zh) * 2016-10-25 2017-04-05 北京百度网讯科技有限公司 基于人工智能的用户行为分析方法和装置
CN107507612A (zh) * 2017-06-30 2017-12-22 百度在线网络技术(北京)有限公司 一种声纹识别方法及装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040190688A1 (en) * 2003-03-31 2004-09-30 Timmins Timothy A. Communications methods and systems using voiceprints
JP2003115951A (ja) 2001-10-09 2003-04-18 Casio Comput Co Ltd 話題情報提供システムおよび話題情報提供方法
KR100755678B1 (ko) * 2005-10-28 2007-09-05 삼성전자주식회사 개체명 검출 장치 및 방법
ATE439665T1 (de) * 2005-11-25 2009-08-15 Swisscom Ag Verfahren zur personalisierung eines dienstes
US20110060587A1 (en) * 2007-03-07 2011-03-10 Phillips Michael S Command and control utilizing ancillary information in a mobile voice-to-speech application
JP2009271785A (ja) 2008-05-08 2009-11-19 Nippon Telegr & Teleph Corp <Ntt> 情報提供方法及び装置及びコンピュータ読み取り可能な記録媒体
US20120042020A1 (en) * 2010-08-16 2012-02-16 Yahoo! Inc. Micro-blog message filtering
US8930187B2 (en) * 2012-01-03 2015-01-06 Nokia Corporation Methods, apparatuses and computer program products for implementing automatic speech recognition and sentiment detection on a device
JP2013164642A (ja) 2012-02-09 2013-08-22 Nikon Corp Search means control device, search result output device, and program
JP6221253B2 (ja) 2013-02-25 2017-11-01 セイコーエプソン株式会社 Speech recognition device and method, and semiconductor integrated circuit device
JP6522503B2 (ja) * 2013-08-29 2019-05-29 Panasonic Intellectual Property Corporation of America Device control method, display control method, and purchase settlement method
JP5777178B2 (ja) * 2013-11-27 2015-09-09 国立研究開発法人情報通信研究機構 Method for adapting a statistical acoustic model, method for training an acoustic model suited to statistical acoustic model adaptation, storage medium storing parameters for building a deep neural network, and computer program for adapting a statistical acoustic model
JP6129134B2 (ja) 2014-09-29 2017-05-17 シャープ株式会社 Voice dialogue device, voice dialogue system, terminal, voice dialogue method, and program for causing a computer to function as a voice dialogue device
CN105045889B (zh) 2015-07-29 2018-04-20 百度在线网络技术(北京)有限公司 Information push method and apparatus
US11113714B2 (en) * 2015-12-30 2021-09-07 Verizon Media Inc. Filtering machine for sponsored content
US9812151B1 (en) * 2016-11-18 2017-11-07 IPsoft Incorporated Generating communicative behaviors for anthropomorphic virtual agents based on user's affect

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11495217B2 (en) 2018-04-16 2022-11-08 Google Llc Automated assistants that accommodate multiple age groups and/or vocabulary levels
US11756537B2 (en) 2018-04-16 2023-09-12 Google Llc Automated assistants that accommodate multiple age groups and/or vocabulary levels
CN110188171A (zh) * 2019-05-30 2019-08-30 上海联影医疗科技有限公司 Voice search method and apparatus, electronic device, and storage medium
CN110335626A (zh) * 2019-07-09 2019-10-15 北京字节跳动网络技术有限公司 Audio-based age recognition method and apparatus, and storage medium
CN112530418A (zh) * 2019-08-28 2021-03-19 北京声智科技有限公司 Voice wake-up method and apparatus, and related device
CN110503961A (zh) * 2019-09-03 2019-11-26 北京字节跳动网络技术有限公司 Audio recognition method and apparatus, storage medium, and electronic device
CN110990685A (zh) * 2019-10-12 2020-04-10 中国平安财产保险股份有限公司 Voiceprint-based voice search method, device, storage medium, and apparatus
CN110990685B (zh) * 2019-10-12 2023-05-26 中国平安财产保险股份有限公司 Voiceprint-based voice search method, device, storage medium, and apparatus
CN111326163A (zh) * 2020-04-15 2020-06-23 厦门快商通科技股份有限公司 Voiceprint recognition method, apparatus, and device
WO2022048786A1 (en) 2020-09-07 2022-03-10 Kiwip Technologies Sas Secure communication system with speaker recognition by voice biometrics for user groups such as family groups
CN112733025A (zh) * 2021-01-06 2021-04-30 天津五八到家货运服务有限公司 User data service system, user data processing method, device, and storage medium

Also Published As

Publication number Publication date
CN107507612B (zh) 2020-08-28
US11302337B2 (en) 2022-04-12
JP6711500B2 (ja) 2020-06-17
CN107507612A (zh) 2017-12-22
US20210225380A1 (en) 2021-07-22
JP2019527371A (ja) 2019-09-26

Similar Documents

Publication Publication Date Title
WO2019000991A1 (zh) Voiceprint recognition method and apparatus
CN107481720B (zh) Explicit voiceprint recognition method and apparatus
US10977452B2 (en) Multi-lingual virtual personal assistant
CN107492379B (zh) Voiceprint creation and registration method and apparatus
US11417343B2 (en) Automatic speaker identification in calls using multiple speaker-identification parameters
US11475897B2 (en) Method and apparatus for response using voice matching user category
CN105895087B (zh) Speech recognition method and apparatus
US10679063B2 (en) Recognizing salient video events through learning-based multimodal analysis of visual features and audio-based analytics
KR102333505B1 (ko) Generating computer responses to social conversational inputs
US11494612B2 (en) Systems and methods for domain adaptation in neural networks using domain classifier
US20210158790A1 (en) Autonomous generation of melody
US20230325663A1 (en) Systems and methods for domain adaptation in neural networks
US8972265B1 (en) Multiple voices in audio content
JP7108144B2 (ja) Systems and methods for domain adaptation in neural networks using cross-domain batch normalization
CN111415677A (zh) Method, apparatus, device, and medium for generating video
CN109582822A (zh) Music recommendation method and apparatus based on user voice
US9684908B2 (en) Automatically generated comparison polls
TW202022851A (zh) Voice interaction method and device
US11943181B2 (en) Personality reply for digital content
KR102226427B1 (ko) Appellation determination device, conversation service providing system including the same, terminal device for appellation determination, and appellation determination method
WO2020154883A1 (zh) Voice information processing method and apparatus, storage medium, and electronic device
WO2023005580A1 (zh) Display device
RBB et al. Deliverable 5.1

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018546525

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18822840

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 06.04.2020)

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.05.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18822840

Country of ref document: EP

Kind code of ref document: A1