NL2035518A - Intelligent voice ai pacifying method - Google Patents

Intelligent voice AI pacifying method

Info

Publication number
NL2035518A
NL2035518A
Authority
NL
Netherlands
Prior art keywords
speech
voice
user
timbre
intelligent
Prior art date
Application number
NL2035518A
Other languages
Dutch (nl)
Other versions
NL2035518B1 (en)
Inventor
Liu Xufeng
Wang Chaoxian
Wu Shengjun
Fang Peng
Feng Tingwei
Wang Hui
Wang Xiuchao
Original Assignee
Air Force Medical Univ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Medical Univ filed Critical Air Force Medical Univ
Priority to NL2035518A priority Critical patent/NL2035518B1/en
Publication of NL2035518A publication Critical patent/NL2035518A/en
Application granted granted Critical
Publication of NL2035518B1 publication Critical patent/NL2035518B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 Voice editing, e.g. manipulating the voice of the synthesiser
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides an intelligent voice AI pacifying method, and relates to the technical field of artificial intelligence. The pacifying method includes the following steps: S1. performing preliminary construction of the intelligent voice AI; S2. collecting and inputting the required voice; S3. performing compilation of a voice code stream based on the collected voice; and S4. emitting the voice from a user and triggering corresponding instructions according to the voice. According to the present invention, AI is started after the following steps: performing preliminary construction of the intelligent voice AI; collecting and inputting the required voice; performing compilation of the voice code stream based on the collected voice; and emitting voice from the user and triggering corresponding instructions according to the voice. (FIG. 1)

Description

P1852 /NLpd
INTELLIGENT VOICE AI PACIFYING METHOD
Technical field
The present invention relates to the technical field of artificial intelligence, and in particular to an intelligent voice AI pacifying method.
Background technology
Intelligent voice, or intelligent voice technology, is the realization of human-computer language communication, including automatic speech recognition (ASR) and text-to-speech (TTS). Research on intelligent voice technology started with ASR, which can be traced back to the 1950s. As information technology develops, intelligent voice technology has become the most convenient and effective means of information acquisition and communication. Artificial intelligence is a branch of computer science which attempts to understand the essence of intelligence and produce new intelligent machines that can respond in a way similar to human intelligence. Research in this field includes robotics, language recognition, image recognition, natural language processing, expert systems, etc. Since the birth of artificial intelligence, it has become increasingly mature in theory and technology, and its fields of application keep expanding. It can be envisioned that in the future, scientific and technological products brought by artificial intelligence will be "containers" of human wisdom.
The AI voice function is a technology by which a machine automatically converts human voice into text. AI voice, using ASR, TTS, semantic understanding and other artificial intelligence technologies, can interact with customers in a natural and smooth way through anthropomorphic voice, text and other channels, so as to provide independent on-line Q&A, consulting, business services, etc.
The existing intelligent voice has an indifferent tone and a poor intelligence effect; it cannot communicate effectively with the user and cannot pacify users according to their emotional changes.
Summary of the invention
Solved technical problems
Aiming at the shortcomings of the prior art, the present invention provides an intelligent voice AI pacifying method, solving the problems that the existing intelligent voice tone is indifferent, the intelligence effect is poor, communication with the user cannot be carried out effectively, and the corresponding emotional pacifying work cannot be carried out according to the emotional changes of the user.
Technical solutions
In order to realize the above object, the present invention is realized through the following technical solutions: an intelligent voice AI pacifying method, including the following steps:
S1. preliminary construction of intelligent voice AI is performed
The intelligence of AI is constructed; although difficult and complicated operations need not be carried out by AI, correct and smooth communication with basic answers is required of AI, and it should be ensured that corresponding information is triggered reasonably by AI according to voice data in a database.
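The S1 requirement that AI reasonably trigger corresponding information according to voice data in a database can be sketched as a minimal trigger table. This is an illustrative sketch only; the class and function names (`VoiceDatabase`, `trigger_answer`) and the stored phrases are invented, not taken from the patent.

```python
# Minimal sketch of S1: a trigger table mapping recognized phrases in a
# mock voice database to basic answers, with a safe fallback so that
# communication stays "correct and smooth" even for unknown input.

class VoiceDatabase:
    """Mock database of voice data: recognized phrase -> stored answer."""

    def __init__(self):
        self.entries = {
            "hello": "Hello, I am here with you.",
            "are you there": "Yes, I am listening.",
            "goodbye": "Take care. Talk to me anytime.",
        }

    def lookup(self, phrase: str):
        return self.entries.get(phrase.strip().lower())


def trigger_answer(db: VoiceDatabase, recognized_phrase: str) -> str:
    """Trigger the corresponding information according to voice data in
    the database; fall back to a basic answer otherwise."""
    answer = db.lookup(recognized_phrase)
    if answer is None:
        return "I am not sure, could you say that again?"
    return answer
```

A real construction would replace the dictionary with the audio data files and trigger information described in the preferred embodiment.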
S2. the required voice is collected and input
Voice is collected through a corresponding device so as to obtain corresponding voice data; further, the collected voice data is sent to a cloud database, and automatic voice recognition is performed on the voice data in the cloud database via a server; the voice data is not limited to external voice, and may comprise call recordings, audio in video, and voice sent in chat software; the corresponding voice data is obtained through analysis and integration; the results are recognized in the cloud database; and after successful recognition, intelligent voice services can be performed through AI according to the corresponding voice.
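The S2 flow (collect voice from several sources, send it to a cloud database, then recognize it via a server) can be sketched as follows. The `CloudDatabase` class and the trivial `recognize` function are placeholders for a real cloud store and ASR service, which the patent does not specify.

```python
# Sketch of S2: voice data from external voice, call recordings, audio
# in video, or chat software is stored in a (mock) cloud database and
# then marked as recognized by a placeholder server-side recognizer.

from dataclasses import dataclass, field


@dataclass
class VoiceRecord:
    source: str        # e.g. "external", "call_recording", "video_audio", "chat"
    audio: bytes
    text: str = ""     # filled in after recognition
    recognized: bool = False


@dataclass
class CloudDatabase:
    records: list = field(default_factory=list)

    def store(self, record: VoiceRecord) -> None:
        self.records.append(record)


def recognize(record: VoiceRecord) -> VoiceRecord:
    """Placeholder ASR: a real server-side recognizer would go here."""
    record.text = f"<transcript of {len(record.audio)} bytes from {record.source}>"
    record.recognized = True
    return record


def collect_and_input(db: CloudDatabase, source: str, audio: bytes) -> VoiceRecord:
    rec = VoiceRecord(source=source, audio=audio)
    db.store(rec)          # send the collected voice data to the cloud database,
    return recognize(rec)  # then recognition is performed via the server
```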
S3. compilation of a voice code stream is performed based on the collected voice
The voice code stream to be sent is obtained as needed; the audio data in the cloud database is read and answered by AI, and the voice control information for controlling an audio mixing strategy is obtained according to the voice code stream; the voice code stream is compiled according to the collected voice; the imitated timbre and tone are compiled and imitated by audio mixing and compiling, such that the voice emitted by AI is the same as that of the user; and answering and pacifying behaviors are more effective through the consistent voice.
S4. voice is emitted from a user and corresponding instructions are triggered based on the voice
When the user phonates the voice, instructions are triggered; according to the different instructions sent by the user, AI is started according to the corresponding instruction triggered by the voice; further, along with the words said by the user, AI performs extraction from the cloud database and the server performs recognition, thus entering a corresponding working mode according to the user's options, which is used for pacifying work in different situations, such as missing someone, sadness, pain, fear, anger, etc.; according to the recognition of the user's tone, timbre, and spoken voice, AI selects the corresponding working mode to extract the corresponding voice data for answering, so as to carry out the work of answering and pacifying.
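The S4 idea of entering a pacifying working mode according to the user's situation can be sketched as a keyword-based mode selector. This is a simplification under stated assumptions: the patent also uses tone and timbre recognition, while this illustrative version (invented names `PACIFY_MODES`, `select_working_mode`) keys off words only.

```python
# Sketch of S4: select a pacifying "working mode" (missing, sad,
# painful, fearful, angry) from the words said by the user; fall back
# to a neutral mode when no emotional keyword is detected.

PACIFY_MODES = {
    "missing": ["miss", "lonely", "far away"],
    "sad": ["sad", "cry", "unhappy"],
    "painful": ["hurt", "pain", "ache"],
    "fearful": ["afraid", "scared", "fear"],
    "angry": ["angry", "furious", "mad"],
}


def select_working_mode(utterance: str) -> str:
    """Return the pacifying mode whose keywords appear in the
    utterance, or "neutral" if no emotional keyword is found."""
    lowered = utterance.lower()
    for mode, keywords in PACIFY_MODES.items():
        if any(k in lowered for k in keywords):
            return mode
    return "neutral"
```

In the patent's scheme, the selected mode would then determine which voice data AI extracts from the cloud database for the answering and pacifying work.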
Preferably, S1: preliminary construction of intelligent voice AI is performed
The most important thing in intelligent voice AI pacifying is the construction of the intelligent voice AI; if the construction of the intelligent voice AI cannot be completed, the reasonable follow-up work cannot be completed; the intelligent voice AI not only answers by extracting audio data files, but also determines the emotion of the user according to the user's words, keywords, timbre and tone, such that the corresponding audio data files can be reasonably mobilized to play and pacify the user, corresponding to determining the state of designated agents according to the trigger information of the user state; or, according to the voice instructions sent by the user, the corresponding text and audio are mobilized to play coherently to answer.
Preferably, S3: compilation of a voice code stream is performed based on the collected voice
According to the obtained voice code stream to be sent, the obtained voice code stream and voice control information are sent to a voice server, and the voice server receives at least one voice stream; according to the functions corresponding to each position, a corresponding factor is searched for in a timbre database as an analog timbre to be output; the timbre database is used for storing the position range of the line segments corresponding to each timbre and the function corresponding to that position range, so as to compile the corresponding voice; after compilation, the voice is stored in the server and the cloud database, waiting for AI to retrieve it; audio mixing and compiling are used to compile and imitate the imitated timbre and tone, such that the voice emitted by AI is the same as that of the user, and answering and pacifying with a consistent voice is more effective.
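The preferred S3 embodiment describes a timbre database that stores, for each timbre, a position range of line segments and the function corresponding to that range, and looks up an analog timbre to output. A minimal sketch of that lookup follows; the specific ranges, timbre names, and function labels are invented for illustration, since the patent does not give concrete values.

```python
# Sketch of the timbre database lookup: each entry stores a position
# range and the function corresponding to that range; lookup returns
# the analog timbre to output for a given position.

import bisect


class TimbreDatabase:
    def __init__(self):
        # (start, end, timbre, function) entries, sorted and non-overlapping.
        self.segments = [
            (0.0, 0.33, "low", "warm_answer"),
            (0.33, 0.66, "mid", "neutral_answer"),
            (0.66, 1.0, "high", "soothing_answer"),
        ]
        self.starts = [start for start, _, _, _ in self.segments]

    def analog_timbre(self, position: float):
        """Find the segment whose position range contains `position`
        and return its (timbre, function) pair."""
        i = bisect.bisect_right(self.starts, position) - 1
        if i < 0:
            raise ValueError("position below all stored ranges")
        start, end, timbre, func = self.segments[i]
        if not (start <= position <= end):
            raise ValueError("position outside stored ranges")
        return timbre, func
```

A compiled voice would then be synthesized with the matched timbre and stored in the server and cloud database for AI to retrieve.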
Beneficial effects
The present invention provides an intelligent voice AI pacifying method with the following beneficial effects:
According to the present invention, the corresponding instructions are started based on the voice through the following steps: performing preliminary construction of the intelligent voice AI; collecting and inputting the required voice; performing compilation of the voice code stream based on the collected voice; and emitting voice from the user. According to the present invention, the intelligent voice AI not only answers by extracting audio data files, but also determines the emotion of the user according to the user's words, keywords, timbre and tone, such that the corresponding audio data files can be reasonably mobilized to play and pacify the user, corresponding to determining the state of designated agents according to the trigger information of the user state; or, according to the voice instructions sent by the user, the corresponding text and audio are mobilized to play coherently to answer, solving the problems that the existing intelligent voice tone is indifferent, the intelligence effect is poor, communication with the user cannot be carried out effectively, and the corresponding emotional pacifying work cannot be carried out according to the emotional changes of the user.
Description of attached figures
FIG. 1 is a flow chart of an intelligent voice AI pacifying method according to the present invention;
FIG. 2 is a preliminary construction diagram of an intelligent voice AI pacifying method according to the present invention;
FIG. 3 is an operating diagram of collecting and inputting the required voice of an intelligent voice AI pacifying method according to the present invention;
FIG. 4 is a diagram of compiling a voice code stream based on the collected voice of an intelligent voice AI pacifying method according to the present invention; and
FIG. 5 is a diagram of emitting voice from a user and starting corresponding instructions based on the voice of an intelligent voice AI pacifying method according to the present invention.
Specific embodiments
The technical solutions of the examples of the present invention will be described clearly and completely below with reference to the accompanying drawings of the examples. Obviously, the examples described are only some, rather than all, examples of the present invention. Based on the examples of the present invention, all other examples obtained by those of ordinary skill in the art without creative efforts fall within the scope of protection of the present invention.
Example 1:
As shown in FIGs. 1-5, the example of the present invention provides an intelligent voice AI pacifying method, including the following steps:
S1. preliminary construction of intelligent voice AI is performed
The intelligence of AI is constructed; although difficult and complicated operations need not be carried out by AI, correct and smooth communication with basic answers is required of AI, and it should be ensured that corresponding information is triggered reasonably by AI according to voice data in a database.
S2. the required voice is collected and input
Voice is collected through a corresponding device so as to obtain corresponding voice data; further, the collected voice data is sent to a cloud database, and automatic voice recognition is performed on the voice data in the cloud database via a server; the voice data is not limited to external voice, and may comprise call recordings, audio in video, and voice sent in chat software; the corresponding voice data is obtained through analysis and integration; the results are recognized in the cloud database; and after successful recognition, intelligent voice services can be performed through AI according to the corresponding voice.
S3. compilation of a voice code stream is performed based on the collected voice
The voice code stream to be sent is obtained as needed; the audio data in the cloud database is read and answered by AI, and the voice control information for controlling an audio mixing strategy is obtained according to the voice code stream; the voice code stream is compiled according to the collected voice; the imitated timbre and tone are compiled and imitated by audio mixing and compiling, such that the voice emitted by AI is the same as that of the user; and answering and pacifying behaviors are more effective through the consistent voice.
S4. voice is emitted from a user and corresponding instructions are triggered based on the voice
When the user phonates the voice, instructions are triggered; according to the different instructions sent by the user, AI is started according to the corresponding instruction triggered by the voice; further, along with the words said by the user, AI performs extraction from the cloud database and the server performs recognition, thus entering a corresponding working mode according to the user's options, which is used for pacifying work in different situations, such as missing someone, sadness, pain, fear, anger, etc.; according to the recognition of the user's tone, timbre, and spoken voice, AI selects the corresponding working mode to extract the corresponding voice data for answering, so as to carry out the work of answering and pacifying.
Example 2:
S1. specific construction of intelligent voice AI is performed
The most important thing in intelligent voice AI pacifying is the construction of the intelligent voice AI; if the construction of the intelligent voice AI cannot be completed, the reasonable follow-up work cannot be completed; the intelligent voice AI not only answers by extracting audio data files, but also determines the emotion of the user according to the user's words, keywords, timbre and tone, such that the corresponding audio data files can be reasonably mobilized to play and pacify the user, corresponding to determining the state of designated agents according to the trigger information of the user state; or, according to the voice instructions sent by the user, the corresponding text and audio are mobilized to play coherently to answer.
S3. compilation of a voice code stream is performed based on the collected voice
According to the obtained voice code stream to be sent, the obtained voice code stream and voice control information are sent to a voice server, and the voice server receives at least one voice stream; according to the functions corresponding to each position, a corresponding factor is searched for in a timbre database as an analog timbre to be output; the timbre database is used for storing the position range of the line segments corresponding to each timbre and the function corresponding to that position range, so as to compile the corresponding voice; after compilation, the voice is stored in the server and the cloud database, waiting for AI to retrieve it; audio mixing and compiling are used to compile and imitate the imitated timbre and tone, such that the voice emitted by AI is the same as that of the user, and answering and pacifying with a consistent voice is more effective.
Although the examples of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, replacements and variations can be made to these examples within the principle and spirit of the present invention. The scope of the present invention is defined by the attached claims and the equivalents thereof.

Claims (3)

1. An intelligent voice AI pacifying method, comprising the following steps:
S1. preliminary construction of intelligent voice AI is performed: the intelligence of AI is constructed; although difficult and complicated operations need not be carried out by AI, correct and smooth communication with basic answers is required of AI, and it should be ensured that corresponding information can be reasonably triggered by AI according to voice data in a database;
S2. the required voice is collected and input: voice is collected through a corresponding device to obtain corresponding voice data; further, the collected voice data is sent to a cloud database and subjected to automatic voice recognition via a server; the voice data is not limited to external voice and may comprise call recordings, audio in video, and voice sent in chat software; the corresponding voice data is obtained through analysis and integration; the results are recognized in the cloud database; and after successful recognition, intelligent voice services can be performed by AI according to the corresponding voice;
S3. compilation of a voice code stream is performed based on the collected voice: the voice code stream to be sent is obtained as needed; the corresponding audio data in the cloud database is read and answered by AI, and the voice control information for controlling an audio mixing strategy is obtained based on the voice code stream; the voice code stream is compiled according to the collected voice; the imitated timbre and tone are compiled and imitated by audio mixing and compiling, such that the voice emitted by AI is the same as that of the user; and answering and pacifying behaviors are more effective through the consistent voice; and
S4. voice is emitted from a user and corresponding instructions are triggered based on the voice: when the user phonates the voice, instructions are triggered; according to the different instructions sent by the user, AI is started according to the corresponding instruction triggered by the voice; further, along with the words said by the user, AI performs extraction from the cloud database and the server performs recognition, thus entering a corresponding working mode according to the user's options, which is used for pacifying work in different situations, such as missing someone, sadness, pain, fear, anger, etc.; based on the recognition of the user's tone, timbre, and spoken voice, AI selects the corresponding working mode to extract the corresponding voice data for answering, so as to carry out the work of answering and pacifying.
2. The intelligent voice AI pacifying method according to claim 1, wherein in S1: the specific construction of intelligent voice AI is performed; the most important thing in intelligent voice AI pacifying is the construction of the intelligent voice AI; if the construction of the intelligent voice AI cannot be completed, the reasonable follow-up work cannot be completed; the intelligent voice AI not only answers by extracting audio data files, but also determines the emotion of the user according to the user's words, keywords, timbre and tone, such that the corresponding audio data files can be reasonably mobilized to play and pacify the user, corresponding to determining the state of designated agents according to the trigger information of the user state; or, according to the voice instructions sent by the user, the corresponding text and audio are mobilized to play coherently to answer.
3. The intelligent voice AI pacifying method according to claim 1, wherein in S3: compilation of a voice code stream is performed based on the collected voice; according to the obtained voice code stream to be sent, the obtained voice code stream and voice control information are sent to a voice server, and the voice server receives at least one voice stream; according to the functions corresponding to each position, a corresponding factor is searched for in a timbre database as an analog timbre to be output; the timbre database is used for storing the position range of the line segments corresponding to each timbre and the function corresponding to that position range, so as to compile the corresponding voice; after compilation, the voice is stored in the server and the cloud database, waiting for AI to retrieve it; audio mixing and compiling are used to compile and imitate the imitated timbre and tone, such that the voice emitted by AI is the same as that of the user, and answering and pacifying with a consistent voice is more effective.
NL2035518A 2023-07-31 2023-07-31 Intelligent voice ai pacifying method NL2035518B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2035518A NL2035518B1 (en) 2023-07-31 2023-07-31 Intelligent voice ai pacifying method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2035518A NL2035518B1 (en) 2023-07-31 2023-07-31 Intelligent voice ai pacifying method

Publications (2)

Publication Number Publication Date
NL2035518A true NL2035518A (en) 2023-09-11
NL2035518B1 NL2035518B1 (en) 2024-04-16

Family

ID=87972071

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2035518A NL2035518B1 (en) 2023-07-31 2023-07-31 Intelligent voice ai pacifying method

Country Status (1)

Country Link
NL (1) NL2035518B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200265829A1 (en) * 2019-02-15 2020-08-20 International Business Machines Corporation Personalized custom synthetic speech
WO2021034786A1 (en) * 2019-08-21 2021-02-25 Dolby Laboratories Licensing Corporation Systems and methods for adapting human speaker embeddings in speech synthesis


Also Published As

Publication number Publication date
NL2035518B1 (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN110427472A (en) The matched method, apparatus of intelligent customer service, terminal device and storage medium
RU2653283C2 (en) Method for dialogue between machine, such as humanoid robot, and human interlocutor, computer program product and humanoid robot for implementing such method
CN106055662A (en) Emotion-based intelligent conversation method and system
JP2022523504A (en) Speech agent
CN104350541A (en) Robot capable of incorporating natural dialogues with a user into the behaviour of same, and methods of programming and using said robot
US11989976B2 (en) Nonverbal information generation apparatus, nonverbal information generation model learning apparatus, methods, and programs
CN109002515A (en) A kind of method and apparatus of intelligent response
CN109065052A (en) A kind of speech robot people
CN108052250A (en) Virtual idol deductive data processing method and system based on multi-modal interaction
CN116049360A (en) Intelligent voice dialogue scene conversation intervention method and system based on client image
CN106557165B (en) The action simulation exchange method and device and smart machine of smart machine
CN106502382A (en) Active exchange method and system for intelligent robot
KR20200059112A (en) System for Providing User-Robot Interaction and Computer Program Therefore
CN110347811A (en) A kind of professional knowledge question and answer robot system based on artificial intelligence
CN109542389A (en) Sound effect control method and system for the output of multi-modal story content
Alam et al. Comparative study of speaker personality traits recognition in conversational and broadcast news speech.
CN106372203A (en) Information response method and device for smart terminal and smart terminal
NL2035518B1 (en) Intelligent voice ai pacifying method
Cherakara et al. Furchat: An embodied conversational agent using llms, combining open and closed-domain dialogue with facial expressions
KR102293743B1 (en) AI Chatbot based Care System
CN111931036A (en) Multi-mode fusion interaction system and method, intelligent robot and storage medium
Heracleous et al. Deep convolutional neural networks for feature extraction in speech emotion recognition
CN110491372A (en) A kind of feedback information generating method, device, storage medium and smart machine
CN115222857A (en) Method, apparatus, electronic device and computer readable medium for generating avatar
CN111209376A (en) AI digital robot operation method