CN108364658A - Cyberchat method and server-side - Google Patents

Cyberchat method and server-side

Info

Publication number
CN108364658A
CN108364658A
Authority
CN
China
Prior art keywords
voice
server
voice chat
information
chat information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810236611.4A
Other languages
Chinese (zh)
Inventor
冯键能
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201810236611.4A
Publication of CN108364658A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 - Changing voice quality, e.g. pitch or formants
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L51/18 - Commands or executable codes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

The invention discloses a cyberchat method and a server. The method includes: Step S1: the server obtains a voice reference file uploaded by a user through a user terminal; Step S2: the server obtains sound characteristic information from the voice reference file; Step S3: the server obtains first voice chat information; Step S4: the server processes the first voice chat information using the sound characteristic information and sends the processed first voice chat information to the user terminal. By simulating the voice of the chat partner the user wants, the cyberchat method provided by the invention lets the user hold a simulated conversation with a desired chat partner anytime and anywhere, which helps improve the user experience.

Description

Cyberchat method and server-side
Technical field
The present invention relates to the field of intelligent speech technology, and in particular to a cyberchat method and a server.
Background technology
The Internet has developed at an astonishing speed and has become part of everyday modern life. Communication is one of its basic functions: from the earliest e-mail to today's popular instant-messaging tools, these services all have huge user bases. Another function of the Internet is obtaining information, yet the amount of information online is vast, and even with powerful search tools it is still difficult to find exactly what one wants. In addition, the pace of modern life keeps accelerating and pressure keeps growing, so interpersonal communication becomes less and less frequent. In some cases, people struggle to find the chat partner they want to talk with; even when a suitable chat partner is found in daily life, it is not always possible to chat anytime and anywhere as one wishes, which greatly reduces the enthusiasm for chatting.
Invention content
The purpose of the present invention is to provide a cyberchat method and a server that allow a user to simulate communication with a desired chat partner.
To achieve the above object, the technical solution of the present invention provides a cyberchat method, including:
Step S1: a server obtains a voice reference file uploaded by a user through a user terminal;
Step S2: the server obtains sound characteristic information from the voice reference file;
Step S3: the server obtains first voice chat information;
Step S4: the server processes the first voice chat information using the sound characteristic information, and sends the processed first voice chat information to the user terminal.
Further, step S2 includes:
The server extracts a timbre parameter, an intonation parameter, and a speech-rate parameter from the voice reference file.
Further, step S3 includes:
Step S31: the server obtains second voice chat information uploaded by the user through the user terminal;
Step S32: the server obtains the first voice chat information according to the second voice chat information.
Further, step S32 includes:
The server performs matching in a database according to the second voice chat information to obtain the first voice chat information, where the database stores correspondences between the second voice chat information and the first voice chat information.
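For illustration only, the following minimal Python sketch strings steps S1 to S4 together into a single chat turn; every name in it (handle_chat_turn, store_reference, extract_voice_features, lookup_reply, apply_voice_features) is hypothetical and is not defined by the patent.

```python
# Minimal sketch of the claimed flow (steps S1-S4); all helper names are illustrative only.

def handle_chat_turn(server, user_id: str, reference_audio: bytes, user_utterance: bytes) -> bytes:
    """One simulated chat turn for a single user."""
    # Step S1: receive the voice reference file uploaded from the user terminal.
    server.store_reference(user_id, reference_audio)

    # Step S2: derive sound characteristic information (timbre, intonation, speech rate).
    features = server.extract_voice_features(reference_audio)

    # Step S3: obtain the first voice chat information, e.g. by matching the user's
    # utterance (second voice chat information) against a reply database.
    reply_audio = server.lookup_reply(user_utterance)

    # Step S4: re-voice the reply with the extracted characteristics and return it
    # so that the user terminal can play it.
    return server.apply_voice_features(reply_audio, features)
```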
To achieve the above object, the technical solution of the present invention further provides a server, including:
a first acquisition module, configured to obtain a voice reference file uploaded by a user through a user terminal;
a first processing module, configured to obtain sound characteristic information from the voice reference file;
a second acquisition module, configured to obtain first voice chat information;
a second processing module, configured to process the first voice chat information using the sound characteristic information and to send the processed first voice chat information to the user terminal.
Further, the first processing module obtains the sound characteristic information by extracting a timbre parameter, an intonation parameter, and a speech-rate parameter from the voice reference file.
Further, the second acquisition module includes:
an acquiring unit, configured to obtain second voice chat information uploaded by the user through the user terminal;
a processing unit, configured to obtain the first voice chat information according to the second voice chat information.
Further, the processing unit performs matching in a database according to the second voice chat information to obtain the first voice chat information, where the database stores correspondences between the second voice chat information and the first voice chat information.
By simulating the voice of the chat partner the user wants, the cyberchat method provided by the invention lets the user hold a simulated conversation with a desired chat partner, anytime and anywhere, which helps improve the user experience.
Description of the drawings
Fig. 1 is a flowchart of a cyberchat method provided by an embodiment of the present invention.
Specific implementation mode
The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following embodiments are intended to illustrate the present invention, not to limit its scope.
Referring to Fig. 1, which is a flowchart of a cyberchat method provided by an embodiment of the present invention, the cyberchat method includes:
Step S1: a server obtains a voice reference file uploaded by a user through a user terminal;
The voice reference file may be a recorded audio file of the chat partner the user wants;
For example, the user may upload the recorded audio file of the desired chat partner to the server through an intelligent terminal such as a mobile phone or a tablet computer;
Step S2: the server obtains sound characteristic information from the voice reference file;
The sound characteristic information may include a timbre parameter, an intonation parameter, a speech-rate parameter, and the like. For example, after obtaining the recorded audio file of the chat partner the user wants, the server uses a sound extraction technique to extract acoustic information such as the timbre, intonation, and speech rate of that chat partner from the file;
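As a rough, non-authoritative illustration of such a sound-extraction step, the sketch below uses the open-source librosa library to pull simple timbre, intonation, and speech-rate descriptors from a reference recording; the particular features and the choice of library are assumptions, not something the patent specifies.

```python
# Sketch only: one possible way to derive timbre / intonation / speech-rate
# descriptors from the uploaded voice reference file (librosa assumed available).
import numpy as np
import librosa

def extract_voice_features(reference_path: str) -> dict:
    y, sr = librosa.load(reference_path, sr=16000)

    # Timbre: summarize the spectral envelope with mean MFCCs.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    timbre = mfcc.mean(axis=1)

    # Intonation: fundamental-frequency contour estimated with the YIN algorithm.
    f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)
    intonation = float(np.nanmedian(f0))

    # Speech rate: a crude proxy, counting acoustic onsets per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    speech_rate = len(onsets) / (len(y) / sr)

    return {"timbre": timbre, "intonation_hz": intonation, "speech_rate": speech_rate}
```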
Step S3: the server obtains first voice chat information;
Step S4: the server processes the first voice chat information using the sound characteristic information, and sends the processed first voice chat information to the user terminal;
The server processes the first voice chat information according to the sound characteristic information obtained in step S2, so that the timbre, intonation, and speech rate of the processed first voice chat information match those of the voice reference file; in other words, the processed first voice chat information is voice-changed into the voice of the chat partner the user wants. The server then sends the processed first voice chat information to the user terminal, realizing the network chat;
In the embodiment of the present invention, the user terminal may play the processed first voice chat information immediately after receiving it (i.e., the network chat proceeds in an online-call mode), or it may play the information only after receiving an instruction from the user (i.e., the network chat proceeds in a voice-message mode).
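As a hedged sketch of how the server might hand the processed first voice chat information to the user terminal in either delivery mode, the snippet below wraps the audio in a small JSON payload with a mode flag; the field names and wire format are invented for illustration and are not specified by the patent.

```python
# Illustrative only: the payload format is an assumption, not part of the patent.
import base64
import json

def build_chat_payload(processed_audio: bytes, mode: str = "live") -> str:
    """Wrap processed audio so the user terminal knows whether to play it
    immediately ("live", online-call style) or on request ("message", voice-message style)."""
    if mode not in ("live", "message"):
        raise ValueError("mode must be 'live' or 'message'")
    return json.dumps({
        "type": "first_voice_chat_info",
        "mode": mode,
        "audio_b64": base64.b64encode(processed_audio).decode("ascii"),
    })
```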
The cyberchat method provided by the embodiment of the present invention simulates the voice of the chat partner the user wants, so that the user can communicate with the desired chat partner anytime and anywhere, which helps improve the user experience.
There are many ways for the server to obtain the first voice chat information in step S3. For example, the operator may assign a call agent to each user, and the assigned call agent chats with the user; the first voice chat information is then the call agent's recorded message (i.e., its content is determined in real time by the call agent). For instance, the call agent uploads his or her recorded message to the server in real time through an intelligent terminal, the server processes the recorded message using the sound characteristic information obtained from the voice reference file uploaded by the user, so that the timbre, intonation, and speech rate of the processed recorded message match those of the user's desired chat partner, and then sends the processed recorded message to the user's terminal;
Preferably, the server also has an intelligent voice chat function, i.e., step S3 includes:
Step S31: the server obtains second voice chat information uploaded by the user through the user terminal;
Step S32: the server obtains the first voice chat information according to the second voice chat information. For example, the server performs matching in a database according to the second voice chat information to obtain the first voice chat information, where the database stores correspondences between second voice chat information and first voice chat information;
For example, if the second voice chat information uploaded by the user through the user terminal is "hello", the server performs a matching operation in the database after receiving it, and the first voice chat information obtained is "hello" or a greeting in reply such as "hi, hello". The server then processes the obtained first voice chat information so that its timbre, intonation, and speech rate match those of the user's desired chat partner, and sends the processed first voice chat information to the user's terminal;
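To make the matching step concrete, here is a toy sketch of the database lookup, assuming the second voice chat information has already been transcribed to text; the correspondence table and helper name are purely illustrative and not defined by the patent.

```python
# Toy sketch of the database lookup: second voice chat information -> first voice chat information.
# The correspondence table is an assumption for illustration only.

REPLY_TABLE = {
    "hello": "hi, hello",               # greeting -> greeting reply
    "how are you": "I'm fine, and you?",
}

def lookup_reply_text(second_chat_text: str) -> str:
    """Return the stored first-voice-chat text that corresponds to the user's utterance."""
    key = second_chat_text.strip().lower()
    # Fall back to a generic reply when no correspondence is stored.
    return REPLY_TABLE.get(key, "sorry, could you say that again?")

# Example: lookup_reply_text("Hello") -> "hi, hello"
```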
The cyberchat method provided by the embodiment of the present invention allows the user to carry out simulated communication with the desired chat partner anytime and anywhere. Specifically, the process is as follows:
1. The user first uploads, in advance, a recorded audio file of the desired chat partner (i.e., the voice reference file) through the user terminal;
For example, the user may upload recorded audio files of multiple chat partners in advance. After receiving the recorded audio file of each chat partner, the server processes it to obtain the sound characteristic information of that chat partner, including timbre, intonation, speech rate, and so on, and stores it;
In addition, the user may upload other information about the chat partner to the server through the user terminal, such as personality, preferred form of address, requirements on speech intonation, and language;
2. When the user wants to carry out simulated communication, the user selects the desired chat partner through the user terminal; the user terminal sends this selection to the server, and the server retrieves the sound characteristic information and other information of that chat partner from the pre-stored data;
3. The server obtains the first voice chat information;
The first voice chat information may be the recorded message of a call agent (i.e., its content is determined in real time by the call agent), or it may be generated by the server using an artificial-intelligence voice algorithm;
4. The server processes the first voice chat information using the sound characteristic information of the chat partner the user wants, and sends the processed first voice chat information to the user's terminal; the user terminal plays it after receiving it, thereby realizing the simulated communication;
For example, the server may apply acoustic processing to the first voice chat information through a voice changer, so that the processed first voice chat information is voice-changed into the voice of the chat partner the user wants.
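The patent does not spell out the voice-changing algorithm; as one assumed example, the sketch below uses librosa's pitch-shift and time-stretch effects to nudge a reply recording toward the target speaker's median pitch and speech rate, reusing the feature dictionary from the earlier sketch. A real voice changer that also matches timbre would need a far more capable voice-conversion model.

```python
# Crude stand-in for the "voice changer": shift pitch and speed toward the target speaker.
# Assumes feature dicts produced by extract_voice_features() above; not a real voice-conversion system.
import numpy as np
import librosa

def apply_voice_features(reply: np.ndarray, sr: int, source: dict, target: dict) -> np.ndarray:
    # Match intonation: shift pitch by the gap between source and target median F0 (in semitones).
    semitones = 12.0 * np.log2(target["intonation_hz"] / source["intonation_hz"])
    shifted = librosa.effects.pitch_shift(reply, sr=sr, n_steps=float(semitones))

    # Match speech rate: stretch or compress duration toward the target onset rate.
    rate = target["speech_rate"] / source["speech_rate"]
    return librosa.effects.time_stretch(shifted, rate=rate)
```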
As the pace of social development keeps quickening, people today are generally under psychological pressure from many directions. If emotional stress cannot be released, it easily causes psychological harm and, over time, affects work and even daily life. The present invention lets the user chat over the network: as long as the user has a recording of the person he or she wants to chat with, the technical solution of the present invention enables the user to chat with that person online, which can effectively relieve the user's psychological pressure, give the user a good psychological experience, and benefit social progress.
In addition, an embodiment of the present invention further provides a server, including:
a first acquisition module, configured to obtain a voice reference file uploaded by a user through a user terminal;
a first processing module, configured to obtain sound characteristic information from the voice reference file;
a second acquisition module, configured to obtain first voice chat information;
a second processing module, configured to process the first voice chat information using the sound characteristic information and to send the processed first voice chat information to the user terminal.
In the embodiment of the present invention, the first processing module obtains the sound characteristic information by extracting a timbre parameter, an intonation parameter, and a speech-rate parameter from the voice reference file.
In the embodiment of the present invention, the second acquisition module includes:
an acquiring unit, configured to obtain second voice chat information uploaded by the user through the user terminal;
a processing unit, configured to obtain the first voice chat information according to the second voice chat information.
In the embodiment of the present invention, the processing unit performs matching in a database according to the second voice chat information to obtain the first voice chat information, where the database stores correspondences between the second voice chat information and the first voice chat information.
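Purely as a sketch of the module structure just described (first and second acquisition modules, first and second processing modules), the class below wires together the illustrative helpers from the earlier sketches; nothing in this layout is mandated by the patent.

```python
# Sketch of the claimed server structure; method bodies delegate to the
# illustrative helpers defined in the earlier sketches.

class CyberchatServer:
    def __init__(self):
        self.references = {}   # user_id -> path of the uploaded voice reference file
        self.features = {}     # user_id -> sound characteristic information

    # First acquisition module: receive the voice reference file from the user terminal.
    def acquire_reference(self, user_id: str, reference_path: str) -> None:
        self.references[user_id] = reference_path

    # First processing module: derive timbre / intonation / speech-rate parameters.
    def process_reference(self, user_id: str) -> None:
        self.features[user_id] = extract_voice_features(self.references[user_id])

    # Second acquisition module: obtain the first voice chat information, here by
    # matching the user's utterance against the reply table. In a real system the
    # returned text would be turned into audio (e.g. by TTS or a call agent's recording).
    def acquire_reply(self, second_chat_text: str) -> str:
        return lookup_reply_text(second_chat_text)

    # Second processing module: re-voice the reply with the stored characteristics
    # and hand it back for delivery to the user terminal.
    def process_and_send(self, user_id: str, reply_audio, sr: int, reply_source_features: dict):
        return apply_voice_features(reply_audio, sr, reply_source_features, self.features[user_id])
```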
Although the present invention has been described in detail above through general descriptions and specific embodiments, modifications or improvements can be made on the basis of the present invention, as will be apparent to those skilled in the art. Therefore, such modifications or improvements made without departing from the spirit of the present invention all fall within the scope of protection claimed by the present invention.

Claims (8)

1. A cyberchat method, characterized by comprising:
Step S1: a server obtains a voice reference file uploaded by a user through a user terminal;
Step S2: the server obtains sound characteristic information from the voice reference file;
Step S3: the server obtains first voice chat information;
Step S4: the server processes the first voice chat information using the sound characteristic information, and sends the processed first voice chat information to the user terminal.
2. The cyberchat method according to claim 1, characterized in that step S2 comprises:
the server extracts a timbre parameter, an intonation parameter, and a speech-rate parameter from the voice reference file.
3. The cyberchat method according to claim 1, characterized in that step S3 comprises:
Step S31: the server obtains second voice chat information uploaded by the user through the user terminal;
Step S32: the server obtains the first voice chat information according to the second voice chat information.
4. The cyberchat method according to claim 3, characterized in that step S32 comprises:
the server performs matching in a database according to the second voice chat information to obtain the first voice chat information, wherein the database stores correspondences between the second voice chat information and the first voice chat information.
5. A server, characterized by comprising:
a first acquisition module, configured to obtain a voice reference file uploaded by a user through a user terminal;
a first processing module, configured to obtain sound characteristic information from the voice reference file;
a second acquisition module, configured to obtain first voice chat information;
a second processing module, configured to process the first voice chat information using the sound characteristic information and to send the processed first voice chat information to the user terminal.
6. The server according to claim 5, characterized in that the first processing module obtains the sound characteristic information by extracting a timbre parameter, an intonation parameter, and a speech-rate parameter from the voice reference file.
7. The server according to claim 5, characterized in that the second acquisition module comprises:
an acquiring unit, configured to obtain second voice chat information uploaded by the user through the user terminal;
a processing unit, configured to obtain the first voice chat information according to the second voice chat information.
8. The server according to claim 7, characterized in that the processing unit performs matching in a database according to the second voice chat information to obtain the first voice chat information, wherein the database stores correspondences between the second voice chat information and the first voice chat information.
CN201810236611.4A 2018-03-21 2018-03-21 Cyberchat method and server-side Pending CN108364658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810236611.4A CN108364658A (en) 2018-03-21 2018-03-21 Cyberchat method and server-side

Publications (1)

Publication Number Publication Date
CN108364658A 2018-08-03

Family

ID=63001271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810236611.4A Pending CN108364658A (en) 2018-03-21 2018-03-21 Cyberchat method and server-side

Country Status (1)

Country Link
CN (1) CN108364658A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006039120A (en) * 2004-07-26 2006-02-09 Sony Corp Interactive device and interactive method, program and recording medium
US20060100875A1 (en) * 2004-09-27 2006-05-11 Hauke Schmidt Method and system to parameterize dialog systems for the purpose of branding
CN102479506A (en) * 2010-11-23 2012-05-30 盛乐信息技术(上海)有限公司 Speech synthesis system for online game and implementation method thereof
CN104575487A (en) * 2014-12-11 2015-04-29 百度在线网络技术(北京)有限公司 Voice signal processing method and device
CN105280179A (en) * 2015-11-02 2016-01-27 小天才科技有限公司 Text-to-speech processing method and system
CN106228978A (en) * 2016-08-04 2016-12-14 成都佳荣科技有限公司 A kind of audio recognition method
CN106873936A (en) * 2017-01-20 2017-06-20 努比亚技术有限公司 Electronic equipment and information processing method
CN107093421A (en) * 2017-04-20 2017-08-25 深圳易方数码科技股份有限公司 A kind of speech simulation method and apparatus
CN107481735A (en) * 2017-08-28 2017-12-15 中国移动通信集团公司 A kind of method, server and the computer-readable recording medium of transducing audio sounding

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110392446A (en) * 2019-08-22 2019-10-29 珠海格力电器股份有限公司 A kind of terminal and virtual assistant's server interact method

Similar Documents

Publication Publication Date Title
CN105869626B (en) A kind of method and terminal of word speed automatic adjustment
CN103903627B (en) The transmission method and device of a kind of voice data
CN108159702B (en) Multi-player voice game processing method and device
CN108922518A (en) voice data amplification method and system
CN107945790A (en) A kind of emotion identification method and emotion recognition system
CN103514883B (en) A kind of self-adaptation realizes men and women's sound changing method
CN105991847A (en) Call communication method and electronic device
CN105244042B (en) A kind of speech emotional interactive device and method based on finite-state automata
CN108093526A (en) Control method, device and the readable storage medium storing program for executing of LED light
CN107645523A (en) A kind of method and system of mood interaction
CN109065052A (en) A kind of speech robot people
DE102004012208A1 (en) Individualization of speech output by adapting a synthesis voice to a target voice
CN106598955A (en) Voice translating method and device
CN107277276A (en) One kind possesses voice control function smart mobile phone
CN109599094A (en) The method of sound beauty and emotion modification
CN106887231A (en) A kind of identification model update method and system and intelligent terminal
CN107134277A (en) A kind of voice-activation detecting method based on GMM model
CN109545203A (en) Audio recognition method, device, equipment and storage medium
JP2014167517A (en) Conversation providing system, game providing system, conversation providing method, game providing method, and program
CN108494952A (en) Voice communication processing method and relevant device
CN106776557B (en) Emotional state memory identification method and device of emotional robot
CN108364638A (en) A kind of voice data processing method, device, electronic equipment and storage medium
CN101460994A (en) Speech differentiation
CN108364658A (en) Cyberchat method and server-side
CN103701982B (en) The method of adjustment of user terminal displays content, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180803