CN111338598B - Message processing method and electronic equipment - Google Patents

Message processing method and electronic equipment

Info

Publication number
CN111338598B
Authority
CN
China
Prior art keywords
message
audio
target
input
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010158272.XA
Other languages
Chinese (zh)
Other versions
CN111338598A (en)
Inventor
钟昌勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weiwo Software Technology Co ltd
Original Assignee
Nanjing Weiwo Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weiwo Software Technology Co ltd filed Critical Nanjing Weiwo Software Technology Co ltd
Priority to CN202010158272.XA priority Critical patent/CN111338598B/en
Publication of CN111338598A publication Critical patent/CN111338598A/en
Application granted granted Critical
Publication of CN111338598B publication Critical patent/CN111338598B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63 - Querying
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Abstract

The embodiment of the invention provides a message processing method and an electronic device, wherein the method comprises the following steps: the electronic device may receive a first input of message characters from a user, generate a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character, obtain a target audio according to the target sound spectrum, and finally send the target message composed of the message characters together with the target audio. Compared with directly sending the message input by the user, the target audio can reflect the user's emotion at the time of input, and the audio itself has a certain interest; by sending the target message and the target audio together, the user of the receiving electronic device can therefore perceive the emotion of the message sender more intuitively, the interest of message sending is enhanced, and the user experience is improved.

Description

Message processing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a message processing method and an electronic device.
Background
Currently, when mobile terminal users communicate through social chat software, they are often unable to use voice or video calls because of limitations of the environment, the network, and so on, and can only communicate by sending messages.
In the conventional message processing method, after the user inputs message characters into an electronic device, the message composed of those characters is sent directly. This way of processing messages cannot intuitively convey the user's emotional state while entering the message characters, and the user experience is poor.
Disclosure of Invention
The embodiment of the invention provides a message processing method and electronic equipment, which are used for solving the technical problem that the emotion of a user cannot be intuitively conveyed in the prior art.
In order to solve the above problems, the embodiments of the present invention are implemented as follows:
in a first aspect, an embodiment of the present invention discloses a message processing method, which is applied to an electronic device, and the method includes:
receiving a first input of a message character;
in response to the first input, generating a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character;
obtaining a target audio according to the target sound spectrum;
and sending the target message formed by the message characters and the target audio.
In a second aspect, an embodiment of the present invention discloses an electronic device, including:
the first receiving module is used for receiving a first input of the message character;
The generation module is used for responding to the first input and generating a corresponding target sound spectrum according to the message characters and the input time interval between the input of each message character;
the acquisition module is used for acquiring a target audio according to the target sound spectrum;
and the sending module is used for sending the target message formed by the message characters and the target audio.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program implements the steps of the message processing method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the message processing method of the first aspect.
In the embodiment of the invention, the electronic device can receive the first input of message characters from the user, generate a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character, obtain a target audio according to the target sound spectrum, and finally send the target message formed by the message characters together with the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when the message is input, and the audio has a certain interest, so that by sending the target message and the target audio at the same time, the user of the electronic device can more intuitively experience the emotion of the message sender, the interest of message sending is enhanced, and the user experience is improved.
Drawings
FIG. 1 shows a flow chart of the steps of a message processing method of the present invention;
FIG. 2 shows a flow chart of the steps of another message processing method of the present invention;
FIG. 3 shows an interactive flow chart of a message processing method of the present invention;
FIG. 4 shows a block diagram of an electronic device of the present invention;
FIG. 5 shows a block diagram of another electronic device of the present invention;
FIG. 6 shows a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of the steps of a message processing method of the present invention is shown. The method is applied to an electronic device; the electronic device may be a first electronic device, that is, the sender of a message, and may specifically be a smart phone, a notebook computer, a tablet computer, a vehicle-mounted computer, or the like. The method may specifically include:
Step 101, a first input of a message character is received.
In the embodiment of the invention, the first input can be the operation of inputting the message characters by the user. The first input may be input by a user through a virtual keyboard or text writing area on a screen of the electronic device, or may be input by a user through an external device such as a keyboard.
Message characters may be used to compose a message. A message character may be a letter, a stroke that constitutes a Chinese character, a punctuation mark, a number, or another symbol used in the input operation. The message characters entered by the user through the first input are determined by what the user wants to send. Taking keyboard input as an example, the user can press the corresponding keys according to actual needs to input the message characters.
Step 102, in response to the first input, generating a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character.
In the embodiment of the invention, the input time interval may refer to the pause between the user inputting one message character and inputting the next one. The time interval may be expressed in seconds or milliseconds. In a practical scenario, when a user inputs message characters, the input time interval between the characters is affected by the user's emotion. For example, when the user is in a low mood, the input speed may be slow and the input time interval correspondingly long; when the user is anxious or excited, the input speed increases and the input time interval is correspondingly shortened. Thus, the target sound spectrum generated based on the message characters and the input time intervals can be used to embody the emotion of the user.
The target sound spectrum refers to a musical notation corresponding to the user's current message input operation. For example, it may be a numbered notation or an accent notation: the numbered notation represents the melody with the scale degrees 1 to 7, while the accent notation is composed of accented beats indicated by accent marks.
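To make steps 101 and 102 concrete, the following sketch shows one way the first electronic device might record each message character together with the input time interval preceding it; the class and method names are assumptions introduced only for this illustration and are not prescribed by the embodiment.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class InputRecorder:
    """Collects the message characters of the first input and the input time
    interval preceding each character (step 101)."""
    events: List[Tuple[str, float]] = field(default_factory=list)
    _last_time: Optional[float] = None

    def on_character(self, ch: str) -> None:
        """Called once per keystroke of the first input."""
        now = time.monotonic()
        interval = 0.0 if self._last_time is None else now - self._last_time
        self._last_time = now
        self.events.append((ch, interval))

    def characters(self) -> str:
        """The message characters in input order, later combined into the target message."""
        return "".join(ch for ch, _ in self.events)

    def intervals(self) -> List[float]:
        """The input time intervals (seconds), consumed by the spectrum generation of step 102."""
        return [dt for _, dt in self.events]
```

A target sound spectrum generator (see the two implementations described with fig. 3 below) would then consume characters() and intervals().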
Step 103, obtaining a target audio according to the target sound spectrum.
In this step, since the target sound spectrum can reflect the emotion of the user, acquiring an audio whose sound spectrum matches the target sound spectrum as the target audio ensures that the target audio reflects the user's emotion when inputting the message.
Step 104, sending the target message composed of the message characters, together with the target audio.
In the embodiment of the invention, the target message refers to the message formed from the message characters input by the user. For example, the received message characters may be combined sequentially to obtain the target message. When combining, the input strokes may be assembled into Chinese characters, the input letters may be combined into words, or words may be obtained from the pinyin corresponding to the input letters, and so on.
The receiver of the message in this step may be a second electronic device, for example an electronic device used by one of the user's friends in the social software; it may specifically be a smart phone, a notebook computer, a tablet computer, a vehicle-mounted computer, and so on. Of course, the first electronic device and the second electronic device may be the same device, that is, the user may send the target message and the target audio to the electronic device currently in use according to actual needs, which is not limited by the embodiment of the present invention.
Specifically, after the first electronic device determines the target audio corresponding to the first input, it may send the target message and the target audio to the second electronic device over a network connection. Because the target audio can reflect the user's emotion when inputting the message and the audio has a certain interest, sending the target audio along with the target message lets the user of the second electronic device better understand the emotion of the user of the first electronic device and makes message sending more interesting.
In summary, according to the message processing method provided by the embodiment of the invention, the electronic device may receive the first input of message characters from the user, generate a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character, obtain a target audio according to the target sound spectrum, and finally send the target message composed of the message characters together with the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when the message is input, and the audio has a certain interest, so that by sending the target message and the target audio at the same time, the user of the electronic device can more intuitively experience the emotion of the message sender, the interest of message sending is enhanced, and the user experience is improved.
Referring to fig. 2, a flowchart of the steps of another message processing method of the present invention is shown. This method is applied to an electronic device, which may be a second electronic device, that is, the recipient of a message.
The method specifically comprises the following steps:
step 201, receiving a target message and target audio sent by a first electronic device; the sound spectrum of the target audio is matched with the target sound spectrum; the target sound spectrum is generated based on message characters in the target message and an input time interval between inputting each of the message characters.
In the embodiment of the invention, the sound spectrum of the target audio matches the target sound spectrum, and the target sound spectrum is generated based on the message characters in the target message and the input time intervals between inputting those characters; therefore, the target audio can reflect the emotion of the user when inputting the message.
Step 202, displaying the target message and playing the target audio.
In this step, after receiving the target message and the target audio, the second electronic device may display the target message on the screen and play the target audio in a preset manner. Specifically, the second electronic device may play the target audio in the background, so that it serves as background music while the user reads the content of the target message. By playing the target audio, the user of the second electronic device can better understand the emotion of the user of the first electronic device, and reading the message becomes more interesting.
In summary, in the message processing method provided by the embodiment of the present invention, the second electronic device may receive the target message and the target audio, where the sound spectrum of the target audio matches the target sound spectrum generated based on the message characters forming the target message and the input time intervals between inputting those characters; the target message and the target audio are then presented to the user. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when the message is input, and the audio has a certain interest, so that by receiving the target message and the target audio sent together by the first electronic device, the user of the second electronic device can better understand the emotion of the user of the first electronic device; this can evoke emotional resonance in the receiver, improves the interest of message sending, and avoids the monotony of simply sending a text message.
Referring to fig. 3, an interactive flow chart of a message processing method of the present invention is shown, which may specifically include:
step 301, a first electronic device receives a first input of a message character.
The implementation manner of this step is the same as that of the foregoing step 101, and will not be described in detail here.
Step 302, the first electronic device responds to the first input, and generates a corresponding target sound spectrum according to the message characters and the input time interval between the input of each message character.
Specifically, this step may be implemented either by implementation one, consisting of the following sub-steps (1) and (2), or by implementation two, consisting of sub-steps (3) and (4):
Implementation one:
Sub-step (1): the first electronic device determines the target preset note corresponding to each message character according to the preset correspondence between message characters and preset notes.
In this step, the preset notes may be scale degrees represented by numbers, specifically the 7 degrees 1, 2, 3, 4, 5, 6 and 7, sung as do, re, mi, fa, sol, la and ti (si). The correspondence between message characters and preset notes can be set in advance by the user according to actual needs. Further, since there are more message characters than preset notes, several message characters can be mapped to the same preset note through a digital mapping, so as to ensure that every message character has a corresponding preset note.
Specifically, when determining the preset note corresponding to each message character, the preset correspondence is searched according to the received message character to find the target preset note corresponding to that character.
Further, in practical applications, a user often uses a keyboard to input a message character, and one message character in the keyboard often corresponds to one key, in this implementation scenario, the correspondence between the message character and a preset note may be understood as the correspondence between the key and the preset note.
For example, if the message the user wants to send is "hello" and an alphabetic keyboard is used for input, the message characters received by the first electronic device may be "ni1hao1", where "1" represents a space character. The target preset note corresponding to each message character can then be looked up in the correspondence, either according to the character itself or according to the key pressed by the user.
Sub-step (2): the first electronic device takes the input time interval between inputting each message character as the scale interval between the target preset notes, and arranges the target preset notes according to the scale intervals to obtain the target sound spectrum.
In this step, the scale interval refers to the interval between notes, and the scale interval may be the duration of a preset note.
Specifically, the target preset notes corresponding to the message characters may be arranged according to the input order of the characters, and a time value mark may be added to each target preset note according to the scale interval between the notes; for example, depending on the length of the scale interval, a target preset note may be determined to be a whole note, half note, quarter note, eighth note, and so on, representing its duration. In this way, a numbered notation consisting of the target preset notes and the scale intervals is obtained as the target sound spectrum.
In the embodiment of the invention, because the message characters are mapped to preset notes, the generated target sound spectrum is strongly associated with the message characters input by the user; this ensures that the target audio acquired based on the target sound spectrum better embodies the target message and further improves the expressiveness of the target audio with respect to the target message.
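As a minimal sketch of implementation one, the code below maps every character to one of the seven scale degrees through a modulo mapping that merely stands in for the preset correspondence, and uses arbitrary, illustrative thresholds to turn scale intervals into whole, half, quarter and eighth notes; none of these concrete choices are prescribed by the embodiment.

```python
from typing import List, Tuple

# Assumed stand-in for the preset correspondence: every character is mapped onto
# one of the seven numbered-notation scale degrees 1..7 ("do".."ti"). A real
# implementation would load a user-configured table; the modulo only illustrates
# that several message characters may share one preset note.
def preset_note_for(ch: str) -> str:
    return str(ord(ch) % 7 + 1)

# Illustrative thresholds (seconds) for turning a scale interval into a time value.
def time_value_for(interval: float) -> str:
    if interval >= 1.6:
        return "whole"
    if interval >= 0.8:
        return "half"
    if interval >= 0.4:
        return "quarter"
    return "eighth"

def build_numbered_spectrum(chars: str, intervals: List[float]) -> List[Tuple[str, str]]:
    """Implementation one: arrange the target preset notes in input order and mark
    each with a time value derived from its scale interval (= input time interval)."""
    return [(preset_note_for(c), time_value_for(dt)) for c, dt in zip(chars, intervals)]

# Example from the description: the user types "ni1hao1" ("1" standing for the space key).
spectrum = build_numbered_spectrum("ni1hao1", [0.0, 0.5, 0.9, 0.4, 0.3, 0.6, 1.7])
```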
Implementation two:
Sub-step (3): the first electronic device sets an accent according to each message character, and takes the input time interval between inputting the message characters as the accent interval between the accents.
In this step, an accent indicates that a note is stressed in the score and should be played strongly and forcefully. Setting an accent means generating an accented note for a message character. The accent interval refers to the interval between two adjacent accents; other content may be inserted into the accent interval, for example unaccented notes or rests.
Specifically, this step does not limit which scale degree corresponds to a message character; it may be denoted as scale degree X. After a message character input by the user is received, an accent is set for it, which may be denoted as scale degree X with an accent mark. Meanwhile, the input time interval is taken as the interval between two accents, and the number of unaccented notes within the accent interval is not limited.
Sub-step (4): the accents are arranged according to the accent intervals to obtain the target sound spectrum.
In this step, after the accents and the accent intervals are determined, the accents are arranged sequentially according to the accent intervals, and an accent spectrum formed by the accents and the accent intervals is obtained, namely the target sound spectrum.
For example, when the message the user wants to send is "hello" and an alphabetic keyboard is used, the message characters received by the first electronic device may be "ni1hao1"; accordingly, 7 accents may be set for the 7 message characters, where the scale degree of each accent may be an arbitrary scale degree X. Then the input time interval between the message characters is used as the accent interval between the accents, the accents are arranged sequentially according to the input order of the message characters, and the spacing between two adjacent accents is determined by the accent interval. In this way, an accent spectrum consisting of the accents and the accent intervals is obtained as the target sound spectrum.
In the embodiment of the invention, setting accents according to the message characters keeps the generated target sound spectrum associated with the characters input by the user while reducing the constraint the characters impose on the spectrum; this widens the range of target audio that can be acquired based on the target sound spectrum and improves how well the target audio matches the user's emotion.
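Implementation two can be sketched in the same spirit: every message character becomes an accent on an arbitrary scale degree X, and the recorded input time interval is kept as the accent interval. The tuple representation below is an assumption; the embodiment only fixes that the spectrum consists of accents and accent intervals.

```python
from typing import List, Tuple

def build_accent_spectrum(chars: str, intervals: List[float],
                          scale_degree: str = "X") -> List[Tuple[str, float]]:
    """Implementation two: one accent per message character on an arbitrary scale
    degree X; the recorded input time interval preceding a character is kept as
    the accent interval attached to that accent."""
    return [(scale_degree + ">", dt) for _, dt in zip(chars, intervals)]

# "ni1hao1" yields 7 accents, spaced by the 7 recorded input time intervals.
accents = build_accent_spectrum("ni1hao1", [0.0, 0.5, 0.9, 0.4, 0.3, 0.6, 1.7])
```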
Step 303, the first electronic device obtains the target audio according to the target sound spectrum.
This step may be implemented by the following sub-steps 3031-3032:
substep 3031, the first electronic device calculates the similarity between the target audio spectrum and the audio spectrum of each preset audio in the preset audio library.
In this step, the preset audio may be a sound spectrum of preset audio uploaded by the user according to personal preference or downloaded from the network platform may be a digital numbered musical notation, an accent spectrum, etc.
Further, the similarity may represent a degree of matching between the sound spectra, and may be expressed as a percentage. Specifically, during calculation, the types of notes, scale intervals and the weights of the notes can be compared one by one according to the sequence of the notes, and the similarity is larger as the matching degree is higher.
Substep 3032, if the preset audio with the similarity greater than the preset threshold exists, determining the preset audio with the similarity being the maximum value as the target audio.
In this step, the preset threshold may be a similarity threshold set by the user according to the actual requirement, for example, may be 90%. Specifically, after calculating the similarity between the target audio spectrum and the audio spectrum of each preset audio, if the similarity greater than the preset threshold exists in the plurality of similarities, the preset audio library is indicated to have preset audio capable of better expressing emotion conveyed by the target audio spectrum. Thus, the target audio can be determined from the preset audio whose similarity is the maximum, that is, the preset audio in the preset audio library that can best express the emotion conveyed by the target audio.
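A hedged sketch of sub-steps 3031 and 3032 follows. The embodiment does not prescribe a concrete similarity formula, so the position-by-position comparison and the in-memory list standing in for the preset audio library are illustrative assumptions.

```python
from typing import List, Optional, Tuple

# A sound spectrum is a list of (note, time value) pairs, as in the sketches above.
Spectrum = List[Tuple[str, str]]

def spectrum_similarity(target: Spectrum, candidate: Spectrum) -> float:
    """Sub-step 3031: compare the two spectra note by note, in order; the more
    positions that agree in both note and time value, the higher the similarity
    (returned as a value between 0.0 and 1.0)."""
    if not target or not candidate:
        return 0.0
    matches = sum(1 for a, b in zip(target, candidate) if a == b)
    return matches / max(len(target), len(candidate))

def pick_target_audio(target: Spectrum,
                      library: List[Tuple[str, Spectrum]],
                      threshold: float = 0.9) -> Optional[str]:
    """Sub-step 3032: return the preset audio whose spectrum is most similar to
    the target sound spectrum, provided that similarity exceeds the preset
    threshold; otherwise return None so the caller can fall back to the
    instrument-based synthesis described in sub-steps C to E."""
    if not library:
        return None
    best_score, best_name = max((spectrum_similarity(target, spec), name)
                                for name, spec in library)
    return best_name if best_score > threshold else None
```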
Specifically, sub-step 3032 may be implemented by the following sub-steps A to E:
Sub-step A: the first electronic device displays audio confirmation information.
In this step, the audio confirmation information may be used to confirm whether the user selects the preset audio with the greatest similarity as the target audio, and the audio confirmation information may be displayed on the screen of the first electronic device in a dialog box, so as to facilitate the user to perform selection confirmation.
Sub-step B: if a second input for the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio.
In this step, the second input may be given when the user confirms that the preset audio with the maximum similarity should be used as the target audio. For example, the second input may be the user clicking a confirm button in the dialog box that displays the audio confirmation information.
In the embodiment of the invention, a preset audio is used as the target audio only when the user is satisfied with it. By calculating the similarity between the sound spectrum of each preset audio in the audio library and the target sound spectrum, and only accepting a preset audio whose similarity is greater than the preset threshold, that is, a preset audio that can well express the emotion conveyed by the target sound spectrum, the expressiveness of the target audio with respect to the user's emotion can be ensured. Meanwhile, because the target audio is found directly among the existing preset audio, the cost of obtaining the target audio is reduced to a certain extent.
Sub-step C: if a third input for the audio confirmation information is received, displaying preset musical instrument options.
In this step, the third input may be transmitted when the user is not satisfied with the preset audio determined according to the similarity, and does not want to determine the preset audio with the maximum similarity as the target audio.
Specifically, the first electronic device may display a plurality of preset musical instrument options, such as corresponding options of a piano, a violin, a zither, a suona, etc., on the screen. By displaying the musical instrument options, a user can conveniently select a musical instrument which accords with the preference of the user, and then the target audio can be generated by using the musical instrument which is favored by the user in the subsequent process.
Further, the first electronic device may also display the preset musical instrument options when no similarity is greater than the preset threshold, that is, when no preset audio in the preset audio library matches the target sound spectrum and none of the preset audio can adequately express the emotion conveyed by the target sound spectrum, so as to ensure that a target audio can still be obtained.
Sub-step D: receiving a fourth input for the instrument option.
The fourth input may be a selection operation of the instrument options by the user, specifically, a touch operation of the instrument options by the user on the screen, a click operation of the mouse, or the like.
Sub-step E: in response to the fourth input, synthesizing the target audio according to the target sound spectrum and the sound effect of the musical instrument corresponding to the musical instrument option indicated by the fourth input.
In this step, the first electronic device may store a number of musical instruments and their corresponding sound effects in advance, simulate the sound effects of the instruments through instrument-simulation software, or generate them online through a network. The preset audio library is searched first, by similarity, so that existing preset audio is used preferentially; this avoids unnecessary consumption of device resources and improves the efficiency of obtaining the target audio. Further, when the user is not satisfied with the target audio found in this way, the target audio is generated from the target sound spectrum and a musical instrument chosen by the user according to personal preference, which ensures that a target audio is obtained that both matches the user's personal preference and expresses the user's emotion.
It should be noted that, in the embodiment of the present invention, the preset musical instrument options may also be displayed to the user directly, without first searching the preset audio library, or displayed directly upon receiving an instrument-generation instruction from the user, that is, when the user wants the target audio to be generated with a musical instrument. The target audio is then synthesized from the sound effect of the musical instrument corresponding to the option selected by the user and the target sound spectrum; the embodiment of the present invention does not limit this. In this way, the target audio can be generated according to the actual needs of the user, improving the user experience.
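To illustrate the instrument-based fallback of sub-step E, the sketch below renders each note of the target sound spectrum as a sine tone whose pitch follows the scale degree and whose length follows the time value. The frequency table, the time-value lengths and the sine oscillator are assumptions standing in for the stored or simulated instrument sound effects mentioned above.

```python
import math
import struct
import wave
from typing import List, Tuple

# Illustrative frequencies (Hz) for the numbered-notation degrees 1..7 (C major scale).
DEGREE_FREQ = {"1": 261.63, "2": 293.66, "3": 329.63, "4": 349.23,
               "5": 392.00, "6": 440.00, "7": 493.88}
# Illustrative lengths (seconds) for the time values used in the earlier sketches.
TIME_VALUE_SECONDS = {"whole": 1.6, "half": 0.8, "quarter": 0.4, "eighth": 0.2}

def synthesize(spectrum: List[Tuple[str, str]], path: str, sample_rate: int = 22050) -> None:
    """Render each note of the target sound spectrum as a plain sine tone and write
    a mono 16-bit WAV file; a real implementation would instead use the stored or
    simulated sound effect of the instrument chosen through the fourth input."""
    samples = []
    for note, time_value in spectrum:
        freq = DEGREE_FREQ.get(note, 261.63)
        length = int(sample_rate * TIME_VALUE_SECONDS.get(time_value, 0.4))
        samples.extend(int(32767 * 0.5 * math.sin(2 * math.pi * freq * n / sample_rate))
                       for n in range(length))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(struct.pack("<%dh" % len(samples), *samples))

# e.g. synthesize(spectrum, "target_audio.wav") with the spectrum built earlier
```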
Step 304, the first electronic device sends the target message composed of the message characters, together with the target audio, to the second electronic device.
In this step, the first electronic device may package the target message and the target audio together, generate a single message and send it to the second electronic device, or it may send the two to the second electronic device separately; one possible packaging is sketched below.
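One assumed way to bundle the target message and the target audio into a single payload is shown below; the JSON field names and the base64 encoding are illustrative only, since the embodiment leaves the transport format open.

```python
import base64
import json
from typing import Tuple

def package_message(target_message: str, audio_bytes: bytes) -> str:
    """Bundle the target message and the target audio into one JSON payload."""
    return json.dumps({
        "message": target_message,
        "audio_b64": base64.b64encode(audio_bytes).decode("ascii"),
    })

def unpack_message(payload: str) -> Tuple[str, bytes]:
    """Counterpart on the second electronic device (steps 305 and 306)."""
    data = json.loads(payload)
    return data["message"], base64.b64decode(data["audio_b64"])
```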
Specifically, before step 304, the message processing method may further include the following steps:
the first electronic device displays a transmission confirmation option; if a fifth input for the transmission confirmation option is received, the operation of transmitting the target message and the target audio to a second electronic device is executed; or if a sixth input for the transmission confirmation option is received, deleting the target audio.
In this step, the send confirmation option may be an option prompting the user to choose whether to send the target audio. The fifth input may be a selection operation by which the user confirms sending the target audio; the sixth input may be a selection operation by which the user confirms not sending the target audio.
By way of example, the send confirmation option may be a dialog box popped up on the screen prompting "Send the target audio?". The fifth input may be a click on its "Confirm" key; the sixth input may be a click on its "Discard" key.
Specifically, if the user chooses to send the target audio, the first electronic device may send the target message and the target audio to the second electronic device. If the user chooses not to send the target audio, the first electronic device sends only the target message to the second electronic device and deletes the target audio, which saves memory space and reduces resource occupation.
In this step, prompting the user whether to send the target audio allows the audio to be sent selectively according to the sender's actual needs, and prevents an unnecessary sending operation, and the resulting waste of system resources, when the user does not need to send the target audio.
Step 305, the second electronic device receives the target message and the target audio sent by the first electronic device; the sound spectrum of the target audio is matched with the target sound spectrum; the target sound spectrum is generated based on message characters in the target message and input time intervals between the respective message characters.
The specific implementation manner of this step is the same as that of step 201, and will not be described here again.
Step 306, the second electronic device displays the target message and plays the target audio.
In this step, the second electronic device may display the target message and play the target audio if a seventh input for the target message is received. Specifically, the target audio may be played in the background. The seventh input may be a selection operation by the user for playing the target audio, for example a click or touch operation on the target message displayed on the screen.
In this step, the target audio is played only when the seventh input of the user is received, which avoids executing an unnecessary playing operation, and wasting system resources, when the user does not want to play it.
In summary, according to the message processing method provided by the embodiment of the invention, the first electronic device may receive the first input of message characters from the user, generate a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character, obtain a target audio according to the target sound spectrum, and finally send the target message formed by the message characters, together with the target audio, to the second electronic device; the second electronic device may receive the target message and the target audio and present them to the user. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when the message is input, and the audio has a certain interest, so that by sending the target message and the target audio at the same time, the user of the second electronic device can more intuitively experience the emotion of the user of the first electronic device, the interest of message sending is enhanced, and the user experience is improved.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 4, a block diagram of an electronic device of the present invention is shown, and in particular, the electronic device 40 may include the following modules:
the first receiving module 401 is configured to receive a first input of a message character.
A generating module 402, configured to generate, in response to the first input, a corresponding target sound spectrum according to the message characters and an input time interval between inputting the message characters.
And the obtaining module 403 is configured to obtain a target audio according to the target audio spectrum.
And the sending module 404 is configured to send a target message composed of the message characters and the target audio.
Optionally, the generating module 402 is configured to:
determining the target preset note corresponding to each message character according to the preset correspondence between message characters and preset notes; and taking the input time interval between inputting each message character as the scale interval between the target preset notes, and arranging the target preset notes according to the scale intervals to obtain the target sound spectrum.
Optionally, the generating module 402 is further configured to:
setting accents according to the message characters, and taking the input time interval between inputting the message characters as the accent interval between the accents; and arranging the accents according to the accent intervals to obtain the target sound spectrum.
Optionally, the obtaining module 403 is configured to:
calculating the similarity between the target sound spectrum and the sound spectrums of all preset audios in a preset audio library; and if a preset audio whose similarity is greater than a preset threshold exists, determining the preset audio with the maximum similarity as the target audio.
Optionally, the obtaining module 403 is further configured to:
displaying the audio confirmation information; if a second input for the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio; if a third input for the audio confirmation information is received, displaying preset musical instrument options; receiving a fourth input for the instrument option; and in response to the fourth input, synthesizing the target audio according to the target sound spectrum and the sound effect of the musical instrument corresponding to the musical instrument option indicated by the fourth input.
Optionally, the electronic device 40 further includes:
and the confirmation module is used for displaying the transmission confirmation options.
And the execution module is used for executing the operation of sending the target message and the target audio to the second electronic equipment if the fifth input for the sending confirmation option is received.
And the deleting module is used for deleting the target audio if a sixth input aiming at the sending confirmation option is received.
In summary, the electronic device provided by the embodiment of the present invention may receive the first input of message characters from the user, generate a corresponding target sound spectrum according to the message characters and the input time interval between inputting each message character, obtain a target audio according to the target sound spectrum, and finally send the target message composed of the message characters together with the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when the message is input, and the audio has a certain interest, so that by sending the target message and the target audio at the same time, the user of the electronic device can more intuitively experience the emotion of the message sender, the interest of message sending is enhanced, and the user experience is improved.
Fig. 5 is a block diagram of another electronic device of the present invention, the electronic device 50 comprising:
a second receiving module 501, configured to receive a target message and a target audio sent by a first electronic device; the sound spectrum of the target audio is matched with the target sound spectrum; the target sound spectrum is generated based on message characters in the target message and an input time interval between inputting each of the message characters.
The display module 502 is configured to display the target message and play the target audio.
Optionally, the display module 502 is configured to:
displaying the target message; and playing the target audio if a seventh input for the target message is received.
In summary, the electronic device provided by the embodiment of the present invention may receive a target message and a target audio, where the sound spectrum of the target audio matches the target sound spectrum generated based on the message characters forming the target message and the input time intervals between inputting those characters; the target message and the target audio are then presented to the user. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when the message is input, and the audio has a certain interest, so that by receiving the target message and the target audio sent together by the first electronic device, the user of the second electronic device can better understand the emotion of the user of the first electronic device; this can evoke emotional resonance in the receiver, improves the interest of message sending, and avoids the monotony of simply sending a text message.
Referring to fig. 6, a schematic diagram of a hardware architecture of an electronic device implementing various embodiments of the invention is shown.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, processor 610, and power supply 611. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 6 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In an embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted mobile terminal, a wearable device, a pedometer, and the like.
Wherein the processor 610 is configured to receive a first input of a message character.
A processor 610 is configured to generate a corresponding target sound spectrum according to the message characters and an input time interval between inputting each of the message characters in response to the first input.
And a processor 610, configured to obtain a target audio according to the target sound spectrum.
And a processor 610 for sending a target message composed of the message characters and the target audio.
In the embodiment of the invention, the electronic equipment can receive the first input of the message character input by the user, generate the corresponding target sound spectrum according to the message character and the input time interval between the message characters, obtain the target audio according to the target sound spectrum, and finally send the target message and the target audio formed by the message character. Compared with the mode of directly sending the message input by the user in the prior art, the target audio can reflect the emotion of the user when the message is input, and the audio has a certain interestingness, so that the user of the electronic equipment can more intuitively experience the emotion of the message sender by sending the target message and the target audio at the same time, the interestingness of the message sending is enhanced, and the user experience is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the downlink data with the processor 610; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 600. The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used for receiving audio or video signals. The input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 may receive sound and can process such sound into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and output.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the electronic device 600 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 606 is used to display information input by a user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on touch panel 6071 or thereabout using any suitable object or accessory such as a finger, stylus, or the like). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 610, and receives and executes commands sent from the processor 610. In addition, the touch panel 6071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 6071 may be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 610 to determine a type of a touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 608 is an interface to which an external device is connected to the electronic apparatus 600. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 600 or may be used to transmit data between the electronic apparatus 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.); the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.). In addition, the memory 609 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may also include a power supply 611 (e.g., a battery) for powering the various components, and preferably the power supply 611 may be logically coupled to the processor 610 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules, which are not shown, and will not be described herein.
Optionally, the embodiment of the present invention further provides an electronic device, including a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program when executed by the processor 610 implements each process of the above embodiment of the message processing method, and the same technical effects can be achieved, and for avoiding repetition, a description is omitted herein.
Optionally, an embodiment of the present invention further provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor implements each process of the foregoing embodiment of the message processing method, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (4)

1. A message processing method applied to an electronic device, comprising:
receiving a first input of a message character;
setting accents according to the message characters in response to the first input, and taking the input time interval between the message characters as the accent interval between the accents;
arranging the accents according to the accent intervals to obtain a target sound spectrum; the target sound spectrum comprises the accents and the accent intervals;
calculating the similarity between the target sound spectrum and sound spectrums of all preset audios in a preset audio library;
if a preset audio whose similarity is greater than a preset threshold exists, determining the preset audio with the maximum similarity as a target audio;
and sending a target message formed by the message characters, and the target audio.
2. The method according to claim 1, wherein determining the preset audio with the maximum similarity as the target audio comprises:
displaying audio confirmation information;
if a second input for the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio;
if a third input for the audio confirmation information is received, displaying preset musical instrument options;
receiving a fourth input for the instrument options;
and in response to the fourth input, synthesizing the target audio according to the target audio and the sound effect of the musical instrument corresponding to the musical instrument option indicated by the fourth input.
3. An electronic device, comprising:
a first receiving module, used for receiving a first input of a message character;
a generation module, used for setting accents according to the message characters in response to the first input, taking the input time interval between the message characters as the accent interval between the accents, and arranging the accents according to the accent intervals to obtain a target sound spectrum, wherein the target sound spectrum comprises the accents and the accent intervals;
an acquisition module, used for calculating the similarity between the target sound spectrum and the sound spectrum of each preset audio in a preset audio library, and, if a preset audio whose similarity is greater than a preset threshold exists, determining the preset audio with the maximum similarity as a target audio;
and a sending module, used for sending a target message formed by the message characters, and the target audio.
4. The electronic device of claim 3, wherein the acquisition module is further configured to:
display audio confirmation information;
if a second input for the audio confirmation information is received, determine the preset audio with the maximum similarity as the target audio;
if a third input for the audio confirmation information is received, display preset musical instrument options;
receive a fourth input for the instrument options;
and, in response to the fourth input, synthesize the target audio according to the target audio and the sound effect of the musical instrument corresponding to the musical instrument option indicated by the fourth input.
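The matching procedure recited in claims 1 and 3 can be read as a rhythm-matching pipeline: one accent per message character, the typing intervals as accent intervals, and a similarity search over a preset audio library. The Python sketch below is a minimal, hypothetical illustration of that pipeline only; every name in it (Accent, TargetSpectrum, build_target_spectrum, spectrum_similarity, select_target_audio), the accent-strength mapping, and the interval-based similarity measure are assumptions introduced for illustration, since the claims do not specify how accents are derived or how sound-spectrum similarity is computed.

```python
from dataclasses import dataclass
from typing import List, Optional, Sequence, Tuple


@dataclass
class Accent:
    """One accent set for a single message character (hypothetical representation)."""
    character: str
    strength: float  # illustrative mapping, e.g. derived from the character's code point


@dataclass
class TargetSpectrum:
    """Target sound spectrum: the accents plus the accent intervals between them."""
    accents: List[Accent]
    intervals: List[float]  # seconds between consecutive character inputs


def build_target_spectrum(characters: Sequence[str],
                          input_times: Sequence[float]) -> TargetSpectrum:
    # One accent per message character; the input time interval between
    # consecutive characters becomes the accent interval.
    accents = [Accent(c, strength=(ord(c) % 10) / 10.0) for c in characters]
    intervals = [later - earlier for earlier, later in zip(input_times, input_times[1:])]
    return TargetSpectrum(accents, intervals)


def spectrum_similarity(target: TargetSpectrum,
                        preset_intervals: Sequence[float]) -> float:
    # Toy similarity in [0, 1]: compare the rhythm (interval pattern) of the
    # target spectrum with the interval pattern of one preset audio's spectrum.
    n = min(len(target.intervals), len(preset_intervals))
    if n == 0:
        return 0.0
    mean_diff = sum(abs(a - b) for a, b in zip(target.intervals, preset_intervals)) / n
    return 1.0 / (1.0 + mean_diff)


def select_target_audio(target: TargetSpectrum,
                        preset_library: Sequence[Tuple[str, Sequence[float]]],
                        threshold: float = 0.6) -> Optional[str]:
    # If any preset audio scores above the preset threshold, return the one
    # with the maximum similarity; otherwise return None.
    best_id: Optional[str] = None
    best_score = threshold
    for audio_id, preset_intervals in preset_library:
        score = spectrum_similarity(target, preset_intervals)
        if score > best_score:
            best_id, best_score = audio_id, score
    return best_id
```

For example, build_target_spectrum(list("hello"), [0.0, 0.2, 0.5, 0.9, 1.1]) yields five accents and four accent intervals, and select_target_audio returns the identifier of the preset audio whose interval pattern best matches that rhythm, or None when no preset audio exceeds the preset threshold.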
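The confirmation flow of claims 2 and 4 can be sketched in the same hedged way. In the sketch below, the branch on the second input versus the third and fourth inputs follows the claim wording, while the function name, the instrument_effects mapping, and the synthesize callback are assumptions; the claims do not describe how the instrument sound effect is actually combined with the audio.

```python
from typing import Callable, Mapping, Optional


def confirm_target_audio(candidate_audio: str,
                         second_input: bool,
                         chosen_instrument: Optional[str],
                         instrument_effects: Mapping[str, str],
                         synthesize: Callable[[str, str], str]) -> str:
    """Resolve the target audio after the audio confirmation information is displayed.

    second_input=True models the second input (the user accepts the best match);
    otherwise a third input has shown the instrument options and chosen_instrument
    models the fourth input that picked one of them.
    """
    if second_input:
        # Second input: the preset audio with the maximum similarity becomes the target audio.
        return candidate_audio
    if chosen_instrument is None:
        raise ValueError("fourth input expected: no instrument option selected")
    # Fourth input: synthesize the target audio from the candidate audio and the
    # sound effect of the chosen instrument (the combination step is an assumption).
    return synthesize(candidate_audio, instrument_effects[chosen_instrument])
```

As a usage illustration, confirm_target_audio("preset_42", second_input=False, chosen_instrument="piano", instrument_effects={"piano": "piano_fx"}, synthesize=lambda audio, fx: audio + "+" + fx) returns "preset_42+piano_fx", whereas passing second_input=True returns the candidate audio unchanged.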
CN202010158272.XA 2020-03-09 2020-03-09 Message processing method and electronic equipment Active CN111338598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158272.XA CN111338598B (en) 2020-03-09 2020-03-09 Message processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158272.XA CN111338598B (en) 2020-03-09 2020-03-09 Message processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111338598A (en) 2020-06-26
CN111338598B (en) 2023-10-13

Family

ID=71184039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158272.XA Active CN111338598B (en) 2020-03-09 2020-03-09 Message processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111338598B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507162B (en) * 2020-11-24 2024-01-09 北京达佳互联信息技术有限公司 Information processing method, device, terminal and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105119815A (en) * 2015-09-14 2015-12-02 小米科技有限责任公司 Method and device for realizing music play in instant messaging interface
CN106649642A (en) * 2016-12-08 2017-05-10 腾讯音乐娱乐(深圳)有限公司 Song searching method, song searching system and related equipment
CN108174274A (en) * 2017-12-28 2018-06-15 广州酷狗计算机科技有限公司 Virtual objects presentation method, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080025772A (en) * 2006-09-19 2008-03-24 삼성전자주식회사 Music message service transfering/receiving method and service support sytem using the same for mobile phone

Also Published As

Publication number Publication date
CN111338598A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN109447234B (en) Model training method, method for synthesizing speaking expression and related device
CN110096580B (en) FAQ conversation method and device and electronic equipment
CN110827826B (en) Method for converting words by voice and electronic equipment
CN109993821B (en) Expression playing method and mobile terminal
CN107731241B (en) Method, apparatus and storage medium for processing audio signal
CN108668024B (en) Voice processing method and terminal
CN108008858B (en) Terminal control method and mobile terminal
CN111524501B (en) Voice playing method, device, computer equipment and computer readable storage medium
CN110097872B (en) Audio processing method and electronic equipment
CN109885162B (en) Vibration method and mobile terminal
CN109412932B (en) Screen capturing method and terminal
CN110830362B (en) Content generation method and mobile terminal
CN110830368B (en) Instant messaging message sending method and electronic equipment
CN111445927B (en) Audio processing method and electronic equipment
CN108600079B (en) Chat record display method and mobile terminal
CN110808019A (en) Song generation method and electronic equipment
CN111372029A (en) Video display method and device and electronic equipment
CN109063076B (en) Picture generation method and mobile terminal
CN111338598B (en) Message processing method and electronic equipment
CN111292727B (en) Voice recognition method and electronic equipment
CN110378677B (en) Red envelope pickup method and device, mobile terminal and storage medium
CN111145734A (en) Voice recognition method and electronic equipment
CN110880330A (en) Audio conversion method and terminal equipment
CN111596841B (en) Image display method and electronic equipment
CN111416955B (en) Video call method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant