CN111338598A - Message processing method and electronic equipment - Google Patents

Message processing method and electronic equipment

Info

Publication number
CN111338598A
Authority
CN
China
Prior art keywords
target
message
audio
input
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010158272.XA
Other languages
Chinese (zh)
Other versions
CN111338598B (en)
Inventor
Zhong Changyong (钟昌勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weiwo Software Technology Co., Ltd.
Original Assignee
Nanjing Weiwo Software Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weiwo Software Technology Co., Ltd.
Priority to CN202010158272.XA
Publication of CN111338598A
Application granted
Publication of CN111338598B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63 Querying
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques specially adapted for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Hospice & Palliative Care (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the invention provides a message processing method and an electronic device, where the method includes the following steps: the electronic device receives a first input of message characters from a user, generates a corresponding target sound spectrum according to the message characters and the input time intervals between them, obtains a target audio according to the target sound spectrum, and finally sends a target message composed of the message characters together with the target audio. Compared with directly sending the message input by the user, the target audio can reflect the user's emotion while inputting the message, and the audio adds a degree of playfulness. By sending the target message and the target audio together, the receiving user can experience the sender's emotion more intuitively, message sending becomes more engaging, and the user experience is improved.

Description

Message processing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a message processing method and an electronic device.
Background
At present, when mobile terminal users communicate through social chat software, limitations such as the environment and network conditions may prevent them from using voice or video calls, leaving text messages as the only option.
In the conventional message processing method, after the user inputs message characters on the electronic device, a message composed of those characters is sent directly. This cannot intuitively convey the user's emotional state at the time of input, resulting in a poor user experience.
Disclosure of Invention
The embodiment of the invention provides a message processing method and electronic equipment, and aims to solve the technical problem that the emotion of a user cannot be intuitively conveyed in the prior art.
In order to solve the above problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention discloses a message processing method, which is applied to an electronic device, and the method includes:
receiving a first input of a message character;
responding to the first input, and generating a corresponding target sound spectrum according to the message characters and input time intervals among the input message characters;
obtaining a target audio according to the target sound spectrum;
and sending a target message consisting of the message characters and the target audio.
In a second aspect, an embodiment of the present invention discloses an electronic device, including:
a first receiving module for receiving a first input of a message character;
the generating module is used for responding to the first input, and generating a corresponding target sound spectrum according to the message characters and input time intervals among the input message characters;
the acquisition module is used for acquiring a target audio according to the target sound spectrum;
and the sending module is used for sending the target message composed of the message characters and the target audio.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the message processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the message processing method according to the first aspect.
In the embodiment of the invention, the electronic device can receive the first input of message characters from the user, generate the corresponding target sound spectrum according to the message characters and the input time intervals between them, obtain the target audio according to the target sound spectrum, and finally send the target message composed of the message characters, together with the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion while inputting the message, and the audio adds a degree of playfulness. By sending the target message and the target audio together, the receiving user can experience the sender's emotion more intuitively, message sending becomes more engaging, and the user experience is improved.
Drawings
FIG. 1 is a flow chart illustrating the steps of a message processing method of the present invention;
FIG. 2 is a flow chart illustrating the steps of another message processing method of the present invention;
FIG. 3 illustrates an interaction flow diagram of a message processing method of the present invention;
FIG. 4 shows a block diagram of an electronic device of the present invention;
FIG. 5 shows a block diagram of another electronic device of the present invention;
FIG. 6 shows a hardware structure diagram of an electronic device implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, a flowchart illustrating the steps of a message processing method according to the present invention is shown. The method is applied to an electronic device, which may be a first electronic device acting as the sender of a message; the electronic device may specifically be a smart phone, a notebook computer, a tablet computer, or a vehicle-mounted computer. The method may specifically include:
step 101, receiving a first input of a message character.
In the embodiment of the present invention, the first input may be an operation of inputting a message character by a user. The first input may be input by a user through a virtual keyboard or a text writing area on a screen of the electronic device, or may be input by the user through an external device such as a keyboard.
The message characters may be used to compose a message. A message character may be a letter, a stroke constituting a Chinese character, a punctuation mark, a digit, or another symbol used in an input operation. The message characters entered through the first input are determined by what the user wants to send. For example, when typing on a keyboard, the user presses the corresponding keys to input the message characters.
Step 102: responding to the first input, and generating a corresponding target sound spectrum according to the message characters and the input time intervals between them.
In embodiments of the present invention, the input time interval refers to the pause between the user inputting one message character and inputting the next, and may be expressed in seconds or milliseconds. In practice, the input time intervals between message characters can be affected by the user's emotion: when the user is low-spirited, typing may slow down and the intervals lengthen; when the user is anxious or in a hurry, typing speeds up and the intervals shorten accordingly. Therefore, a target sound spectrum generated from the message characters and the input time intervals can be used to represent the user's emotion.
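As a minimal illustrative sketch (not part of the patent text), the input time intervals can be derived from keystroke timestamps; the function name and the sample timestamps below are assumptions:

```python
def input_intervals(timestamps):
    """Pause between consecutive message characters, computed from
    keystroke timestamps in seconds; longer pauses may hint at a
    slower, lower mood."""
    return [round(b - a, 3) for a, b in zip(timestamps, timestamps[1:])]

# A slower, hesitant rhythm yields longer intervals.
print(input_intervals([0.0, 0.4, 1.5, 1.9]))
```

A real implementation would record a timestamp in the keyboard input handler each time a character event arrives.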
The target sound spectrum refers to the notation corresponding to the user's current message input operation. It may be, for example, a numbered musical notation, i.e., a score written with the digital scales 1-7, or an accent notation, i.e., a score composed of accents represented by accent marks.
Step 103: obtaining a target audio according to the target sound spectrum.
In this step, since the target sound spectrum can reflect the user's emotion, obtaining an audio whose sound spectrum matches the target sound spectrum ensures that the target audio reflects the user's emotion when inputting the message.
Step 104: sending a target message composed of the message characters and the target audio.
In the embodiment of the invention, the target message refers to a message composed of the message characters input by the user. For example, the received message characters may be combined in sequence to obtain the target message: input strokes may be combined into Chinese characters, input letters into words, or characters may be derived from the pinyin spelled by the input letters.
The message receiver in this step may be a second electronic device, which may be a receiver of the message, and the second electronic device may be an electronic device used by a friend of the user in the social software, where the electronic device may specifically be a smart phone, a notebook, a tablet computer, a vehicle-mounted computer, and the like. Of course, the first electronic device and the second electronic device may also be the same electronic device, that is, the user may also send the target message and the target audio to the electronic device currently used by the user according to actual needs, which is not limited in the embodiment of the present invention.
Specifically, after the first electronic device determines the target audio corresponding to the first input, the target message and the target audio may be sent to the second electronic device over a network connection. Because the target audio can reflect the user's emotion when inputting the message, and the audio adds a degree of playfulness, sending the target audio along with the target message lets the second electronic device user understand the first user's emotion and makes message sending more engaging.
In summary, in the message processing method provided by the embodiments of the present invention, the electronic device may receive a first input of message characters from the user, generate a corresponding target sound spectrum according to the message characters and the input time intervals between them, obtain a target audio according to the target sound spectrum, and finally send a target message composed of the message characters and the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion while inputting the message, and the audio adds a degree of playfulness. By sending the target message and the target audio together, the receiving user can experience the sender's emotion more intuitively, message sending becomes more engaging, and the user experience is improved.
Referring to FIG. 2, a flowchart of the steps of another message processing method of the present invention is shown. This method is applied to an electronic device, which may be a second electronic device acting as the recipient of a message.
The method specifically comprises the following steps:
step 201, receiving a target message and a target audio sent by a first electronic device; the sound spectrum of the target audio frequency is matched with the target sound spectrum; the target voice spectrum is generated based on message characters in the target message and an input time interval between inputting each of the message characters.
In the embodiment of the invention, the target audio spectrum is matched with the target audio spectrum; the target audio is generated based on the message characters in the target message and the input time interval between the message characters, so that the target audio can reflect the emotion of the user when inputting the message.
Step 202, displaying the target message, and playing the target audio.
In this step, after receiving the target message and the target audio, the second electronic device may display the target message on its screen and play the target audio according to a preset mode. Specifically, the second electronic device may play the target audio in the background, as background music while the user reads the message content. By playing the target audio, the second electronic device user can better understand the emotion of the first electronic device user, and reading the message becomes more engaging.
In summary, in the message processing method provided by the embodiment of the present invention, the second electronic device may receive the target message and the target audio, where the sound spectrum of the target audio matches the target sound spectrum generated from the message characters constituting the target message and the input time intervals between them; the target message and target audio are then presented to the user. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion when inputting the message, and the audio adds a degree of playfulness. By receiving the target message and the target audio together, the second electronic device user can better understand the first user's emotion, emotional resonance may be evoked in the receiver, and message exchange becomes more engaging, avoiding the monotony of plain text messages.
Referring to fig. 3, an interaction flowchart of a message processing method of the present invention is shown, where the method may specifically include:
step 301, a first electronic device receives a first input of a message character.
The implementation of this step is the same as that of the step 101, and is not described herein again.
Step 302, the first electronic device responds to the first input, and generates a corresponding target sound spectrum according to the message characters and input time intervals among the input message characters.
Specifically, this step can be realized by the following first implementation mode consisting of substeps (1) to (2) or second implementation mode consisting of substeps (3) to (4):
the implementation mode is as follows:
and (1) the first electronic equipment determines a target preset note corresponding to each message character according to the corresponding relation between the preset message character and the preset note.
In this step, the preset notes may be scales represented by digits, specifically the 7 scales 1, 2, 3, 4, 5, 6, and 7, read as do, re, mi, fa, sol, la, and ti (si in China). The correspondence between message characters and preset notes can be preset by the user according to actual needs. Further, because there are more message characters than preset notes, several message characters can be mapped to one preset note through a digital mapping, ensuring every message character has a corresponding preset note.
Specifically, when the preset note corresponding to each message character is determined according to the corresponding relationship, the preset note corresponding to the message character may be determined by searching in the preset corresponding relationship according to the received message character.
Further, in practical applications, a user often uses a keyboard to input message characters, and one message character in the keyboard often corresponds to one key, and in such an implementation scenario, the correspondence between the message character and the preset note may be understood as the correspondence between the key and the preset note.
For example, if the message the user needs to send is "hello" and the input mode is the alphabetical keyboard, the message characters received by the first electronic device may be "ni1hao1", where "1" represents a space character. The target preset note corresponding to each message character is then looked up in the correspondence, or determined from the key pressed by the user.
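The character-to-note lookup can be sketched as follows. The patent only states that several characters may share one preset note via a digital mapping; the modulo mapping used here is a hypothetical choice for illustration:

```python
SCALE_NAMES = {1: "do", 2: "re", 3: "mi", 4: "fa", 5: "sol", 6: "la", 7: "ti"}

def char_to_note(ch):
    """Fold a message character onto one of the 7 preset notes (1-7)
    via an illustrative modulo mapping."""
    return ord(ch.lower()) % 7 + 1

def message_to_notes(chars):
    return [char_to_note(c) for c in chars]

notes = message_to_notes("ni1hao1")
print([SCALE_NAMES[n] for n in notes])
```

In a keyboard-driven implementation the table could equally be keyed by key code rather than character, matching the key-to-note reading of the correspondence.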
Substep (2): the first electronic device uses the input time intervals between the input message characters as the scale intervals between the target preset notes, and arranges the target preset notes according to the scale intervals to obtain the target sound spectrum.
In this step, the scale interval refers to the interval between notes; it may be expressed as the duration for which a preset note is held.
Specifically, the target preset notes corresponding to the message characters may be arranged in the input order of the message characters, and duration marks may be added according to the scale intervals between them: for example, a target preset note may be determined to be a whole note, half note, quarter note, or eighth note depending on the length of the scale interval, reflecting how long the note is held. In this way, a set of numbered musical notation including the target preset notes and the scale intervals is obtained as the target sound spectrum.
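Assembling notes and duration marks might look like the following sketch; the concrete interval thresholds (2.0 s, 1.0 s, 0.5 s) are illustrative assumptions, since the patent fixes no cut-offs:

```python
def interval_to_duration(seconds):
    """Map the pause after a note to a duration mark. The thresholds
    are illustrative; the patent fixes no concrete cut-offs."""
    if seconds >= 2.0:
        return "whole"
    if seconds >= 1.0:
        return "half"
    if seconds >= 0.5:
        return "quarter"
    return "eighth"

def build_numbered_notation(notes, intervals):
    # The last note has no following pause; give it a default quarter mark.
    durations = [interval_to_duration(t) for t in intervals] + ["quarter"]
    return list(zip(notes, durations))

print(build_numbered_notation([6, 1, 1], [0.3, 1.2]))
```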
In the embodiment of the invention, because the message characters correspond to the preset notes, the generated target sound spectrum is strongly associated with the message characters input by the user, so the target audio acquired from the target sound spectrum can better reflect the target message, improving the expressiveness of the target audio for the target message.
Implementation mode two:
and a substep (3) in which the first electronic device sets stressed syllables according to the message characters, and inputs an input time interval between the message characters as stressed interval between the stressed syllables.
In this step, a stressed syllable indicates that the note is an accent in the score and needs to be played strongly; one accent mark is generated for each message character. The stress interval refers to the interval between two adjacent accents, and other content may be inserted into it, for example light (unstressed) syllables or rests.
Specifically, this step does not limit which scale corresponds to a message character; it may be denoted as scale X. After receiving a message character input by the user, a stressed syllable is set for it, which may be denoted as scale X with an accent mark. The input time interval is used as the interval between two accents, and the number of light syllables within a stress interval is not limited.
Substep (4): arranging the stressed syllables according to the stress intervals to obtain the target sound spectrum.
In this step, after the stressed syllables and stressed intervals are determined, the stressed syllables are sequentially arranged according to the stressed intervals to obtain stressed spectrums formed by the stressed syllables and the stressed intervals, namely the target sound spectrums.
For example, when the message the user needs to send is "hello" and the alphabetical keyboard input mode is used, the message characters received by the first electronic device may be "ni1hao1". Accordingly, 7 stressed syllables may be set for the 7 message characters, where the scale of each stressed syllable may be any scale X. The input time intervals between the message characters are used as the stress intervals between the stressed syllables; the stressed syllables are then arranged in the input order of the message characters, with the interval between two adjacent stressed syllables determined by the stress interval. In this way, a set of stress spectra including the stressed syllables and the stress intervals is obtained as the target sound spectrum.
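A hedged sketch of implementation mode two: one accented syllable per message character on an arbitrary scale X, with the input pauses reused as the stress intervals. The dictionary layout is an assumption made for illustration:

```python
def build_accent_spectrum(chars, intervals, scale=1):
    """One accented syllable per message character (any scale X), with
    the input pauses between characters reused as the stress intervals."""
    accents = [{"scale": scale, "accent": True} for _ in chars]
    return {"accents": accents, "intervals": list(intervals)}

spec = build_accent_spectrum("ni1hao1", [0.2, 0.4, 0.3, 0.5, 0.2, 0.4])
print(len(spec["accents"]), len(spec["intervals"]))
```

Note that seven characters yield seven accents but only six intervals, one per gap between consecutive characters.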
In the embodiment of the invention, setting stressed syllables according to the message characters keeps the generated target sound spectrum associated with the characters the user input while reducing how much those characters constrain the spectrum. This widens the range of target audio that can be acquired from the target sound spectrum and improves how well the target audio fits the user's emotion.
Step 303: the first electronic device obtains a target audio according to the target sound spectrum.
The step can be realized by the following substeps 3031-3032:
and a substep 3031, calculating the similarity between the target sound spectrum and the sound spectrum of each preset audio frequency in a preset audio frequency library by the first electronic equipment.
In this step, the preset audio may be audio uploaded by the user according to personal preference or downloaded from a network platform, and its sound spectrum may be a numbered musical notation, an accent notation, or the like.
Further, the similarity may represent the degree of matching between the sound spectrums, and may be expressed in percentage. Specifically, during calculation, the types of the notes, the scale intervals and the weights of the notes can be compared one by one according to the sequence of the notes, and the higher the matching degree is, the greater the similarity is.
Substep 3032: if preset audio with a similarity greater than a preset threshold exists, determine the preset audio with the maximum similarity as the target audio.
In this step, the preset threshold may be a similarity threshold set by the user according to actual needs, for example 90%. Specifically, after calculating the similarity between the target sound spectrum and the sound spectrum of each preset audio, if any similarity exceeds the preset threshold, the preset audio library contains audio that can express the emotion conveyed by the target sound spectrum reasonably well. The preset audio with the maximum similarity, i.e., the one in the library that best expresses that emotion, is therefore determined as the target audio.
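The library search in substeps 3031-3032 can be sketched as below. The note-by-note match ratio stands in for the patent's similarity measure (which may also weight scale intervals and note weights); the library contents and the 90% threshold are illustrative:

```python
def spectrum_similarity(target, preset):
    """Percentage of positions whose note values match. A real
    implementation might also weight scale intervals and note weights."""
    if not target or not preset:
        return 0.0
    matches = sum(1 for a, b in zip(target, preset) if a == b)
    return 100.0 * matches / max(len(target), len(preset))

def pick_preset(target, library, threshold=90.0):
    """Return the best-matching preset's name, or None if nothing in
    the library clears the similarity threshold."""
    best_score, best_name = max(
        (spectrum_similarity(target, notes), name)
        for name, notes in library.items())
    return best_name if best_score > threshold else None

library = {"tune_a": [6, 1, 1, 7, 7, 7, 1], "tune_b": [1, 2, 3, 4, 5, 6, 7]}
print(pick_preset([6, 1, 1, 7, 7, 7, 1], library))
```

When `pick_preset` returns `None`, no preset clears the threshold, which corresponds to the fallback path of synthesizing audio from an instrument option instead.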
Specifically, the sub-step 3032 can be realized by the following sub-steps a to E:
and step A, the first electronic equipment displays audio confirmation information.
In this step, the audio confirmation information may be used to confirm whether the user selects a preset audio with the largest similarity as the target audio, and the audio confirmation information may be displayed on the screen of the first electronic device in a dialog box, so as to facilitate the user to perform selection confirmation.
Substep B: if a second input for the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio.
In this step, the second input may be transmitted when the user confirms that the preset audio with the maximum similarity is determined as the target audio. For example, the second input may be an operation of the user clicking a confirmation button in a dialog box displaying audio confirmation information.
In the embodiment of the invention, the similarity between the sound spectrum of each preset audio in the audio library and the target sound spectrum is calculated. When a similarity exceeds the preset threshold, i.e., a preset audio exists that can express the emotion conveyed by the target sound spectrum, and the user is satisfied with that audio, using it as the target audio ensures that the target audio expresses the user's emotion well. Meanwhile, reusing existing preset audio saves, to a certain extent, the cost of obtaining the target audio.
Substep C: displaying preset instrument options if a third input for the audio confirmation information is received.
In this step, the third input may be sent when the user is not satisfied with the preset audio determined according to the similarity and does not want to determine the preset audio with the maximum similarity as the target audio.
Specifically, the first electronic device may display a plurality of preset musical instrument options on the screen, for example, corresponding options such as a piano, a violin, a zither, and a suona. The instrument options are displayed, so that a user can conveniently select the instruments which are in line with the preference of the user, and the favorite instruments of the user can be used for generating the target audio in the subsequent process.
Further, the first electronic device may also display the preset instrument options when no similarity exceeds the preset threshold, that is, when no preset audio in the preset audio library matches the target sound spectrum well enough to express the emotion it conveys. This ensures that a target audio can still be acquired.
Substep D, receiving a fourth input for the instrument option.
The fourth input may be a selection operation of the instrument option by the user, and specifically may be a touch operation of the instrument option by the user on the screen, or a click operation of a mouse, or the like.
Substep E: responding to the fourth input, and synthesizing the target audio according to the target sound spectrum and the sound effect of the instrument corresponding to the instrument option indicated by the fourth input.
In this step, the first electronic device may have a plurality of instruments and corresponding sound effects pre-stored, may simulate instrument sound effects through instrument simulation software, or may generate them online through a network. Searching the preset audio library first, i.e., preferentially reusing existing preset audio, avoids unnecessary resource consumption on the electronic device and improves the efficiency of acquiring the target audio. Further, when the user is unsatisfied with the target audio found by searching, generating the target audio from the target sound spectrum and an instrument chosen by the user ensures that the acquired target audio both matches the user's personal preference and expresses the user's emotion.
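A rough sketch of synthesizing the target audio from a score: each note is rendered as a sine tone whose harmonic mix stands in for an instrument's sound effect. The frequency table, sample rate, and timbre values are assumptions, not taken from the patent:

```python
import math

# Hypothetical do-ti frequencies in one octave (C major, Hz).
SCALE_HZ = {1: 261.63, 2: 293.66, 3: 329.63, 4: 349.23,
            5: 392.00, 6: 440.00, 7: 493.88}

def synthesize(score, sample_rate=8000, timbre=(1.0, 0.5)):
    """Render (note, duration_in_seconds) pairs as a list of PCM samples.
    `timbre` holds relative harmonic amplitudes standing in for the
    selected instrument's sound effect."""
    samples = []
    for note, dur in score:
        freq = SCALE_HZ[note]
        for i in range(int(dur * sample_rate)):
            t = i / sample_rate
            samples.append(sum(a * math.sin(2 * math.pi * freq * (k + 1) * t)
                               for k, a in enumerate(timbre)))
    return samples

audio = synthesize([(1, 0.25), (3, 0.25)])
print(len(audio))
```

Swapping the `timbre` tuple approximates switching between instrument options such as piano or violin; a production system would instead use sampled or simulated instrument sounds as the text describes.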
It should be noted that, in the embodiment of the present invention, the preset instrument options may also be displayed to the user directly, without searching the preset audio library, or displayed directly when an instrument generation instruction sent by the user is received, that is, when the user wants the target audio to be generated with an instrument. The target audio is then synthesized according to the sound effect of the instrument corresponding to the option selected by the user and the target music score; this is not limited in the embodiment of the present invention. In this way, the target audio can be generated according to the actual needs of the user, improving the user experience.
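As a toy illustration of the synthesis path, the sketch below renders a target music score as short sine bursts. A real implementation would apply the selected instrument's stored or simulated sound effect instead of a sine wave; the sample rate, note length, and the (frequency, gap) tuple layout are assumptions, not taken from the patent.

```python
import math

def synthesize(score, sample_rate=8000, note_len=0.25):
    """Render each (freq_hz, gap_s) entry of the score as silence
    followed by a short sine burst standing in for the instrument."""
    samples = []
    for freq, gap in score:
        # silence for the input time interval preceding this note
        samples.extend(0.0 for _ in range(int(gap * sample_rate)))
        # the note itself, a fixed-length sine burst
        n = int(note_len * sample_rate)
        samples.extend(math.sin(2 * math.pi * freq * t / sample_rate)
                       for t in range(n))
    return samples
```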
Step 304, the first electronic device sends the target message composed of the message characters, together with the target audio, to the second electronic device.
In this step, the first electronic device may package the target message and the target audio into a single message and send it to the second electronic device, or may send the two to the second electronic device separately.
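A minimal sketch of the packaged delivery option mentioned here, bundling the text and the audio into one payload; the JSON field names and the base64 encoding are assumptions chosen for illustration, since the embodiment does not specify a wire format.

```python
import base64
import json

def pack_message(target_message, target_audio_bytes):
    """Bundle text and audio into one payload for a single send."""
    return json.dumps({
        "text": target_message,
        "audio_b64": base64.b64encode(target_audio_bytes).decode("ascii"),
    })

def unpack_message(payload):
    """Recover the target message and the target audio on the receiver."""
    doc = json.loads(payload)
    return doc["text"], base64.b64decode(doc["audio_b64"])
```

Sending the two separately would simply skip the bundling step and transmit the text and the audio bytes as independent messages.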
Specifically, before step 304, the message processing method may further include the following steps:
the first electronic device displays a sending confirmation option; if a fifth input aiming at the sending confirmation option is received, executing the operation of sending the target message and the target audio to a second electronic device; or, if a sixth input for the sending confirmation option is received, deleting the target audio.
In this step, the sending confirmation option may be an option prompting the user to choose whether to send the target audio. The fifth input may be a selection operation by which the user confirms sending the target audio; the sixth input may be a selection operation by which the user confirms not sending it.
For example, the sending confirmation option may be a dialog box popped up on the screen, prompting "Send the target audio?" together with "Confirm" and "Abandon" virtual keys. The fifth input may be a click operation on the "Confirm" key, and the sixth input may be a click operation on the "Abandon" key.
Specifically, when the user chooses to send the target audio, the first electronic device may send the target message and the target audio to the second electronic device. When the user chooses not to send the target audio, the first electronic device may send only the target message to the second electronic device and delete the target audio, saving storage space and reducing resource occupation.
In this step, prompting the user about whether to send the target audio allows the target audio to be sent selectively according to the sender's actual needs, preventing unnecessary sending operations, and the resulting waste of system resources, when the user does not need to send it.
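The confirmation branch above can be modeled as a small pure function; the choice labels and the return shape are illustrative assumptions.

```python
def handle_send_confirmation(choice, target_message, target_audio):
    """'confirm' models the fifth input, 'abandon' the sixth.
    Returns (payload_to_send, audio_deleted)."""
    if choice == "confirm":
        # fifth input: send both the target message and the target audio
        return (target_message, target_audio), False
    # sixth input: send only the text and delete the audio to save space
    return (target_message, None), True
```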
Step 305, the second electronic device receives the target message and the target audio sent by the first electronic device; the sound spectrum of the target audio matches the target sound spectrum; the target sound spectrum is generated based on the message characters in the target message and the input time intervals between the message characters.
The specific implementation manner of this step is the same as that of step 201, and is not described herein again.
Step 306, the second electronic device displays the target message and plays the target audio.
In this step, the second electronic device may display the target message and play the target audio if a seventh input for the target message is received. Specifically, the target audio may be played in the background. The seventh input may be a selection operation by the user for playing the target audio, for example, a click or touch operation on the target message on the screen.
In this step, the target audio is played only when the seventh input of the user is received, which avoids performing unnecessary playing operations, and wasting system resources, when the user does not need the audio played.
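A sketch of this receiving-side behaviour: the message is always displayed, and the audio is played only after the seventh input. Representing the UI behaviour as an ordered action list is an assumption made for illustration.

```python
def on_target_message(target_message, target_audio, got_seventh_input):
    """Return the UI actions performed, in order: the message is always
    displayed; the audio plays in the background only on user request."""
    actions = [("display", target_message)]
    if got_seventh_input:
        actions.append(("play_background", target_audio))
    return actions
```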
In summary, in the message processing method provided in the embodiment of the present invention, the first electronic device may receive the first input of the message characters input by the user, generate the corresponding target sound spectrum according to the message characters and the input time intervals between them, obtain the target audio according to the target sound spectrum, and finally send the target message composed of the message characters, together with the target audio, to the second electronic device; the second electronic device may then receive the target message and the target audio and present them to the user. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion at the time of input and adds interest to the message. By sending the target message and the target audio together, the user of the second electronic device can experience the emotion of the user of the first electronic device more intuitively, the fun of messaging is enhanced, and the user experience is improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of an electronic device of the present invention is shown, and specifically, the electronic device 40 may include the following modules:
the first receiving module 401 is configured to receive a first input of a message character.
A generating module 402, configured to generate, in response to the first input, a corresponding target sound spectrum according to the message characters and an input time interval between inputting of the message characters.
An obtaining module 403, configured to obtain a target audio according to the target sound spectrum.
A sending module 404, configured to send a target message composed of the message characters and the target audio.
Optionally, the generating module 402 is configured to:
determining a target preset note corresponding to each message character according to the corresponding relation between the preset message character and the preset note; and taking the input time interval between the input message characters as the scale interval between the target preset notes, and arranging the target preset notes according to the scale interval to obtain the target music score.
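The character-to-note mapping and the interval-as-scale-interval arrangement can be sketched as follows. The character→note table and the (note, gap) representation of the target music score are illustrative assumptions; the patent only requires that some preset correspondence exist.

```python
# Hypothetical correspondence between message characters and preset notes.
NOTE_TABLE = {"h": "C4", "i": "E4", "!": "G4"}

def build_score(chars, intervals, default_note="C4"):
    """chars: message characters in input order.
    intervals: seconds between consecutive character inputs
    (len == len(chars) - 1). Returns (note, gap_before) pairs arranged
    so that each typing interval becomes the interval between notes."""
    assert len(intervals) == max(len(chars) - 1, 0)
    score = []
    for i, ch in enumerate(chars):
        gap = 0.0 if i == 0 else intervals[i - 1]
        score.append((NOTE_TABLE.get(ch, default_note), gap))
    return score
```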
Optionally, the generating module 402 is further configured to:
setting stressed syllables according to the message characters, and taking input time intervals among the input message characters as stressed intervals among the stressed syllables; and arranging the stressed syllables according to the stressed interval to obtain the target sound spectrum.
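Analogously, the stressed-syllable variant might look like the following sketch; the rule that punctuation marks produce stressed syllables is purely an illustrative assumption, since the patent does not specify how stress is derived from the characters.

```python
def build_stress_spectrum(chars, intervals, stress_chars=frozenset("!?")):
    """Mark a stressed syllable for emphasised characters and keep each
    typing interval as the gap between beats."""
    assert len(intervals) == max(len(chars) - 1, 0)
    beats = []
    for i, ch in enumerate(chars):
        gap = 0.0 if i == 0 else intervals[i - 1]
        beats.append(("STRESSED" if ch in stress_chars else "unstressed", gap))
    return beats
```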
Optionally, the obtaining module 403 is configured to:
calculating the similarity between the target sound spectrum and the sound spectrum of each preset audio in a preset audio library; and if a preset audio with a similarity greater than a preset threshold exists, determining the preset audio with the maximum similarity as the target audio.
Optionally, the obtaining module 403 is further configured to:
displaying the audio confirmation information; if a second input aiming at the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio; if a third input for the audio confirmation information is received, displaying preset musical instrument options; receiving a fourth input for the instrument option; and in response to the fourth input, synthesizing the target audio according to the target music score and the sound effect of the instrument corresponding to the instrument option indicated by the fourth input.
Optionally, the apparatus 40 further includes:
and the confirmation module is used for displaying the sending confirmation options.
And the execution module is used for executing the operation of sending the target message and the target audio to the second electronic equipment if a fifth input aiming at the sending confirmation option is received.
A deleting module, configured to delete the target audio if a sixth input for the sending confirmation option is received.
In summary, the electronic device provided in the embodiment of the present invention may receive a first input of message characters input by a user, generate a corresponding target sound spectrum according to the message characters and the input time intervals between them, obtain a target audio according to the target sound spectrum, and finally send the target message composed of the message characters together with the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion at the time of input and adds interest to the message. By sending the target message and the target audio together, the receiving user can experience the sender's emotion more intuitively, the fun of messaging is enhanced, and the user experience is improved.
Fig. 5 is a block diagram of another electronic device 50 according to the present invention, including:
a second receiving module 501, configured to receive a target message and a target audio sent by a first electronic device; the sound spectrum of the target audio frequency is matched with the target sound spectrum; the target voice spectrum is generated based on message characters in the target message and an input time interval between inputting each of the message characters.
A display module 502, configured to display the target message and play the target audio.
Optionally, the display module 502 is configured to:
displaying the target message; and if a seventh input aiming at the target message is received, playing the target audio.
In summary, the electronic device provided in the embodiment of the present invention may receive a target message and a target audio, where the sound spectrum of the target audio matches a target sound spectrum generated based on the message characters constituting the target message and the input time intervals between them, and then present the target message and the target audio to the user. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion at the time of input and adds interest to the message. By receiving the target message and the target audio together, the user of the second electronic device can better understand the emotion of the user of the first electronic device, the receiver's emotional resonance can be aroused, the fun of messaging is improved, and the monotony of sending a text message alone is avoided.
Referring to fig. 6, a hardware structure diagram of an electronic device implementing various embodiments of the invention is shown.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted mobile terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to receive a first input of a message character.
A processor 610, configured to generate, in response to the first input, a corresponding target sound spectrum according to the message characters and an input time interval between inputting of the respective message characters.
And the processor 610 is configured to obtain a target audio according to the target sound spectrum.
And the processor 610 is used for sending a target message composed of the message characters and the target audio.
In the embodiment of the present invention, the electronic device may receive the first input of the message characters input by the user, generate the corresponding target sound spectrum according to the message characters and the input time intervals between them, obtain the target audio according to the target sound spectrum, and finally send the target message composed of the message characters together with the target audio. Compared with directly sending the message input by the user as in the prior art, the target audio can reflect the user's emotion at the time of input and adds interest to the message. By sending the target message and the target audio together, the receiving user can experience the sender's emotion more intuitively, the fun of messaging is enhanced, and the user experience is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the electronic apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of a still picture or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606, stored in the memory 609 (or another storage medium), or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 601.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the electronic apparatus 600 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the electronic apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 600 or may be used to transmit data between the electronic device 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 may be logically connected to the processor 610 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor 610, a memory 609, and a computer program that is stored in the memory 609 and can be run on the processor 610, and when being executed by the processor 610, the computer program implements each process of the above-mentioned message processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
Optionally, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the message processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A message processing method is applied to electronic equipment and is characterized by comprising the following steps:
receiving a first input of a message character;
responding to the first input, and generating a corresponding target sound spectrum according to the message characters and input time intervals among the input message characters;
obtaining a target audio according to the target sound spectrum;
and sending a target message consisting of the message characters and the target audio.
2. The method of claim 1, wherein the step of generating a corresponding target sound spectrum from the message characters and an input time interval between inputting each of the message characters comprises:
determining a target preset note corresponding to each message character according to the corresponding relation between the preset message character and the preset note;
and taking the input time interval between the input message characters as the scale interval between the target preset notes, and arranging the target preset notes according to the scale interval to obtain the target music score.
3. The method of claim 1, wherein the step of generating a corresponding target sound spectrum from the message characters and an input time interval between inputting each of the message characters comprises:
setting stressed syllables according to the message characters, and taking input time intervals among the input message characters as stressed intervals among the stressed syllables;
and arranging the stressed syllables according to the stressed interval to obtain the target sound spectrum.
4. The method of claim 1, wherein the step of obtaining the target audio from the target spectrum comprises:
calculating the similarity between the target sound spectrum and the sound spectrum of each preset audio in a preset audio library;
and if a preset audio with a similarity greater than a preset threshold exists, determining the preset audio with the maximum similarity as the target audio.
5. The method according to claim 4, wherein the determining the preset audio with the maximum similarity as the target audio comprises:
displaying the audio confirmation information;
if a second input aiming at the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio;
if a third input for the audio confirmation information is received, displaying preset musical instrument options;
receiving a fourth input for the instrument option;
and in response to the fourth input, synthesizing the target audio according to the target music score and the sound effect of the instrument corresponding to the instrument option indicated by the fourth input.
6. An electronic device, comprising:
a first receiving module for receiving a first input of a message character;
the generating module is used for responding to the first input, and generating a corresponding target sound spectrum according to the message characters and input time intervals among the input message characters;
the acquisition module is used for acquiring a target audio according to the target sound spectrum;
and the sending module is used for sending the target message composed of the message characters and the target audio.
7. The electronic device of claim 6, wherein the generation module is configured to:
determining a target preset note corresponding to each message character according to the corresponding relation between the preset message character and the preset note;
and taking the input time interval between the input message characters as the scale interval between the target preset notes, and arranging the target preset notes according to the scale interval to obtain the target music score.
8. The electronic device of claim 6, wherein the generating module is further configured to:
setting stressed syllables according to the message characters, and taking input time intervals among the input message characters as stressed intervals among the stressed syllables;
and arranging the stressed syllables according to the stressed interval to obtain the target sound spectrum.
9. The electronic device of claim 6, wherein the obtaining module is configured to:
calculating the similarity between the target sound spectrum and the sound spectrum of each preset audio in a preset audio library;
and if a preset audio with a similarity greater than a preset threshold exists, determining the preset audio with the maximum similarity as the target audio.
10. The electronic device of claim 9, wherein the obtaining module is further configured to:
displaying the audio confirmation information;
if a second input aiming at the audio confirmation information is received, determining the preset audio with the maximum similarity as the target audio;
if a third input for the audio confirmation information is received, displaying preset musical instrument options;
receiving a fourth input for the instrument option;
and in response to the fourth input, synthesizing the target audio according to the target music score and the sound effect of the instrument corresponding to the instrument option indicated by the fourth input.
CN202010158272.XA 2020-03-09 2020-03-09 Message processing method and electronic equipment Active CN111338598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158272.XA CN111338598B (en) 2020-03-09 2020-03-09 Message processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111338598A true CN111338598A (en) 2020-06-26
CN111338598B CN111338598B (en) 2023-10-13


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080070605A1 (en) * 2006-09-19 2008-03-20 Samsung Electronics Co., Ltd. Music message service method and apparatus for mobile terminal
CN105119815A (en) * 2015-09-14 2015-12-02 小米科技有限责任公司 Method and device for realizing music play in instant messaging interface
CN106649642A (en) * 2016-12-08 2017-05-10 腾讯音乐娱乐(深圳)有限公司 Song searching method, song searching system and related equipment
CN108174274A (en) * 2017-12-28 2018-06-15 广州酷狗计算机科技有限公司 Virtual objects presentation method, device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507162A (en) * 2020-11-24 2021-03-16 北京达佳互联信息技术有限公司 Information processing method, device, terminal and storage medium
CN112507162B (en) * 2020-11-24 2024-01-09 北京达佳互联信息技术有限公司 Information processing method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN111338598B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN110827826B (en) Method for converting words by voice and electronic equipment
CN110830362B (en) Content generation method and mobile terminal
CN108668024B (en) Voice processing method and terminal
CN108334272B (en) Control method and mobile terminal
CN109993821B (en) Expression playing method and mobile terminal
CN111445927B (en) Audio processing method and electronic equipment
CN110995919B (en) Message processing method and electronic equipment
CN107731241B (en) Method, apparatus and storage medium for processing audio signal
CN109412932B (en) Screen capturing method and terminal
CN110830368B (en) Instant messaging message sending method and electronic equipment
CN111372029A (en) Video display method and device and electronic equipment
CN109215660A (en) Text error correction method and mobile terminal after speech recognition
CN108600079B (en) Chat record display method and mobile terminal
CN108287644B (en) Information display method of application program and mobile terminal
CN110808019A (en) Song generation method and electronic equipment
CN111273827B (en) Text processing method and electronic equipment
CN109063076B (en) Picture generation method and mobile terminal
CN110780751A (en) Information processing method and electronic equipment
CN110597973A (en) Man-machine conversation method, device, terminal equipment and readable storage medium
CN108108338B (en) Lyric processing method, lyric display method, server and mobile terminal
CN108632465A (en) A kind of method and mobile terminal of voice input
CN111292727B (en) Voice recognition method and electronic equipment
CN111338598B (en) Message processing method and electronic equipment
CN109347721B (en) Information sending method and terminal equipment
CN111445929A (en) Voice information processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant