CN111966257A - Information processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111966257A
CN111966257A
Authority
CN
China
Prior art keywords
input
voice
information
sent
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010866787.5A
Other languages
Chinese (zh)
Inventor
方晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010866787.5A
Publication of CN111966257A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an information processing method, an information processing apparatus and an electronic device, and belongs to the technical field of communications. The method comprises: receiving a first input of a user in a case where text information to be sent is displayed in an input box; displaying a voice conversion interface in response to the first input; receiving a second input to a target control in the voice conversion interface; and, in response to the second input, converting the text information to be sent into voice information corresponding to the target control. Because the text information is converted and synthesized into voice information of the target type before being sent, a user for whom recording voice is inconvenient can still send voice messages when chatting, thereby improving the user experience.

Description

Information processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an information processing method and device and electronic equipment.
Background
With the continuous development of science and technology, electronic devices (such as mobile phones, tablet computers and the like) have become an indispensable tool in life and work of people.
In daily life, many users communicate with relatives and friends through chat software installed on their electronic devices. While chatting with the chat software, a user can send text information by typing, or record voice to send voice information; after receiving a message, the other party can read the text information directly or listen to the voice information to obtain the chat content.
However, when a user is in an environment unsuitable for recording voice but wants to send a voice message, any voice the user records may be accompanied by background noise. The recorded voice may therefore be unclear, which degrades the user experience.
Disclosure of Invention
Embodiments of the present application aim to provide an information processing method, an information processing apparatus and an electronic device, which can solve the problem that, when the environment a user is in is unsuitable for recording voice, the recording and therefore the user experience are degraded.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an information processing method, where the method includes:
receiving a first input of a user under the condition that the text information to be sent is displayed in the input box;
displaying a voice conversion interface in response to the first input;
receiving a second input of a target control in the voice conversion interface;
and responding to the second input, and converting the text information to be sent into voice information corresponding to the target control.
In a second aspect, an embodiment of the present application provides an information processing apparatus, including:
the first input receiving module is used for receiving first input of a user under the condition that text information to be sent is displayed in the input box;
the voice interface display module is used for responding to the first input and displaying a voice conversion interface;
the second input receiving module is used for receiving second input of a target control in the voice conversion interface;
and the text information conversion module is used for responding to the second input and converting the text information to be sent into the voice information corresponding to the target control.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the information processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the information processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the information processing method according to the first aspect.
In the embodiments of the application, in a case where text information to be sent is displayed in the input box, a first input of a user is received; a voice conversion interface is displayed in response to the first input; a second input to a target control in the voice conversion interface is received; and, in response to the second input, the text information to be sent is converted into voice information corresponding to the target control. Because the text information can be converted and synthesized into voice information of the target type before being sent out, a user for whom recording voice is inconvenient can still send voice messages when chatting, thereby improving the user experience.
Drawings
Fig. 1 is a flowchart illustrating steps of an information processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a text input provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of selecting a voice font according to an embodiment of the present application;
fig. 4 is a schematic diagram of sending voice information according to an embodiment of the present application;
fig. 5 is a schematic diagram of selecting a background sound according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The information processing scheme provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an information processing method provided in an embodiment of the present application is shown, and as shown in fig. 1, the information processing method may specifically include the following steps:
step 101: in a case where text information to be transmitted is displayed in an input box, a first input of a user is received.
The method and apparatus of this embodiment can be applied to scenarios in which text information to be sent is converted into voice information before being sent.
The input box is a display box for displaying text information to be transmitted.
The text information to be sent refers to text information input by the user. As shown in fig. 2, when the user chats with the chat software, text information such as "hello" can be input in the input box, and the text information input by the user in the input box serves as the text information to be sent.
In some examples, the input box may be the conversation input box displayed on a conversation interface when the user chats with other users; as shown in fig. 2, when the user chats with other users through chat software, the input box is the conversation text input box in which "hello" is displayed.
Of course, the input box is not limited to this. In a specific implementation, the input box may also take other forms; for example, when a user needs to convert a certain piece of text into voice information, a trigger input of the user may cause a text display box to be displayed on the display interface of that text. The specific form of the input box may be determined according to business requirements, and this embodiment does not limit it.
The voice conversion interface is an interface for performing voice conversion on text information. As shown in fig. 3, the interface displayed below the input box containing the text "hello" is the voice conversion interface, and target synthesis types, such as "baby voice", "loli voice" and "anchor voice", can be displayed in the voice conversion interface.
The first input refers to input which is executed by a user on the text information to be sent and is used for triggering and displaying the voice conversion interface.
In some examples, the first input may be an input formed by a user pressing the text information to be sent, for example, when the user needs to perform voice conversion on the text information to be sent, the user may press the text information to be sent to form the first input.
In some examples, the first input may be an input formed by the user long-pressing the send button; for example, as shown in fig. 2, when the user needs to perform voice conversion on the text to be sent, the send button on the right side of the input box may be long-pressed to form the first input.
Of course, in a specific implementation, the first input may also be an input formed by other forms of operations performed by the user, and specifically, may be determined according to business requirements, and this embodiment does not limit this.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
In the case where the text information to be transmitted is displayed in the input box, a first input of the user may be received, and then, step 102 is performed.
Step 102: in response to the first input, a voice conversion interface is displayed.
The voice conversion interface is an option setting box for converting between text and voice, and options such as voice conversion types are displayed in the voice conversion interface and can be selected by a user.
After receiving the first input of the user, the voice conversion interface may be displayed in response to the first input. As shown in fig. 3, after the user inputs the text information "hello" to be sent in the text input box, the user may click the "send" button to generate the first input; in response, the voice conversion interface may be displayed, namely the "voice synthesis operation panel" shown in fig. 3, in which "select target synthesis type" options such as "baby voice", "loli voice" and "anchor voice" are displayed.
After the voice conversion interface is displayed in response to the first input, step 103 is performed.
Step 103: and receiving a second input of a target control in the voice conversion interface.
The target control is a control displayed in the voice conversion interface for indicating a type of text-to-voice conversion. For example, as shown in fig. 3, the voice synthesis operation panel is a voice conversion interface in which target synthesis types, such as "baby voice", "uncle voice" and "Chinese-to-English translation", are displayed; each option corresponds to a control, and when the user selects the "baby voice" option, the control corresponding to the "baby voice" option is regarded as the target control.
The second input refers to an input executed by the user on the target control. In this embodiment, the second input may be an input formed by the user clicking the target control, or an input formed by the user long-pressing the target control; the specific form may be determined according to business requirements, and this embodiment does not limit it.
After the voice conversion interface is displayed, a plurality of controls can be displayed on the voice conversion interface, and then a user can execute second input on a target control in the voice conversion interface according to the requirement of the user.
After receiving a second input to a target control within the voice conversion interface, step 104 is performed.
Step 104: and responding to the second input, and converting the text information to be sent into voice information corresponding to a target control.
After receiving a second input to the target control in the voice conversion interface, the text information to be sent can be converted into voice information corresponding to the target control in response to the second input.
In the embodiments of the application, the text information is converted and synthesized into voice information of the target type before being sent, so that a user for whom recording voice is inconvenient can still send voice messages when chatting, thereby improving the user experience.
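The four-step flow (steps 101 through 104) can be sketched as a minimal simulation. All names below (`InputBox`, `VoiceConversionInterface`, `convert_to_voice`) are illustrative stand-ins, not APIs from the patent; the voice information is represented as a plain dictionary rather than real audio.

```python
# Illustrative sketch of steps 101-104; names and data shapes are assumptions.

class InputBox:
    def __init__(self):
        self.text = ""  # text information to be sent

class VoiceConversionInterface:
    """Displayed in response to the first input; holds the target controls."""
    def __init__(self, controls):
        self.controls = controls  # e.g. ["baby voice", "uncle voice"]

def convert_to_voice(text, target_control):
    """Step 104: convert the pending text into voice info for the chosen control."""
    return {"content": text, "effect": target_control}

# Step 101: text to be sent is displayed in the input box; the first input arrives.
box = InputBox()
box.text = "hello"
# Step 102: display the voice conversion interface in response to the first input.
ui = VoiceConversionInterface(["baby voice", "uncle voice", "anchor voice"])
# Step 103: the second input selects a target control.
target = ui.controls[0]
# Step 104: convert the text into voice information corresponding to that control.
voice = convert_to_voice(box.text, target)
print(voice)  # {'content': 'hello', 'effect': 'baby voice'}
```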
In particular, the conversion process may be described in detail in connection with the following specific implementation.
In a specific implementation manner of the present application, the step 104 may include:
substep 1041: and converting the text information to be sent into the voice information of the first language under the condition that the target control is used for indicating that the voice information is output in the first language.
In this embodiment, when the target control indicates that the voice information is to be output in the first language, the text information to be sent may be converted into voice information in the first language. For example, as shown in fig. 3, "Chinese-to-English" and "English-to-Chinese" controls are displayed in the voice conversion interface; when the text information to be sent is Chinese text and the user clicks the "Chinese-to-English" control, the text information may be converted into English voice information. Specifically, the conversion may first convert the Chinese text information into English text information, and then convert the English text information into English voice information.
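Sub-step 1041 describes a translate-then-synthesize pipeline. The following is a hypothetical sketch: the toy translation table and the stub `synthesize()` stand in for real machine-translation and TTS engines, and the control name "Chinese-to-English" is an illustrative label.

```python
# Sketch of sub-step 1041: translate first, then synthesize.
# The toy dictionary and synthesize() stub are not part of the patent.

ZH_TO_EN = {"你好": "hello", "再见": "goodbye"}  # toy translation table

def translate_zh_to_en(text):
    return ZH_TO_EN.get(text, text)

def synthesize(text, language):
    # A real implementation would return audio; here we return a descriptor.
    return {"audio_of": text, "language": language}

def convert_with_language(text, target_control):
    if target_control == "Chinese-to-English":
        # First convert the Chinese text into English text...
        english = translate_zh_to_en(text)
        # ...then synthesize the English text into English voice information.
        return synthesize(english, "en")
    return synthesize(text, "zh")

voice = convert_with_language("你好", "Chinese-to-English")
print(voice)  # {'audio_of': 'hello', 'language': 'en'}
```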
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
Substep 1042:
When the target control indicates that the voice information is to be output with a target sound effect, the text information to be sent is converted into voice information with that target sound effect. For example, as shown in fig. 3, sound effect controls for target synthesis types, such as "baby voice", "loli voice", "male anchor", "female anchor", ..., "uncle voice", are displayed on the voice conversion interface; when the user selects the "baby voice" control, the text information to be sent can be converted into voice information with the baby-voice sound effect.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
In the embodiments of the application, multiple types of controls are arranged on the voice conversion interface, and voice conversion to the corresponding language type and/or sound effect is performed according to the control clicked by the user. This makes chatting more engaging for the user and further improves the user experience.
In this embodiment, after the text information is converted into the voice information, a background sound may be further added to the converted voice information by the user, and specifically, the following detailed description may be made in conjunction with the following specific implementation manner.
In another specific implementation manner of the present application, after the step 104, the method may further include:
step M1: receiving a third input to a first control in the voice conversion interface.
In this embodiment, the first control is a control displayed in the voice conversion interface that the user selects in order to add a background sound to the converted voice information. For example, as shown in fig. 5, background sound options such as "none", "insect chirping" and "ocean waves" are displayed in the voice conversion interface; each background sound option corresponds to a control, and when the user selects the "insect chirping" option, the control corresponding to the "insect chirping" option serves as the first control.
The third input refers to an input performed by the user to the first control.
In some examples, the third input may be an input formed by the user clicking the first control. For example, as shown in fig. 5, when the user needs to add an "ocean waves" background sound to the converted voice information, the user may click the control corresponding to "ocean waves" on the voice conversion interface, and the click operation performed by the user forms the third input.
In some examples, the third input may be an input formed by the user long-pressing the first control. For example, as shown in fig. 5, when the user needs to add an "insect chirping" background sound to the converted voice information, the user may long-press the control corresponding to "insect chirping" on the voice conversion interface, and the long-press operation performed by the user forms the third input.
It is to be understood that the above examples are merely examples listed for better understanding of the technical solution of the embodiment of the present application, and in a specific implementation, the third input may also be an input formed by other operations performed by a user, and in particular, may be determined according to a business requirement, and this embodiment is not limited thereto.
After converting the text information to be transmitted into the voice information, a third input of the first control of the voice conversion interface by the user may be received, and then, step M2 is performed.
Step M2: responding to the third input, and adding the background sound information to the voice information according to the background sound information indicated by the first control.
After receiving the third input of the user to the first control in the voice conversion interface, the corresponding background sound information may be added to the voice information according to the background sound information indicated by the first control. As shown in fig. 5, after the user performs the third input on the "insect chirping" control of the voice conversion interface, the "insect chirping" background sound indicated by that control may be added to the converted voice information.
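Step M2 amounts to mixing a background track into the synthesized voice. A rough sketch follows, assuming audio is represented as lists of float samples in [-1, 1]; a real implementation would mix PCM audio with an audio library, and the 0.3 background gain is an arbitrary illustrative choice.

```python
# Sketch of step M2: overlay a background-sound track onto the synthesized voice.

def add_background(voice, background, bg_gain=0.3):
    """Mix background into voice at reduced gain, clipping to [-1, 1]."""
    mixed = []
    for i, v in enumerate(voice):
        # Loop the background if it is shorter than the voice track.
        b = background[i % len(background)] if background else 0.0
        s = v + bg_gain * b
        mixed.append(max(-1.0, min(1.0, s)))  # clip to the valid sample range
    return mixed

voice_track = [0.5, -0.5, 0.9, 0.0]
insect_chirping = [0.2, -0.2]  # hypothetical background-sound samples
out = add_background(voice_track, insect_chirping)
# Each background sample is scaled by 0.3 and added to the voice sample.
```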
By adding a background sound to the converted voice information, the embodiments of the application can make the synthesized voice information more vivid and interesting, which further improves the user experience.
In this embodiment, the text information to be sent may also be converted into the speech information with specific speech parameters, and specifically, the following specific implementation manner may be described in detail.
In another specific implementation manner of the present application, the step 104 may include:
substep N1: and under the condition that the target control is used for indicating that the voice information is output by the target voice parameters, converting the text information to be sent into the voice information matched with the target voice parameters.
In this embodiment, the target voice parameter may include at least one of a pitch parameter, a timbre parameter, and a volume parameter. As shown in fig. 5, control bars corresponding to the voice parameters, such as a pitch control bar, a timbre control bar, and a volume control bar, are displayed on the voice conversion interface. Before the text information to be sent is converted into voice information, the user may adjust these control bars so that the conversion produces voice information with the corresponding voice parameters; in this case, the voice parameter control bar adjusted by the user may be regarded as a target control.
And under the condition that the target control indicates that the voice information is output by using the target voice parameters, converting the text information to be sent into the voice information matched with the target voice parameters.
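Sub-step N1 can be sketched as applying target voice parameters to the audio being produced. This is a hedged illustration, not the patent's implementation: only a volume gain and a naive speed change are shown, real pitch and timbre control requires proper signal processing, and the parameter names are assumptions.

```python
# Sketch of sub-step N1: apply target voice parameters to float samples in [-1, 1].

def apply_parameters(samples, volume=1.0, speed=1.0):
    """Scale amplitude by `volume`; change speed by naive sample decimation."""
    # Volume: multiply each sample, clipping to the valid range.
    out = [max(-1.0, min(1.0, s * volume)) for s in samples]
    # Speed: keep every `speed`-th sample (speed=2.0 halves the duration).
    step = max(1, int(speed))
    return out[::step]

samples = [0.1, 0.2, 0.3, 0.4]
louder = apply_parameters(samples, volume=2.0)  # doubled amplitude
faster = apply_parameters(samples, speed=2.0)   # half as many samples
```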
By converting the text information to be sent into voice information with specific voice parameters, the embodiments of the application can prevent the converted voice information from being too loud or too quiet: the converted voice remains clearly audible, and discomfort to the user's ears caused by excessive volume is avoided, further improving the user experience.
In this embodiment, the text information to be sent in the input box may also be deleted in time after the converted voice information is sent to other users, and specifically, the following specific implementation manner may be combined for detailed description.
In another specific implementation manner of the present application, after the step 104, the method may further include:
step K1: a fourth input of voice information is received.
In the present embodiment, the fourth input refers to an input performed by the user on the voice information for transmitting the voice information.
In some examples, the fourth input may be an input formed by clicking a send button by the user, for example, a send button is displayed on the display interface of the voice information, and when the user needs to send the voice information, the send button may be clicked, and the fourth input is formed by clicking operation performed by the user.
In some examples, the fourth input may be an input formed by a specific gesture operation performed by the user, for example, a specific gesture for sending voice information is previously saved in the electronic device, and when the user needs to send voice information, an operation corresponding to the specific gesture performed by the user forms the fourth input.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After receiving the fourth input of speech information, step K2 is performed.
Step K2: and responding to the fourth input, sending the voice information to a target user, and deleting the text information to be sent in the input box.
After receiving the fourth input for the voice information, the voice information may be sent to the target user; as shown in fig. 4, a 1s voice message is sent out after the fourth input is received.
And after the voice information is sent to the target user, deleting the text information to be sent in the input box.
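Steps K1 and K2 can be sketched as a send-then-clear handler. The `Chat` class below is a hypothetical stand-in for real messaging infrastructure; the message shape reuses the dictionary representation assumed above.

```python
# Sketch of steps K1-K2: on the fourth input, send the voice message to the
# target user, then delete the pending text from the input box.

class Chat:
    def __init__(self):
        self.input_box_text = "hello"  # text information to be sent
        self.sent = []                 # messages delivered to the target user

    def on_fourth_input(self, voice_info):
        self.sent.append(voice_info)   # step K2: send the voice information...
        self.input_box_text = ""       # ...then delete the pending text

chat = Chat()
chat.on_fourth_input({"content": "hello", "effect": "baby voice"})
print(chat.input_box_text == "")  # True
```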
In the embodiments of the application, the text information to be sent in the input box is deleted promptly after the converted voice information is sent to the target user. This prevents the already-sent text from occupying the input box and interfering with the user's subsequent input, which further improves the user experience.
According to the information processing method provided by the embodiments of the application, in a case where text information to be sent is displayed in the input box, a first input of a user is received; a voice conversion interface is displayed in response to the first input; a second input to a target control in the voice conversion interface is received; and, in response to the second input, the text information to be sent is converted into voice information corresponding to the target control. Because the text information can be converted and synthesized into voice information of the target type before being sent, a user for whom recording voice is inconvenient can still send voice messages when chatting, thereby improving the user experience.
In the information processing method provided in the embodiments of the present application, the execution subject may be an information processing apparatus, or a control module in the information processing apparatus for executing the information processing method. In the embodiments of the present application, an information processing apparatus executing the information processing method is taken as an example to describe the information processing apparatus provided by the embodiments.
Referring to fig. 6, a schematic structural diagram of an information processing apparatus provided in an embodiment of the present application is shown, and as shown in fig. 6, the information processing apparatus 600 may specifically include the following modules:
a first input receiving module 610, configured to receive a first input of a user when text information to be sent is displayed in an input box;
a voice interface display module 620, configured to display a voice conversion interface in response to the first input;
a second input receiving module 630, configured to receive a second input to a target control in the voice conversion interface;
and the text information conversion module 640 is configured to respond to the second input, and convert the text information to be sent into voice information corresponding to the target control.
Optionally, the text information conversion module 640 includes:
a first text conversion unit, configured to convert the text information to be sent into voice information in the first language in a case that the target control indicates that voice information is to be output in a first language;
and a second text conversion unit, configured to convert the text information to be sent into voice information with the target sound effect in a case that the target control indicates that voice information is to be output with a target sound effect.
Optionally, the apparatus further includes:
a third input receiving module, configured to receive a third input on the first control in the voice conversion interface;
and a background sound adding module, configured to, in response to the third input, add background sound information to the voice information according to the background sound indicated by the first control.
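A minimal sketch of such background-sound mixing, assuming PCM float samples in [-1.0, 1.0]; the function name and gain parameter are illustrative, not from the patent:

```python
def add_background_sound(voice, background, bg_gain=0.5):
    """Mix a background track under synthesized voice samples.

    A naive additive mix over float PCM samples in [-1.0, 1.0]; the
    background loops if it is shorter than the voice. Illustrative
    sketch only, not the patent's implementation.
    """
    mixed = []
    for i, v in enumerate(voice):
        # Loop the background track under the whole voice message.
        b = background[i % len(background)] if background else 0.0
        s = v + bg_gain * b
        mixed.append(max(-1.0, min(1.0, s)))  # clip to the valid range
    return mixed
```

A real implementation would mix decoded audio buffers after speech synthesis, but the clip-after-sum structure is the same.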
Optionally, the text information conversion module 640 includes:
a third text conversion unit, configured to convert the text information to be sent into voice information matched with the target voice parameter in a case that the target control indicates that voice information is to be output with the target voice parameter;
wherein the target speech parameter includes at least one of a pitch parameter, a timbre parameter, and a volume parameter.
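As a rough sketch of how pitch and volume parameters might shape synthesized samples (timbre normally comes from the selected voice model, so it is omitted here; the function and its parameters are hypothetical, not the patent's API):

```python
def apply_voice_params(samples, *, pitch=1.0, volume=1.0):
    """Apply target voice parameters to float PCM samples.

    Volume is a linear gain with clipping; pitch is raised or lowered by
    naive resampling, which also changes duration (a real engine would
    use a duration-preserving pitch shift). Illustrative sketch only.
    """
    # Volume parameter: linear gain, clipped to the valid range.
    out = [max(-1.0, min(1.0, s * volume)) for s in samples]
    if pitch != 1.0:
        # Pitch parameter: pick every (pitch)-th sample, so pitch > 1
        # raises the pitch and shortens the clip.
        out = [out[int(i * pitch)] for i in range(int(len(out) / pitch))]
    return out
```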
Optionally, the apparatus further includes:
a fourth input receiving module, configured to receive a fourth input on the voice information;
and a text information deleting module, configured to, in response to the fourth input, send the voice information to a target user and delete the text information to be sent from the input box.
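The fourth-input behavior (send the voice message, then delete the pending text) can be sketched as follows; `ChatSession` and its fields are hypothetical illustrations, not the patent's API:

```python
class ChatSession:
    """Toy chat session holding the input box and a list of sent messages."""

    def __init__(self):
        self.input_box_text = ""   # text information to be sent
        self.outbox = []           # (target_user, voice) messages "sent"


def on_fourth_input(session, voice, target_user):
    # In response to the fourth input: send the voice information to the
    # target user, then delete the text to be sent from the input box.
    session.outbox.append((target_user, voice))
    session.input_box_text = ""
```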
The information processing apparatus provided in the embodiment of the application receives a first input from the user when text information to be sent is displayed in the input box, displays a voice conversion interface in response to the first input, receives a second input on a target control in the voice conversion interface, and, in response to the second input, converts the text information to be sent into voice information corresponding to the target control. Because the text information can be converted and synthesized into voice information of the target content before being sent, and the voice information can then be sent out, a user for whom recording voice is inconvenient can still send voice messages in a chat, which improves the user experience.
The information processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), an automated teller machine, or a self-service machine; the embodiments of the present application are not specifically limited thereto.
The information processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The information processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701. When executed by the processor 701, the program or instruction implements each process of the information processing method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently; details are omitted here.
The processor 810 is configured to receive a first input of a user when the text information to be sent is displayed in the input box; displaying a voice conversion interface in response to the first input; receiving a second input of a target control in the voice conversion interface; and responding to the second input, and converting the text information to be sent into voice information corresponding to the target control.
In this embodiment of the application, the text information can be converted and synthesized into voice information of the target content before being sent, and the voice information can then be sent out, so that a user for whom recording voice is inconvenient can still send voice messages in a chat, which improves the user experience.
Optionally, the processor 810 is further configured to, in a case that the target control indicates that voice information is to be output in a first language, convert the text information to be sent into voice information in the first language; or, in a case that the target control indicates that voice information is to be output with a target sound effect, convert the text information to be sent into voice information with the target sound effect.
Optionally, the processor 810 is further configured to receive a third input on the first control in the voice conversion interface, and, in response to the third input, add background sound information to the voice information according to the background sound indicated by the first control.
Optionally, the processor 810 is further configured to, in a case that the target control indicates that voice information is output with a target voice parameter, convert the text information to be sent into voice information matched with the target voice parameter; wherein the target speech parameter includes at least one of a pitch parameter, a timbre parameter, and a volume parameter.
Optionally, the processor 810 is further configured to receive a fourth input on the voice information, and, in response to the fourth input, send the voice information to a target user and delete the text information to be sent from the input box.
By adding the background sound, the embodiments of the present application make the synthesized voice information more vivid and engaging, which improves the interest of chatting.
It should be understood that, in the embodiment of the present application, the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of still pictures or video captured by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071, also referred to as a touch screen, and other input devices 8072. The touch panel 8071 may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick; details are not described here. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned information processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the information processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferable implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, the application is not limited to the specific embodiments described above, which are illustrative rather than restrictive; those of ordinary skill in the art may make various changes without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. An information processing method characterized by comprising:
receiving a first input of a user under the condition that the text information to be sent is displayed in the input box;
displaying a voice conversion interface in response to the first input;
receiving a second input of a target control in the voice conversion interface;
and responding to the second input, and converting the text information to be sent into voice information corresponding to the target control.
2. The method according to claim 1, wherein the converting the text information to be sent into the voice information corresponding to the target control comprises:
converting the text information to be sent into voice information of a first language under the condition that the target control is used for indicating that the voice information is output in the first language; or
and under the condition that the target control is used for indicating that the voice information is output with the target sound effect, converting the text information to be sent into the voice information with the target sound effect.
3. The method according to claim 1, further comprising, after converting the text information to be sent into the voice information corresponding to the target control in response to the second input:
receiving a third input to a first control in the voice conversion interface;
responding to the third input, and adding the background sound information to the voice information according to the background sound information indicated by the first control.
4. The method according to claim 1, wherein the converting the text information to be sent into the voice information corresponding to the target control comprises:
converting the text information to be sent into voice information matched with the target voice parameter under the condition that the target control is used for indicating that the voice information is output by the target voice parameter;
wherein the target speech parameter includes at least one of a pitch parameter, a timbre parameter, and a volume parameter.
5. The method according to claim 1, further comprising, after converting the text information to be sent into the voice information corresponding to the target control in response to the second input:
receiving a fourth input of voice information;
and responding to the fourth input, sending the voice information to a target user, and deleting the text information to be sent in the input box.
6. An information processing apparatus characterized by comprising:
the first input receiving module is used for receiving first input of a user under the condition that text information to be sent is displayed in the input box;
the voice interface display module is used for responding to the first input and displaying a voice conversion interface;
the second input receiving module is used for receiving second input of a target control in the voice conversion interface;
and the text information conversion module is used for responding to the second input and converting the text information to be sent into the voice information corresponding to the target control.
7. The apparatus of claim 6, wherein the text information conversion module comprises:
the first text conversion unit is used for converting the text information to be sent into the voice information of the first language under the condition that the target control is used for indicating that the voice information is output in the first language;
and the second text conversion unit is used for converting the text information to be sent into the voice information of the target sound effect under the condition that the target control is used for indicating that the voice information is output by the target sound effect.
8. The apparatus of claim 6, further comprising:
the third input receiving module is used for receiving third input of the first control in the voice conversion interface;
and the background sound adding module is used for responding to the third input and adding the background sound information to the voice information according to the background sound information indicated by the first control.
9. The apparatus of claim 6, wherein the text information conversion module comprises:
a third text conversion unit, configured to convert the text information to be sent into voice information matched with the target voice parameter under a condition that the target control is used to instruct to output voice information with the target voice parameter;
wherein the target speech parameter includes at least one of a pitch parameter, a timbre parameter, and a volume parameter.
10. The apparatus of claim 6, further comprising:
the fourth input receiving module is used for receiving fourth input of the voice information;
and the text information deleting module is used for responding to the fourth input, sending the voice information to a target user and deleting the text information to be sent in the input box.
CN202010866787.5A 2020-08-25 2020-08-25 Information processing method and device and electronic equipment Pending CN111966257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010866787.5A CN111966257A (en) 2020-08-25 2020-08-25 Information processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111966257A true CN111966257A (en) 2020-11-20

Family

ID=73390409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010866787.5A Pending CN111966257A (en) 2020-08-25 2020-08-25 Information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111966257A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105280179A (en) * 2015-11-02 2016-01-27 小天才科技有限公司 Text-to-speech processing method and system
CN106550146A (en) * 2016-10-28 2017-03-29 努比亚技术有限公司 A kind of chat message dispensing device and method
CN107634898A (en) * 2017-08-18 2018-01-26 上海云从企业发展有限公司 True man's voice information communication is realized by the chat tool on electronic communication equipment
CN107731219A (en) * 2017-09-06 2018-02-23 百度在线网络技术(北京)有限公司 Phonetic synthesis processing method, device and equipment
US20180161683A1 (en) * 2016-12-09 2018-06-14 Microsoft Technology Licensing, Llc Session speech-to-text conversion
CN110491367A (en) * 2019-08-16 2019-11-22 东方明珠新媒体股份有限公司 The phonetics transfer method and equipment of smart television
CN110765502A (en) * 2019-10-30 2020-02-07 Oppo广东移动通信有限公司 Information processing method and related product
CN110995304A (en) * 2019-12-25 2020-04-10 北京金山安全软件有限公司 Instant translation communication method and system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470614A (en) * 2021-06-29 2021-10-01 维沃移动通信有限公司 Voice generation method and device and electronic equipment
CN113470614B (en) * 2021-06-29 2024-05-28 维沃移动通信有限公司 Voice generation method and device and electronic equipment
CN113703592A (en) * 2021-08-31 2021-11-26 维沃移动通信有限公司 Secure input method and device
CN114489420A (en) * 2022-01-14 2022-05-13 维沃移动通信有限公司 Voice information sending method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN111966257A (en) Information processing method and device and electronic equipment
CN113360238A (en) Message processing method and device, electronic equipment and storage medium
CN111984115A (en) Message sending method and device and electronic equipment
WO2019080873A1 (en) Method for generating annotations and related apparatus
CN113285866B (en) Information sending method and device and electronic equipment
CN109189303B (en) Text editing method and mobile terminal
CN111782115B (en) Application program control method and device and electronic equipment
WO2016119165A1 (en) Chat history display method and apparatus
CN114500432A (en) Session message transceiving method and device, electronic equipment and readable storage medium
KR101592178B1 (en) Portable terminal and method for determining user emotion status thereof
CN113037924A (en) Voice sending method and device and electronic equipment
CN109271262B (en) Display method and terminal
CN114422461A (en) Message reference method and device
CN112099714B (en) Screenshot method and device, electronic equipment and readable storage medium
WO2023131290A1 (en) Information interaction methods and apparatuses, electronic device and medium
CN110300047B (en) Animation playing method and device and storage medium
CN108600079B (en) Chat record display method and mobile terminal
CN112711366A (en) Image generation method and device and electronic equipment
EP2838225A1 (en) Message based conversation function execution method and electronic device supporting the same
CN113593614B (en) Image processing method and device
CN108710521B (en) Note generation method and terminal equipment
CN112637407A (en) Voice input method and device and electronic equipment
CN108491471B (en) Text information processing method and mobile terminal
CN112637409B (en) Content output method and device and electronic equipment
CN112269510B (en) Information processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201120