CN107786427B - Information interaction method, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN107786427B
CN107786427B
Authority
CN
China
Prior art keywords
voice message, sent, fuzzy, user, voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710900836.0A
Other languages
Chinese (zh)
Other versions
CN107786427A (en)
Inventor
王海华
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201710900836.0A
Publication of CN107786427A
Application granted
Publication of CN107786427B
Legal status: Active

Classifications

    • H04L51/066 Message adaptation to terminal or network requirements; format adaptation, e.g. format conversion or compression
    • H04L51/10 Messaging characterised by the inclusion of specific contents; multimedia information
    • H04L63/0428 Network security for confidential data exchange wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04M1/72433 User interfaces for mobile telephones with interactive means for internal management of messages, for voice messaging, e.g. dictaphones
    • H04M1/72439 User interfaces for mobile telephones with interactive means for internal management of messages, for image or video messaging
    • H04W12/02 Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]


Abstract

The invention discloses an information interaction method, a terminal, and a computer-readable storage medium. After a voice message to be sent is obtained, it is converted in at least one of two ways: it is converted into a graphic file according to its content or sound characteristics, or it is blurred into a fuzzy voice message. The plain, direct voice message is thus transformed into a graphic, or into fuzzy voice that is not easily recognized. After conversion, either the converted information alone, or the converted information together with the original voice message, is sent to the receiving terminal. On receiving it, the user of the receiving terminal can try to guess the sender's original voice message from the converted information, enabling playful communication between the two users, improving the user experience, and encouraging users to communicate with each other.

Description

Information interaction method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of terminal technologies, and in particular, to an information interaction method, a terminal, and a computer-readable storage medium.
Background
With the development of terminal technology, mobile terminals carry an ever-growing range of services, and application software of all kinds (instant messaging software, browser software, beauty cameras, and so on) provides services covering nearly every aspect of users' daily lives. Among such software, instant messaging software was widely accepted and used as soon as it was introduced, because its cost is low and its value is high compared with traditional telephone and short message services.
At present, two communicating parties can interact by voice or text through third-party application software. Whether the interaction is by voice or by text, however, it is direct: the receiver obtains the sender's intended meaning immediately from the voice message or text. In some scenarios this mode of interaction is too blunt and lacks interest.
Disclosure of Invention
The technical problem to be solved by the invention is that, in the prior art, users interact through voice or text messages whose meaning is expressed too directly, so the interaction lacks interest.
In order to solve the above technical problem, the present invention provides an information interaction method, including:
acquiring a voice message to be sent;
converting the voice message to be sent in at least one of the following two modes:
mode one: converting the voice message to be sent into a graphic file according to the content or the sound characteristics of the voice message to be sent;
mode two: performing fuzzy processing on the voice message to be sent, converting it into a fuzzy voice message;
and sending the information obtained by conversion to a receiving terminal, or sending the information obtained by conversion and the voice message to be sent to the receiving terminal.
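The overall flow above can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation; representing the voice message as a list of PCM sample values, and the specific conversions used, are my assumptions.

```python
def convert_voice_message(samples, mode):
    """Convert a voice message (here: a list of PCM sample values,
    an illustrative representation) in one of the two modes above."""
    if mode == "graphic":
        # Mode one: derive a simple graphic descriptor (normalised
        # bar heights) from the message's sound characteristics.
        peak = max(abs(s) for s in samples) or 1
        return [round(abs(s) / peak, 2) for s in samples]
    if mode == "fuzzy":
        # Mode two: blur the message, here by reversing sample order.
        return samples[::-1]
    raise ValueError("unknown mode")

def build_payload(samples, modes, include_original=False):
    """Send either the converted information alone, or the converted
    information together with the original voice message."""
    payload = {m: convert_voice_message(samples, m) for m in modes}
    if include_original:
        payload["original"] = samples
    return payload
```

A real terminal would of course render the graphic descriptor as an image file and package the payload for transmission; the sketch only fixes the decision structure of the method.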
Optionally, acquiring the voice message to be sent includes:
acquiring local sound source data selected by a user as a voice message to be sent;
or, receiving the voice input from the outside of the terminal as the voice message to be sent.
Optionally, when the step of performing fuzzy processing on the voice message to be sent and converting it into a fuzzy voice message is present, the step includes:
retaining a preset proportion of the true voice in the voice message to be sent and having the terminal synthesize the remaining proportion of its content, to obtain the fuzzy voice message;
and/or, adjusting the voice sequence of the voice message to be sent to obtain the fuzzy voice message;
and/or, adjusting the rhythm of the voice message to be sent to obtain the fuzzy voice message;
and/or, performing a male/female voice transformation on the voice message to be sent to obtain the fuzzy voice message.
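The blurring options above can be sketched in a few lines of Python. Everything here is an illustrative stand-in: random noise substitutes for the terminal's real speech synthesis, and naive resampling substitutes for a real tempo or pitch shifter.

```python
import random

def blur_keep_proportion(samples, keep_ratio, seed=0):
    """Keep a preset proportion of the true voice and replace the rest
    with synthesized content (random noise here stands in for the
    terminal's speech synthesis)."""
    rng = random.Random(seed)
    n_keep = int(len(samples) * keep_ratio)
    synthesized = [rng.randint(-32768, 32767) for _ in samples[n_keep:]]
    return samples[:n_keep] + synthesized

def blur_reorder(samples):
    """Adjust the voice sequence, e.g. play the message back to front."""
    return samples[::-1]

def blur_tempo(samples, factor):
    """Crudely adjust the rhythm by resampling: factor > 1 speeds the
    message up, factor < 1 slows it down. (A real male/female voice
    transform would similarly combine resampling with pitch shifting.)"""
    out, i = [], 0.0
    while int(i) < len(samples):
        out.append(samples[int(i)])
        i += factor
    return out
```

The "and/or" in the claim means these blurrings compose freely, e.g. `blur_tempo(blur_reorder(samples), 1.5)`.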
Optionally, when the step of converting the voice message to be sent into a graphic file according to its sound characteristics is present, the step includes:
analysing the sound wave of the voice message to be sent and, according to the analysis result, converting the voice message to be sent into a waveform graphic.
Optionally, the waveform graphic is of at least one of the following types: an animated image and a short video.
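A minimal sketch of this sound-wave analysis: the samples are scanned for their peak and rendered as an SVG polyline. The SVG output is a static stand-in of my choosing for the animated-image or short-video graphic the patent describes.

```python
def waveform_to_svg(samples, width=200, height=60):
    """Analyse the sound wave (peak normalisation) and emit a simple
    waveform graphic as an SVG polyline string."""
    peak = max(abs(s) for s in samples) or 1
    n = len(samples)
    pts = []
    for i, s in enumerate(samples):
        # Spread samples across the width; centre the wave vertically.
        x = i * width / max(n - 1, 1)
        y = height / 2 - (s / peak) * (height / 2)
        pts.append(f"{x:.1f},{y:.1f}")
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<polyline fill="none" stroke="black" '
            f'points="{" ".join(pts)}"/></svg>')
```

An animated variant would emit one such frame per window of samples.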
Optionally, when the information sent to the receiving terminal includes both the information obtained after conversion and the voice message to be sent, before sending them to the receiving terminal the method further includes:
encrypting the voice message to be sent with a preset encryption key, and setting an encrypted-playing rule for the information obtained after conversion and the voice message to be sent, the encrypted-playing rule controlling the receiving terminal to play the received information as follows:
when the information obtained after conversion includes a graphic file, the receiving terminal displays the graphic file, and plays the voice message to be sent only after receiving the correct decryption key for that message from the user;
when the information obtained after conversion includes a fuzzy voice message, the receiving terminal plays the fuzzy voice message when the user clicks it, and plays the voice message to be sent only after receiving the correct decryption key for that message from the user;
and when the information obtained after conversion includes both a graphic file and a fuzzy voice message, the receiving terminal displays the graphic file, plays the fuzzy voice message when the user clicks it, and plays the voice message to be sent only after receiving the correct decryption key for that message from the user.
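The patent leaves the encryption algorithm unspecified. Purely as an illustration, the sketch below derives an XOR keystream from SHA-256 and shows the receiver-side rule that the original voice is released only once the correct decryption key is entered.

```python
import hashlib

def _keystream(key, n):
    # Illustrative keystream from SHA-256 in counter mode; the patent
    # does not prescribe any particular cipher.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_voice(data, key):
    """XOR the voice bytes with the keystream (XOR is its own inverse,
    so the same call decrypts)."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def receiver_play(encrypted_voice, graphic, attempted_key, real_key):
    """Encrypted-playing rule: show the graphic immediately; play the
    original voice only for the correct decryption key."""
    shown = {"graphic": graphic}
    if attempted_key == real_key:
        shown["voice"] = encrypt_voice(encrypted_voice, real_key)
    return shown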
Optionally, when the information sent to the receiving terminal includes both the information obtained after conversion and the voice message to be sent, before sending them to the receiving terminal the method further includes:
setting a time-limited playing rule for the information obtained after conversion and the voice message to be sent, the time-limited playing rule controlling the receiving terminal to play the received information as follows:
when the information obtained after conversion includes a graphic file, the receiving terminal displays the graphic file and, after a first preset duration from its display, unhides the voice message to be sent and either plays it automatically or plays it when the user clicks it;
when the information obtained after conversion includes a fuzzy voice message, the receiving terminal first plays the fuzzy voice message when the user clicks it and, after a first preset duration from the end of that playback, unhides the voice message to be sent and either plays it automatically or plays it when the user clicks it;
when the information obtained after conversion includes both a graphic file and a fuzzy voice message, the receiving terminal displays the graphic file and plays the fuzzy voice message when the user clicks it; after a first preset duration from the display of the graphic file or from the end of the fuzzy voice playback, it unhides the voice message to be sent and either plays it automatically or plays it when the user clicks it.
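The time-limited rule boils down to a small receiver-side state machine. The class below is an illustrative sketch (names and timestamp representation are assumptions, not from the patent).

```python
class TimedReveal:
    """Receiver-side sketch of the time-limited playing rule: the
    graphic (or fuzzy voice) is available immediately; the hidden
    original message is released only a preset duration after the
    graphic has been displayed."""

    def __init__(self, graphic, original, delay):
        self.graphic = graphic
        self.original = original
        self.delay = delay          # the "first preset duration"
        self.displayed_at = None    # set when the graphic is shown

    def display_graphic(self, now):
        self.displayed_at = now
        return self.graphic

    def play_original(self, now):
        # The original stays hidden until delay seconds after display.
        if self.displayed_at is None or now - self.displayed_at < self.delay:
            return None
        return self.original
```

Whether the unhidden message then auto-plays or waits for a click is a UI choice layered on top of `play_original`.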
Optionally, in the scenario where only the converted information is sent to the receiving terminal, the method further includes:
sending the voice message to be sent to the receiving terminal after a second preset duration from the sending of the converted information;
or, after the converted information has been sent, sending the voice message to be sent to the receiving terminal upon receiving a request for it from the receiving terminal.
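The sender-side follow-up logic for this scenario fits in one function; this is an illustrative sketch with assumed names, not the patent's code.

```python
def sender_followup(converted, original, elapsed, second_preset, request_received):
    """After sending only the converted information, forward the
    original voice message either once the second preset duration has
    elapsed or upon the receiving terminal's explicit request."""
    messages = [converted]
    if elapsed >= second_preset or request_received:
        messages.append(original)
    return messages
```

A real implementation would run this off a timer and a message-request handler rather than polling with `elapsed`.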
Furthermore, the invention also provides a terminal, which comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is used for executing one or more programs stored in the memory to realize the steps of the information interaction method.
Further, the present invention also provides a computer readable storage medium, which stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the information interaction method as described above.
Advantageous effects:
the invention discloses an information interaction method, a terminal and a computer readable storage medium, wherein before a voice message is sent, the voice message to be sent is obtained; converting the voice message to be sent by adopting at least one of the following two modes; converting the voice message to be sent into a graphic file according to the content or the sound characteristic of the voice message to be sent; carrying out fuzzy processing on a voice message to be sent, and converting the voice message to be sent into a fuzzy voice message; therefore, the original straight and white voice information is converted into graph or fuzzy voice which is not easy to be recognized, and after the conversion, the information obtained by the conversion is sent to the receiving terminal, or the information obtained by the conversion and the voice message to be sent are sent to the receiving terminal. After the receiving terminal user receives the information, the original voice message of the sending party is guessed according to the meaning expressed by the converted information, the interaction method obviously improves the interest of the user communication, improves the communication experience of the user and is beneficial to arousing the enthusiasm of the user for mutual communication.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is an electrical schematic diagram of an alternative terminal for implementing various embodiments of the present invention.
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1.
fig. 3 is a flowchart of an information interaction method according to a first embodiment of the present invention;
fig. 4 is a schematic diagram of a terminal obtaining a graph according to analysis of text content in a voice message to be sent in the first embodiment of the present invention;
FIG. 5 is a diagram illustrating a pop-up speech conversion special effect selection bar after a user clicks a "speech conversion" button on an interactive interface according to a first embodiment of the present invention;
FIG. 6 is a diagram of the input interface popped up after the user selects the "reverse conversion" button in the interface shown in FIG. 5, in which the user inputs a voice message to be sent.
fig. 7 is a schematic diagram illustrating that the terminal of the user a sends the converted information and the voice message to be sent to the terminal of the user B in the first embodiment of the present invention;
fig. 8 is an input interface diagram of a user inputting a decryption key for a voice message to be transmitted after receiving information obtained after conversion and the voice message to be transmitted, which are transmitted by a terminal of the user a, according to the first embodiment of the present invention;
fig. 9 is a schematic diagram illustrating that the terminal of the user a sends the converted information and the voice message to be sent to the terminal of the user B in the first embodiment of the present invention;
fig. 10 is a block diagram of a terminal according to a second embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the description of the invention and have no specific meaning of their own; thus "module", "component", and "unit" may be used interchangeably.
The terminal of the present invention can be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, and a smart band, as well as fixed terminals such as a digital TV and a desktop computer.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media, and so on, providing wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, and the like, and process them into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects touch operations performed by the user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drives a corresponding connection device according to a predetermined program. The touch panel 1071 may include a touch detection device and a touch controller: the touch detection device detects the position of the user's touch and the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, including but not limited to one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, and a joystick.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of universal mobile telecommunications technology, which comprises a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 provides registers to manage functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, etc. All user data may be sent through the SGW2034; the PGW2035 may provide IP address allocation and other functions for the UE201; and the PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems. Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
First embodiment
In the prior art, when two communicating parties interact through application software, the interaction mode is too direct and lacks interest. To solve this problem, this embodiment provides an information interaction method that converts the voice message a user wants to send: the direct, plain voice content is converted into a graphic file whose meaning is not directly obtained, or into fuzzy voice that is not easily identified. After receiving the converted information, the user of the receiving terminal guesses what the sender's original voice message was according to the graphic file or the fuzzy voice, which makes the communication more interesting and improves the user experience.
As shown in fig. 3, the information interaction method of the present embodiment includes:
s301, acquiring a voice message to be sent.
The scheme of this embodiment may be applied in various voice interaction scenarios, such as in a communication software voice interaction scenario, or in a live broadcast software voice interaction scenario, which is not limited in this embodiment.
In this embodiment, the voice message to be sent may be a song, audio extracted from a video, user voice collected by the terminal, or the like. Optionally, in one example of this embodiment, the voice message to be sent may be data stored in the terminal in advance, and acquiring the voice message to be sent in S301 includes: acquiring local sound source data selected by the user as the voice message to be sent. The local sound source data may be a local song (or a section of a song), audio extracted from a local video, locally stored audio recorded by the user, and so on. In another example of this embodiment, the voice message to be sent may be information currently recorded by the terminal in real time; for example, acquiring the voice message to be sent includes: receiving voice input from outside the terminal as the voice message to be sent. The external voice input may be a voice message currently spoken by the terminal user, or sound played by a device outside the terminal (such as another mobile phone or a television); this embodiment is not limited thereto. In yet another example of this embodiment, the voice message to be sent may be voice information downloaded by the terminal from a server or received from another terminal, and acquiring the voice message to be sent includes: taking the voice information currently downloaded from the server or received from another terminal as the voice message to be sent.
S302, converting the voice message to be sent by adopting at least one of the following two modes, and entering S303 or S304.
The first method is as follows: converting the voice message to be sent into a graphic file according to the content or the sound characteristics of the voice message to be sent; as stated in the description, the graphic file may be a static picture, a dynamic picture, or a video.
The first implementation of the above method aims to convert the voice message to be sent into a visual file which is not easily recognized by the user directly. The graphic files include, but are not limited to, static images, dynamic images, and videos.
In this embodiment, the sound characteristics of the voice message to be sent include, but are not limited to: timbre, loudness, and frequency. When the voice message to be sent is converted into a graphic file, the voice message in this embodiment may be analyzed for characteristics such as timbre, loudness, and frequency through an existing sound wave analysis algorithm, and the graphic file is generated according to the result of the sound wave analysis. In one example, the graphic file may be a spectrogram; this embodiment does not limit the specific representation of the generated spectrogram, which may adopt an electrocardiogram-style broken-line graph, a sine-wave graph, a bar (or square) graph, or other representations. Alternatively, the spectrogram may be a richer animated GIF or a short video.
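As a rough illustration (not the patent's actual algorithm; the function name and frame size below are hypothetical), a bar-style spectrogram of the kind described above can be driven by per-frame loudness values computed from the raw samples:

```python
import math

# Hypothetical sketch: reduce a waveform to one loudness value per
# frame; each value could set the height of one bar in a bar-style
# spectrogram. Frame size and naming are illustrative only.
def frame_loudness(samples, frame_size=4):
    """Return the RMS loudness of each non-overlapping frame."""
    bars = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        bars.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return bars
```

A real implementation would work on decoded PCM samples and would likely add windowing and frequency analysis; this only shows the overall shape of the mapping from sound to graph.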
The above conversion method can be applied to a guess-the-song-from-a-picture scenario. For example, when user A communicates with user B using WeChat (or QQ and other software), user A's terminal converts a song, such as a Beethoven piece, into a dynamic spectrogram and sends it to user B's terminal. After user B receives the dynamic spectrogram through the terminal, user B guesses the name of the corresponding song from the dynamic spectrogram, which adds interest to the WeChat communication.
In this embodiment, when the voice message to be sent is converted into a graphic file, if the voice message is a song, it may also be converted according to the content of the song; for example, the name of the song is identified from its content, and the graphic file is obtained according to the name. For instance, when user A communicates with user B using WeChat (or QQ and other software) and wants to convert a song such as "fortune telling music" into a dynamic spectrogram and send it to user B through the terminal, user A's terminal recognizes the name of the song and converts the voice message to be sent into a graphic expressing the meaning of the song's title.
In real life, the content of many users' voice messages contains text information, which generally has a specific meaning. When the voice message to be sent is converted into a graphic file, the voice may be converted into a corresponding graphic according to the text content; generally, a graphic that can, to some extent, express the text content is selected as the converted graphic file. The graphic file may be selected from locally stored graphics, or downloaded from a server or another terminal via a network, which is not limited in this embodiment.
For example, user A wants to confess feelings to user B but considers it too direct to express them plainly in voice. In this case, user A may adopt the scheme of this embodiment and input the voice message "I like you" to the terminal. The terminal analyzes the text content in the voice message, determines that "like" in the text content can be expressed by a love graphic, and, according to the correspondence between "I like you" and the love graphic shown in fig. 4, downloads a heart-shaped pattern from the server as the converted graphic of the voice message to be sent.
The second method is as follows: performing fuzzy processing on the voice message to be sent and converting it into a fuzzy voice message.
In the second mode, the fuzzy processing of the voice message includes, but is not limited to, text replacement, voice-change processing, sound mixing, and speech rhythm adjustment, which can increase the difficulty for the receiving-terminal user of identifying the content of the fuzzy voice message.
When the voice message to be sent contains text information, the fuzzy processing may be applied to a certain section of text content in the message, including but not limited to replacing that text content with other sounds. For example, again in the scenario where user A wants to confess feelings to user B, user A may adopt the scheme of this embodiment and input the voice message "I like you" to the terminal. The terminal analyzes the text content in the voice message, determines that "like" can be replaced by the sound of a fast heartbeat, and downloads a section of heartbeat audio from the server to replace "like" in the voice message, obtaining the fuzzy voice message.
In another example of this embodiment, performing fuzzy processing on the voice message to be sent and converting it into a fuzzy voice message includes: keeping a preset proportion of the true sound of the voice message to be sent, with the terminal synthesizing the content of the remaining proportion, to obtain the fuzzy voice message. In this example, the preset proportion may be designated by the user or set automatically by the terminal; its specific value may be 35%, 50%, 60%, etc., and it may be set according to the desired difficulty of recognizing the fuzzy voice message.
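A minimal sketch of this mix-and-keep idea, assuming PCM samples as plain integers (the function name, the use of silence as the "synthesized" content, and the seeding are all illustrative, not from the patent):

```python
import random

# Hypothetical sketch: keep roughly `keep_ratio` of the samples as
# true sound and replace the rest with synthesized content (here,
# simply silence) to produce the fuzzy voice message.
def fuzz_by_mixing(samples, keep_ratio=0.5, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [s if rng.random() < keep_ratio else 0 for s in samples]
```

A real terminal would substitute synthesized speech rather than silence and operate per segment rather than per sample.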
In another example of this embodiment, performing fuzzy processing on the voice message to be sent and converting it into a fuzzy voice message includes: adjusting the voice sequence of the voice message to be sent to obtain the fuzzy voice message. In this example, the adjustment of the voice sequence includes, but is not limited to: converting the voice message to be sent from its original beginning-to-end playing sequence into an end-to-beginning playing sequence. Thus, when the fuzzy voice message is played on the receiving terminal, what the receiving-terminal user hears is equivalent to the voice message to be sent played in reverse.
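At the sample level, this reverse-playback conversion amounts to reversing the sample order; a toy sketch (assuming the message is already decoded into a list of samples):

```python
# Hypothetical sketch: a voice message whose samples are reversed
# sounds, on the receiving side, like the original played backwards.
def reverse_playback(samples):
    return samples[::-1]
```

Reversing twice restores the original order.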
In another example of this embodiment, performing fuzzy processing on the voice message to be sent and converting it into a fuzzy voice message includes: adjusting the rhythm of the voice message to be sent to obtain the fuzzy voice message. For example, the rhythm of the voice message to be sent is slowed down, and the slow-rhythm version is used as the fuzzy voice message.
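A crude way to slow the rhythm over raw samples (a real implementation would use a time-stretching algorithm that preserves pitch; this naive repetition is only a placeholder):

```python
# Hypothetical sketch: repeating each sample stretches the message
# to `factor` times its original duration (and, unlike a proper
# time-stretch, also lowers the pitch).
def slow_down(samples, factor=2):
    out = []
    for s in samples:
        out.extend([s] * factor)
    return out
```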
In another example of this embodiment, performing fuzzy processing on the voice message to be sent and converting it into a fuzzy voice message includes: performing male/female voice conversion processing on the voice message to be sent to obtain the fuzzy voice message. For example, if the terminal detects a female voice in the voice message to be sent, the female voice is converted into a male voice to obtain the fuzzy voice message.
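Male/female conversion is essentially a pitch shift. As a crude stand-in (a real voice changer would use a vocoder or a PSOLA-style algorithm, which the patent does not specify), naive decimation raises the perceived pitch while shortening the clip:

```python
# Hypothetical sketch: keeping every `step`-th sample raises the
# perceived pitch (and shortens duration) -- a crude stand-in for
# the male/female voice conversion described above.
def naive_pitch_up(samples, step=2):
    return samples[::step]
```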
In one example, when performing fuzzy processing on the voice message to be sent, multiple of the specific fuzzy-processing modes shown in the above examples may be applied to it, which is not limited in this embodiment; for example, rhythm adjustment and male/female voice conversion are performed simultaneously on the voice message input by user A. In addition, which specific mode (or modes) from the above examples is used to process the voice message to be sent may be selected by the user or set automatically by the terminal. When selected by the user, the step in which the terminal receives the user's selection may occur before S301, or after S301 and before S302, which is not limited in this embodiment.
In practice, the function of converting the voice message to be sent into a graphic file or a fuzzy voice message may be integrated into an application: communication software provides a virtual button on the voice interaction interface that opens a selection page with multiple voice conversion effects, and the user clicks the virtual button and selects a voice conversion effect on the pop-up page. For example, while user A is communicating with user B through the terminal, user A clicks a "voice conversion" button near the dialog box on the interactive interface shown in fig. 5 to trigger the voice conversion special-effect selection bar 51 shown in fig. 5. The selection bar may integrate: a graphic conversion special-effect button, which converts the voice message to be sent into a graphic file according to its content or sound characteristics; a voice mixing conversion special-effect button, which keeps a preset proportion of true sound in the voice message to be sent while the terminal synthesizes the content of the remaining proportion, obtaining a fuzzy voice message; a reverse-playback conversion special-effect button, which reverses the voice sequence of the voice message to be sent to obtain a fuzzy voice message; a rhythm adjustment special-effect button, which adjusts the rhythm of the voice message to be sent to obtain a fuzzy voice message; a male/female voice conversion special-effect button, which performs male/female voice conversion on the voice message to be sent to obtain a fuzzy voice message; and so on. The user may click a specific special effect. For example, after user A clicks the reverse-playback conversion special effect in fig. 5, user A's terminal enters the voice input interface shown in fig. 6, and user A presses the voice input box to input the voice message to be sent (or selects voice information such as a song from a local file in another manner). After user A's terminal receives the voice message to be sent, it reverses the voice sequence of the message to obtain a fuzzy voice message with a reverse-playback effect. In one example, the special effect set by the user for the voice message to be sent may be a one-time special effect; that is, after the user sends the converted graphic file or fuzzy voice message, the special effect is cancelled and normal voice or text interaction resumes.
And S303, transmitting the converted information to a receiving terminal.
In step S303, the converted information includes a graphic file, or a fuzzy voice message, or both a graphic file and a fuzzy voice message.
In this scenario, the user of the receiving terminal guesses the original meaning of the sender's voice information based on the graphic file or the fuzzy voice message. When the receiving-terminal user is uncertain of the answer, that user may send a request for the original voice message to the sender's terminal, and the terminal sends the voice message to be sent to the receiving terminal after receiving the request.
In another example, after S303, the terminal may send the voice message to be sent to the receiving terminal when actively triggered by the user.

In another example, the terminal may actively send the voice message to be sent to the receiving terminal after a second preset duration following S303.
And S304, sending the information obtained after conversion and the voice message to be sent to a receiving terminal.
In S304, the information sent to the receiving terminal includes the original voice message to be sent. To avoid the receiving terminal directly opening and playing the voice message to be sent, which would reduce the interest, the playing rules governing, at the receiving terminal, the information obtained after conversion and the voice message to be sent may be set before sending, so as to limit the playing sequence of the two types of information. In this embodiment, the playing rules for the converted information and the voice message to be sent include an encrypted playing rule and a time-limited playing rule. The encrypted playing rule stipulates that the receiving terminal plays the voice message to be sent only after receiving the correct decryption key for that message. The time-limited playing rule stipulates that the receiving terminal plays the voice message to be sent only after the converted information has been played for a first preset duration.
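On the receiving side, the two rules can be thought of as two unlock conditions for the original message. A hypothetical sketch (names and parameters invented for illustration; the patent describes behavior, not code):

```python
# Hypothetical sketch of the receiving terminal's gate on the
# original voice message under the two playing rules above.
def may_play_original(rule, *, key=None, expected_key=None,
                      elapsed=0.0, first_preset=5.0):
    if rule == "encrypted":
        # Unlock only once the correct decryption key was entered.
        return key is not None and key == expected_key
    if rule == "time_limited":
        # Unlock once the converted information has been played
        # for the first preset duration.
        return elapsed >= first_preset
    raise ValueError("unknown playing rule: %r" % rule)
```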
In an example, when the information sent to the receiving terminal includes the converted information and the voice message to be sent, before S304, the method further includes:
setting an encryption playing rule for the information obtained after conversion and the voice message to be sent, wherein the encryption playing rule is used for controlling a receiving terminal to play the received information according to the following rule:
when the information obtained after conversion comprises a graphic file, directly displaying the graphic file, and playing the voice message to be sent after receiving a correct decryption key of the voice message to be sent, which is input by a user;
when the converted information comprises the fuzzy voice message, the receiving terminal directly plays the fuzzy voice message after receiving the clicking operation of the user on the fuzzy voice message, and plays the voice message to be sent after receiving a correct decryption key of the voice message to be sent, which is input by the user;
and when the converted information comprises a graphic file and a fuzzy voice message, the receiving terminal displays the graphic file, plays the fuzzy voice message after receiving the clicking operation of the user on the fuzzy voice message, and plays the voice message to be sent after receiving a correct decryption key of the voice message to be sent, which is input by the user.
It can be understood that setting the encrypted playing rule for the converted information and the voice message to be sent also includes encrypting the voice message to be sent according to a preset encryption key (corresponding to the decryption key).
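For illustration only, a toy symmetric scheme shows the shape of encrypting the original voice bytes with a preset key (the patent does not specify a cipher; a real terminal would use an established one such as AES, not XOR):

```python
# Toy symmetric "cipher": XOR the voice bytes with a repeating key.
# Stand-in only -- XOR offers no real security.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Because XOR is its own inverse, applying `xor_crypt` again with the same key decrypts, mirroring the matching encryption and decryption keys in the rule.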
In the following, referring to figs. 7-8, the scheme corresponding to the encrypted playing rule is described by example. Assume that, during the interaction between user A and user B, user A's terminal converts the song A (the voice message to be sent) selected by user A into a fuzzy voice message preserving 50% true sound through steps S301 and S302. Before sending the fuzzy voice message, user A's terminal sets the encrypted playing rule for the fuzzy voice message and the original voice message to be sent, song A, and encrypts the original message according to a preset encryption key. Thereafter, as shown in fig. 7, user A's terminal sends the fuzzy voice message to user B's terminal together with song A. After user B clicks the information sent by user A on user B's terminal (71 in fig. 7), user B's terminal first plays the fuzzy voice message and then stops. If user B wants to listen to the original song A, user B again clicks the information 71 in fig. 7; then, as shown in fig. 8, an input box pops up on the user interface prompting the user to enter a key, and user B's terminal plays the original song A after receiving the correct decryption key.
In an example, when the information sent to the receiving terminal includes the converted information and the voice message to be sent, before S304, the method further includes:
and setting a time-limited playing rule for the information obtained after conversion and the voice message to be sent, wherein the time-limited playing rule is used for controlling the receiving terminal to play the received information according to the following rules:
when the information obtained after conversion comprises a graphic file, displaying the graphic file, releasing the hiding of the voice message to be sent after a first preset time length after the graphic file is displayed, and automatically playing the voice message to be sent or playing the voice message to be sent after the click operation of the user on the voice message to be sent is received;
when the information obtained after conversion comprises the fuzzy voice message, the receiving terminal firstly plays the fuzzy voice message after receiving the clicking operation of the user on the fuzzy voice message, releases the hiding of the voice message to be sent after a first preset time length after the fuzzy voice message is played, and automatically plays the voice message to be sent or plays the voice message to be sent after receiving the clicking operation of the user on the voice message to be sent;
when the converted information comprises a graphic file and a fuzzy voice message, the receiving terminal displays the graphic file, plays the fuzzy voice message after receiving the clicking operation of the user on the fuzzy voice message, releases the hiding of the voice message to be sent after the graphic file is displayed or after a first preset time length after the fuzzy voice message is played, and automatically plays the voice message to be sent or plays the voice message to be sent after receiving the clicking operation of the user on the voice message to be sent.
The first preset duration may be a preset fixed value such as 5 s or 10 s. However, considering that the longer the fuzzy voice message lasts, the more time the user needs to guess, the first preset duration may also be a value set by the terminal according to the actual duration of the fuzzy voice.
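That length-dependent choice can be sketched as a floor plus scaling (the 5-second floor and the scaling factor are invented for illustration; the patent only says the value may depend on the fuzzy clip's length):

```python
# Hypothetical sketch: the first preset duration grows with the
# fuzzy clip so the receiver has time to guess, but never drops
# below a fixed floor.
def first_preset_duration(fuzzy_seconds, floor=5.0, per_second=0.5):
    return max(floor, fuzzy_seconds * per_second)
```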
In the following, with reference to fig. 9, the scheme corresponding to the time-limited playing rule is described by example. Assume that, during the interaction between user A and user B, user A's terminal converts the song A (the voice message to be sent) selected by user A into a fuzzy voice message retaining 50% true sound through steps S301 and S302. Before sending the fuzzy voice message, user A's terminal sets the time-limited playing rule for the fuzzy voice message and the original voice message to be sent, that is, song A, and hides (or encrypts) the original message. Thereafter, as shown in fig. 9, user A's terminal sends the fuzzy voice message to user B's terminal together with song A. After user B clicks the information sent by user A on user B's terminal (91 in fig. 9), user B's terminal first plays the fuzzy voice message and then stops, and starts timing. When the accumulated time exceeds the first preset duration set by the time-limited playing rule, user B's terminal releases the hiding of song A (or decrypts song A) and plays song A.
With the information interaction method shown in this embodiment, before the terminal sends a voice message, it can convert the message into a graphic file or a fuzzy voice message, and the user of the receiving terminal guesses, from the converted graphic file or fuzzy voice message, the words the sender wants to say, the song being sent, and so on, which increases the interest of the interaction.
Second embodiment:
as shown in fig. 10, the present embodiment provides a terminal, which includes a processor 101, a memory 102 and a communication bus 103;
the communication bus 103 is used for realizing connection communication between the processor 101 and the memory 102;
the processor 101 is configured to execute one or more programs stored in the memory 102 to implement the steps of the information interaction method according to the first embodiment.
The present embodiment also provides a computer-readable storage medium, which stores one or more programs, where the one or more programs are executable by one or more processors to implement the steps of the information interaction method according to the first embodiment.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An information interaction method, comprising:
acquiring a voice message to be sent;
converting the voice message to be sent by adopting at least one of the following two modes;
the first method is as follows: converting the voice message to be sent into a graphic file according to the content of the voice message to be sent, or converting the voice message to be sent into the graphic file according to the sound characteristic of the voice message to be sent; the graphic file can be a static graph, a dynamic graph or a video;
the second method comprises the following steps: carrying out fuzzy processing on the voice message to be sent, and converting the voice message to be sent into a fuzzy voice message;
encrypting the voice message to be sent according to a preset encryption key, and setting an encryption playing rule for the information obtained after conversion and the voice message to be sent, wherein the encryption playing rule is used for controlling the receiving terminal to play the received information according to the following rule:
when the information obtained after conversion comprises the graphic file, the receiving terminal displays the graphic file and plays the voice message to be sent after receiving a correct decryption key for the voice message to be sent, which is input by a user;
when the information obtained after conversion comprises the fuzzy voice message, the receiving terminal plays the fuzzy voice message after receiving the clicking operation of the user on the fuzzy voice message, and plays the voice message to be sent after receiving a correct decryption key input by the user for the voice message to be sent;
when the information obtained after conversion comprises the graphic file and the fuzzy voice message, the receiving terminal displays the graphic file, plays the fuzzy voice message after receiving the clicking operation of the user on the fuzzy voice message, and plays the voice message to be sent after receiving a correct decryption key for the voice message to be sent, which is input by the user;
sending the information obtained by conversion to a receiving terminal, or sending the information obtained by conversion and the voice message to be sent to the receiving terminal;
the information interaction method guesses what the original voice message of the sender is through the meaning of the converted information expression.
2. The information interaction method of claim 1, wherein the obtaining the voice message to be sent comprises:
acquiring local sound source data selected by a user as a voice message to be sent;
or, receiving the voice input from the outside of the terminal as the voice message to be sent.
3. The information interaction method of claim 1, wherein when there is the step of converting the voice message to be sent into the fuzzy voice message by performing the fuzzy processing on the voice message to be sent, the step comprises:
keeping true sound of a preset proportion for the voice message to be sent, and synthesizing the content of the remaining proportion of the voice message to be sent by a terminal to obtain a fuzzy voice message;
and/or adjusting the voice sequence of the voice message to be sent to obtain a fuzzy voice message;
and/or adjusting the rhythm speed of the voice message to be sent to obtain a fuzzy voice message;
and/or, carrying out male and female voice conversion processing on the voice message to be sent to obtain a fuzzy voice message.
4. The information interaction method as claimed in claim 1, wherein, when there is the step of converting the voice message to be transmitted into a graphic file according to characteristics of sound in the voice message to be transmitted, the step includes:
and analyzing the sound wave of the voice message to be sent, and converting the voice message to be sent into a spectrogram according to an analysis result.
5. The information interaction method of claim 4, wherein the type of the spectrogram includes at least one of a dynamic graph and a short video.
6. The information interaction method according to any one of claims 1 to 5, wherein, when the information sent to the receiving terminal includes both the converted information and the voice message to be sent, before sending the converted information and the voice message to be sent to the receiving terminal, the method further comprises:
setting a time-limited playing rule for the converted information and the voice message to be sent, the time-limited playing rule controlling the receiving terminal to play the received information as follows:
when the converted information includes the graphic file, displaying the graphic file, unhiding the voice message to be sent after a first preset duration following the display of the graphic file, and playing the voice message to be sent automatically or upon receiving a click operation by the user on it;
when the converted information includes the fuzzy voice message, the receiving terminal first plays the fuzzy voice message upon receiving a click operation by the user on it, unhides the voice message to be sent after the fuzzy voice message has played for a first preset duration, and plays the voice message to be sent automatically or upon receiving a click operation by the user on it;
when the converted information includes both the graphic file and the fuzzy voice message, the receiving terminal displays the graphic file, plays the fuzzy voice message upon receiving a click operation by the user on it, unhides the voice message to be sent after the graphic file has been displayed or after a first preset duration following the playing of the fuzzy voice message, and plays the voice message to be sent automatically or upon receiving a click operation by the user on it.
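The timed-release behavior in claim 6 can be sketched as a small receiver-side state object: the real voice message stays hidden until the preview (graphic file or fuzzy voice) has been shown for the first preset duration. This is illustrative Python; the class and field names are assumptions, not part of the patent:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimedMessage:
    """Receiver-side state for the time-limited playing rule."""
    delay: float                       # first preset duration, seconds
    shown_at: Optional[float] = None   # when the preview was shown

    def show_preview(self, now=None):
        """Record the moment the graphic/fuzzy preview was presented."""
        self.shown_at = time.monotonic() if now is None else now

    def real_message_unlocked(self, now=None):
        """True once the preview has been visible for `delay` seconds."""
        if self.shown_at is None:
            return False
        now = time.monotonic() if now is None else now
        return now - self.shown_at >= self.delay
```

Once `real_message_unlocked` turns true, the UI would unhide the voice message and either autoplay it or wait for the user's click, as the claim describes.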
7. The information interaction method according to any one of claims 1 to 5, wherein, when only the converted information is sent to the receiving terminal, the method further comprises:
sending the voice message to be sent to the receiving terminal after a second preset duration following the sending of the converted information;
or, after the converted information is sent to the receiving terminal, sending the voice message to be sent to the receiving terminal upon receiving a request for it from the receiving terminal.
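The sender-side behavior in claim 7 — preview first, real message after a second preset duration or on explicit request — can be sketched as follows (illustrative only; the class name and `send_fn` callback are assumptions):

```python
class DelayedSender:
    """Sends the converted preview immediately; the real voice message
    follows after `delay` seconds, or earlier if the receiver asks."""

    def __init__(self, delay, send_fn):
        self.delay = delay          # second preset duration, seconds
        self.send_fn = send_fn      # e.g. a network-send callback
        self.sent_at = None
        self.real_sent = False

    def send_preview(self, preview, now):
        self.send_fn(preview)
        self.sent_at = now

    def tick(self, real_message, now):
        """Call periodically; sends the real message once it is due."""
        if (not self.real_sent and self.sent_at is not None
                and now - self.sent_at >= self.delay):
            self.send_fn(real_message)
            self.real_sent = True

    def on_request(self, real_message):
        """Receiver explicitly requested the real voice message."""
        if not self.real_sent:
            self.send_fn(real_message)
            self.real_sent = True
```

Either path marks the real message as sent, so the two triggers in the claim cannot double-send it.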
8. A terminal, characterized in that the terminal comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the information interaction method according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the information interaction method according to any one of claims 1 to 7.
CN201710900836.0A 2017-09-28 2017-09-28 Information interaction method, terminal and computer readable storage medium Active CN107786427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710900836.0A CN107786427B (en) 2017-09-28 2017-09-28 Information interaction method, terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN107786427A CN107786427A (en) 2018-03-09
CN107786427B true CN107786427B (en) 2021-07-16

Family

ID=61434328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710900836.0A Active CN107786427B (en) 2017-09-28 2017-09-28 Information interaction method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107786427B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109343756B (en) * 2018-09-21 2021-05-07 武汉华中时讯科技有限责任公司 Method for controlling recording and changing sound through gesture touch and sliding operation in android system, memory and terminal
CN109308893A (en) * 2018-10-25 2019-02-05 珠海格力电器股份有限公司 Method for sending information and device, storage medium, electronic device
CN110215692B (en) * 2019-07-10 2023-02-28 网易(杭州)网络有限公司 Method and device for processing voice information in game, storage medium and electronic device
CN115497489A (en) * 2022-09-02 2022-12-20 Shenzhen Transsion Communication Co., Ltd. Voice interaction method, intelligent terminal and storage medium
CN115860013B (en) * 2023-03-03 2023-06-02 Shenzhen Renma Interactive Technology Co., Ltd. Dialogue message processing method, device, system, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105336329A (en) * 2015-09-25 2016-02-17 Lenovo (Beijing) Co., Ltd. Speech processing method and system
CN106156310A (en) * 2016-06-30 2016-11-23 Nubia Technology Co., Ltd. Picture processing apparatus and method
CN106531149A (en) * 2016-12-07 2017-03-22 Tencent Technology (Shenzhen) Co., Ltd. Information processing method and device
CN107194268A (en) * 2017-06-30 2017-09-22 Meizu Technology Co., Ltd. (Zhuhai) Information processing method and apparatus, computer device, and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219237B (en) * 2014-08-29 2018-09-14 Guangzhou Huaduo Network Technology Co., Ltd. Method and system for processing multimedia data based on a team voice communication platform
CN104866275B (en) * 2015-03-25 2020-02-11 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for acquiring image information
CN105070283B (en) * 2015-08-27 2019-07-09 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for adding background music to singing voice



Similar Documents

Publication Publication Date Title
CN107835464B (en) Video call window picture processing method, terminal and computer readable storage medium
CN107786427B (en) Information interaction method, terminal and computer readable storage medium
CN108289244B (en) Video subtitle processing method, mobile terminal and computer readable storage medium
CN108540655B (en) Caller identification processing method and mobile terminal
CN109408168B (en) Remote interaction method and terminal equipment
CN106973330B (en) Screen live broadcasting method, device and system
CN109701266B (en) Game vibration method, device, mobile terminal and computer readable storage medium
CN110784771B (en) Video sharing method and electronic equipment
CN107818787B (en) Voice information processing method, terminal and computer readable storage medium
CN109412932B (en) Screen capturing method and terminal
CN108600079B (en) Chat record display method and mobile terminal
WO2019120190A1 (en) Dialing method and mobile terminal
CN108200287B (en) Information processing method, terminal and computer readable storage medium
CN112437472B (en) Network switching method, equipment and computer readable storage medium
CN107809527B (en) Method for presenting shortcut operation and electronic equipment
CN109495643B (en) Object multi-chat frame setting method and terminal
CN109561221B (en) Call control method, device and computer readable storage medium
CN112887776B (en) Method, equipment and computer readable storage medium for reducing audio delay
CN111416955B (en) Video call method and electronic equipment
CN110278402B (en) Dual-channel audio processing method and device and computer readable storage medium
CN111399739B (en) Touch event conversion processing method, terminal and computer readable storage medium
CN109640000B (en) Rich media communication method, terminal equipment and computer readable storage medium
CN109495683B (en) Interval shooting method and device and computer readable storage medium
CN109862182B (en) Communication service processing method and mobile terminal
CN112700783A (en) Communication sound changing method, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant