CN111130998B - Information processing method and electronic equipment

Info

Publication number
CN111130998B
Authority
CN
China
Prior art keywords: information, video, content, outputting, acquiring
Prior art date: 2019-12-19
Legal status: Active (assumption, not a legal conclusion)
Application number
CN201911319627.2A
Other languages
Chinese (zh)
Other versions
CN111130998A (en)
Inventor
Li Chen (李陈)
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Priority date: 2019-12-19
Filing date: 2019-12-19
Publication date: 2022-05-03
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911319627.2A
Publication of CN111130998A
Application granted
Publication of CN111130998B

Classifications

    • H04L51/10 Multimedia information (user-to-user messaging in packet-switching networks)
    • H04L51/02 User-to-user messaging using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L25/63 Speech or voice analysis specially adapted for estimating an emotional state
    • H04N21/41407 Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/4668 Learning process for intelligent management, for recommending content, e.g. movies
    • H04N5/265 Studio circuits: mixing
    • H04N5/76 Television signal recording
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides an information processing method and electronic equipment, and relates to the field of communications technology. The method comprises the following steps: acquiring first information, wherein the first information comprises first information content; and outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content; wherein the first information comprises at least one of: text information, voice information, image information, and video information. The solution of the invention addresses the problem that existing schemes cannot meet user requirements.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information processing method and an electronic device.
Background
With the development of communication technology, electronic devices such as smart phones have become an important part of people's lives. An intelligent assistant is often installed in the electronic device to help the user operate it more conveniently.
However, when the current intelligent assistant of an electronic device interacts with the user, its replies are often irrelevant to the question asked, so user requirements cannot be met.
Disclosure of Invention
The embodiment of the invention provides an information processing method and electronic equipment, which can solve the problem that the existing scheme cannot meet the requirements of users.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information processing method, including:
acquiring first information, wherein the first information comprises first information content;
outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content;
wherein the first information comprises at least one of: text information, voice information, image information, video information.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the first acquisition module is used for acquiring first information, and the first information comprises first information content;
the output module is used for outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content;
wherein the first information comprises at least one of: text information, voice information, image information, video information.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the information processing method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the information processing method described above.
In this way, in the embodiment of the present invention, after the first information including the first information content is acquired, a first video containing video content matched with the first information content is output based on the first information, so that a targeted video reply is automatically output according to the acquired first information and the conversation requirements of the user are met.
Drawings
FIG. 1 is a schematic flowchart illustrating steps of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a dialog interface according to an embodiment of the present invention;
fig. 3 is an application diagram of an information processing method according to an embodiment of the present invention;
fig. 4 is a second schematic application diagram of the information processing method according to the embodiment of the present invention;
FIG. 5 is a second schematic diagram of a dialog interface according to an embodiment of the present invention;
FIG. 6 is a third schematic diagram illustrating a dialog interface according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an information processing method according to an embodiment of the present invention includes:
step 101, first information is obtained, wherein the first information comprises first information content.
Here, the first information is user information, for example information input after the user starts a man-machine conversation with the electronic device. The first information comprises first information content, and the first information content is capable of representing the meaning of the first information. In this step, the first information is acquired so that a corresponding video can subsequently be output according to the user's requirement.
Step 102, outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content; wherein the first information comprises at least one of: text information, voice information, image information, video information.
In this step, based on the first information acquired in step 101, a first video including video content matched with the first information content is output, providing the desired reply to the user and thereby completing the interaction with the user.
Thus, through step 101 and step 102, after acquiring the first information including the first information content, the electronic device applying the information processing method of the embodiment of the present invention outputs, based on the first information, a first video that includes video content matched with the first information content, so that a targeted video reply is automatically output according to the acquired first information and the conversation requirements of the user are met.
Optionally, in this embodiment, step 102 includes:
determining first information content based on the first information;
determining a second information content based on the first information content;
and acquiring a first video comprising the second information content, and outputting the first video.
Thus, for the first information acquired in step 101, first information content included in the first information is determined; then, a second information content matching the first information content is determined, so that a first video including the second information content is obtained, and the output of the first video is completed.
The first information content corresponds to the different forms of the first information and can be determined in different ways. If the first information is text information or voice information, the first information content can be a keyword in the first information; if the first information is image information or video information, the first information content can be text recognized in the image/video or information obtained from image/video analysis. For example, if the first information is an image containing the characters "Happy New Year", the first information content may be determined to be "Happy New Year".
After the first information content is determined, the second information content may be determined based on an association relationship. The association relationship may be a question-answer relationship, a dialogue relationship established from big-data statistics, and the like. The first video is then acquired and output according to the determined second information content.
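By way of a non-limiting illustration (not part of the claimed method), the following Python sketch shows one way the first information content could be extracted as a keyword and mapped to a second information content through a simple question-answer association table; the table contents and function names are assumptions introduced only for this example.

```python
# Illustrative sketch only: the association table and function names are assumed.

ASSOCIATION_TABLE = {
    "happy new year": "new year greeting",  # question-answer style association
    "bye": "farewell",
    "good": "greeting",
}

def extract_first_content(first_information: str) -> str:
    """Rough keyword extraction for text (or voice after speech-to-text) input."""
    text = first_information.lower()
    for keyword in ASSOCIATION_TABLE:
        if keyword in text:
            return keyword
    return text  # fall back to the raw text

def determine_second_content(first_content: str) -> str:
    """Map the first information content to a matching second information content."""
    return ASSOCIATION_TABLE.get(first_content, "generic reply")
```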
It should also be appreciated that in this embodiment the first video may be obtained from videos stored locally on the electronic device or from videos stored in the network. In either case, in order to facilitate searching among a large number of videos, multi-level tags are set for each video according to its attribute information before the video is stored. For example, each stored video carries at least three levels of tags: the first-level tag has a broad scope and can serve as an answer in multiple scenes, the second-level tag subdivides the related scenes, and the third-level tag is the style type of the video. Specifically, the first-level tag of video A is the information content "good", the second-level tag is the detailed scene "greeting", and the third-level tag is the style type "humorous". Here, the style types include humorous, serious, travel, food, and the like.
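A minimal sketch of how such a three-level tag scheme could be stored and queried is given below; the data structure, field names and sample entries are assumptions for illustration and do not limit the embodiment.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoredVideo:
    path: str
    level1: str  # broad information content, usable as an answer in many scenes
    level2: str  # detailed scene
    level3: str  # style type (humorous, serious, travel, food, ...)

VIDEO_LIBRARY: List[StoredVideo] = [
    StoredVideo("videos/a.mp4", level1="good", level2="greeting", level3="humorous"),
    StoredVideo("videos/b.mp4", level1="bye", level2="farewell", level3="serious"),
]

def find_video(content: str, scene: Optional[str] = None,
               style: Optional[str] = None) -> Optional[StoredVideo]:
    """Search the library from the broadest tag down to the style type."""
    for video in VIDEO_LIBRARY:
        if video.level1 != content:
            continue
        if scene is not None and video.level2 != scene:
            continue
        if style is not None and video.level3 != style:
            continue
        return video
    return None
```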
Therefore, optionally, the obtaining a first video including the second information content and outputting the first video includes:
determining an information type of the first information;
and acquiring a first video which comprises the second information content and is matched with the information type of the first information, and outputting the first video.
Here, the information type corresponds to a tag set when the video is stored; for example, if the style type of the video is set as a tag, the information type of the first information is determined as a style type as well. Therefore, when the first video is obtained, a first video better suited to the current conversation can be matched by combining the second information content with the information type, further meeting the user's requirements.
Optionally, the determining the information type of the first information includes:
performing information characteristic identification on the first information, and identifying the information type of the first information;
or, pre-stored preference information of a sender of the first information is acquired, and the information type of the first information is determined based on the preference information.
Thus, on one hand, the information type of the first information can be determined more accurately by identifying information characteristics of the first information; on the other hand, the information type preferred by the sender can be determined from the preference information of the sender of the first information. The first video corresponding to the information type and the second information content is then acquired.
The information characteristic identification may include emotion recognition. Emotion recognition can be divided into physiological recognition and non-physiological recognition. For example, when emotion recognition is performed on voice information, the user's emotion can be inferred from physiological cues by analyzing the pitch, timbre and the like of the voice, or it can be inferred from non-physiological cues by extracting certain emotional words from the voice information. Alternatively, according to the user preference information, the style preferred by the user may be used as the style of the video to be retrieved.
In this embodiment, the preference information may be obtained by big-data analysis, for example the most frequent style among the videos stored by the user within a preset time period.
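The two alternatives described above, a non-physiological keyword-based emotion check and a fallback to the sender's most frequent stored style, are sketched below under stated assumptions; the emotional word lists and helper names are hypothetical.

```python
from collections import Counter
from typing import List, Optional

# Hypothetical emotional word lists for non-physiological emotion recognition.
EMOTION_WORDS = {
    "humorous": {"haha", "lol", "funny"},
    "serious": {"urgent", "important", "deadline"},
}

def type_from_text(text: str) -> Optional[str]:
    """Infer the information type (style) from emotional words found in the text."""
    words = set(text.lower().split())
    for style, markers in EMOTION_WORDS.items():
        if words & markers:
            return style
    return None

def type_from_preferences(stored_styles: List[str]) -> str:
    """Fallback: the most frequent style among the sender's recently stored videos."""
    if not stored_styles:
        return "serious"
    return Counter(stored_styles).most_common(1)[0][0]
```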
In this embodiment, the outputting the first video includes at least one of:
sending the first video to a target contact person, wherein the target contact person is the contact person sending the first information;
displaying a first identifier, wherein the first identifier is used for indicating the first video;
and playing the first video.
In this way, the acquired first video can be sent directly to the target contact so that the target contact can view it; alternatively, only a first identifier indicating the first video is displayed, and the video is then viewed through that identifier; or the first video is simply played.
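A short sketch of how the three output alternatives could be dispatched is shown below; the send, display and play helpers are placeholder stand-ins for whatever messaging and playback APIs the device actually provides.

```python
def send_to_contact(contact: str, video_path: str) -> None:
    print(f"sending {video_path} to {contact}")    # stand-in for a messaging API

def show_identifier(video_path: str) -> None:
    print(f"showing identifier for {video_path}")  # stand-in for a UI call

def play_video(video_path: str) -> None:
    print(f"playing {video_path}")                 # stand-in for a player API

def output_first_video(video_path: str, mode: str, contact: str = "") -> None:
    """Dispatch the first video according to the chosen output mode."""
    if mode == "send":
        send_to_contact(contact, video_path)
    elif mode == "identifier":
        show_identifier(video_path)
    elif mode == "play":
        play_video(video_path)
    else:
        raise ValueError(f"unknown output mode: {mode}")
```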
The information processing method provided by the embodiment of the invention can be applied to a conversation scene between the user and an intelligent assistant, realizing a video reply by the intelligent assistant to the first information input by the user so as to meet the conversation requirements of the user. The intelligent assistant may be a voice assistant, a chat assistant, or the like. In order to better simulate the user's chat process, a dialog interface is provided in this embodiment to show the dialog between the user and the intelligent assistant. For example, after the user starts a man-machine conversation, the electronic device displays the dialog interface shown in fig. 2, in which the first information input by the user and the first video output by the electronic device are presented. The user may input voice or text first information through the dialog input box. The electronic device then performs the steps shown in fig. 3: in step 301, the first information input by the user through the dialog input box is acquired; in step 302, a second information content is determined based on the first information content of the first information; in step 303, the information type of the first information is determined; in step 304, a first video that includes the second information content and matches the information type is acquired; in step 305, the first video is played on the dialog interface and presented to the user as a reply from the intelligent assistant.
In addition, in this embodiment, before step 101, the method further includes:
displaying a shooting interface in a first area of a display screen of the electronic equipment;
step 101 comprises:
acquiring first information acquired by a camera, wherein the first information is an image shot by the camera or a video recorded by the camera;
the outputting the first video comprises:
and playing the first video in a second area of the display screen of the electronic equipment.
Therefore, when the first information is an image shot by the camera or a video recorded by the camera, a shooting interface can be displayed in the first area of the display screen of the electronic device so that the first information can be acquired, and the first video, once obtained, is then played in the second area of the display screen of the electronic device.
Preferably, the first area and the second area are different areas on the display screen.
Specifically, after the user starts a man-machine conversation, the display screen of the electronic device is divided into a first area 401 and a second area 402 as shown in fig. 4. The user shoots an image or a video in the shooting interface displayed in the first area 401, for example by long-pressing the record button. The electronic device then performs the steps shown in fig. 5: in step 501, the first information shot by the user through the camera is acquired; in step 502, a second information content is determined based on the first information content of the first information; in step 503, the information type of the first information is determined; in step 504, a first video that includes the second information content and matches the information type is acquired; in step 505, the first video is played in the second area and presented to the user as a reply from the intelligent assistant, as shown in fig. 6.
In this embodiment, for the first information implemented by the video information, optionally, after step 102, the method further includes:
and performing video synthesis on the first video and the second video, and outputting a third video.
Here, the second video is the video serving as the first information. In this way, the third video obtained by synthesizing the first video and the second video can be used as an effective interactive video in other scenes.
Specifically, when the first video and the second video are played in different areas of the display screen, synthesis can be performed by recording the display screen. Alternatively, the later video can be appended after the earlier video according to the time order of the first video and the second video.
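As one possible realization of the time-ordered concatenation (rather than the screen-recording variant), the sketch below drives the ffmpeg concat demuxer from Python; the file paths are placeholders, ffmpeg is assumed to be installed, and the two inputs are assumed to share the same codec parameters.

```python
import os
import subprocess
import tempfile

def concat_videos(earlier: str, later: str, output: str) -> None:
    """Append the later video after the earlier one using ffmpeg's concat demuxer."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(f"file '{os.path.abspath(earlier)}'\n")
        f.write(f"file '{os.path.abspath(later)}'\n")
        list_path = f.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", output],
            check=True,
        )
    finally:
        os.unlink(list_path)

# e.g. concat_videos("second_video.mp4", "first_video.mp4", "third_video.mp4")
```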
Thus, the information processing method of the embodiment of the invention can also be applied to video recording, recording the conversation between the user and the intelligent assistant. For a video A input by the user, the intelligent assistant acquires a video B matched with the information content of video A, and then synthesizes video A and video B into a new output video, meeting the acquisition requirement of video recording.
In this embodiment, optionally, after the step 102, the method further includes:
storing the first video;
under the condition that first input of a user in an information input box is received, third information input by the first input is acquired;
and displaying a second identifier under the condition that the information content of the third information is the same as that of the second information, wherein the second identifier is used for indicating the first video.
Through these steps, the first video can be stored after it is output. Then, when the user inputs third information in the information input box and the information content of the third information is the same as the second information content, the identifier indicating the first video is displayed, realizing matched video recommendation.
Thus, the information processing method of the embodiment of the invention can also be applied to the scenario in which the user replies to information. For example, in chat software, a video reply can be recommended according to the information the user is currently entering in the information input box. If the user enters information containing the content "bye" in the input box of the chat software, the identifier of a video C is displayed, so that the user can choose whether to use video C directly as the current input, making chatting more intelligent and diversified.
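A minimal sketch of this recommendation logic is given below; the stored first videos are indexed by their second information content, and the names used are hypothetical.

```python
from typing import Dict, Optional

# Hypothetical store: second information content -> path of a previously output first video.
STORED_VIDEOS: Dict[str, str] = {}

def store_first_video(second_content: str, video_path: str) -> None:
    STORED_VIDEOS[second_content] = video_path

def recommend_for_input(third_information: str) -> Optional[str]:
    """If the text being typed matches a stored second information content,
    return that video's path so its identifier can be shown as a suggested reply."""
    for second_content, video_path in STORED_VIDEOS.items():
        if second_content in third_information.lower():
            return video_path
    return None

# e.g. store_first_video("bye", "videos/farewell_c.mp4")
#      recommend_for_input("ok, bye!") returns "videos/farewell_c.mp4"
```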
In summary, after the first information including the first information content is acquired, the information processing method of the embodiment of the present invention outputs, based on the first information, a first video containing video content matched with the first information content, so that a targeted video reply is automatically output according to the acquired first information and the conversation requirements of the user are met.
FIG. 7 is a block diagram of an electronic device of one embodiment of the invention. The electronic device 700 shown in fig. 7 includes a first obtaining module 710 and an output module 720.
The first obtaining module 710 is configured to obtain first information, where the first information includes first information content.
Here, the first information is user information, for example information input after the user starts a man-machine conversation with the electronic device. The first information comprises first information content, and the first information content is capable of representing the meaning of the first information. This module acquires the first information so that a corresponding video can subsequently be output according to the user's requirement.
An output module 720, configured to output a first video based on the first information, where the first video includes video content matching the first information content; wherein the first information comprises at least one of: text information, voice information, image information, video information.
Based on the first information acquired by the first acquisition module 710, a first video including video content matched with the first information content is output, providing the desired reply to the user and thereby completing the interaction with the user.
In this way, through the first acquisition module 710 and the output module 720, after the first information including the first information content is acquired, the electronic device outputs, based on the first information, a first video including video content matched with the first information content, so that a targeted video reply is automatically output according to the acquired first information and the conversation requirements of the user are met.
Optionally, the output module includes:
a first determining submodule, configured to determine first information content based on the first information;
a second determining submodule for determining a second information content based on the first information content;
and the processing submodule is used for acquiring a first video comprising the second information content and outputting the first video.
Optionally, the processing sub-module includes:
a determining unit configured to determine an information type of the first information;
and the processing unit is used for acquiring a first video which comprises the second information content and is matched with the information type of the first information, and outputting the first video.
Optionally, the determining unit is further configured to:
performing information characteristic identification on the first information, and identifying the information type of the first information;
or, pre-stored preference information of a sender of the first information is acquired, and the information type of the first information is determined based on the preference information.
Optionally, the outputting the first video comprises at least one of:
sending the first video to a target contact person, wherein the target contact person is the contact person sending the first information;
displaying a first identifier, wherein the first identifier is used for indicating the first video;
and playing the first video.
Optionally, the electronic device further comprises:
the first display module is used for displaying a shooting interface in a first area of a display screen of the electronic equipment;
the first obtaining module is further configured to:
acquiring first information acquired by a camera, wherein the first information is an image shot by the camera or a video recorded by the camera;
the output module is further configured to:
and playing the first video in a second area of the display screen of the electronic equipment.
Optionally, the electronic device further comprises:
and the synthesis processing module is used for carrying out video synthesis on the first video and the second video and outputting a third video.
Optionally, the electronic device further comprises:
the storage module is used for storing the first video;
the second acquisition module is used for acquiring third information input by a first input under the condition of receiving the first input of a user in an information input box;
and the second display module is used for displaying a second identifier under the condition that the information content of the third information is the same as the second information content, wherein the second identifier is used for indicating the first video.
The electronic device 700 is capable of implementing each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 6, and details are not repeated here to avoid repetition.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention, where the electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 810 is configured to obtain first information, where the first information includes first information content;
outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content;
wherein the first information comprises at least one of: text information, voice information, image information, video information.
Therefore, after acquiring the first information comprising the first information content, the electronic device outputs, based on the first information, a first video containing video content matched with the first information content, so that a targeted video reply is automatically output according to the acquired first information and the conversation requirements of the user are met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards it to the processor 810 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 801 can also communicate with the network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 802, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the electronic apparatus 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving audio or video signals. The input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042; the graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process it into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801.
The electronic device 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the electronic device 800 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The Display unit 806 may include a Display panel 8061, and the Display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic apparatus. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by the user on or near it (e.g., operations performed on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 810, and receives and executes commands from the processor 810. In addition, the touch panel 8071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation can be transmitted to the processor 810 to determine a type of the touch event, and then the processor 810 can provide a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 808 is an interface for connecting an external device to the electronic apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 800 or may be used to transmit data between the electronic device 800 and external devices.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 809 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 810 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby monitoring the whole electronic device. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The electronic device 800 may also include a power supply 811 (e.g., a battery) for powering the various components, and preferably, the power supply 811 may be logically coupled to the processor 810 via a power management system to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the electronic device 800 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, and when the computer program is executed by the processor, the computer program implements each process of the information processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the information processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An information processing method applied to an electronic device, the method comprising:
acquiring first information, wherein the first information comprises first information content;
outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content;
wherein the first information comprises at least one of: text information, voice information, image information, video information;
wherein, after outputting the first video based on the first information, the method further comprises:
storing the first video;
in a chat program, under the condition that a first input of a user in an information input box is received, acquiring third information input by the first input;
under the condition that the information content of the third information is the same as that of the second information, displaying a second identifier, wherein the second identifier is used for indicating the first video;
wherein the second information content is determined based on the first information;
the outputting a first video based on the first information comprises:
determining first information content based on the first information;
determining a second information content based on the first information content;
acquiring a first video including the second information content, and outputting the first video;
the acquiring a first video including the second information content and outputting the first video includes:
determining an information type of the first information, wherein the information type corresponds to a tag of the first video, and the tag of the first video is set according to attribute information of the first video;
and acquiring a first video which comprises the second information content and is matched with the information type of the first information, and outputting the first video.
2. The method of claim 1, wherein the determining the information type of the first information comprises:
performing information characteristic identification on the first information, and identifying the information type of the first information;
or, pre-stored preference information of a sender of the first information is acquired, and the information type of the first information is determined based on the preference information.
3. The method of claim 1, wherein the outputting the first video comprises at least one of:
sending the first video to a target contact person, wherein the target contact person is the contact person sending the first information;
displaying a first identifier, wherein the first identifier is used for indicating the first video;
and playing the first video.
4. The method of claim 1, wherein before obtaining the first information, further comprising:
displaying a shooting interface in a first area of a display screen of the electronic equipment;
the acquiring of the first information includes:
acquiring first information acquired by a camera, wherein the first information is an image shot by the camera or a video recorded by the camera;
the outputting the first video comprises:
and playing the first video in a second area of the display screen of the electronic equipment.
5. The method of claim 4, wherein after outputting the first video based on the first information, further comprising:
and performing video synthesis on the first video and the second video, and outputting a third video.
6. An electronic device, comprising:
the first acquisition module is used for acquiring first information, and the first information comprises first information content;
the output module is used for outputting a first video based on the first information, wherein the first video comprises video content matched with the first information content;
wherein the first information comprises at least one of: text information, voice information, image information, video information;
further comprising:
the storage module is used for storing the first video;
the second acquisition module is used for acquiring third information input by a first input under the condition that the first input of a user in an information input box is received in a chat program;
a second display module, configured to display a second identifier when information content of the third information is the same as second information content, where the second identifier is used to indicate the first video;
wherein the second information content is determined based on the first information;
the output module includes:
a first determining submodule, configured to determine first information content based on the first information;
a second determining submodule for determining a second information content based on the first information content;
the processing submodule is used for acquiring a first video comprising the second information content and outputting the first video;
the processing submodule comprises:
a determining unit, configured to determine an information type of the first information, where the information type corresponds to a tag of the first video, and the tag of the first video is set according to attribute information of the first video;
and the processing unit is used for acquiring a first video which comprises the second information content and is matched with the information type of the first information, and outputting the first video.
7. The electronic device of claim 6, wherein the determination unit is further configured to:
performing information characteristic identification on the first information, and identifying the information type of the first information;
or, pre-stored preference information of a sender of the first information is acquired, and the information type of the first information is determined based on the preference information.
8. The electronic device of claim 6, wherein the outputting the first video comprises at least one of:
sending the first video to a target contact person, wherein the target contact person is the contact person sending the first information;
displaying a first identifier, wherein the first identifier is used for indicating the first video;
and playing the first video.
9. The electronic device of claim 6, further comprising:
the first display module is used for displaying a shooting interface in a first area of a display screen of the electronic equipment;
the first obtaining module is further configured to:
acquiring first information acquired by a camera, wherein the first information is an image shot by the camera or a video recorded by the camera;
the output module is further configured to:
and playing the first video in a second area of the display screen of the electronic equipment.
10. The electronic device of claim 9, further comprising:
and the synthesis processing module is used for carrying out video synthesis on the first video and the second video and outputting a third video.
11. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information processing method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, implements the steps of the information processing method according to any one of claims 1 to 5.
CN201911319627.2A 2019-12-19 2019-12-19 Information processing method and electronic equipment Active CN111130998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911319627.2A CN111130998B (en) 2019-12-19 2019-12-19 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111130998A CN111130998A (en) 2020-05-08
CN111130998B (en) 2022-05-03

Family

ID=70500214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911319627.2A Active CN111130998B (en) 2019-12-19 2019-12-19 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111130998B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534965A (en) * 2016-11-30 2017-03-22 北京小米移动软件有限公司 Method and device for obtaining video information
CN110286756A (en) * 2019-06-13 2019-09-27 深圳追一科技有限公司 Method for processing video frequency, device, system, terminal device and storage medium
CN110400251A (en) * 2019-06-13 2019-11-01 深圳追一科技有限公司 Method for processing video frequency, device, terminal device and storage medium
CN110413841A (en) * 2019-06-13 2019-11-05 深圳追一科技有限公司 Polymorphic exchange method, device, system, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN111130998A (en) 2020-05-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant