CN109672845B - Video call method and device and mobile terminal - Google Patents


Info

Publication number
CN109672845B
CN109672845B (application CN201811643272.8A)
Authority
CN
China
Prior art keywords
data
input
terminal
video
screen
Prior art date
Legal status
Active
Application number
CN201811643272.8A
Other languages
Chinese (zh)
Other versions
CN109672845A (en)
Inventor
彭作
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811643272.8A priority Critical patent/CN109672845B/en
Publication of CN109672845A publication Critical patent/CN109672845A/en
Application granted granted Critical
Publication of CN109672845B publication Critical patent/CN109672845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

An embodiment of the invention provides a video call method, apparatus, and mobile terminal. The method includes: during a video call with a second terminal, displaying first call data in a first area of a first screen and second call data on a second screen; receiving, while a first program is displayed in a second area of the first screen, a first input from the first terminal user to the first program; generating target data based on the first input in response to it; and sending the target data to the second terminal. By using a dual-sided screen, the embodiment shares information without disrupting the video call, improving the convenience of information sharing during a video call.

Description

Video call method and device and mobile terminal
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to a video call method and apparatus and a mobile terminal.
Background
As technology has developed, mobile terminals have become increasingly widespread, and users can use them for video calls such as chats and meetings, which brings great convenience.
During a video call, one area of the mobile terminal's display can show the picture of the local terminal and another area the picture of the network terminal, with the two pictures together occupying the whole display. Because the entire screen is taken up by the call, there is little room left to share other information during the call.
Disclosure of Invention
Embodiments of the invention provide a video call method, a video call apparatus, and a mobile terminal, to solve the problem that information sharing during a video call is inconvenient.
In a first aspect, an embodiment of the present invention provides a video call method applied to a first terminal, where the first terminal has a first screen and a second screen, and the method includes:
displaying, during a video call with a second terminal, first call data in a first area of the first screen and second call data on the second screen;
receiving, while a first program is displayed in a second area of the first screen, a first input from the first terminal user to the first program;
generating target data based on the first input in response to the first input;
sending the target data to the second terminal;
wherein, when the first call data is video data of the first terminal user, the second call data is video data of the second terminal user; and when the first call data is video data of the second terminal user, the second call data is video data of the first terminal user.
In a second aspect, an embodiment of the present invention further provides a video call method applied to a second terminal, where the second terminal has a third screen and a fourth screen, and the method includes:
displaying, during a video call with the first terminal, third call data on the third screen and fourth call data on the fourth screen;
receiving target data sent by the first terminal;
receiving a fourth input from the second terminal user;
displaying, in response to the fourth input, the data content of the target data;
wherein, when the third call data is video data of the first terminal user, the fourth call data is video data of the second terminal user; and when the third call data is video data of the second terminal user, the fourth call data is video data of the first terminal user.
In a third aspect, an embodiment of the present invention further provides a first terminal having a first screen and a second screen, where the first terminal includes:
a first call module, configured to display first call data in a first area of the first screen and second call data on the second screen during a video call with a second terminal;
a first input receiving module, configured to receive a first input from the first terminal user to a first program while the first program is displayed in a second area of the first screen;
a first input response module, configured to generate target data based on the first input in response to the first input;
a target data sending module, configured to send the target data to the second terminal;
wherein, when the first call data is video data of the first terminal user, the second call data is video data of the second terminal user; and when the first call data is video data of the second terminal user, the second call data is video data of the first terminal user.
In a fourth aspect, an embodiment of the present invention further provides a second terminal having a third screen and a fourth screen, where the second terminal includes:
a second call module, configured to display third call data on the third screen and fourth call data on the fourth screen during a video call with the first terminal;
a target data receiving module, configured to receive target data sent by the first terminal;
a fourth input receiving module, configured to receive a fourth input from the second terminal user;
a fourth input response module, configured to display the data content of the target data in response to the fourth input;
wherein, when the third call data is video data of the first terminal user, the fourth call data is video data of the second terminal user; and when the third call data is video data of the second terminal user, the fourth call data is video data of the first terminal user.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video call method described above.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the video call method described above.
In the embodiments of the invention, during a video call with the second terminal, first call data is displayed in a first area of the first screen and second call data is displayed on the second screen. While a first program is displayed in a second area of the first screen, a first input from the first terminal user to the first program is received; in response, target data is generated based on the first input and sent to the second terminal. Information sharing is thus carried out on a dual-sided screen, so that sharing does not disrupt the video call, and the convenience of sharing information during a video call is improved.
Drawings
Fig. 1 is a flow chart of a method of video call in an embodiment of the invention;
FIG. 2a is a schematic diagram of a display interface according to an embodiment of the invention;
FIG. 2b is a schematic view of another display interface according to an embodiment of the invention;
FIG. 2c is a schematic view of another display interface according to an embodiment of the invention;
FIG. 2d is a schematic view of another display interface according to an embodiment of the invention;
FIG. 2e is a schematic view of another display interface according to an embodiment of the invention;
FIG. 3 is a flow chart of another method of video calling in accordance with an embodiment of the present invention;
FIG. 4a is a schematic view of another display interface according to an embodiment of the present invention;
FIG. 4b is a schematic view of another display interface of an embodiment of the present invention;
FIG. 4c is a schematic view of another display interface according to an embodiment of the invention;
fig. 5 is a block diagram of a first terminal according to an embodiment of the present invention;
fig. 6 is a block diagram of a second terminal according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of another mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of the steps of a video call method according to an embodiment of the present invention is shown. The method may be applied to a first terminal, which may be a mobile terminal with a dual-sided screen; as shown in fig. 2a, the first terminal may have a first screen 210 and a second screen 220.
Specifically, the method can comprise the following steps:
Step 101, during a video call with a second terminal, displaying first call data in a first area of a first screen and second call data on a second screen;
When the first call data is video data of the first terminal user, the second call data may be video data of the second terminal user; when the first call data is video data of the second terminal user, the second call data may be video data of the first terminal user.
In practical applications, as shown in fig. 2b, the first screen 210 may include a first area 211. After the call is placed and a connection is established, the first terminal and the second terminal may conduct a video call. The first terminal obtains the first call data and the second call data from the local terminal and the network terminal, and then displays the first call data in the first area of the first screen and the second call data on the second screen.
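The pairing between call data and the two display surfaces described above can be modeled with a small sketch. This is an illustrative assumption only, not an implementation from the patent; the names `CallFrames` and `assign_call_data` are invented.

```python
from dataclasses import dataclass

@dataclass
class CallFrames:
    local_video: str   # stand-in for the first terminal user's video data
    remote_video: str  # stand-in for the second terminal user's video data

def assign_call_data(frames: CallFrames, remote_in_first_area: bool = True):
    """Return (first_area_content, second_screen_content).

    Either pairing from the description is valid: if the first area of the
    first screen shows the remote user's video, the second screen shows the
    local user's video, and vice versa.
    """
    if remote_in_first_area:
        return frames.remote_video, frames.local_video
    return frames.local_video, frames.remote_video
```

Swapping the flag swaps the two surfaces, which mirrors the "wherein" clause of the first aspect.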
Step 102, receiving a first input from the first terminal user to a first program while the first program is displayed in a second area of the first screen;
As shown in fig. 2b, the first screen 210 may further include a second area 212, in which the first program, such as a video application or a browser, may be displayed. The first terminal user may operate the first program in the second area, for example to play a video or browse a web page.
To share information in the first program with the second terminal, the first terminal user may make a first input, which the first terminal receives.
Specifically, the first input may comprise a slide input, and step 102 may comprise the following sub-steps:
a swipe input by a first end user on a first screen is received.
Wherein the slide start position of the slide input may be located in the second region, and the slide end position of the slide input may be located in the first region.
Because the first program is displayed in the second area, the first terminal user can slide from the second area of the first screen to the first area, and the first terminal receives this slide input, so that information in the first program can be shared with the second terminal.
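The slide-input condition above can be sketched as a simple coordinate check. The coordinate model and the boundary value are assumptions for illustration: y grows downward on the first screen, with the first area (211) above the second area (212).

```python
# Assumed layout: everything above AREA_BOUNDARY_Y is the first area,
# everything at or below it is the second area. The value is invented.
AREA_BOUNDARY_Y = 500  # pixels

def is_share_swipe(start_y: int, end_y: int) -> bool:
    """True when a slide input qualifies as a first input per the description:
    it starts in the second area (where the first program is shown) and ends
    in the first area."""
    starts_in_second_area = start_y >= AREA_BOUNDARY_Y
    ends_in_first_area = end_y < AREA_BOUNDARY_Y
    return starts_in_second_area and ends_in_first_area
```

A slide in the opposite direction, or one that stays within the second area, would not trigger sharing under this sketch.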
Step 103, responding to the first input, and generating target data based on the first input;
After receiving the first input, the first terminal may extract information from the first program to obtain target data corresponding to the first input, such as video, picture, music, or text multimedia data.
In an embodiment of the present invention, step 103 may include the following sub-steps:
and recording a first video with a first duration under the condition that the first input is a first preset characteristic.
Wherein the first duration may be associated with the first input and the video picture of the first video may be the display content in the second area.
In the process of video call, when the first input is the condition of the first preset characteristic, if the first input further comprises long press input, as shown in fig. 2c, video recording can be performed to obtain a first video, and then video information can be shared in the process of video call.
For example, a first terminal user adopts a video application to play a video in the second area, and when a video clip in the video application needs to be shared to the second terminal, the first terminal user can perform long-press input in the second area, and then can record the video clip in the video application to obtain a first video.
In an embodiment of the present invention, the step of recording the first video with the first duration may include the following sub-steps:
starting video recording from the input starting time of the first input; and ending video recording at the input ending time of the first input to generate a first video with a first duration.
In a specific implementation, the first terminal may record the display content in the second area from the input start time of the first input until the video recording is finished at the end time of the first input, so as to obtain a first video with a first duration corresponding to the first input.
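Since recording runs from the input start time to the input end time, the first duration is simply their difference. A minimal sketch, with the function name invented for illustration:

```python
def first_video_duration(input_start: float, input_end: float) -> float:
    """Recording starts at the input start time of the first input and ends
    at its input end time, so the first duration (in seconds here) is the
    difference between the two."""
    if input_end < input_start:
        raise ValueError("first input ended before it started")
    return input_end - input_start
```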
In an embodiment of the present invention, step 103 may further include the following sub-steps:
and intercepting the display content in the second area to generate a first image under the condition that the first input is a second preset feature.
In the process of video call, when the first input is the second preset feature, if the first input only includes a sliding input, as shown in fig. 2d, the display content in the second area may be directly intercepted according to the first input to obtain the first image, and further, image information may be shared in the process of video call.
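The two preset features described (slide plus long press for video, slide alone for a screenshot) amount to a small dispatch. The return labels are illustrative, not patent terminology:

```python
def classify_first_input(has_slide: bool, has_long_press: bool) -> str:
    """Map the first input to an action per the two preset features described:
    a slide combined with a long press records the second area as a first
    video; a slide alone captures the second area as a first image."""
    if has_slide and has_long_press:
        return "record_first_video"
    if has_slide:
        return "capture_first_image"
    return "no_share"
```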
Step 104, sending the target data to the second terminal.
After the target data is obtained, the target data may be sent to the second terminal.
In an embodiment of the present invention, before step 104, the following steps may be further included:
receiving a second input from the first terminal user; and, in response to the second input, setting a prompt tag corresponding to the target data.
The target data may include the prompt tag.
In a specific implementation, multiple prompt tags may be preset. A prompt tag may indicate a display mode, such as a display special effect. The first terminal user selects the prompt tag for the target data through the second input; for example, as shown in fig. 2e, a prompt tag corresponding to a bomb special effect is set for the target data. Setting prompt tags broadens the ways in which shared information can be displayed.
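The preset tag-to-effect association can be sketched as a lookup table. Only the bomb tag is named in the description; the other entry, the effect names, and the fallback are invented for illustration:

```python
# Hypothetical catalogue of preset prompt tags and their display modes.
PROMPT_TAG_DISPLAY_MODES = {
    "bomb": "explosion_effect",
    "gift": "unwrap_effect",
}

def display_mode_for(prompt_tag: str) -> str:
    """Look up the display mode associated with a prompt tag, falling back
    to a plain display when the tag is unknown."""
    return PROMPT_TAG_DISPLAY_MODES.get(prompt_tag, "direct_display")
```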
In an embodiment of the present invention, before step 104, the following steps may be further included:
displaying the target data in a first area of the first screen.
In an embodiment of the present invention, before step 104, the following steps may be further included:
receiving a third input of the target data displayed in the first area by a first terminal user; editing the target data in response to the third input.
Before sending the target data, the first terminal user can move the target data corresponding to the first program from the second area into the first area through a slide input, previewing the information before it is sent.
Also before sending, the first terminal user can make a third input on the target data in the first area, and in response the first terminal edits the target data, for example adding a name and a text description to the first video, or cropping part of the first image. The edited target data can then be sent, so that shared information can be edited before sharing.
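The third-input editing step can be sketched with a small record type. `TargetData` and `edit_target_data` are invented names, and cropping a first image is omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class TargetData:
    kind: str           # "video" or "image"
    payload: bytes      # the recorded video or captured image bytes
    name: str = ""
    description: str = ""

def edit_target_data(data: TargetData, name=None, description=None) -> TargetData:
    """Apply the edits described for the third input: attach a name and a
    text description to the target data before it is sent."""
    if name is not None:
        data.name = name
    if description is not None:
        data.description = description
    return data
```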
In the embodiments of the invention, during a video call with the second terminal, first call data is displayed in a first area of the first screen and second call data is displayed on the second screen. While a first program is displayed in a second area of the first screen, a first input from the first terminal user to the first program is received; in response, target data is generated based on the first input and sent to the second terminal. Information sharing is thus carried out on a dual-sided screen, sharing does not disrupt the video call, and the convenience of sharing information during a video call is improved.
Referring to fig. 3, a flowchart of the steps of a video call method according to an embodiment of the present invention is shown. The method is applied to a second terminal, which may be a mobile terminal with a dual-sided screen; as shown in fig. 4a, the second terminal may have a third screen 410 and a fourth screen 420.
Specifically, the method can comprise the following steps:
Step 301, during a video call with the first terminal, displaying third call data on a third screen and fourth call data on a fourth screen;
When the third call data is video data of the first terminal user, the fourth call data may be video data of the second terminal user; when the third call data is video data of the second terminal user, the fourth call data may be video data of the first terminal user.
In a specific implementation, the first terminal and the second terminal conduct a video call; the second terminal obtains the third call data and the fourth call data from the local terminal and the network terminal, and then displays the third call data on the third screen and the fourth call data on the fourth screen.
Step 302, receiving target data sent by a first terminal;
in a specific implementation, target data sent by the first terminal may be received.
Step 303, receiving a fourth input from the second terminal user;
After receiving the target data, the second terminal user may make a fourth input on the target data, such as a tap or a double tap, which the second terminal receives.
Step 304, responding to the fourth input, and displaying the data content of the target data.
After receiving the fourth input, the second terminal displays the data content of the target data, for example displaying a picture, displaying text, or playing a video.
In an embodiment of the present invention, after step 302, the following steps may be further included:
displaying a prompt tag of the target data;
Because the first terminal sets the prompt tag corresponding to the target data, after receiving the target data the second terminal may display the prompt tag carried in it, such as the bomb tag in fig. 4b.
Step 303 may comprise the sub-steps of:
and receiving a fourth input of the prompt label by the second terminal user.
Accordingly, the second end user may trigger the display of the target data by a fourth input to the reminder tab, such as clicking on the bomb tab in fig. 4 b.
Step 304 may include the following sub-steps:
and displaying the data content of the target data according to the display mode associated with the prompt tag.
After receiving the fourth input, the second terminal may prompt the display mode associated with the tag, and then display the data content of the target data according to the display mode, where if the prompt tag is a bomb tag, the target data may be displayed in an explosion mode corresponding to the bomb tag after the second terminal user clicks the bomb tag.
In an embodiment of the present invention, the displaying the data content of the target data according to the display mode associated with the hint tag may include the following sub-steps:
and displaying the third communication data in a third area of the third screen, and displaying the data content of the target data in a fourth area of the third screen.
After the second terminal user performs the fourth input, that is, under the condition that the second terminal user clicks the prompt tag, the split-screen display may be automatically triggered, as shown in fig. 4c, the third screen is divided into a third area 411 and a fourth area 412, and then the third communication data may be displayed in the third area, and the data content of the target data may be displayed in the fourth area, so that the data content of the target data can be played, and the display of the local preview screen of the third communication data is not affected.
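The automatic split of the third screen into areas 411 and 412 can be sketched geometrically. An equal top/bottom split is an assumption; the patent does not fix the ratio:

```python
def split_third_screen(width: int, height: int):
    """Divide the third screen into a third area (411) for the third call
    data and a fourth area (412) for the target data's content.
    Rectangles are returned as (x, y, w, h) tuples."""
    top_h = height // 2
    third_area = (0, 0, width, top_h)
    fourth_area = (0, top_h, width, height - top_h)
    return third_area, fourth_area
```

The two rectangles tile the screen exactly, even for odd heights, so no pixels are lost to the split.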
In the embodiments of the invention, during a video call with the first terminal, third call data is displayed on the third screen and fourth call data on the fourth screen; target data sent by the first terminal is received; a fourth input from the second terminal user is received; and, in response, the data content of the target data is displayed. Information sharing is thus carried out on a dual-sided screen, sharing does not disrupt the video call, and the compatibility of the video call is improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a block diagram of a first terminal according to an embodiment of the present invention is shown, where the first terminal has a first screen and a second screen, and specifically includes the following modules:
a first call module 501, configured to display first call data in a first area of a first screen and display second call data in a second screen in a video call with a second terminal;
a first input receiving module 502, configured to receive a first input to a first program from a first terminal user in a state where the first program is displayed in a second area of the first screen;
a first input response module 503, configured to generate target data based on the first input in response to the first input;
a target data sending module 504, configured to send the target data to the second terminal;
the second communication data is video data of a second terminal user under the condition that the first communication data is video data of a first terminal user; and under the condition that the first call data is the video data of a second terminal user, the second call data is the video data of the first terminal user.
In an embodiment of the present invention, the first input receiving module 502 includes:
and the sliding input receiving submodule is used for receiving a sliding input of a first terminal user on the first screen, wherein the sliding starting position of the sliding input is positioned in the second area, and the sliding ending position of the sliding input is positioned in the first area.
In an embodiment of the present invention, the first input response module 503 includes:
a first video recording submodule, configured to record a first video with a first duration when the first input matches a first preset feature;
where the first duration is associated with the first input, and the video picture of the first video is the display content in the second area.
In an embodiment of the present invention, the first video recording sub-module includes:
a recording start unit configured to start video recording from the input start time of the first input;
and the recording ending unit is used for ending the video recording at the input ending moment of the first input to generate a first video with a first duration.
In an embodiment of the present invention, the first input response module 503 includes:
and the first image generation submodule is used for intercepting the display content in the second area to generate a first image under the condition that the first input is a second preset feature.
In an embodiment of the present invention, the method further includes:
a second input receiving module, configured to receive a second input from the first terminal user;
a prompt tag setting module, configured to set, in response to the second input, a prompt tag corresponding to the target data;
where the target data includes the prompt tag.
In an embodiment of the present invention, the method further includes:
and the target data display module is used for displaying the target data in a first area of the first screen.
In an embodiment of the present invention, the method further includes:
a third input receiving module, configured to receive a third input from the first terminal user on the target data displayed in the first area;
a third input response module, configured to edit the target data in response to the third input.
Referring to fig. 6, a block diagram of a second terminal according to an embodiment of the present invention is shown, where the second terminal has a third screen and a fourth screen, and specifically includes the following modules:
a second call module 601, configured to display third call data on the third screen and fourth call data on the fourth screen during a video call with the first terminal;
a target data receiving module 602, configured to receive target data sent by a first terminal;
a fourth input receiving module 603, configured to receive a fourth input from the second terminal user;
a fourth input response module 604, configured to display data content of the target data in response to the fourth input;
wherein, when the third call data is video data of the first terminal user, the fourth call data is video data of the second terminal user; and when the third call data is video data of the second terminal user, the fourth call data is video data of the first terminal user.
In an embodiment of the present invention, the method further includes:
a prompt tag display module, configured to display a prompt tag of the target data;
the fourth input receiving module 603 includes:
a fourth input receiving submodule, configured to receive a fourth input from the second terminal user on the prompt tag;
the fourth input response module 604 includes:
and the data content display sub-module is used for displaying the data content of the target data according to the display mode associated with the prompt tag.
In an embodiment of the present invention, the data content display sub-module includes:
and the sub-area display unit is used for displaying the third communication data in a third area of the third screen and displaying the data content of the target data in a fourth area of the third screen.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The apparatus/mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 and fig. 3, and for avoiding repetition, details are not described here again.
Referring to fig. 7, a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention is shown.
The mobile terminal 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
In an embodiment of the present invention, the mobile terminal 700 is a first terminal having a first screen (not shown) and a second screen (not shown). The processor 710 is configured to: display first call data in a first area of the first screen and display second call data on the second screen during a video call with a second terminal; receive a first input of a first terminal user to a first program while the first program is displayed in a second area of the first screen; in response to the first input, generate target data based on the first input; and send the target data to the second terminal.
When the first call data is the video data of the first terminal user, the second call data is the video data of the second terminal user; when the first call data is the video data of the second terminal user, the second call data is the video data of the first terminal user.
In the embodiment of the invention, during a video call with the second terminal, the first call data is displayed in the first area of the first screen and the second call data is displayed on the second screen. A first input of the first terminal user to a first program is then received while the first program is displayed in the second area of the first screen; in response to the first input, target data is generated based on the first input and sent to the second terminal. Information sharing is thereby carried out on a double-sided screen, the sharing does not interfere with the video call, and the convenience of sharing information during a video call is improved.
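The first-terminal flow described above (two video feeds on separate screens, a first input detected on the second area, target data generated and sent) can be sketched as follows. All class, method, and field names are illustrative assumptions, not part of the patent; the gesture check follows the swipe described in claim 2.

```python
from dataclasses import dataclass, field

@dataclass
class FirstTerminal:
    # First area of the first screen: one party's call video.
    first_area: str = ""
    # Second screen: the other party's call video.
    second_screen: str = ""
    # Target data already "sent" to the second terminal.
    outbox: list = field(default_factory=list)

    def start_video_call(self, local_feed: str, remote_feed: str) -> None:
        # During the call, the two feeds occupy separate screens, leaving
        # the second area of the first screen free for a first program.
        self.first_area = remote_feed
        self.second_screen = local_feed

    def on_first_input(self, start_area: str, end_area: str, area2_content: str):
        # Per claim 2, a swipe starting in the second area and ending in the
        # first area counts as the first input; target data is generated from
        # the second area's displayed content and queued for sending.
        if start_area == "second_area" and end_area == "first_area":
            target = {"type": "capture", "content": area2_content}
            self.outbox.append(target)
            return target
        return None
```

A swipe in the opposite direction returns `None`, modeling an input that does not match the preset feature.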
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards the downlink data to the processor 710 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access via the network module 702, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the mobile terminal 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706, stored in the memory 709 (or another storage medium), or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.
The mobile terminal 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the mobile terminal 700 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 7071 with a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands from the processor 710. In addition, the touch panel 7071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 7071, the user input unit 707 may include other input devices 7072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
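The touch pipeline described above (touch detection device, then touch controller, then processor) can be roughly illustrated as below. The scaling factor, the swipe threshold, and the function names are invented for the example and are not specified by the patent.

```python
def touch_controller(raw_events):
    # Touch controller step: convert raw sensor samples into
    # touch-point coordinates (here, a simple fixed scale).
    return [(e["x_raw"] / 10.0, e["y_raw"] / 10.0) for e in raw_events]

def processor_dispatch(points):
    # Processor step: determine the type of the touch event from the
    # coordinate trail so a corresponding visual output can be chosen.
    if len(points) < 2:
        return "tap"
    (x0, y0), (x1, y1) = points[0], points[-1]
    # Treat a sufficiently long Manhattan displacement as a swipe.
    return "swipe" if abs(x1 - x0) + abs(y1 - y0) > 5.0 else "tap"
```

Separating coordinate conversion from event classification mirrors the division of labor between the controller and the processor in the description.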
The interface unit 708 is an interface through which an external device is connected to the mobile terminal 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 700 or may be used to transmit data between the mobile terminal 700 and external devices.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the storage data area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 710 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby integrally monitoring the mobile terminal. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The mobile terminal 700 may also include a power supply 711 (e.g., a battery) for powering the various components. The power supply 711 may be logically coupled to the processor 710 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
In addition, the mobile terminal 700 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710. When executed by the processor 710, the computer program implements each process of the above video call method and can achieve the same technical effect; to avoid repetition, the details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above video call method and can achieve the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Referring to fig. 8, a hardware structure of a mobile terminal for implementing various embodiments of the present invention is schematically illustrated.
The mobile terminal 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 8 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
In an embodiment of the present invention, the mobile terminal 800 is a second terminal having a third screen (not shown) and a fourth screen (not shown). The processor 810 is configured to: display third call data on the third screen and display fourth call data on the fourth screen during a video call with the first terminal; receive target data sent by the first terminal; receive a fourth input of the second terminal user; and, in response to the fourth input, display the data content of the target data.
In the embodiment of the invention, during a video call with the first terminal, the third call data is displayed on the third screen and the fourth call data is displayed on the fourth screen. The target data sent by the first terminal is received, a fourth input of the second terminal user is received, and the data content of the target data is displayed in response to the fourth input. Information sharing is thereby carried out on a double-sided screen, and the sharing does not interfere with the video call.
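The second-terminal flow above (receive target data, surface a prompt tag, split the third screen on the fourth input) can be sketched as follows. The class, its method names, and the area labels are illustrative assumptions based on claims 9 and 10, not names from the patent.

```python
class SecondTerminal:
    def __init__(self):
        # Third screen split into two sub-areas, as in claim 10.
        self.third_screen = {"third_area": None, "fourth_area": None}
        self.pending = None  # target data awaiting the fourth input

    def on_target_data(self, target: dict) -> dict:
        # Received target data is surfaced as a prompt tag rather than
        # interrupting the ongoing call (claim 9).
        self.pending = target
        return {"prompt_tag": target.get("type", "data")}

    def on_fourth_input(self, call_feed: str):
        # The fourth input on the prompt tag shows the call video and the
        # shared data content in separate sub-areas of the third screen.
        if self.pending is None:
            return None
        self.third_screen["third_area"] = call_feed
        self.third_screen["fourth_area"] = self.pending["content"]
        return self.third_screen
```

Deferring display until the fourth input is what keeps the shared data from covering the call video before the user asks for it.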
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards the downlink data to the processor 810 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 802, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the mobile terminal 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used to receive audio or video signals. The input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042. The graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 806, stored in the memory 809 (or another storage medium), or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801.
The mobile terminal 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the mobile terminal 800 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The Display unit 806 may include a Display panel 8061, and the Display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 8071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 810, and receives and executes commands from the processor 810. In addition, the touch panel 8071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 8071, the user input unit 807 may include other input devices 8072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 808 is an interface through which an external device is connected to the mobile terminal 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 800 or may be used to transmit data between the mobile terminal 800 and external devices.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the storage data area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 809 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 810 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby integrally monitoring the mobile terminal. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The mobile terminal 800 may also include a power supply 811 (e.g., a battery) for powering the various components, and the power supply 811 may be logically coupled to the processor 810 via a power management system that may be used to manage charging, discharging, and power consumption.
In addition, the mobile terminal 800 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 810, a memory 809, and a computer program stored in the memory 809 and capable of running on the processor 810. When executed by the processor 810, the computer program implements each process of the above video call method and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above video call method and can achieve the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A video call method, applied to a first terminal, the first terminal having a first screen and a second screen, characterized in that the method comprises the following steps:
displaying first call data in a first area of the first screen and displaying second call data on the second screen during a video call with a second terminal;
receiving a first input of a first terminal user to a first program while the first program is displayed in a second area of the first screen;
in response to the first input, generating target data based on the first input; and
sending the target data to the second terminal;
wherein, when the first call data is the video data of the first terminal user, the second call data is the video data of the second terminal user; and when the first call data is the video data of the second terminal user, the second call data is the video data of the first terminal user;
wherein the generating target data based on the first input comprises:
recording a first video of a first duration when the first input has a first preset feature;
wherein the first duration is associated with the first input, and the video picture of the first video is the display content in the second area.
2. The method of claim 1, wherein the receiving a first input of the first terminal user to the first program comprises:
receiving a sliding input of the first terminal user on the first screen, wherein a sliding start position of the sliding input is located in the second area and a sliding end position of the sliding input is located in the first area.
3. The method of claim 1, wherein recording the first video for the first duration comprises:
starting video recording from the input starting time of the first input;
and ending video recording at the input ending time of the first input to generate a first video with a first duration.
4. The method of claim 1 or 2, wherein the generating target data based on the first input comprises:
capturing the display content in the second area to generate a first image when the first input has a second preset feature.
5. The method of claim 1, wherein before the sending the target data to the second terminal, the method further comprises:
receiving a second input of the first terminal user;
in response to the second input, setting a prompt tag corresponding to the target data;
wherein the target data comprises the prompt tag.
6. The method of claim 1, wherein before the sending the target data to the second terminal, the method further comprises:
displaying the target data in the first area of the first screen.
7. The method of claim 1, wherein before the sending the target data to the second terminal, the method further comprises:
receiving a third input of the first terminal user to the target data displayed in the first area; and
editing the target data in response to the third input.
8. A video call method, applied to a second terminal, the second terminal having a third screen and a fourth screen, characterized in that the method comprises the following steps:
displaying third call data on the third screen and displaying fourth call data on the fourth screen during a video call with a first terminal, wherein the first terminal has a first screen and a second screen, a first area of the first screen is used for displaying first call data during the video call with the second terminal, and the second screen is used for displaying second call data during the video call with the second terminal;
receiving target data sent by the first terminal, wherein the target data is generated by the first terminal, in response to a first input of a first terminal user to a first program, received while the first program is displayed in a second area of the first screen; the generation process of the target data comprises: recording, by the first terminal, a first video of a first duration when the first input has a first preset feature, wherein the first duration is associated with the first input, and the video picture of the first video is the display content in the second area;
receiving a fourth input of the second terminal user; and
in response to the fourth input, displaying the data content of the target data;
wherein, when the third call data is the video data of the first terminal user, the fourth call data is the video data of the second terminal user; and when the third call data is the video data of the second terminal user, the fourth call data is the video data of the first terminal user.
9. The method of claim 8, wherein after receiving the target data sent by the first terminal, the method further comprises:
displaying a prompt tag of the target data;
the receiving a fourth input of the second terminal user comprises:
receiving a fourth input of the second terminal user to the prompt tag;
the displaying the data content of the target data in response to the fourth input comprises:
displaying the data content of the target data in the display mode associated with the prompt tag.
10. The method according to claim 9, wherein the displaying the data content of the target data in the display mode associated with the prompt tag comprises:
displaying the third call data in a third area of the third screen, and displaying the data content of the target data in a fourth area of the third screen.
11. A first terminal having a first screen and a second screen, the first terminal comprising:
a first call module, used for displaying first call data in a first area of the first screen and displaying second call data on the second screen during a video call with a second terminal;
a first input receiving module, used for receiving a first input of a first terminal user to a first program while the first program is displayed in a second area of the first screen;
a first input response module, used for generating target data based on the first input in response to the first input; and
a target data sending module, used for sending the target data to the second terminal;
wherein, when the first call data is the video data of the first terminal user, the second call data is the video data of the second terminal user; and when the first call data is the video data of the second terminal user, the second call data is the video data of the first terminal user;
wherein the first input response module comprises:
a first video recording submodule, used for recording a first video of a first duration when the first input has a first preset feature;
wherein the first duration is associated with the first input, and the video picture of the first video is the display content in the second area.
12. A second terminal having a third screen and a fourth screen, the second terminal comprising:
a second call module, configured to display third call data on the third screen and display fourth call data on the fourth screen during a video call with a first terminal; wherein the first terminal has a first screen and a second screen, a first area of the first screen displays first call data during the video call with the second terminal, and the second screen displays second call data during the video call with the second terminal;
a target data receiving module, configured to receive target data sent by the first terminal; wherein the target data is generated by the first terminal, in response to a first input and based on the first input, upon receiving the first input applied to a first program by a first terminal user in a state in which the first program is displayed in a second area of the first screen; and the generation of the target data comprises: in a case that the first input has a first preset characteristic, recording, by the first terminal, a first video of a first duration; wherein the first duration is associated with the first input, and the video picture of the first video is the display content in the second area;
a fourth input receiving module, configured to receive a fourth input from the second terminal user;
a fourth input response module, configured to display the data content of the target data in response to the fourth input;
wherein, in a case that the third call data is video data of the first terminal user, the fourth call data is video data of the second terminal user; and in a case that the third call data is video data of the second terminal user, the fourth call data is video data of the first terminal user.
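The module wiring of claim 12 can be sketched as plain objects, one method per claimed module. The class name, method names, and the string payloads are illustrative assumptions, not from the patent:

```python
class SecondTerminal:
    """Receiver side of the video call (claim 12), with a third and a
    fourth screen. The third screen is split when target data is shown."""

    def __init__(self):
        self.third_screen = {"third_area": None, "fourth_area": None}
        self.fourth_screen = None
        self._pending_target_data = None

    def second_call_module(self, third_call_data, fourth_call_data):
        # During the call, each screen carries one party's video.
        self.third_screen["third_area"] = third_call_data
        self.fourth_screen = fourth_call_data

    def target_data_receiving_module(self, target_data):
        # Target data (e.g. the first video recorded by the first
        # terminal) is held until the user asks to see it.
        self._pending_target_data = target_data

    def fourth_input_response_module(self):
        # Fourth input: show the data content in the fourth area while
        # the call video stays in the third area.
        self.third_screen["fourth_area"] = self._pending_target_data

terminal = SecondTerminal()
terminal.second_call_module("video-of-first-user", "video-of-second-user")
terminal.target_data_receiving_module("first-video")
terminal.fourth_input_response_module()
```

Holding the received data behind an explicit fourth input, rather than displaying it immediately, is what keeps the incoming video from disrupting the ongoing call on either screen.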
13. A mobile terminal, characterized in that the mobile terminal comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video call method according to any one of claims 1 to 10.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the video call method according to any one of claims 1 to 10.
CN201811643272.8A 2018-12-29 2018-12-29 Video call method and device and mobile terminal Active CN109672845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811643272.8A CN109672845B (en) 2018-12-29 2018-12-29 Video call method and device and mobile terminal


Publications (2)

Publication Number Publication Date
CN109672845A (en) 2019-04-23
CN109672845B (en) 2020-11-03

Family

ID=66147434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811643272.8A Active CN109672845B (en) 2018-12-29 2018-12-29 Video call method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN109672845B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111049991B (en) * 2019-12-27 2021-06-15 维沃移动通信有限公司 Content sharing method and electronic equipment
CN113452945A (en) * 2020-03-27 2021-09-28 华为技术有限公司 Method and device for sharing application interface, electronic equipment and readable storage medium
CN111669461A (en) * 2020-05-25 2020-09-15 维沃移动通信有限公司 Information display method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080044005A (en) * 2006-11-15 2008-05-20 엘지전자 주식회사 Mobile terminal
CN104133610A (en) * 2014-07-11 2014-11-05 深圳市中兴移动通信有限公司 Screen-splitting interaction method of mobile terminal and mobile terminal
CN105871682A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Method and device for video call and terminal
CN106708452A (en) * 2015-11-17 2017-05-24 腾讯科技(深圳)有限公司 Information sharing method and terminal
CN106791571A (en) * 2017-01-09 2017-05-31 宇龙计算机通信科技(深圳)有限公司 A kind of image display method and device for shuangping san terminal
CN108989900A (en) * 2017-06-02 2018-12-11 中兴通讯股份有限公司 A kind of method for processing video frequency and terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101959820B1 (en) * 2012-10-12 2019-03-20 삼성전자주식회사 Method and apparatus for transmitting and receiving composition information in multimedia communication system
KR102067642B1 (en) * 2012-12-17 2020-01-17 삼성전자주식회사 Apparataus and method for providing videotelephony in a portable terminal
CN103067585B (en) * 2012-12-26 2015-03-04 广东欧珀移动通信有限公司 Multiparty call display controlling method, device and mobile terminal
KR102304305B1 (en) * 2015-01-21 2021-09-23 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN106210277A (en) * 2016-06-29 2016-12-07 努比亚技术有限公司 Mobile terminal call device and method, system
CN108377410A (en) * 2018-03-19 2018-08-07 聚好看科技股份有限公司 The method, apparatus and TV of video calling are realized in TV split screen


Also Published As

Publication number Publication date
CN109672845A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN108762954B (en) Object sharing method and mobile terminal
CN109525707B (en) Audio playing method, mobile terminal and computer readable storage medium
CN110995923A (en) Screen projection control method and electronic equipment
WO2021017776A1 (en) Information processing method and terminal
CN108737904B (en) Video data processing method and mobile terminal
CN109525710B (en) Method and device for accessing application program
CN110768805B (en) Group message display method and electronic equipment
CN109491738B (en) Terminal device control method and terminal device
CN109710349B (en) Screen capturing method and mobile terminal
CN109412932B (en) Screen capturing method and terminal
CN109889757B (en) Video call method and terminal equipment
WO2020192322A1 (en) Display method and terminal device
CN109271262B (en) Display method and terminal
CN109189303B (en) Text editing method and mobile terminal
CN109672845B (en) Video call method and device and mobile terminal
CN111124223A (en) Application interface switching method and electronic equipment
CN110855549A (en) Message display method and terminal equipment
CN111383175A (en) Picture acquisition method and electronic equipment
CN111610903A (en) Information display method and electronic equipment
US11669237B2 (en) Operation method and terminal device
CN111061446A (en) Display method and electronic equipment
CN107704159B (en) Application icon management method and mobile terminal
CN111447598B (en) Interaction method and display device
CN111694497B (en) Page combination method and electronic equipment
CN111049977B (en) Alarm clock reminding method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant