CN110830747A - Video call method, wearable device and storage medium - Google Patents

Video call method, wearable device and storage medium

Info

Publication number
CN110830747A
CN110830747A
Authority
CN
China
Prior art keywords
image
wearable device
screen
electronic device
video call
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911205031.XA
Other languages
Chinese (zh)
Inventor
杜莉莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911205031.XA priority Critical patent/CN110830747A/en
Publication of CN110830747A publication Critical patent/CN110830747A/en
Priority to PCT/CN2020/131084 priority patent/WO2021104248A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/144Constructional details of the terminal equipment, e.g. arrangements of the camera and the display camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye to eye contact

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The application discloses a video call method, a wearable device, and a storage medium, and relates to the field of communications technologies. The method is applied to a wearable device and includes: in a case that the wearable device has established a video call connection with a first electronic device, acquiring media information collected by a second electronic device paired with the wearable device; and sending the media information to the first electronic device, where the media information includes an image or audio. The video call method, wearable device, and storage medium can achieve a better communication effect and improve user experience.

Description

Video call method, wearable device and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a video call method, a wearable device, and a storage medium.
Background
A wearable device is a general term for a smart device that, like a smartphone, has an independent operating system, can install programs provided by software service providers such as applications and games, can complete functions such as adding schedules, map navigation, interacting with friends, taking photos and videos, and making video calls with friends through voice or gesture control, and can access a wireless network through a mobile communication network.
As technology advances, more and more wearable devices support a video call function. At present, when a user makes a video call with others through a wearable device, the media information collected by the wearable device cannot meet the user's needs, a better communication effect cannot be achieved, and user experience is affected.
Disclosure of Invention
The embodiments of the present application provide a video call method, which is used to solve the problem in the prior art that the media information collected by a wearable device cannot meet users' needs.
In order to solve the above technical problem, the present invention is implemented as follows:
a video call method is applied to a wearable device and comprises the following steps:
acquiring media information acquired by a second electronic device paired with the wearable device under the condition that the wearable device and a first electronic device establish video call connection;
sending the media information to the first electronic device;
wherein the media information comprises an image or audio.
In a first aspect, an embodiment of the present invention further provides a wearable device, where the wearable device includes:
the acquisition module is configured to acquire media information acquired by a second electronic device paired with the wearable device under the condition that the wearable device establishes a video call connection with a first electronic device;
a sending module configured to send the media information to the first electronic device;
wherein the media information comprises an image or audio.
In a second aspect, embodiments of the present invention provide a wearable device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video call method as described above.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the video call method are implemented as described above.
In the embodiments of the present application, in a case that a video call connection has been established with the first electronic device, media information collected by the second electronic device paired with the wearable device is acquired, and the acquired media information is sent to the first electronic device, so that the media information received by the other party can meet the user's needs, a better communication effect is achieved, and user experience is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic application environment diagram of a video call method according to an embodiment of the present application.
Fig. 2 is a flowchart of a video call method according to an embodiment of the present application.
Fig. 3 is a flowchart of another video call method according to an embodiment of the present application.
Fig. 4 is a schematic view illustrating that the wearable device only displays a picture corresponding to the first electronic device according to the embodiment of the present application.
Fig. 5 is a schematic diagram illustrating a process of switching a displayed frame to a frame next to a currently displayed frame.
Fig. 6 is a flowchart of another video call method according to an embodiment of the present application.
Fig. 7 is a block diagram schematic diagram of a wearable device provided in an embodiment of the present application.
Fig. 8 is a block diagram schematic diagram of another wearable device provided in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, in order to more intuitively understand the scheme provided by the embodiment of the present application, the following describes an architecture of a video call scheme provided by the embodiment of the present application with reference to fig. 1.
Please refer to fig. 1, which is a schematic diagram illustrating an application environment of a video call method according to one or more embodiments of the present application. As shown in fig. 1, the wearable device establishes a communication connection with a first electronic device through a network to perform a video call, and is also paired and in communication with a second electronic device. The wearable device may be, but is not limited to, smart glasses, a smart watch, a smart wristband, a smart helmet, a smart headband, and the like. The first electronic device may be, but is not limited to, smart glasses, a personal computer, a smartphone, a tablet computer, a laptop portable computer, a personal digital assistant, and the like. The second electronic device may be, but is not limited to, a personal computer, a smartphone, a tablet computer, a laptop portable computer, a personal digital assistant, and the like. The network may be a wired or wireless network. The wearable device and the second electronic device may be paired through Bluetooth, a wireless network, or a wired connection.
The following describes the video call method provided in the embodiment of the present application in detail.
The video call method provided by the embodiments of the present application may be applied to a wearable device. For convenience of description, unless otherwise specified, the embodiments of the present application take the wearable device as the execution body.
It is to be understood that the described execution body does not constitute a limitation of the embodiments of the present application.
Specifically, the flow of the video call method is shown in fig. 2, and may include the following steps:
step S201, in a case that the wearable device establishes a video call connection with the first electronic device, media information collected by the second electronic device is acquired.
In this embodiment of the application, the video call between the wearable device and the first electronic device may be established by accepting a video call initiated by the first electronic device, or by initiating a video call to the first electronic device.
Whether the video call is established by accepting a call initiated by the first electronic device or by initiating a call to the first electronic device, the media information collected by the second electronic device can be acquired by sending a request to the second electronic device. The media information includes, but is not limited to, an image or audio. The image may be an image captured by the second electronic device during the video call, a video captured by the second electronic device during the video call, an image stored on the second electronic device, a video stored on the second electronic device, an image browsed online by the second electronic device, or a video browsed online by the second electronic device, and is not specifically limited in this embodiment of the application.
In one embodiment of the application, issuing the request to the second electronic device may be a manual trigger by the user. Specifically, the wearable device may set a manually controlled touch area. Taking the smart glasses as an example, the touch area may be disposed at the edge of the lens, the center of the lens, or the outer side of the frame of the smart glasses. When a video call is established with a first electronic device, the wearable device does not send a request to a second electronic device, but sends a request to the second electronic device after a user manually triggers the wearable device.
The user may be a wearer wearing the wearable device, or a user who uses the wearable device for a video call but does not wear the wearable device.
The manual trigger can be divided into two cases, namely initiating a video call request and accepting a video call request.
For the above two cases, the user may control the wearable device to issue a request to the second electronic device by manually touching the touch area, performing a sliding operation in a predetermined direction or trajectory in the touch area, and the like.
In the embodiment of the present application, a manual operation can trigger the wearable device to send a request to the second electronic device that is in communication connection with it, so as to acquire the media information collected by the second electronic device. If the wearable device has not completed pairing with the second electronic device in advance, a manual operation can trigger the wearable device to send a pairing request to the second electronic device; after pairing is successfully established, the media request is sent to the second electronic device automatically, or the user triggers the wearable device to send the request through another manual operation.
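For illustration only, the following Python sketch outlines this pairing-then-request logic; the class and method names (is_paired, send_pairing_request, request_media) are hypothetical and are not defined by the present application.

class MediaRequestTrigger:
    """Illustrative sketch of the manual-trigger logic described above.
    The device object and its methods are hypothetical."""

    def __init__(self, second_device):
        self.second_device = second_device  # the companion (second) electronic device

    def on_touch_trigger(self):
        # If pairing was not completed in advance, pair first; the media request
        # may then be sent automatically, or the user may trigger it again manually.
        if not self.second_device.is_paired():
            self.second_device.send_pairing_request()
            return
        # Request the media information collected by the second electronic device.
        self.second_device.request_media()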
It should be noted that, the manual triggering of the wearable device to send the request to the second electronic device may also be implemented in some other ways, for example, the wearable device may also be controlled to send the request to the second electronic device through a predetermined gesture or a control button provided on the wearable device.
In another embodiment of the present application, in the case of establishing a video call with a first electronic device, the wearable device may also directly issue a request to a second electronic device to obtain media information collected by the second electronic device, without manual triggering by a user. Or, in the case of establishing a video call with the first electronic device, the wearable device does not need to send a request to the second electronic device, and the second electronic device directly and automatically sends the acquired media information to the wearable device.
After the wearable device and the second electronic device are successfully paired, the second electronic device can send the acquired media information to the wearable device, and the wearable device receives the media information sent by the second electronic device.
For example, the second electronic device may be provided with a camera, and when receiving a request sent by the wearable device, the second electronic device may turn on its camera to capture a current image or video, and send the captured image or video to the wearable device.
Or the second electronic device may store the image or the video in advance, and after the wearable device and the second electronic device are successfully paired, the second electronic device may send the currently browsed local image or the video to the wearable device.
Or the second electronic device may browse the image or the video online, and after the wearable device and the second electronic device are successfully paired, the second electronic device may send the online browsed image or the video to the wearable device.
Step S203, sending the media information to the first electronic device.
After receiving the media information sent by the second electronic device, the wearable device forwards the received media information to the first electronic device, so that a call with the first electronic device is realized.
The media information may be audio collected by the second electronic device, an image currently captured by the second electronic device, a video currently captured by the second electronic device, an image stored on the second electronic device, a video stored on the second electronic device, an image browsed online by the second electronic device, or a video browsed online by the second electronic device. The media information sent to the first electronic device is therefore no longer limited to the content that the wearable device itself can collect, so the video call can be applied to various scenarios such as face-to-face video calls, remote teaching, remote conferences, and remote image and video relay, achieving a better communication effect and improving user experience.
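As a minimal sketch of the flow in fig. 2, the following Python pseudocode relays media information from the paired second electronic device to the first electronic device; the link objects and their methods (call_active, receive_media, send_media) are assumptions for illustration and are not part of the claimed method.

def relay_media(second_device_link, first_device_link):
    """Sketch of steps S201 and S203: acquire media information collected by the
    paired second electronic device and forward it to the first electronic device."""
    while first_device_link.call_active():
        # Step S201: acquire media information (an image or audio frame)
        # collected by the second electronic device.
        media = second_device_link.receive_media()
        if media is None:
            continue
        # Step S203: send the media information to the first electronic device.
        first_device_link.send_media(media)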
Referring to fig. 3, a flow of a video call method according to another embodiment of the present application is shown, and a detailed description will be given to a flow process shown in fig. 3.
Step S301, under the condition that the wearable device and the first electronic device establish video call connection, media information collected by the second electronic device is obtained.
In the embodiment of the application, under the condition that the wearable device is connected with the first electronic device in a video call mode, the second electronic device can collect media information and send the collected media information to the wearable device.
In this embodiment of the application, the video call between the wearable device and the first electronic device may be established by accepting a video call initiated by the first electronic device, or by initiating a video call to the first electronic device.
Whether the video call is established by accepting a call initiated by the first electronic device or by initiating a call to the first electronic device, a request may be sent to the second electronic device to acquire the media information collected by the second electronic device. Alternatively, the second electronic device may automatically send the collected media information directly to the wearable device without any request.
The request may be sent to the second electronic device automatically when the video call with the first electronic device is established, or may be sent in response to a manual trigger by the user.
After the wearable device and the second electronic device are successfully paired, the second electronic device can send the collected media information to the wearable device, and the wearable device receives the media information sent by the second electronic device.
Step S303, sending the media information to the first electronic device.
After receiving the media information sent by the second electronic device, the wearable device forwards the received media information to the first electronic device.
In step S305, a target input is received.
In an embodiment of the present application, the wearable device, for example smart glasses, may be provided with two display areas or display screens. For convenience of description, in the embodiments of the present application, the two display areas or display screens of the wearable device are referred to as a first screen and a second screen, respectively.
When the media information is an image, during the video call the wearable device may by default display the local image collected by the second electronic device and the far-end image collected by the first electronic device on the first screen and the second screen. For convenience of description, in the embodiments of the present application, the content displayed on the first screen by default is referred to as a first picture, and the content displayed on the second screen by default is referred to as a second picture.
During a video call, different users have different habits: some users are used to displaying the first picture on the first screen and the second picture on the second screen, while others are used to displaying the first picture on the second screen and the second picture on the first screen. If the user is not accustomed to the current display positions of the first picture and the second picture, the user can provide a target input indicating that the display positions of the first picture and the second picture should be exchanged, and the wearable device receives this target input.
Step S307, in response to the target input, displaying the second picture on the first screen and displaying the first picture on the second screen.
After receiving the target input, the wearable device responds to the target input and swaps the display contents of the first screen and the second screen. That is, the second picture is displayed on the first screen, and the first picture is displayed on the second screen.
In this embodiment of the application, the target input may be, but is not limited to, generated based on a click operation, a sliding operation, a long-press operation, and the like of a user on the touch area, and is not specifically limited in this embodiment of the application.
For example, when the user needs to swap the contents displayed on the first screen and the second screen, the user may tap the touch area and slide in a predetermined direction, or tap the touch area and slide along a predetermined track; the contents displayed on the first screen and the second screen are then swapped.
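The screen-swap behaviour can be pictured with the short Python sketch below; the screen objects and their methods (current_content, show) are hypothetical and only illustrate exchanging the two display contents in response to a target input.

def swap_displayed_pictures(first_screen, second_screen):
    """Sketch of step S307: exchange the pictures shown on the two screens
    in response to the target input."""
    first_picture = first_screen.current_content()
    second_picture = second_screen.current_content()
    first_screen.show(second_picture)   # the first screen now shows the second picture
    second_screen.show(first_picture)   # the second screen now shows the first picture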
Further, when the user needs to cancel the display of the target screen, a target input indicating that the target screen is to be canceled may be input, and the wearable device responds to the target input and cancels the display of the target screen. The target screen may be the first screen, the second screen, or both the first screen and the second screen.
In the embodiment of the present application, the target input indicating to cancel the display of the target screen may be, but is not limited to, generated based on a click operation, a slide operation, a long-press operation, and the like of the user on the touch area, and is not specifically limited in the embodiment of the present application.
As shown in fig. 4, when a video call is performed through the smart glasses, if a user only wants to display a picture corresponding to the first electronic device at the far end, the user can cancel displaying a local picture by performing a predefined operation on the touch area.
It should be understood that the above-mentioned embodiments of switching the display content of the first screen and the second screen through the target input and canceling the display of the target screen are only examples and do not limit the present application.
Further, in this embodiment of the application, when the media information is an image stored on the second electronic device, a video stored on the second electronic device, an image browsed online by the second electronic device, or a video browsed online by the second electronic device, a touch operation may be performed in the touch area. In response to the user's touch operation, the displayed picture (image or video) is switched to the previous frame or the next frame of the currently displayed picture, and the switched picture is then sent to the first electronic device.
The touch operation may be a pressing operation combined with a sliding operation. For example, the user may touch the touch area with two fingers, keeping one finger fixed while the other performs a sliding operation; the displayed picture is then switched to the previous frame or the next frame of the currently displayed picture according to the sliding direction. For example, sliding leftward switches to the previous frame of the currently displayed picture and sliding rightward switches to the next frame; or sliding upward switches to the previous frame and sliding downward switches to the next frame, and so on.
As shown in fig. 5(a), the image sent by the second electronic device may be displayed on one of the first screen and the second screen. When the user's index finger touches the touch area and remains fixed, sliding the middle finger causes the display to leave the currently displayed image frame and switch to the next frame of the currently displayed image, as shown in fig. 5(b) and (c).
In the embodiment of the present application, each slide switches the displayed image to the previous frame or the next frame of the currently displayed image. It can be understood that, in other embodiments, a continuous slide may switch one frame for every short sliding distance, so that the displayed image is switched continuously as the finger slides. This makes switching the displayed picture even more convenient.
In addition, in the case where the media information is a video, the number of image frames in a video is large and the difference between adjacent frames is extremely small, so there is almost no visible change between the previous frame and the next frame. To switch between visibly different image frames, a single slide may therefore switch the displayed image to an image frame several frames before or after the current image frame.
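A minimal sketch of this frame navigation is given below, assuming a hypothetical list of frames and a slide event with a direction; the direction values ('left'/'up' for an earlier frame, 'right'/'down' for a later one) and the video_step value are illustrative assumptions, not values specified by the application.

def switch_frame(frames, current_index, slide_direction, is_video=False, video_step=10):
    """Sketch of the slide-to-switch behaviour described above."""
    # For a video, a single slide skips several frames because adjacent
    # frames are almost identical; for still images it moves one frame.
    step = video_step if is_video else 1
    if slide_direction in ("left", "up"):
        new_index = max(current_index - step, 0)
    else:  # "right" or "down"
        new_index = min(current_index + step, len(frames) - 1)
    return new_index, frames[new_index]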
It is to be understood that the touch operation described above is only an example and is not a limitation of the present application, and in other embodiments, the touch operation may have other forms.
The media information may be audio collected by the second electronic device, an image currently captured by the second electronic device, a video currently captured by the second electronic device, an image stored on the second electronic device, a video stored on the second electronic device, an image browsed online by the second electronic device, or a video browsed online by the second electronic device. The media information sent to the first electronic device is therefore no longer limited to the content that the wearable device itself can collect, so the video call can be applied to various scenarios such as face-to-face video calls, remote teaching, remote conferences, and remote image and video relay, achieving a better communication effect and improving user experience. Secondly, the display positions of the first picture and the second picture can be conveniently adjusted according to the user's habits, and the user can control the display of the target screen, which accommodates the individual habits of different users. In addition, the currently displayed picture content can be switched according to the user's touch operation, which further improves the video call function and the user experience. The video call method provided by the embodiments of the present application makes the video call operation more convenient and easy to use.
Referring to fig. 6, a flow of a video call method according to another embodiment of the present application is shown, and a detailed description will be given to a flow process shown in fig. 6.
Step 601, acquiring an image in a first direction through a camera under the condition that the wearable device and the first electronic device establish video call connection.
In the embodiment of the application, the wearable device can be provided with the camera and can be used for shooting images. Under the condition that the wearable device is connected with the first electronic device in a video call mode, the wearable device can acquire images in a first direction through the arranged camera.
Wherein, the first direction refers to the visual field direction of the wearable device when the wearable device is worn normally. In the embodiment of the present application, the first direction may be a direction away from the user.
Step 603, capturing an image of a second direction by a second electronic device paired with the wearable device.
In the embodiment of the application, under the condition that the wearable device is connected with the first electronic device in a video call mode, the camera arranged by the second electronic device can be used for collecting images in the second direction, and the second electronic device sends the collected images in the second direction to the wearable device.
In this embodiment, the second direction may be a direction facing the user.
Step 605, receiving the image in the second direction sent by the second electronic device.
Step 607, sending the image of the first direction and the image of the second direction to the first electronic device.
Specifically, after acquiring the image in the first direction through the camera and receiving the image in the second direction sent by the second electronic device, the wearable device may perform de-duplication processing on the image in the first direction and the image in the second direction to obtain a target image, and then send the target image obtained by the de-duplication to the first electronic device.
The de-duplication processing of the image in the first direction and the image in the second direction may be, but is not limited to, image de-duplication based on python-PIL, an image de-duplication algorithm based on block DCT, and the like, which is not specifically limited in the embodiment of the present application.
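As one possible way to carry out the de-duplication mentioned above, the sketch below compares an average (perceptual) hash of the two images using the Pillow (PIL) library, in the spirit of the python-PIL approach referred to in the description; the hash size and distance threshold are illustrative assumptions rather than values specified by this application.

from PIL import Image

def average_hash(image, hash_size=8):
    # Compute a simple perceptual (average) hash of a PIL image.
    gray = image.convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = list(gray.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def deduplicate(image_first_direction, image_second_direction, max_distance=5):
    """If the two images are near-duplicates (small Hamming distance between
    their hashes), keep only one as the target image; otherwise keep both."""
    h1 = average_hash(image_first_direction)
    h2 = average_hash(image_second_direction)
    distance = sum(b1 != b2 for b1, b2 in zip(h1, h2))
    if distance <= max_distance:
        return [image_first_direction]
    return [image_first_direction, image_second_direction]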
In a case that the wearable device has established a video call connection with the first electronic device, the camera collects the image in the first direction, the second electronic device paired with the wearable device collects the image in the second direction, and the image in the first direction and the image in the second direction are sent to the first electronic device. The images received by the first electronic device are therefore no longer limited to the images that the wearable device itself can collect, so the video call can be applied to various scenarios such as face-to-face video calls, remote teaching, remote conferences, and remote image and video relay, achieving a better communication effect and improving user experience. Secondly, when the image in the first direction and the image in the second direction are sent to the first electronic device, de-duplication processing is performed on them, which reduces the amount of data transmitted during the video call. The video call method provided by the embodiments of the present application makes the video call more convenient and easy to use.
Fig. 7 is a block diagram schematic diagram of a wearable device 700 provided by one embodiment of the present description. Referring to fig. 7, in one software implementation, a wearable device 700 may include:
an obtaining module 701 configured to obtain media information collected by a second electronic device paired with the wearable device 700 when the wearable device 700 establishes a video call connection with a first electronic device.
A sending module 702 configured to send the media information to the first electronic device.
The media information includes an image or audio, and the image may be an image currently captured by the second electronic device, a video currently captured by the second electronic device, an image stored by the second electronic device, a video stored by the second electronic device, an image browsed online by the second electronic device, or a video browsed online by the second electronic device.
In this embodiment, the wearable device 700 includes a first screen and a second screen, and the wearable device 700 further includes:
the receiving module 703 is configured to receive a target input when the first screen displays a first screen and the second screen displays a second screen.
A display control module 704 configured to display the second picture on the first screen in response to the target input, with the second screen displaying the first picture; or to cancel the display of a target screen in response to the target input, the target screen being at least one of the first screen and the second screen.
In this embodiment, the wearable device 700 is further provided with a camera, and the wearable device 700 further includes:
an acquisition control module 705 configured to acquire, by the camera, an image of a first direction in a case where the wearable device establishes a video call connection with a first electronic device.
Wherein the first direction is a direction away from the user.
The acquisition module 701 is specifically configured to acquire an image of a second direction collected by a second electronic device paired with the wearable device.
Wherein the second direction is a direction facing the user.
The sending module 702 is specifically configured to send the image of the first direction and the image of the second direction to the first electronic device.
Specifically, the sending module 702 is configured to perform deduplication processing on the image in the first direction and the image in the second direction to obtain a target image; and sending the target image to the first electronic device.
The wearable device 700 provided by the present application can, in a case that it has established a video call with the first electronic device, acquire media information through the second electronic device paired with it and send the acquired media information to the first electronic device. The media information received by the first electronic device is therefore not limited to the content that the wearable device itself can collect, so the video call can be applied to various scenarios such as face-to-face video calls, remote teaching, remote conferences, and remote image and video relay, achieving a better communication effect and improving user experience. Secondly, the display positions of the first picture and the second picture can be conveniently adjusted according to the user's habits, and the user can control the display of the target screen, which accommodates the individual habits of different users. In addition, when the image in the first direction and the image in the second direction are sent to the first electronic device, de-duplication processing can be performed on them, which reduces the amount of data transmitted during the video call. The wearable device 700 provided by the embodiments of the present application makes the video call operation more convenient and easy to use.
Fig. 8 is a hardware structure diagram of a wearable device according to an embodiment of the present disclosure.
The wearable device includes but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the configuration shown in fig. 8 does not constitute a limitation of the wearable device, which may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In an embodiment of the invention, the wearable device comprises smart glasses, a wearable helmet and the like.
The radio frequency unit 801 is configured to acquire media information acquired by a second electronic device paired with the wearable device, when the wearable device and a first electronic device establish a video call connection.
A processor 810 configured to send the media information to the first electronic device.
The wearable device provided by the present application can, in a case that it has established a video call with the first electronic device, acquire media information through the paired second electronic device and send the acquired media information to the first electronic device. The media information received by the first electronic device is therefore not limited to the content that the wearable device itself can collect, so the video call can be applied to various scenarios such as face-to-face video calls, remote teaching, remote conferences, and remote image and video relay, achieving a better communication effect and improving user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards the received downlink data to the processor 810 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The wearable device provides wireless broadband internet access to the user through the network module 802, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the wearable device (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042. The graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or another storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801 and then output.
The wearable device also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the wearable device is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the wearable device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The Display unit 806 may include a Display panel 8061, and the Display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the wearable device. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 8071 (e.g., operations by a user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 810, receives a command from the processor 810, and executes the command. In addition, the touch panel 8071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the wearable device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the wearable device, and the implementation is not limited herein.
The interface unit 808 is an interface through which an external device is connected to the wearable apparatus. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the wearable apparatus or may be used to transmit data between the wearable apparatus and the external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 809 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 810 is a control center of the wearable device, connects various parts of the entire wearable device by various interfaces and lines, and performs various functions of the wearable device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling up data stored in the memory 809, thereby monitoring the wearable device as a whole. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The wearable device may also include a power supply 811 (such as a battery) to power the various components, and preferably, the power supply 811 may be logically coupled to the processor 810 via a power management system to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the wearable device includes some functional modules that are not shown, and are not described in detail here.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the method embodiments shown in fig. 2, fig. 3, and fig. 6, and can achieve the same technical effect, and is not described herein again to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A video call method is applied to a wearable device and comprises the following steps:
acquiring media information acquired by a second electronic device paired with the wearable device under the condition that the wearable device and a first electronic device establish video call connection;
sending the media information to the first electronic device;
wherein the media information comprises an image or audio.
2. The method of claim 1, wherein the wearable device comprises a first screen and a second screen, the method further comprising:
receiving target input under the condition that the first screen displays a first picture and the second screen displays a second picture;
displaying the second picture on the first screen in response to the target input, wherein the second screen displays the first picture; or,
canceling display of a target screen in response to the target input, the target screen being at least one of the first screen and the second screen.
3. The method of claim 1, wherein the wearable device is provided with a camera, and wherein if the wearable device establishes a video call connection with a first electronic device, the method further comprises:
acquiring an image in a first direction through the camera;
the acquiring media information collected by a second electronic device paired with the wearable device includes:
acquiring an image of a second direction acquired by a second electronic device paired with the wearable device;
the sending the media information to the first electronic device includes:
and sending the image of the first direction and the image of the second direction to the first electronic equipment.
4. The method of claim 3, wherein sending the image of the first direction and the image of the second direction to the first electronic device comprises:
carrying out duplicate removal processing on the image in the first direction and the image in the second direction to obtain a target image;
and sending the target image to the first electronic equipment.
5. The method of claim 3, wherein the first direction is a direction away from a user and the second direction is a direction facing the user.
6. A wearable device, characterized in that the wearable device comprises:
the acquisition module is configured to acquire media information acquired by a second electronic device paired with the wearable device under the condition that the wearable device establishes a video call connection with a first electronic device;
a sending module configured to send the media information to the first electronic device;
wherein the media information comprises an image or audio.
7. The wearable device of claim 6, comprising a first screen and a second screen, the wearable device further comprising:
a receiving module configured to receive a target input in a case where the first screen displays a first picture and the second screen displays a second picture;
a display control module configured to display the second picture on the first screen in response to the target input, the second screen displaying the first picture; or,
canceling display of a target screen in response to the target input, the target screen being at least one of the first screen and the second screen.
8. The wearable device of claim 6, wherein the wearable device is provided with a camera, the wearable device further comprising:
the acquisition control module is configured to acquire an image in a first direction through the camera under the condition that the wearable device and a first electronic device establish video call connection;
the acquisition module is configured to acquire an image of a second direction through a second electronic device paired with the wearable device;
the sending module is configured to send the image of the first direction and the image of the second direction to the first electronic device.
9. The wearable device of claim 8, wherein the sending module is configured to de-duplicate the image of the first direction and the image of the second direction to obtain a target image; and
and sending the target image to the first electronic equipment.
10. The wearable device of claim 8, wherein the first direction is a direction away from a user and the second direction is a direction facing the user.
11. A wearable device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the video call method of any of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video call method according to any one of claims 1 to 5.
CN201911205031.XA 2019-11-29 2019-11-29 Video call method, wearable device and storage medium Pending CN110830747A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911205031.XA CN110830747A (en) 2019-11-29 2019-11-29 Video call method, wearable device and storage medium
PCT/CN2020/131084 WO2021104248A1 (en) 2019-11-29 2020-11-24 Video call method, wearable device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911205031.XA CN110830747A (en) 2019-11-29 2019-11-29 Video call method, wearable device and storage medium

Publications (1)

Publication Number Publication Date
CN110830747A (en) 2020-02-21

Family

ID=69542238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911205031.XA Pending CN110830747A (en) 2019-11-29 2019-11-29 Video call method, wearable device and storage medium

Country Status (2)

Country Link
CN (1) CN110830747A (en)
WO (1) WO2021104248A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102274743B1 (en) * 2015-01-30 2021-07-08 Samsung Electronics Co., Ltd. Electronic device and method for processing a display area in electronic device
CN106814988A (en) * 2016-12-26 2017-06-09 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. A kind of screen control method, device and terminal
CN107317993B (en) * 2017-08-08 2019-08-16 Vivo Mobile Communication Co., Ltd. A kind of video call method and mobile terminal
CN108712577B (en) * 2018-08-28 2021-03-12 Vivo Mobile Communication Co., Ltd. Call mode switching method and terminal equipment
CN110139064B (en) * 2018-09-29 2021-10-01 Guangdong Genius Technology Co., Ltd. Video call method of wearable device and wearable device
CN110830747A (en) * 2019-11-29 2020-02-21 Vivo Mobile Communication Co., Ltd. Video call method, wearable device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011083934A2 (en) * 2010-01-05 2011-07-14 Lg Electronics Inc. Method for connecting video communication to other device, video communication apparatus and display apparatus thereof
CN105657324A (en) * 2015-12-31 2016-06-08 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Method and device for processing video images, and terminals
CN109068084A (en) * 2018-08-31 2018-12-21 Nubia Technology Co., Ltd. Video calling picture display process, mobile terminal and computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021104248A1 (en) * 2019-11-29 2021-06-03 Vivo Mobile Communication Co., Ltd. Video call method, wearable device, and storage medium
CN112181344A (en) * 2020-10-19 2021-01-05 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Device calling method, device calling apparatus, interaction system, electronic device, and storage medium
CN114095685A (en) * 2021-11-15 2022-02-25 Zhuhai Readboy Software Technology Co., Ltd. Video call method based on multi-screen telephone watch

Also Published As

Publication number Publication date
WO2021104248A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
CN108668083B (en) Photographing method and terminal
CN108182019B (en) Suspension control display processing method and mobile terminal
CN108184070B (en) Shooting method and terminal
CN109388304B (en) Screen capturing method and terminal equipment
CN110096326B (en) Screen capturing method, terminal equipment and computer readable storage medium
CN110196667B (en) Notification message processing method and terminal
CN110213440B (en) Image sharing method and terminal
CN109379484B (en) Information processing method and terminal
CN109525710B (en) Method and device for accessing application program
CN109032486B (en) Display control method and terminal equipment
CN111666009B (en) Interface display method and electronic equipment
CN108881617B (en) Display switching method and mobile terminal
CN109710349B (en) Screen capturing method and mobile terminal
WO2021104248A1 (en) Video call method, wearable device, and storage medium
CN107728923B (en) Operation processing method and mobile terminal
CN108228902B (en) File display method and mobile terminal
CN108804628B (en) Picture display method and terminal
WO2019120190A1 (en) Dialing method and mobile terminal
CN108600544B (en) Single-hand control method and terminal
CN110321449B (en) Picture display method and terminal
CN109669656B (en) Information display method and terminal equipment
CN109885242B (en) Method for executing operation and electronic equipment
CN107967086B (en) Icon arrangement method and device for mobile terminal and mobile terminal
CN111694497B (en) Page combination method and electronic equipment
CN111526248B (en) Audio output mode switching method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221

RJ01 Rejection of invention patent application after publication