CN112351235B - Video call method - Google Patents

Video call method

Info

Publication number
CN112351235B
Authority
CN
China
Prior art keywords
camera
identification information
video data
data acquired
equipment
Prior art date
Legal status
Active
Application number
CN201911108857.4A
Other languages
Chinese (zh)
Other versions
CN112351235A (en)
Inventor
陈勇
张创
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to PCT/CN2020/105097 (WO2021023055A1)
Publication of CN112351235A
Application granted
Publication of CN112351235B
Status: Active

Classifications

    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 7/14: Systems for two-way working
    • H04W 4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H04W 52/0209: Power saving arrangements in terminal devices
    • H04W 76/11: Allocation or use of connection identifiers
    • Y02D 30/70: Reducing energy consumption in wireless communication networks

Abstract

The present application provides a video call method applied to a first device having at least one camera, where the first device uses the camera and an application to establish a video call with a second device. The method includes: after the first device establishes the video call with the second device using the camera and the application, displaying, on the interactive interface of the video call, identification information of at least one third device that is wirelessly connected to the first device and equipped with a camera; when the first device detects a preset user operation acting on first identification information on the interactive interface, triggering at least one camera of the device corresponding to the first identification information to capture video data, where the first identification information is one piece of the identification information of the at least one camera-equipped third device; and the first device obtains the video data captured by the at least one camera of the device corresponding to the first identification information and uses that video data for the video call.

Description

Video call method
The present application claims priority to Chinese patent application No. 201910723119.4, entitled "A method for video call and mobile device", filed with the Chinese Patent Office on 6/8/2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of wireless communications, and in particular, to a method for video call.
Background
With the popularization of mobile devices, video calls made through an application on a mobile device (for example, a mobile phone) to another mobile device are increasingly common. During such a video call, the mobile device can only use the image data or video data captured by its own camera. If the performance parameters of that camera are poor, or its view is blocked by an obstacle, the video picture of the call has low resolution and a small visual range. The present application provides a video call method that uses other mobile devices communicatively connected to the mobile device to conduct the video call, which can greatly improve user experience.
Disclosure of Invention
The present application provides a method for video call in which an application can obtain images captured by the cameras of other devices while believing that it is still using at least one camera of the first device. The camera of the other device acts as a local virtual camera of the first device, and the switch of the data source from one camera to the camera of the other device is imperceptible to the application. A better viewing angle can thus be obtained through the cameras of other devices, improving user experience.
A first aspect provides a method for video call, applied to a first device having at least one camera, where the first device establishes a video call with a second device by using the at least one camera and an application, both the first device and the second device have the application installed, and the application has a video call function. The method includes: after the first device establishes the video call with the second device by using the at least one camera and the application, displaying, on the interactive interface of the video call, identification information of at least one third device that is wirelessly connected to the first device and provided with a camera; when the first device detects a preset user operation acting on first identification information on the interactive interface, triggering at least one camera of the device corresponding to the first identification information to capture video data, where the first identification information is one piece of the identification information of the at least one camera-equipped third device; and the first device obtains the video data captured by the at least one camera of the device corresponding to the first identification information and uses that video data for the video call. It should be understood that "at least one third device" refers to one or more devices other than the first device and the second device.
The identification information may include an icon, text, or a control; the present application does not limit its specific form.
The user can select any one of multiple devices and have the video data captured by the camera of the selected device. For example, when the selected device is one of the at least one third device, the application can obtain the video data captured by that device's camera while believing that it is still using at least one camera of the first device. The camera of the selected device acts as a local virtual camera of the first device, and the switch of the data source from the at least one camera of the first device to the camera of the selected device is imperceptible to the application. A better viewing angle can thus be obtained through the camera of the external device, improving user experience.
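The following sketch illustrates this "local virtual camera" idea. It is a minimal, hypothetical example: the interface and class names (VideoSource, VirtualCamera, and so on) are assumptions made for illustration and are not the patent's actual implementation.

```kotlin
// Hypothetical sketch of the "local virtual camera": the application reads frames through
// a single VideoSource interface, so switching from the local camera to a remote device's
// camera is invisible to the application layer. All names are illustrative assumptions.
interface VideoSource {
    fun nextFrame(): ByteArray?   // an encoded video frame, or null if none is ready yet
}

class LocalCameraSource : VideoSource {
    override fun nextFrame(): ByteArray? = ByteArray(0) // placeholder for real local capture
}

class RemoteCameraSource(private val deviceId: String) : VideoSource {
    override fun nextFrame(): ByteArray? = ByteArray(0) // placeholder for frames received over the wireless link
}

// The "camera" the video call application sees: the active source can be swapped
// underneath it without the application noticing.
class VirtualCamera(initial: VideoSource) : VideoSource {
    @Volatile private var active: VideoSource = initial
    fun switchTo(source: VideoSource) { active = source }
    override fun nextFrame(): ByteArray? = active.nextFrame()
}

fun main() {
    val virtualCamera = VirtualCamera(LocalCameraSource())
    // The user taps the identification information of a third device on the call interface:
    virtualCamera.switchTo(RemoteCameraSource(deviceId = "external-camera-01"))
    // The application keeps calling nextFrame() exactly as before.
    virtualCamera.nextFrame()
}
```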
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: when the first device detects a preset user operation acting on second identification information on the interactive interface, triggering at least one camera of the device corresponding to the second identification information to capture video data, where the second identification information is one piece of the identification information of the at least one camera-equipped third device; and the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information and uses that video data for the video call.
After selecting the device corresponding to the first identification information as the device that captures the video data, the user can also select the device corresponding to the second identification information instead. The video data is then captured by the camera of the device corresponding to the second identification information, which likewise acts as a local virtual camera of the first device; the switch from the at least one camera of the first device to that camera is imperceptible to the application, so a better viewing angle can be obtained through the camera of the external device and user experience is improved.
With reference to the first aspect, in certain implementations of the first aspect, the obtaining, by the first device, of the video data captured by the at least one camera of the device corresponding to the first identification information, and the using of that video data for the video call, include: the first device obtains the video data captured by the at least one camera of the device corresponding to the first identification information, no longer obtains video data captured by the at least one camera of the first device, and uses the video data captured by the at least one camera of the device corresponding to the first identification information for the video call; or the first device obtains the video data captured by the at least one camera of the device corresponding to the first identification information, also obtains video data captured by the at least one camera of the first device, and uses the video data captured by the at least one camera of the device corresponding to the first identification information for the video call; or the first device obtains the video data captured by the at least one camera of the device corresponding to the first identification information, also obtains video data captured by the at least one camera of the first device, and uses both for the video call.
According to the embodiment of the application, after a user selects a device corresponding to the first identification information in the multiple devices as a device for acquiring video data, at least one camera of the first device can be reconfigured according to actual needs.
With reference to the first aspect, in some implementations of the first aspect, the obtaining, by the first device, of the video data captured by the at least one camera of the device corresponding to the second identification information, and the using of that video data for the video call, include: the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, no longer obtains video data captured by the at least one camera of the first device or by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by the at least one camera of the device corresponding to the second identification information for the video call; or the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, also obtains video data captured by the at least one camera of the first device and by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by the at least one camera of the device corresponding to the second identification information for the video call; or the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, also obtains video data captured by the at least one camera of the first device, no longer obtains video data captured by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by the at least one camera of the device corresponding to the second identification information and by the at least one camera of the first device for the video call; or the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, also obtains video data captured by the at least one camera of the device corresponding to the first identification information, no longer obtains video data captured by the at least one camera of the first device, and uses the video data captured by the at least one camera of the device corresponding to the second identification information and by the at least one camera of the device corresponding to the first identification information for the video call; or the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, also obtains video data captured by the at least one camera of the first device and by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by all three for the video call.
According to the embodiments of the present application, after the user selects the device corresponding to the second identification information to replace the device corresponding to the first identification information as the device that captures the video data, the remaining cameras and devices can be configured as actually needed; a sketch of the possible source combinations follows.
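The sketch below models the source combinations listed above in hedged form: which cameras keep capturing and which streams actually feed the call. The enum values and class names are illustrative assumptions rather than the patent's own terminology.

```kotlin
// Illustrative sketch (all names are assumptions): after the user picks a device on the
// call interface, the first device decides which cameras keep capturing video data and
// which of the captured streams are actually sent into the video call.
enum class Source { LOCAL_CAMERA, FIRST_SELECTED_DEVICE, SECOND_SELECTED_DEVICE }

data class CaptureRouting(
    val keepCapturing: Set<Source>,   // cameras that continue to capture video data
    val usedInCall: Set<Source>       // streams that are used in the video call
)

// One of the alternatives above: both the local camera and the newly selected device keep
// capturing, but only the newly selected device's stream is used in the call.
val remoteForCallLocalStillCaptured = CaptureRouting(
    keepCapturing = setOf(Source.LOCAL_CAMERA, Source.SECOND_SELECTED_DEVICE),
    usedInCall = setOf(Source.SECOND_SELECTED_DEVICE)
)

fun main() {
    println(remoteForCallLocalStillCaptured)
}
```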
With reference to the first aspect, in some implementations of the first aspect, before using at least one camera of a device corresponding to the first identification information, the at least one camera of the device corresponding to the first identification information is configured according to a configuration parameter of the at least one camera of the first device; and/or before using the at least one camera of the device corresponding to the second identification information, configuring the at least one camera of the device corresponding to the second identification information according to the configuration parameters of the at least one camera of the first device.
According to the embodiments of the present application, the first device can configure the camera of the selected device according to the configuration of its own at least one camera, so that the images captured by that camera meet the application's requirements.
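A minimal sketch of this configuration step is shown below, assuming the configuration parameters include resolution, frame rate, and color space; the actual parameter set is not specified here and all names are illustrative.

```kotlin
// Hypothetical configuration parameters of a camera (the real parameter set is not specified here).
data class CameraConfig(
    val width: Int,
    val height: Int,
    val frameRate: Int,
    val colorSpace: String
)

// Before the external camera is used, it is configured from the local camera's parameters
// so that its output matches what the application already expects from the local camera.
fun configureRemoteCamera(localConfig: CameraConfig, sendToRemote: (CameraConfig) -> Unit) {
    sendToRemote(localConfig)
}

fun main() {
    val local = CameraConfig(width = 1280, height = 720, frameRate = 30, colorSpace = "NV21")
    configureRemoteCamera(local) { config -> println("configure remote camera with $config") }
}
```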
With reference to the first aspect, in certain implementations of the first aspect, the interactive interface of the video call further includes third identification information of a fourth device that is wirelessly connected to the first device and has at least one microphone; when the first device detects a preset user operation acting on the third identification information on the interactive interface, it triggers the device corresponding to the third identification information to capture audio data with its at least one microphone; and the first device obtains the audio data captured by the at least one microphone of the device corresponding to the third identification information and uses that audio data for the video call.
With reference to the first aspect, in some implementations of the first aspect, the interactive interface of the video call further includes a fourth identifier of a fifth device that is wirelessly connected to the first device and has at least one speaker; when the first device detects a preset user operation acting on the fourth identifier on the interactive interface, it sends the audio data received in the video call to the device corresponding to the fourth identifier and triggers that device to play the received audio data.
With reference to the first aspect, in certain implementations of the first aspect, the interactive interface of the video call further includes a fifth identifier of a sixth device that is wirelessly connected to the first device and has at least one microphone and at least one speaker; when the first device detects a preset user operation acting on the fifth identifier on the interactive interface, it triggers the device corresponding to the fifth identifier to capture audio data with its at least one microphone, sends the audio data received in the video call to that device, and triggers it to play the received audio data; and the first device obtains the audio data captured by the at least one microphone of the device corresponding to the fifth identifier and uses that audio data for the video call.
With reference to the first aspect, in some implementations of the first aspect, the wireless connection between the first device and the other devices is implemented by any one of a Bluetooth protocol, a Wi-Fi protocol, an NFC protocol, or a mobile communication protocol.
The mobile communication protocol may include a 2G standard protocol, a 3G standard protocol, a 4G standard protocol, a 5G standard protocol, a 6G standard protocol, or a 6G subsequent standard protocol.
With reference to the first aspect, in certain implementations of the first aspect, the obtaining, by the first device, of the video data captured by the at least one camera of the device corresponding to the first identification information, and the using of that video data for the video call, include: the data switching module of the first device receives a capture request for the at least one camera of the first device and forwards the request to the data transmission channel module; the data transmission channel module receives the video data from the device corresponding to the first identification information, converts the resolution and color space of the received video data into the format required by the application, and passes the converted video data to the data switching module; and the data switching module passes the received video data to the data return interface of the camera device session management module, so that the video data captured by the at least one camera of the device corresponding to the first identification information is returned to the application along the original data return flow.
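A rough sketch of this data path is given below. The module names mirror the description above, but their interfaces and method names are assumptions; this is an illustrative pipeline, not the actual framework code.

```kotlin
// Rough sketch of the data path, with hypothetical class and method names: the data
// switching module forwards the application's capture request to the data transmission
// channel, the channel converts incoming remote frames to the resolution and color space
// the application asked for, and the converted frames are handed back through the camera
// session's data-return interface as if they came from a local camera.
class Frame(val bytes: ByteArray, val width: Int, val height: Int, val colorSpace: String)

class DataTransmissionChannel(
    private val targetWidth: Int,
    private val targetHeight: Int,
    private val targetColorSpace: String
) {
    // Placeholder for the resolution/color-space conversion of a received remote frame.
    fun convert(frame: Frame): Frame = Frame(frame.bytes, targetWidth, targetHeight, targetColorSpace)
}

class CameraSessionManager {
    // Stands in for the original data-return flow back to the application.
    fun returnToApplication(frame: Frame) = println("returning ${frame.width}x${frame.height} ${frame.colorSpace} frame")
}

class DataSwitchingModule(
    private val channel: DataTransmissionChannel,
    private val session: CameraSessionManager
) {
    // Called for each frame received from the device selected via the first identification information.
    fun onRemoteFrame(frame: Frame) = session.returnToApplication(channel.convert(frame))
}

fun main() {
    val switching = DataSwitchingModule(DataTransmissionChannel(1280, 720, "NV21"), CameraSessionManager())
    switching.onRemoteFrame(Frame(ByteArray(0), width = 1920, height = 1080, colorSpace = "YUV420"))
}
```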
A second aspect provides a method of shooting, applied to a first device having at least one camera, the method including: presenting identification information of each of a plurality of devices, the plurality of devices including a first device and at least one third device, the at least one third device being in communication connection with the first device, and the at least one third device each having a camera; responding to a user operation to determine a device from the plurality of devices; and acquiring and presenting images shot by the equipment. It should be understood that: "at least one third device" refers to one or more devices that do not include the first device.
The user can select the camera of any one of the multiple devices (for example, a third device) to capture an image. It should be understood that the third device is one of the at least one third device; the application inside the first device can obtain the image captured by the third device's camera while believing that it is still using at least one camera of the first device, so a better viewing angle can be obtained through the camera of the external device and user experience is improved.
With reference to the second aspect, in certain implementations of the second aspect, the obtaining and presenting of the image taken by the device include: sending a shooting instruction to the device; and receiving the image taken by the device according to the shooting instruction.
According to the embodiment of the application, the first device can send a shooting instruction to the device according to the shooting requirement of the application program in use, so that the device can shoot according to the requirement.
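The exchange can be pictured as in the sketch below; the message fields and the link interface are illustrative assumptions, not the actual signaling format used between the devices.

```kotlin
// Hedged sketch of the shooting exchange: the first device sends a shooting instruction
// tailored to the requesting application, and the third device replies with the image it
// captured. The message types and transport are assumptions.
data class ShootingInstruction(val width: Int, val height: Int, val flash: Boolean)
class CapturedImage(val jpegBytes: ByteArray)

interface RemoteShootingLink {
    fun send(instruction: ShootingInstruction)
    fun receiveImage(): CapturedImage
}

fun captureOnRemoteDevice(link: RemoteShootingLink): CapturedImage {
    link.send(ShootingInstruction(width = 1920, height = 1080, flash = false))
    return link.receiveImage()
}

fun main() {
    // A fake link standing in for the wireless connection to the third device.
    val fakeLink = object : RemoteShootingLink {
        override fun send(instruction: ShootingInstruction) = println("sending $instruction")
        override fun receiveImage(): CapturedImage = CapturedImage(jpegBytes = ByteArray(0))
    }
    val image = captureOnRemoteDevice(fakeLink)
    println("received ${image.jpegBytes.size} bytes")
}
```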
With reference to the second aspect, in some implementations of the second aspect, each of the at least one third device installs a corresponding application, and the first device and the at least one third device perform signaling interaction and data transmission through the application, respectively.
It should be understood that: the application is installed on the at least one third device, so that the first device and the at least one third device respectively use the application to carry out signaling interaction and data transmission, and the transmission efficiency is higher by using the application.
With reference to the second aspect, in some implementations of the second aspect, the presenting identification information of each of the plurality of devices includes: and presenting the identification information of each device in the plurality of devices on an interactive interface corresponding to the at least one camera.
In a scenario where an application uses at least one camera, for example a video call or shooting, the identification information of each of the multiple devices can be displayed on the interactive interface for the user to select, which improves user experience.
With reference to the second aspect, in certain implementations of the second aspect, when the device is one of the at least one third device, the method further includes: generating first configuration information according to the configuration of the at least one camera; and sending the first configuration information to the device.
The first device can configure the camera of the selected device according to its own at least one camera, so that the images captured by that camera meet the requirements.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: acquiring second configuration information, wherein the second configuration information is used for indicating the configuration of a camera of the equipment; and generating first configuration information according to the configuration of the at least one camera, including: and generating the first configuration information according to the configuration of the at least one camera and the second configuration information.
When the device is one of the at least one third device, that device may send the second configuration information, which indicates the configuration (capability) of its camera, to the first device. The first device can determine the capability of that camera from the second configuration information and then generate the first configuration information according to the configuration of its own at least one camera together with the second configuration information; the first configuration information is used to configure the camera of the device.
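A hedged sketch of this negotiation is shown below: the first configuration information is derived from the local camera's configuration, clamped by the capability reported in the second configuration information. The field names are assumptions made for illustration.

```kotlin
// Assumed shape of the local camera configuration and of the remote camera capability.
data class CameraConfig(val width: Int, val height: Int, val frameRate: Int, val colorSpace: String)
data class CameraCapability(val maxWidth: Int, val maxHeight: Int, val maxFrameRate: Int)

// Derive the first configuration information from the local configuration, limited by
// what the remote camera reported it can do in the second configuration information.
fun buildFirstConfiguration(local: CameraConfig, remote: CameraCapability): CameraConfig =
    CameraConfig(
        width = minOf(local.width, remote.maxWidth),
        height = minOf(local.height, remote.maxHeight),
        frameRate = minOf(local.frameRate, remote.maxFrameRate),
        colorSpace = local.colorSpace
    )

fun main() {
    val first = buildFirstConfiguration(
        local = CameraConfig(1920, 1080, 30, "NV21"),
        remote = CameraCapability(maxWidth = 1280, maxHeight = 720, maxFrameRate = 60)
    )
    println(first) // CameraConfig(width=1280, height=720, frameRate=30, colorSpace=NV21)
}
```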
With reference to the second aspect, in certain implementations of the second aspect, when the device is one of the at least one third device, the method further includes: turning off the at least one camera while presenting the image taken by the device.
When the device is one of the at least one third device, the at least one camera of the first device is turned off when the image taken by the device is presented, which may reduce power consumption of the first device.
With reference to the second aspect, in some implementations of the second aspect, the wireless connection is implemented by any one of a bluetooth protocol, a wireless local area network Wi-Fi protocol, an NFC protocol, or a mobile communication protocol.
In a third aspect, the present application provides a mobile device, comprising: a touch screen, wherein the touch screen comprises a touch sensitive surface and a display; at least one camera; at least one processor; a memory; a plurality of application programs; and at least one program, wherein the at least one program is stored in the memory, the at least one program comprising instructions which, when run on the first device, cause the first device to perform the method of any of the above first or second aspects.
In a fourth aspect, there is provided a computer storage medium storing computer-executable instructions which, when invoked by a computer, cause the computer to perform the method of any one of the first aspect or the second aspect above.
In a fifth aspect, there is provided a graphical user interface system on an electronic device, the electronic device having a display screen, a camera, a memory, and one or more processors to execute one or more computer programs stored in the memory, wherein the graphical user interface comprises a graphical user interface displayed when the electronic device performs a method as described in any one of the first aspect or the second aspect above.
Drawings
Fig. 1 is a schematic architecture diagram of a communication system suitable for use in embodiments of the present application.
Fig. 2 is a schematic flowchart of a shooting method provided in an embodiment of the present application.
Fig. 3 is an interaction interface of a mobile device according to an embodiment of the present disclosure.
Fig. 4 is an interaction interface of a mobile device according to an embodiment of the present disclosure.
Fig. 5 is an interaction interface of a mobile device according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a software architecture of a first device according to an embodiment of the present application.
Fig. 7 is a schematic diagram of interaction of modules of a first device according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of a handover principle provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of another software architecture of a first device according to an embodiment of the present application.
Fig. 10 is a schematic diagram of interaction of modules of another first device according to an embodiment of the present application.
Fig. 11 is a schematic diagram of another handover principle provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The mobile device in the embodiments of the present application may be a camera-equipped mobile device such as a mobile phone, a tablet computer, a notebook computer, a smart band, a smart watch, a smart helmet, smart glasses, or an unmanned aerial vehicle. The mobile device may also be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a mobile device in a 5G network, a mobile device in a future evolved Public Land Mobile Network (PLMN), or the like, which is not limited in the embodiments of the present application.
Fig. 1 is a schematic diagram of an architecture of a system suitable for use in embodiments of the present application.
As shown in fig. 1, the system 100 may include at least one mobile device 101. The mobile device 101 may be wirelessly connected to other mobile devices, such as a mobile phone, an unmanned aerial vehicle, an external camera, a computer, a wireless smart speaker, a wireless player, a wireless television, a wireless large-screen device, or a smart wearable device. The wireless connection technologies include, but are not limited to: global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), the fifth generation mobile communication technology (5G), global navigation satellite system (GNSS), wireless local area network (WLAN, e.g. wireless fidelity, Wi-Fi), near field communication (NFC), frequency modulation (FM), infrared (IR), and the like. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
It should be understood that in some particular cases, the mobile devices may also be communicatively coupled via a wired connection.
The mobile device 101 may include a housing and a display screen, wherein the display screen is mounted on the housing. The mobile device 101 further includes electronic components (not shown) disposed inside the housing, including but not limited to a processor, a camera, a flash, a microphone, a battery, and the like.
Alternatively, the mobile device 101 may be connected to another mobile device through wireless communication, which may be implemented by any one of a Bluetooth protocol, a wireless fidelity (Wi-Fi) protocol, or a mobile communication protocol.
The mobile communication protocol may include a 2G standard protocol, a 3G standard protocol, a 4G standard protocol, a 5G standard protocol, a 6G standard protocol, or a subsequent standard protocol after 6G.
With the popularization of various mobile devices, shooting or video calls made through an application on a mobile device (for example, a mobile phone) with another mobile device are increasingly common. During shooting or a video call, the mobile device can only use the image data or video data captured by its own camera. If the performance parameters of that camera are poor, or its view is blocked by an obstacle, the captured images have low resolution, or the video picture of the call has low resolution and a small visual range, which degrades user experience. The present application provides a video call method that conducts the video call using video data captured by the cameras of other mobile devices that are wirelessly connected to the mobile device, which can greatly improve user experience.
The present application provides a method for video call in which a mobile device obtains image data through the camera of another camera-equipped mobile device that is wirelessly connected to it, and applies the obtained data on the mobile phone.
It should be understood that a video is a continuous sequence of images, that is, multiple mutually associated image frames, and is also referred to as a video stream. The video data described in the embodiments of the present application may be pictures or video taken on a mobile device, which is not limited in the present application.
Here, a few usage scenarios of the present invention are listed, which are only examples and do not limit the usage scenarios of the present invention.
Scene one: driving scenario
In a driving scenario, a user starts a video call on a mobile phone before getting into the car. After entering the cab and starting to drive, the user may place the phone in the center console, where the phone camera cannot capture the user's face and only a voice call is possible; or the user may fix the phone near the steering wheel facing the face, which may create a driving risk. Neither operation provides a good experience.
In the future, some smart cars will have a camera on the dashboard. If, upon entering the car, the user's mobile phone establishes a wireless connection with the dashboard camera, the phone can directly use that camera to capture video data, and the phone does not need to be specially placed or picked up while driving. When the phone uses the dashboard camera for the video call, the front camera can be switched to the in-car camera in several ways. For example, once the phone establishes a connection with an in-car camera during the video call, it may immediately use the data captured by that camera for the call; the phone may switch cameras after receiving a voice instruction from the user; or, after connecting to an in-car camera, the phone may display the identifier of that camera on the video call interface and switch the front camera to the in-car camera when it detects that the user has tapped the identifier, i.e. the phone then conducts the video call with the data captured by the in-car camera. It should be understood that when the phone is connected to multiple in-car cameras (for example, the dashboard camera and the dashcam camera), the same three approaches can be used to select the camera for the video call; if the phone selects automatically, it can decide which in-car camera to switch to according to a default setting, a commonly used choice, or the most recent choice. Optionally, when the phone detects that it has entered the car, the camera can be switched automatically by the phone, switched by a voice instruction from the user, or switched manually on the video call interface. Because the phone application directly uses the dashboard camera to obtain images, the peer of the video call can also see a frontal picture of the driver, providing a friendlier experience. The phone can also use the dashcam picture during the video call, for example to share the scenery along the trip with the peer user.
Scene two: home scene
Home electronic devices are now abundant, and different devices have their own strengths: for example, a television camera has a wider viewing angle, a television screen is large and displays clearly, and a smart speaker has good playback and pickup effects. When a video call is made on a mobile phone and several family members wish to join, the phone is hand-held and its camera has a narrow viewing angle, so several people cannot fit into the picture at the same time; if the phone is placed farther away, its screen is too small for the remote picture to be seen clearly. If, in this case, the living-room television's camera can be used to capture video as the video data input of the phone application during the call, the family can sit on the sofa facing the television and use the television camera for the video call, with the remote video picture displayed on the television screen and the phone's microphone/speaker data projected onto the household smart speaker, providing a well-rounded audio and video experience.
Scene three: unmanned aerial vehicle scene
Unmanned aerial vehicles (drones) are increasingly common, but the video shot by a typical drone can only be used with the drone manufacturer's own application: when the user flies the drone, the user watches its video through the corresponding application on the phone. If any application on the phone could use the drone's camera when shooting, for example to take pictures with the drone's camera, a brand-new shooting experience could be provided; or, when an application on the phone makes a video call, the video picture shot by the drone camera could replace the picture shot by the phone's front camera, so that the peer of the video call sees the drone's footage, giving the peer a brand-new flight-video viewing experience and a more real-time drone viewing angle.
In these scenarios, the phone side needs to provide a generic scheme for acquiring the video stream of an external camera, and at the same time the existing applications on the phone should be able to use the video streams of other devices without adaptation or modification, which facilitates use and popularization; that is, when the data of an external camera is needed, the video call applications do not have to be adapted.
Fig. 2 is a schematic flowchart of a method for video call according to an embodiment of the present disclosure.
S101, the first device and the second device establish a video call.
The technical solution of the present application can be applied to a first device having at least one camera, where the first device uses the at least one camera and an application to establish a video call with a second device, the same application is installed on both the first device and the second device, and the application has a video call function.
It should be understood that the application is a video call application, or the application is an application with video call functionality, such as WeChat.
S102, the first device displays the identification information.
The identification information may include an icon, text, or a control; the present application does not limit its specific form.
After the first device establishes a video call with the second device by using at least one camera and an application, the identification information of at least one third device which is in wireless connection with the first device and is provided with the camera is displayed on an interactive interface for carrying out the video call.
As shown in fig. 3 to fig. 5, the first identification information may correspond to an external camera, the second identification information to an unmanned aerial vehicle, and the third identification information to a wireless television. Fig. 3 shows the image displayed on the interactive interface of the video call between the first device and the second device after the user selects the first identification information, the image being obtained with the external camera; fig. 4 shows the image displayed on the interactive interface after the user selects the second identification information, the image being obtained with the unmanned aerial vehicle; and fig. 5 shows the image displayed on the interactive interface after the user selects the third identification information, the image being obtained with the wireless television. The third identification information may be hidden under a collective identifier, and the specific identification information is displayed after the user taps that identifier.
It should be understood that the first identification information, the second identification information, and the third identification information may specifically be characters, special marks, or menu buttons that the user taps to display detailed options or to display them on another interactive interface, as shown in fig. 3 to fig. 5.
Optionally, on the video call interface of the first device as shown in fig. 3 or fig. 4, the selected identification information is highlighted relative to the other identification information to prompt the user which device is currently capturing the video data. In addition to the identification information of the at least one third device that maintains a wireless connection with the first device, the identification information displayed on the video call interface may further include that of at least one camera of the first device (for example, the identification information of all front and rear cameras, or of only some cameras).
Optionally, the user may select the first identification information by tapping it or by a voice instruction, or the first device may select it automatically when it detects that the wirelessly connected device has been selected before, which is not limited in the present application.
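The sketch below is an assumed data model for the identification information shown on the call interface; the class names and the notion of a single selected entry are illustrative, not the patent's own design.

```kotlin
// Assumed data model of the identification information shown on the call interface: each
// entry pairs a display label with the device it stands for, and the currently selected
// entry is the device supplying the video data (and can be highlighted in the UI).
data class DeviceIdentification(val deviceId: String, val label: String, val isLocalCamera: Boolean)

class IdentificationList(private val entries: List<DeviceIdentification>) {
    var selectedId: String = entries.first { it.isLocalCamera }.deviceId
        private set

    // Invoked when the preset operation (e.g. a tap on an entry) is detected.
    fun onPresetOperation(deviceId: String) {
        require(entries.any { it.deviceId == deviceId }) { "unknown identification: $deviceId" }
        selectedId = deviceId
    }
}

fun main() {
    val list = IdentificationList(
        listOf(
            DeviceIdentification("local-front", "Front camera", isLocalCamera = true),
            DeviceIdentification("external-cam", "External camera", isLocalCamera = false),
            DeviceIdentification("drone", "Drone", isLocalCamera = false)
        )
    )
    list.onPresetOperation("external-cam") // the user taps the external camera's identification
    println(list.selectedId)
}
```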
S103, the first device determines first identification information.
When the first device detects that a user acts on the preset operation of the first identification information on the interactive interface, at least one camera of the device corresponding to the first identification information is triggered to collect video data, and the first identification information is one of the identification information of at least one third device with the camera.
Optionally, the preset operation may be that the user clicks the identification information, or the user instructs to select the identification information by voice, or the selection is automatically performed according to the history of the user.
S104, the first equipment acquires video data.
The first device obtains video data collected by at least one camera of the device corresponding to the first identification information, and uses the video data collected by the at least one camera of the device corresponding to the first identification information for video call.
It should be understood that the user may select the camera of any one of the multiple devices (for example, a third device) to capture the video data. The third device is one of the at least one third device; the application inside the first device can obtain the video data captured by the third device's camera while believing that it is still using at least one camera of the first device. The camera of the third device acts as a local virtual camera of the first device, and the application's switch from the video data captured by the at least one camera of the first device to the video data captured by the third device's camera is imperceptible to the application, so a better viewing angle can be obtained through the camera of the external device and user experience is improved.
Optionally, after the first device acquires the video data acquired by the at least one camera of the device corresponding to the first identification information, the first device does not acquire the video data acquired by the at least one camera of the first device any more, and uses the video data acquired by the at least one camera of the device corresponding to the first identification information for video call, that is, the first device directly passes through the video data acquired by the external device, and uses the video data acquired by the external camera for video call.
Optionally, the first device obtains video data collected by at least one camera of the device corresponding to the first identification information, obtains video data collected by at least one camera of the first device, and uses the video data collected by at least one camera of the device corresponding to the first identification information for video call, that is, the first device obtains video data collected by the local camera and the external device simultaneously, and uses the video data collected by the external camera for video call.
Optionally, the first device obtains video data collected by at least one camera of the device corresponding to the first identification information, obtains video data collected by at least one camera of the first device, and uses the video data collected by at least one camera of the device corresponding to the first identification information and the video data collected by at least one camera of the first device for video call, that is, the first device obtains video data collected by the local camera and the external device simultaneously, and uses the video data collected by the local camera and the external camera for video call.
Optionally, the method for video call may further include: when the first device detects that a user acts on preset operation of second identification information on the interactive interface, at least one camera of the device corresponding to the second identification information is triggered to collect video data, the second identification information is one of identification information of at least one third device with the camera, the first device obtains the video data collected by the at least one camera of the device corresponding to the second identification information, and the video data collected by the at least one camera of the device corresponding to the second identification information is used for video call. The user can select at least one camera of the device corresponding to other identification information to acquire video data required by the video call, and the switching can be freely performed.
Optionally, the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, no longer obtains the video data captured by the at least one camera of the first device or by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by the at least one camera of the device corresponding to the second identification information for the video call. After the user switches the external camera, the video data of the previously used cameras is no longer acquired.
Optionally, the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, still obtains the video data captured by the at least one camera of the first device and by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by the at least one camera of the device corresponding to the second identification information for the video call. After the user switches the external camera, the video data of the previously used cameras can still be acquired and stored in the memory or in the cloud for later viewing.
Optionally, the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, still obtains the video data captured by the at least one camera of the first device, no longer obtains the video data captured by the at least one camera of the device corresponding to the first identification information, and uses the video data captured by the at least one camera of the device corresponding to the second identification information and by the at least one camera of the first device for the video call. After the user switches the external camera, only the video data of the local camera is still acquired, not that of the previously used external camera, and it can be stored in the memory or in the cloud for later viewing.
Optionally, the first device obtains the video data captured by the at least one camera of the device corresponding to the second identification information, still obtains the video data captured by the at least one camera of the device corresponding to the first identification information, no longer obtains the video data captured by the at least one camera of the first device, and uses the video data captured by the at least one camera of the device corresponding to the second identification information and by the at least one camera of the device corresponding to the first identification information for the video call. After the user switches the external camera, only the video data of the previously used external camera is still acquired, not that of the local camera, and it can be stored in the memory or in the cloud for later viewing.
Optionally, the first device obtains video data acquired by at least one camera of the device corresponding to the second identification information, obtains the video data acquired by the at least one camera of the first device and the at least one camera of the device corresponding to the first identification information, and uses the video data acquired by the at least one camera of the device corresponding to the first identification information, the video data acquired by the at least one camera of the device corresponding to the second identification information, and the video data acquired by the at least one camera of the first device for video call.
Optionally, before the application uses the at least one camera of the device corresponding to the first identification information, the first device may configure the at least one camera of the device corresponding to the first identification information according to the configuration parameter of the at least one camera of the first device.
Optionally, before the application uses the at least one camera of the device corresponding to the second identification information, the first device may configure the at least one camera of the device corresponding to the second identification information according to the configuration parameter of the at least one camera of the first device.
It should be understood that configuring the at least one external camera with the configuration parameters of the at least one camera of the first device gives the video data captured by the external camera the same format, for example the same resolution, as the video data captured by the local camera, which facilitates subsequent processing.
Optionally, the first device may further display, on the interactive interface of the video call, third identification information of a fourth device that is wirelessly connected to the first device and has at least one microphone; when the first device detects a preset user operation acting on the third identification information on the interactive interface, it triggers the device corresponding to the third identification information to capture audio data with its at least one microphone; the first device obtains the audio data captured by the at least one microphone of the device corresponding to the third identification information and uses that audio data for the video call.
Optionally, the first device may further include, on the interactive interface for conducting the video call, a fourth identifier of a fifth device that is wirelessly connected to the first device and has at least one speaker; when the first device detects that the user acts on the preset operation of the fourth identifier on the interactive interface, the first device sends the audio data received in the video call to the device corresponding to the fourth identifier, and triggers the device corresponding to the fourth identifier to play the received audio data.
The user can also use an external microphone to collect audio data, and can also use an external loudspeaker to play audio data in the video call.
Optionally, the first device may further include, on the interactive interface for conducting a video call, a fifth identifier of a sixth device wirelessly connected to the first device and having at least one microphone and at least one speaker; when the first device detects that a user acts on the preset operation of a fifth identifier on the interactive interface, triggering the device corresponding to the fifth identifier to acquire audio data by using at least one microphone, sending the audio data received in the video call to the device corresponding to the fifth identifier, and triggering the device corresponding to the fifth identifier to play the received audio data; and the first equipment acquires the audio data acquired by at least one microphone of the equipment corresponding to the fifth identification, and uses the audio data acquired by the at least one microphone for video call.
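A hedged sketch of this audio routing is given below; the interface names are assumptions and stand in for whatever signaling the devices actually use.

```kotlin
// Assumed sketch of the audio side: when the user selects the identifier of a device with a
// microphone and/or speaker, uplink audio is captured on that device and sent into the call,
// and downlink audio received in the call is forwarded to that device for playback.
interface RemoteAudioDevice {
    fun startMicrophoneCapture(onAudio: (ByteArray) -> Unit)
    fun playAudio(data: ByteArray)
}

class AudioRouter(
    private val remote: RemoteAudioDevice,
    private val sendUplink: (ByteArray) -> Unit
) {
    fun start() {
        // Audio picked up by the remote microphone is fed into the video call as uplink audio.
        remote.startMicrophoneCapture { chunk -> sendUplink(chunk) }
    }

    // Audio received in the video call (downlink) is forwarded to the remote speaker.
    fun onDownlinkAudio(data: ByteArray) = remote.playAudio(data)
}

fun main() {
    val speakerAndMic = object : RemoteAudioDevice {
        override fun startMicrophoneCapture(onAudio: (ByteArray) -> Unit) = onAudio(ByteArray(160))
        override fun playAudio(data: ByteArray) = println("playing ${data.size} bytes on the remote speaker")
    }
    val router = AudioRouter(speakerAndMic) { chunk -> println("uplink ${chunk.size} bytes into the call") }
    router.start()
    router.onDownlinkAudio(ByteArray(320))
}
```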
Optionally, the first device obtains video data acquired by at least one camera of the device corresponding to the first identification information, and uses the video data acquired by the at least one camera of the device corresponding to the first identification information for video call, including: the first device acquires video data acquired by at least one camera of the device corresponding to the first identification information, acquires video data acquired by at least one camera of the first device, stores the video data acquired by at least one camera of the first device in a memory of the first device, and uses the video data acquired by at least one camera of the device corresponding to the first identification information for video call.
Optionally, video data acquired by at least one camera of the first device may also be uploaded to the cloud server for storage, so as to be queried by a user.
Optionally, the present application further provides a photographing method.
S201, the first device presents identification information of each device in the plurality of devices.
The first equipment is provided with at least one camera, the multiple pieces of equipment comprise the first equipment and at least one third equipment, the at least one third equipment is in wireless connection with the first equipment, and the at least one third equipment is provided with the camera.
Optionally, a plurality of identification information may be displayed on the interactive interface of the first device for indicating the selectable devices.
Optionally, each of the at least one third device installs a corresponding application, and the application may be used for data transmission between the first device and the at least one third device, for example, the first device may send a shooting instruction or configuration information to the third device, and the third device may send a shot image or capability information of the camera to the first device.
Optionally, the first device may present, on an interactive interface corresponding to the at least one camera, the identification information of each of the multiple devices, that is, when an application program of the first device calls the at least one camera to acquire image data, the identification information may be displayed on the interactive interface of the application program.
Optionally, the wireless connection is implemented by any one of a bluetooth protocol, a Wi-Fi protocol, an NFC protocol, or a mobile communication protocol.
The mobile communication protocol may include a 2G standard protocol, a 3G standard protocol, a 4G standard protocol, a 5G standard protocol, a 6G standard protocol, a post-6G standard protocol, and the like.
S202, the first device responds to user operation to determine a target device from the multiple devices.
Optionally, when the user operates the identification information displayed on the interactive interface of the first device, the user may perform a preset operation, where the preset operation may be that the user clicks the identification information displayed on the interactive interface of the first device, or that the user slides the identification information displayed on the interactive interface of the first device from left to right. The first device may pre-store the preset operation, and determine the target device according to the user operation after the user completes the preset operation.
Optionally, the preset operation may be that the user clicks the identification information, or the user instructs to select the identification information by voice, or the selection is automatically performed according to the history of the user.
Alternatively, the first device may transmit a shooting instruction to the target device, the first device may receive an image shot by the target device according to the shooting instruction, and the shooting instruction may indicate information such as a time when the target device performs shooting.
Optionally, when the target device is one of the at least one third device, the first device may generate first configuration information according to the configuration of the at least one camera, and send the first configuration information to the target device.
Optionally, the first device may obtain second configuration information, where the second configuration information is used to indicate a configuration of at least one camera of the target device, and generate the first configuration information according to the configuration of the at least one camera of the first device and the second configuration information.
S203, the first device acquires and presents the image shot by the target device.
Optionally, when the target device is one of the at least one third device, the first device turns off the at least one camera of the first device when presenting the image taken by the target device.
Optionally, the plurality of devices may further include at least one fourth device, the at least one fourth device being wirelessly connected with the first device, and the at least one fourth device each having a microphone. When the user operates the identification information corresponding to the fourth device displayed on the interactive interface, the microphone of the corresponding fourth device may be triggered to acquire the audio data, and the acquired audio data may be transmitted to the first device.
Optionally, the plurality of devices may further include at least one fifth device, the at least one fifth device being wirelessly connected with the first device, and the at least one fifth device each having a speaker. When the user operates the identification information corresponding to the fifth device displayed on the interactive interface, the audio data for the currently running application program is sent to the corresponding fifth device, and the corresponding loudspeaker of the fifth device is triggered to play the audio data.
Optionally, the multiple devices may further include at least one sixth device, where the at least one sixth device is wirelessly connected to the first device and has at least one microphone and at least one speaker, and when the first device detects that a user operates on the identification information corresponding to the sixth device on the interactive interface, the sixth device is triggered to acquire audio data by using the at least one microphone, send the received audio data to the speaker, and play the received audio data.
It should be understood that the data information obtained by the fourth device, the fifth device, and the sixth device may all be applied to the video call of the first device, and the fourth device, the fifth device, and the sixth device may respectively correspond to the third identification information, the fourth identification information, and the fifth identification information, which is not limited in this application.
Fig. 6 is a schematic diagram of a software architecture of a first device according to an embodiment of the present application.
In this embodiment, taking an android system as an example, a layered architecture divides software into a plurality of layers, and the layers communicate with each other through software interfaces. In some embodiments, the android system is divided into four layers, which are an application layer, an application framework layer (FWK), a Hardware Abstraction Layer (HAL), and a kernel layer (kernel), from top to bottom.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
The application framework layer may include a window manager, a content provider, a view system, an explorer, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether a status bar exists, lock the screen, capture screenshots, and the like.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
For example, in the present application, the content provider may acquire the image captured in the preview interface in real time and display the processed image in the corresponding interactive interface of the application program.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of at least one view. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
For example, in the present application, the content such as "first identification information" displayed on the interactive interface may be displayed by the view system receiving an instruction from the processor, so as to remind the user of information such as an external camera available for selection.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The hardware abstraction layer is an interface layer between the kernel layer and the hardware circuitry, and its purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides a virtual hardware platform for the operating system, so that the operating system is hardware-independent and can be ported across various platforms.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
In the android system, the camera system spans the application framework layer, the hardware abstraction layer, and the kernel layer. Applications of the mobile device, such as shooting and video applications, operate the local camera to obtain images through the standard camera API of the application framework layer. A standard interface is defined between the hardware abstraction layer and the application framework layer and is used for service communication between the two layers. The kernel layer is the driver layer of the camera of the mobile device and is called by the camera module of the hardware abstraction layer to operate the camera hardware.
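For context, the snippet below is a generic example of the standard Android camera2 API (android.hardware.camera2) that an application at the application framework layer could use to open a local camera; it is not code from this application and requires the CAMERA permission.

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

final class LocalCameraOpener {
    /** Opens the first available local camera through the framework-layer camera API. */
    static void openFirstCamera(Context context, Handler handler) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice camera) {
                // The application would now create a capture session; the framework layer
                // forwards the requests to the hardware abstraction layer, whose camera
                // module calls the kernel-layer camera driver to operate the hardware.
            }
            @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
            @Override public void onError(CameraDevice camera, int error) { camera.close(); }
        }, handler);
    }
}
```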
In this embodiment of the application, a data switching module is added in the hardware abstraction layer. After the third device corresponding to the identification information selected by the user becomes the target device, the data switching module enables the first device to acquire images using the camera of that third device, and the first device can complete the data switching according to the following procedure so as to present the image shot by the target device.
It should be understood that the embodiment of the present application is described by taking the case where the first device and the at least one third device maintain a wireless connection as an example, which does not preclude other types of devices from implementing the technical solution of the present application. The left side of fig. 6 illustrates the internal software framework of the first device. Note that the application scenario of the first device is as follows: the first device and the second device conduct a video call using the second application and their respective cameras, and the first device is wirelessly connected with the third device.
1) The data switching module of the first device acquires image format information of at least one camera of the first device and sends the image format information to the data transmission channel module.
Alternatively, the image format information may include the number of data streams and, for each data stream, information such as resolution, frame rate, and color space.
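The exact encoding of the image format information is not specified in this application; a minimal sketch of what such a multi-stream description might contain, using hypothetical names, is:

```java
import java.util.List;

/** Hypothetical description of the formats expected by the first device's application. */
final class ImageFormatInfo {
    /** One entry per requested data stream. */
    final List<StreamFormat> streams;

    ImageFormatInfo(List<StreamFormat> streams) {
        this.streams = streams;
    }

    static final class StreamFormat {
        final int width;
        final int height;
        final int frameRate;
        final String colorSpace; // e.g. "YUV420" or "RGB"

        StreamFormat(int width, int height, int frameRate, String colorSpace) {
            this.width = width;
            this.height = height;
            this.frameRate = frameRate;
            this.colorSpace = colorSpace;
        }
    }
}
```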
2) And the data transmission channel module of the first device receives the image format information sent by the data switching module, determines first configuration information of at least one camera of the target device according to the image format information, and sends the first configuration information to the target device.
Optionally, the target device is installed with a corresponding application, where the application may be used for signaling interaction and data transmission between the first device and the target device, and the application includes a communication protocol adapted to the first device, a transmission parameter, and the like.
Alternatively, the application may be installed in an adaptation module of the target device.
Alternatively, the adaptation module may be an adaptation Software Development Kit (SDK) module, and the application may communicate through the SDK module.
Optionally, the first device may send the first configuration information to the target device through the data transmission channel module, and the adaptation module on the target device side may configure the at least one camera of the target device according to the first configuration information; for example, the first configuration information may indicate the resolution, frame rate, color space, and the like of the image format to be acquired by the camera. After configuring the camera according to the first configuration information, the target device acquires video data through the camera and sends the acquired video data to the data transmission channel module of the first device through the adaptation module.
Optionally, when the data transmission channel module determines the first configuration information for the target device according to the image format information sent by the data switching module, the data transmission channel module may obtain the second configuration information of the at least one camera of the target device in advance, and generate the first configuration information according to the configuration of the at least one camera of the first device and the second configuration information.
Optionally, the first device may mask, according to the second configuration information, capabilities that are not supported by the at least one camera of the target device. For example, resolution information of the at least one camera of the target device may be obtained in advance; if the image format information requires a video resolution of 1080p but the at least one camera of the target device can only provide data at a resolution of 480p, the data transmission channel module masks that resolution in the image format information and does not send the configuration to the target device with the first configuration information.
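A minimal sketch of this masking step, under the assumption that both the requested formats and the target camera's supported formats are expressed as simple width/height pairs (names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

final class CapabilityMasking {
    /**
     * Keeps only the requested resolutions that the target device's camera reports as
     * supported; unsupported entries (e.g. 1080p when the target only supports 480p)
     * are masked out and are not sent with the first configuration information.
     */
    static List<int[]> maskUnsupported(List<int[]> requested, List<int[]> supportedByTarget) {
        List<int[]> result = new ArrayList<>();
        for (int[] req : requested) {
            for (int[] sup : supportedByTarget) {
                if (req[0] == sup[0] && req[1] == sup[1]) { // width and height both supported
                    result.add(req);
                    break;
                }
            }
        }
        return result;
    }
}
```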
Optionally, if an application program of the application layer requests multiple paths of data, only one data stream may be carried between the data transmission channel module and the adaptation module of the target device; after receiving the data stream transmitted by the adaptation module, the data transmission channel module may distribute it to multiple demand terminals. For example, if the application of the first device needs to store the video captured by the target device both locally and in the cloud, the data transmission module may forward the single data stream received from the adaptation module to both the local storage and the cloud. Because no additional radio resources are occupied, this reduces bandwidth requirements and latency.
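The fan-out of a single received stream to several consumers could look roughly like the following sketch; the sink interface and module names are hypothetical.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** Hypothetical fan-out of the single data stream received from the adaptation module. */
final class StreamFanOut {
    interface FrameSink {
        void onFrame(byte[] frame);
    }

    private final List<FrameSink> sinks = new CopyOnWriteArrayList<>();

    void addSink(FrameSink sink) {
        sinks.add(sink);
    }

    /** Called once per frame received over the wireless link from the target device. */
    void onFrameFromAdaptationModule(byte[] frame) {
        // Only one stream crosses the wireless link; copies for the local storage sink,
        // the cloud upload sink, etc. are made locally, so no extra radio resources are used.
        for (FrameSink sink : sinks) {
            sink.onFrame(frame);
        }
    }
}
```

For instance, one registered sink could write frames to local storage while another uploads them to the cloud server.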
Alternatively, the first device may transmit a shooting instruction to the target device, and the shooting instruction may indicate information such as a time when the target device performs shooting.
Alternatively, the photographing instruction may be included in the first configuration information.
3) After the data transmission channel module receives the video data, the video data may need to be converted in terms of resolution, color space, and other formats according to the service requirements of the application program, so as to meet the format requirements of each data stream of the application. Once the data transmission channel module has obtained valid video data, it considers the data source of the target device to be ready and sends a switching-preparation-complete instruction to the data switching module.
4) After receiving the switching-preparation-complete instruction, the data switching module notifies the camera device session management module to halt the image acquisition process of the local camera hardware, takes over the camera acquisition requests, and forwards them to the data transmission channel module. The data transmission channel module transmits the data received from the target device to the data switching module, which fills the video data into the data return interface of the camera device session management module; the acquired video data is then returned to the upper-layer service along the original data return path. In the end, the application program obtains the images acquired by the at least one camera of the target device while still believing that it is using the at least one camera of the first device, so the at least one camera of the target device is equivalent to a local virtual camera of the first device. The data is switched from the at least one camera of the first device to the at least one camera of the target device without the application being aware of it, a better viewing angle can be obtained through the camera of the external device, and user experience is improved.
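The takeover described in step 4) could be sketched as follows; all interfaces and class names are hypothetical and only illustrate the order of operations (pause local capture, forward acquisition requests, fill returned frames into the original data return interface).

```java
/** Hypothetical sketch of the HAL-side data switching module's takeover logic. */
final class DataSwitchModule {
    /** Stands in for the camera device session management module. */
    interface CameraSessionManager {
        void pauseLocalCapture();        // halts the local camera hardware's acquisition
        void returnFrame(byte[] frame);  // the original data return interface toward the application
    }

    /** Stands in for the data transmission channel module. */
    interface TransmissionChannel {
        void forwardCaptureRequest(Object request);
    }

    private final CameraSessionManager sessionManager;
    private final TransmissionChannel channel;
    private volatile boolean switchReady;

    DataSwitchModule(CameraSessionManager sessionManager, TransmissionChannel channel) {
        this.sessionManager = sessionManager;
        this.channel = channel;
    }

    /** Step 3): the channel signals that valid video data from the target device is available. */
    void onSwitchPreparationComplete() {
        switchReady = true;
        sessionManager.pauseLocalCapture();
    }

    /** Step 4): camera acquisition requests are forwarded instead of reaching the local camera. */
    void onCameraAcquisitionRequest(Object request) {
        if (switchReady) {
            channel.forwardCaptureRequest(request);
        }
    }

    /** Frames received from the target device are returned along the original data path. */
    void onRemoteFrame(byte[] frame) {
        sessionManager.returnFrame(frame);
    }
}
```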
Fig. 7 is a schematic diagram of interaction of modules of a first device according to an embodiment of the present disclosure.
S401, when the first device detects a preset operation by the user on identification information on the interactive interface, the first device triggers a switch to the at least one camera of the device corresponding to that identification information, and that camera collects video data.
Optionally, the preset operation may be that the user clicks the identification information, or the user instructs to select the identification information by voice, or the selection is automatically performed according to the history of the user.
Optionally, the first device presents identification information for each of the plurality of devices. The first equipment is provided with at least one camera, the multiple pieces of equipment comprise the first equipment and at least one third equipment, the at least one third equipment is in wireless connection with the first equipment, and the at least one third equipment is provided with the camera.
Optionally, when the user operates the identification information displayed on the interactive interface, the user may perform a preset operation, where the preset operation may be that the user clicks the identification information displayed on the interactive interface of the first device, or that the user slides the identification information displayed on the interactive interface of the first device from left to right. The first device may pre-store a preset operation, and determine the target device according to the user operation after the user completes the preset operation.
Optionally, the interactive interface may be an interactive interface of an application program, and the application program may be a photographing application, a camera application, a video call application, or another application that requires a camera.
Optionally, the identification information of each of the multiple devices is presented on the interactive interface corresponding to the at least one camera, that is, when the application program of the first device calls the at least one camera to acquire the video data, the identification information may be displayed on the interactive interface of the application program.
S402, the data switching module determines whether the camera switching parameters are valid and records whether the at least one camera of the first device is in use.
Optionally, the camera switching parameters may include the operating state of the at least one camera of the first device, camera information of the target device, and whether another switching task conflicts with the current task.
Optionally, the switching can only be performed when at least one camera of the first device is running.
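A minimal sketch of such a validity check (with hypothetical parameter names) is:

```java
/** Hypothetical validity check performed by the data switching module in S402. */
final class SwitchValidator {
    static boolean canSwitch(boolean localCameraRunning,
                             boolean targetCameraInfoAvailable,
                             boolean conflictingSwitchInProgress) {
        // Switching is allowed only while at least one local camera is running, the target
        // device's camera information is known, and no other switching task conflicts
        // with the current one.
        return localCameraRunning && targetCameraInfoAvailable && !conflictingSwitchInProgress;
    }
}
```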
S403, the data switching module records the configuration of at least one camera of the first device which is currently running.
S404, the data switching module indicates the data transmission channel module to interact with the adaptation module of the target device, and starts the connection of at least one camera of the target device.
S405, the data transmission module is connected with an adaptation module of the target device.
Optionally, the adaptation module may send second configuration information to the data transmission module, where the second configuration information is used to indicate a configuration of at least one camera of the target device.
S406, the data switching module obtains the second configuration information.
S407, the data switching module may determine, according to the second configuration information, at least one camera configuration of the target device, and configure at least one camera interface of the first device, and the data switching module may generate the first configuration information according to the configuration of the at least one camera of the first device and the second configuration information.
Optionally, the data switching module may shield, according to the second configuration information, a capability that is not supported by at least one camera of the target device.
S408, the data transmission channel module sends first configuration information to an adaptation module of the target device, and the target device can configure the camera according to the first configuration information.
Optionally, the data transmission channel module may send a shooting instruction to the adaptation module of the target device, where the shooting instruction may indicate information such as a time when the target device takes a picture.
Alternatively, the photographing instruction may be included in the first configuration information.
S409, the data switching module triggers the data-flow and control-flow switch to switch cameras, where the switch may be located in the camera unit of the application framework layer.
S410, the data switching module instructs the data transmission channel module to acquire the data stream acquired by at least one camera of the target device.
Optionally, the user's subsequent operations on the cameras are also redirected to the at least one camera of the target device, for example, selecting any one of the at least one camera to acquire video data.
S411a, the at least one camera of the target device obtains video data and transmits the video data to the data transmission channel module through the adaptation module.
S411b, the data transmission channel module may transmit the video data to the data switching module.
Optionally, the video data transmission mode may be implemented by a wireless communication mode, a wired communication mode, or a server communication mode.
Optionally, the wireless communication mode is implemented by any one of a bluetooth protocol, a Wi-Fi protocol, an NFC protocol, or a mobile communication protocol.
The mobile communication protocol may include a 2G standard protocol, a 3G standard protocol, a 4G standard protocol, a 5G standard protocol, a 6G standard protocol, or a 6G subsequent standard protocol.
S412, the data switching module fills the image buffer with the video data acquired by the at least one camera of the target device.
Alternatively, the image buffer may be located in the memory of the first device.
S413, the data switching module may perform a shutdown operation on at least one camera of the first device, so as to reduce power consumption of the first device.
S414, after the data flow and the control flow are switched, the data switching module may return a switching success response to the interactive interface of the application program.
Alternatively, the switching success response may be displaying the completion identification information on the interactive interface, for example, displaying "switching completion" or "switching success" on the interactive interface.
According to the technical solution provided in this embodiment of the application, the application program obtains images acquired by the at least one camera of the target device while still believing that it is using the at least one camera of the first device, so the at least one camera of the target device is equivalent to a local virtual camera of the first device. The data is switched from the at least one camera of the first device to the at least one camera of the target device without the application being aware of it, a better viewing angle can be obtained through the camera of the external device, and user experience is improved.
Fig. 8 is a schematic diagram of a handover principle provided in an embodiment of the present application.
As shown in fig. 8, in the technical solution of the present application, a data switching module and a data transmission channel module are added on the first device side, and an adaptation module is added on the target device side. In this technical solution, the code corresponding to the camera of the hardware abstraction layer may be modified to add an image acquisition request label, which marks whether an image acquisition request issued by the application framework layer is processed by the at least one camera of the first device or is sent to the data stream switching module to be processed by the at least one camera of the target device.
While the at least one camera of the first device is in use, an image acquisition request issued by the application framework layer is sent to the driver of the at least one camera of the first device for processing; the camera driver calls the kernel-layer camera hardware driver, and the at least one camera of the first device is operated to acquire video data.
While the at least one camera of the target device is in use, that is, after the user triggers the at least one camera of the target device, the data stream switching module triggers the data-flow and control-flow switch to set the label, and subsequent image acquisition requests issued by the application framework layer are forwarded to the data stream switching module for processing; at this time, the at least one camera of the first device can be paused or closed to reduce power consumption. If the user no longer uses the at least one camera of the target device, the data-flow and control-flow switch may be turned off, and image acquisition requests issued by the application framework layer thereafter continue to be processed by the at least one camera of the first device.
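The request label could be modelled as a flag consulted for every image acquisition request, as in the following sketch; the handler interfaces are hypothetical.

```java
/** Hypothetical routing of image acquisition requests by the added request label. */
final class AcquisitionRequestRouter {
    interface RequestHandler {
        void handle(Object request);
    }

    private final RequestHandler localCameraDriver; // local HAL path toward the kernel camera driver
    private final RequestHandler dataSwitchModule;  // path toward the target device's camera
    private volatile boolean useTargetDevice;       // the image acquisition request label

    AcquisitionRequestRouter(RequestHandler localCameraDriver, RequestHandler dataSwitchModule) {
        this.localCameraDriver = localCameraDriver;
        this.dataSwitchModule = dataSwitchModule;
    }

    /** Set when the user selects a target device; cleared when the external camera is no longer used. */
    void setUseTargetDevice(boolean useTargetDevice) {
        this.useTargetDevice = useTargetDevice;
    }

    void onImageAcquisitionRequest(Object request) {
        if (useTargetDevice) {
            dataSwitchModule.handle(request);  // the local camera may be paused or closed to save power
        } else {
            localCameraDriver.handle(request);
        }
    }
}
```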
Fig. 9 is a schematic diagram of another software architecture of a first device according to an embodiment of the present application.
It should be understood that the embodiment of the present application is described by taking the case where the first device and the at least one third device maintain a wireless connection as an example, which does not preclude other types of devices from implementing the technical solution of the present application. The left side of fig. 9 illustrates the internal software framework of the first device. The application scenario of the first device is as follows: the first device takes photos or records video using the second application and its own camera, and the first device is wirelessly connected with the third device.
In this embodiment of the application, a data switching module is added in the application framework layer. After the user selects one of the at least one third device as the target device, the data switching module enables the first device to acquire images using the camera of that third device, and the first device can complete the data switching according to the following procedure so as to present the image shot by the target device.
1) The data switching module of the first device acquires image format information of at least one camera of the first device and sends the image format information to the data transmission channel module.
Alternatively, the image format information may include the number of data streams and, for each data stream, information such as resolution, frame rate, and color space.
2) And the data transmission channel module of the first device receives the image format information sent by the data switching module, determines first configuration information of at least one camera of the target device according to the image format information, and sends the first configuration information to the target device.
Optionally, the target device installs a corresponding application, where the application may be used for signaling interaction and data transmission between the first device and the target device, and the application may include a communication protocol adapted to the first device, a transmission parameter, and the like.
Alternatively, the application may be installed in an adaptation module of the target device.
Alternatively, the adaptation module may be an adaptation SDK module, and the application may communicate through the SDK module.
Optionally, the first device may send the first configuration information to the target device through the data transmission channel module, and the adaptation module on the target device side may configure the at least one camera of the target device according to the first configuration information; for example, the first configuration information may indicate the resolution, frame rate, color space, and the like of the image format to be acquired by the camera. After configuring the camera according to the first configuration information, the target device acquires video data through the camera and sends the acquired video data to the data transmission channel module of the first device through the adaptation module.
Optionally, when the data transmission channel module determines the first configuration information for the target device according to the image format information sent by the data switching module, the data transmission channel module may obtain the second configuration information of the at least one camera of the target device in advance, and generate the first configuration information according to the configuration of the at least one camera of the first device and the second configuration information.
Optionally, the first device may mask, according to the second configuration information, capabilities that are not supported by the at least one camera of the target device. For example, resolution information of the at least one camera of the target device may be obtained in advance; if the image format information requires a video resolution of 1080p but the at least one camera of the target device can only provide data at a resolution of 480p, the data transmission channel module masks that resolution in the image format information and does not send the configuration to the target device with the first configuration information.
Optionally, if an application program in the application layer requests multiple paths of data, there may be only one data stream between the data transmission channel module and the adaptation module of the target device; after receiving the data stream transmitted by the adaptation module, the data transmission channel module may distribute it to multiple demand terminals. For example, if the application of the first device needs to store the video taken by the target device both locally and in the cloud, the data transmission module may forward the single data stream received from the adaptation module to both the local storage and the cloud server. Because no additional radio resources are occupied, this reduces bandwidth requirements and latency.
Alternatively, the first device may transmit a shooting instruction to the target device, and the shooting instruction may indicate information such as a time when the target device performs shooting.
Alternatively, the photographing instruction may be included in the first configuration information.
3) After the data transmission channel module receives the video data, the video data may need to be converted in terms of resolution, color space, and other formats according to the service requirements of the application program, so as to meet the format requirements of each data stream of the application. Once the data transmission channel module has obtained valid video data, it considers the data source of the target device to be ready and sends a switching-preparation-complete instruction to the data switching module.
4) After the data switching module receives the switching-preparation-complete instruction, it continuously acquires video data from the data transmission channel module and fills the video data into a temporary cache returned by the camera application framework layer. Because the application program of the application layer does not know that the at least one camera of the target device is being used to acquire data, image processing requests are still issued by the application layer to the application framework layer and passed down to the hardware abstraction layer. However, since the at least one camera of the first device no longer acquires video data, the hardware abstraction layer returns these image processing requests, and the data switching module performs the operation instead: after an image processing request returned by the hardware abstraction layer is received, the video data in the temporary cache is filled into the image buffer. In the end, the application program obtains the images acquired by the at least one camera of the target device while still believing that it is using the at least one camera of the first device, so the at least one camera of the target device is equivalent to a local virtual camera of the first device. The user can thus obtain different viewing angles by selecting different target devices and apply them to the video call, which greatly improves flexibility.
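A minimal sketch of this framework-layer variant, in which video data from the target device is held in a temporary cache and copied into the image buffer when the hardware abstraction layer returns an unfilled image processing request (all names hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical framework-layer data switching module of this embodiment. */
final class FrameworkDataSwitchModule {
    /** Stands in for the image buffer ultimately read by the application. */
    interface ImageBuffer {
        void fill(byte[] frame);
    }

    private final Deque<byte[]> temporaryCache = new ArrayDeque<>();
    private final ImageBuffer imageBuffer;

    FrameworkDataSwitchModule(ImageBuffer imageBuffer) {
        this.imageBuffer = imageBuffer;
    }

    /** Video data continuously obtained from the data transmission channel module. */
    synchronized void onRemoteFrame(byte[] frame) {
        temporaryCache.addLast(frame);
    }

    /**
     * Called when the hardware abstraction layer returns an image processing request
     * without data (the local camera no longer acquires video); the cached frame from
     * the target device is filled into the image buffer instead.
     */
    synchronized void onImageProcessingRequestReturned() {
        byte[] frame = temporaryCache.pollFirst();
        if (frame != null) {
            imageBuffer.fill(frame);
        }
    }
}
```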
Fig. 10 is a schematic diagram of interaction of modules of another first device according to an embodiment of the present application.
S801, when the first device detects a preset operation by the user on identification information on the interactive interface, the first device triggers a switch to the at least one camera of the device corresponding to that identification information, and that camera collects video data.
Optionally, the preset operation may be that the user clicks the identification information, or the user instructs to select the identification information by voice, or the selection is automatically performed according to the history of the user.
Optionally, the first device presents identification information for each of the plurality of devices. The first equipment is provided with at least one camera, the multiple pieces of equipment comprise first equipment and at least one third equipment, the at least one third equipment is in wireless connection with the first equipment, and the at least one third equipment is provided with a camera.
Optionally, when the user operates the identification information displayed on the interactive interface, the user may perform a preset operation, where the preset operation may be that the user clicks the identification information displayed on the interactive interface of the first device, or that the user slides the identification information displayed on the interactive interface of the first device from left to right. The first device may pre-store a preset operation, and determine the target device according to the user operation after the user completes the preset operation.
Optionally, the interactive interface may be an interactive interface of an application program, and the application program may be a photographing application, a camera application, a video call application, or another application that requires a camera.
Optionally, the identification information of each of the multiple devices is presented on an interactive interface corresponding to at least one camera of the first device, that is, when an application program of the first device calls the at least one camera to acquire video data, the identification information may be displayed on the interactive interface of the application program.
S802, the data switching module determines whether the camera switching parameters are valid and records whether the at least one camera of the first device is in use.
Optionally, the camera switching parameters may include the operating state of the at least one camera of the first device, information of the at least one camera of the target device, and whether another switching task conflicts with the current task.
Optionally, the switching can only be performed when at least one camera of the first device is running.
S803, the data switching module records the configuration of the at least one camera of the first device that is currently running.
S804, the data switching module indicates the data transmission channel module to interact with the adaptation module of the target device, and starts the connection of at least one camera of the target device.
S805, the data transmission module connects to the adaptation module of the target device.
Optionally, the adaptation module may send second configuration information to the data transmission module, where the second configuration information is used to indicate a configuration of at least one camera of the target device.
S806, the data switching module obtains the second configuration information.
S807, the data switching module may determine at least one camera configuration of the target device according to the second configuration information, and configure at least one camera interface of the first device, and the data switching module may generate the first configuration information according to the configuration of the at least one camera of the first device and the second configuration information.
Optionally, the data switching module may shield, according to the second configuration information, a capability that is not supported by at least one camera of the target device.
S808, the data transmission channel module sends first configuration information to an adaptation module of the target device, and the target device can configure the camera according to the first configuration information.
Optionally, the data transmission channel module may send a shooting instruction to the adaptation module of the target device, where the shooting instruction may indicate information such as a time when the target device takes a picture.
Alternatively, the photographing instruction may be included in the first configuration information.
S809, the data switching module triggers the data-flow and control-flow switch to switch cameras, where the switch may be located in the camera unit of the application framework layer.
S810, the data switching module instructs the data transmission channel module to acquire the data stream acquired by the at least one camera of the target device.
Optionally, the user's subsequent operations on the cameras are also redirected to the at least one camera of the target device, for example, selecting any one of the at least one camera to acquire video data.
S811a, the at least one camera of the target device obtains video data and transmits the video data to the data transmission channel module through the adaptation module.
S811b, the data transmission channel module may transmit the video data to the data switching module.
Optionally, the video data transmission mode may be realized by a wireless communication mode, a wired communication mode or a server communication mode.
Optionally, the wireless communication mode is implemented by any one of a bluetooth protocol, a Wi-Fi protocol, or a mobile communication protocol.
The mobile communication protocol may include a 2G standard protocol, a 3G standard protocol, a 4G standard protocol, a 5G standard protocol, a 6G standard protocol, a post-6G standard protocol, and the like.
S812, the data switching module caches the video data acquired by the target device to the temporary cache of the first device.
S813, after the data switching module receives the image processing request returned by the hardware abstraction layer, it fills the video data in the temporary cache into the image buffer.
Optionally, both the temporary buffer and the image buffer may be located in the memory of the first device.
S813, the data switching module may perform a shutdown operation on at least one camera of the first device, so as to reduce power consumption of the first device.
S814, after the data flow and the control flow are switched, the data switching module can return a switching success response to the interactive interface of the application program.
Alternatively, the switching success response may be displaying the completion identification information on the interactive interface, for example, displaying "switching completion" or "switching success" on the interactive interface.
According to the technical solution provided in this embodiment of the application, the application program obtains images acquired by the at least one camera of the target device while still believing that it is using the at least one camera of the first device, so the at least one camera of the target device is equivalent to a local virtual camera of the first device. The user can select different target devices according to actual needs, obtain video images through the at least one camera of the target device, and apply them to the video call, which expands the user's range of choices and improves user experience.
Fig. 11 is a schematic diagram of another handover principle provided in an embodiment of the present application.
As shown in fig. 11, in the technical solution of the present application, a data switching module and a data transmission channel module are added on the first device side, and an adaptation module is added on the target device side. In this technical solution, the code corresponding to the camera of the application framework layer may be modified. Specifically, the camera service module (CameraService) in the camera unit of the application framework layer may add a label indicating whether video data acquired by an external camera should be used to cover the local video data. The data switching module acquires the video data collected by the at least one camera of the target device through the data transmission channel module and caches it temporarily. When the camera service module processes the video data returned by the hardware abstraction layer, the temporarily cached video data overwrites the image buffer returned by the hardware abstraction layer, and the video data is then returned upward to the camera image API of the application framework layer. At this point, the video data obtained by the application program is already the image acquired by the at least one camera of the target device, while the application program remains unaware of the change in the underlying data.
An embodiment of the present application further provides a mobile device, which may include at least one camera, a display module, a data switching module, and a data transmission channel module.
The at least one camera is used for acquiring images; the display module is used for presenting identification information of each device in the multiple devices, where the multiple devices comprise the first device and at least one third device, the at least one third device is wirelessly connected to the first device, and each of the at least one third device is provided with a camera; the data switching module is used for determining a target device from the multiple devices in response to a user operation; the data transmission channel module is used for acquiring an image shot by the target device; and the display module is further used for presenting the image shot by the target device.
Optionally, the data transmission channel module may be further configured to send a shooting instruction to the target device, and the data transmission channel module is further configured to receive an image shot by the target device according to the shooting instruction.
Optionally, each of the at least one third device includes an adaptation module, and the adaptation module is configured to perform data transmission between the first device and the at least one third device.
Optionally, the display module presents identification information of each of the plurality of devices, including: the display module presents identification information of each device in the multiple devices on an interactive interface corresponding to the at least one camera.
Optionally, when the target device is one of the at least one third device, the data switching module generates first configuration information according to the configuration of the at least one camera; the data transmission channel module may also send the first configuration information to the target device.
Optionally, the data transmission channel module is further configured to obtain second configuration information, where the second configuration information is used to indicate the configuration of the at least one camera of the target device; and the data switching module generating the first configuration information according to the configuration of the at least one camera of the first device includes: the data switching module generating the first configuration information according to the configuration of the at least one camera and the second configuration information.
Optionally, when the target device is one of the at least one third device, the at least one camera of the first device is turned off when the display module presents the image taken by the target device.
Optionally, the wireless connection is implemented by any one of a bluetooth protocol, a wireless local area network Wi-Fi protocol, an NFC protocol, or a mobile communication protocol.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a logical division, and other divisions may be used in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or connections shown or discussed may be implemented through some interfaces, and the indirect couplings or connections between devices or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be wholly or partially in the form of a computer program product. The computer program product includes at least one computer instruction. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center through a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates at least one available medium. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for video call is applied to a first device with at least one camera, the first device establishes video call with a second device by using the at least one camera and an application, the first device and the second device are both provided with the application, the application has video call function, the second device has at least one camera, and the method comprises the following steps:
after the first device establishes a video call with the second device by using the at least one camera and the application, displaying identification information of at least one third device which is in wireless connection with the first device and is provided with a camera on an interactive interface for carrying out the video call;
when the first device detects that a user acts on preset operation of first identification information on the interactive interface, triggering at least one camera of the device corresponding to the first identification information to acquire video data, wherein the first identification information is one of identification information of at least one third device;
the first equipment acquires video data acquired by at least one camera of the equipment corresponding to the first identification information, and uses the video data acquired by the at least one camera of the equipment corresponding to the first identification information for video call;
the first device obtains video data acquired by at least one camera of the device corresponding to the first identification information, and uses the video data acquired by the at least one camera of the device corresponding to the first identification information for video call, including:
after the data transmission channel module of the first device receives the video data, format conversion is carried out on the resolution and the color space of the received video data according to the application requirement;
a data switching module of the first device receives an acquisition request of at least one camera of the first device and forwards the acquisition request to the data transmission channel module;
the data transmission channel module receives video data from the equipment corresponding to the first identification information and transmits the video data to the data switching module;
and the data switching module transmits the received video data acquired by the at least one camera of the equipment corresponding to the first identification information to a data return interface of a camera equipment session management module, and the acquired video data acquired by the at least one camera of the equipment corresponding to the first identification information is returned to the application according to an original data return flow.
2. The method of claim 1, further comprising:
when the first device detects that a user acts on preset operation of second identification information on the interactive interface, triggering at least one camera of the device corresponding to the second identification information to acquire video data, wherein the second identification information is one of identification information of at least one third device;
and the first equipment acquires video data acquired by at least one camera of the equipment corresponding to the second identification information, and uses the video data acquired by at least one camera of the equipment corresponding to the second identification information for video call.
3. The method according to claim 1, wherein the acquiring, by the first device, video data collected by at least one camera of a device corresponding to the first identification information and using the video data collected by at least one camera of the device corresponding to the first identification information for a video call comprises:
the first equipment acquires video data acquired by at least one camera of the equipment corresponding to the first identification information, does not acquire the video data acquired by the at least one camera of the first equipment any more, and uses the video data acquired by the at least one camera of the equipment corresponding to the first identification information for video call; or
the first equipment acquires video data acquired by at least one camera of the equipment corresponding to the first identification information, acquires video data acquired by at least one camera of the first equipment, and uses the video data acquired by at least one camera of the equipment corresponding to the first identification information for video call; or
the first equipment acquires video data acquired by at least one camera of the equipment corresponding to the first identification information, acquires the video data acquired by the at least one camera of the first equipment, and uses the video data acquired by the at least one camera of the equipment corresponding to the first identification information and the video data acquired by the at least one camera of the first equipment for video call.
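For illustration only: a Kotlin sketch enumerating the three alternatives of claim 3 as a simple stream plan, separating which cameras keep collecting video data from which streams are actually used in the call. The names and types are hypothetical, not taken from the patent.

// Hypothetical sketch (illustrative only): the three alternatives of claim 3 as a stream plan.

enum class Claim3Mode { REMOTE_ONLY, REMOTE_USED_LOCAL_STILL_COLLECTING, REMOTE_AND_LOCAL_USED }

data class StreamPlan(val collecting: Set<String>, val usedInCall: Set<String>)

fun planStreams(mode: Claim3Mode, localCamera: String, remoteCamera: String): StreamPlan = when (mode) {
    Claim3Mode.REMOTE_ONLY ->
        StreamPlan(collecting = setOf(remoteCamera), usedInCall = setOf(remoteCamera))
    Claim3Mode.REMOTE_USED_LOCAL_STILL_COLLECTING ->
        StreamPlan(collecting = setOf(localCamera, remoteCamera), usedInCall = setOf(remoteCamera))
    Claim3Mode.REMOTE_AND_LOCAL_USED ->
        StreamPlan(collecting = setOf(localCamera, remoteCamera), usedInCall = setOf(localCamera, remoteCamera))
}

fun main() {
    println(planStreams(Claim3Mode.REMOTE_ONLY, localCamera = "local-camera", remoteCamera = "remote-camera"))
}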
4. The method according to claim 2, wherein the acquiring, by the first device, video data collected by at least one camera of a device corresponding to the second identification information and using the video data collected by at least one camera of the device corresponding to the second identification information for a video call comprises:
the first device acquires the video data collected by the at least one camera of the device corresponding to the second identification information, no longer acquires the video data collected by the at least one camera of the first device or by the at least one camera of the device corresponding to the first identification information, and uses the video data collected by the at least one camera of the device corresponding to the second identification information for the video call; or
the first device acquires the video data collected by the at least one camera of the device corresponding to the second identification information, still acquires the video data collected by the at least one camera of the first device and by the at least one camera of the device corresponding to the first identification information, and uses the video data collected by the at least one camera of the device corresponding to the second identification information for the video call; or
the first device acquires the video data collected by the at least one camera of the device corresponding to the second identification information, still acquires the video data collected by the at least one camera of the first device, no longer acquires the video data collected by the at least one camera of the device corresponding to the first identification information, and uses the video data collected by the at least one camera of the device corresponding to the second identification information and by the at least one camera of the first device for the video call; or
the first device acquires the video data collected by the at least one camera of the device corresponding to the second identification information, still acquires the video data collected by the at least one camera of the device corresponding to the first identification information, no longer acquires the video data collected by the at least one camera of the first device, and uses the video data collected by the at least one camera of the device corresponding to the second identification information and the video data collected by the at least one camera of the device corresponding to the first identification information for the video call; or
the first device acquires the video data collected by the at least one camera of the device corresponding to the second identification information, still acquires the video data collected by the at least one camera of the first device and by the at least one camera of the device corresponding to the first identification information, and uses the video data collected by the at least one camera of the device corresponding to the first identification information, the video data collected by the at least one camera of the device corresponding to the second identification information, and the video data collected by the at least one camera of the first device for the video call.
5. The method according to any one of claims 1 to 4,
before the at least one camera of the device corresponding to the first identification information is used, configuring the at least one camera of the device corresponding to the first identification information according to configuration parameters of the at least one camera of the first device; and/or
before the at least one camera of the device corresponding to the second identification information is used, configuring the at least one camera of the device corresponding to the second identification information according to the configuration parameters of the at least one camera of the first device.
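For illustration only: a Kotlin sketch of the configuration step in claim 5, assuming the relevant configuration parameters are resolution, frame rate, and color space. The CameraConfig type and RemoteCamera interface are hypothetical stand-ins, not APIs from the patent.

// Hypothetical sketch (illustrative only): before a remote camera is used, it is configured
// with the same parameters as the local camera of the first device.

data class CameraConfig(val width: Int, val height: Int, val frameRate: Int, val colorSpace: String)

interface RemoteCamera {
    fun applyConfig(config: CameraConfig)
}

class LoggingRemoteCamera(private val deviceId: String) : RemoteCamera {
    override fun applyConfig(config: CameraConfig) =
        println("$deviceId configured to ${config.width}x${config.height}@${config.frameRate} ${config.colorSpace}")
}

fun configureBeforeUse(localConfig: CameraConfig, remote: RemoteCamera) {
    remote.applyConfig(localConfig) // mirror the first device's camera parameters
}

fun main() {
    configureBeforeUse(CameraConfig(1280, 720, 30, "NV21"), LoggingRemoteCamera("device-A"))
}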
6. The method according to any one of claims 1 to 4,
the interactive interface for the video call further comprises third identification information of a fourth device that is wirelessly connected to the first device and provided with at least one microphone;
when the first device detects a preset operation performed by a user on the third identification information on the interactive interface, triggering the device corresponding to the third identification information to collect audio data with its at least one microphone;
and the first device acquires the audio data collected by the at least one microphone of the device corresponding to the third identification information, and uses the audio data collected by the at least one microphone of the device corresponding to the third identification information for the video call.
7. The method according to any one of claims 1 to 4,
the interactive interface for the video call further comprises fourth identification information of a fifth device that is wirelessly connected to the first device and provided with at least one loudspeaker;
and when the first device detects a preset operation performed by a user on the fourth identification information on the interactive interface, sending the audio data received in the video call to the device corresponding to the fourth identification information, and triggering the device corresponding to the fourth identification information to play the received audio data.
8. The method according to any one of claims 1 to 4,
the interactive interface for the video call further comprises fifth identification information of a sixth device that is wirelessly connected to the first device and provided with at least one microphone and at least one loudspeaker;
when the first device detects a preset operation performed by a user on the fifth identification information on the interactive interface, triggering the device corresponding to the fifth identification information to collect audio data with its at least one microphone, sending the audio data received in the video call to the device corresponding to the fifth identification information, and triggering the device corresponding to the fifth identification information to play the received audio data;
and the first device acquires the audio data collected by the at least one microphone of the device corresponding to the fifth identification information, and uses the audio data collected by the at least one microphone of the device corresponding to the fifth identification information for the video call.
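For illustration only: a Kotlin sketch of the audio routing in claims 6 to 8, where a wirelessly connected device with a microphone supplies the call's captured audio, a device with a loudspeaker plays the received audio, and a device with both does both. The AudioRoute type and function are hypothetical.

// Hypothetical sketch (illustrative only): audio routing for claims 6 to 8.

data class AudioRoute(val captureDeviceId: String, val playbackDeviceId: String)

fun routeForTappedDevice(deviceId: String, hasMic: Boolean, hasSpeaker: Boolean, current: AudioRoute): AudioRoute =
    AudioRoute(
        captureDeviceId = if (hasMic) deviceId else current.captureDeviceId,       // claims 6 and 8
        playbackDeviceId = if (hasSpeaker) deviceId else current.playbackDeviceId  // claims 7 and 8
    )

fun main() {
    var route = AudioRoute(captureDeviceId = "first-device", playbackDeviceId = "first-device")
    route = routeForTappedDevice("sixth-device", hasMic = true, hasSpeaker = true, current = route)
    println("capture on ${route.captureDeviceId}, playback on ${route.playbackDeviceId}")
}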
9. The method of any one of claims 1 to 4, wherein the wireless connection maintained between a device and the first device is implemented by any one of a Bluetooth protocol, a Wi-Fi protocol, an NFC protocol, or a mobile communication protocol.
10. A mobile device, comprising:
a touch screen, wherein the touch screen comprises a touch sensitive surface and a display;
one or more cameras;
one or more processors;
a memory;
a plurality of application programs;
and one or more programs, wherein the one or more programs are stored in the memory, the one or more programs comprising instructions, which when run on the mobile device, cause the mobile device to perform the method of any of claims 1-9.
11. A computer-readable storage medium comprising instructions that, when executed on a device, cause the device to perform the method of any of claims 1 to 9.
12. A graphical user interface system on an electronic device, the electronic device having a display screen, a camera, a memory, and one or more processors for executing one or more computer programs stored in the memory, the graphical user interface system comprising a graphical user interface displayed when the electronic device performs the method of any one of claims 1 to 9.
CN201911108857.4A 2019-08-06 2019-11-13 Video call method Active CN112351235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/105097 WO2021023055A1 (en) 2019-08-06 2020-07-28 Video call method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910723119 2019-08-06
CN2019107231194 2019-08-06

Publications (2)

Publication Number Publication Date
CN112351235A CN112351235A (en) 2021-02-09
CN112351235B true CN112351235B (en) 2022-06-07

Family

ID=74367881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911108857.4A Active CN112351235B (en) 2019-08-06 2019-11-13 Video call method

Country Status (2)

Country Link
CN (1) CN112351235B (en)
WO (1) WO2021023055A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995631A (en) * 2021-03-19 2021-06-18 Evergrande New Energy Automobile Investment Holding Group Co., Ltd. Vehicle information processing system and information processing method
CN113220446A (en) * 2021-03-26 2021-08-06 西安神鸟软件科技有限公司 Image or video data processing method and terminal equipment
CN113271425A (en) * 2021-04-19 2021-08-17 瑞芯微电子股份有限公司 Interaction system and method based on virtual equipment
CN113473011B (en) * 2021-06-29 2023-04-25 广东湾区智能终端工业设计研究院有限公司 Shooting method, shooting system and storage medium
CN115022570B (en) * 2021-12-24 2023-04-14 荣耀终端有限公司 Method for acquiring video frame, electronic equipment and readable storage medium
CN114363654B (en) * 2022-01-12 2023-12-19 北京字节跳动网络技术有限公司 Video push method, device, terminal equipment and storage medium
CN117675773A (en) * 2022-09-06 2024-03-08 华为技术有限公司 Video call interaction method, vehicle and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2463110B (en) * 2008-09-05 2013-01-16 Skype Communication system and method
US10652289B1 (en) * 2012-08-31 2020-05-12 EMC IP Holding Company LLC Combining data and video communication for customer support of electronic system
CN104184982B (en) * 2013-05-28 2018-04-10 华为技术有限公司 Audio/video communication method, system, terminal device and voice and video telephone service centre
US9282286B2 (en) * 2014-03-06 2016-03-08 Citrix Systems, Inc. Participating in an online meeting while driving

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5760824A (en) * 1995-12-29 1998-06-02 Lucent Technologies Inc. Multimedia telephone having wireless camera and television module and method of operation thereof
CN105191254A (en) * 2013-03-15 2015-12-23 高通股份有限公司 System and method for allowing multiple devices to communicate in a network
CN104735259A (en) * 2015-03-31 2015-06-24 努比亚技术有限公司 Mobile terminal shooting parameter setting method and device and mobile terminal
CN104853135A (en) * 2015-05-13 2015-08-19 广州物联家信息科技股份有限公司 Method and system for video switching during voice communication process
CN105163059A (en) * 2015-08-21 2015-12-16 深圳创维-Rgb电子有限公司 Smart home device based video call method and video call system
CN105847913A (en) * 2016-05-20 2016-08-10 腾讯科技(深圳)有限公司 Live video broadcast control method, mobile terminal and system
CN106412483A (en) * 2016-10-28 2017-02-15 腾讯科技(深圳)有限公司 Camera sharing method and apparatus
CN108696673A (en) * 2017-04-11 2018-10-23 中兴通讯股份有限公司 A kind of camera arrangement, method and apparatus
CN107295419A (en) * 2017-07-25 2017-10-24 湖南克拉视通科技有限公司 Set-top box video call implementing method
CN109040651A (en) * 2018-09-25 2018-12-18 北京小米移动软件有限公司 The method and device of video communication
CN109936717A (en) * 2019-04-28 2019-06-25 上海掌门科技有限公司 Video communication method and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research progress and prospects of unmanned aerial vehicles in traffic information collection; Peng Zhongren et al.; Journal of Traffic and Transportation Engineering; 20121215 (Issue 06); full text *

Also Published As

Publication number Publication date
CN112351235A (en) 2021-02-09
WO2021023055A1 (en) 2021-02-11

Similar Documents

Publication Publication Date Title
CN112351235B (en) Video call method
CN110109636B (en) Screen projection method, electronic device and system
US11601861B2 (en) Unmanned aerial vehicle control method and apparatus
CN106165430B (en) Net cast method and device
CN105847913B (en) A kind of method, mobile terminal and system controlling net cast
CN110377365B (en) Method and device for showing small program
US20170304735A1 (en) Method and Apparatus for Performing Live Broadcast on Game
EP3125547A1 (en) Method and device for switching color gamut mode
EP3432588B1 (en) Method and system for processing image information
EP2985980B1 (en) Method and device for playing stream media data
WO2022121775A1 (en) Screen projection method, and device
US20210250647A1 (en) Method for playing videos and electronic device
CN109582976A (en) A kind of interpretation method and electronic equipment based on voice communication
EP3757738A1 (en) Method and device for page processing
JP6351356B2 (en) Portable device, image supply device, image display device, imaging system, image display system, image acquisition method, image supply method, image display method, image acquisition program, image supply program, and image display program
CN114598414B (en) Time slice configuration method and electronic equipment
CN109417802B (en) Method and device for transmitting flight information
WO2024001940A1 (en) Vehicle searching method and apparatus, and electronic device
US11950162B2 (en) Unmanned aerial vehicle control method and apparatus
CN112015359A (en) Display method and electronic equipment
CN113726954B (en) Control method of Near Field Communication (NFC) function and electronic equipment
WO2022095712A1 (en) Data sharing method, apparatus and system, and electronic device
CN112423008B (en) Live broadcast method, device, terminal, server and storage medium
CN110213531B (en) Monitoring video processing method and device
CN111835977B (en) Image sensor, image generation method and device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210421

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Device Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant