WO2022170918A1 - Co-shooting method and electronic device - Google Patents

Co-shooting method and electronic device (合拍方法和电子设备)

Info

Publication number
WO2022170918A1
Authority
WIPO (PCT)
Prior art keywords
user
electronic device
image
interface
video
Prior art date
Application number
PCT/CN2022/072235
Other languages
English (en)
French (fr)
Inventor
陈兰昊
徐世坤
于飞
孟庆吉
陈中领
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110528138.9A (published as CN114943662A)
Application filed by Huawei Technologies Co., Ltd.
Priority to US18/264,875 (published as US20240056677A1)
Priority to EP22752080.6A (published as EP4270300A4)
Publication of WO2022170918A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2624 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects, for obtaining an image which is composed of whole input images, e.g. splitscreen
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Definitions

  • the present application relates to the field of electronic devices, and more particularly, to a co-shooting method and an electronic device.
  • the present application provides a co-shooting method and an electronic device, the purposes of which include reducing the amount of post-processing and improving the clarity of composite pictures or composite videos, thereby helping to improve the user experience of remote multi-user co-shooting.
  • In a first aspect, a co-shooting method is provided, including:
  • the first electronic device establishes a video call connection between the first electronic device and the second electronic device, where the first electronic device is the electronic device of the first user, and the second electronic device is the electronic device of the second user;
  • the first electronic device acquires the first video data of the first user during a video call
  • the first electronic device acquires the second video data of the second user from the second electronic device through the video call connection;
  • the first electronic device acquires, according to the first video data and the second video data, a co-shot file of the first user and the second user.
  • the effect of synchronized remote co-shooting can be achieved.
  • Multiple users can communicate with each other during the video call, which helps improve how well the multiple users' performances match.
  • a co-shot picture or co-shot video with a relatively good co-shooting effect can be obtained, which helps reduce the user's workload during co-shooting, such as the workload of post-production retouching. (A minimal sketch of this flow follows below.)
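  • As an illustration only: a minimal Java sketch of the first-aspect flow above. The Frame, Camera, VideoCallConnection, and Compositor types are hypothetical stand-ins for the device's real call and camera APIs, which the patent does not name; only the ordering of the steps is taken from the method itself.

```java
// Hypothetical sketch: these interfaces stand in for the device's real APIs.
import java.util.ArrayList;
import java.util.List;

interface Frame {}
interface Camera { Frame capture(); }
interface VideoCallConnection { Frame receiveRemoteFrame(); }
interface Compositor { Frame compose(Frame local, Frame remote); }

class CoShootSession {
    private final Camera camera;                  // first electronic device's camera
    private final VideoCallConnection connection; // to the second electronic device
    private final Compositor compositor;

    CoShootSession(Camera camera, VideoCallConnection connection, Compositor compositor) {
        this.camera = camera;
        this.connection = connection;
        this.compositor = compositor;
    }

    /** Builds the frames of a co-shot video of a given length. */
    List<Frame> record(int frameCount) {
        List<Frame> coShotFile = new ArrayList<>();
        for (int i = 0; i < frameCount; i++) {
            Frame first = camera.capture();                 // first video data
            Frame second = connection.receiveRemoteFrame(); // second video data
            coShotFile.add(compositor.compose(first, second));
        }
        return coShotFile;
    }
}
```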
  • before the first electronic device establishes a video call connection between the first electronic device and the second electronic device, the co-shooting method further includes:
  • the first electronic device displays a first interface of a shooting application, where the first interface includes a co-shooting control;
  • the first electronic device displays a second interface in response to an operation acting on the co-shooting control, where the second interface includes a plurality of user controls in one-to-one correspondence with a plurality of users, and the plurality of users include the second user;
  • the first electronic device sends a co-shooting invitation to the second electronic device in response to an operation acting on the user control of the second user, so as to establish the video call connection.
  • the shooting application may have a built-in co-shooting control, and the co-shooting control may call user controls from applications other than the shooting application, so as to initiate a co-shooting request to other users.
  • multiple applications of the electronic device (including the shooting application) can thus be made to run cooperatively, so as to realize co-shooting by multiple users. (A sketch of this control flow follows below.)
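  • A sketch of the control flow just described, under illustrative names: ShootingApp, ContactProvider, and CallService are assumptions, not APIs defined by the patent; only the behavior of the two controls is paraphrased from the text above.

```java
// Illustrative control flow only; the type names are assumed, not from the patent.
import java.util.List;

record Contact(String name, String callNumber) {}

interface ContactProvider { List<Contact> contacts(); }        // e.g. from a call app
interface CallService { void sendCoShootInvitation(Contact to); }

class ShootingApp {
    private final ContactProvider contacts; // user controls borrowed from another app
    private final CallService calls;

    ShootingApp(ContactProvider contacts, CallService calls) {
        this.contacts = contacts;
        this.calls = calls;
    }

    /** Tap on the co-shooting control: list one user control per contact. */
    List<Contact> onCoShootingControlTapped() {
        return contacts.contacts();
    }

    /** Tap on the second user's user control: send the co-shooting invitation. */
    void onUserControlTapped(Contact secondUser) {
        calls.sendCoShootInvitation(secondUser); // leads to the video call connection
    }
}
```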
  • before the first electronic device establishes a video call connection between the first electronic device and the second electronic device, the co-shooting method further includes:
  • the first electronic device displays a third interface of a video call application, where the third interface includes a plurality of video call controls in one-to-one correspondence with a plurality of users, and the plurality of users include the second user;
  • the first electronic device sends a video call invitation to the second electronic device in response to an operation acting on the video call control of the second user, so as to establish the video call connection.
  • The video call application can run in cooperation with other applications to achieve co-shooting by multiple users. Therefore, in addition to the video call function, the video call application may also have the function of generating a co-shot file.
  • the co-shooting method further includes:
  • the first electronic device displays a first interface area and a second interface area on a fourth interface according to the first video data and the second video data, where the first interface area includes a first user image, the second interface area includes a second user image, the first user image includes pixels corresponding to the first user, and the second user image includes pixels corresponding to the second user.
  • Before acquiring the co-shot file, the first electronic device displays the images of the multiple users, so that the overall co-shooting effect can be previewed and the user can anticipate roughly what the co-shot file will look like, which makes it easier for the co-shot video to meet the user's expectations.
  • the fourth interface includes a split screen switch control and a background removal switch control. When the split screen switch control is in the on state and the background removal switch control is in the on state,
  • the first interface area further includes a second background image or a target gallery image, and/or,
  • the second interface area further includes a first background image or a target gallery image,
  • where the first background image includes pixels corresponding to the scene where the first user is located,
  • and the second background image includes pixels corresponding to the scene where the second user is located.
  • the background removal switch control is in the ON state, which means that the background of at least one of the first interface area and the second interface area is removed, so that the first interface area and the second interface area can use the same background. This allows the image of the first user and the image of the second user to be considered to be in the same background or scene.
  • the split screen switch control being in the on state means that the image of the first user and the image of the second user are assigned to different areas of the user interface. This is more suitable for scenarios that require a relatively clear distinction between the user images, for example, scenes where different user identities make it inappropriate to mix the images of multiple users.
  • the fourth interface includes a split screen switch control and a background removal switch control. When the split screen switch control is in the off state and the background removal switch control is in the on state, the fourth interface includes a background interface area, which serves as the background of the first interface area and the second interface area. The background interface area includes any one of the following: the first background image, the second background image, and the target gallery image, where the first background image includes pixels corresponding to the scene where the first user is located, and the second background image includes pixels corresponding to the scene where the second user is located.
  • the background removal switch control is in the ON state, which means that the background of at least one of the first interface area and the second interface area is removed, so that the first interface area and the second interface area can use the same background. This allows the image of the first user and the image of the second user to be considered to be in the same background or scene.
  • the split screen switch control being in the off state means that the image of the first user and the image of the second user can intersect and overlap each other, which helps blend the two user images into one picture. This is more suitable for scenes that do not require the user images to be clearly separated, such as group co-shooting scenes. (The sketch after this item summarizes the switch combinations.)
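  • The sketch below maps the two switch states to a preview layout. The (on, on) and (off, on) branches paraphrase the passages above; the two background-removal-off branches are the implied defaults, and the class name is illustrative.

```java
// Illustrative decision table for the split screen and background removal switches.
final class FourthInterfaceLayout {
    static String resolve(boolean splitScreenOn, boolean backgroundRemovalOn) {
        if (splitScreenOn && backgroundRemovalOn) {
            // Each interface area may swap in the other user's scene or a gallery image.
            return "separate areas, per-area replaceable background";
        }
        if (!splitScreenOn && backgroundRemovalOn) {
            // One background interface area backs both, possibly overlapping, user images.
            return "overlapping user images on one shared background";
        }
        if (splitScreenOn) {
            return "separate areas, each keeping its own captured scene";
        }
        return "overlapping user images, backgrounds unchanged";
    }
}
```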
  • the co-shooting method further includes:
  • the first electronic device adjusts the size of the first interface area and/or the second interface area in response to an operation acting on the fourth interface.
  • before obtaining the co-shot file, the user can adjust, through operations, the proportions of the first user's image and the second user's image in the display interface, thereby indirectly adjusting the effect of the co-shot file.
  • the co-shooting method further includes:
  • the first electronic device adjusts the display priority of the first interface area or the second interface area in response to the operation acting on the fourth interface.
  • the user can set the display priority so that the first interface area overlays the second interface area, or the second interface area overlays the first interface area, and can thereby indirectly adjust the effect of the co-shot file. (See the z-order sketch below.)
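  • Display priority here amounts to z-order. A minimal sketch, assuming two named interface areas; the class and area names are illustrative, not from the patent.

```java
// Minimal z-order sketch for two overlapping interface areas.
import java.util.ArrayDeque;
import java.util.Deque;

final class InterfaceAreaZOrder {
    // Later elements are drawn later, i.e. on top.
    private final Deque<String> order = new ArrayDeque<>();

    InterfaceAreaZOrder() {
        order.addLast("firstInterfaceArea");
        order.addLast("secondInterfaceArea");
    }

    /** Raises one area above the other, e.g. on an operation on the fourth interface. */
    void bringToFront(String area) {
        if (order.remove(area)) {
            order.addLast(area);
        }
    }

    String topMost() {
        return order.peekLast();
    }
}
```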
  • the fourth interface further includes a recording control, and acquiring the co-shot file of the first user and the second user according to the first video data and the second video data includes:
  • the first electronic device acquires the co-shot file according to the first video data and the second video data in response to the operation acting on the recording control.
  • the recording control can be used to indicate the moment at which the co-shot starts, so that during the video call, the first electronic device does not have to generate the co-shot file for the entire span from the start of the video call to its end.
  • the co-shot file includes a first image area and a second image area
  • the first image area includes pixels corresponding to the first user
  • the second image area includes pixels corresponding to the second user.
  • the co-shot file may include images of multiple users, so as to realize co-shooting by multiple users.
  • the first image area includes pixels corresponding to any one of the following: a first background image, a second background image, and a target gallery image.
  • a variety of background images can be selected for the first image area, so that the background selection of the first user can be relatively flexible.
  • the second image area includes pixels corresponding to any one of the following: a first background image, a second background image, and a target gallery image.
  • a variety of background images can be selected for the second image area, so that the background selection of the second user can be relatively flexible.
  • the co-shot file further includes a background image area, and the background image area is the background of the first image area and the second image area, and the The background image area includes pixels corresponding to any one of the following: the first background image, the second background image, and the target gallery image.
  • the background image area can act as a common background for multiple users, so that in a co-shot file, multiple users can be considered to be in the same scene.
  • a variety of background images can be selected in the background image area, so that the background selection of multi-user co-shooting can be relatively flexible.
  • the resolution of the co-shot file is higher than the display resolution of the first electronic device.
  • the clarity of the co-shot image or co-shot video can therefore be relatively high. This helps improve the quality of the co-shot file.
  • the co-shot file is a co-shot image or a co-shot video.
  • In a second aspect, an electronic device is provided, characterized in that it includes:
  • a memory for storing a computer program;
  • a processor for executing the computer program stored in the memory;
  • a transceiver;
  • the processor is configured to establish a video call connection between the electronic device and a second electronic device, where the electronic device is the electronic device of the first user, and the second electronic device is the electronic device of the second user;
  • the processor is further configured to, during a video call, acquire first video data of the first user;
  • the transceiver is configured to acquire the second video data of the second user from the second electronic device through the video call connection;
  • the processor is further configured to acquire, according to the first video data and the second video data, a co-shot file of the first user and the second user.
  • before the processor establishes a video call connection between the electronic device and the second electronic device, the processor is further configured to:
  • display a first interface of a shooting application, where the first interface includes a co-shooting control;
  • display a second interface in response to an operation acting on the co-shooting control, where the second interface includes a plurality of user controls in one-to-one correspondence with a plurality of users, the plurality of users including the second user;
  • send a co-shooting invitation to the second electronic device in response to an operation acting on the user control of the second user, so as to establish the video call connection.
  • before the processor establishes a video call connection between the electronic device and the second electronic device, the processor is further configured to:
  • display a third interface of a video call application, where the third interface includes a plurality of video call controls in one-to-one correspondence with a plurality of users, the plurality of users including the second user;
  • send a video call invitation to the second electronic device in response to an operation acting on the video call control of the second user, so as to establish the video call connection.
  • the processor is further configured to:
  • display a first interface area and a second interface area on a fourth interface according to the first video data and the second video data, where the first interface area includes a first user image and the second interface area includes a second user image,
  • the first user image including pixels corresponding to the first user, and the second user image including pixels corresponding to the second user.
  • the fourth interface includes a split screen switch control and a background removal switch control. When the split screen switch control is in the on state and the background removal switch control is in the on state,
  • the first interface area further includes a second background image or a target gallery image, and/or,
  • the second interface area further includes a first background image or a target gallery image,
  • where the first background image includes pixels corresponding to the scene where the first user is located,
  • and the second background image includes pixels corresponding to the scene where the second user is located.
  • the fourth interface includes a split screen switch control and a background removal switch control. When the split screen switch control is in the off state and the background removal switch control is in the on state, the fourth interface includes a background interface area, which serves as the background of the first interface area and the second interface area. The background interface area includes any one of the following: the first background image, the second background image, and the target gallery image,
  • where the first background image includes pixels corresponding to the scene where the first user is located,
  • and the second background image includes pixels corresponding to the scene where the second user is located.
  • the processor is further configured to:
  • adjust the size of the first interface area and/or the second interface area in response to an operation acting on the fourth interface.
  • the processor is further configured to:
  • adjust the display priority of the first interface area or the second interface area in response to an operation acting on the fourth interface.
  • the fourth interface further includes a recording control, and the processor is specifically configured to:
  • in response to an operation acting on the recording control, acquire the co-shot file according to the first video data and the second video data.
  • the co-shot file includes a first image area and a second image area
  • the first image area includes pixels corresponding to the first user
  • the second image area includes pixels corresponding to the second user.
  • the first image area includes pixels corresponding to any one of the following: a first background image, a second background image, and a target gallery image.
  • the second image area includes pixels corresponding to any one of the following: a first background image, a second background image, and a target gallery image.
  • the co-shot file further includes a background image area, and the background image area is the background of the first image area and the second image area, and the The background image area includes pixels corresponding to any one of the following: the first background image, the second background image, and the target gallery image.
  • the resolution of the co-shot file is higher than the display resolution of the electronic device.
  • the co-shot file is a co-shot image or a co-shot video.
  • In another aspect, a computer storage medium is provided, including computer instructions which, when run on an electronic device, cause the electronic device to perform the co-shooting method described in any one of the possible implementations of the first aspect above.
  • In another aspect, a computer program product is provided which, when run on a computer, causes the computer to execute the co-shooting method described in any one of the possible implementations of the first aspect above.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a software structural block diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 14 is a relationship diagram of an application module provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 21 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 22 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 25 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 26 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 27 is a schematic structural diagram of a user interface provided by an embodiment of the present application.
  • FIG. 28 is a relationship diagram of an application module provided by an embodiment of the present application.
  • FIG. 29 is a schematic flowchart of a co-shooting method provided by an embodiment of the present application.
  • FIG. 30 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • A and/or B can mean: A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
  • the character "/" generally indicates that the associated objects are an "or" relationship.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments," unless specifically emphasized otherwise.
  • the terms "comprising", "including", "having" and their variants mean "including but not limited to", unless specifically emphasized otherwise.
  • the electronic device may be a portable electronic device that also includes other functions such as personal digital assistant and/or music player functions, for example, a mobile phone, a tablet computer, or a wearable electronic device with wireless communication capabilities (eg, a smart watch).
  • exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices carrying iOS, Android, Microsoft, or other operating systems.
  • the above-mentioned portable electronic device may also be other portable electronic devices, such as a laptop computer (Laptop) or the like. It should also be understood that, in some other embodiments, the above-mentioned electronic device may not be a portable electronic device, but a desktop computer.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , the mobile communication module 150, the wireless communication module 160, the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, the button 190, the camera 193, the display screen 194, and the subscriber identification module (SIM) card Interface 195, etc.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent components, or may be integrated in one or more processors.
  • the electronic device 101 may also include one or more processors 110 .
  • the controller can generate an operation control signal according to the instruction operation code and the timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 may be a cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. In this way, repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the electronic device 101 in processing data or executing instructions.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 101, and can also be used to transmit data between the electronic device 101 and peripheral devices.
  • the USB interface 130 can also be used to connect an earphone, and play audio through the earphone.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared technology (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • electronic device 100 may include one or more display screens 194 .
  • the display screen 194 of the electronic device 100 may be a flexible screen.
  • the flexible screen has attracted much attention due to its unique characteristics and great potential.
  • flexible screens have the characteristics of strong flexibility and bendability, which can provide users with new interactive methods based on the bendable characteristics, and can meet more needs of users for electronic devices.
  • the foldable display screen on the electronic device can be switched between a small screen in a folded state and a large screen in an unfolded state at any time. Therefore, users are using the split-screen function more and more frequently on electronic devices equipped with foldable displays.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the camera photosensitive element, where the optical signal is converted into an electrical signal; the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
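  • As an illustration of such a format conversion, the snippet below converts one YUV pixel to RGB in Java. The full-range BT.601 coefficients are an assumption; the patent does not specify which matrix the DSP uses.

```java
// Standalone YUV -> RGB conversion of the kind a DSP performs.
// Full-range BT.601 coefficients assumed.
final class YuvToRgb {
    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    /** Converts one 8-bit YUV pixel to a packed 0xRRGGBB value. */
    static int toRgb(int y, int u, int v) {
        int r = clamp(y + 1.402 * (v - 128));
        int g = clamp(y - 0.344136 * (u - 128) - 0.714136 * (v - 128));
        int b = clamp(y + 1.772 * (u - 128));
        return (r << 16) | (g << 8) | b;
    }
}
```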
  • the electronic device 100 may include one or more cameras 193 .
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store one or more computer programs including instructions.
  • the processor 110 may execute the above-mentioned instructions stored in the internal memory 121, thereby causing the electronic device 101 to execute the method for off-screen display, various applications and data processing provided in some embodiments of the present application.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the stored program area may store the operating system; the stored program area may also store one or more applications (such as gallery, contacts, etc.) and the like.
  • the storage data area may store data (such as photos, contacts, etc.) created during the use of the electronic device 101 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage components, flash memory components, universal flash storage (UFS), and the like.
  • the processor 110 may cause the electronic device 101 to perform the methods provided in the embodiments of the present application by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor 110.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • FIG. 2 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as Gallery, Camera, Changlian, Map, and Navigation.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • for example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 100 .
  • for example, the management of call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (media library), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the solutions provided in the embodiments of the present application can be applied to multi-user co-shooting scenarios, especially to remote multi-user co-shooting scenarios.
  • a remote multi-user co-shooting scenario may refer to a situation in which it is impossible, or difficult, for at least two users to complete a co-shot at the same time with the same camera device.
  • the remote multi-user co-shooting scenario is described below with some examples.
  • User A can take a selfie through an electronic device A with a camera function to obtain a selfie video A; user B can take a selfie through an electronic device B with a camera function to obtain a selfie video B. Finally, by synthesizing video A and video B, a co-shot video of user A and user B can be obtained.
  • the selfie video A and the selfie video B can be obtained by asynchronous shooting.
  • the distance between user A and electronic device A may be quite different from the distance between user B and electronic device B, so the outline size of user A in selfie video A and the outline size of user B in selfie video B are quite different.
  • for example, user A and user B perform similar actions, but user A's movements are relatively fast and large, while user B's movements are relatively slow and small.
  • as a result, in order to obtain a relatively good co-shooting effect, the user needs to perform a large amount of post-processing on the co-shot video.
  • Example 1 takes video shooting as an example for description. In fact, the co-shot pictures obtained by the method shown in Example 1 also have similar problems.
  • User A can make a video call with user B through the electronic device A with the camera function, and obtain a co-shot video that includes both user A and user B by recording the screen.
  • the maximum resolution of the co-shot video usually depends on the display resolution of the electronic device A.
  • Example 2 takes video shooting as an example for description. In fact, the co-shot picture obtained by the method shown in Example 2 also has a similar problem.
  • the embodiments of the present application provide a new co-shooting method, which aims to reduce the user's post-processing workload for co-shot files (such as co-shot images and co-shot videos) and to improve the clarity of co-shot files, thereby helping to improve the user experience of remote multi-user co-shooting.
  • FIG. 3 is a schematic diagram of a user interface 300 provided by an embodiment of the present application.
  • the user interface 300 may be displayed on the first electronic device.
  • the user interface 300 may be an interface of a camera application, or an interface of other applications having a photographing function. That is to say, the first electronic device carries a camera application or other application with a shooting function.
  • the first electronic device may display the user interface 300 in response to the first user's operations on the applications.
  • the first user may open the camera application by clicking on the icon of the camera application, and then the first electronic device may display the user interface 300 .
  • the camera application can call the camera 193 shown in FIG. 1 to capture the scene around the first electronic device.
  • the camera application may call the front camera of the first electronic device to take a selfie image of the first user, and display the selfie image on the user interface 300 .
  • the user interface 300 may include a plurality of function controls 310 (the function controls 310 may be presented on the user interface 300 in the form of tabs), and the plurality of function controls 310 may respectively correspond to a plurality of camera functions of the camera application.
  • the plurality of camera functions may include, for example, a portrait function, a photographing function, a video recording function, a co-shooting function, a professional function, etc.
  • correspondingly, the plurality of function controls 310 may include a portrait function control, a photographing function control, a video recording function control, a co-shooting function control, and a professional function control.
  • the first electronic device may switch the current camera function to a function for completing a co-shot, such as the co-shooting function shown in FIG. 3, in response to an operation (eg, a sliding operation) performed by the first user on the user interface 300.
  • the camera application may include other camera functions for completing a co-shot. The embodiments of the present application are described below by taking the co-shooting function as an example.
  • the user interface 300 may include a user co-shooting control 320.
  • the user co-shooting control 320 can be used to select or invite the second user, so as to complete a co-shot of the first user and the second user.
  • the first electronic device may display a user interface 400 as shown in FIG. 4 in response to an operation (eg, a click operation) performed by the first user on the user co-shooting control 320.
  • the user interface 300 may further include a material co-shooting control 330.
  • the material co-shooting control 330 may be used to select a material from the cloud, so as to complete a co-shot of the first user and the material.
  • the material may be, for example, various files such as photos, comics, expressions, stickers, animations, and videos.
  • the first user can shoot a selfie video A through the first electronic device.
  • the first electronic device may capture a video A including the first user.
  • the first electronic device may crop the video A according to the outline of the first user in the video A to obtain the user sub-video a, which may include the image of the first user and not include the background image.
  • the following takes a subframe A of video A as an example for detailed description.
  • the subframe A may include a plurality of pixel points A, and the plurality of pixel points A may include a plurality of pixel points a corresponding to the outline of the first user.
  • a plurality of pixel points a' located within the plurality of pixel points a in the subframe A may form a subframe a' of the user sub-video a.
  • the first electronic device can synthesize the user sub-video a and the material to obtain a new video A', and the material can serve as the background of the user's sub-video a in the video A'.
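  • A per-frame sketch of the compositing just described: pixels inside the first user's outline (a binary mask) are kept from the selfie frame, and the material supplies every other pixel. The mask itself (e.g. from portrait segmentation) and the equal frame sizes are assumptions; the patent only describes the pixel-selection behavior.

```java
// Composite one frame: user pixels from the selfie, material as background.
final class OutlineComposite {
    /**
     * @param selfie   a frame of video A, one packed 0xRRGGBB value per pixel
     * @param mask     true where the pixel belongs to the first user
     * @param material the background material frame, same dimensions
     */
    static int[][] compose(int[][] selfie, boolean[][] mask, int[][] material) {
        int height = selfie.length;
        int width = selfie[0].length;
        int[][] out = new int[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Subframe a': the user's pixels; the material serves as background.
                out[y][x] = mask[y][x] ? selfie[y][x] : material[y][x];
            }
        }
        return out;
    }
}
```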
  • the first user may shoot a selfie video B by using the first electronic device.
  • the first electronic device may capture a video B including the first user.
  • the first electronic device can synthesize the video B and the material to obtain a new video B', and the video B can serve as the background of the material in the video B'.
  • the user interface 300 may further include a gallery co-shooting control 340.
  • the gallery co-shooting control 340 may be used to select multimedia data (the multimedia data may include pictures and videos) from the local gallery, so as to complete the co-shooting between the first user and the multimedia data.
  • the first user can take a video C by using the first electronic device.
  • the first electronic device may capture a video C containing the first user.
  • the first electronic device may crop the video C according to the outline of the first user in the video C to obtain the user sub-video c, and the user sub-video c may include the image of the first user and not include the background image.
  • the first electronic device can synthesize the user sub-video c and the multimedia data to obtain a new video C', and the multimedia data can serve as the background of the user's sub-video c in the video C'.
  • the first user can shoot a selfie video D through the first electronic device.
  • the first electronic device may capture a video D containing the first user.
  • the first electronic device can synthesize the video D and the multimedia data to obtain a new video D', and the video D can serve as the background of the multimedia data in the video D'.
  • the user interface 300 may further include a gallery control 350 .
  • in response to an operation acting on the gallery control 350, the first electronic device may jump to the gallery application to view the multimedia data that has been shot.
  • the user interface 400 may include a plurality of user controls 410 corresponding to a plurality of users in one-to-one correspondence.
  • User interface 400 may include user controls 411 for the second user. That is, the plurality of users may include the second user.
  • user interface 400 may include user search control 420 .
  • through the user search control 420, the first electronic device can obtain relevant information of the second user (eg, part or all of the second user's name, the initials of the second user's name, part or all of the second user's video call number, etc.).
  • the first electronic device may determine user records of the second user from multiple user records stored in the first electronic device according to the relevant information of the second user, and the multiple user records may be in one-to-one correspondence with the multiple users.
  • the first electronic device can quickly display the user control 411 of the second user on the user interface 400 .
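  • A sketch of the lookup behind the user search control. The UserRecord fields and the matching rules (name substring, initial prefix, number fragment) are illustrative assumptions; the patent only says that user records are matched against the relevant information.

```java
// Illustrative matching of stored user records against a search query.
import java.util.List;

record UserRecord(String name, String initials, String callNumber) {}

final class UserSearch {
    /** Returns the stored user records that match the query typed by the first user. */
    static List<UserRecord> match(List<UserRecord> records, String query) {
        String q = query.toLowerCase();
        return records.stream()
                .filter(r -> r.name().toLowerCase().contains(q)
                        || r.initials().toLowerCase().startsWith(q)
                        || r.callNumber().contains(q))
                .toList();
    }
}
```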
  • the first user may invite the second user to make a video call through the first electronic device.
  • in response to an operation (such as a click operation) performed by the first user on the user control 411 of the second user, the first electronic device can initiate a video call to the second electronic device, where the second electronic device is the electronic device used by the second user.
  • the second user may receive the video call invitation from the first user through the second electronic device.
  • the second electronic device may display an interface for the video call invitation, and the interface may include controls for answering the video call.
  • a video call connection can be established between the first electronic device and the second electronic device.
  • multiple users may be arranged in alphabetical order, for example.
  • user interface 400 may include common user controls 412 .
  • for example, the first electronic device may determine the user with whom co-shots have been taken most often, denoted user A, and display a common user control A on the user interface 400, where the common user control A is the control corresponding to user A.
  • for another example, the first electronic device may determine the user with whom video calls have been made most often, denoted user B, and display a common user control B on the user interface 400, where the common user control B is the control corresponding to user B.
  • after the first electronic device establishes a video call connection with the second electronic device, the first electronic device can obtain the first video by shooting, and the second electronic device can obtain the second video by shooting; the first electronic device can obtain the second video through the video call connection, and the second electronic device can obtain the first video through the video call connection.
  • the first electronic device and/or the second electronic device may display a user interface 500 as shown in FIG. 5 .
  • the user interface 500 may include a first interface area 560 and a second interface area 570.
  • the first interface area 560 may display part or all of the image currently captured by the first electronic device, and the second interface area 570 may display part or all of the image currently captured by the second electronic device.
  • the first interface region 560 and the second interface region 570 may not overlap each other.
  • the first interface area 560 and the second interface area 570 may be located anywhere on the user interface 500 . As shown in FIG. 5 , the first interface area 560 may be located above the user interface 500 , and the second interface area 570 may be located below the user interface 500 . That is, some or all of the images captured by the first electronic device and some or all of the images captured by the second electronic device may be displayed on the user interface 500 at the same time.
  • the first interface area 560 may include the first user image 561.
  • the second interface area 570 may include a second user image 571 . That is, the first interface area 560 may include pixels corresponding to the first user, and the second interface area 570 may include pixels corresponding to the second user. It should be understood that in other examples, the first electronic device and/or the second electronic device may use a rear-facing camera to capture an image containing the user.
  • User interface 500 may include a recording control 510.
  • in response to an operation performed on the recording control 510 by the user (the first user if the user interface 500 is displayed on the first electronic device, or the second user if it is displayed on the second electronic device; similar cases are not described again below), the electronic device (likewise, the first electronic device or the second electronic device, depending on which device displays the user interface 500) can synthesize the first video shot by the first electronic device and the second video shot by the second electronic device to obtain the target video.
  • the target video includes a first image area and a second image area.
  • the first image area corresponds to the first interface area 560 and the second image area corresponds to the second interface area 570 .
  • alternatively, the electronic device could obtain images of both user A and user B by recording the screen.
  • however, screen recording acquires the display data of the electronic device, not the shooting data.
  • the sharpness of the display data is generally lower than that of the shooting data; by synthesizing the shooting data instead, the definition of the target video may be higher than the display definition of the electronic device.
  • the user interface may also include a number of controls for adjusting the co-shooting effect. These controls allow the user to adjust the co-shooting effect before or during the co-shoot.
  • the user interface 500 may include a beauty switch control 540 .
  • the electronic device may perform portrait beautification for the first user image 561 and/or the second user image 571. That is, the electronic device may display the beautified first user image 561 and/or second user image 571 on the user interface 500; in the synthesized target video, the user image in the first image area and/or the user image in the second image area may be an image after beautification processing.
  • the electronic device may not perform portrait beautification for the first user image 561 and the second user image 571. That is to say, the electronic device may display the first user image 561 and the second user image 571 on the user interface 500 according to the original image of the first user and the original image of the second user, so that the first user image 561 and the second user image 571 are images without beautification; in the synthesized target video, the user image in the first image area may be obtained from the original image of the first user, and the user image in the second image area may be obtained from the original image of the second user, that is, both may be images without beautification processing.
  • the user interface 500 may further include a filter switch control 550 .
  • the electronic device may perform filter beautification for the image of the first video and/or the image of the second video. That is, the electronic device may display the filtered image of the first video and/or the filtered image of the second video on the user interface 500; and, in the synthesized target video, the image in the first image area and/or the image in the second image area may be a filtered image.
  • the electronic device may not perform filter beautification for the first user image 561 and the second user image 571. That is to say, the electronic device can display unfiltered images on the user interface 500 according to the original image of the first video and the original image of the second video; in the synthesized target video, the image in the first image area may be obtained according to the original image of the first video, and the image in the second image area may be obtained according to the original image of the second video, that is, the target video may not include filtered images.
  • the user interface 500 may further include a background removal switch control 520 .
  • when the background removal switch control 520 is in an off state, the electronic device may not remove the background of the first video or the background of the second video, that is, both backgrounds are retained.
  • when the background removal switch control 520 is in an on state, the electronic device may remove the background of the first video and/or the background of the second video.
  • for example, the electronic device may remove the background of the first video and retain the background of the second video; for another example, the electronic device may remove the background of the second video and retain the background of the first video; in another example, the electronic device may remove both the background of the first video and the background of the second video.
  • the relationship between the user image (or user pixels, user image blocks) and the background (or background pixels, background image blocks) is described below with an example.
  • a user e can shoot a video E by using an electronic device e.
  • the electronic device e can crop the video E according to the outline of the user e in the video E to obtain the user sub-video and the background sub-video.
  • the user sub-video may include the image of the user e and does not include the background image; the background sub-video may include the background image and not include the image of the user e.
  • a subframe E of the video E may include a plurality of pixel points E, and the plurality of pixel points E may include a plurality of pixel points e corresponding to the outline of the user e.
  • a plurality of pixel points e' located within the outline formed by the pixel points e in the subframe E can form a subframe e' of the user sub-video, that is, the image of the user e; a plurality of pixel points e" located outside that outline in the subframe E can form a subframe e" of the background sub-video, that is, the background image.
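  • As a minimal sketch of the pixel-point split just described (assuming the outline of user e is given as a boolean mask; the names are illustrative):

    import numpy as np

    def split_subframe(subframe_E: np.ndarray, mask_e: np.ndarray):
        # subframe_E: HxWx3 pixels of a subframe E of the video E
        # mask_e:     HxW boolean array, True for pixel points inside
        #             the outline of user e
        h, w, _ = subframe_E.shape
        user = np.zeros((h, w, 4), dtype=np.uint8)  # subframe e' (user)
        back = np.zeros((h, w, 4), dtype=np.uint8)  # subframe e'' (background)
        user[..., :3] = subframe_E
        user[..., 3] = mask_e.astype(np.uint8) * 255      # opaque inside outline
        back[..., :3] = subframe_E
        back[..., 3] = (~mask_e).astype(np.uint8) * 255   # opaque outside outline
        return user, back

  • The alpha channel marks which pixels belong to each sub-video, so the two sub-frames can later be recombined with any background.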
  • the background removal switch control 520 is currently in an off state.
  • the first interface area 560 may display a first user image 561 and a first background image 562 .
  • the first background image 562 may be the background image of the first user.
  • the first background image 562 may be obtained by photographing the scene where the first user is located. That is, the first interface area 560 may include pixels corresponding to the first user and pixels corresponding to the scene where the first user is located.
  • the second interface area 570 may display a second user image 571 and a second background image 572.
  • the second background image 572 may be a background image of the second user.
  • the second background image 572 may be obtained by photographing the scene where the second user is located. That is, the second interface area 570 may include pixels corresponding to the second user and pixels corresponding to the scene where the second user is located.
  • the electronic device may display the user interface 600 .
  • the user interface 600 may include a first user background control 610 , a second user background control 620 , and a gallery background control 630 .
  • the first user background control 610 may be used to instruct the electronic device to display the first background image 562 in both the first interface area 560 and the second interface area 570 .
  • the first interface area 560 may display a first video captured by the first electronic device, and the first video may include a first user image 561 and a first background image 562 .
  • the first background image 562 contains image information of the scene where the first user is located.
  • the second interface area 570 displays a second user image 571 and the first background image 562, and the second user image 571 may be a part of the second video captured by the second electronic device.
  • the second interface area 570 may not display the second background image 572 as shown in FIG. 5 or FIG. 6 . That is, the second interface area 570 may not include pixels corresponding to the scene where the second user is located.
  • the electronic device can obtain the target video by synthesizing the first user image 561 in the first video, the first background image 562 in the first video, and the second user image 571 in the second video; the first image area of the target video may correspond to the first user image 561 and the first background image 562, and the second image area of the target video may correspond to the second user image 571 and the first background image 562.
  • exemplarily, the electronic device can obtain the first video and the second video in the manner mentioned above; the electronic device can determine the first background image 562 according to the first video; the electronic device can synthesize the first background image 562 with the second user image 571 to obtain a third video; the electronic device can display the first video in the first interface area 560 and display the third video in the second interface area 570; the electronic device can synthesize the first video and the third video into a target video, wherein the first video corresponds to the first image area of the target video, and the third video corresponds to the second image area of the target video (a per-frame sketch follows below).
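  • A hedged per-frame sketch of this "shared first background" flow follows; it assumes the first background image has already been determined from the first video (for example, by segmenting out the first user and filling the hole), and that a person mask for the second user is available. The names are hypothetical.

    import numpy as np

    def make_target_frame(first_frame: np.ndarray,
                          first_background: np.ndarray,
                          second_frame: np.ndarray,
                          second_user_mask: np.ndarray) -> np.ndarray:
        # third-video frame: the second user image pasted onto the
        # first video's background image
        third_frame = first_background.copy()
        third_frame[second_user_mask] = second_frame[second_user_mask]
        # target frame: first video on top (first image area), third
        # video below (second image area)
        return np.vstack([first_frame, third_frame])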
  • the second user background control 620 may be used to instruct the electronic device to display the second background image 572 in both the first interface area 560 and the second interface area 570.
  • the second interface area 570 may display a second video captured by the second electronic device, and the second video may include a second user image 571 and a second background image 572.
  • the second background image 572 contains image information of the scene where the second user is located.
  • the first interface area 560 displays the first user image 561 and the second background image 572, and the first user image 561 may be a part of the first video captured by the first electronic device.
  • the first interface area 560 may not display the first background image 562 as shown in FIG. 5 or FIG. 6 . That is to say, neither the first interface area 560 nor the first image area may include pixels corresponding to the scene where the first user is located.
  • the electronic device can obtain the target video by synthesizing the first user image 561 in the first video, the second user image 571 in the second video, and the second background image 572 in the second video; the first image area of the target video may correspond to the first user image 561 and the second background image 572, and the second image area of the target video may correspond to the second user image 571 and the second background image 572.
  • exemplarily, the electronic device can obtain the first video and the second video in the manner mentioned above; the electronic device can determine the second background image 572 according to the second video; the electronic device can synthesize the second background image 572 with the first user image 561 to obtain a fourth video; the electronic device can display the fourth video in the first interface area 560 and display the second video in the second interface area 570; the electronic device can synthesize the second video and the fourth video into a target video, wherein the fourth video corresponds to the first image area of the target video, and the second video corresponds to the second image area of the target video.
  • the gallery background control 630 may be used to instruct the electronic device to obtain the target gallery image 910 from the gallery and set the target gallery image 910 as the background image.
  • the target gallery image 910 may be from a video or image previously stored on the electronic device.
  • the target gallery image 910 may be a subframe of the video. For example, when a user co-shoots a video, a certain subframe of the co-shooting video may correspond to a target gallery image, and multiple subframes of the co-shooting video may correspond one-to-one with multiple subframes of the video where the target gallery image is located.
  • the electronic device may display the target gallery image 910 and the first user image 561 in the first interface area 560, where the target gallery image 910 may be a background image of the first user.
  • the electronic device may display the target gallery image 910 and the second user image 571 in the second interface area 570, where the target gallery image 910 may be a background image of the second user.
  • the first interface area 560 may display a first user image 561 and the target gallery image 910 , and the first user image 561 may be a part of the first video captured by the first electronic device. That is, the first interface area 560 may include pixels corresponding to the first user and pixels corresponding to the target gallery image 910 .
  • the first interface area 560 may not display the first background image 562 as shown in FIG. 5 or FIG. 6 , nor the second background image 572 as shown in FIG. 8 .
  • the second interface area 570 may display a second user image 571 and the target gallery image 910 , and the second user image 571 may be a part of the second video captured by the second electronic device. That is, the second interface area 570 may include pixels corresponding to the second user and pixels corresponding to the target gallery image 910 .
  • the second interface area 570 may not display the second background image 572 as shown in FIG. 5 or FIG. 6 , nor the first background image 562 as shown in FIG. 7 .
  • the electronic device can obtain the target video by synthesizing the first user image 561 in the first video, the second user image 571 in the second video, and the target gallery image 910; the first image area of the target video may correspond to the first user image 561 and the target gallery image 910, and the second image area of the target video may correspond to the second user image 571 and the target gallery image 910.
  • exemplarily, the electronic device may acquire the first video and the second video in the manner mentioned above; the electronic device may determine the first user image 561 according to the first video and determine the second user image 571 according to the second video; the electronic device can synthesize the target gallery image 910 and the first user image 561 to obtain a fifth video, and synthesize the target gallery image 910 and the second user image 571 to obtain a sixth video; the electronic device can display the fifth video in the first interface area 560 and display the sixth video in the second interface area 570; the electronic device can synthesize the fifth video and the sixth video into a target video, wherein the fifth video corresponds to the first image area of the target video, and the sixth video corresponds to the second image area of the target video.
  • in one example, the user interface shown in FIGS. 6 to 9 may be displayed on the first electronic device; then “select our background” in FIGS. 6 to 9 may correspond to the first user background control 610, and “select counterparty background” in FIGS. 6 to 9 may correspond to the second user background control 620.
  • in another example, the user interface shown in FIGS. 6 to 9 may be displayed on the second electronic device; then “select our background” in FIGS. 6 to 9 may correspond to the second user background control 620, and “select counterparty background” in FIGS. 6 to 9 may correspond to the first user background control 610.
  • the user interface may further include a split screen switch control 530 .
  • the first interface area 560 and the second interface area 570 can be, for example, two regular display areas. That is, the outline of the first interface area 560 may not match (or not correspond to) the outline of the first user, and the outline of the second interface area 570 may not match (or not correspond to) the outline of the second user.
  • the area of the first interface region 560 and the area of the second interface region 570 may correspond to a fixed ratio (e.g., 1:1, 1:1.5, etc.).
  • the split screen switch control 530 is currently in an on state. Both the first interface region 560 and the second interface region 570 may be rectangular in shape.
  • the electronic device may perform an operation of synthesizing a video to obtain a target video, and the first image area and the second image area of the target video may be two regular display areas.
  • that is, the outline of the first image area may not match (or not correspond to) the outline of the first user, and the outline of the second image area may not match (or not correspond to) the outline of the second user.
  • the area of the first image area and the area of the second image area may correspond to a fixed ratio (e.g., 1:1, 1:1.5, etc.).
  • the shapes of the first image area and the second image area may both be rectangles.
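  • A minimal sketch of this split-screen composition for one frame (assuming OpenCV for resizing; the 1:1 and 1:1.5 area ratios from the text map onto the area_ratio parameter, and the names are illustrative):

    import cv2
    import numpy as np

    def split_screen_compose(first_frame: np.ndarray,
                             second_frame: np.ndarray,
                             area_ratio: float = 1.0) -> np.ndarray:
        # Stack two rectangular display areas vertically; the height of
        # the second area follows a fixed area ratio such as 1:1 or 1:1.5.
        h, w = first_frame.shape[:2]
        h2 = int(round(h * area_ratio))
        second = cv2.resize(second_frame, (w, h2))  # dsize is (width, height)
        return np.vstack([first_frame, second])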
  • the outline of the first interface area 560 may, for example, match (or correspond to) the outline of the first user, and the outline of the second interface area 570 may, for example, match (or correspond to) the outline of the second user.
  • the background removal switch control 520 may be turned on. That is, the first interface area 560 may not include the first background image 562 of the first video as shown in FIG. 5 or FIG. 6, and the second interface area 570 may not include the second background image 572 of the second video as shown in FIG. 5 or FIG. 6.
  • similarly, in the target video, the outline of the first image area may match (or correspond to) the outline of the first user, and the outline of the second image area may match (or correspond to) the outline of the second user. That is, the first image area may not include the first background image 562 of the first video as shown in FIG. 5 or FIG. 6, and the second image area may not include the second background image 572 of the second video as shown in FIG. 5 or FIG. 6.
  • the electronic device may preferentially display the first user image 561 or the second user image 571 .
  • if the first user image 561 is overlaid on the second user image 571, the display priority of the first user image 561 (or of the first interface area 560) may be higher than the display priority of the second user image 571 (or of the second interface area 570).
  • if the second user image 571 is overlaid on the first user image 561, the display priority of the second user image 571 (or of the second interface area 570) may be higher than the display priority of the first user image 561 (or of the first interface area 560).
  • the electronic device may display the first user image 561 in front of the second user image 571.
  • in this case, the outline of the first interface area 560 may be determined by the first user image 561; the outline of the second interface area 570 may be determined by the second user image 571 together with the portion of the first user image 561 that overlaps the second user image 571.
  • the first interface area 560 may display all pixels of the first user image 561
  • the second interface area 570 may display only part of the pixels of the second user image 571 .
  • similarly, in the target video, the outline of the first image area may be determined by the first user image 561, and the outline of the second image area may be determined by the second user image 571 together with the portion of the first user image 561 that overlaps the second user image 571.
  • the first image area may include all pixel points corresponding to the first user image 561
  • the second image area may include part of the pixel points corresponding to the second user image 571.
  • the electronic device may display the second user image 571 in front of the first user image 561, as indicated by the arrow 1040 in FIG. 10.
  • in this case, the outline of the second interface area 570 may be determined by the second user image 571; the outline of the first interface area 560 may be determined by the first user image 561 together with the portion of the second user image 571 that overlaps the first user image 561.
  • the second interface area 570 may display all the pixels of the second user image 571 , and the first interface area 560 may display only part of the pixels of the first user image 561 .
  • similarly, in the target video, the outline of the second image area may be determined by the second user image 571, and the outline of the first image area may be determined by the first user image 561 together with the portion of the second user image 571 that overlaps the first user image 561.
  • the first image area may include only part of the pixel points corresponding to the first user image 561
  • the second image area may include all pixel points corresponding to the second user image 571 .
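  • The display-priority behaviour can be sketched as a painter's-algorithm overlay (assuming per-user boolean masks; the names are illustrative):

    import numpy as np

    def overlay_by_priority(background: np.ndarray,
                            user_frames: list,
                            user_masks: list,
                            order: list) -> np.ndarray:
        # Draw user images in ascending display priority: indices later
        # in `order` are drawn later, so they cover overlapping pixels
        # of lower-priority user images.
        out = background.copy()
        for i in order:
            out[user_masks[i]] = user_frames[i][user_masks[i]]
        return out

    # e.g., order=[1, 0] draws the second user image first and the first
    # user image on top, matching the case where the first user image is
    # overlaid on the second user image.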
  • the user can adjust the display size of the first user image 561 and the second user image 571, and then adjust the size ratio of the first user image and the second user image in the target video.
  • the user may perform a zoom operation on the first interface area 560 .
  • the electronic device may zoom the first interface area 560 . Since the outline of the first interface area 560 matches the outline of the first user image 561 , the display scale of the first user image 561 on the user interface 1000 can be adjusted. Accordingly, the image scale of the first image region in the target video can be adjusted.
  • the user may perform a zoom operation on the second interface area 570 .
  • the electronic device may zoom the second interface area 570 . Since the outline of the second interface area 570 matches the outline of the second user image 571 , the display scale of the second user image 571 on the user interface 1000 can be adjusted. Accordingly, the image scale of the second image area in the target video can be adjusted.
  • the user may perform a zoom operation on the user interface 1000 .
  • the electronic device may adjust the display scales of the first interface area 560 and the second interface area 570 on the user interface 1000 .
  • the image ratios of the first image area and the second image area in the target video can be adjusted accordingly. That is, the electronic device may have the ability to resize multiple user images at one time.
  • the display scale of the first interface area 560 on the user interface may be the first display scale
  • the display scale of the second interface area 570 on the user interface may be the second display scale.
  • the display ratio of the interface area on the user interface can be understood as the ratio of the number of pixels in the interface area to the number of pixels in the user interface.
  • the first display scale and the second display scale may be the same or approximately the same.
  • the number of pixels included in the first interface region 560 and the number of pixels included in the second interface region 570 are the same or not significantly different.
  • the display ratios of multiple user images on the user interface are not much different.
  • This display method is relatively more suitable for scenes with a large number of co-shooting users, which helps reduce the user's workload of adjusting the display ratios of the user images one by one.
  • the image of the first user and the image of the second user may appear to be the same size.
  • the ratio of the first user image 561 to the first video may be a first image ratio (an image ratio may refer to, in a certain frame of the video, the ratio of the number of pixels of the user image to the total number of pixels in that frame), and the ratio of the second user image 571 to the second video may be a second image ratio; the ratio of the first display ratio to the first image ratio may be a first ratio, and the ratio of the second display ratio to the second image ratio may be a second ratio (when the ratio of a display ratio to an image ratio is 1, the electronic device displays the user image at its original size in the video). After the electronic device adjusts the display ratios of the first interface area 560 and the second interface area 570 on the user interface, the first ratio and the second ratio may be the same (a numerical sketch is given after this discussion).
  • for example, if the proportion of the first user image 561 in the first video is relatively large and the proportion of the second user image 571 in the second video is relatively small, the proportion of the first user image 561 on the user interface may be larger than that of the second user image 571, so the displayed proportions can remain relatively close to those of the original videos.
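  • The equal-ratio adjustment can be sketched numerically as follows (an illustration under the definitions above; the function name and parameters are hypothetical):

    def allocate_ui_pixels(image_ratios, ui_pixels, common_ratio=1.0):
        # image_ratios[i]: pixels of user i's image / total pixels of a
        #                  frame of user i's own video (the "image ratio")
        # ui_pixels:       total number of pixels of the user interface
        # common_ratio:    shared value of display ratio / image ratio;
        #                  1.0 reproduces each user's original size
        # display_ratio[i] = common_ratio * image_ratios[i], so the number
        # of UI pixels devoted to user i's image is:
        return [common_ratio * r * ui_pixels for r in image_ratios]

    # e.g., image_ratios=[0.30, 0.12] keeps the first user image
    # proportionally larger on the interface, as in the original videos.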
  • the user can thus resize his or her image on the user interface, and in the target video after the shoot, by moving closer to or farther away from the camera.
  • the co-shot target video can essentially preserve the pixels of the original videos, which is beneficial for improving the clarity of the co-shot video. As shown in the example in the figure, the display ratio of the user image on the user interface during co-shooting can match the display ratio of the user image on the user interface during a self-portrait, which helps users adapt more easily to the co-shooting method provided by the embodiments of the present application.
  • the user interface 1000 may also include a background interface area 580 .
  • the pixel points in the background interface area 580 may be, for example, default values (eg, gray, white, etc. by default).
  • the background interface area 580 may display any one of the first background image 562 , the second background image 572 , and the target gallery image 910 .
  • the target video may include a background image area.
  • the background image area may correspond to any one of the first background image 562 , the second background image 572 , and the target gallery image 910 .
  • the user interface may include a first user background control 1010 .
  • the first user background control 1010 may be used to instruct the electronic device to display the first background image 562 within the background image area.
  • in response to the user's operation on the first user background control 1010, the electronic device may display the first background image 562 in the background interface area 580.
  • the background interface area 580 may include pixels corresponding to the first background image 562 .
  • the electronic device may hide the first user background control 1010 on the user interface.
  • the electronic device may automatically hide the first user background control 1010 on the user interface so that the first user background control 1010 does not block the first background image 562 .
  • the electronic device may also hide the second user background control 1020 and the gallery background control 1030 . Since the user has selected an appropriate background image, the electronic device can hide some of the controls, thereby simplifying the user interface and reducing the occlusion of the preview video by the controls.
  • the electronic device may synthesize the first user image 561 in the first video, the first background image 562 in the first video, and the second user image 571 in the second video to obtain the target video; the first image area of the target video may correspond to the first user image 561, the second image area of the target video may correspond to the second user image 571, and the background image area of the target video may correspond to the first background image 562.
  • the user interface may include a second user background control 1020 .
  • the second user background control 1020 may be used to instruct the electronic device to display the second background image 572 within the background image area.
  • in response to the user's operation on the second user background control 1020, the electronic device may display the second background image 572 in the background interface area 580.
  • the background interface area 580 may include pixel points corresponding to the second background image 572 .
  • the electronic device may hide the second user background control 1020 on the user interface.
  • the electronic device may automatically hide the second user background control 1020 on the user interface so that the second user background control 1020 does not block the second background image 572.
  • the electronic device may also hide the first user background control 1010 and the gallery background control 1030 . Since the user has selected a suitable background image, the electronic device can hide some of the controls, thereby simplifying the user interface and reducing the occlusion of the preview video by the controls.
  • the electronic device may synthesize the first user image 561 in the first video, the second user image 571 in the second video, and the second background image 572 in the second video to obtain the target video.
  • the first image area of the target video may correspond to the first user image 561
  • the second image area of the target video may correspond to the second user image 571
  • the background image area of the target video may correspond to the second background image 572.
  • the user interface may include a gallery background control 1030 .
  • the gallery background control 1030 is used to instruct the electronic device to obtain the target gallery image 910 from the gallery; in response to the user's operation on the gallery background control 1030 , the electronic device can display the target gallery image 910 in the background interface area 580 .
  • the background interface area 580 may include pixel points corresponding to the target gallery image 910 .
  • the target gallery image 910 may be a subframe of the video.
  • the electronic device may hide the gallery background control 1030 on the user interface. For example, the electronic device may automatically hide the gallery background control 1030 on the user interface so that the gallery background control 1030 does not obscure the target gallery image 910.
  • the electronic device may also hide the first user background control 1010 and the second user background control 1020. Since the user has selected an appropriate background image, the electronic device can hide some of the controls, thereby simplifying the user interface and reducing the occlusion of the preview video by the controls.
  • the electronic device may synthesize the first user image 561 in the first video, the second user image 571 in the second video, and the target gallery image 910 to obtain the target video.
  • the first image area of the target video may correspond to the first user image 561
  • the second image area of the target video may correspond to the second user image 571
  • the background image area of the target video may correspond to the target gallery image 910.
  • the user interface may display a video call hang-up control (not shown in FIGS. 3 to 13 ).
  • the electronic device can hang up the video call between the first user and the second user.
  • the first user can thus use the camera application to implement a video call and remote co-shooting with the second user.
  • the electronic device can retrieve the target video, so that the user can watch the target video.
  • the electronic device may perform post-adjustment on the target video. For example, the playback speed of the first image area, the playback speed of the second image area, the beautification of the first image area, the beautification of the second image area, the size of the first image area, and the size of the second image area can all be adjusted.
  • FIG. 14 shows a relationship diagram of an application module provided by an embodiment of the present application.
  • the camera application shown in FIG. 14 may correspond to the camera application shown in FIG. 2 , for example.
  • the Changlian application shown in FIG. 14 may correspond to the Changlian application shown in FIG. 2.
  • the gallery application shown in FIG. 14 may correspond to the gallery application shown in FIG. 2 .
  • the camera application may include a photographing module.
  • in the first electronic device, the photographing module may be used to photograph the scene where the first user is located to obtain the first video.
  • in the second electronic device, the photographing module may be used to photograph the scene where the second user is located to obtain the second video.
  • the Changlian application may include a video call module.
  • the video call module may be configured to send the first video to a second electronic device, and obtain the second video from the second electronic device, the second electronic device being the electronic device of the second user.
  • the video calling module may be configured to send the second video to the first electronic device, and obtain the first video from the first electronic device, where the first electronic device is the electronic device of the first user.
  • the camera application can also include a synthesis module.
  • the synthesis module can synthesize the target video according to the first video and the second video.
  • Gallery applications may include multimedia modules.
  • the multimedia module can retrieve the target video and perform post-processing on the target video.
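  • The module relationship of FIG. 14 can be sketched as the following hypothetical wiring (class and method names are illustrative, not the actual application interfaces):

    from typing import List

    class PhotographingModule:          # in the camera application
        def shoot(self) -> List[bytes]:
            # capture the local scene and return the local video frames
            return []                   # placeholder for camera capture

    class VideoCallModule:              # in the Changlian application
        def exchange(self, local_video: List[bytes]) -> List[bytes]:
            # send the local video to the peer device and return the
            # peer's video received over the video call connection
            return []                   # placeholder for network I/O

    class SynthesisModule:              # in the camera application
        def synthesize(self, first: List[bytes], second: List[bytes]):
            # combine the first video and the second video frame by
            # frame into the target video
            return list(zip(first, second))

    class MultimediaModule:             # in the gallery application
        def post_process(self, target):
            # retrieve the target video and apply post-adjustments
            return target

    # e.g., on the first electronic device:
    local = PhotographingModule().shoot()        # first video
    remote = VideoCallModule().exchange(local)   # second video
    target = SynthesisModule().synthesize(local, remote)
    final = MultimediaModule().post_process(target)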
  • the solutions provided by the embodiments of the present application can realize shooting through the camera application and can call the Changlian application of the electronic device through the camera application to realize a video call, thereby achieving the effect of synchronized, remote co-shooting.
  • since a video call can facilitate communication among multiple users, it is beneficial for improving the coordination of the multiple users' co-shooting.
  • a co-shot picture or a co-shot video with a relatively good co-shooting effect can thus be obtained, which helps reduce the user's workload during the co-shooting process, such as the workload of post-production image retouching.
  • when the quality of the call is relatively good (for example, when the signal of the electronic device is strong), the definition of the co-shot image or the co-shot video may be relatively high.
  • FIG. 15 is a schematic diagram of another user interface 1400 provided by an embodiment of the present application.
  • the user interface 1400 may be displayed on the first electronic device.
  • the user interface 1400 may be an interface of the Changlian application, or an interface of another application having a video call function. That is to say, the first electronic device is provided with the Changlian application or another application with a video call function.
  • the first electronic device may display the user interface 1400 in response to the first user's operations on such an application.
  • the first user can open the Changlian application by clicking on the icon of the Changlian application, and then the first electronic device can display the user interface 1400 .
  • the user interface 1400 may include a plurality of user controls 1410 in a one-to-one correspondence with a plurality of users.
  • the plurality of users may include a second user.
  • the first electronic device may display the contact information of the second user on the user interface 1500 shown in FIG. 16 .
  • the contact information of the second user may include at least one of the following: the name 1510 of the second user, the contact information 1520 of the second user, the call record 1530 of the second user, and the like.
  • user interface 1400 may include user search control 1420 .
  • the first user may invite the second user to make a video call through the user search control 1420 .
  • an operation such as a click operation
  • a series of subsequent operations such as text input, voice input, scanning a QR code, etc.
  • the first electronic device can obtain the relevant information of the second user (eg part or all of the name of the second user, initials of the second user's name, part or all of the video call number of the second user, etc.).
  • the first electronic device may determine user records of the second user from multiple user records stored in the first electronic device according to the relevant information of the second user, and the multiple user records may be in one-to-one correspondence with the multiple users. Further, the first electronic device can quickly display the user control of the second user on the user interface 1400 .
  • user interface 1400 may include common user controls 1412 .
  • the second user may belong to frequently used contacts, and the user interface 1400 may include a frequently used user control 1411 corresponding to the second user.
  • the first electronic device may identify, by counting, the user with whom co-shooting has occurred most often as user A, and display a common user control A on the user interface 1400, where the common user control A may be a control corresponding to user A.
  • the first electronic device may identify, by counting, the user with whom video calls have occurred most often as user B, and display a common user control B on the user interface 1400, where the common user control B may be a control corresponding to user B.
  • multiple users may be arranged in alphabetical order, for example.
  • the user interface may include a Changlian video control 1430.
  • the user interface 1400 may include a plurality of Changlian video controls 1430 in one-to-one correspondence with a plurality of users.
  • the user interface 1500 may include a Changlian video control 1430 corresponding to the second user.
  • the first user may invite the second user to make a video call through the first electronic device. As shown in FIGS. 15 to 17, in response to an operation (such as a click operation) performed by the first user on the Changlian video control 1430 corresponding to the second user, the first electronic device may initiate a video call to the second electronic device, where the second electronic device may be an electronic device used by the second user.
  • the first electronic device may display a video call interface 1600 as shown in FIG. 17 .
  • the second user may receive the video call invitation from the first user through the second electronic device.
  • the second electronic device may display an interface for the video call invitation, and the interface may include controls for answering the video call.
  • a video call connection can be established between the first electronic device and the second electronic device.
  • the first electronic device may display a user interface 1700 as shown in FIG. 18 , for example.
  • after the first electronic device establishes a video call connection with the second electronic device, the first electronic device can obtain the first video by shooting, and the second electronic device can obtain the second video by shooting; the first electronic device can obtain the second video through the video call connection, and the second electronic device can obtain the first video through the video call connection.
  • the first user can invite the second user to take a photo remotely during the video call.
  • the second user may invite the first user to take a photo remotely during the video call.
  • the first electronic device and the second electronic device may display a user interface 1800 as shown in FIG. 19 .
  • User interface 1800 may be a preparation interface for a remote co-shoot.
  • the user interface shown in FIG. 15 or FIG. 16 may further include a remote co-shooting control 1440.
  • the user interface 1400 may include a plurality of remote co-shooting controls 1440 in one-to-one correspondence with a plurality of users.
  • the user interface 1500 may include a remote co-shooting control 1440 corresponding to the second user.
  • the first user may invite the second user to complete the remote co-shooting through a video call by means of the remote co-shooting control 1440.
  • in response to an operation (such as a click operation) performed by the first user on the remote co-shooting control 1440, the first electronic device can initiate a video call to the second electronic device and send indication information to the second electronic device, the indication information being used to invite the second user to take a photo together, where the second electronic device may be an electronic device used by the second user.
  • the first electronic device may display a video call interface 1600 as shown in FIG. 17 .
  • the second user may receive the remote co-shooting invitation from the first user through the second electronic device.
  • the second electronic device may display an interface for the remote co-shooting invitation, and the interface may include video call answering controls.
  • a video call connection can be established between the first electronic device and the second electronic device, and both the first electronic device and the second electronic device can display as shown in Figure 19 user interface 1800.
  • the user interface 1800 may include a first interface area 1860 and a second interface area 1870.
  • the first interface area 1860 may display part or all of the images currently captured by the first electronic device, and the second interface area 1870 may display part or all of the images currently captured by the second electronic device.
  • the first interface region 1860 and the second interface region 1870 may not overlap each other.
  • the first interface area 1860 and the second interface area 1870 may be located anywhere on the user interface 1800 .
  • the first interface area 1860 may be located above the user interface 1800
  • the second interface area 1870 may be located below the user interface 1800 . That is, some or all of the images captured by the first electronic device and some or all of the images captured by the second electronic device may be displayed on the user interface 1800 at the same time.
  • the first interface area 1860 may include the first user image 1861.
  • the second interface area 1870 may include a second user image 1871 . That is, the first interface area 1860 may include pixels corresponding to the first user, and the second interface area 1870 may include pixels corresponding to the second user. It should be understood that in other examples, the first electronic device and/or the second electronic device may use a rear camera to capture an image containing the user.
  • User interface 1800 may include recording controls 1810 .
  • in response to an operation on the recording control 1810, the electronic device can synthesize the first video shot by the first electronic device and the second video shot by the second electronic device to obtain the target video.
  • the target video includes a first image area and a second image area.
  • the first image area corresponds to the first interface area 1860
  • the second image area corresponds to the second interface area 1870 . It can be known from the above that the definition of the target video may be higher than the display definition of the electronic device.
  • the user interface may also include a number of controls for adjusting the co-shooting effect. These controls allow the user to adjust the co-shooting effect before or during the co-shoot. Some possible control examples provided by the present application are described below with reference to FIGS. 19 to 27.
  • the user interface 1800 may include a beautification switch control 1840 .
  • the beautification switch control 1840 may have the functions of the beauty switch control 540 and/or the filter switch control 550 described above, which will not be described in detail here.
  • the user interface 1800 may further include a background removal switch control 1820 .
  • the background removal switch control 1820 may be used to instruct the electronic device whether to remove the background of the first video and/or the background of the second video.
  • the electronic device may display the user interface 1800 .
  • User interface 1800 may include a first user background control, a second user background control, and a gallery background control.
  • the background removal switch control 1820 may currently be in an off state.
  • the first interface area 1860 may display a first user image 1861 and a first background image 1862 ; the second interface area 1870 may display a second user image 1871 and a second background image 1872 .
  • the first background image 1862 may be the background image of the first user.
  • the second background image 1872 may be a background image of the second user. That is, the first interface area 1860 may include pixels corresponding to the first user and pixels corresponding to the scene where the first user is located; the second interface area 1870 may include pixels corresponding to the second user and pixels corresponding to the scene where the second user is located.
  • the electronic device may display the user interface 1900 .
  • the user interface 1900 may include a first user background control 1910 , a second user background control 1920 , and a gallery background control 1930 .
  • the first user background control 1910 is used to instruct the electronic device to display the first background image 1862 in both the first interface area 1860 and the second interface area 1870 .
  • the first interface area 1860 may display a first video captured by the first electronic device, and the first video may include a first user image 1861 and a first background image 1862.
  • the first background image 1862 contains image information of the scene where the first user is located.
  • the second interface area 1870 displays a second user image 1871 and the first background image 1862, and the second user image 1871 may be a part of the second video captured by the second electronic device.
  • the second interface area 1870 may not display the second background image 1872 as shown in FIG. 19 or FIG. 20 . That is, the second interface area 1870 may not include pixels corresponding to the scene where the second user is located.
  • the electronic device can obtain the target video by synthesizing the first user image 1861 in the first video, the first background image 1862 in the first video, and the second user image 1871 in the second video; the first image area of the target video may correspond to the first user image 1861 and the first background image 1862, and the second image area of the target video may correspond to the second user image 1871 and the first background image 1862.
  • exemplarily, the electronic device can obtain the first video and the second video in the manner mentioned above; the electronic device can determine the first background image 1862 according to the first video; the electronic device can synthesize the first background image 1862 with the second user image 1871 to obtain a third video; the electronic device can display the first video in the first interface area 1860 and display the third video in the second interface area 1870; the electronic device can synthesize the first video and the third video into a target video, wherein the first video corresponds to the first image area of the target video, and the third video corresponds to the second image area of the target video.
  • the second user background control 1920 may be used to instruct the electronic device to display the second background image 1872 in both the first interface area 1860 and the second interface area 1870.
  • the second interface area 1870 may display a second video captured by the second electronic device, and the second video may include a second user image 1871 and a second background image 1872.
  • the second background image 1872 contains image information of the scene where the second user is located.
  • the first interface area 1860 displays the first user image 1861 and the second background image 1872, and the first user image 1861 may be a part of the first video captured by the first electronic device.
  • the first interface area 1860 may not display the first background image 1862 as shown in FIG. 19 or FIG. 20 . That is, the first interface area 1860 may not include pixels corresponding to the scene where the first user is located.
  • the electronic device can obtain the target video by synthesizing the first user image 1861 in the first video, the second user image 1871 in the second video, and the second background image 1872 in the second video; the first image area of the target video may correspond to the first user image 1861 and the second background image 1872, and the second image area of the target video may correspond to the second user image 1871 and the second background image 1872.
  • exemplarily, the electronic device can obtain the first video and the second video in the manner mentioned above; the electronic device can determine the second background image 1872 according to the second video; the electronic device can synthesize the second background image 1872 with the first user image 1861 to obtain a fourth video; the electronic device can display the fourth video in the first interface area 1860 and display the second video in the second interface area 1870; the electronic device can synthesize the second video and the fourth video into a target video, wherein the fourth video corresponds to the first image area of the target video, and the second video corresponds to the second image area of the target video.
  • the gallery background control 1930 may be used to instruct the electronic device to obtain the target gallery image 2210 from the gallery and set the target gallery image 2210 as the background image.
  • the target gallery image 2210 may be a subframe of a video. For example, when users co-shoot a video, a certain subframe of the co-shot video may correspond to the target gallery image 2210, and multiple subframes of the co-shot video may correspond one-to-one with multiple subframes of the video where the target gallery image 2210 is located.
  • the first interface area 1860 may display a first user image 1861 and the target gallery image 2210, and the first user image 1861 may be a part of the first video captured by the first electronic device. That is, the first interface area 1860 may include pixels corresponding to the first user and pixels corresponding to the target gallery image 2210 . The first interface area 1860 may not display the first background image 1862 as shown in FIG. 19 or FIG. 20 , nor the second background image 1872 as shown in FIG. 22 .
  • the second interface area 1870 may display a second user image 1871 and the target gallery image 2210, and the second user image 1871 may be a part of the second video captured by the second electronic device. That is, the second interface area 1870 may include pixels corresponding to the second user and pixels corresponding to the target gallery image 2210 .
  • the second interface area 1870 may not display the second background image 1872 as shown in FIG. 19 or FIG. 20 , nor the first background image 1862 as shown in FIG. 21 .
  • the electronic device may display the target gallery image 2210 only within the first interface area 1860 or the second interface area 1870 to serve as a background image for the interface area.
  • the electronic device can obtain the target video by synthesizing the first user image 1861 in the first video, the second user image 1871 in the second video, and the target gallery image 2210; the first image area of the target video may correspond to the first user image 1861 and the target gallery image 2210, and the second image area of the target video may correspond to the second user image 1871 and the target gallery image 2210.
  • exemplarily, the electronic device may obtain the first video and the second video in the manner mentioned above; the electronic device may determine the first user image 1861 according to the first video and determine the second user image 1871 according to the second video; the electronic device can synthesize the target gallery image 2210 and the first user image 1861 to obtain a fifth video, and synthesize the target gallery image 2210 and the second user image 1871 to obtain a sixth video; the electronic device can display the fifth video in the first interface area 1860 and display the sixth video in the second interface area 1870; the electronic device can synthesize the fifth video and the sixth video into a target video, wherein the fifth video corresponds to the first image area of the target video, and the sixth video corresponds to the second image area of the target video.
  • in one example, the user interface shown in FIGS. 20 to 23 may be displayed on the first electronic device; then “select my background” in FIGS. 20 to 23 may correspond to the first user background control 1910, and “select counterparty background” in FIGS. 20 to 23 may correspond to the second user background control 1920.
  • in another example, the user interface shown in FIGS. 20 to 23 may be displayed on the second electronic device; then “select my background” in FIGS. 20 to 23 may correspond to the second user background control 1920, and “select counterparty background” in FIGS. 20 to 23 may correspond to the first user background control 1910.
  • the user interface may further include a split screen switch control 1830 .
  • the first interface area 1860 and the second interface area 1870 may be, for example, two regular display areas. That is, the outline of the first interface area 1860 may not match (or not correspond to) the outline of the first user, and the outline of the second interface area 1870 may not match (or not correspond to) the outline of the second user.
  • the area of the first interface region 1860 and the area of the second interface region 1870 may correspond to a fixed ratio (e.g., 1:1, 1:1.5, etc.).
  • the split screen switch control 1830 is currently on. Both the first interface region 1860 and the second interface region 1870 may be rectangular in shape.
  • the electronic device may perform an operation of synthesizing a video to obtain a target video.
  • the first image area and the second image area of the target video may be two regular display areas.
  • the contours of the first image area may not match (or do not correspond to) the contours of the first user, and the contours of the second image area may not match (or do not correspond) to the contours of the second user.
  • the area of the first image area and the area of the second image area may correspond to a fixed ratio (eg, 1:1, 1:1.5, etc.).
  • the shapes of the first image area and the second image area of the target video may both be rectangles.
  • the outline of the first interface area 1860 may, for example, match (or correspond to) the outline of the first user, and the outline of the second interface area 1870 may, for example, match (or correspond to) the outline of the second user.
  • the background removal switch control 1820 may be turned on. That is, the first interface area 1860 may not include the first background image 1862 of the first video as shown in FIG. 19 or FIG. 20, and the second interface area 1870 may not include the second background image 1872 of the second video as shown in FIG. 19 or FIG. 20.
  • the outline of the first image area may match (or correspond to) the outline of the first user
  • the outline of the second image area may match (or correspond to) the outline of the second user.
  • that is, the first image area may not include the first background image 1862 of the first video as shown in FIG. 19 or FIG. 20, and the second image area may not include the second background image 1872 of the second video as shown in FIG. 19 or FIG. 20.
  • the electronic device may preferentially display the first user image 1861 or the second user image 1871 .
  • the first user image 1861 can be overlaid on the second user image 1871; in that case, the display priority of the first user image 1861 or the first interface area 1860 can be higher than the display priority of the second user image 1871 or the second interface area 1870.
  • alternatively, the second user image 1871 may be overlaid on the first user image 1861; in that case, the display priority of the second user image 1871 or the second interface area 1870 may be higher than the display priority of the first user image 1861 or the first interface area 1860.
  • the electronic device may display the first user image 1861 in front of the second user image 1871.
  • the outline of the first interface area 1860 may then be determined by the first user image 1861; the outline of the second interface area 1870 may be determined by the second user image 1871 and by the portion of the first user image 1861 that overlaps the second user image 1871.
  • the first interface area 1860 may display all the pixels of the first user image 1861
  • the second interface area 1870 may display only part of the pixels of the second user image 1871 .
  • the outline of the first image area may be determined by the first user image 1861
  • the outline of the second image area may be determined by the second user image 1871 and by the portion of the first user image 1861 that overlaps with the second user image 1871.
  • the first image area may include all pixel points corresponding to the first user image 1861
  • the second image area may include only part of the pixel points corresponding to the second user image 1871 .
  • the electronic device may instead display the second user image 1871 in front of the first user image 1861, as indicated by arrow 2340 in FIG. 24.
  • the outline of the second interface area 1870 may then be determined by the second user image 1871; the outline of the first interface area 1860 may be determined by the first user image 1861 and by the portion of the second user image 1871 that overlaps the first user image 1861.
  • the second interface area 1870 may display all the pixels of the second user image 1871
  • the first interface area 1860 may display only part of the pixels of the first user image 1861 .
  • the outline of the second image area may be determined by the second user image 1871; the outline of the first image area may be determined by the first user image 1861 and by the portion of the second user image 1871 that overlaps the first user image 1861.
  • the first image area may include only part of the pixels corresponding to the first user image 1861, and the second image area may include all pixels corresponding to the second user image 1871. A minimal sketch of this priority-based painting order follows.
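The display-priority rule above amounts to painting the two user images back to front, so that the image painted last keeps all of its pixels in the overlap region. A minimal sketch, assuming per-user segmentation masks as before:

```python
# A minimal sketch of the priority rule, assuming RGB numpy frames and
# per-user boolean segmentation masks (the segmentation step itself is not
# specified in this description). Painting order is the whole mechanism:
# whichever image is painted last keeps all of its pixels in the overlap.
import numpy as np

def render_with_priority(canvas: np.ndarray,
                         frame1: np.ndarray, mask1: np.ndarray,
                         frame2: np.ndarray, mask2: np.ndarray,
                         first_on_top: bool) -> np.ndarray:
    out = canvas.copy()
    if first_on_top:
        layers = [(frame2, mask2), (frame1, mask1)]  # paint user 2 first
    else:
        layers = [(frame1, mask1), (frame2, mask2)]  # paint user 1 first
    for frame, mask in layers:
        out[mask] = frame[mask]  # copy only the user's pixels onto the canvas
    return out
```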
  • the user can adjust the display size of the first user image 1861 and the second user image 1871, and then adjust the size ratio of the first user image and the second user image in the target video.
  • the display scale of the second user image 1871 on the user interface 2300 may be larger than the display scale of the first user image 1861 on the user interface 2300 .
  • the user may perform a zoom operation on the first interface area 1860 .
  • the electronic device may zoom the first interface area 1860. Since the outline of the first interface area 1860 matches the outline of the first user image 1861 , the display scale of the first user image 1861 on the user interface 2300 can be adjusted. Accordingly, the image scale of the first image region in the target video can be adjusted.
  • the user may perform a zoom operation on the second interface area 1870 .
  • the electronic device may zoom the second interface area 1870 . Since the outline of the second interface area 1870 matches the outline of the second user image 1871, the display scale of the second user image 1871 on the user interface 2300 may be adjusted. Accordingly, the image scale of the second image area in the target video can be adjusted.
  • the user may perform a zoom operation on the user interface 2300 .
  • the electronic device may adjust the display scale of the first interface area 1860 and the second interface area 1870 on the user interface 2300 .
  • the image ratios of the first image area and the second image area in the target video can be adjusted accordingly. That is, the electronic device may have the capability of resizing multiple user images at one time.
  • the display scale of the first interface area 1860 on the user interface may be the first display scale, and the display scale of the second interface area 1870 on the user interface may be the second display scale.
  • the display ratio of the interface area on the user interface can be understood as the ratio of the number of pixels in the interface area to the number of pixels in the user interface.
  • the first display scale and the second display scale may be the same or approximately the same.
  • the number of pixels included in the first interface region 1860 is the same as or not significantly different from the number of pixels included in the second interface region 1870 .
  • in this way, the display ratios of the multiple user images on the user interface do not differ much. This display method better suits scenes with many co-shooting users, since it reduces the work of adjusting each user image's display ratio one by one.
  • the ratio of the first user image 1861 to the first video may be a first image ratio (an image ratio here means, within a certain frame of the video, the ratio of the number of pixels of the user image to the total number of pixels of that frame), and the ratio of the second user image 1871 to the second video may be a second image ratio. The ratio of the first display ratio to the first image ratio may be a first ratio, and the ratio of the second display ratio to the second image ratio may be a second ratio (when the ratio of display ratio to image ratio is 1, the electronic device can be understood to display the user image at its original size in the video). After the electronic device adjusts the display ratios of the first interface area 1860 and the second interface area 1870 on the user interface, the first ratio and the second ratio may be the same. The worked example below runs through this bookkeeping.
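The following worked example runs through this bookkeeping; all numbers are invented for illustration and none come from the embodiments.

```python
# A worked example of the ratio bookkeeping described above.
#   image_ratio   = user-image pixels     / video-frame pixels
#   display_ratio = interface-area pixels / user-interface pixels
#   scale         = display_ratio / image_ratio   (1.0 == original size)
frame_pixels = 1080 * 1920          # pixels per video frame
ui_pixels    = 1080 * 2340          # pixels on the user interface

first_image_ratio  = 200_000 / frame_pixels   # first user: ~9.6% of the frame
second_image_ratio = 400_000 / frame_pixels   # second user: ~19.3% of the frame

# Equal-scale display: the device picks one scale for both users, so their
# on-screen proportions still differ exactly as they do in the source videos.
scale = 0.8
first_area_pixels  = scale * first_image_ratio  * ui_pixels
second_area_pixels = scale * second_image_ratio * ui_pixels

first_display_ratio  = first_area_pixels  / ui_pixels
second_display_ratio = second_area_pixels / ui_pixels
assert abs(first_display_ratio / first_image_ratio - scale) < 1e-12
assert abs(second_display_ratio / second_image_ratio - scale) < 1e-12
```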
  • in some cases, the proportion of the second user image 1871 in the second video is relatively large while the proportion of the first user image 1861 in the first video is relatively small; the display proportion of the second user image 1871 on the user interface 2300 may then be greater than that of the first user image 1861.
  • This display method can be relatively closer to the original video.
  • a user can thus resize his or her image on the user interface, and in the target video after the shot, by moving closer to or farther away from the camera.
  • the co-shot target video can essentially preserve the pixels of the original videos, which helps improve the clarity of the co-shot video.
  • the display ratio of a user image on the user interface 2300 during co-shooting may match the display ratio of that user image on the user interface 2300 during solo shooting. This helps users adapt more easily to the co-shooting method provided by the embodiments of the present application.
  • the user interface 2300 may also include a background interface area 1880 .
  • the pixel points in the background interface area 1880 may be, for example, default values (eg, gray, white, etc. by default).
  • the background interface area 1880 may display any one of the first background image 1862 , the second background image 1872 , and the target gallery image 2210 .
  • the target video may include a background image area.
  • the background image area may correspond to any one of the first background image 1862 , the second background image 1872 , and the target gallery image 2210 .
  • the user interface may include a first user background control 1910.
  • the first user background control 1910 may be used to instruct the electronic device to display the first background image 1862 within the background image area.
  • in response to the user's operation on the first user background control 1910, the electronic device may display the first background image 1862 in the background interface area 1880.
  • the background interface area 1880 may then include pixels corresponding to the first background image 1862.
  • in this case, the electronic device may synthesize the first user image 1861 in the first video, the first background image 1862 in the first video, and the second user image 1871 in the second video to obtain the target video; the first image area of the target video may correspond to the first user image 1861, the second image area of the target video may correspond to the second user image 1871, and the background image area of the target video may correspond to the first background image 1862.
  • the user interface may include a second user background control 1920.
  • the second user background control 1920 may be used to instruct the electronic device to display the second background image 1872 within the background image area.
  • in response to the user's operation on the second user background control 1920, the electronic device may display the second background image 1872 in the background interface area 1880.
  • the background interface area 1880 may then include pixels corresponding to the second background image 1872.
  • in this case, the electronic device may synthesize the first user image 1861 in the first video, the second user image 1871 in the second video, and the second background image 1872 in the second video to obtain the target video; the first image area may correspond to the first user image 1861, the second image area may correspond to the second user image 1871, and the background image area may correspond to the second background image 1872.
  • the user interface may include a gallery background control 1930.
  • the gallery background control 1930 may be used to instruct the electronic device to obtain the target gallery image 2210 from the gallery; in response to the user's operation on the gallery background control 1930, the electronic device may display the target gallery image 2210 in the background interface area 1880.
  • the background interface area 1880 may then include pixels corresponding to the target gallery image 2210.
  • the target gallery image 2210 may be a subframe of a video.
  • in this case, the electronic device may synthesize the first user image 1861 in the first video, the second user image 1871 in the second video, and the target gallery image 2210 to obtain the target video; the first image area may correspond to the first user image 1861, the second image area may correspond to the second user image 1871, and the background image area may correspond to the target gallery image 2210.
  • the user interface may display the video call hangup control 1850, as shown in FIG. 19 .
  • the electronic device may hang up the video call between the first user and the second user. In this way, the first user can realize a video call and remote co-production with the second user through the Changlian application.
  • the electronic device can retrieve the target video, so that the user can watch the target video.
  • the electronic device may perform post-adjustment on the target video: for example, it may adjust the playback speed of the first image area or the second image area, beautify the first image area or the second image area, or resize the first image area or the second image area. A minimal sketch of one such adjustment follows.
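One of these adjustments, changing playback speed, can be illustrated by resampling frame indices. This is a minimal sketch with an invented helper; the embodiments do not prescribe how post-adjustment is implemented.

```python
# Change the playback speed of a clip by resampling its frame indices
# (speed > 1.0 plays faster, speed < 1.0 plays slower).
def resample_indices(num_frames: int, speed: float) -> list[int]:
    """Return the source-frame index to show for each output frame."""
    out, t = [], 0.0
    while round(t) < num_frames:
        out.append(int(round(t)))
        t += speed
    return out

# Example: a 10-frame region played at 2x keeps every other frame.
assert resample_indices(10, 2.0) == [0, 2, 4, 6, 8]
```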
  • FIG. 28 shows a relationship diagram of an application module provided by an embodiment of the present application.
  • the camera application shown in FIG. 28 may correspond to the camera application shown in FIG. 2 , for example.
  • the Changlian (MeeTime) application shown in FIG. 28 may correspond to the Changlian application shown in FIG. 2.
  • the gallery application shown in FIG. 28 may correspond to the gallery application shown in FIG. 2 .
  • the camera application may include a shooting module.
  • on the first electronic device, the shooting module may be used to shoot the scene where the first user is located to obtain the first video.
  • on the second electronic device, the shooting module may be used to shoot the scene where the second user is located to obtain the second video.
  • the Changlian application may include a video call module.
  • the video call module may be configured to send the first video to a second electronic device, and obtain the second video from the second electronic device, the second electronic device being the electronic device of the second user.
  • the video calling module may be configured to send the second video to the first electronic device, and obtain the first video from the first electronic device, where the first electronic device is the electronic device of the first user.
  • the Changlian application may also include a synthesis module.
  • the synthesis module can synthesize the target video from the first video and the second video.
  • the gallery application may include a multimedia module.
  • the multimedia module can retrieve the target video and perform post-processing on it. A sketch of how these modules relate follows.
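The module split described above can be summarized with plain classes standing in for the shooting, video call, synthesis, and multimedia modules. Every interface below is an assumption for illustration; the actual applications are the camera, Changlian, and gallery applications shown in FIG. 28.

```python
# Plain classes standing in for the FIG. 28 modules; all interfaces invented.
class ShootingModule:              # inside the camera application
    def shoot(self) -> str:
        return "local_video"       # video of the scene where this user is

class VideoCallModule:             # inside the Changlian application
    def exchange(self, local: str) -> str:
        return "remote_video"      # send local video, receive the peer's video

class SynthesisModule:             # inside the Changlian application
    def synthesize(self, local: str, remote: str) -> tuple:
        return ("target_video", local, remote)

class MultimediaModule:            # inside the gallery application
    def post_process(self, target: tuple) -> tuple:
        return target              # retrieve the target video and edit it

local = ShootingModule().shoot()
remote = VideoCallModule().exchange(local)
target = SynthesisModule().synthesize(local, remote)
final = MultimediaModule().post_process(target)
```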
  • the solutions provided by the embodiments of the present application can implement a video call through the Changlian application and can invoke the camera application of the electronic device through the Changlian application to complete the shooting, thereby achieving synchronized remote co-shooting.
  • because a video call facilitates communication and exchange among multiple users, it helps improve how well the users' co-shot performances match.
  • a co-shot picture or co-shot video with a relatively good co-shot effect can be obtained, which helps reduce the user's workload during the co-shooting process, such as the workload of post-production retouching.
  • when the quality of the call is relatively good (for example, when the electronic device's signal is good), the clarity of the co-shot image or co-shot video can be relatively high.
  • User A and User B are in different places and it is difficult to meet immediately.
  • User A and User B intend to make a video for a body movement (such as dance, finger movement, gymnastics, etc.).
  • the requirements of user A and user B for the co-shot video may include, for example: the speed of the body movements is generally consistent; when user A and user B perform the movements, they start and end at about the same time; and, in terms of size, depth of field, and the like, the image of user A in the co-shot video is substantially consistent with the image of user B in the co-shot video.
  • user A can invite user B to make a video call and take a photo remotely through electronic device A.
  • User B can use electronic device B to connect to the video call and remote co-production invitation initiated by user A.
  • user A and user B can communicate the details of co-production through a video call connection.
  • user A and user B can communicate dance movements.
  • user A and/or user B can adjust the distance from the camera of the electronic device.
  • user A and/or user B can adjust the display effect of user A and/or user B on the user interface through one or more of the beauty switch controls, filter switch controls, and beautification switch controls in the user interface .
  • user A and/or user B may act on the user interface of the electronic device to adjust the display scale of user A and/or user B on the user interface.
  • user A and user B can communicate with each other to confirm whether to enable the split screen switch control. If the split screen switch control is on, user A and user B can also negotiate the relative positions of interface area A (where user A is shown) and interface area B (where user B is shown): for example, the two areas may be placed above and below the user interface, or to its left and right. If the split screen switch control is off, user A and user B can also communicate to confirm whether interface area A should cover interface area B, or interface area B should cover interface area A.
  • user A and user B can also confirm through communication whether to enable the background removal switch control. If the background removal switch control is turned on, user A and user B can further negotiate the specific background used for the co-shot video, such as the background image corresponding to the scene where user A is located, the background image corresponding to the scene where user B is located, or a gallery image retrieved from the gallery application.
  • user A can operate the recording control on electronic device A, or user B can operate the recording control on electronic device B to start recording the co-shot video. Then, user A and user B can complete body movements according to the previously communicated way.
  • user A can operate the recording controls on the electronic device A, or user B can operate the recording controls on the electronic device B to end recording the co-shot video.
  • Electronic device A and/or electronic device B may store the co-shot video.
  • User A and user B can communicate the co-production video through a video call connection to confirm whether a new co-production video needs to be shot. If necessary, user A and user B can co-shoot a new video in the manner described above. If not required, user A and user B can choose to hang up the video call connection.
  • before starting to shoot a co-shot video, users can communicate the details of the co-shot via the video call connection. After the co-shooting is completed, the electronic device can synthesize the co-shot video from the data obtained during the video call.
  • on the one hand, this helps improve how well the users' co-shot performances match and reduces the amount of processing the user must do on the co-shot video; on the other hand, it helps improve the clarity of the co-shot video. It is therefore beneficial to the user experience of multi-user co-shooting across different places.
  • scenarios similar to scenario 1 may include, for example, that user A and user B co-shoot a video during the live broadcast.
  • user A and user B can have a video call, and user A or user B can broadcast the video call process live.
  • User A and user B can communicate the details of the co-produced video during live broadcast (ie, a video call), and complete the co-produced video through the method provided in the embodiment of the present application.
  • scenarios similar to scenario 1 may also arise when, for example, it is difficult for user A and user B to meet during an epidemic but they plan to make a video together.
  • User A and user B can communicate the details of the co-production video through a video call, and complete the co-production video through the method provided by the embodiment of the present application.
  • the background image of the co-shot video can be replaced, so that even if user A and user B are in different places, a co-shot video with the same background can be obtained quickly; viewers of the co-shot video can then believe, or approximately believe, that user A and user B are in the same scene.
  • scenarios similar to scenario 1 may include, for example, user A may be a fitness student, user B may be a fitness coach, and user A and user B may co-produce video to improve the teaching quality of cloud fitness.
  • User A and user B can communicate the body movements during the co-production process through a video call, and complete the co-production video through the method provided by the embodiment of the present application. Afterwards, user B (that is, the fitness coach) can comment on the co-production video whether the fitness action of user A (that is, the fitness student) is standardized.
  • User C and user D are in different places and it is difficult to meet immediately. User C and user D are about to start a video conference, and are ready to record the video conference by co-producing a video.
  • the requirements of user C and user D for the co-shot video may include, for example, that user C and user D appear to be in the same meeting place, that user C and user D communicate naturally, and so on.
  • user C can invite user D to make a video call and take a photo remotely through electronic device C.
  • User D can use electronic device D to connect to the video call and remote co-production invitation initiated by user C.
  • user C and user D can communicate the details of co-production through a video call connection.
  • the conference background may be, for example, a background image corresponding to the scene where user C is located, a background image corresponding to the scene where user D is located, or a gallery image retrieved from a gallery application.
  • user C and/or user D can adjust the distance from the camera of the electronic device.
  • user C and/or user D can adjust the display effect of user C and/or user D on the user interface through one or more of the beauty switch controls, filter switch controls, and beautification switch controls in the user interface. .
  • user C and/or user D may act on the user interface of the electronic device to adjust the display scale of user C and/or user D on the user interface.
  • user C and user D can communicate whether to enable the split screen switch control. If the split screen switch control is turned on, user C and user D can also negotiate the relative positions of interface area C (where user C is shown) and interface area D (where user D is shown): for example, the two areas may be placed above and below the user interface, or to its left and right. If the split screen switch control is off, user C and user D can also communicate to confirm whether interface area C should cover interface area D, or interface area D should cover interface area C.
  • user C can operate the recording control on electronic device C, or user D can operate the recording control on electronic device D to start recording the co-shot video. Then, user C and user D can conduct conference communication. After the conference ends, user C can operate the recording control on electronic device C, or user D can operate the recording control on electronic device D to end recording the co-shot video.
  • Electronic device C and/or electronic device D may store the co-shot video.
  • the co-production video may be a video-type meeting minutes.
  • User C and user D can communicate the co-production video through a video call connection to confirm whether a new round of meeting minutes needs to be recorded. If necessary, user C and user D can co-shoot a new video in the manner described above. If not required, user C and user D can choose to hang up the video call connection.
  • before starting to shoot a co-shot video, users can communicate the details of the co-shot via the video call connection, and after the co-shooting is completed, the electronic device can synthesize the co-shot video from the data obtained during the video call. On the one hand, this helps make a remote video conference approach an on-site meeting; on the other hand, it helps improve the clarity of the co-shot video. It is therefore beneficial to the user experience of multi-user co-shooting across different places.
  • scenarios similar to scenario 2 may include, for example, that user C is a teacher and user D is a student: user C can provide online courses, and user D can acquire knowledge through them.
  • Teaching videos can be obtained through the co-production method provided by the embodiments of the present application, which is beneficial to improve the teaching quality of online courses.
  • user C and user D can communicate the specific details of the co-shot video through a video call, such as the postures of user C and user D; the background image of the co-shot video can correspond to a classroom scene, and the slides presented by user C can be shown on a blackboard, whiteboard, or projection screen in that classroom scene.
  • the user C and the user D can start teaching, and obtain the co-production video through the method provided in the embodiment of the present application.
  • User C can continue to use the co-produced video as a teaching video template.
  • User D can watch the co-production video repeatedly to review the knowledge.
  • This embodiment of the present application further provides a method 2900 for co-shooting, and the method 2900 may be implemented in an electronic device (eg, a mobile phone, a tablet computer, etc.) as shown in FIG. 1 and FIG. 2 .
  • the method 2900 may include the following steps:
  • the first electronic device establishes a video call connection between the first electronic device and a second electronic device, where the first electronic device is the electronic device of the first user, and the second electronic device is the electronic device of the second user .
  • the second electronic device establishes a video call connection between the first electronic device and the second electronic device.
  • the first electronic device may initiate a video call connection in a shooting application.
  • the first electronic device may initiate a video call connection in a video call application.
  • the co-shooting method further includes: displaying, by the first electronic device, a first interface of a shooting application, where the first interface includes a co-shooting control;
  • the first electronic device displays a second interface in response to an operation acting on the co-shooting control, where
  • the second interface includes a plurality of user controls corresponding one-to-one to a plurality of users.
  • the plurality of users includes the second user;
  • the first electronic device sends a co-shooting invitation to the second electronic device in response to an operation acting on the user control of the second user, so as to establish the video call connection.
  • the user control of the second user may be, for example, the control 410 and the control 411 shown in FIG. 4 .
  • the co-shooting method further includes: displaying, by the first electronic device, a third interface of a video call application.
  • the third interface includes a plurality of video call controls corresponding to a plurality of users one-to-one, the plurality of users include the second user; the first electronic device responds to the video call acting on the second user The operation of the control sends a video call invitation to the second electronic device to establish the video call connection.
  • the controls of the second user may be, for example, the controls 1411, 1412, 1430, 1440, etc. shown in FIG. 15.
  • the first electronic device acquires first video data of the first user.
  • the first electronic device may obtain the video of the first user captured during the video call by using a shooting application.
  • the first electronic device may obtain a video of the first user captured during the video call through a video call application.
  • the first electronic device acquires the second video data of the second user from the second electronic device through the video call connection.
  • the second electronic device sends the second video data of the second user to the first electronic device through the video call connection.
  • the first electronic device may obtain the video of the second user captured by the second electronic device from the second electronic device through a shooting application during the video call.
  • the first electronic device may obtain the video of the second user captured by the second electronic device from the second electronic device through a video call application.
  • the first electronic device acquires, according to the first video data and the second video data, a co-shot file of the first user and the second user.
  • the first electronic device may use a shooting application to synthesize the videos of the two users captured during the video call.
  • alternatively, the first electronic device may use a video call application to synthesize the videos of the two users captured during the video call; the sketch below walks through the overall flow.
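Putting the steps together, the flow of method 2900 from the first electronic device's point of view might look like the following sketch. The connection object and its methods are assumptions used to make the four steps concrete; the description does not prescribe this API.

```python
# The `call_app`, `camera`, and `synthesizer` objects and their methods are
# assumptions used to make the four steps of method 2900 concrete.
def method_2900(call_app, camera, synthesizer, peer="second_device"):
    conn = call_app.connect(peer)                  # step 1: video call connection
    first_video = camera.record_during(conn)       # step 2: first user's video data
    second_video = conn.receive_video()            # step 3: second user's video data
    return synthesizer.merge(first_video,          # step 4: co-shot file from
                             second_video)         #         both video streams
```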
  • the co-shooting method further includes: the first electronic device displays a first interface area and a second interface area on a fourth interface according to the first video data and the second video data, where the first interface area includes a first user image, the second interface area includes a second user image, the first user image includes pixels corresponding to the first user, and the second user image includes pixels corresponding to the second user.
  • the first electronic device may display a fourth interface.
  • the fourth interface may be, for example, the interfaces shown in FIGS. 5 to 13 and FIGS. 18 to 27 .
  • the fourth interface includes a split screen switch control and a background removal switch control; when the split screen switch control is in an on state and the background removal switch control is in an on state, the first interface area further includes a second background image or a target gallery image, and/or the second interface area further includes a first background image or a target gallery image, where the first background image includes pixels corresponding to the scene where the first user is located and the second background image includes pixels corresponding to the scene where the second user is located.
  • the second interface area further includes a first background image
  • the fourth interface may be, for example, the interface shown in FIG. 7 .
  • the first interface area further includes a second background image
  • the fourth interface may be, for example, the interface shown in FIG. 8 .
  • the first interface area further includes a target gallery image
  • the second interface area further includes a target gallery image
  • the fourth interface may be, for example, the interface shown in FIG. 9 .
  • the fourth interface includes a split screen switch control and a background removal switch control; when the split screen switch control is in an off state and the background removal switch control is in an on state, the fourth interface includes a background interface area, the background interface area is the background of the first interface area and the second interface area, and the background interface area includes any one of the following: a first background image, a second background image, or a target gallery image, where the first background image includes pixels corresponding to the scene where the first user is located and the second background image includes pixels corresponding to the scene where the second user is located.
  • the background interface area includes a first background image
  • the fourth interface may be, for example, the interface shown in FIG. 11 .
  • the background interface area includes a second background image
  • the fourth interface may be, for example, the interface shown in FIG. 12 .
  • the background interface area includes a target gallery image
  • the fourth interface may be, for example, the interface shown in FIG. 13 .
  • the co-shooting method further includes: the first electronic device, in response to an operation acting on the fourth interface, adjusts the size of the first interface area and/or the second interface area.
  • the co-shooting method further includes: adjusting, by the first electronic device, a display priority of the first interface area or the second interface area in response to an operation acting on the fourth interface.
  • for example, the priority of the second interface area may be set higher than the priority of the first interface area, and the second interface area may then be overlaid on the first interface area.
  • the fourth interface further includes a recording control, and the acquiring, according to the first video data and the second video data, the co-production files of the first user and the second user, including: The first electronic device acquires the co-shot file according to the first video data and the second video data in response to an operation acting on the recording control.
  • the first electronic device may operate the recording control in the shooting application.
  • the first electronic device may operate the recording control in the video call application.
  • the co-shot file includes a first image area and a second image area, the first image area includes pixels corresponding to the first user, and the second image area includes pixels corresponding to the second user .
  • the first image area includes pixels corresponding to any one of the following: a first background image, a second background image, and a target gallery image.
  • the second image area includes pixels corresponding to any one of the following: a first background image, a second background image, and a target gallery image.
  • the co-shot file further includes a background image area, the background image area is the background of the first image area and the second image area, and the background image area includes pixels corresponding to any of the following: Points: first background image, second background image, target gallery image.
  • the resolution of the co-shot file is higher than the display resolution of the first electronic device.
  • the co-shot file is a co-shot image or a co-shot video.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the electronic device can be divided into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 30 shows a possible schematic diagram of the composition of the electronic device 3000 involved in the above embodiment.
  • the electronic device 3000 may include a processing unit 3001 and an obtaining unit 3002.
  • the processing unit 3001 may be configured to establish a video call connection between the electronic device 3000 and the second electronic device.
  • the processing unit 3001 may also be configured to acquire the first video data of the first user during a video call.
  • the obtaining unit 3002 may obtain the second video data of the second user from the second electronic device through the video call connection.
  • the processing unit 3001 may also be configured to acquire a co-shot file of the first user and the second user according to the first video data and the second video data.
  • the electronic device may include a processing module, a memory module and a communication module.
  • the processing module may be used to control and manage the actions of the electronic device, for example, may be used to support the electronic device to perform the steps performed by the above units.
  • the storage module may be used to support the electronic device to execute stored program codes and data, and the like.
  • the communication module can be used to support the communication between the electronic device and other devices.
  • the processing module may be a processor or a controller. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • the processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of digital signal processing (DSP) and a microprocessor, and the like.
  • the storage module may be a memory.
  • the communication module may be a transceiver.
  • the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, and a Wi-Fi chip.
  • the electronic device involved in this embodiment may be a device having the structure shown in FIG. 1 .
  • This embodiment also provides a computer storage medium, where computer instructions are stored in the computer storage medium, and when the computer instructions are executed on the electronic device, the electronic device executes the above-mentioned relevant method steps to realize the co-production method in the above-mentioned embodiment.
  • This embodiment also provides a computer program product that, when run on a computer, causes the computer to execute the above-mentioned relevant steps, so as to realize the co-shooting method in the above embodiments.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module; the apparatus may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions, and when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the co-shooting method in the above method embodiments.
  • the electronic device, computer storage medium, computer program product or chip provided in this embodiment are all used to execute the corresponding method provided above. Therefore, for the beneficial effects that can be achieved, reference can be made to the corresponding provided above. The beneficial effects in the method will not be repeated here.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephone Function (AREA)

Abstract

The present application provides a co-shooting method and an electronic device. Through a video call, the videos of two users can be obtained, and a co-shot image or co-shot video of the two users can then be produced. During the video call, the two users can discuss the details of the co-shot. After the video call ends, the co-shot video can be generated quickly. The aims of the present application include reducing the amount of post-processing and improving the clarity of the composite picture or composite video, which in turn helps improve the user experience of multi-user co-shooting across different places.

Description

Co-shooting method and electronic device

This application claims priority to the Chinese patent application No. 202110181051.9, filed with the China Patent Office on February 9, 2021 and entitled "Co-shooting method and electronic device", and to the Chinese patent application No. 202110528138.9, filed with the China Patent Office on May 14, 2021 and entitled "Co-shooting method and electronic device", both of which are incorporated herein by reference in their entireties.

Technical Field

The present application relates to the field of electronic devices, and more specifically, to a co-shooting method and an electronic device.

Background

Multiple users in the same place can shoot together through a single camera device (an electronic device with a camera) to obtain a picture or video containing the appearances of those users. For remote multi-user co-shooting, however, a high-quality co-shot result is often hard to achieve. Possible reasons include, for example, the need for additional post-processing and the relatively poor clarity of the composite picture or composite video.
Summary

The present application provides a co-shooting method and an electronic device, with aims including reducing the amount of post-processing and improving the clarity of the composite picture or composite video, thereby helping improve the user experience of multi-user co-shooting across different places.

In a first aspect, a co-shooting method is provided, including:

establishing, by a first electronic device, a video call connection between the first electronic device and a second electronic device, where the first electronic device is the electronic device of a first user and the second electronic device is the electronic device of a second user;

acquiring, by the first electronic device, first video data of the first user during the video call;

acquiring, by the first electronic device, second video data of the second user from the second electronic device through the video call connection; and

acquiring, by the first electronic device, a co-shot file of the first user and the second user according to the first video data and the second video data.

Through a video call, the effect of synchronized remote co-shooting can be achieved. Multiple users can communicate with one another, which helps improve how well their co-shot performances match. After the video call is hung up, a co-shot picture or co-shot video with a relatively good result can be obtained, which helps reduce the user's workload during co-shooting, such as post-production retouching.

With reference to the first aspect, in some implementations of the first aspect, before the first electronic device establishes the video call connection between the first electronic device and the second electronic device, the co-shooting method further includes:

displaying, by the first electronic device, a first interface of a shooting application, where the first interface includes a co-shooting control;

displaying, by the first electronic device, a second interface in response to an operation acting on the co-shooting control, where the second interface includes a plurality of user controls corresponding one-to-one to a plurality of users, and the plurality of users includes the second user; and

sending, by the first electronic device, a co-shooting invitation to the second electronic device in response to an operation acting on the user control of the second user, so as to establish the video call connection.

The shooting application can have a built-in co-shooting control, and this control can invoke user controls from applications other than the shooting application, so that a co-shooting request can be initiated to other users. In addition, through the co-shooting control, multiple applications of the electronic device (including the shooting application) can run cooperatively to achieve co-shooting by multiple users. With reference to the first aspect, in some implementations of the first aspect, before the first electronic device establishes the video call connection between the first electronic device and the second electronic device, the co-shooting method further includes:

displaying, by the first electronic device, a third interface of a video call application, where the third interface includes a plurality of video call controls corresponding one-to-one to a plurality of users, and the plurality of users includes the second user; and

sending, by the first electronic device, a video call invitation to the second electronic device in response to an operation acting on the video call control of the second user, so as to establish the video call connection.

The video call application can run cooperatively with other applications to achieve co-shooting by multiple users. Thus, in addition to the video call function, the video call application can also have the function of generating co-shot files.

With reference to the first aspect, in some implementations of the first aspect, the co-shooting method further includes:

displaying, by the first electronic device according to the first video data and the second video data, a first interface area and a second interface area on a fourth interface, where the first interface area includes a first user image, the second interface area includes a second user image, the first user image includes pixels corresponding to the first user, and the second user image includes pixels corresponding to the second user.

Before acquiring the co-shot file, the first electronic device displays the images of the multiple users, so the approximate effect of the co-shot can be previewed and the user can anticipate roughly what the co-shot file will look like, which helps make the co-shot video more likely to meet the user's expectations.

With reference to the first aspect, in some implementations of the first aspect, the fourth interface includes a split screen switch control and a background removal switch control, and when the split screen switch control is in an on state and the background removal switch control is in an on state,

the first interface area further includes a second background image or a target gallery image, and/or,

the second interface area further includes a first background image or a target gallery image,

where the first background image includes pixels corresponding to the scene where the first user is located, and the second background image includes pixels corresponding to the scene where the second user is located.

When the background removal switch control is on, the background of at least one of the first interface area and the second interface area is removed, so the two areas can use the same background. The first user's image and the second user's image can then be regarded as being in the same background or the same scene. When the split screen switch control is on, the first user's image and the second user's image are assigned to different regions of the user interface, which better suits scenes where user images need to be distinguished relatively clearly, for example scenes in which mixing the images of multiple users is inappropriate because the users have different roles.

With reference to the first aspect, in some implementations of the first aspect, the fourth interface includes a split screen switch control and a background removal switch control, and when the split screen switch control is in an off state and the background removal switch control is in an on state, the fourth interface includes a background interface area, the background interface area is the background of the first interface area and the second interface area, and the background interface area includes any one of the following: a first background image, a second background image, or a target gallery image, where the first background image includes pixels corresponding to the scene where the first user is located, and the second background image includes pixels corresponding to the scene where the second user is located.

When the background removal switch control is on, the background of at least one of the first interface area and the second interface area is removed, so the two areas can use the same background, and the first user's image and the second user's image can be regarded as being in the same background or the same scene. When the split screen switch control is off, the first user's image and the second user's image can cross and cover each other, which helps further blend the two images into one. This better suits scenes that do not require clearly separated user images, such as group co-shooting.

With reference to the first aspect, in some implementations of the first aspect, the co-shooting method further includes:

adjusting, by the first electronic device in response to an operation acting on the fourth interface, the size of the first interface area and/or the second interface area.

Before the co-shot file is acquired, the user can adjust, through operations, the proportions of the first user's image and the second user's image on the display interface, thereby indirectly adjusting the effect of the co-shot file.

With reference to the first aspect, in some implementations of the first aspect, the co-shooting method further includes:

adjusting, by the first electronic device in response to an operation acting on the fourth interface, the display priority of the first interface area or the second interface area.

When the first interface area and the second interface area cross each other, the user can set the display priority so that the first interface area covers the second interface area, or the second interface area covers the first interface area, thereby indirectly adjusting the effect of the co-shot file.

With reference to the first aspect, in some implementations of the first aspect, the fourth interface further includes a recording control, and the acquiring, according to the first video data and the second video data, the co-shot file of the first user and the second user includes:

acquiring, by the first electronic device in response to an operation acting on the recording control, the co-shot file according to the first video data and the second video data.

The recording control can indicate the moment at which co-shooting starts, so during the video call the first electronic device need not generate the co-shot file continuously from the start of the call to its end.

With reference to the first aspect, in some implementations of the first aspect, the co-shot file includes a first image area and a second image area, the first image area includes pixels corresponding to the first user, and the second image area includes pixels corresponding to the second user.

The co-shot file can include the images of multiple users, thereby achieving co-shooting by multiple users.

With reference to the first aspect, in some implementations of the first aspect, the first image area includes pixels corresponding to any one of the following: a first background image, a second background image, or a target gallery image.

The first image area can use any of several background images, so the first user's background can be selected relatively flexibly.

With reference to the first aspect, in some implementations of the first aspect, the second image area includes pixels corresponding to any one of the following: a first background image, a second background image, or a target gallery image.

The second image area can use any of several background images, so the second user's background can be selected relatively flexibly.

With reference to the first aspect, in some implementations of the first aspect, the co-shot file further includes a background image area, the background image area is the background of the first image area and the second image area, and the background image area includes pixels corresponding to any one of the following: a first background image, a second background image, or a target gallery image.

The background image area can serve as a common background for multiple users, so that in the co-shot file the users can be regarded as being in the same scene. The background image area can use any of several background images, so the background of a multi-user co-shot can be selected relatively flexibly.

With reference to the first aspect, in some implementations of the first aspect, the resolution of the co-shot file is higher than the display resolution of the first electronic device.

When the call quality is relatively good (for example, when the electronic device has a good signal), the clarity of the co-shot image or co-shot video can be relatively high, which helps improve the quality of the co-shot file.

With reference to the first aspect, in some implementations of the first aspect, the co-shot file is a co-shot image or a co-shot video.
In a second aspect, an electronic device is provided, including:

a processor, a memory, and a transceiver, where the memory is configured to store a computer program and the processor is configured to execute the computer program stored in the memory; where

the processor is configured to establish a video call connection between the electronic device and a second electronic device, where the electronic device is the electronic device of a first user and the second electronic device is the electronic device of a second user;

the processor is further configured to acquire first video data of the first user during the video call;

the transceiver is configured to acquire second video data of the second user from the second electronic device through the video call connection; and

the processor is further configured to acquire a co-shot file of the first user and the second user according to the first video data and the second video data.

With reference to the second aspect, in some implementations of the second aspect, before the processor establishes the video call connection between the electronic device and the second electronic device, the processor is further configured to: display a first interface of a shooting application, where the first interface includes a co-shooting control; display a second interface in response to an operation acting on the co-shooting control, where the second interface includes a plurality of user controls corresponding one-to-one to a plurality of users, and the plurality of users includes the second user; and send a co-shooting invitation to the second electronic device in response to an operation acting on the user control of the second user, so as to establish the video call connection.

With reference to the second aspect, in some implementations of the second aspect, before the processor establishes the video call connection between the electronic device and the second electronic device, the processor is further configured to: display a third interface of a video call application, where the third interface includes a plurality of video call controls corresponding one-to-one to a plurality of users, and the plurality of users includes the second user; and send a video call invitation to the second electronic device in response to an operation acting on the video call control of the second user, so as to establish the video call connection.

With reference to the second aspect, in some implementations of the second aspect, the processor is further configured to: display, according to the first video data and the second video data, a first interface area and a second interface area on a fourth interface, where the first interface area includes a first user image, the second interface area includes a second user image, the first user image includes pixels corresponding to the first user, and the second user image includes pixels corresponding to the second user.

With reference to the second aspect, in some implementations of the second aspect, the fourth interface includes a split screen switch control and a background removal switch control, and when the split screen switch control is in an on state and the background removal switch control is in an on state, the first interface area further includes a second background image or a target gallery image, and/or the second interface area further includes a first background image or a target gallery image, where the first background image includes pixels corresponding to the scene where the first user is located, and the second background image includes pixels corresponding to the scene where the second user is located.

With reference to the second aspect, in some implementations of the second aspect, the fourth interface includes a split screen switch control and a background removal switch control, and when the split screen switch control is in an off state and the background removal switch control is in an on state, the fourth interface includes a background interface area, the background interface area is the background of the first interface area and the second interface area, and the background interface area includes any one of the following: a first background image, a second background image, or a target gallery image, where the first background image includes pixels corresponding to the scene where the first user is located, and the second background image includes pixels corresponding to the scene where the second user is located.

With reference to the second aspect, in some implementations of the second aspect, the processor is further configured to adjust the size of the first interface area and/or the second interface area in response to an operation acting on the fourth interface.

With reference to the second aspect, in some implementations of the second aspect, the processor is further configured to adjust the display priority of the first interface area or the second interface area in response to an operation acting on the fourth interface.

With reference to the second aspect, in some implementations of the second aspect, the fourth interface further includes a recording control, and the processor is specifically configured to acquire the co-shot file according to the first video data and the second video data in response to an operation acting on the recording control.

With reference to the second aspect, in some implementations of the second aspect, the co-shot file includes a first image area and a second image area, the first image area includes pixels corresponding to the first user, and the second image area includes pixels corresponding to the second user.

With reference to the second aspect, in some implementations of the second aspect, the first image area includes pixels corresponding to any one of the following: a first background image, a second background image, or a target gallery image.

With reference to the second aspect, in some implementations of the second aspect, the second image area includes pixels corresponding to any one of the following: a first background image, a second background image, or a target gallery image.

With reference to the second aspect, in some implementations of the second aspect, the co-shot file further includes a background image area, the background image area is the background of the first image area and the second image area, and the background image area includes pixels corresponding to any one of the following: a first background image, a second background image, or a target gallery image.

With reference to the second aspect, in some implementations of the second aspect, the resolution of the co-shot file is higher than the display resolution of the electronic device.

With reference to the second aspect, in some implementations of the second aspect, the co-shot file is a co-shot image or a co-shot video.

In a third aspect, a computer storage medium is provided, including computer instructions that, when run on an electronic device, cause the electronic device to execute the co-shooting method described in any possible implementation of the first aspect.

In a fourth aspect, a computer program product is provided that, when run on a computer, causes the computer to execute the co-shooting method described in any possible implementation of the first aspect.
Brief Description of Drawings

FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

FIG. 2 is a block diagram of the software structure of an electronic device according to an embodiment of the present application.

FIG. 3 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 4 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 5 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 6 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 7 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 8 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 9 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 10 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 11 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 12 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 13 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 14 is a relationship diagram of application modules according to an embodiment of the present application.

FIG. 15 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 16 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 17 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 18 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 19 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 20 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 21 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 22 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 23 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 24 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 25 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 26 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 27 is a schematic diagram of a user interface according to an embodiment of the present application.

FIG. 28 is a relationship diagram of application modules according to an embodiment of the present application.

FIG. 29 is a schematic flowchart of a co-shooting method according to an embodiment of the present application.

FIG. 30 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description

The technical solutions of the present application are described below with reference to the accompanying drawings.

The terms used in the following embodiments are only for the purpose of describing particular embodiments and are not intended to limit the present application. As used in the specification and the appended claims, the singular forms "a", "an", "the", "the foregoing", "this", and "the one" are intended to also include forms such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.

Reference to "an embodiment", "some embodiments", or the like in this specification means that one or more embodiments of the present application include a particular feature, structure, or characteristic described with reference to that embodiment. Thus, statements such as "in an embodiment", "in some embodiments", "in some other embodiments", and "in still other embodiments" appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.

The following describes the electronic device provided by the embodiments of the present application, user interfaces for such an electronic device, and embodiments for using such an electronic device. In some embodiments, the electronic device may be a portable electronic device that also includes other functions such as personal digital assistant and/or music player functions, such as a mobile phone, a tablet computer, or a wearable electronic device with wireless communication capability (such as a smart watch). Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running
Figure PCTCN2022072235-appb-000001
or other operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that in some other embodiments, the electronic device may not be a portable electronic device but a desktop computer.
By way of example, FIG. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, buttons 190, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.

It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, in software, or in a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components or may be integrated into one or more processors. In some embodiments, the electronic device 101 may also include one or more processors 110. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and execution. In some other embodiments, a memory may also be provided in the processor 110 for storing instructions and data. For example, the memory in the processor 110 may be a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency with which the electronic device 101 processes data or executes instructions.

In some embodiments, the processor 110 may include one or more interfaces, for example an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the electronic device 101, to transfer data between the electronic device 101 and peripheral devices, or to connect headphones to play audio through them.

It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may adopt interface connection manners different from those in the above embodiment, or a combination of multiple interface connection manners.

The charging management module 140 is configured to receive charging input from a charger, which may be a wireless or wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.

The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110; in still other embodiments, the power management module 141 and the charging management module 140 may be provided in the same device.

The wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.

The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single communication band or multiple communication bands, and different antennas can be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna for a wireless local area network. In some other embodiments, an antenna can be used in combination with a tuning switch.

The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. It can also amplify a signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the same device as at least some modules of the processor 110.

The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. It can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.

The electronic device 100 implements display functions through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

The display screen 194 is used to display images, videos, and the like, and includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include one or more display screens 194.

The display screen 194 of the electronic device 100 may be a flexible screen, which currently attracts much attention for its unique characteristics and great potential. Compared with a traditional screen, a flexible screen is highly flexible and bendable, and can provide the user with new interaction methods based on its bendable characteristics, meeting more of the user's needs for an electronic device. For an electronic device equipped with a foldable display, the foldable display can switch at any time between a small screen in the folded state and a large screen in the unfolded state. Users therefore use the split-screen function more and more frequently on electronic devices equipped with foldable displays.

The electronic device 100 can implement shooting functions through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.

The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the image's noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.

The camera 193 is used to capture still images or videos. The optical image of an object is generated through the lens and projected onto the photosensitive element, which may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include one or more cameras 193.

The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor performs a Fourier transform or the like on the frequency point energy.

The video codec is used to compress or decompress digital video. The electronic device 100 can support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.

The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.

The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage, for example saving files such as music and videos on the external memory card.

The internal memory 121 may be used to store one or more computer programs, which include instructions. By running the instructions stored in the internal memory 121, the processor 110 can cause the electronic device 101 to execute the off-screen display method provided in some embodiments of the present application, as well as various applications and data processing. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and one or more applications (such as Gallery and Contacts). The data storage area can store data created during use of the electronic device 101 (such as photos and contacts). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as one or more disk storage components, flash memory components, or universal flash storage (UFS). In some embodiments, the processor 110 can cause the electronic device 101 to execute the off-screen display method provided in the embodiments of the present application, as well as other applications and data processing, by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110. The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.

The buttons 190 include a power button, volume buttons, and the like. The buttons 190 may be mechanical buttons or touch buttons. The electronic device 100 can receive button input and generate key signal input related to user settings and function control of the electronic device 100.
FIG. 2 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with one another through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer. The application layer may include a series of application packages.

As shown in FIG. 2, the application packages may include applications such as Gallery, Camera, Changlian, Maps, and Navigation.

The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer, and includes some predefined functions.

As shown in FIG. 2, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and the like.

The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.

Content providers are used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.

The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications. A display interface may consist of one or more views; for example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.

The telephony manager is used to provide the communication functions of the electronic device 100, for example management of call states (including connected, hung up, and so on).

The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.

The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example notifications of completed downloads and message reminders. The notification manager can also present notifications in the status bar at the top of the system in the form of charts or scrolling text, such as notifications from applications running in the background, or present notifications on the screen in the form of dialog windows, for example prompting text in the status bar, sounding a prompt tone, vibrating the electronic device, or flashing an indicator light.

The Android runtime includes core libraries and a virtual machine, and is responsible for scheduling and management of the Android system.

The core libraries consist of two parts: function functions that the Java language needs to call, and the Android core libraries.

The application layer and the application framework layer run in the virtual machine, which executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.

The system libraries may include multiple functional modules, for example a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).

The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.

The media libraries support playback and recording of multiple common audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, for example MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.

The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like.

The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is the layer between hardware and software, and contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
本申请实施例提供的方案可以应用于多用户合拍的场景,尤其可以应用于远程多用户合拍场景。远程多用户合拍场景可以指,至少两个用户无法或很难在同一时间、通过同一摄像设备完成合拍。下面通过一些示例描述远程多用户合拍场景。
示例一
用户A可以通过带有摄像功能的电子设备A进行自拍,得到自拍视频A;用户B可以通过带有摄像功能的电子设备B进行自拍,得到自拍视频B。最后,通过将视频A与视频B合成,可以得到用户A与用户B的合拍视频。自拍视频A与自拍视频B可以通过异步拍摄的方式得到。
在这一示例中,用户A与用户B的合拍视频可能较难协同,也就是说,在自拍过程中,用户A与用户B的诸多拍摄因素可能难以匹配。例如,用户A与电子设备A的距离可能和用户B与电子设备B的距离相差较大,因此自拍视频A中用户A的轮廓大小与自拍视频B中的用户B的轮廓大小相差较大。又如,用户A与用户B做类似的动作,但是用户A的动作相对较快,幅度较大,用户B的动作相对较慢,幅度较小。为了使合拍视频达到相对较高的匹配度,用户需要对合拍视频进行工作量较大的后期处理。
示例一以拍摄视频为例进行说明。实际上,通过示例一所示的方法得到的合拍图片也存在类似的问题。
示例二
用户A可以通过带有摄像功能的电子设备A,与用户B视频通话,并通过录屏的方式获取既包含用户A又包含用户B的合拍视频。
然而,通过录屏得到的合拍视频的清晰度通常相对较差。合拍视频的最大分辨率通常取决于电子设备A的显示分辨率。
示例二以拍摄视频为例进行说明。实际上,通过示例二所示的方法得到的合拍图片也存在类似的问题。
本申请实施例提供一种新的合拍方法,目的包括减少用户对合拍文件(如合拍图像、合拍视频)的后期处理工作量,提升合拍文件的清晰度等,进而有利于提升异地多用户合拍的用户体验感。
图3是本申请实施例提供的一种用户界面300的示意图。该用户界面300可以显示在第一电子设备上。该用户界面300可以为相机应用的界面,或者其他具有拍摄功能的应用的界面。也就是说,第一电子设备上承载相机应用或其他具有拍摄功能的应用。响应第一用户作用在这些应用的操作,第一电子设备可以显示该用户界面300。
例如,第一用户可以通过点击相机应用的图标,打开相机应用,进而第一电子设备可以显示用户界面300。相机应用可以调用图1所示的摄像头193,拍摄第一电子设备周围的景象。例如,相机应用可以调用第一电子设备的前置摄像头,以拍摄第一用户的自拍图像,并在用户界面300上显示该自拍图像。
用户界面300可以包括多个功能控件310(该功能控件310可以以选项卡的形式呈现在用户界面300上),该多个功能控件310可以分别与相机应用的多个相机功能一一对应。如图3所示,该多个相机功能例如可以包括人像功能、拍照功能、录像功能、合拍功能、专业功能等,多个功能控件310可以包括人像功能控件、拍照功能控件、录像功能控件、合拍功能控件、专业功能控件。
第一电子设备可以响应第一用户作用在用户界面300上的操作(如滑动操作),将当前相机功能切换至用于完成合拍的功能,例如图3所示的合拍功能。应理解,在其他可能的示例中,相机应用可以包括其他相机功能以用于完成合拍。本申请实施例下面以合拍功能为例进行阐述。
在当前相机功能为合拍功能的情况下,用户界面300可以包括用户合拍控件320。用户合拍控件320可以用于选择或邀请第二用户,以完成第一用户与第二用户的同步合拍。第一电子设备可以响应第一用户作用在用户合拍控件320上的操作(如点击操作),显示如图4所示的用户界面400。
可选的,用户界面300还可以包括素材合拍控件330。素材合拍控件330可以用于从云端选择素材,以完成第一用户与素材的合拍。素材例如可以是照片、漫画、表情、贴纸、动画、视频等多种文件。
例如,第一用户可以通过第一电子设备自拍一段视频A。第一电子设备可以拍摄包含第一用户的视频A。第一电子设备可以根据视频A中第一用户的轮廓,裁剪视频A,得到用户子视频a,用户子视频a可以包含第一用户的图像,且不包含背景的图像。下面以视频A的一个子帧A为例进行详细说明。子帧A可以包括多个像素点A,该多个像素点A可以包括与第一用户的轮廓对应的多个像素点a。子帧A中位于该多个像素点a以内的多个像素点a’可以形成用户子视频a的一个子帧a’。第一电子设备可以合成用户子视频a与素材,得到新的视频A’,素材可以在视频A’中充当用户子视频a的背景。
又如,第一用户可以通过第一电子设备自拍一段视频B。第一电子设备可以拍摄包含第一用户的视频B。第一电子设备可以合成视频B与素材,得到新的视频B’,视频B可以在视频B’中充当素材的背景。
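针对上面第一个例子(按第一用户的轮廓裁剪视频A,并以素材充当背景),下面给出一个示意性的合成草图。该草图基于Python与NumPy,属于为便于理解而引入的假设性实现,并非本申请限定的具体方案;其中的轮廓掩膜user_mask假定由某种人像分割算法得到:

```python
import numpy as np

def composite_with_material(frame_a: np.ndarray,
                            user_mask: np.ndarray,
                            material: np.ndarray) -> np.ndarray:
    """按用户轮廓掩膜合成一帧:掩膜以内保留子帧A的用户像素,掩膜以外取素材像素。

    frame_a:   H x W x 3 的子帧A(uint8)
    user_mask: H x W 的布尔掩膜,True表示位于用户轮廓以内(即像素点a')
    material:  H x W x 3 的素材图像,在合成结果中充当背景
    """
    assert frame_a.shape == material.shape and user_mask.shape == frame_a.shape[:2]
    out = material.copy()
    out[user_mask] = frame_a[user_mask]  # 轮廓以内的像素点覆盖到素材之上
    return out

# 用法示例(构造一个假想的矩形掩膜,仅作演示):
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
mask = np.zeros((720, 1280), dtype=bool)
mask[200:500, 400:800] = True
bg = np.full((720, 1280, 3), 128, dtype=np.uint8)
new_frame = composite_with_material(frame, mask, bg)
```

对整段视频而言,可以把上述单帧合成逐帧应用于用户子视频a与素材的各个对应子帧。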
可选的,用户界面300还可以包括图库合拍控件340。图库合拍控件340可以用于从本地图库中选择多媒体数据(多媒体数据可以包括图片、视频),以完成第一用户与多媒体数据的合拍。
例如,第一用户可以通过第一电子设备自拍一段视频C。第一电子设备可以拍摄包含第一用户的视频C。第一电子设备可以根据视频C中第一用户的轮廓,裁剪视频C,得到用户子视频c,用户子视频c可以包含第一用户的图像,且不包含背景的图像。第一电子设备可以合成用户子视频c与多媒体数据,得到新的视频C’,多媒体数据可以在视频C’中充当用户子视频c的背景。
又如,第一用户可以通过第一电子设备自拍一段视频D。第一电子设备可以拍摄包含第一用户的视频D。第一电子设备可以合成视频D与多媒体数据,得到新的视频D’,视频D可以在视频D’中充当多媒体数据的背景。
可选的,用户界面300还可以包括图库控件350。响应第一用户作用在图库控件350的操作,第一电子设备可以跳转至图库应用,以查看已拍摄的多媒体数据。
如图4所示,用户界面400可以包括与多个用户一一对应的多个用户控件410。用户界面400可以包括第二用户的用户控件411。也就是说,该多个用户可以包括该第二用户。
如图4所示,用户界面400可以包括用户搜索控件420。响应第一用户作用在用户搜索控件420的操作(如点击操作),以及后续一系列操作(如文字输入、语音输入、扫描二维码等),第一电子设备可以获取第二用户的相关信息(如第二用户的部分或全部姓名、第二用户的姓名的首字母、第二用户的部分或全部视频通话号码等)。第一电子设备可以根据该第二用户的相关信息,从第一电子设备存储的多个用户记录中确定第二用户的用户记录,该多个用户记录可以与该多个用户一一对应。进而,第一电子设备可以快速在用户界面400上显示该第二用户的用户控件411。
第一用户可以通过第一电子设备,邀请第二用户进行视频通话。例如,响应第一用户作用在该第二用户的用户控件411的操作(如点击操作),第一电子设备可以向第二电子设备发起视频通话,其中,第二电子设备可以为第二用户使用的电子设备。相应地,第二用户可以通过第二电子设备接收到第一用户的视频通话邀请。第二电子设备可以显示视频通话邀请的界面,该界面可以包括视频通话接听控件。响应第二用户作用在视频通话接听控件上的操作,第一电子设备与第二电子设备之间可以建立视频通话连接。
可选的,为便于第一用户快速搜索第二用户,多个用户例如可以按照字母的顺序排列。
可选的,用户界面400可以包括常用用户控件412。在一个示例中,第一电子设备可以统计合拍次数最多的用户为用户A,并在用户界面400上显示常用用户控件A,该常用用户控件A可以为与用户A对应的控件。在另一个示例中,第一电子设备可以统计视频通话次数最多的用户为用户B,并在用户界面400上显示常用用户控件B,该常用用户控件B可以为与用户B对应的控件。
在第一电子设备与第二电子设备建立视频通话连接后,第一电子设备通过拍摄可以得到第一视频,第二电子设备通过拍摄可以得到第二视频;并且,第一电子设备可以通过该视频通话连接,获取该第二视频,第二电子设备可以通过视频通话连接,获取该第一视频。第一电子设备和/或第二电子设备可以显示如图5所示的用户界面500。
用户界面500可以包括第一界面区域560、第二界面区域570,第一界面区域560可以显示第一电子设备当前拍摄到的部分或全部图像,第二界面区域570可以显示第二电子设备当前拍摄到的部分或全部图像。第一界面区域560与第二界面区域570之间可以互不交叉。第一界面区域560、第二界面区域570可以位于用户界面500上的任意位置。如图5所示,第一界面区域560可以位于用户界面500的上方,第二界面区域570可以位于用户界面500的下方。也就是说,第一电子设备拍摄到的部分或全部图像与第二电子设备拍摄到的部分或全部图像可以同时显示在用户界面500上。
例如,如图5所示,在第一用户使用第一电子设备的前置摄像头自拍,且第二用户使用第二电子设备的前置摄像头自拍的情况下,第一界面区域560可以包括第一用户图像561,第二界面区域570可以包括第二用户图像571。也就是说,第一界面区域560可以包括与第一用户对应的像素点,第二界面区域570可以包括与第二用户对应的像素点。应理解,在其他示例中,第一电子设备和/或第二电子设备可以采用后置摄像头拍摄包含用户的图像。
用户界面500可以包括录制控件510。响应用户(如果用户界面500显示在第一电子设备上,则该用户为第一用户;如果用户界面500显示在第二电子设备上,则该用户为第二用户,以下类似情形不再详细阐述)作用在该录制控件510上的操作,电子设备(如果用户界面500显示在第一电子设备上,则该电子设备为第一电子设备;如果用户界面500显示在第二电子设备上,则该电子设备为第二电子设备,以下类似情形不再详细阐述)可以合成第一电子设备拍摄的第一视频以及第二电子设备拍摄的第二视频,得到目标视频。该目标视频包括第一图像区域、第二图像区域,第一图像区域与第一界面区域560对应,第二图像区域与第二界面区域570对应。
如上文中的示例二所述,电子设备可以通过录屏的方式得到既有用户A又有用户B的图像。然而录屏实际上是获取电子设备的显示数据,而非拍摄数据,显示数据的清晰度通常低于拍摄数据的清晰度。也就是说,该目标视频的清晰度可以高于电子设备的显示清晰度。
用户界面还可以包括用于调整合拍效果的多个控件。在合拍开始之前或在合拍的过程中,用户可以通过这些控件调整合拍效果。下面结合图5至图13,阐述本申请提供的一些可能的控件示例。
可选的,如图5所示,用户界面500可以包括美颜开关控件540。
在美颜开关控件540处于开启状态的情况下,电子设备可以针对第一用户图像561和/或第二用户图像571进行人像美化。也就是说,电子设备可以在用户界面500上显示经人像美化后的第一用户图像561和/或第二用户图像571;在合成后的目标视频中,第一图像区域内的用户图像和/或第二图像区域内的用户图像可以是经过美颜处理后的图像。
在美颜开关控件540处于关闭状态的情况下,电子设备可以不针对第一用户图像561和第二用户图像571进行人像美化。也就是说,电子设备可以根据第一用户的原始图像和第二用户的原始图像,在用户界面500上显示第一用户图像561和第二用户图像571,第一用户图像561、第二用户图像571可以是未经美颜处理的图像;在合成后的目标视频中,第一图像区域内的用户图像可以根据第一用户的原始图像得到,第二图像区域内的用户图像可以根据第二用户的原始图像得到,即第一图像区域内的用户图像、第二图像区域内的用户图像可以是未经美颜处理的图像。
可选的,如图5所示,用户界面500还可以包括滤镜开关控件550。
在滤镜开关控件550处于开启状态的情况下,电子设备可以针对第一视频的图像和/或第二视频的图像进行滤镜美化。也就是说,电子设备可以在用户界面500上显示经滤镜美化后的第一视频的图像和/或第二视频的图像;并且,在合成后的目标视频中,第一图像区域内的图像和/或第二图像区域内的图像可以是经过滤镜处理后的图像。
在滤镜开关控件550处于关闭状态的情况下,电子设备可以不针对第一视频的图像和第二视频的图像进行滤镜美化。也就是说,电子设备可以根据第一视频的原始图像和第二视频的原始图像,在用户界面500内显示未经滤镜处理的图像;在合成后的目标视频中,第一图像区域内的图像可以根据第一视频的原始图像得到,第二图像区域内的图像可以根据第二视频的原始图像得到,即目标视频可以不包括经过滤镜处理的图像。
可选的,如图5所示,用户界面500还可以包括背景去除开关控件520。在背景去除开关控件520处于关闭状态的情况下,电子设备可以不扣除第一视频的背景和第二视频的背景,即保留第一视频的背景和第二视频的背景。在背景去除开关控件520处于开启状态的情况下,电子设备例如可以扣除第一视频的背景和/或第二视频的背景。例如,电子设备可以扣除第一视频的背景,且保留第二视频的背景;又如,电子设备可以扣除第二视频的背景,且保留第一视频的背景;又如,电子设备可以扣除第一视频的背景和第二视频的背景。
下面以一个例子阐述用户图像(或用户像素点、用户图像块)与背景(或背景像素点、背景图像块)之间的关系。
例如,用户e可以通过电子设备e自拍一段视频。在电子设备e拍摄的视频E包含用户e的情况下,电子设备e可以根据视频E中用户e的轮廓,裁剪视频E,得到用户子视频和背景子视频。其中,用户子视频可以包含用户e的图像,且不包含背景图像;背景子视频可以包含背景图像,且不包含用户e的图像。
下面以视频E的一个子帧E为例进行详细说明。子帧E可以包括多个像素点E,该多个像素点E可以包括与用户e的轮廓对应的多个像素点e。子帧E中位于该多个像素点e以内的多个像素点e’可以形成用户子视频的一个子帧e’,且可以形成该用户e的图像;子帧E中位于该多个像素点e以外的多个像素点e”可以形成背景子视频的一个子帧e”,且可以形成该背景图像。
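作为示意,下述草图沿用上例的Python/NumPy假设,演示如何按轮廓掩膜把子帧E拆分为用户子帧e’与背景子帧e”(掩膜以外或以内的像素分别置零):

```python
import numpy as np

def split_frame(frame_e: np.ndarray, user_mask: np.ndarray):
    """把子帧E拆成用户子帧e'(仅含用户像素)与背景子帧e''(仅含背景像素)。

    frame_e:   H x W x 3 的子帧E
    user_mask: H x W 的布尔掩膜,True表示位于用户e的轮廓以内
    """
    user_sub = np.where(user_mask[..., None], frame_e, 0)        # 子帧e':轮廓以内
    background_sub = np.where(user_mask[..., None], 0, frame_e)  # 子帧e'':轮廓以外
    return user_sub, background_sub
```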
在图5所示的示例中,背景去除开关控件520当前处于关闭状态。
第一界面区域560可以显示有第一用户图像561和第一背景图像562。第一背景图像562可以为第一用户的背景图像。第一背景图像562可以通过拍摄第一用户所在场景得到。也就是说,第一界面区域560可以包括与第一用户对应的像素点,以及与第一用户所在场景对应的像素点。
类似地,第二界面区域570可以显示有第二用户图像571和第二背景图像572。第二背景图像572可以为第二用户的背景图像。第二背景图像572可以通过拍摄第二用户所在场景得到。也就是说,第二界面区域570可以包括与第二用户对应的像素点,以及与第二用户所在场景对应的像素点。
如图6所示,在背景去除开关控件520处于开启状态的情况下,电子设备可以显示用户界面600。用户界面600可以包括第一用户背景控件610、第二用户背景控件620、图库背景控件630。
第一用户背景控件610可以用于指示电子设备在第一界面区域560、第二界面区域570内均显示第一背景图像562。
如图7的用户界面700所示,第一界面区域560可以显示第一电子设备拍摄到的第一视频,该第一视频可以包括第一用户图像561以及第一背景图像562,第一背景图像562包含第一用户所在场景的图像信息。第二界面区域570显示有第二用户图像571以及该第一背景图像562,第二用户图像571可以是第二电子设备拍摄到的第二视频的一部分。第二界面区域570可以不显示如图5或图6所示的第二背景图像572。也就是说,第二界面区域570可以不包括与第二用户所在场景对应的像素点。
电子设备可以通过合成第一视频中的第一用户图像561、第一视频中的第一背景图像562、第二视频中的第二用户图像571,得到目标视频,目标视频的第一图像区域可以与第一用户图像561、第一背景图像562对应,目标视频的第二图像区域可以与第二用户图像571、第一背景图像562对应。
例如,电子设备可以通过上文提到的方式,获取第一视频、第二视频;电子设备可以根据第一视频,确定第一背景图像562;电子设备可以合成第一背景图像562与第二用户图像571,得到第三视频;电子设备可以在第一界面区域560内显示第一视频,在第二界面区域570内显示第三视频;电子设备可以将第一视频与第三视频合成为目标视频,其中,第一视频对应目标视频的第一图像区域,第三视频对应目标视频的第二图像区域。
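上述流程可以用如下假设性的逐帧合成草图来说明(Python/NumPy,仅示意,并非本申请限定的实现;bg1假定为已按上文方式从第一视频确定的第一背景图像562,mask2假定为第二视频当前帧中第二用户的轮廓掩膜):

```python
import numpy as np

def make_target_frame(frame1: np.ndarray,
                      bg1: np.ndarray,
                      frame2: np.ndarray,
                      mask2: np.ndarray) -> np.ndarray:
    """合成目标视频的一帧(假设各输入同宽同高)。

    frame1: 第一视频的一帧(含第一用户图像与第一背景图像)
    bg1:    第一背景图像(由第一视频确定)
    frame2: 第二视频的一帧
    mask2:  frame2中第二用户的布尔掩膜
    """
    third = bg1.copy()
    third[mask2] = frame2[mask2]       # 第三视频的一帧:第一背景之上叠加第二用户图像
    return np.vstack([frame1, third])  # 上半部分对应第一图像区域,下半部分对应第二图像区域
```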
第二用户背景控件620可以用于指示电子设备在第一界面区域560、第二界面区域570内均显示第二背景图像572。
如图8的用户界面800所示,第二界面区域570可以显示第二电子设备拍摄到的第二视频,该第二视频可以包括第二用户图像571以及第二背景图像572,第二背景图像572包含第二用户所在场景的图像信息。第一界面区域560显示有第一用户图像561以及该第二背景图像572,第一用户图像561可以是第一电子设备拍摄到的第一视频的一部分。第一界面区域560可以不显示如图5或图6所示的第一背景图像562。也就是说,第一界面区域560、第一图像区域均可以不包括与第一用户所在场景对应的像素点。
也就是说,电子设备可以通过合成第一视频中的第一用户图像561、第二视频中的第二用户图像571、第二视频中的第二背景图像572,得到目标视频,目标视频的第一图像区域可以与第一用户图像561、第二背景图像572对应,目标视频的第二图像区域可以与第二用户图像571、第二背景图像572对应。
例如,电子设备可以通过上文提到的方式,获取第一视频、第二视频;电子设备可以根据第二视频,确定第二背景图像572;电子设备可以合成第二背景图像572与第一用户图像561,得到第四视频;电子设备可以在第一界面区域560内显示第四视频,在第二界面区域570内显示第二视频;电子设备可以将第二视频与第四视频合成为目标视频,其中,第四视频对应目标视频的第一图像区域,第二视频对应目标视频的第二图像区域。
图库背景控件630可以用于指示电子设备从图库中获取目标图库图像910,并将该目标图库图像910设置为背景图像。目标图库图像910可以来自事先存储在电子设备上的视频或图像。目标图库图像910可以为视频的一个子帧。例如,当用户合拍视频时,该合拍视频的某个子帧可以与目标图库图像对应,该合拍视频的多个子帧可以与该目标图库图像所在视频的多个子帧一一对应。
在一个示例中,电子设备可以将目标图库图像910和第一用户图像561显示在第一界面区域560内,其中目标图库图像910可以是第一用户的背景图像。在另一个示例中,电子设备可以将目标图库图像910和第二用户图像571显示在第二界面区域570内,其中目标图库图像910可以是第二用户的背景图像。
如图9的用户界面900所示,第一界面区域560可以显示有第一用户图像561以及该目标图库图像910,第一用户图像561可以是第一电子设备拍摄到的第一视频的一部分。也就是说,第一界面区域560可以包括与第一用户对应的像素点,以及与目标图库图像910对应的像素点。第一界面区域560可以不显示如图5或图6所示的第一背景图像562,也不显示如图8所示的第二背景图像572。
如图9的用户界面900所示,第二界面区域570可以显示有第二用户图像571以及该目标图库图像910,第二用户图像571可以是第二电子设备拍摄到的第二视频的一部分。也就是说,第二界面区域570可以包括与第二用户对应的像素点,以及与目标图库图像910对应的像素点。第二界面区域570可以不显示如图5或图6所示的第二背景图像572,也不显示如图7所示的第一背景图像562。
电子设备可以通过合成第一视频中的第一用户图像561、第二视频中的第二用户图像571、目标图库图像910,得到目标视频,目标视频的第一图像区域可以与第一用户图像561、目标图库图像910对应,目标视频的第二图像区域可以与第二用户图像571、目标图库图像910对应。
例如,电子设备可以通过上文提到的方式,获取第一视频、第二视频;电子设备可以根据第一视频,确定第一用户图像561;电子设备可以根据第二视频,确定第二用户图像571;电子设备可以合成目标图库图像910与第一用户图像561,得到第五视频;电子设备可以合成目标图库图像910与第二用户图像571,得到第六视频;电子设备可以在第一界面区域560内显示第五视频,在第二界面区域570内显示第六视频;电子设备可以将第五视频与第六视频合成为目标视频,其中,第五视频对应目标视频的第一图像区域,第六视频对应目标视频的第二图像区域。
在一个示例中,图6至图9所示的用户界面可以被显示在第一电子设备上;那么图6至图9中的“选择我方背景”可以对应第一用户背景控件610,图6至图9中的“选择对方背景”可以对应第二用户背景控件620。在其他示例中,图6至图9所示的用户界面可以被显示在第二电子设备上;那么图6至图9中的“选择我方背景”可以对应第二用户背景控件620,图6至图9中的“选择对方背景”可以对应第一用户背景控件610。
可选的,如图5、图10所示,用户界面还可以包括分屏开关控件530。
如图5的用户界面500所示,在分屏开关控件530处于开启状态的情况下,第一界面区域560和第二界面区域570例如可以为两个规则的显示区域。也就是说,第一界面区域560的轮廓与第一用户的轮廓可以不匹配(或不对应),且第二界面区域570的轮廓与第二用户的轮廓可以不匹配(或不对应)。第一界面区域560的面积和第二界面区域570的面积例如可以对应固定比例(如1:1、1:1.5等)。在图5所示的示例中,分屏开关控件530当前处于开启状态。第一界面区域560和第二界面区域570的形状均可以为矩形。
响应用户作用在录制控件510上的操作,电子设备可以执行合成视频的操作,得到目标视频,该目标视频的第一图像区域和第二图像区域可以为两个规则的显示区域。第一图像区域的轮廓与第一用户的轮廓可以不匹配(或不对应),且第二图像区域的轮廓与第二用户的轮廓可以不匹配(或不对应)。第一图像区域的面积和第二图像区域的面积例如可以对应固定比例(如1:1、1:1.5等)。结合图5所示的示例,第一图像区域和第二图像区域的形状均可以为矩形。
如图10所示,在分屏开关控件530处于关闭状态的情况下,第一界面区域560的轮廓例如可以与第一用户的轮廓匹配(或对应),且第二界面区域570的轮廓例如可以与第二用户的轮廓匹配(或对应)。可选的,在分屏开关控件530处于关闭的情况下,背景去除开关控件520可以处于开启状态。也就是说,第一界面区域560可以不包括如图5或图6所示的第一视频的第一背景图像562;第二界面区域570可以不包括如图5或图6所示的第二视频中的第二背景图像572。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,第一图像区域的轮廓可以与第一用户的轮廓匹配(或对应),且第二图像区域的轮廓可以与第二用户的轮廓匹配(或对应)。也就是说,第一图像区域可以不包括如图5或图6所示的第一视频的第一背景图像562;第二图像区域可以不包括如图5或图6所示的第二视频中的第二背景图像572。
如图10的用户界面1000所示,当第一用户图像561与第二用户图像571在用户界面1000上可能存在显示冲突时,电子设备可以优先显示第一用户图像561或第二用户图像571。换句话说,第一用户图像561可以覆盖在第二用户图像571上,那么第一用户图像561或第一界面区域560的显示优先级可以高于第二用户图像571或第二界面区域570的显示优先级。或者,第二用户图像571可以覆盖在第一用户图像561上,那么第二用户图像571或第二界面区域570的显示优先级可以高于第一用户图像561或第一界面区域560的显示优先级。
例如,响应用户作用在第一界面区域560或第二界面区域570的操作,电子设备可以将第一用户图像561显示在第二用户图像571之前。在此情况下,第一界面区域560的轮廓可以由第一用户图像561确定;第二界面区域570的轮廓可以由第二用户图像571,以及第一用户图像561的与第二用户图像571重叠的部分确定。在一种可能的场景中,第一界面区域560可以显示第一用户图像561的全部像素点,第二界面区域570仅可以显示第二用户图像571的部分像素点。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,第一图像区域的轮廓可以由第一用户图像561确定,第二图像区域的轮廓可以由第二用户图像571,以及第一用户图像561的与第二用户图像571重叠的部分确定。第一图像区域可以包括与第一用户图像561对应的全部像素点,第二图像区域可以仅包括与第二用户图像571对应的部分像素点。
又如,响应用户作用在第二界面区域570的操作,电子设备可以将第二用户图像571显示在第一用户图像561之前,如图10中箭头1040所指的位置。在此情况下,第二界面区域570的轮廓可以由第二用户图像571确定;第一界面区域560的轮廓可以由第一用户图像561,以及第二用户图像571的与第一用户图像561重叠的部分确定。在一种可能的场景中,第二界面区域570可以显示第二用户图像571的全部像素点,第一界面区域560仅可以显示第一用户图像561的部分像素点。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,第二图像区域的轮廓可以由第二用户图像571确定;第一图像区域的轮廓可以由第一用户图像561,以及第二用户图像571的与第一用户图像561重叠的部分确定。第一图像区域可以仅包括与第一用户图像561对应的部分像素点,第二图像区域可以包括与第二用户图像571对应的全部像素点。
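上述按显示优先级叠加的逻辑可以用一个简化的图层合成草图来说明(Python/NumPy假设性示例,仅为便于理解;图层列表按优先级从低到高排列,高优先级的用户图像覆盖重叠部分):

```python
import numpy as np

def composite_by_priority(background: np.ndarray, layers) -> np.ndarray:
    """按显示优先级从低到高依次把各用户图层叠加到背景界面区域上。

    background: H x W x 3 的背景图像
    layers:     [(frame, mask), ...],mask为该用户图像的布尔掩膜;
                列表末尾的图层优先级最高,其像素覆盖此前图层的重叠部分
    """
    out = background.copy()
    for frame, mask in layers:
        out[mask] = frame[mask]
    return out
```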
为了减少用户后期处理视频的工作量,用户可以调整第一用户图像561和第二用户图像571的显示大小,进而可以调整第一用户的图像和第二用户的图像在目标视频中的尺寸比例。
例如,用户可以在第一界面区域560上做出缩放操作。响应用户作用在第一界面区域560上的缩放操作,电子设备可以对第一界面区域560进行缩放。由于第一界面区域560的轮廓与第一用户图像561的轮廓匹配,因此可以调整第一用户图像561在用户界面1000上的显示比例。相应地,目标视频中第一图像区域的图像比例可以被调整。
又如,用户可以在第二界面区域570上做出缩放操作。响应用户作用在第二界面区域570上的缩放操作,电子设备可以对第二界面区域570进行缩放。由于第二界面区域570的轮廓与第二用户图像571的轮廓匹配,因此可以调整第二用户图像571在用户界面1000上的显示比例。相应地,目标视频中第二图像区域的图像比例可以被调整。
又如,用户可以在用户界面1000上做出缩放操作。响应用户作用在用户界面1000上的缩放操作,电子设备可以调整第一界面区域560、第二界面区域570在用户界面1000上的显示比例。相应地,目标视频中第一图像区域、第二图像区域的图像比例均可以被调整。也就是说,电子设备可以具有一次性调整多个用户图像大小的能力。
第一界面区域560在用户界面上的显示比例可以为第一显示比例,第二界面区域570在用户界面上的显示比例可以为第二显示比例。界面区域在用户界面上的显示比例可以理解为,界面区域的像素点数量与用户界面的像素点数量的比值。
在一个可能的示例中,在电子设备调整第一界面区域560、第二界面区域570在用户界面上的显示比例后,第一显示比例与第二显示比例可以相同或近似相同。例如,第一界面区域560包括的像素数量与第二界面区域570包括的像素数量相同或相差不大。这种显示方式可以使多个用户图像在用户界面上的显示比例相差不大。这种显示方式相对更适用于合拍用户数量较多的场景,有利于减少用户逐一调整用户图像显示比例的工作量。如图10所示的示例,在用户观察用户界面时,第一用户的图像与第二用户的图像看起来可以是等大的。
在另一个可能的示例中,第一用户图像561相对于第一视频的比例可以为第一图像比例(图像比例可以指,在视频图像的某一帧内,用户图像的像素点数量与该帧的总像素点数量的比值),第二用户图像571相对于第二视频的比例可以为第二图像比例,第一显示比例与第一图像比例的比值(在显示比例与图像比例的比值为1的情况下,可以理解为电子设备根据用户图像在视频中的原始大小显示该用户图像)可以为第一比值,第二显示比例与第二图像比例的比值可以为第二比值。在电子设备调整第一界面区域560、第二界面区域570在用户界面上的显示比例后,该第一比值与该第二比值可以相同。
例如,第一用户图像561在第一视频中的占比较大,第二用户图像571在第二视频中的占比较小,那么第一用户图像561在用户界面中的占比可以大于第二用户图像571在用户界面中的占比。这种显示方式可以相对更贴近原始视频。用户可以通过靠近摄像头或远离摄像头的方式调整用户在用户界面和合拍后的目标视频中的大小。合拍后的目标视频可以基本还原原始视频的像素,有利于提升合拍视频的清晰度。如图10所示的示例,合拍时用户图像在用户界面上的显示比例可以匹配自拍时用户图像在用户界面上的显示比例。这有利于使用户更容易适应本申请实施例提供的合拍方法。
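上文提到的显示比例与图像比例可以用如下示意性计算来表达(Python草图,函数与变量命名均为说明而设,并非本申请限定的实现):

```python
def display_ratio(region_pixels: int, ui_pixels: int) -> float:
    """显示比例 = 界面区域的像素点数量 / 用户界面的像素点数量"""
    return region_pixels / ui_pixels

def image_ratio(user_pixels: int, frame_pixels: int) -> float:
    """图像比例 = 某一帧内用户图像的像素点数量 / 该帧的总像素点数量"""
    return user_pixels / frame_pixels

# 第二种示例要求第一比值与第二比值相同:
#   display_ratio_1 / image_ratio_1 == display_ratio_2 / image_ratio_2
# 当该比值为1时,相当于按用户图像在视频中的原始大小显示该用户图像。
```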
如图10所示,用户界面1000还可以包括背景界面区域580。在图10所示的示例中,背景界面区域580中的像素点例如可以是默认值(例如默认为灰色、白色等)。可选的,背景界面区域580可以显示第一背景图像562、第二背景图像572、目标图库图像910中的任一种。相应地,目标视频可以包括背景图像区域。背景图像区域可以与第一背景图像562、第二背景图像572、目标图库图像910中的任一个对应。
在一个示例中,如图10、图11所示,用户界面可以包括第一用户背景控件1010。
第一用户背景控件1010可以用于指示电子设备在背景图像区域内显示第一背景图像562。响应用户作用在第一用户背景控件1010的操作,电子设备可以将第一背景图像562显示在背景界面区域580。如图11所示,背景界面区域580可以包括与第一背景图像562对应的像素点。在一些可能的实现方式中,电子设备可以隐藏用户界面上的第一用户背景控件1010。例如,电子设备可以自动隐藏用户界面上的第一用户背景控件1010,以使得第一用户背景控件1010不会遮挡第一背景图像562。可选的,电子设备还可以隐藏第二用户背景控件1020、图库背景控件1030。由于用户已选择了合适的背景图像,因此电子设备可以隐藏部分控件,从而简化用户界面,减少控件对预览视频的遮挡。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,电子设备可以合成第一视频中的第一用户图像561、第一视频中的第一背景图像562、第二视频中的第二用户图像571,得到目标视频,目标视频的第一图像区域可以与第一用户图像561对应,目标视频的第二图像区域可以与第二用户图像571对应,目标视频的背景图像区域可以与第一背景图像562对应。
在一个示例中,如图10、图12所示,用户界面可以包括第二用户背景控件1020。
第二用户背景控件1020可以用于指示电子设备在背景图像区域内显示第二背景图像572。响应用户作用在第二用户背景控件1020的操作,电子设备可以将第二背景图像572显示在背景界面区域580。如图12所示,背景界面区域580可以包括与第二背景图像572对应的像素点。在一些可能的实现方式中,电子设备可以隐藏用户界面上的第二用户背景控件1020。例如,电子设备可以自动隐藏用户界面上的第二用户背景控件1020,以使得第二用户背景控件1020不会遮挡第二背景图像572。可选的,电子设备还可以隐藏第一用户背景控件1010、图库背景控件1030。由于用户已选择了合适的背景图像,因此电子设备可以隐藏部分控件,从而简化用户界面,减少控件对预览视频的遮挡。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,电子设备可以合成第一视频中的第一用户图像561、第二视频中的第二用户图像571、第二视频中的第二背景图像572,得到目标视频,目标视频的第一图像区域可以与第一用户图像561对应,目标视频的第二图像区域可以与第二用户图像571对应,目标视频的背景图像区域可以与第二背景图像572对应。
在一个示例中,如图10、图13所示,用户界面可以包括图库背景控件1030。
图库背景控件1030用于指示电子设备从图库中获取目标图库图像910;响应用户作用在图库背景控件1030的操作,电子设备可以将目标图库图像910显示在背景界面区域580。如图13所示,背景界面区域580可以包括与目标图库图像910对应的像素点。目标图库图像910可以为视频的一个子帧。在一些可能的实现方式中,电子设备可以隐藏用户界面上的图库背景控件1030。例如,电子设备可以自动隐藏用户界面上的图库背景控件1030,以使得图库背景控件1030不会遮挡目标图库图像910。可选的,电子设备还可以隐藏第一用户背景控件1010、第二用户背景控件1020。由于用户已选择了合适的背景图像,因此电子设备可以隐藏部分控件,从而简化用户界面,减少控件对预览视频的遮挡。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,电子设备可以合成第一视频中的第一用户图像561、第二视频中的第二用户图像571、目标图库图像910,得到目标视频,目标视频的第一图像区域可以与第一用户图像561对应,目标视频的第二图像区域可以与第二用户图像571对应,目标视频的背景图像区域可以与目标图库图像910对应。
在远程合拍完成后,用户界面可以显示视频通话挂断控件(图3至图13未示出)。响应用户作用在视频通话挂断控件的操作(如点击操作),电子设备可以挂断第一用户与第二用户的视频通话。由此,第一用户可以通过相机应用,实现与第二用户的视频通话和远程合拍。
响应用户针对图库应用的操作,电子设备可以调取目标视频,从而用户可以观看目标视频。响应用户针对图库应用的操作,电子设备还可以对目标视频进行后期调整,例如调整第一图像区域的播放快慢、第二图像区域的播放快慢、对第一图像区域美化、对第二图像区域美化、调整第一图像区域的大小、第二图像区域的大小等。
图14示出了本申请实施例提供的一种应用模块的关系图。图14所示的相机应用例如可以对应图2所示的相机应用。图14所示的畅连应用例如可以对应图2所示的畅连应用。图14所示的图库应用例如可以对应图2所示的图库应用。
相机应用可以包括拍摄模块。在一个示例中,该拍摄模块可以用于拍摄第一用户所在场景,得到第一视频。在另一个示例中,该拍摄模块可以用于拍摄第二用户所在场景,得到第二视频。
畅连应用可以包括视频通话模块。在一个示例中,该视频通话模块可以用于向第二电子设备发送第一视频,并从该第二电子设备获取第二视频,第二电子设备为第二用户的电子设备。在另一个示例中,该视频通话模块可以用于向第一电子设备发送第二视频,并从该第一电子设备获取第一视频,第一电子设备为第一用户的电子设备。
相机应用还可以包括合成模块。该合成模块可以根据第一视频和第二视频,合成目标视频。
图库应用可以包括多媒体模块。该多媒体模块可以调取目标视频,并对目标视频进行后期处理。
本申请实施例提供的方案可以通过相机应用实现拍摄,并且可以通过相机应用调用电子设备的畅连应用,进而实现视频通话,从而可以实现同步远程合拍的效果。由于视频通话可以有利于多个用户之间沟通、交流,因此有利于提升多个用户合拍的匹配度。在挂断视频通话后可以获得合拍效果相对较好的合拍图片或合拍视频,有利于减少用户在合拍过程中的工作量,例如后期修图工作量等。在通话质量相对较好的情况下(例如电子设备信号较好的情况下),合拍图像或合拍视频的清晰度可以相对较高。
图15是本申请实施例提供的另一种用户界面1400的示意图。该用户界面1400可以显示在第一电子设备上。该用户界面1400可以为畅连应用的界面,或者其他具有视频通话功能的应用的界面。也就是说,第一电子设备上承载畅连应用或其他具有视频通话功能的应用。响应第一用户作用在这些应用上的操作,第一电子设备可以显示该用户界面1400。
例如,第一用户可以通过点击畅连应用的图标,打开畅连应用,进而第一电子设备可以显示用户界面1400。
用户界面1400可以包括与多个用户一一对应的多个用户控件1410。该多个用户可以包括第二用户。响应第一用户对第二用户控件1410的操作(如点击操作),第一电子设备可以在图16所示的用户界面1500显示该第二用户的联系信息。如图16所示,第二用户的联系信息可以包括以下至少一项:第二用户的姓名1510、第二用户的联系方式1520、第二用户的通话记录1530等。
如图15所示,用户界面1400可以包括用户搜索控件1420。在一个示例中,第一用户可以通过该用户搜索控件1420,邀请第二用户进行视频通话。响应第一用户作用在用户搜索控件1420的操作(如点击操作),以及后续一系列操作(如文字输入、语音输入、扫描二维码等),第一电子设备可以获取第二用户的相关信息(如第二用户的部分或全部姓名、第二用户的姓名的首字母、第二用户的部分或全部视频通话号码等)。第一电子设备可以根据该第二用户的相关信息,从第一电子设备存储的多个用户记录中确定第二用户的用户记录,该多个用户记录可以与该多个用户一一对应。进而,第一电子设备可以快速在用户界面1400上显示该第二用户的用户控件。
可选的,用户界面1400可以包括常用用户控件1412。如图15所示,第二用户可以属于常用联系人,用户界面1400可以包括与第二用户对应的常用用户控件1411。
在一个示例中,第一电子设备可以统计合拍次数最多的用户为用户A,并在用户界面1400上显示常用用户控件A,该常用用户控件A可以为与用户A对应的控件。在另一个示例中,第一电子设备可以统计视频通话次数最多的用户为用户B,并在用户界面1400上显示常用用户控件B,该常用用户控件B可以为与用户B对应的控件。
可选的,为便于第一用户快速搜索第二用户,多个用户例如可以按照字母的顺序排列。
如图15、图16所示,用户界面可以包括畅连视频控件1430。如图15所示,用户界面1400可以包括与多个用户一一对应的多个畅连视频控件1430。如图16所示,用户界面1500可以包括与第二用户对应的畅连视频控件1430。
第一用户可以通过第一电子设备,邀请第二用户进行视频通话。结合图15至图17所示,响应第一用户作用在与第二用户对应的畅连视频控件1430的操作(如点击操作),第一电子设备可以向第二电子设备发起视频通话,其中,第二电子设备可以为第二用户使用的电子设备。第一电子设备例如可以显示如图17所示的视频呼叫界面1600。
相应地,第二用户可以通过第二电子设备接收到第一用户的视频通话邀请。第二电子设备可以显示视频通话邀请的界面,该界面可以包括视频通话接听控件。响应第二用户作用在视频通话接听控件上的操作,第一电子设备与第二电子设备之间可以建立视频通话连接。第一电子设备例如可以显示如图18所示的用户界面1700。
在第一电子设备与第二电子设备建立视频通话连接后,第一电子设备通过拍摄可以得到第一视频,第二电子设备通过拍摄可以得到第二视频;并且,第一电子设备可以通过该视频通话连接,获取该第二视频,第二电子设备可以通过视频通话连接,获取该第一视频。
在一个示例中,如图18所示,第一用户可以在视频通话过程中邀请第二用户远程合拍。在其他示例中,第二用户可以在视频通话过程中邀请第一用户远程合拍。在远程合拍被第一用户、第二用户均授权后,第一电子设备、第二电子设备可以显示如图19所示的用户界面1800。用户界面1800可以是远程合拍的准备界面。
可选的,图15或图16所示的用户界面还可以包括远程合拍控件1440。如图15所示,用户界面1400可以包括与多个用户一一对应的多个远程合拍控件1440。如图16所示,用户界面1500可以包括与第二用户对应的远程合拍控件1440。
第一用户可以通过远程合拍控件1440,邀请第二用户通过视频通话完成远程合拍。结合图15至图17所示,响应第一用户作用在该远程合拍控件1440的操作(如点击操作),第一电子设备可以向第二电子设备发起视频通话,并向第二电子设备发送指示信息,该指示信息用于邀请第二用户合拍,其中,第二电子设备可以为第二用户使用的电子设备。第一电子设备例如可以显示如图17所示的视频呼叫界面1600。
相应地,第二用户可以通过第二电子设备接收到第一用户的远程合拍邀请。第二电子设备可以显示远程合拍邀请的界面,该界面可以包括视频通话接听控件。响应第二用户作用在视频通话接听控件上的操作,第一电子设备与第二电子设备之间可以建立视频通话连接,并且,第一电子设备、第二电子设备均可以显示如图19所示的用户界面1800。
如图19所示,用户界面1800可以包括第一界面区域1860、第二界面区域1870,第一界面区域1860可以显示第一电子设备当前拍摄到的部分或全部图像,第二界面区域1870可以显示第二电子设备当前拍摄到的部分或全部图像。第一界面区域1860与第二界面区域1870之间可以互不交叉。第一界面区域1860、第二界面区域1870可以位于用户界面1800上的任意位置。如图19所示,第一界面区域1860可以位于用户界面1800的上方,第二界面区域1870可以位于用户界面1800的下方。也就是说,第一电子设备拍摄到的部分或全部图像与第二电子设备拍摄到的部分或全部图像可以同时显示在用户界面1800上。
例如,如图19所示,在第一用户使用第一电子设备的前置摄像头自拍,且第二用户使用第二电子设备的前置摄像头自拍的情况下,第一界面区域1860可以包括第一用户图像1861,第二界面区域1870可以包括第二用户图像1871。也就是说,第一界面区域1860可以包括与第一用户对应的像素点,第二界面区域1870可以包括与第二用户对应的像素点。应理解,在其他示例中,第一电子设备和/或第二电子设备可以采用后置摄像头拍摄包含用户的图像。
用户界面1800可以包括录制控件1810。响应用户作用在该录制控件1810上的操作,电子设备可以合成第一电子设备拍摄的第一视频以及第二电子设备拍摄的第二视频,得到目标视频。该目标视频包括第一图像区域、第二图像区域,第一图像区域与第一界面区域1860对应,第二图像区域与第二界面区域1870对应。结合上文可知,该目标视频的清晰度可以高于电子设备的显示清晰度。
用户界面还可以包括用于调整合拍效果的多个控件。在合拍开始之前或在合拍的过程中,用户可以通过这些控件调整合拍效果。下面结合图19至图27,阐述本申请提供的一些可能的控件示例。
可选的,如图19所示,用户界面1800可以包括美化开关控件1840。参照图3至图13所示的实施例,美化开关控件1840可以具有上文所述的美颜开关控件540和/或滤镜开关控件550的功能,在此就不再详细赘述。
可选的,如图19所示,用户界面1800还可以包括背景去除开关控件1820。背景去除开关控件1820可以用于指示电子设备是否扣除第一视频的背景和/或第二视频的背景。
如图19所示,背景去除开关控件1820当前可以处于关闭状态。第一界面区域1860可以显示有第一用户图像1861和第一背景图像1862;第二界面区域1870可以显示有第二用户图像1871和第二背景图像1872。第一背景图像1862可以为第一用户的背景图像。第二背景图像1872可以为第二用户的背景图像。也就是说,第一界面区域1860可以包括与第一用户对应的像素点,以及与第一用户所在场景对应的像素点;第二界面区域1870可以包括与第二用户对应的像素点,以及与第二用户所在场景对应的像素点。
如图20所示,在背景去除开关控件1820处于开启状态的情况下,电子设备可以显示用户界面1900。用户界面1900可以包括第一用户背景控件1910、第二用户背景控件1920、图库背景控件1930。
第一用户背景控件1910用于指示电子设备在第一界面区域1860、第二界面区域1870内均显示第一背景图像1862。
如图21的用户界面2000所示,第一界面区域1860可以显示第一电子设备拍摄到的第一视频,该第一视频可以包括第一用户图像1861以及第一背景图像1862,第一背景图像1862包含第一用户所在场景的图像信息。第二界面区域1870显示有第二用户图像1871以及该第一背景图像1862,第二用户图像1871可以是第二电子设备拍摄到的第二视频的一部分。第二界面区域1870可以不显示如图19或图20所示的第二背景图像1872。也就是说,第二界面区域1870可以不包括与第二用户所在场景对应的像素点。
电子设备可以通过合成第一视频中的第一用户图像1861、第一视频中的第一背景图像1862、第二视频中的第二用户图像1871,得到目标视频,目标视频的第一图像区域可以与第一用户图像1861、第一背景图像1862对应,目标视频的第二图像区域可以与第二用户图像1871、第一背景图像1862对应。
例如,电子设备可以通过上文提到的方式,获取第一视频、第二视频;电子设备可以根据第一视频,确定第一背景图像1862;电子设备可以合成第一背景图像1862与第二用户图像1871,得到第三视频;电子设备可以在第一界面区域1860内显示第一视频,在第二界面区域1870内显示第三视频;电子设备可以将第一视频与第三视频合成为目标视频,其中,第一视频对应目标视频的第一图像区域,第三视频对应目标视频的第二图像区域。
第二用户背景控件1920可以用于指示电子设备在第一界面区域1860、第二界面区域1870内均显示第二背景图像1872。
如图22的用户界面2100所示,第二界面区域1870可以显示第二电子设备拍摄到的第二视频,该第二视频可以包括第二用户图像1871以及第二背景图像1872,第二背景图像1872包含第二用户所在场景的图像信息。第一界面区域1860显示有第一用户图像1861以及该第二背景图像1872,第一用户图像1861可以是第一电子设备拍摄到的第一视频的一部分。第一界面区域1860可以不显示如图19或图20所示的第一背景图像1862。也就是说,第一界面区域1860可以不包括与第一用户所在场景对应的像素点。
电子设备可以通过合成第一视频中的第一用户图像1861、第二视频中的第二用户图像1871、第二视频中的第二背景图像1872,得到目标视频,目标视频的第一图像区域可以与第一用户图像1861、第二背景图像1872对应,目标视频的第二图像区域可以与第二用户图像1871、第二背景图像1872对应。
例如,电子设备可以通过上文提到的方式,获取第一视频、第二视频;电子设备可以根据第二视频,确定第二背景图像1872;电子设备可以合成第二背景图像1872与第一用户图像1861,得到第四视频;电子设备可以在第一界面区域1860内显示第四视频,在第二界面区域1870内显示第二视频;电子设备可以将第二视频与第四视频合成为目标视频,其中,第四视频对应目标视频的第一图像区域,第二视频对应目标视频的第二图像区域。
图库背景控件1930可以用于指示电子设备从图库中获取目标图库图像2210,并将该目标图库图像2210设置为背景图像。目标图库图像2210可以为视频的一个子帧。例如,当用户合拍视频时,该合拍视频的某个子帧可以与目标图库图像2210对应,该合拍视频的多个子帧可以与该目标图库图像2210所在视频的多个子帧一一对应。
如图23的用户界面2200所示,第一界面区域1860可以显示有第一用户图像1861以及该目标图库图像2210,第一用户图像1861可以是第一电子设备拍摄到的第一视频的一部分。也就是说,第一界面区域1860可以包括与第一用户对应的像素点,以及与目标图库图像2210对应的像素点。第一界面区域1860可以不显示如图19或图20所示的第一背景图像1862,也不显示如图22所示的第二背景图像1872。
如图23的用户界面2200所示,第二界面区域1870可以显示有第二用户图像1871以及该目标图库图像2210,第二用户图像1871可以是第二电子设备拍摄到的第二视频的一部分。也就是说,第二界面区域1870可以包括与第二用户对应的像素点,以及与目标图库图像2210对应的像素点。第二界面区域1870可以不显示如图19或图20所示的第二背景图像1872,也不显示如图21所示的第一背景图像1862。
在其他示例中,电子设备可以将目标图库图像2210仅显示在第一界面区域1860或第二界面区域1870内,以充当该界面区域的背景图像。
电子设备可以通过合成第一视频中的第一用户图像1861、第二视频中的第二用户图像1871、目标图库图像2210,得到目标视频,目标视频的第一图像区域可以与第一用户图像1861、目标图库图像2210对应,目标视频的第二图像区域可以与第二用户图像1871、目标图库图像2210对应。
例如,电子设备可以通过上文提到的方式,获取第一视频、第二视频;电子设备可以根据第一视频,确定第一用户图像1861;电子设备可以根据第二视频,确定第二用户图像1871;电子设备可以合成目标图库图像2210与第一用户图像1861,得到第五视频;电子设备可以合成目标图库图像2210与第二用户图像1871,得到第六视频;电子设备可以在第一界面区域1860内显示第五视频,在第二界面区域1870内显示第六视频;电子设备可以将第五视频与第六视频合成为目标视频,其中,第五视频对应目标视频的第一图像区域,第六视频对应目标视频的第二图像区域。
在一个示例中,图20至图23所示的用户界面可以被显示在第一电子设备上;那么图20至图23中的“选择我方背景”可以对应第一用户背景控件1910,图20至图23中的“选择对方背景”可以对应第二用户背景控件1920。在其他示例中,图20至图23所示的用户界面可以被显示在第二电子设备上;那么图20至图23中的“选择我方背景”可以对应第二用户背景控件1920,图20至图23中的“选择对方背景”可以对应第一用户背景控件1910。
可选的,如图19、图24所示,用户界面还可以包括分屏开关控件1830。
如图19的用户界面1800所示,在分屏开关控件1830处于开启状态的情况下,第一界面区域1860和第二界面区域1870例如可以为两个规则的显示区域。也就是说,第一界面区域1860的轮廓与第一用户的轮廓可以不匹配(或不对应),且第二界面区域1870的轮廓与第二用户的轮廓可以不匹配(或不对应)。第一界面区域1860的面积和第二界面区域1870的面积例如可以对应固定比例(如1:1、1:1.5等)。在图19所示的示例中,分屏开关控件1830当前处于开启状态。第一界面区域1860和第二界面区域1870的形状均可以为矩形。
响应用户作用在录制控件1810上的操作,电子设备可以执行合成视频的操作,得到目标视频。该目标视频的第一图像区域和第二图像区域可以为两个规则的显示区域。第一图像区域的轮廓与第一用户的轮廓可以不匹配(或不对应),且第二图像区域的轮廓与第二用户的轮廓可以不匹配(或不对应)。第一图像区域的面积和第二图像区域的面积例如可以对应固定比例(如1:1、1:1.5等)。结合图19所示的示例,该目标视频的第一图像区域和第二图像区域的形状均可以为矩形。
如图24所示,在分屏开关控件1830处于关闭状态的情况下,第一界面区域1860的轮廓例如可以与第一用户的轮廓匹配(或对应),且第二界面区域1870的轮廓例如可以与第二用户的轮廓匹配(或对应)。可选的,在分屏开关控件1830处于关闭的情况下,背景去除开关控件1820可以处于开启状态。也就是说,第一界面区域1860可以不包括如图19或图20所示的第一视频的第一背景图像1862;第二界面区域1870可以不包括如图19或图20所示的第二视频中的第二背景图像1872。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,第一图像区域的轮廓可以与第一用户的轮廓匹配(或对应),且第二图像区域的轮廓可以与第二用户的轮廓匹配(或对应)。也就是说,第一图像区域可以不包括如图19或图20所示的第一视频的第一背景图像1862;第二图像区域可以不包括如图19或图20所示的第二视频中的第二背景图像1872。
如图24的用户界面2300所示,当第一用户图像1861与第二用户图像1871在用户界面2300上可能存在显示冲突时,电子设备可以优先显示第一用户图像1861或第二用户图像1871。换句话说,第一用户图像1861可以覆盖在第二用户图像1871上,那么第一用户图像1861或第一界面区域1860的显示优先级可以高于第二用户图像1871或第二界面区域1870的显示优先级。或者,第二用户图像1871可以覆盖在第一用户图像1861上,那么第二用户图像1871或第二界面区域1870的显示优先级可以高于第一用户图像1861或第一界面区域1860的显示优先级。
例如,响应用户作用在第一界面区域1860或第二界面区域1870的操作,电子设备可以将第一用户图像1861显示在第二用户图像1871之前。在此情况下,第一界面区域1860的轮廓可以由第一用户图像1861确定;第二界面区域1870的轮廓可以由第二用户图像1871,以及第一用户图像1861的与第二用户图像1871重叠的部分确定。在一种可能的场景中,第一界面区域1860可以显示第一用户图像1861的全部像素点,第二界面区域1870仅可以显示第二用户图像1871的部分像素点。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,第一图像区域的轮廓可以由第一用户图像1861确定,第二图像区域的轮廓可以由第二用户图像1871,以及第一用户图像1861的与第二用户图像1871重叠的部分确定。第一图像区域可以包括与第一用户图像1861对应的全部像素点,第二图像区域可以仅包括与第二用户图像1871对应的部分像素点。
又如,响应用户作用在第二界面区域1870或第一界面区域1860的操作,电子设备可以将第二用户图像1871显示在第一用户图像1861之前,如图24中箭头2340所指的位置。在此情况下,第二界面区域1870的轮廓可以由第二用户图像1871确定;第一界面区域1860的轮廓可以由第一用户图像1861,以及第二用户图像1871的与第一用户图像1861重叠的部分确定。在一种可能的场景中,第二界面区域1870可以显示第二用户图像1871的全部像素点,第一界面区域1860仅可以显示第一用户图像1861的部分像素点。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,第二图像区域的轮廓可以由第二用户图像1871确定;第一图像区域的轮廓可以由第一用户图像1861,以及第二用户图像1871的与第一用户图像1861重叠的部分确定。第一图像区域可以仅包括与第一用户图像1861对应的部分像素点,第二图像区域可以包括与第二用户图像1871对应的全部像素点。
为了减少用户后期处理视频的工作量,用户可以调整第一用户图像1861和第二用户图像1871的显示大小,进而可以调整第一用户的图像和第二用户的图像在目标视频中的尺寸比例。如图24所示,第二用户图像1871在用户界面2300的显示比例可以大于第一用户图像1861在用户界面2300的显示比例。
例如,用户可以在第一界面区域1860上做出缩放操作。响应用户作用在第一界面区域1860上的缩放操作,电子设备可以对第一界面区域1860进行缩放。由于第一界面区域1860的轮廓与第一用户图像1861的轮廓匹配,因此可以调整第一用户图像1861在用户界面2300上的显示比例。相应地,目标视频中第一图像区域的图像比例可以被调整。
又如,用户可以在第二界面区域1870上做出缩放操作。响应用户作用在第二界面区域1870上的缩放操作,电子设备可以对第二界面区域1870进行缩放。由于第二界面区域1870的轮廓与第二用户图像1871的轮廓匹配,因此可以调整第二用户图像1871在用户界面2300上的显示比例。相应地,目标视频中第二图像区域的图像比例可以被调整。
又如,用户可以在用户界面2300上做出缩放操作。响应用户作用在用户界面2300上的缩放操作,电子设备可以调整第一界面区域1860、第二界面区域1870在用户界面2300上的显示比例。相应地,目标视频中第一图像区域、第二图像区域的图像比例均可以被调整。也就是说,电子设备可以具有一次性调整多个用户图像大小的能力。
第一界面区域1860在用户界面上的显示比例可以为第一显示比例,第二界面区域1870在用户界面上的显示比例可以为第二显示比例。界面区域在用户界面上的显示比例可以理解为,界面区域的像素点数量与用户界面的像素点数量的比值。
在一个可能的示例中,在电子设备调整第一界面区域1860、第二界面区域1870在用户界面上的显示比例后,第一显示比例与第二显示比例可以相同或近似相同。例如,第一界面区域1860包括的像素数量与第二界面区域1870包括的像素数量相同或相差不大。这种显示方式可以使多个用户图像在用户界面上的显示比例相差不大。这种显示方式相对更适用于合拍用户数量较多的场景,有利于减少用户逐一调整用户图像的显示比例的工作量。
在另一个可能的示例中,第一用户图像1861相对于第一视频的比例可以为第一图像比例(图像比例可以指,在视频图像的某一帧内,用户图像的像素点数量与该帧的总像素点数量的比值),第二用户图像1871相对于第二视频的比例可以为第二图像比例,第一显示比例与第一图像比例的比值(在显示比例与图像比例的比值为1的情况下,可以理解为电子设备根据用户图像在视频中的原始大小显示该用户图像)可以为第一比值,第二显示比例与第二图像比例的比值可以为第二比值。在电子设备调整第一界面区域1860、第二界面区域1870在用户界面上的显示比例后,该第一比值与该第二比值可以相同。
例如,如图24所示,第二用户图像1871在第二视频中的占比较大,第一用户图像1861在第一视频中的占比较小,那么第二用户图像1871在用户界面2300中的占比可以大于第一用户图像1861在用户界面2300中的占比。这种显示方式可以相对更贴近原始视频。用户可以通过靠近摄像头或远离摄像头的方式调整用户在用户界面和合拍后的目标视频中的大小。合拍后的目标视频可以基本还原原始视频的像素,有利于提升合拍视频的清晰度。如图24所示的示例,合拍时用户图像在用户界面2300上的显示比例可以匹配自拍时用户图像在用户界面2300上的显示比例。这有利于使用户更容易适应本申请实施例提供的合拍方法。
如图24所示,用户界面2300还可以包括背景界面区域1880。在图24所示的示例中,背景界面区域1880中的像素点例如可以是默认值(例如默认为灰色、白色等)。可选的,背景界面区域1880可以显示第一背景图像1862、第二背景图像1872、目标图库图像2210中的任一种。相应地,目标视频可以包括背景图像区域。背景图像区域可以与第一背景图像1862、第二背景图像1872、目标图库图像2210中的任一个对应。
在一个示例中,如图24、图25所示,用户界面可以包括第一用户背景控件1910。
第一用户背景控件1910可以用于指示电子设备在背景图像区域内显示第一背景图像1862。响应用户作用在第一用户背景控件1910的操作,电子设备可以将第一背景图像1862显示在背景界面区域1880。如图25所示,背景界面区域1880可以包括与第一背景图像1862对应的像素点。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,电子设备可以合成第一视频中的第一用户图像1861、第一视频中的第一背景图像1862、第二视频中的第二用户图像1871,得到目标视频,目标视频的第一图像区域可以与第一用户图像1861对应,目标视频的第二图像区域可以与第二用户图像1871对应,目标视频的背景图像区域可以与第一背景图像1862对应。
在一个示例中,如图24、图26所示,用户界面可以包括第二用户背景控件1920。
第二用户背景控件1920可以用于指示电子设备在背景图像区域内显示第二背景图像1872。响应用户作用在第二用户背景控件1920的操作,电子设备可以将第二背景图像1872显示在背景界面区域1880。如图26所示,背景界面区域1880可以包括与第二背景图像1872对应的像素点。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,电子设备可以合成第一视频中的第一用户图像1861、第二视频中的第二用户图像1871、第二视频中的第二背景图像1872,得到目标视频,目标视频的第一图像区域可以与第一用户图像1861对应,目标视频的第二图像区域可以与第二用户图像1871对应,目标视频的背景图像区域可以与第二背景图像1872对应。
在一个示例中,如图24、图27所示,用户界面可以包括图库背景控件1930。
图库背景控件1930用于指示电子设备从图库中获取目标图库图像2210;响应用户作用在图库背景控件1930的操作,电子设备可以将目标图库图像2210显示在背景界面区域1880。如图27所示,背景界面区域1880可以包括与目标图库图像2210对应的像素点。目标图库图像2210可以为视频的一个子帧。
由于合成后的目标视频可以与电子设备显示的用户界面相对应,因此,电子设备可以合成第一视频中的第一用户图像1861、第二视频中的第二用户图像1871、目标图库图像2210,得到目标视频,目标视频的第一图像区域可以与第一用户图像1861对应,目标视频的第二图像区域可以与第二用户图像1871对应,目标视频的背景图像区域可以与目标图库图像2210对应。
在远程合拍完成后,用户界面可以显示视频通话挂断控件1850,如图19所示。响应用户作用在视频通话挂断控件1850的操作(如点击操作),电子设备可以挂断第一用户与第二用户的视频通话。由此,第一用户可以通过畅连应用,实现与第二用户的视频通话和远程合拍。
响应用户针对图库应用的操作,电子设备可以调取目标视频,从而用户可以观看目标视频。响应用户针对图库应用的操作,电子设备还可以对目标视频进行后期调整,例如调整第一图像区域的播放快慢、第二图像区域的播放快慢、对第一图像区域美化、对第二图像区域美化、调整第一图像区域的大小、第二图像区域的大小等。
图28示出了本申请实施例提供的一种应用模块的关系图。图28所示的相机应用例如可以对应图2所示的相机应用。图28所示的畅连应用例如可以对应图2所示的畅连应用。图28所示的图库应用例如可以对应图2所示的图库应用。
相机应用可以包括拍摄模块。在一个示例中,该拍摄模块可以用于拍摄第一用户所在场景,得到第一视频。在另一个示例中,该拍摄模块可以用于拍摄第二用户所在场景,得到第二视频。
畅连应用可以包括视频通话模块。在一个示例中,该视频通话模块可以用于向第二电子设备发送第一视频,并从该第二电子设备获取第二视频,第二电子设备为第二用户的电子设备。在另一个示例中,该视频通话模块可以用于向第一电子设备发送第二视频,并从该第一电子设备获取第一视频,第一电子设备为第一用户的电子设备。
畅连应用还可以包括合成模块。该合成模块可以根据第一视频和第二视频,合成目标视频。
图库应用可以包括多媒体模块。该多媒体模块可以调取目标视频,并对目标视频进行后期处理。
本申请实施例提供的方案可以通过畅连应用实现视频通话,并且可以通过畅连应用调用电子设备的相机应用,进而完成拍摄,从而可以实现同步远程合拍的效果。由于视频通话可以有利于多个用户之间沟通、交流,因此有利于提升多个用户合拍的匹配度。在挂断视频通话后可以获得合拍效果相对较好的合拍图片或合拍视频,有利于减少用户在合拍过程中的工作量,例如后期修图工作量等。在通话质量相对较好的情况下(例如电子设备信号较好的情况下),合拍图像或合拍视频的清晰度可以相对较高。
结合本申请实施例提供的方案,所属领域的技术人员还可以想到其他可能的方案。例如,用户可以通过如图3所示的相机应用,跳转并进入如图15所示的畅连应用,并通过在如图15至图28所示的用户界面上的操作,完成视频通话和远程合拍;又如,用户可以通过如图15所示的畅连应用,跳转并进入如图3所示的相机应用,并通过在如图3至图13所示的用户界面上的操作,完成视频通话和远程合拍。
下面结合图3至图28所示的示例,阐述几种可能的合拍场景。
场景一
用户A与用户B处于异地且很难立刻见面。用户A与用户B打算针对一段肢体动作(如舞蹈、手指动作、体操等)合拍一个视频。用户A与用户B对于合拍视频的要求例如可以包括,肢体动作的快慢速度大体一致;用户A与用户B在舞动该肢体动作时,可以大致同时开始或大致同时结束;在尺寸或景深等方面,用户A在合拍视频中的图像与用户B在合拍视频中的图像可以大体一致。
结合图3至图28所示的示例,用户A可以通过电子设备A邀请用户B视频通话以及远程合拍。用户B可以通过电子设备B接通用户A发起的视频通话以及远程合拍邀请。在正式合拍之前,用户A与用户B可以通过视频通话连接沟通合拍具体细节。
例如,用户A和用户B可以沟通舞动动作。
又如,用户A和/或用户B可以调整与电子设备的摄像头之间的距离。
又如,用户A和/或用户B可以通过用户界面的美颜开关控件、滤镜开关控件、美化开关控件中的一个或多个,调整用户A和/或用户B在用户界面上的显示效果。
又如,用户A和/或用户B可以作用在电子设备的用户界面上,以调整用户A和/或用户B在用户界面上的显示比例。
又如,用户A和用户B可以通过沟通确认是否开启分屏开关控件。如果分屏开关控件处于开启的状态,用户A与用户B还可以沟通用户A所在的界面区域A和用户B所在的界面区域B的相对位置,如将界面区域A和界面区域B分别设置在用户界面的上方和下方,或者将界面区域A和界面区域B分别设置在用户界面的左方和右方。如果分屏开关控件处于关闭的状态,用户A与用户B还可以沟通确认界面区域A可覆盖界面区域B,或界面区域B可覆盖界面区域A。
又如,用户A和用户B可以通过沟通确认是否开启背景去除开关控件。如果背景去除开关控件处于开启的状态,用户A与用户B还可以沟通合拍视频采用的具体背景来源,例如与用户A所在场景对应的背景图像、与用户B所在场景对应的背景图像、从图库应用中调取的图库图像。
之后,用户A可以对电子设备A上的录制控件操作,或者,用户B可以对电子设备B上的录制控件操作,以开始录制合拍视频。然后,用户A与用户B可以根据先前沟通好的方式完成肢体动作。在动作拍摄结束后,用户A可以对电子设备A上的录制控件操作,或者,用户B可以对电子设备B上的录制控件操作,以结束录制合拍视频。电子设备A和/或电子设备B可以存储合拍视频。用户A与用户B可以通过视频通话连接沟通该合拍视频,以确认是否需要拍摄新的合拍视频。如果需要,用户A与用户B可以参照如上所述的方式合拍新的视频。如果不需要,用户A与用户B可以选择挂断视频通话连接。
在开始拍摄合拍视频之前,用户可以通过视频通话连接沟通合拍细节。在视频合拍完成后,电子设备可以根据视频通话获得到的数据合成合拍视频。一方面有利于提升多个用户合拍的匹配度,减少用户对合拍视频的处理量;另一方面有利于提高合拍视频的清晰度。因此,有利于提升异地多用户合拍的用户体验感。
与场景一类似的其他场景例如可以包括,用户A与用户B在直播过程中合拍视频。例如,用户A与用户B可以视频通话,用户A或用户B可以直播该视频通话过程。用户A与用户B可以在直播(即视频通话)时沟通合拍视频的细节,并通过本申请实施例提供的方法完成合拍视频。
与场景一类似的其他场景例如可以包括,在疫情期间,用户A与用户B很难见面并准备合拍视频。用户A与用户B可以通过视频通话的方式沟通合拍视频的细节,并通过本申请实施例提供的方法完成合拍视频。如上所述,合拍视频的背景图像可以替换,使得即使用户A与用户B异地,也可以快速获得同一背景的合拍视频,让合拍视频的欣赏者可以认为或近似认为用户A与用户B处于同一场景。
与场景一类似的其他场景例如可以包括,用户A可以为健身学员,用户B可以为健身教练,用户A与用户B可以通过合拍视频以提高云健身的教学质量。用户A与用户B可以通过视频通话的方式沟通合拍过程中的肢体动作,并通过本申请实施例提供的方法完成合拍视频。之后,用户B(即健身教练)可以针对该合拍视频点评用户A(即健身学员)的健身动作是否规范。
场景二
用户C与用户D处于异地且难以立刻见面。用户C与用户D准备开始视频会议,并准备通过合拍视频的方式记录该视频会议。用户C与用户D对于合拍视频的要求例如可以包括,用户C与用户D可以处于同一会议地点;用户C与用户D交流自然等。
结合图3至图28所示的示例,用户C可以通过电子设备C邀请用户D视频通话以及远程合拍。用户D可以通过电子设备D接通用户C发起的视频通话以及远程合拍邀请。在正式合拍之前,用户C与用户D可以通过视频通话连接沟通合拍具体细节。
例如,用户C和用户D可以沟通合适的会议背景,以及用户C和/或用户D的姿态。会议背景例如可以为:与用户C所在场景对应的背景图像、与用户D所在场景对应的背景图像、从图库应用中调取的图库图像。
又如,用户C和/或用户D可以调整与电子设备的摄像头之间的距离。
又如,用户C和/或用户D可以通过用户界面的美颜开关控件、滤镜开关控件、美化开关控件中的一个或多个,调整用户C和/或用户D在用户界面上的显示效果。
又如,用户C和/或用户D可以作用在电子设备的用户界面上,以调整用户C和/或用户D在用户界面上的显示比例。
又如,用户C和用户D可以沟通是否开启分屏开关控件。如果分屏开关控件处于开启的状态,用户C与用户D还可以沟通用户C所在的界面区域C和用户D所在的界面区域D的相对位置,如将界面区域C和界面区域D分别设置在用户界面的上方和下方,或者将界面区域C和界面区域D分别设置在用户界面的左方和右方。如果分屏开关控件处于关闭的状态,用户C与用户D还可以沟通确认界面区域C可覆盖界面区域D,或界面区域D可覆盖界面区域C。
之后,用户C可以对电子设备C上的录制控件操作,或者,用户D可以对电子设备D上的录制控件操作,以开始录制合拍视频。然后,用户C与用户D可以进行会议交流。在会议结束后,用户C可以对电子设备C上的录制控件操作,或者,用户D可以对电子设备D上的录制控件操作,以结束录制合拍视频。电子设备C和/或电子设备D可以存储合拍视频。该合拍视频可以是一份视频类型的会议纪要。用户C与用户D可以通过视频通话连接沟通该合拍视频,以确认是否需要记录新一轮的会议纪要。如果需要,用户C与用户D可以参照如上所述的方式合拍新的视频。如果不需要,用户C与用户D可以选择挂断视频通话连接。
在开始拍摄合拍视频之前,用户可以通过视频通话连接沟通合拍细节。在视频合拍完成后,电子设备可以根据视频通话获得到的数据合成合拍视频。一方面有利于使异地视频会议趋近于同地见面会议;另一方面有利于提高合拍视频的清晰度。因此,有利于提升异地多用户合拍的用户体验感。
与场景二类似的其他场景例如可以包括,用户C为教师,用户D为学生,用户C可以在线上提供线上教程,用户D可以通过线上教程获取知识。通过本申请实施例提供的合拍方法可以获取教学视频,进而有利于提高线上课程的教学质量。例如,在开始合拍之前,用户C与用户D可以通过视频通话的方式沟通合拍视频的具体细节,如用户C、用户D的姿态,合拍视频的背景图像可以对应教室场景,用户C展示的幻灯片可以显示在教室场景中的黑板、白板或投影幕布上等。在开始合拍视频后用户C与用户D可以开始教学,并通过本申请实施例提供的方法得到合拍视频。用户C可以将该合拍视频作为教学视频范本继续使用。用户D可以反复观看该合拍视频,以复习知识。
本申请实施例还提供了一种合拍方法2900,该方法2900可以在如图1、图2所示的电子设备(例如手机、平板电脑等)中实现。如图29所示,该方法2900可以包括以下步骤:
2901,第一电子设备建立所述第一电子设备与第二电子设备的视频通话连接,所述第一电子设备为第一用户的电子设备,所述第二电子设备为第二用户的电子设备。
相应地,第二电子设备建立所述第一电子设备与第二电子设备的视频通话连接。
示例性的,如图3至图4所示,第一电子设备可以在拍摄应用中发起视频通话连接。
示例性的,如图15至图17所示,第一电子设备可以在视频通话应用中发起视频通话连接。
可选的,在所述第一电子设备建立所述第一电子设备与第二电子设备的视频通话连接之前,所述合拍方法还包括:所述第一电子设备显示拍摄应用的第一界面,所述第一界面包括合拍控件;所述第一电子设备响应作用于所述合拍控件的操作,显示第二界面,所述第二界面包括与多个用户一一对应的多个用户控件,所述多个用户包括所述第二用户;所述第一电子设备响应作用于所述第二用户的用户控件的操作,向所述第二电子设备发送合拍邀请,以建立所述视频通话连接。
示例性的,第二用户的用户控件例如可以是图4所示的控件410、控件411等。
可选的,在所述第一电子设备建立所述第一电子设备与第二电子设备的视频通话连接之前,所述合拍方法还包括:所述第一电子设备显示视频通话应用的第三界面,所述第三界面包括与多个用户一一对应的多个视频通话控件,所述多个用户包括所述第二用户;所述第一电子设备响应作用于所述第二用户的视频通话控件的操作,向所述第二电子设备发送视频通话邀请,以建立所述视频通话连接。
示例性的,第二用户的控件例如可以是图15所示的控件1410、控件1411、控件1412、控件1430、控件1440等。
2902,所述第一电子设备在视频通话过程中,获取所述第一用户的第一视频数据。
示例性的,如图5至图13所示,第一电子设备可以通过拍摄应用,获取视频通话过程中拍摄到的第一用户的视频。
示例性的,如图18至图27所示,第一电子设备可以通过视频通话应用,获取视频通话过程中拍摄到的第一用户的视频。
2903,所述第一电子设备通过所述视频通话连接,从所述第二电子设备获取所述第二用户的第二视频数据。
相应地,所述第二电子设备通过所述视频通话连接,向所述第一电子设备发送所述第二用户的第二视频数据。
示例性的,如图5至图13所示,第一电子设备可以通过拍摄应用,从第二电子设备获取,在视频通话过程中,第二电子设备拍摄到的第二用户的视频。
示例性的,如图18至图27所示,第一电子设备可以通过视频通话应用,从第二电子设备获取,在视频通话过程中,第二电子设备拍摄到的第二用户的视频。
2904,所述第一电子设备根据所述第一视频数据与所述第二视频数据,获取所述第一用户与所述第二用户的合拍文件。
示例性的,如图5至图13所示,第一电子设备可以通过拍摄应用,在视频通话过程中合成两个用户的视频。
示例性的,如图18至图27所示,第一电子设备可以通过视频通话应用,在视频通话过程中合成两个用户的视频。
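步骤2901至2904的先后关系可以概括为如下流程草图(Python伪代码;其中establish_video_call、capture_local_video等接口名均为说明而假设的名称,并非实际存在的API):

```python
def co_shoot(first_device, second_device_address):
    """方法2900的流程示意:建立视频通话 -> 本端拍摄 -> 获取对端视频 -> 合成合拍文件。"""
    call = first_device.establish_video_call(second_device_address)  # 步骤2901
    first_video = first_device.capture_local_video()                 # 步骤2902
    second_video = call.receive_remote_video()                       # 步骤2903
    return first_device.compose(first_video, second_video)           # 步骤2904:得到合拍文件
```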
可选的,所述合拍方法还包括:所述第一电子设备根据第一视频数据、第二视频数据,在第四界面显示第一界面区域、第二界面区域,所述第一界面区域包括第一用户图像,所述第二界面区域包括第二用户图像,所述第一用户图像包括与所述第一用户对应的像素点,所述第二用户图像包括与所述第二用户对应的像素点。
在正式合拍之前,第一电子设备可以显示第四界面。示例性的,第四界面例如可以是图5至图13、图18至图27所示的界面。
可选的,所述第四界面包括分屏开关控件、背景去除开关控件,在所述分屏开关控件处于开启状态,且所述背景去除开关控件处于开启状态的情况下,所述第一界面区域还包括第二背景图像或目标图库图像,和/或,所述第二界面区域还包括第一背景图像或目标图库图像,其中,所述第一背景图像包括与所述第一用户所在场景对应的像素点,所述第二背景图像包括与所述第二用户所在场景对应的像素点。
示例性的,所述第二界面区域还包括第一背景图像,第四界面例如可以是图7所示的界面。
示例性的,所述第一界面区域还包括第二背景图像,第四界面例如可以是图8所示的界面。
示例性的,所述第一界面区域还包括目标图库图像,且所述第二界面区域还包括目标图库图像,第四界面例如可以是图9所示的界面。
可选的,所述第四界面包括分屏开关控件、背景去除开关控件,在所述分屏开关控件处于关闭状态,且所述背景去除开关控件处于开启状态的情况下,所述第四界面包括背景界面区域,所述背景界面区域为所述第一界面区域、所述第二界面区域的背景,所述背景界面区域包括以下任一项:第一背景图像、第二背景图像、目标图库图像,其中,所述第一背景图像包括与所述第一用户所在场景对应的像素点,所述第二背景图像包括与所述第二用户所在场景对应的像素点。
示例性的,所述背景界面区域包括第一背景图像,第四界面例如可以是图11所示的界面。
示例性的,所述背景界面区域包括第二背景图像,第四界面例如可以是图12所示的界面。
示例性的,所述背景界面区域包括目标图库图像,第四界面例如可以是图13所示的界面。
可选的,所述合拍方法还包括:所述第一电子设备响应作用于所述第四界面的操作,调整所述第一界面区域和/或所述第二界面区域的尺寸。
可选的,所述合拍方法还包括:所述第一电子设备响应作用于所述第四界面的操作,调整所述第一界面区域或所述第二界面区域的显示优先级。
示例性的,如图10中的1040所示,以及如图11至图13所示,第二界面区域的优先级可以高于第一界面区域的优先级,第二界面区域可以覆盖在第一界面区域上。
可选的,所述第四界面还包括录制控件,所述根据所述第一视频数据与所述第二视频数据,获取所述第一用户与所述第二用户的合拍文件,包括:所述第一电子设备响应作用于所述录制控件的操作,根据所述第一视频数据与所述第二视频数据,获取所述合拍文件。
示例性的,如图5中的510所示,第一电子设备可以在拍摄应用对录制控件操作。
示例性的,如图19中的1810所示,第一电子设备可以在视频通话应用中对录制控件操作。
可选的,所述合拍文件包括第一图像区域、第二图像区域,所述第一图像区域包括与第一用户对应的像素点,所述第二图像区域包括与第二用户对应的像素点。
可选的,所述第一图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
可选的,所述第二图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
可选的,所述合拍文件还包括背景图像区域,所述背景图像区域为所述第一图像区域、所述第二图像区域的背景,所述背景图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
合拍文件的示例可以参照第四界面的示例,在此不再详细赘述。
可选的,所述合拍文件的分辨率高于所述第一电子设备的显示分辨率。
可选的,所述合拍文件为合拍图像或合拍视频。
可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图30示出了上述实施例中涉及的电子设备3000的一种可能的组成示意图,如图30所示,该电子设备3000可以包括:处理单元3001、获取单元3002。
处理单元3001可以用于建立电子设备3000与第二电子设备的视频通话连接。
处理单元3001还可以用于在视频通话过程中,获取所述第一用户的第一视频数据。
获取单元3002可以通过所述视频通话连接,从所述第二电子设备获取所述第二用户的第二视频数据。
处理单元3001还可以用于根据所述第一视频数据与所述第二视频数据,获取所述第一用户与所述第二用户的合拍文件。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
在采用集成的单元的情况下,电子设备可以包括处理模块、存储模块和通信模块。其中,处理模块可以用于对电子设备的动作进行控制管理,例如,可以用于支持电子设备执行上述各个单元执行的步骤。存储模块可以用于支持电子设备执行存储程序代码和数据等。通信模块,可以用于支持电子设备与其他设备的通信。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块可以是收发器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子设备交互的设备。
在一个实施例中,当处理模块为处理器,存储模块为存储器时,本实施例所涉及的电子设备可以为具有图1所示结构的设备。
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的合拍方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的合拍方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的合拍方法。
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (32)

  1. 一种合拍方法,其特征在于,包括:
    第一电子设备建立所述第一电子设备与第二电子设备的视频通话连接,所述第一电子设备为第一用户的电子设备,所述第二电子设备为第二用户的电子设备;
    所述第一电子设备在视频通话过程中,获取所述第一用户的第一视频数据;
    所述第一电子设备通过所述视频通话连接,从所述第二电子设备获取所述第二用户的第二视频数据;
    所述第一电子设备根据所述第一视频数据与所述第二视频数据,获取所述第一用户与所述第二用户的合拍文件。
  2. 根据权利要求1所述的合拍方法,其特征在于,在所述第一电子设备建立所述第一电子设备与第二电子设备的视频通话连接之前,所述合拍方法还包括:
    所述第一电子设备显示拍摄应用的第一界面,所述第一界面包括合拍控件;
    所述第一电子设备响应作用于所述合拍控件的操作,显示第二界面,所述第二界面包括与多个用户一一对应的多个用户控件,所述多个用户包括所述第二用户;
    所述第一电子设备响应作用于所述第二用户的用户控件的操作,向所述第二电子设备发送合拍邀请,以建立所述视频通话连接。
  3. 根据权利要求1所述的合拍方法,其特征在于,在所述第一电子设备建立所述第一电子设备与第二电子设备的视频通话连接之前,所述合拍方法还包括:
    所述第一电子设备显示视频通话应用的第三界面,所述第三界面包括与多个用户一一对应的多个视频通话控件,所述多个用户包括所述第二用户;
    所述第一电子设备响应作用于所述第二用户的视频通话控件的操作,向所述第二电子设备发送视频通话邀请,以建立所述视频通话连接。
  4. 根据权利要求1至3中任一项所述的合拍方法,其特征在于,所述合拍方法还包括:
    所述第一电子设备根据第一视频数据、第二视频数据,在第四界面显示第一界面区域、第二界面区域,所述第一界面区域包括第一用户图像,所述第二界面区域包括第二用户图像,所述第一用户图像包括与所述第一用户对应的像素点,所述第二用户图像包括与所述第二用户对应的像素点。
  5. 根据权利要求4所述的合拍方法,其特征在于,所述第四界面包括分屏开关控件、背景去除开关控件,在所述分屏开关控件处于开启状态,且所述背景去除开关控件处于开启状态的情况下,
    所述第一界面区域还包括第二背景图像或目标图库图像,和/或,
    所述第二界面区域还包括第一背景图像或目标图库图像,
    其中,所述第一背景图像包括与所述第一用户所在场景对应的像素点,所述第二背景图像包括与所述第二用户所在场景对应的像素点。
  6. 根据权利要求4所述的合拍方法,其特征在于,所述第四界面包括分屏开关控件、背景去除开关控件,在所述分屏开关控件处于关闭状态,且所述背景去除开关控件处于开启状态的情况下,所述第四界面包括背景界面区域,所述背景界面区域为所述第一界面区域、所述第二界面区域的背景,所述背景界面区域包括以下任一项:第一背景图像、第二背景图像、目标图库图像,其中,所述第一背景图像包括与所述第一用户所在场景对应的像素点,所述第二背景图像包括与所述第二用户所在场景对应的像素点。
  7. 根据权利要求4至6中任一项所述的合拍方法,其特征在于,所述合拍方法还包括:
    所述第一电子设备响应作用于所述第四界面的操作,调整所述第一界面区域和/或所述第二界面区域的尺寸。
  8. 根据权利要求4至7中任一项所述的合拍方法,其特征在于,所述合拍方法还包括:
    所述第一电子设备响应作用于所述第四界面的操作,调整所述第一界面区域或所述第二界面区域的显示优先级。
  9. 根据权利要求4至8中任一项所述的合拍方法,其特征在于,所述第四界面还包括录制控件,所述根据所述第一视频数据与所述第二视频数据,获取所述第一用户与所述第二用户的合拍文件,包括:
    所述第一电子设备响应作用于所述录制控件的操作,根据所述第一视频数据与所述第二视频数据,获取所述合拍文件。
  10. 根据权利要求1至9中任一项所述的合拍方法,其特征在于,所述合拍文件包括第一图像区域、第二图像区域,所述第一图像区域包括与第一用户对应的像素点,所述第二图像区域包括与第二用户对应的像素点。
  11. 根据权利要求10所述的合拍方法,其特征在于,所述第一图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
  12. 根据权利要求10或11所述的合拍方法,其特征在于,所述第二图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
  13. 根据权利要求10所述的合拍方法,其特征在于,所述合拍文件还包括背景图像区域,所述背景图像区域为所述第一图像区域、所述第二图像区域的背景,所述背景图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
  14. 根据权利要求1至13中任一项所述的合拍方法,其特征在于,所述合拍文件的分辨率高于所述第一电子设备的显示分辨率。
  15. 根据权利要求1至14中任一项所述的合拍方法,其特征在于,所述合拍文件为合拍图像或合拍视频。
  16. 一种电子设备,其特征在于,包括:
    处理器、存储器和收发器,所述存储器用于存储计算机程序,所述处理器用于执行所述存储器中存储的计算机程序;其中,
    所述处理器用于,建立所述电子设备与第二电子设备的视频通话连接,所述电子设备为第一用户的电子设备,所述第二电子设备为第二用户的电子设备;
    所述处理器还用于,在视频通话过程中,获取所述第一用户的第一视频数据;
    所述收发器用于,通过所述视频通话连接,从所述第二电子设备获取所述第二用户的第二视频数据;
    所述处理器还用于,根据所述第一视频数据与所述第二视频数据,获取所述第一用户与所述第二用户的合拍文件。
  17. 根据权利要求16所述的电子设备,其特征在于,在所述处理器建立所述电子设备与第二电子设备的视频通话连接之前,所述处理器还用于:
    显示拍摄应用的第一界面,所述第一界面包括合拍控件;
    响应作用于所述合拍控件的操作,显示第二界面,所述第二界面包括与多个用户一一对应的多个用户控件,所述多个用户包括所述第二用户;
    响应作用于所述第二用户的用户控件的操作,向所述第二电子设备发送合拍邀请,以建立所述视频通话连接。
  18. 根据权利要求16所述的电子设备,其特征在于,在所述处理器建立所述电子设备与第二电子设备的视频通话连接之前,所述处理器还用于:
    显示视频通话应用的第三界面,所述第三界面包括与多个用户一一对应的多个视频通话控件,所述多个用户包括所述第二用户;
    响应作用于所述第二用户的视频通话控件的操作,向所述第二电子设备发送视频通话邀请,以建立所述视频通话连接。
  19. 根据权利要求16至18中任一项所述的电子设备,其特征在于,所述处理器还用于:
    根据第一视频数据、第二视频数据,在第四界面显示第一界面区域、第二界面区域,所述第一界面区域包括第一用户图像,所述第二界面区域包括第二用户图像,所述第一用户图像包括与所述第一用户对应的像素点,所述第二用户图像包括与所述第二用户对应的像素点。
  20. 根据权利要求19所述的电子设备,其特征在于,所述第四界面包括分屏开关控件、背景去除开关控件,在所述分屏开关控件处于开启状态,且所述背景去除开关控件处于开启状态的情况下,
    所述第一界面区域还包括第二背景图像或目标图库图像,和/或,
    所述第二界面区域还包括第一背景图像或目标图库图像,
    其中,所述第一背景图像包括与所述第一用户所在场景对应的像素点,所述第二背景图像包括与所述第二用户所在场景对应的像素点。
  21. 根据权利要求19所述的电子设备,其特征在于,所述第四界面包括分屏开关控件、背景去除开关控件,在所述分屏开关控件处于关闭状态,且所述背景去除开关控件处于开启状态的情况下,所述第四界面包括背景界面区域,所述背景界面区域为所述第一界面区域、所述第二界面区域的背景,所述背景界面区域包括以下任一项:第一背景图像、第二背景图像、目标图库图像,
    其中,所述第一背景图像包括与所述第一用户所在场景对应的像素点,所述第二背景图像包括与所述第二用户所在场景对应的像素点。
  22. 根据权利要求19至21中任一项所述的电子设备,其特征在于,所述处理器还用于:
    响应作用于所述第四界面的操作,调整所述第一界面区域和/或所述第二界面区域的尺寸。
  23. 根据权利要求19至22中任一项所述的电子设备,其特征在于,所述处理器还用于:
    响应作用于所述第四界面的操作,调整所述第一界面区域或所述第二界面区域的显示优先级。
  24. 根据权利要求19至23中任一项所述的电子设备,其特征在于,所述第四界面还包括录制控件,所述处理器具体用于:
    响应作用于所述录制控件的操作,根据所述第一视频数据与所述第二视频数据,获取所述合拍文件。
  25. 根据权利要求16至24中任一项所述的电子设备,其特征在于,所述合拍文件包括第一图像区域、第二图像区域,所述第一图像区域包括与第一用户对应的像素点,所述第二图像区域包括与第二用户对应的像素点。
  26. 根据权利要求25所述的电子设备,其特征在于,所述第一图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
  27. 根据权利要求25或26所述的电子设备,其特征在于,所述第二图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
  28. 根据权利要求25所述的电子设备,其特征在于,所述合拍文件还包括背景图像区域,所述背景图像区域为所述第一图像区域、所述第二图像区域的背景,所述背景图像区域包括与以下任一项对应的像素点:第一背景图像、第二背景图像、目标图库图像。
  29. 根据权利要求16至28中任一项所述的电子设备,其特征在于,所述合拍文件的分辨率高于所述电子设备的显示分辨率。
  30. 根据权利要求16至29中任一项所述的电子设备,其特征在于,所述合拍文件为合拍图像或合拍视频。
  31. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1至15中任一项所述的合拍方法。
  32. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1至15中任一项所述的合拍方法。
PCT/CN2022/072235 2021-02-09 2022-01-17 合拍方法和电子设备 WO2022170918A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/264,875 US20240056677A1 (en) 2021-02-09 2022-01-17 Co-photographing method and electronic device
EP22752080.6A EP4270300A4 (en) 2021-02-09 2022-01-17 MULTI-PERSON CAPTURE METHOD AND ELECTRONIC DEVICE

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110181051 2021-02-09
CN202110181051.9 2021-02-09
CN202110528138.9 2021-05-14
CN202110528138.9A CN114943662A (zh) 2021-02-09 2021-05-14 合拍方法和电子设备

Publications (1)

Publication Number Publication Date
WO2022170918A1 true WO2022170918A1 (zh) 2022-08-18

Family

ID=82838252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072235 WO2022170918A1 (zh) 2021-02-09 2022-01-17 合拍方法和电子设备

Country Status (3)

Country Link
US (1) US20240056677A1 (zh)
EP (1) EP4270300A4 (zh)
WO (1) WO2022170918A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090064013A1 (en) * 2007-09-04 2009-03-05 Apple Inc. Opaque views for graphical user interfaces
CN110798621A (zh) * 2019-11-29 2020-02-14 维沃移动通信有限公司 一种图像处理方法及电子设备
CN111629151A (zh) * 2020-06-12 2020-09-04 北京字节跳动网络技术有限公司 视频合拍方法、装置、电子设备及计算机可读介质
CN111866434A (zh) * 2020-06-22 2020-10-30 阿里巴巴(中国)有限公司 一种视频合拍的方法、视频剪辑的方法、装置及电子设备
CN112004034A (zh) * 2020-09-04 2020-11-27 北京字节跳动网络技术有限公司 合拍方法、装置、电子设备及计算机可读存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10609332B1 (en) * 2018-12-21 2020-03-31 Microsoft Technology Licensing, Llc Video conferencing supporting a composite video stream

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4270300A4

Also Published As

Publication number Publication date
EP4270300A4 (en) 2024-07-17
EP4270300A1 (en) 2023-11-01
US20240056677A1 (en) 2024-02-15

Similar Documents

Publication Publication Date Title
US20220224968A1 (en) Screen Projection Method, Electronic Device, and System
WO2021233218A1 (zh) 投屏方法、投屏源端、投屏目的端、投屏系统及存储介质
WO2020093988A1 (zh) 一种图像处理方法及电子设备
US12020472B2 (en) Image processing method and image processing apparatus
CN113099146B (zh) 一种视频生成方法、装置及相关设备
WO2022007862A1 (zh) 图像处理方法、系统、电子设备及计算机可读存储介质
CN112527174B (zh) 一种信息处理方法及电子设备
CN114115769A (zh) 一种显示方法及电子设备
CN112527222A (zh) 一种信息处理方法及电子设备
WO2022001258A1 (zh) 多屏显示方法、装置、终端设备及存储介质
CN113965694A (zh) 录像方法、电子设备及计算机可读存储介质
CN113973189A (zh) 显示内容的切换方法、装置、终端及存储介质
WO2023001043A1 (zh) 一种显示内容方法、电子设备及系统
CN115689963A (zh) 一种图像处理方法及电子设备
WO2022252649A1 (zh) 一种视频的处理方法及电子设备
CN114827696B (zh) 一种跨设备的音视频数据同步播放的方法和电子设备
CN112269554B (zh) 显示系统及显示方法
WO2022170837A1 (zh) 处理视频的方法和装置
CN113747056A (zh) 拍照方法、装置及电子设备
CN116708696B (zh) 视频处理方法和电子设备
CN114911377A (zh) 显示方法和电子设备
CN114943662A (zh) 合拍方法和电子设备
WO2022170918A1 (zh) 合拍方法和电子设备
EP4116811A1 (en) Display method and electronic device
CN115484390A (zh) 一种拍摄视频的方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22752080

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18264875

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2022752080

Country of ref document: EP

Effective date: 20230727

NENP Non-entry into the national phase

Ref country code: DE