WO2022105803A1 - Camera calling method, system and electronic device - Google Patents

Camera calling method, system and electronic device

Info

Publication number
WO2022105803A1
WO2022105803A1 (PCT/CN2021/131243, CN2021131243W)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
application
interface
image
electronic device
Prior art date
Application number
PCT/CN2021/131243
Other languages
English (en)
French (fr)
Inventor
刘畅 (Liu Chang)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21893950.2A (published as EP4231614A4)
Priority to US18/037,719 (published as US20230403458A1)
Publication of WO2022105803A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/52 Details of telephonic subscriber devices including functional features of a camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Definitions

  • the embodiments of the present application relate to the field of terminal devices, and in particular, to a camera calling method, system, and electronic device.
  • the camera function of the electronic device (including the calling of the front camera and/or the rear camera) adopts an exclusive mode, and the camera can only be called by a single application.
  • the present application provides a camera calling method, system and electronic device.
  • the camera of the electronic device can be called by multiple applications, so that the multiple applications can share the camera, thereby improving the diversity of application scenarios of the electronic device and the user experience.
  • an embodiment of the present application provides a camera calling method.
  • the method includes: the electronic device displays a first preview interface of a first application, and the first preview interface displays an image collected by a first camera at a first moment.
  • in response to the received first operation, the electronic device displays the first interface at the second moment, wherein the first moment and the second moment are different moments, and the first interface includes the second preview interface of the first application and the third preview interface of the second application.
  • specifically, the second preview interface and the third preview interface display the images collected by the first camera at the second moment.
  • the present application provides a camera sharing method, in which multiple applications in an electronic device can share the camera, that is, call the camera at the same time and display the images collected by the camera in real time, so as to increase the diversity of application scenarios of the electronic device and improve the user experience.
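  The sharing model in this aspect (one camera, several simultaneous callers, each preview updated in real time) can be sketched as a simple frame fan-out. The `SharedCamera` class and every name below are illustrative assumptions for exposition, not part of the application:

```python
# Minimal sketch: one capture source delivers each collected frame to every
# application that has opened the camera, instead of the exclusive mode in
# which opening the camera evicts all other callers.
class SharedCamera:
    def __init__(self):
        self._previews = {}  # application name -> frames shown in its preview

    def open(self, app_name):
        # Opening does not close other callers' sessions.
        self._previews.setdefault(app_name, [])

    def deliver(self, frame):
        # The same frame, collected at one moment, reaches every caller.
        for frames in self._previews.values():
            frames.append(frame)

cam = SharedCamera()
cam.open("first_app")
cam.open("second_app")
cam.deliver("frame_at_t2")
```

  After `deliver`, both applications' preview lists contain the frame collected at the same moment, mirroring the second and third preview interfaces described above.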
  • the second preview interface includes a first recording duration, where the first recording duration is used to indicate the recording duration of the first application from the moment when the recording starts.
  • the first application can display the recording duration in real time, to indicate that the current shooting is still in progress and has lasted for the time indicated by the recording duration.
  • before the electronic device displays the first preview interface of the first application, the method further includes: the electronic device acquires the first configuration information of the first application; the electronic device configures the first session, and saves the association relationship among the first session, the first application, and the first configuration information; and before the electronic device displays the first interface at the second moment, the method further includes: the electronic device acquires the second configuration information of the second application; the electronic device configures the second session, and saves the association relationship among the second session, the second application, and the second configuration information.
  • based on the first configuration information of the first application and the second configuration information of the second application, the electronic device can establish the association relationship between the first application and the first configuration information and between the second application and the second configuration information, so that in subsequent processing, the images corresponding to the first application and the second application can be generated based on the established association relationships.
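  The saved association relationship can be pictured as a small registry keyed by session, with each entry holding the application and its configuration information. The structure below is a hypothetical sketch, not the claimed implementation:

```python
# Hypothetical session registry: each session is saved together with the
# application it belongs to and that application's configuration information.
sessions = {}

def configure_session(session_id, app, config):
    sessions[session_id] = {"app": app, "config": config}

# First application requests 1080p at 1x zoom; second requests 720p at 2x.
configure_session("session_1", "first_app", {"resolution": (1920, 1080), "zoom": 1.0})
configure_session("session_2", "second_app", {"resolution": (1280, 720), "zoom": 2.0})
```

  A lookup by session then recovers both the application and the configuration needed to build that application's sub-image.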
  • the electronic device displays the first interface at the second moment, including: the electronic device acquires the image collected by the first camera at the second moment; based on the first session, the electronic device determines the first application and the first configuration information associated with the first session, and based on the second session, determines the second application and the second configuration information associated with the second session; the electronic device obtains the first sub-image according to the image collected by the first camera at the second moment and the first configuration information, and obtains the second sub-image according to the image collected by the first camera at the second moment and the second configuration information; the electronic device displays the first sub-image on the second preview interface, and displays the second sub-image on the third preview interface.
  • the electronic device can generate images that meet different requirements of the application based on the established association relationship, and display the generated images in the corresponding application interface.
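  One plausible way to derive per-application sub-images from a single shared frame is a center crop determined by the zoom factor, followed by scaling to the requested resolution. The arithmetic below is an assumption for illustration only; the application does not specify this computation:

```python
def sub_image_geometry(frame_w, frame_h, config):
    # A zoom factor z keeps the central 1/z of each dimension of the shared
    # frame; the crop is then scaled to the requested output resolution.
    z = config["zoom"]
    crop_w, crop_h = int(frame_w / z), int(frame_h / z)
    out_w, out_h = config["resolution"]
    return {"crop": (crop_w, crop_h), "output": (out_w, out_h)}

# One 4000x3000 frame collected at the second moment serves both applications.
first = sub_image_geometry(4000, 3000, {"zoom": 1.0, "resolution": (1920, 1080)})
second = sub_image_geometry(4000, 3000, {"zoom": 2.0, "resolution": (1280, 720)})
```

  Both sub-images come from the same frame; only the per-session configuration differs, which is why each application can have its own zoom factor and resolution while the camera is captured once.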
  • the first configuration information is used to indicate the first resolution
  • the second configuration information is used to indicate the second resolution
  • the first configuration information is used to indicate the first zoom factor
  • the second configuration information is used to indicate the second zoom factor; the first zoom factor and the second zoom factor may be the same or different.
  • the method further includes: in response to the received second operation, the electronic device acquires the third configuration information of the first application, where the third configuration information is used to indicate a third zoom factor, and the third zoom factor is different from the first zoom factor; the electronic device saves the association relationship among the first session, the first application, and the third configuration information; the electronic device acquires the image collected by the first camera at the third moment; based on the first session, the electronic device determines the first application and the third configuration information associated with the first session, and based on the second session, determines the second application and the second configuration information associated with the second session; the electronic device obtains the third sub-image according to the image collected by the first camera at the third moment and the third configuration information, and obtains the fourth sub-image according to the image collected by the first camera at the third moment and the second configuration information; the electronic device displays the third sub-image on the second preview interface, and displays the fourth sub-image on the third preview interface.
  • at least one of the multiple applications calling the camera at the same time can dynamically update its configuration information while calling the camera. Based on the updated configuration information, the electronic device can establish an association relationship between the application and the updated configuration information and generate, for that application, an image that satisfies the updated configuration information, while the other applications whose configuration information has not been updated are still processed according to their original configuration information.
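  The dynamic update described above can be sketched as replacing one session's saved configuration while leaving the other sessions untouched; all names here are hypothetical:

```python
# Two sessions are already associated with their applications' configurations.
sessions = {
    "session_1": {"app": "first_app", "config": {"zoom": 1.0}},
    "session_2": {"app": "second_app", "config": {"zoom": 2.0}},
}

def update_config(session_id, new_config):
    # Only the session whose application changed its settings is
    # re-associated; every other session keeps its original configuration.
    sessions[session_id]["config"] = new_config

# Second operation: the first application selects a new (third) zoom factor.
update_config("session_1", {"zoom": 3.0})
```

  Frames collected after the update are then processed with the third configuration for the first application and, unchanged, with the second configuration for the second application.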
  • the method further includes: in response to the received third operation, the electronic device displays the second interface at a fourth moment; the second interface includes the fourth preview interface of the first application and the fifth preview interface of the first application, wherein the fourth preview interface displays the image collected by the first camera at the fourth moment, and the fifth preview interface displays the image collected by the second camera at the fourth moment; the first camera is different from the second camera.
  • the first application can also start a clone application, or start a dual-scene shooting function; the first application and its clone application, or the two windows of the dual-scene shooting function of the first application, can call the front camera and/or the rear camera at the same time.
  • the method further includes: in response to the received fourth operation, the electronic device displays, on the second preview interface, the image collected by the first camera at a fifth moment, and displays, on the third preview interface, the image collected by the second camera at the fifth moment; the second camera is different from the first camera. In this way, in a scenario where multiple applications call the camera at the same time, at least one application can switch its camera.
  • the first camera is a first front camera, and the second camera is a first rear camera; or, the first camera is a first front camera, and the second camera is a second front camera, where the first front camera is different from the second front camera; or, the first camera is a first rear camera, and the second camera is a second rear camera, where the first rear camera is different from the second rear camera.
  • the embodiments of the present application can be applied to scenarios where multiple applications call the same front camera at the same time, call different front cameras at the same time, call the same rear camera at the same time, or call different rear cameras at the same time, and can also be applied to a scenario where multiple applications call both the front camera and the rear camera at the same time.
  • the third preview interface further includes a camera switching option, and the fourth operation is used to indicate an operation on the camera switching option.
  • the user can switch the camera called by the application by clicking the camera switching option, so as to increase the diversity of application scenarios.
  • a user can use the video calling application to call the front camera to make a video call with family members, and can also use the live broadcast application to call the rear camera for live broadcast.
  • the cameras called by the video calling application and the live broadcast application can also be switched at any time according to the user's instructions.
  • the first interface is a split-screen interface
  • one interface of the split-screen interface includes a second preview interface
  • another interface of the split-screen interface includes a third preview interface.
  • the second preview interface, and/or the third preview interface is a floating interface.
  • the camera invoking method in the embodiment of the present application can be applied to a multi-application floating interface scenario, and multiple applications can be included in different floating interfaces and display images captured by the camera in real time.
  • the electronic device displays the first interface at the second moment in response to the received first operation, including: the electronic device receives a first sub-operation, and displays a sidebar on the first interface, where the sidebar includes an application icon of the second application; the electronic device receives a second sub-operation on the application icon of the second application, and displays the first interface at the second moment.
  • the electronic device can provide a sidebar, so that the user can start the second application through the sidebar, so that multiple applications can call the camera at the same time.
  • the first application is a camera application
  • the second application is any one of the following: a video call application, a live broadcast application, and an application with a code scanning function.
  • an embodiment of the present application provides a camera calling system.
  • the system includes: a first electronic device and a second electronic device, where the first electronic device performs data interaction with the second electronic device through a first connection, and the first electronic device includes a first camera; the first electronic device is configured to: display a first preview interface of a first application, where the first preview interface displays a first interface sent by the second electronic device, and the first interface includes the image collected by the first camera at a first moment; the first electronic device is further configured to: in response to the received first operation, display a second interface at a second moment, where the first moment and the second moment are different moments; the second interface includes a second preview interface of the first application and a third preview interface of a second application; the second preview interface displays a third interface sent by the second electronic device, and the third interface and the third preview interface include the images collected by the first camera at the second moment.
  • in this way, the first electronic device can send the image corresponding to at least one application to the second electronic device to realize multi-device collaboration, and the second electronic device can also use the image collected by the camera of the first electronic device.
  • the second electronic device is configured to: receive the image that is sent by the first electronic device and collected by the first camera at the first moment; display, on the first interface, the image collected by the first camera at the first moment; and send the first interface to the first electronic device through the first connection. In this way, the second electronic device can use the image collected by the camera of the first electronic device, and the acquired image is displayed in the first interface.
  • the second electronic device is configured to: receive the image that is sent by the first electronic device and collected by the first camera at the first moment; generate the first interface including the image collected by the first camera at the first moment, where the second electronic device is in an off-screen state; and send the first interface to the first electronic device through the first connection.
  • the second electronic device can also use the image captured by the first electronic device, and cause the first electronic device to display the first interface generated by the second electronic device based on the acquired image.
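  The round trip between the two devices can be sketched as follows: the first device sends each collected frame over the connection, and the second device composes the first interface from it and returns the interface for display. The first connection is abstracted here as direct method calls, and all class and method names are illustrative, not from the application:

```python
class SecondDevice:
    # Receives frames from the peer's camera and composes the first
    # interface; it may itself be in an off-screen state while doing so.
    def compose_interface(self, frame):
        return f"interface({frame})"

class FirstDevice:
    # Owns the camera; forwards each frame over the connection and displays
    # whatever interface the second device sends back.
    def __init__(self, peer):
        self.peer = peer
        self.displayed = None

    def on_capture(self, frame):
        self.displayed = self.peer.compose_interface(frame)

first = FirstDevice(SecondDevice())
first.on_capture("frame_at_t1")
```

  The first device thus shows content that the second device generated from the first device's own camera image, which is the collaboration pattern the system aspect describes.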
  • the first application is a multi-screen collaborative application.
  • the first preview interface is a floating window.
  • the embodiments of the present application provide an electronic device.
  • the electronic device includes a memory and a processor, the memory and the processor are coupled, and the memory stores program instructions.
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: displaying a first preview interface of the first application, where the first preview interface displays the image collected by the first camera at the first moment; in response to the received first operation, displaying the first interface at the second moment, where the first moment and the second moment are different moments; the first interface includes a second preview interface of the first application and a third preview interface of the second application, wherein the second preview interface and the third preview interface display the images collected by the first camera at the second moment.
  • the second preview interface includes the first recording duration, where the first recording duration is used to indicate the recording duration of the first application from the moment when the recording starts.
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: acquiring the first configuration information of the first application; configuring the first session, and saving the association relationship among the first session, the first application, and the first configuration information; acquiring the second configuration information of the second application; configuring the second session, and saving the association relationship among the second session, the second application, and the second configuration information.
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: acquiring the image collected by the first camera at the second moment; based on the first session, determining the first application and the first configuration information associated with the first session, and based on the second session, determining the second application and the second configuration information associated with the second session; obtaining the first sub-image according to the image collected by the first camera at the second moment and the first configuration information, and obtaining the second sub-image according to the image collected by the first camera at the second moment and the second configuration information; displaying the first sub-image on the second preview interface, and displaying the second sub-image on the third preview interface.
  • the first configuration information is used to indicate the first resolution
  • the second configuration information is used to indicate the second resolution
  • the first configuration information is used to indicate the first zoom factor
  • the second configuration information is used to indicate the second zoom factor
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: in response to the received second operation, acquiring the third configuration information of the first application, where the third configuration information is used to indicate a third zoom factor, and the third zoom factor is different from the first zoom factor; saving the association relationship among the first session, the first application, and the third configuration information; based on the first session, determining the first application and the third configuration information associated with the first session, and based on the second session, determining the second application and the second configuration information associated with the second session; obtaining the third sub-image according to the image collected by the first camera at the third moment and the third configuration information, and obtaining the fourth sub-image according to the image collected by the first camera at the third moment and the second configuration information; displaying the third sub-image on the second preview interface, and displaying the fourth sub-image on the third preview interface.
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: in response to the received third operation, displaying the second interface at the fourth moment;
  • the second interface includes a fourth preview interface of the first application and a fifth preview interface of the first application; wherein the fourth preview interface displays the image collected by the first camera at the fourth moment, and the fifth preview interface displays the image collected by the second camera at the fourth moment; the first camera is different from the second camera.
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: in response to the received fourth operation, displaying, on the second preview interface, the image collected by the first camera at the fifth moment, and displaying, on the third preview interface, the image collected by the second camera at the fifth moment; the second camera is different from the first camera.
  • the first camera is a first front camera, and the second camera is a first rear camera; or, the first camera is a first front camera, and the second camera is a second front camera, where the first front camera is different from the second front camera; or, the first camera is a first rear camera, and the second camera is a second rear camera, where the first rear camera is different from the second rear camera.
  • the third preview interface further includes a camera switching option, and the fourth operation is used to indicate an operation on the camera switching option.
  • the first interface is a split-screen interface
  • one interface of the split-screen interface includes a second preview interface
  • another interface of the split-screen interface includes a third preview interface
  • the second preview interface, and/or the third preview interface is a floating interface.
  • when the program instructions are executed by the processor, the electronic device is caused to perform the following steps: receiving a first sub-operation, and displaying a sidebar on the first interface, where the sidebar includes an application icon of the second application; receiving a second sub-operation on the application icon of the second application, and displaying the first interface at the second moment.
  • the first application is a camera application
  • the second application is any one of the following: a video call application, a live broadcast application, and an application with a code scanning function.
  • the third aspect and any implementation manner of the third aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • for the technical effects corresponding to the third aspect and any implementation manner of the third aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which are not repeated here.
  • embodiments of the present application provide a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method in the first aspect or any possible implementation manner of the first aspect.
  • embodiments of the present application provide a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method in the second aspect or any possible implementation manner of the second aspect.
  • an embodiment of the present application provides a computer program, where the computer program includes instructions for executing the method in the first aspect or any possible implementation manner of the first aspect.
  • an embodiment of the present application provides a computer program, where the computer program includes instructions for executing the method in the second aspect or any possible implementation manner of the second aspect.
  • an embodiment of the present application provides a chip, where the chip includes a processing circuit and a transceiver pin.
  • the transceiver pin and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the first aspect or any possible implementation manner of the first aspect to control the receiving pin to receive a signal and control the sending pin to send a signal.
  • an embodiment of the present application provides a chip, where the chip includes a processing circuit and a transceiver pin.
  • the transceiver pin and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the second aspect or any possible implementation manner of the second aspect to control the receiving pin to receive a signal and to control the sending pin to send a signal.
  • Fig. 1 is one of the schematic diagrams of exemplary application scenarios;
  • Fig. 2 is one of the schematic diagrams of exemplary application scenarios;
  • FIG. 3 is one of the schematic structural diagrams of an exemplary electronic device;
  • FIG. 4 is one of the schematic structural diagrams of an exemplary electronic device;
  • FIG. 5 is a schematic diagram of the software structure of an exemplary electronic device;
  • FIG. 6 is a schematic structural diagram of an exemplary electronic device;
  • FIG. 7 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 8 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 9 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 10 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 11 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 12 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 13 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • FIG. 14 is one of schematic diagrams of module interaction provided by an embodiment of the present application.
  • Figures 15a-15l are schematic diagrams of exemplary application scenarios;
  • Figures 16a-16d are schematic diagrams of exemplary module interaction;
  • FIG. 17 is a schematic flowchart of a camera calling method provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of an apparatus provided by an embodiment of the present application.
  • the terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of the objects.
  • for example, the first target object, the second target object, etc. are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • words such as "exemplary" or "for example" are used to represent examples, illustrations, or descriptions. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the related concepts in a concrete manner.
  • "multiple processing units" refers to two or more processing units; "multiple systems" refers to two or more systems.
  • FIG. 1 is an exemplary schematic diagram of an application scenario.
  • a user uses multiple mobile phones (including mobile phone 1, mobile phone 2, mobile phone 3 and mobile phone 4) to log in to different live broadcast platforms respectively.
  • a live broadcast platform may also be referred to as a live broadcast application or a short video application.
  • the display windows of the live broadcast applications of mobile phones 1 to 4 display images collected by their respective cameras.
  • the angles and sizes of the users in the images displayed by the mobile phones 1 to 4 are also different.
  • since the camera in a mobile phone (including the front and rear cameras) can only be called by one application at a time, if the user needs to live broadcast on multiple live broadcast platforms, he can only use the method shown in Figure 1, that is, log in to different live broadcast platforms through multiple mobile phones.
  • FIG. 2 is a schematic diagram of an application scenario of applying a camera calling method provided in an embodiment of the present application.
  • a user can log in to multiple live broadcast platforms using a tablet.
  • the display window of the tablet includes a plurality of floating windows, each floating window is used to display the display interface of the corresponding live platform, and the image displayed on the display interface of each live platform is the image collected by the camera of the tablet.
  • the application that is calling the camera or the application clone is displayed in the foreground.
  • the windows of each application are floating windows, and the floating windows may not overlap at all or may partially overlap.
  • when the floating window of an application completely overlaps the floating window of another application, the application on top can still be considered to be displayed in the foreground, and can display the image captured by the camera in real time.
  • the blocked application can also suspend calling the camera, that is, its picture freezes; when some or all of the application's floating window is no longer blocked by other applications, the image captured by the camera in real time is displayed again.
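The pause-and-resume behaviour described above can be modelled with a short sketch: each floating window receives camera frames only while it is at least partially visible, and a fully covered window keeps its last ("frozen") frame. This is an illustrative Python model, not the patent's actual implementation; the names (`Window`, `fully_covered`, `dispatch_frame`) and the simplified corner-based coverage test are assumptions made for the example.

```python
# Illustrative model: one camera feed shared by several floating windows.
# A window fully covered by higher windows stops receiving frames and
# resumes once any part of it becomes visible again.
from dataclasses import dataclass

@dataclass
class Window:
    app: str
    x: int; y: int; w: int; h: int   # floating-window rectangle
    z: int                           # stacking order, higher = on top
    last_frame: int = -1             # last camera frame delivered

def fully_covered(win, others):
    """True if every corner of `win` lies under some higher window.

    A simplification: a real compositor tests exact region coverage."""
    corners = [(win.x, win.y), (win.x + win.w - 1, win.y),
               (win.x, win.y + win.h - 1), (win.x + win.w - 1, win.y + win.h - 1)]
    def under(pt, o):
        return o.x <= pt[0] < o.x + o.w and o.y <= pt[1] < o.y + o.h
    return all(any(o.z > win.z and under(c, o) for o in others) for c in corners)

def dispatch_frame(frame_id, windows):
    """Deliver the latest camera frame to every at-least-partially-visible
    window; fully covered windows keep their frozen last_frame."""
    for win in windows:
        others = [o for o in windows if o is not win]
        if not fully_covered(win, others):
            win.last_frame = frame_id
    return windows
```

For example, if window B completely covers window A, only B advances; moving B aside lets A resume receiving frames.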
  • FIG. 3 is a schematic structural diagram of the electronic device 100 .
  • the electronic device 100 may be a terminal, also referred to as a terminal device; the terminal may be a device with a camera, such as a cellular phone, a tablet computer (pad), a wearable device, or an Internet of Things device, which is not limited in this application.
  • the schematic structural diagram of the electronic device 100 can be applied to the mobile phone in FIG. 1 or the tablet in FIG. 2 .
  • the electronic device 100 may have more or less components than those shown in the figures, may combine two or more components, or may have different component configurations.
  • the various components shown in Figure 3 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the display screen 194 may display a shooting preview interface, a video recording preview interface, a live broadcast preview interface, a code scanning preview interface, etc., and may also display a video playback interface and the like during video playback.
  • in a preview interface (e.g., a shooting preview interface or a live preview interface), a user can view the images captured by the current camera in real time through the display screen 194.
  • before recording starts, the recording preview interface displayed on the display screen 194 displays the preview image captured by the current camera; after the camera application responds to the user operation and starts recording, the recording preview interface displays the recorded image captured by the current camera.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the camera photosensitive element, where the optical signal is converted into an electrical signal; the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
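As a concrete illustration of the YUV/RGB relationship mentioned above, the sketch below applies the standard BT.601 full-range conversion to a single pixel. The coefficients are the textbook values; a real ISP/DSP uses fixed-point, vendor-tuned hardware arithmetic, so this is only a reference sketch of the underlying formula.

```python
# Illustrative BT.601 full-range YUV -> RGB conversion for one pixel.
def yuv_to_rgb(y, u, v):
    """y in [0, 255]; u, v in [0, 255] with 128 as the zero-chroma point."""
    c, d, e = y, u - 128, v - 128
    r = c + 1.402 * e
    g = c - 0.344136 * d - 0.714136 * e
    b = c + 1.772 * d
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

A neutral-chroma pixel (u = v = 128) maps to a gray RGB value equal to its luma, e.g. `yuv_to_rgb(128, 128, 128)` gives `(128, 128, 128)`.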
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may be located in the edge area of the electronic device, may be an under-screen camera, or may be a camera that can be raised and lowered.
  • the camera 193 may include a rear camera, and may also include a front camera.
  • the embodiment of the present application does not limit the specific position and shape of the camera 193 .
  • the electronic device 100 may include cameras with one or more focal lengths, for example, cameras with different focal lengths may include a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a panoramic camera, and the like.
  • FIG. 4 is a schematic diagram showing the position of the camera 193 on the mobile phone when the electronic device 100 is a mobile phone.
  • a front camera is provided on the upper part of the display screen of the mobile phone (near the top edge area); there may be one or more front cameras.
  • the mobile phone includes two front cameras.
  • the layout of the cameras shown in FIG. 4( 1 ) (such as horizontal rows and intervals) is only a schematic example, which is not limited in this application.
  • as shown in Fig. 4(2), one or more rear cameras are provided on the back of the mobile phone (that is, the side opposite to the display screen). For example, the rear of the mobile phone in Fig. 4(2) includes 4 cameras; the 4 cameras can be regarded as one rear camera module, or as 4 separate cameras.
  • the four cameras may include but are not limited to: wide-angle cameras, ultra-wide-angle cameras, panoramic cameras, etc., which are not limited in this application.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes the camera calling method in the embodiment of the present application by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • Speaker 170A, also referred to as a "speaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be answered by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • when making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 5 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture of the electronic device 100 divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, which are, from top to bottom: the application layer, the application framework layer, the Android runtime and system library, the hardware abstraction layer (HAL), and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications in the application layer, including various components and services to support developers' Android development.
  • the application framework layer includes some predefined functions. As shown in FIG. 5 , the application framework layer may include a view system, a window manager, a resource manager, a content provider, a notification manager, a camera service, a multimedia management module, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • Data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • the camera service is used to call the camera (including the front camera and/or the rear camera) in response to the application's request.
  • the multimedia management module is used to process images based on the configuration of the camera service, and the specific processing process will be described in detail in the following embodiments.
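The cooperation between the camera service and the multimedia management module described above amounts to fanning one camera's frames out to several registered applications, each according to its own configuration. The sketch below is a hypothetical Python model of that idea; the class and method names (`CameraService`, `register`, `on_frame`) and the per-app resolution field are assumptions for illustration, not the Android framework's real API.

```python
# Hypothetical model: several applications share one camera; each captured
# frame is delivered once per registered application, tagged with that
# application's requested output configuration.
class CameraService:
    def __init__(self):
        self._subscribers = {}   # app name -> per-app config

    def register(self, app, config):
        """An application requests the camera; instead of exclusive access,
        it is added as one more consumer of the shared image stream."""
        self._subscribers[app] = config

    def unregister(self, app):
        self._subscribers.pop(app, None)

    def on_frame(self, frame):
        """Fan a captured frame out to every registered application,
        using the resolution that application asked for."""
        out = {}
        for app, cfg in self._subscribers.items():
            out[app] = {"app": app, "size": cfg["resolution"], "data": frame}
        return out
```

Under this model, registering a second live broadcast application simply adds a second consumer; neither application's call preempts the other.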
  • the system library and runtime layer includes the system library and the Android Runtime.
  • a system library can include multiple functional modules. For example: browser kernel, 3D graphics library (eg: OpenGL ES), font library, etc.
  • the browser kernel is responsible for interpreting the syntax of a web page (such as HTML and JavaScript under the standard generalized markup language) and rendering (displaying) the web page.
  • the 3D graphics library is used to implement 3D graphics drawing, image rendering, compositing and layer processing, etc.
  • the font library is used to implement the input of different fonts.
  • the Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • the components included in the system framework layer, system library and runtime layer shown in FIG. 5 do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the HAL layer is the interface layer between the operating system kernel and the hardware circuit.
  • the HAL layer includes but is not limited to: audio hardware abstraction layer (Audio HAL) and camera hardware abstraction layer (Camera HAL).
  • Audio HAL is used to process the audio stream, for example, performing noise reduction, directional enhancement, and other processing on the audio stream.
  • Camera HAL is used to process the image stream.
  • the kernel layer is the layer between the hardware and the aforementioned software layers.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the hardware may include devices such as a camera, a display screen, a microphone, a processor, and a memory.
  • the display screen in the hardware can display a shooting preview interface, a video preview interface and a shooting interface during video recording.
  • a camera in the hardware can be used to capture images.
  • Microphones in hardware can be used to collect sound signals and generate analog audio electrical signals.
  • the display interface of the mobile phone includes one or more controls, and the controls include but are not limited to: a battery icon, a network icon, application icons, etc.
  • the user can click the short video application A icon 601 on the display interface to start the short video application A.
  • the mobile phone starts the short video application A and displays the application interface of the short video application A, as shown in Figure 6(2). With reference to Figure 6(2), the application interface of the short video application A includes one or more controls, which may include, for example, a recording option 607, a camera switching option 609, and a zoom option 605.
  • the application interface includes a preview interface, and the preview interface displays an image (may be referred to as a preview image) captured by a camera (for example, a front-facing camera).
  • Short video application A starts recording (or live broadcast), and the image captured by the camera (also called recorded image) continues to be displayed in the preview interface.
  • short video application A can upload the image captured by the camera to the live broadcast platform server and/or store it locally.
  • the preview interface also includes a recording duration 603, which is the duration from the recording start time (that is, after the user clicks the recording option 607) to the current moment.
  • the duration is only a schematic example, which is not limited in this application. It should be noted that, in order to avoid repeated descriptions, the descriptions of zoom options, recording options, camera switching options, and recording durations for each electronic device below can all refer to the descriptions in FIG. 6 and will not be repeated hereinafter.
  • the camera switch option 609 is used to switch cameras.
  • the front camera is enabled by default, that is, the preview interface displays the image captured by the front camera. If the user clicks the camera switch option 609, the short video application A calls the rear camera, and the image captured by the rear camera is displayed on the preview interface.
  • the recording function of the short video application is used as an example for description. In other embodiments, the present application is also applicable to functions or applications such as video calls and live broadcasts.
  • the zoom option 605 is used to adjust the shooting focal length, so that the preview interface displays the image within the zoom range corresponding to the zoom factor.
  • the default initial zoom factor is 1x.
  • the sidebar 611 includes one or more controls, for example, an icon of the email application, an icon of the memo application, an icon of the gallery application, an icon of the file management application, an icon 611a of the short video application B, etc. Application icons can be added to the sidebar.
  • the names, numbers and positions of each application in FIG. 6(3) are only schematic examples, and are not limited in this application.
  • the user may click or drag the icon 611a of the short video application B in the sidebar to start the short video application B.
  • the user can drag the icon of the short video application B to the lower half area of the mobile phone display window and release it, and the mobile phone splits the display window of the mobile phone, in response to the user's operation behavior, into the display window 613 and the display window 615, as shown in FIG. 6(4). Referring to FIG. 6(4), exemplarily, the display window 613 of the mobile phone is used to display the application interface of the short video application A, and the display window 615 is used to display the application interface of the short video application B.
  • the application interface (also referred to as the preview interface) of short video application B includes one or more controls, such as recording options, camera switching options, zoom options, etc.
  • the preview interface of the short video application B displays an image collected by a camera (which may be a front-facing camera or a rear-facing camera, and in the embodiments of the present application, the front-facing camera is turned on by default as an example for description).
  • the user can click the recording option on the application interface of the short video application B, and the short video application B starts recording in response to the user's click operation, that is, the image captured by the front camera is displayed in the preview interface, and the short video Application B saves the image and/or transmits the image to the server.
  • the camera in the electronic device can only be called by one application. If the operation in Figure 6(3) occurs, the display screen of the previous application that called the camera will freeze. Take Figure 6(4) as an example.
  • the short video application B calls the camera, that is, acquires and displays the image captured by the camera, while the short video application A cannot call the camera, so the screen of the short video application A freezes; that is, the display window 613 only displays the image obtained by the short video application A for the last time before the short video application B starts (that is, calls the camera), and no longer displays the images subsequently captured by the camera.
  • after the user opens the short video application B, the short video application B calls the camera by default, and a prompt message that the camera cannot be called may also appear on the screen of the mobile phone.
  • to use the short video application A and the short video application B for recording at the same time, only the method shown in FIG. 1 can be used, that is, using multiple electronic devices to log in to different applications and perform recording.
  • An embodiment of the present application provides a method for invoking a camera.
  • the camera in the electronic device in the embodiment of the present application can be simultaneously invoked by one or more applications, and the one or more applications can acquire and display the images collected by the camera. That is to say, through the camera calling method in the embodiment of the present application, the short video application A and the short video application B in FIG. 6(4) can call the camera at the same time, and display the real-time image collected by the camera in the preview interface.
  • short video application A, short video application B and the camera application are used as examples for description. Similar scenarios, such as short video applications, the scan-code payment function in payment applications, video calls, video call functions in chat applications, and other applications sharing cameras, are not repeated in this application.
  • a possible application scenario is: short video application A calls the front camera of the mobile phone, and at the same time, short video application B calls the rear camera of the mobile phone.
  • Another possible application scenario is: the video calling application calls the front camera of the mobile phone to make a video call with other contacts, and at the same time, while keeping the video call uninterrupted, the payment application calls the rear camera of the mobile phone, and the payment application uses the scanning Code payment function to complete the shopping payment operation.
  • the process of calling the camera by an application can be divided into two parts.
  • the first part is the creation process, which can also be understood as the preparation process.
  • the creation process mainly consists of creating the corresponding instances of each module and exchanging control information between them.
  • the second part is the recording process, that is, the process in which each module or an instance in each module processes the image collected by the camera.
  • the "instance" described in the embodiments of the present application may also be understood as program code or process code running in a process.
  • referring to FIG. 7, the process specifically includes:
  • the short video application A invokes the camera service, and the camera service performs corresponding processing.
  • the short video application A calls the camera service; for example, the short video application A sends a request message to the camera service.
  • the request message may include, but is not limited to: the application ID of the short video application A (for example, the application package name), PID (Process Identification, process identification number), the configuration information of the short video application A (also called the requirement information, in order to distinguish it from the configuration information of the short video application B below, the configuration information of the short video application A is hereinafter referred to as configuration 1) and so on.
  • the configuration may include a resolution (eg, 1080*720) corresponding to the image displayed by the short video application A.
  • the request message may not include the application ID.
  • the request message includes configuration information.
  • the camera service can obtain, through the interface with the application layer, the application ID and PID of the application corresponding to the received request message.
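The request message described above can be sketched as a simple data structure. The Python below is an illustrative simplification: the field names and types are assumptions for the sketch, not the actual interface of the camera service.

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    # Requirement information of the calling application, e.g. the
    # resolution at which it wants images delivered.
    width: int
    height: int

@dataclass
class CameraRequest:
    app_id: str           # application identification, e.g. the package name
    pid: int              # process identification number
    config: CameraConfig  # "configuration 1" for short video application A

# Example: a request such as short video application A might send.
request = CameraRequest(app_id="com.example.shortvideo.a",
                        pid=1234,
                        config=CameraConfig(1080, 720))
```

As the text notes, the application ID may be omitted from the request itself, since the camera service can recover it (and the PID) through its interface with the application layer.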
  • FIG. 8 is a schematic diagram of the interaction between the camera service and the multimedia management module.
  • the steps for the camera service to perform corresponding processing include:
  • the camera service creates an instance of Camera Service (camera service), an instance of Camera Device Client (camera device client), an instance of Camera3Device (camera device, where the number 3 represents the version number of the camera service, which can be updated with the version), and an instance of Camera3Stream (camera data stream).
  • the camera service creates the above instances in response to the request of the short video application A.
  • the functions of each instance are described in detail below:
  • the Camera Service instance is used to provide an API interface for applications at the application layer, and create a corresponding session (Session) based on the request of the application (such as short video application A).
  • the Camera Service can receive a request input from the short video application A (including the application ID and configuration 1, etc.) based on the API interface, and the Camera Service can create a corresponding session based on the request of the short video application A (for example, the identification information of the session is Session1), and output the application ID, configuration 1 and session identification information (i.e., Session1) of the short video application A to the Camera Device Client instance.
  • the instance of Camera Device Client can be regarded as the client of the camera service, and is mainly used to provide the interface through which the camera service exchanges data with other modules.
  • the Camera Device Client instance saves the correspondence between the application ID and Session1, and outputs the application ID, configuration 1, and Session1 of the short video application A to the Camera3Device instance.
  • the Camera3Device instance provides an interface to the HAL layer and transparently transmits data (such as images).
  • the Camera3Device instance records the corresponding relationship between the pieces of information based on the application ID, configuration 1 and Session1 of the short video application A input by the Camera Device Client instance, outputs configuration 1 and Session1 to the multimedia management module, and outputs the application ID and Session1 to the Camera3Stream instance.
  • the Camera3Stream instance is used to process the image accordingly. Specifically, the Camera3Stream instance stores the application ID of the short video application A input by the Camera3Device instance in correspondence with Session1.
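The division of labor among the instances described above can be sketched as follows. This is a hypothetical simplification (the Camera Device Client hop is folded away, and the multimedia management module is represented by a plain dict), not the real camera-service implementation.

```python
import itertools

class Camera3Stream:
    """Keeps the session -> application ID correspondence used to route images."""
    def __init__(self):
        self.session_to_app = {}
    def register(self, session, app_id):
        self.session_to_app[session] = app_id

class Camera3Device:
    """Records the correspondences and fans information out: session + config
    go to the multimedia management module, session + app ID to Camera3Stream."""
    def __init__(self, stream, media_module):
        self.stream = stream
        self.media_module = media_module  # stand-in for the multimedia module
        self.records = {}
    def configure(self, app_id, config, session):
        self.records[session] = (app_id, config)
        self.media_module[session] = config
        self.stream.register(session, app_id)

class CameraService:
    """Provides the API for applications and creates one session per request."""
    def __init__(self, device):
        self.device = device
        self._ids = itertools.count(1)
    def connect(self, app_id, config):
        session = f"Session{next(self._ids)}"
        self.device.configure(app_id, config, session)
        return session

media_module = {}
stream = Camera3Stream()
service = CameraService(Camera3Device(stream, media_module))
session1 = service.connect("short_video_A", {"resolution": (1080, 720)})
```

After `connect`, each layer holds exactly the correspondence the text assigns to it, which is what later allows images to be routed back to the right application.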
  • the camera service outputs Session1 and Configuration 1 to the multimedia management module, and the multimedia management module performs corresponding processing.
  • the Camera3Device instance outputs configuration 1 and Session1 to the multimedia management module.
  • the multimedia management module exemplarily includes the following sub-modules:
  • the Media Center Mrg (multimedia management center) module (which may also be referred to as a sub-module or unit) is responsible for providing interfaces and interaction logic with external modules;
  • the Media Device Mrg (multimedia device management center) module is responsible for the storage, addition and deletion of configuration information, sessions and other information;
  • the Media Stream Mrg (multimedia data management center) module is responsible for data conversion, resolution adaptation and data distribution.
  • the Media Center Mrg module receives configuration 1 and Session1 input by the Camera3Device instance, and the Media Center Mrg module outputs Session1 and configuration 1 to the Media Device Mrg module.
  • the Media Device Mrg module records the correspondence between Session1 and Configuration 1.
  • the Media Stream Mrg module is mainly used to process images. The specific processing process will be explained in detail in the following recording process.
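The Media Device Mrg responsibility described above (storing, adding and deleting the correspondence between sessions and configuration information) can be sketched as a small store. The class and method names are illustrative assumptions, not the actual module interface.

```python
class MediaDeviceMrg:
    """Illustrative store for the session -> configuration correspondence."""
    def __init__(self):
        self._configs = {}
    def add(self, session, config):
        self._configs[session] = config    # e.g. record Session1 <-> configuration 1
    def get(self, session):
        return self._configs.get(session)  # None if the session is unknown
    def remove(self, session):
        self._configs.pop(session, None)   # deletion when an application disconnects

mgr = MediaDeviceMrg()
mgr.add("Session1", {"resolution": (1080, 720)})
```

During recording, the Media Stream Mrg module would query such a store by session to decide how each image should be processed.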
  • Camera Hal calls the camera driver in the kernel layer.
  • the camera service calls Camera Hal, and Camera Hal performs corresponding processing, such as establishing corresponding instances.
  • Camera Hal calls the camera driver, and the camera driver performs corresponding processing, such as establishing corresponding instances.
  • the front camera starts to capture images in response to the invocation of the camera driver.
  • each instance or module in Camera Hal and the camera driver performs corresponding processing on the data (such as images).
  • for the specific processing process, reference may be made to the technical solutions in the prior art embodiments, which are not repeated in this application.
  • the camera outputs the collected image to the camera driver.
  • the camera driver acquires the image captured by the front camera, and outputs the image of the front camera to Camera Hal.
  • Camera Hal outputs the image captured by the front camera to the camera service.
  • the camera service outputs the image and Session1 to the multimedia management module, and the multimedia management module performs corresponding processing, and outputs the processed image and Session1 to the camera service.
  • Figure 10 is a schematic diagram of the interaction between the camera service and the multimedia management module.
  • the Camera3Device instance receives the image input by Camera Hal, and the Camera3Device instance detects the currently stored session, that is, the currently stored Session1 and other information (including the application ID and configuration 1); the Camera3Device instance then outputs the image and Session1 to the multimedia management module.
  • the Media Stream Mrg module in the multimedia management module, based on the received image and Session1, obtains the configuration 1 corresponding to Session1 from the Media Device Mrg module; the Media Device Mrg module can process the image based on configuration 1, such as adjusting the resolution, and the Media Device Mrg module outputs the processed image and Session1 to the camera service.
  • when the camera service detects that only Session1 and its corresponding information currently exist, it may not interact with the multimedia management module, but directly execute S205, that is, output the image to the short video application A.
  • the multimedia management module may also use the application ID as the retrieval information to determine the corresponding configuration.
  • the camera service may output the application ID and configuration 1 to the multimedia management module; correspondingly, in S204, the multimedia management module may determine the corresponding configuration 1 based on the application ID, and process the image according to configuration 1.
  • the camera service can also output Session1, application ID and configuration 1 to the multimedia management module, and the multimedia management module can retrieve information based on at least one of Session1 and the application ID, and determine the corresponding configuration 1.
  • the association between the session and the configuration is taken as an example to illustrate.
  • the camera service outputs the image to the short video application A.
  • the Camera3Device instance receives the image (i.e., the processed image) and Session1 input by the multimedia management module, and the Camera3Device instance outputs the image and Session1 to the Camera3Stream instance.
  • the Camera3Stream instance outputs the image to the short video application A based on the recorded correspondence between Session1 and the application ID of the short video application A.
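The single-application delivery path just described (look up the configuration for the session, process the image, route it to the owning application) can be sketched as below; `adapt_resolution` is a hypothetical stand-in for the multimedia management module's image processing.

```python
def adapt_resolution(frame, resolution):
    # Stand-in for the multimedia module's processing, e.g. adjusting the
    # frame to the resolution recorded in the application's configuration.
    return {"pixels": frame["pixels"], "resolution": resolution}

def deliver(frame, session, configs, session_to_app):
    """Look up the configuration for the session, process the frame, then
    route it to the application that owns the session (as Camera3Stream does)."""
    processed = adapt_resolution(frame, configs[session])
    return session_to_app[session], processed

configs = {"Session1": (1080, 720)}                   # held by Media Device Mrg
session_to_app = {"Session1": "short_video_A"}        # held by Camera3Stream
raw = {"pixels": b"...", "resolution": (4000, 3000)}  # from the camera driver
app, image = deliver(raw, "Session1", configs, session_to_app)
```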
  • the short video application B invokes the camera service, and the camera service performs corresponding processing.
  • the short video application B calls the camera service; for example, the short video application B sends a request message to the camera service.
  • the request message may include, but is not limited to, the application ID of the short video application B, the configuration information of the short video application B (hereinafter referred to as configuration 2), and the like.
  • configuration 2 may include that the resolution corresponding to the image displayed by the short video application B is 1280*720.
  • FIG. 12 is a schematic diagram of the interaction between the camera service and the multimedia management module.
  • the steps for the camera service to perform corresponding processing include:
  • after the Camera Service instance receives the request from the short video application B, it outputs the application ID and configuration 2 of the short video application B to the Camera Device Client instance.
  • the Camera Device Client instance creates a corresponding session (the session identification information is Session2) in response to the application ID and configuration 2 of the short video application B input by the Camera Service instance.
  • the Camera Device Client instance outputs the application ID, configuration 2, and Session2 of the short video application B to the Camera3Device instance.
  • the Camera3Device instance saves the correspondence among the application ID of the short video application B, configuration 2 and Session2, outputs the application ID and Session2 of the short video application B to the Camera3Stream instance, and outputs Session2 and configuration 2 to the multimedia management module.
  • the camera service outputs Session2 and Configuration 2 to the multimedia management module, and the multimedia management module performs corresponding processing.
  • the Camera3Device instance outputs configuration 2 and Session2 to the multimedia management module.
  • the Media Center Mrg module receives configuration 2 and Session2 input by the Camera3Device instance, and the Media Center Mrg module outputs Session2 and configuration 2 to the Media Device Mrg module.
  • the Media Device Mrg module records the correspondence between Session2 and Configuration 2.
  • since the camera service detects that the camera has already been called, the camera service no longer performs the steps described in S103 to S105 in this creation process. For example, the camera service checks the established sessions (for example, a session list can be generated, recording the correspondence between each session and information such as the application ID and configuration) and determines that Session1 already exists; the camera service can thereby determine that an application corresponding to Session1 is currently calling the camera, so the camera service does not repeatedly call the lower-level modules (such as the Camera HAL module).
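The check described above (detecting an existing session and skipping the S103 to S105 creation steps) might look like the following. The session-list representation and names are assumptions for illustration only.

```python
class CameraServiceSketch:
    """Keeps a session list and only calls down into Camera HAL for the
    first application; later applications reuse the already-opened camera."""
    def __init__(self):
        self.sessions = {}   # session -> (application ID, configuration)
        self.hal_calls = 0   # how many times the lower-level modules were invoked
    def connect(self, session, app_id, config):
        if not self.sessions:    # no session yet: the camera is not in use
            self.hal_calls += 1  # run the S103-S105 creation steps once
        self.sessions[session] = (app_id, config)

svc = CameraServiceSketch()
svc.connect("Session1", "short_video_A", (1080, 720))
svc.connect("Session2", "short_video_B", (1280, 720))  # no second HAL call
```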
  • the short video application A is still in the recording process, that is, the short video application A acquires the image collected by the front camera through the camera driver, Camera Hal, the camera service and the multimedia management module, and displays the image.
  • the camera service obtains the image collected by the front camera, and the specific process can refer to S301-S303, which will not be repeated here.
  • the camera service interacts with the multimedia management module to obtain the two images (comprising image A and image B) generated by the multimedia management module; image A is output to the short video application A, and image B is output to the short video application B.
  • Short video application A can display image A on the recording interface
  • short video application B can display image B on the recording interface.
  • exemplarily, after receiving the image input by Camera Hal, the Camera3Device instance detects the currently stored sessions, that is, the currently stored Session1 and other information (including the application ID and configuration 1 of the short video application A), and Session2 and other information (including the application ID and configuration 2 of the short video application B).
  • the Camera3Device instance outputs the image, Session1 and Session2 to the multimedia management module.
  • the Media Stream Mrg module in the multimedia management module, based on the received image, Session1 and Session2, obtains from the Media Device Mrg module the configuration 1 corresponding to Session1 and the configuration 2 corresponding to Session2.
  • the Media Device Mrg module copies the image to get two images.
  • the Media Device Mrg module processes one of the images based on configuration 1, for example, adjusting the resolution of the image to 1080*720 to obtain image A.
  • the Media Device Mrg module processes another image based on configuration 2, for example, adjusting the resolution of the image to 1280*720 to obtain image B.
  • the Media Device Mrg module outputs Session1 and image A, as well as Session2 and image B, to the camera service.
  • the Camera3Device instance receives Session1 and image A, and Session2 and image B, input by the multimedia management module.
  • the Camera3Device instance correspondingly outputs Session1 and image A, and Session2 and image B, to the Camera3Stream instance.
  • the Camera3Stream instance outputs image A to the short video application A based on the recorded correspondence between Session1 and the application ID of the short video application A, and similarly outputs image B to the short video application B based on the recorded correspondence between Session2 and the application ID of the short video application B.
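The copy-process-distribute flow above (one camera frame copied per active session, each copy adapted to that session's configuration) can be sketched as follows; the function and field names are illustrative assumptions.

```python
import copy

def fan_out(frame, configs):
    """Copy the camera frame once per active session and adapt each copy to
    that session's configured resolution, pairing it with the session so the
    camera service can route it to the right application."""
    out = {}
    for session, resolution in configs.items():
        image = copy.deepcopy(frame)      # e.g. image A and image B start equal
        image["resolution"] = resolution  # 1080*720 for A, 1280*720 for B
        out[session] = image
    return out

configs = {"Session1": (1080, 720), "Session2": (1280, 720)}
frame = {"pixels": b"...", "resolution": (4000, 3000)}
images = fan_out(frame, configs)
```

The deep copy matters: each application receives an independent image, so processing one (for example, adjusting its resolution) does not disturb the other.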
  • ACK information may be fed back to inform the opposite end that information or data has been successfully received.
  • for example, the short video application A sends a call request to the camera service, and the camera service feeds back ACK information to the short video application A to indicate that the camera service has successfully received the request.
  • the schematic diagram of interaction of modules in the embodiments of the present application only shows the flow of data or information (mainly refers to request or control information), but does not show the flow of ACK information, which will not be repeated below.
  • the configuration information includes the resolution as an example for description.
  • the configuration information may also include other information, such as a zoom factor.
  • for example, if configuration 1 indicates that the zoom factor is 3x and configuration 2 indicates that the zoom factor is 5x, the multimedia module can perform zoom processing on the image based on the zoom factor indicated by each piece of configuration information, and output the zoomed image and the corresponding session to the camera service.
  • the configuration information may also be other such as beauty, image tone, etc., which is not limited in this application.
  • the configuration information in this embodiment of the present application may be initially set or dynamically changed.
  • for example, the initial configuration information of the short video application A and the short video application B both indicates that the zoom factor is 1x. If the configuration information of the short video application A is updated to indicate a 3x zoom factor, the multimedia management module updates the stored configuration information and processes the image accordingly based on the updated configuration information, that is, the image captured by the camera within the zoom range corresponding to the 1x zoom factor is subjected to 3x zoom processing, and is output to the short video application A through the camera service.
  • the configuration information of the short video application B remains unchanged, that is, the indicated zoom factor is still 1x, so the image displayed on the preview interface of the short video application B is still the image captured by the camera within the zoom range corresponding to the 1x zoom factor.
  • if the short video application A delivers the updated zoom factor (for example, 3x zoom), in an example, the camera service can change the configuration information of both the short video application A and the short video application B to the 3x zoom factor and output it to the multimedia management module, and the multimedia management module can update the configuration information corresponding to the two applications.
  • the camera service sends the zoom factor indicated by the short video application A to the camera driver through Camera Hal, and the camera driver can control the camera (such as the front camera) to zoom and capture images within the zoom range corresponding to the 3x zoom factor. The multimedia management module can then perform corresponding processing on the images within this zoom range collected by the camera. That is to say, in this embodiment, if the configuration information corresponding to one application changes while multiple applications call the camera at the same time, the configuration information of the other applications also changes accordingly.
  • the images displayed by the short video application B and the short video application A are transformed from the images at the 1x zoom factor to the corresponding images at the 3x zoom factor.
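In the embodiment just described, a zoom change delivered by one application is propagated to every application sharing the camera. A minimal sketch, with hypothetical names:

```python
def propagate_zoom(configs, new_zoom):
    """Apply the zoom factor delivered by one application to the stored
    configuration of every session currently sharing the camera."""
    for session in configs:
        configs[session]["zoom"] = new_zoom
    return configs

# Both applications start at a 1x zoom factor.
configs = {"Session1": {"zoom": 1.0}, "Session2": {"zoom": 1.0}}
# Short video application A delivers a 3x zoom factor.
propagate_zoom(configs, 3.0)
```

This contrasts with the earlier per-application embodiment, where only the configuration of the requesting application's session would be updated and the other sessions would keep their own zoom factors.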
  • scenario 1 is a process in which multiple applications (eg, short video application A and short video application B) in an electronic device (eg, a mobile phone) simultaneously call the camera.
  • the camera calling process in the multi-electronic-device collaboration scenario will be described in detail below with reference to FIGS. 15a to 15l.
  • the display interface of the mobile phone displays the home page of the mobile phone, and the home page includes one or more controls.
  • the specific description can refer to the above, which will not be repeated here.
  • the display interface of the tablet includes one or more controls, such as application icons, battery icons, and the like.
  • the user can slide down from the upper edge of the tablet, and the tablet displays a pull-down notification bar 1501 in the upper edge region of the display interface in response to the user's operation behavior.
  • the pull-down notification bar 1501 includes one or more controls, such as a time bar, Wi-Fi setting options, Bluetooth setting options, mobile data setting options, mute setting options, auto-rotation setting options, and multi-screen collaboration options 1501a and the like.
  • the user can click on the multi-screen collaboration option 1501a in the pull-down notification bar 1501, and the tablet displays a prompt box on the display interface in response to the user's operation behavior, as shown in Fig. 15b.
  • a prompt box 1503 is displayed on the right edge area of the display interface of the tablet, and the prompt box includes prompt information, which is used to indicate that the tablet is currently activating the multi-screen collaboration function.
  • the prompt box also includes a "Cancel" option 1503a and an "Agree” option 1503b. If the user clicks the "Cancel" option 1503a, the prompt box 1503 disappears.
  • the tablet scans for a nearby electronic device (such as a mobile phone). As shown in FIG. 15c, exemplarily, after the tablet scans and finds the mobile phone, prompt information is displayed in the prompt box 1503 of the display interface of the tablet; the prompt information is used to indicate that a mobile phone that can be used for multi-screen collaboration has been found, and that the tablet currently expects to establish multi-screen collaboration with the mobile phone.
  • the prompt box may also include other prompt information or options.
  • the user can click the “scan code to connect” option in the prompt box to connect.
  • the prompt box may further include a "Cancel" option 1503a, which is used to cancel the multi-screen collaboration function.
  • a prompt box 1505 is displayed on the display interface on the mobile phone side (for example, in the lower half area of the display interface), and the prompt box 1505 may include, but is not limited to: an icon 1505a of the device with which the multi-screen collaborative connection is to be established, a "Cancel" option 1505b and a "Connect" option 1505c.
  • the "Cancel” option 1505b is used to cancel the current connection establishment process, and cancel the display of the prompt box.
  • if the user clicks the "Connect" option 1505c, the mobile phone establishes a multi-screen collaborative connection with the tablet in response to the user's operation behavior.
  • for the connection establishment process, reference may be made to the specific embodiments of the multi-screen collaboration technology, which are not repeated in this application.
  • the multi-screen collaborative window 1507 is displayed on the display interface of the tablet (which can be any area on the tablet), and the multi-screen collaborative window 1507 displays the display interface of the mobile phone , that is to say, all controls and images included in the display interface of the mobile phone will be displayed on the multi-screen collaboration window in real time.
  • the mobile phone can send part or all of the display interface of the mobile phone to the tablet, and the tablet displays part or all of the display interface sent by the mobile phone in the multi-screen collaboration window 1507 .
  • the user clicks the icon 1507a of the short video application A displayed on the display interface of the mobile phone in the multi-screen collaboration window of the tablet; the tablet receives the user's operation and sends the user's operation (including the pressure value and position coordinates corresponding to the user's operation, etc.) to the mobile phone, so that the mobile phone can obtain the user's operation behavior on the multi-screen collaboration window of the tablet, and the mobile phone displays the application interface of the short video application A on the display interface of the mobile phone in response to the user's operation behavior.
  • the mobile phone sends the currently displayed interface, that is, the interface including the application interface of the short video application A, to the tablet.
  • the tablet also displays the current interface of the mobile phone, that is, the application interface of the short video application A, in the multi-screen collaboration window 1507 based on the interface sent by the mobile phone, as shown in Figure 15e(1).
  • the application interface of the short video application A (including the interface on the mobile phone side and the interface in the multi-screen collaboration window 1507 on the tablet side) calls the camera of the tablet by default (the front camera or the rear camera of the tablet; in the embodiments of this application, calling the front camera of the tablet by default is used as an example), that is, the application interface of the short video application A (including the interface on the mobile phone side and the interface in the multi-screen collaboration window 1507 on the tablet side) displays the image captured by the front-facing camera of the tablet.
  • the tablet sends the image captured by the front camera in real time to the mobile phone.
  • after the mobile phone processes the image accordingly, it sends the application interface of the short video application A, including the image captured by the front camera, to the tablet.
  • the tablet, in response to the received interface (that is, the application interface of the short video application A including the image captured by the front camera), displays the received interface in the multi-screen collaboration window 1507.
  • the user can slide down from the upper edge of the mobile phone to display the pull-down notification bar 1510 on the upper part of the display interface of the mobile phone.
  • the pull-down notification bar 1510 includes a menu bar and a notification bar
  • the notification bar includes the prompt information "connected to the Huawei tablet" to indicate the currently connected peer device of multi-screen collaboration.
  • the notification bar may include a "disconnect” option to indicate disconnection from the device.
  • the notification bar may also include an option of "record tablet screen", which is used to instruct recording of the operations and display interface in the window 1507 of the tablet.
  • the notification bar also includes the "Switch audio and video to mobile phone" option 1510a, which is used to instruct calling the camera of the mobile phone; that is, if the user clicks this option, the mobile phone calls its own camera in response to the user's operation, and all images displayed in the preview interface of the short video application A on the mobile phone and in the preview interface of the short video application A in the multi-screen collaboration window 1507 of the tablet are images collected by the camera of the mobile phone.
  • after the user clicks the "Switch audio and video to mobile phone" option 1510a, the content of this option becomes the "Switch audio and video to tablet" option; that is, after the user clicks this option again, the camera is switched back to the camera of the tablet.
  • the multi-screen collaborative application calls the front camera exemplarily.
  • the multi-screen collaborative application transmits the acquired image captured by the front camera to the mobile phone through the multi-screen collaborative connection, and the application interface of the short video application A of the mobile phone displays the image sent by the tablet side.
  • the display interface of the mobile phone displayed in the multi-screen collaboration window is synchronized with the mobile phone, as shown in Figure 15e.
  • the mobile phone can turn off the screen to reduce the power consumption of the mobile phone.
  • the user can click the icon of the short video application B displayed on the display interface of the tablet; the tablet, in response to the user's operation behavior, starts the short video application B and displays the application interface of the short video application B on the display interface, as shown in Figure 15f.
  • the application interface of the short video application B may partially overlap or not overlap with the multi-screen collaboration window, which is not limited in this application.
  • the left area of the tablet displays the multi-screen collaboration window 1507
  • the right area displays the application interface 1511 of the short video application B
  • the multi-screen collaboration window 1507 and the application interface 1511 of the short video application B do not overlap or partially overlap.
  • the application interface 1511 of the short video application B includes one or more controls, and the specific description can refer to the above, which will not be repeated here.
  • the multi-screen collaborative application and the short video application B jointly call the front camera.
  • the short video application B displays the acquired image on the application interface of the short video application B of the tablet
  • the multi-screen collaborative application transmits the acquired image to the mobile phone through the multi-screen collaborative connection.
  • different applications in the same electronic device can also call the front camera and the rear camera at the same time.
  • in the process of multi-screen collaboration between the mobile phone and the tablet, the user can click the "Camera Switch" option 1513 in the application interface of the tablet's short video application B to switch the camera, that is, to use the tablet's rear camera.
  • the tablet calls the rear camera in response to the user's operation behavior, and the application interface (ie, the preview interface) 1511 of the short video application B displays the image captured by the rear camera of the tablet; at the same time, the preview interface of the short video application A in the multi-screen collaboration window 1507 still displays the image captured by the front camera of the tablet.
  • Figure 16a is a schematic diagram of the interaction of each module. Referring to Figure 16a, the specific steps include:
  • the short video application B invokes the camera service, and the camera service performs corresponding processing.
  • the short video application B sends a request message to the camera service, and the request message includes but is not limited to: the application ID of the short video application B, and the configuration 3 corresponding to the short video application B (configuration 3 may be the same as or different from configuration 2, which is not limited in this application).
  • the camera service outputs Session3 and Configuration 3 to the multimedia management module, and instructs the multimedia management module to delete Session2 and Configuration 2, and the multimedia management module performs corresponding processing.
  • Session1 corresponds to a multi-screen collaborative application
  • Session2 corresponds to short video application B
  • Session2 is generated by short video application B during the creation process.
  • the camera service outputs Session3 and configuration 3 to the multimedia management module, and instructs the multimedia management module to delete the stored Session2 and other information associated with Session2, that is, configuration 2.
  • the multimedia management module saves the corresponding relationship between Session 3 and Configuration 3, and deletes Session 2 and Configuration 2.
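The session and configuration bookkeeping described above can be sketched as follows. This is a minimal illustration only; the class and method names (CameraService, MultimediaManager, request, release) are assumptions and not the patent's actual implementation:

```python
# Sketch of session/configuration bookkeeping between the camera service and
# the multimedia management module. All names here are illustrative assumptions.

class MultimediaManager:
    def __init__(self):
        self.configs = {}  # session id -> configuration

    def save(self, session, config):
        self.configs[session] = config

    def delete(self, session):
        # Deleting a session also drops its associated configuration.
        self.configs.pop(session, None)

class CameraService:
    def __init__(self, manager):
        self.manager = manager
        self.sessions = {}  # session id -> application ID
        self._next = 1

    def request(self, app_id, config):
        # Each request carries the application ID and the desired configuration;
        # the service creates a session and records both associations.
        session = f"Session{self._next}"
        self._next += 1
        self.sessions[session] = app_id
        self.manager.save(session, config)
        return session

    def release(self, session):
        self.sessions.pop(session, None)
        self.manager.delete(session)

# Mirroring the text: short video application B replaces its old front-camera
# session (Session2) with a new rear-camera session (Session3).
svc = CameraService(MultimediaManager())
s1 = svc.request("multi-screen collaborative application", {"camera": "front"})
s2 = svc.request("short video application B", {"camera": "front"})
s3 = svc.request("short video application B", {"camera": "rear"})
svc.release(s2)  # Session2 and configuration 2 are deleted on camera switch
```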
  • the multi-screen collaborative application is still in the recording process, that is, the image captured by the front camera is acquired.
  • the camera driver outputs Session1 and the corresponding images collected by the front camera to the Camera Hal, and the images collected by the front camera and Session1 are transferred to the camera service through the Camera Hal.
  • the camera driver outputs Session3 and the corresponding images collected by the rear camera to the Camera Hal, and transmits the images collected by the rear camera and Session3 to the camera service through the Camera Hal.
  • the other undescribed parts of the recording process are the same as or similar to the recording process in which the short video application A calls the front camera in the scene 1, and will not be repeated here.
  • the multimedia management module performs corresponding processing on the images collected by the front camera based on the configuration 1 corresponding to Session1, and performs corresponding processing on the images collected by the rear camera based on the configuration 3 corresponding to Session3.
  • the user can slide from the right edge to the center on the display interface of the tablet, and the tablet displays a sidebar 1515 in the right edge region in response to the user's operation behavior (for details, please refer to the above, which will not be repeated here), as shown in Figure 15i(2); exemplarily, the user clicks (or drags) the camera application icon 1515a in the sidebar, and the tablet, in response to the user's operation behavior, displays the right area in split screen.
  • the split screen includes a display window 1511 and a display window 1517, wherein the display window 1511 is used to display the application interface of the short video application B, and the display window 1517 is used to display the application interface of the camera application, as shown in Figure 15j.
  • the application interface of the camera application includes one or more controls; for example, it may include a shooting mode option 1519, and the shooting mode option 1519 may further include a plurality of sub-modes, for example, a night scene mode option, a video mode option, a shooting mode option, a dual-view shooting mode option, and more options. Exemplarily, in the embodiment of the present application, the camera application enters the shooting mode by default after startup and calls the front camera, which is taken as an example for description.
  • FIG. 16b is a schematic diagram of the interaction of each module, referring to FIG. 16b, specifically including:
  • the camera application calls the camera service, and the camera service performs corresponding processing.
  • the camera application sends a request message to the camera service, and the request message includes, but is not limited to: the application ID of the camera application, and the configuration 4 corresponding to the camera application (configuration 4 may be the same as or different from configuration 1 and configuration 3, which is not limited in this application).
  • the camera service outputs Session 4 and Configuration 4 to the multimedia management module, and the multimedia management module performs corresponding processing.
  • the information currently stored by the multimedia management module includes: Session1 and configuration 1, Session3 and configuration 3, and Session4 and configuration 4, as well as the corresponding relationship between each session and configuration.
  • the multimedia management module copies the images captured by the front camera, performs corresponding processing on one of the images based on configuration 1 to generate image A, and performs corresponding processing on the other image based on configuration 4 to generate image B.
  • the camera service can output image A to the multi-screen collaborative application and output image B to the camera application.
  • the multimedia management module performs corresponding processing on the image collected by the rear camera based on configuration 3 to generate an image C, and the camera service outputs the image C to the short video application B.
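The per-session processing above (one frame duplicated and processed once per session that subscribes to that camera) can be sketched as follows. The function names and the configuration dictionary layout are assumptions for illustration:

```python
# Sketch: one frame from a physical camera fans out to every session whose
# configuration names that camera. Names and structures are illustrative only.

def process(frame, config):
    # Stand-in for real processing (e.g. cropping/scaling per zoom/resolution).
    return {"frame": frame, "camera": config["camera"], "zoom": config["zoom"]}

def dispatch(frame, camera, sessions):
    # sessions: session id -> configuration; one processed copy per match.
    return {
        sid: process(frame, cfg)
        for sid, cfg in sessions.items()
        if cfg["camera"] == camera
    }

sessions = {
    "Session1": {"camera": "front", "zoom": 1},  # multi-screen collaboration
    "Session3": {"camera": "rear", "zoom": 1},   # short video application B
    "Session4": {"camera": "front", "zoom": 1},  # camera application
}
front_outputs = dispatch("front_frame_t0", "front", sessions)  # images A and B
rear_outputs = dispatch("rear_frame_t0", "rear", sessions)     # image C
```

The front-camera frame is processed twice (once per front-camera session), while the rear-camera frame is processed once, matching the image A/B/C split described in the text.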
  • the user can click the dual-view shooting mode option 1519a on the application interface of the camera application, and the tablet, in response to the user's operation behavior, further performs a split-screen display on the display window 1517, including the display window 1521 and display window 1523, as shown in FIG. 15k(2), exemplarily, the display window 1521 and the display window 1523 include the preview interface of the camera application, and the preview interface of the display window 1521 and the preview interface of the display window 1523 both display the front camera acquired images.
  • the display window 1521 and the display window 1523 further include one or more controls, for example, a "camera switch" option and a "zoom" option 1523a may be included.
  • the user may click the "zoom" option 1523a in the display window 1523 and select, for example, the 3x zoom mode; the display window 1523 then displays the image captured by the front camera in the 3x zoom mode.
  • the camera application needs to call the front camera 1 and the front camera 2 in the front camera respectively (for the description of the front camera 1 and the front camera 2, refer to Figure 4).
  • it is assumed in FIG. 15j that the front camera 1 is used during the shooting process, and the multi-screen collaborative application also uses the front camera 1.
  • after the camera application switches to the dual-view shooting mode, the camera application needs to call the front camera 2.
  • the camera application sends a call request to the camera service; the camera service, in response to the received call request, generates Session5 and saves the corresponding relationship among Session5, the configuration 5, and the application ID, and the camera service outputs Session5 and configuration 5 to the multimedia management module.
  • the undescribed parts are similar to the above creation processes, and will not be repeated here.
  • the image collected by the front camera 1 is referred to as the front image 1, and the image collected by the front camera 2 is referred to as the front image 2, as an example for illustration.
  • the front camera 1 outputs the collected front image 1 to the camera service through the camera driver and Camera Hal.
  • the front camera 2 outputs the collected front image 2 to the camera service through the camera driver and Camera Hal, and the rear camera outputs the collected rear image to the camera service through the camera driver and Camera Hal.
  • the camera service interacts with the multimedia module.
  • the camera service outputs the front image 1, Session1 and Session4 to the multimedia management module.
  • the multimedia management module copies the front image 1, processes one of the images based on configuration 1 corresponding to Session1 to generate the front image A, and processes the other image based on configuration 4 corresponding to Session4 to generate the front image B.
  • the camera service outputs the front image 2 and Session 5 to the multimedia management module.
  • the multimedia management module processes the front image 2 based on the configuration 5 corresponding to the Session 5 to generate the front image C.
  • the camera service outputs the rear image corresponding to Session3 to the multimedia management module.
  • the multimedia management module processes the rear image based on the configuration 3 corresponding to Session3 to generate the rear image A. It should be noted that the steps for the camera service to output the above information to the multimedia management module are in no particular order.
  • the multimedia management module outputs the front image A and Session1, the front image B and Session4, the front image C and Session5, and the rear image A and Session3 to the camera service correspondingly.
  • the camera service can output the front image A to the multi-screen collaborative application, the rear image A to the short video application B, and the front image B and the front image C to the camera application based on the corresponding relationship between each session and the application ID.
  • the camera application can display the front image B in the display window 1521 and display the front image C in the display window 1523.
  • the user can click on the “Camera Switch” option 1521a in the display window 1521 on the application interface of the camera application, and the tablet performs corresponding processing in response to the user's operation behavior.
  • the current display mode of the tablet includes: the multi-screen collaboration window 1507 displays the application interface (ie, the preview interface) of the short video application A in the mobile phone, and that application interface displays the image captured by the front camera of the tablet.
  • the display window 1511 displays the application interface of the short video application B of the tablet, and the application interface displays the image captured by the rear camera of the tablet.
  • the display window 1521 displays the application interface (ie, the preview interface) of the camera application in the tablet, and the application interface displays the image captured by the rear camera of the tablet (before shooting or recording starts, the displayed image is the preview image captured by the camera).
  • the display window 1523 displays the application interface of the camera application in the tablet, and the application interface displays the image captured by the front camera of the tablet in the 3x zoom mode.
  • FIG. 16d is an exemplary interaction diagram of each module.
  • FIG. 16d specifically shows an interaction diagram of each module during the recording process.
  • the camera service and the multimedia management module delete the saved Session5 and the corresponding information (such as configuration 5), and create Session6 and the corresponding information (such as configuration 6), wherein Session6 is created when the rear camera is called based on the camera application; for the specific creation process, refer to the creation processes in the above examples, which will not be repeated here.
  • the images currently collected include the image collected by the front camera (which can be understood as the front camera 1) and the image collected by the rear camera.
  • the image collected by the front camera is hereinafter referred to as the front image, and the image collected by the rear camera is referred to as the rear image.
  • the front image collected by the front camera and the rear image collected by the rear camera are output to the camera service through the camera driver and Camera Hal.
  • the camera service outputs the front image, Session1 and Session4, and the rear image, Session3 and Session5 to the multimedia management module.
  • Session1 is created based on the multi-screen collaborative application calling the front camera
  • Session3 is created based on the short video application B calling the rear camera
  • Session4 is created based on the camera application calling the front camera in dual-view shooting mode
  • Session5 is created when the camera application calls the rear camera in dual-view shooting mode.
  • the multimedia management module processes the front image based on configuration 1 to generate the front image A, processes the rear image based on configuration 3 to generate the rear image A, processes the front image based on configuration 4 to generate the front image B, and processes the rear image based on configuration 5 to generate the rear image B.
  • the multimedia management module outputs Session1 and front image A, Session 3 and rear image A, Session 4 and front image B, and Session 5 and rear image B and the corresponding relationship between each session and image to the camera service.
  • the camera service can, based on the corresponding relationship between each session and the application ID, output the front image A to the multi-screen collaborative application, output the rear image A to the short video application B, output the front image B to the camera application, and output the rear image B to the camera application.
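The final routing step above (mapping each processed image back to its application via the session table, so that an application with several sessions receives several images) can be sketched as follows. The table contents and function name are assumptions for illustration:

```python
# Sketch: routing processed images back to applications by session id.
# Session/application names mirror the text; the code itself is illustrative.

session_to_app = {
    "Session1": "multi-screen collaborative application",
    "Session3": "short video application B",
    "Session4": "camera application",
    "Session5": "camera application",
}

processed = {
    "Session1": "front image A",
    "Session3": "rear image A",
    "Session4": "front image B",
    "Session5": "rear image B",
}

def route(processed, session_to_app):
    # An application holding several sessions (dual-view shooting)
    # receives one image per session.
    delivery = {}
    for sid, image in processed.items():
        delivery.setdefault(session_to_app[sid], []).append(image)
    return delivery

delivery = route(processed, session_to_app)
# the camera application receives both front image B and rear image B
```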
  • the embodiments of this application only take the dual-view shooting mode of the camera application as an example for description.
  • for example, the camera application or other applications with shooting functions may also have a three-view shooting mode, a four-view shooting mode, or the like.
  • the application interface of the camera application can include three display windows: one display window is used to display the picture shot by the rear camera in the wide-angle mode, another display window is used to display the picture captured by the front camera, and the third display window is used to display the picture captured by the front camera, which is not limited in this application.
  • the methods shown in FIGS. 15a to 15l can also be applied to a single electronic device.
  • for example, multiple applications in a mobile phone can call the front camera and/or the rear camera at the same time, which is not limited in this application.
  • the camera calling method in this embodiment of the present application can also be applied to a scenario where an application clone calls different cameras.
  • the mobile phone can activate the application clone function in response to a user operation, and the main page can display the instant messaging application icon and the instant messaging application clone icon. The user clicks the instant messaging application icon to start the instant messaging application. After the instant messaging application is started, it can respond to the received user operation, start the shooting function, call the front camera, and display the real-time images collected by the front camera in the application interface of the instant messaging application.
  • the user can activate the instant messaging application clone by operating the sidebar or other means; for example, by clicking the instant messaging application clone icon included in the sidebar, the mobile phone can display the application interface of the instant messaging application and the application interface of the instant messaging application clone in split screen.
  • the instant messaging application and the instant messaging application clone may have different accounts; that is, the user can log in to the instant messaging application through account A and log in to the instant messaging application clone through account B.
  • for example, the instant messaging application clone can activate the shooting function in response to the received user operation and call the front-facing camera, and the real-time images collected by the front-facing camera and/or the rear-facing camera can also be displayed on the interface of the instant messaging application clone.
  • that is, the method described in the embodiments of this application for two different applications (such as the Douyin application and the Kuaishou application) calling the same camera is also applicable to the scenario of two identical applications (such as the WeChat application and the WeChat application clone) logged in with different accounts.
  • for the specific implementation of the method, reference may be made to the relevant steps in the above method embodiments, which will not be repeated here.
  • FIG. 17 is a schematic flowchart of a method for invoking a camera provided by an embodiment of the present application. Referring to FIG. 17 , the method specifically includes:
  • the electronic device displays a first preview interface of a first application, and the first preview interface displays an image captured by a first camera at a first moment.
  • the first application and the second application may be any one of a video call application, a live broadcast application, and an application with a code scanning function.
  • the first preview interface of the first application may refer to the application interface of the short video application A shown in FIG. 6(2).
  • the first application is a multi-screen collaborative application
  • the electronic device displays a first interface at a second moment in response to the received first operation; the first moment and the second moment are different moments; the first interface includes a second preview interface of the first application and a third preview interface of the second application; wherein the second preview interface and the third preview interface display the image collected by the first camera at the second moment.
  • the electronic device may, in response to the received first operation, display the first interface including the second preview interface of the first application and the third preview interface of the second application.
  • both the second preview interface and the third preview interface display images captured by the first camera in real time.
  • the first operation may include a first sub-operation and a second sub-operation.
  • the first sub-operation is sliding from the right edge of the screen to the left (or may also be sliding from the left edge of the screen to the right) to invoke the sidebar.
  • the electronic device displays the sidebar on the first interface based on the first sub-operation, and the sidebar includes the application icon of the second application, as shown in Figure 6(3) shown.
  • the second sub-operation is an operation (for example, a click) on the application icon of the second application.
  • the electronic device displays the first interface at the second moment in response to the second sub-operation, as shown in FIG. 6(4).
  • the first interface is a split-screen interface
  • one interface of the split-screen interface includes a second preview interface
  • another interface of the split-screen interface includes a third preview interface.
  • the split-screen interface includes an interface 613 and an interface 615, wherein the interface 613 includes a preview interface of a first application (eg, short video application A), and the interface 615 includes a second application (eg, a short video) The preview interface of application B).
  • the second preview interface and/or the third preview interface is a floating interface.
  • the display interface of the electronic device (ie, the tablet) displays multiple floating interfaces (also referred to as floating windows), and each floating interface includes a preview interface of an application, such as the short video application A.
  • the second preview interface includes a first recording duration, where the first recording duration is used to indicate the recording duration of the first application from the moment when the recording starts.
  • the first application displays the recording duration during the shooting process.
  • the electronic device starts the second application, the second application calls the front camera and displays the real-time image captured by the front camera, and at the same time, the first application is still continuing to shoot and record the recording duration.
  • in a possible implementation, before displaying the first preview interface of the first application, the electronic device further performs the following steps: acquiring first configuration information of the first application; configuring a first session, and saving the association relationship among the first session, the first application, and the first configuration information. Displaying the first interface at the second moment further includes: acquiring second configuration information of the second application; configuring a second session, and saving the association relationship among the second session, the second application, and the second configuration information.
  • the camera service in the electronic device acquires the first configuration information (ie, Configuration 1 ) delivered by the first application, and configures the first session (ie, Session 1 ).
  • the camera service saves the association between Session1 and Configuration 1, and sends the association to the multimedia management module, which also saves the association between Session1 and Configuration 1.
  • the electronic device starts the second application, and the camera service acquires the second configuration information (ie, configuration 2 ) delivered by the second application, and configures the second session (ie, Session 2 ).
  • the camera service saves the association relationship between Session2 and configuration 2, and sends the association relationship to the multimedia management module, and the multimedia management module also saves the association relationship between Session2 and configuration 2.
  • displaying the first interface at the second moment by the electronic device includes: acquiring the image captured by the first camera at the second moment; determining, based on the first session, the first application and the first configuration information associated with the first session, and determining, based on the second session, the second application and the second configuration information associated with the second session; obtaining a first sub-image according to the image collected by the first camera at the second moment and the first configuration information, and obtaining a second sub-image according to the image collected by the first camera at the second moment and the second configuration information; and displaying the first sub-image on the second preview interface and the second sub-image on the third preview interface.
  • the multimedia management module in the electronic device may determine, based on the correspondence between Session1 and configuration 1 and Session2 and configuration 2, that there are currently two applications calling the camera, that is, two images are generated.
  • the multimedia management module may process the image based on configuration 1 to obtain image A (ie, the first sub-image), and process the image based on configuration 2 to obtain image B (ie, the second sub-image).
  • the second preview interface (eg, 613 in FIG. 6 ) of the first application of the electronic device may display image A
  • the third preview interface (eg, 615 in FIG. 6 ) of the second application may display image B.
  • the first configuration information is used to indicate the first resolution
  • the second configuration information is used to indicate the second resolution
  • the first resolution is the same as or different from the second resolution
  • the first configuration information is used to indicate a first zoom factor
  • the second configuration information is used to indicate a second zoom factor
  • the first zoom factor is the same as or different from the second zoom factor
  • the electronic device acquires third configuration information of the first application in response to the received second operation, where the third configuration information is used to indicate a third zoom factor, and the third zoom factor is different from the first zoom factor.
  • the electronic device saves the association relationship among the first session, the first application and the third configuration information; the electronic device acquires the image captured by the first camera at the third moment; the electronic device determines, based on the first session, the first application and the third configuration information associated with the first session, and determines, based on the second session, the second application and the second configuration information associated with the second session; the electronic device obtains the third sub-image according to the image collected by the first camera at the third moment and the third configuration information, and obtains the fourth sub-image according to the image collected by the first camera at the third moment and the second configuration information; the electronic device displays the third sub-image on the second preview interface, and displays the fourth sub-image on the third preview interface.
  • the short video application A may deliver the configuration information corresponding to the current zoom factor (i.e., the third configuration information) to the camera service.
  • the camera service and the multimedia management module can perform corresponding processing to update the association between the first application and its configuration information, process the image captured by the camera based on the new configuration information (i.e., the third configuration information), and process the image captured by the camera based on the second configuration information of the second application. That is, after the configuration information of the first application, such as the zoom factor (or resolution), is changed, the configuration information of the second application is preserved: the first application displays the image corresponding to the updated zoom factor, while the second application displays the image corresponding to the original zoom factor.
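The behavior just described — replacing one application's stored configuration while leaving the other session untouched — amounts to a keyed registry update. The sketch below is a minimal assumed model; the class and method names are illustrative, not the framework's actual API:

```python
# Minimal sketch of a session/configuration registry: overwriting one
# session's configuration (e.g., a new zoom factor) does not disturb other
# sessions. All names are illustrative; this is not real camera-service code.

class SessionRegistry:
    def __init__(self):
        self._by_session = {}  # session id -> (app id, configuration dict)

    def save(self, session, app_id, config):
        """Save (or overwrite) the association session -> (app, config)."""
        self._by_session[session] = (app_id, dict(config))

    def lookup(self, session):
        return self._by_session[session]

reg = SessionRegistry()
reg.save("Session1", "shortVideoA", {"zoom": 1.0})   # first configuration
reg.save("Session2", "shortVideoB", {"zoom": 1.0})   # second configuration
reg.save("Session1", "shortVideoA", {"zoom": 3.0})   # third configuration replaces the first
assert reg.lookup("Session1") == ("shortVideoA", {"zoom": 3.0})
assert reg.lookup("Session2") == ("shortVideoB", {"zoom": 1.0})  # unchanged
```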
  • the method further includes: in response to the received third operation, the electronic device displays a second interface at a fourth moment; the second interface includes a fourth preview interface of the first application and a fifth preview interface of the first application, where the fourth preview interface displays the image captured by the first camera at the fourth moment, and the fifth preview interface displays the image captured by the second camera at the fourth moment.
  • the first camera and the second camera are not the same.
  • the fifth preview interface of the first application may be an interface of a clone (avatar) of the first application.
  • the fourth preview interface and the fifth preview interface may also be two interfaces of the dual-view shooting function of the first application; as shown in FIG. 15k, for example, the fourth preview interface may be interface 1521, and the fifth preview interface may be interface 1523.
  • the method further includes: in response to the received fourth operation, the electronic device displays, on the second preview interface, the image captured by the first camera at a fifth moment, and displays, on the third preview interface, the image captured by the second camera at the fifth moment, where the second camera is different from the first camera.
  • the third preview interface (e.g., interface 1511) includes a camera switching option 1513, and the fourth operation indicates an operation on the camera switching option, that is, the user taps the camera switching option 1513, as shown in FIG. 15h.
  • the second preview interface (such as the application interface 1507 of the multi-screen collaboration application) displays the real-time images captured by the front camera of the tablet, and the third preview interface (such as the application interface 1511 of the short video application B) displays the real-time images captured by the rear camera of the tablet.
  • the first camera is a first front camera
  • the second camera is a first rear camera
  • the multi-screen collaborative application calls the front camera of the tablet
  • the short video application B calls the rear camera of the tablet.
  • the first camera is a first front camera
  • the second camera is a second front camera
  • the first front camera is different from the second front camera
  • the multi-screen collaborative application invokes one of the front cameras of the tablet, and the camera application invokes two front cameras of the tablet to realize dual-view shooting.
  • the multi-screen collaborative application may call one of the front cameras, and the camera application may call the other front camera.
  • the zoom factors of the two front cameras may be different. For example, if one front camera is a wide-angle camera, the image displayed in the multi-screen collaboration application is the image captured by the wide-angle front camera; if the other front camera is a multi-zoom camera, the image displayed by the camera application is the image captured by the multi-zoom camera.
  • the first camera is a first rear camera
  • the second camera is a second rear camera
  • the first rear camera is different from the second rear camera.
  • the same application or different applications can call multiple rear cameras of the electronic device at the same time.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of this application.
  • FIG. 18 shows a schematic block diagram of an apparatus 1800 according to an embodiment of the present application.
  • the apparatus 1800 may include: a processor 1801 , a transceiver/transceiver pin 1802 , and optionally, a memory 1803 .
  • The various components of the apparatus 1800 are coupled together by a bus 1804, where the bus 1804 includes a power bus, a control bus, and a status signal bus in addition to a data bus. However, for clarity of description, the various buses are referred to as bus 1804 in the figures.
  • the memory 1803 may be used to store the instructions in the foregoing method embodiments.
  • the processor 1801 can be used to execute the instructions in the memory 1803, control the receive pin to receive signals, and control the transmit pin to transmit signals.
  • the apparatus 1800 may be the electronic device or the chip of the electronic device in the above method embodiments.
  • This embodiment also provides a computer storage medium, where the computer storage medium stores computer instructions that, when executed on the electronic device, cause the electronic device to execute the above related method steps to implement the camera calling method in the above embodiments.
  • This embodiment also provides a computer program product that, when run on a computer, causes the computer to execute the above related steps to implement the camera calling method in the above embodiments.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module; the apparatus may include a processor and a memory connected to each other, where the memory is used to store computer-executable instructions, and when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory so that the chip executes the camera calling method in the above method embodiments.
  • the electronic device, computer storage medium, computer program product, or chip provided in this embodiment are all used to execute the corresponding method provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method provided above, which are not repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • Units described as separate components may or may not be physically separated; components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium.
  • a readable storage medium includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the steps of the method or algorithm described in conjunction with the disclosure of the embodiments of this application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions may be composed of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage medium may reside in an ASIC.
  • Computer-readable media include both computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.

Abstract

Embodiments of the present application provide a camera calling method, system, and electronic device. In the method, preview interfaces of multiple applications on an electronic device can simultaneously display images captured in real time by a camera of the electronic device, thereby providing a camera-sharing approach that increases the diversity of the electronic device's application scenarios and improves the user experience.

Description

Camera calling method, system, and electronic device

This application claims priority to Chinese Patent Application No. 202011315380.X, entitled "Camera calling method, system and electronic device", filed with the China National Intellectual Property Administration on November 20, 2020, which is incorporated herein by reference in its entirety.

Technical Field

Embodiments of this application relate to the field of terminal devices, and in particular, to a camera calling method, system, and electronic device.

Background

With the development of communication technology and improvements in the computing and hardware capabilities of electronic devices, the video functions of electronic devices have become increasingly powerful, and their application scenarios have broadened accordingly, for example photographing/video recording, video calls, and live streaming.

At present, the camera function of an electronic device (including invocation of the front camera and/or the rear camera) uses an exclusive mode, in which the camera can only be called by a single application.
Summary

To solve the above technical problem, this application provides a camera calling method, system, and electronic device. In the method, the camera of an electronic device can be called by multiple applications, enabling multiple applications to share the camera, thereby improving the diversity of the device's application scenarios and the user experience.
In a first aspect, an embodiment of this application provides a camera calling method. The method includes: the electronic device displays a first preview interface of a first application, where the first preview interface displays an image captured by a first camera at a first moment; in response to a received first operation, the electronic device displays a first interface at a second moment, where the first moment and the second moment are different moments, and the first interface includes a second preview interface of the first application and a third preview interface of a second application. Specifically, the second preview interface and the third preview interface display the image captured by the first camera at the second moment. In this way, this application provides a camera-sharing approach in which multiple applications on the electronic device can share the camera, that is, call the camera simultaneously and display the images it captures in real time, increasing the diversity of the device's application scenarios and improving the user experience.
According to the first aspect, the second preview interface includes a first recording duration, which indicates the recording duration of the first application since recording started. Thus, during recording, the first application can display the recording duration in real time to indicate that shooting is still in progress and has lasted for the indicated duration.

According to the first aspect or any implementation of the first aspect above, before the electronic device displays the first preview interface of the first application, the method further includes: the electronic device acquires first configuration information of the first application; the electronic device configures a first session and saves the association among the first session, the first application, and the first configuration information. Displaying the first interface at the second moment further includes: the electronic device acquires second configuration information of the second application; the electronic device configures a second session and saves the association among the second session, the second application, and the second configuration information. In this way, based on the first configuration information of the first application and the second configuration information of the second application, the electronic device can establish associations between the first application and the first configuration information and between the second application and the second configuration information, so that in subsequent processing it can generate images corresponding to the first and second applications based on the established associations.
According to the first aspect or any implementation of the first aspect above, displaying the first interface at the second moment includes: the electronic device acquires the image captured by the first camera at the second moment; based on the first session, the electronic device determines the first application and the first configuration information associated with the first session, and based on the second session, determines the second application and the second configuration information associated with the second session; the electronic device obtains a first sub-image according to the image captured by the first camera at the second moment and the first configuration information, and obtains a second sub-image according to that image and the second configuration information; the electronic device displays the first sub-image on the second preview interface and the second sub-image on the third preview interface. In this way, based on the established associations, the electronic device can generate images that satisfy the different requirements of the applications and display them in the corresponding application interfaces.
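The lookup-and-dispatch step just described can be pictured as a small loop over the stored sessions: for each session, fetch its (application, configuration) association, process the shared frame for that configuration, and route the result to that application's preview interface. The sketch below is a schematic model with assumed names, not the framework's actual code:

```python
# Schematic dispatch: for each stored session, look up its (application,
# configuration) association, derive a per-configuration result from the one
# shared frame, and deliver it to that application. Names are assumptions.

def process(frame, config):
    # Placeholder for resolution/zoom adaptation: tag the frame with the
    # resolution the configuration asks for.
    return {"frame": frame, "resolution": config["resolution"]}

def dispatch(frame, associations, deliver):
    for session, (app_id, config) in associations.items():
        deliver(app_id, process(frame, config))

delivered = {}
dispatch(
    "frame@t2",  # the single image captured by the first camera at the second moment
    {"Session1": ("appA", {"resolution": (1080, 720)}),
     "Session2": ("appB", {"resolution": (1280, 960)})},
    lambda app, img: delivered.setdefault(app, img),
)
assert delivered["appA"]["resolution"] == (1080, 720)
assert delivered["appB"]["resolution"] == (1280, 960)
```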
According to the first aspect or any implementation thereof, the first configuration information indicates a first resolution, the second configuration information indicates a second resolution, and the first resolution is the same as or different from the second resolution. Thus, multiple applications calling the camera simultaneously may require different image resolutions, and the electronic device can process the captured image accordingly to obtain images with the same or different resolutions.

According to the first aspect or any implementation thereof, the first configuration information indicates a first zoom factor, the second configuration information indicates a second zoom factor, and the first zoom factor is the same as or different from the second zoom factor. Thus, multiple applications calling the camera simultaneously may require different zoom factors, and the electronic device can process the captured image accordingly to obtain images with the same or different zoom factors.

According to the first aspect or any implementation thereof, the method further includes: in response to a received second operation, the electronic device acquires third configuration information of the first application, where the third configuration information indicates a third zoom factor that is different from the first zoom factor; the electronic device saves the association among the first session, the first application, and the third configuration information; the electronic device acquires the image captured by the first camera at a third moment; based on the first session, the electronic device determines the first application and the third configuration information associated with the first session, and based on the second session, determines the second application and the second configuration information associated with the second session; the electronic device obtains a third sub-image according to the image captured at the third moment and the third configuration information, and a fourth sub-image according to that image and the second configuration information; the electronic device displays the third sub-image on the second preview interface and the fourth sub-image on the third preview interface. In this way, at least one of the applications calling the camera simultaneously can dynamically update its configuration information while the camera is in use; the electronic device establishes an association for the updated configuration information and generates images for that application accordingly, while applications whose configuration has not been updated continue to be processed according to their original configuration information.

According to the first aspect or any implementation thereof, the method further includes: in response to a received third operation, the electronic device displays a second interface at a fourth moment; the second interface includes a fourth preview interface of the first application and a fifth preview interface of the first application, where the fourth preview interface displays the image captured by the first camera at the fourth moment, the fifth preview interface displays the image captured by the second camera at the fourth moment, and the first camera is different from the second camera. In this way, in a scenario where the first and second applications call the camera simultaneously, the first application can additionally start a clone (avatar) application or a dual-view shooting function, and the first application, its clone, and the two windows of its dual-view shooting function can call the front camera and/or the rear camera at the same time.

According to the first aspect or any implementation thereof, the method further includes: in response to a received fourth operation, the electronic device displays, on the second preview interface, the image captured by the first camera at a fifth moment, and displays, on the third preview interface, the image captured by the second camera at the fifth moment, where the second camera is different from the first camera. In this way, in a scenario where multiple applications call the camera simultaneously, at least one application can switch cameras.

According to the first aspect or any implementation thereof, the first camera is a first front camera and the second camera is a first rear camera; or the first camera is a first front camera and the second camera is a second front camera, where the first front camera is different from the second front camera; or the first camera is a first rear camera and the second camera is a second rear camera, where the first rear camera is different from the second rear camera. Thus, the embodiments of this application can be applied to scenarios in which multiple applications simultaneously call the front camera, different front cameras, the rear camera, different rear cameras, or the front and rear cameras at the same time.

According to the first aspect or any implementation thereof, the third preview interface further includes a camera switching option, and the fourth operation indicates an operation on the camera switching option. In this way, the user can switch the camera called by an application by tapping the camera switching option, increasing the diversity of application scenarios. For example, the user can call the front camera with a video call application to make a video call with family while calling the rear camera with a live streaming application to stream, and the cameras called by the video call application and the live streaming application can be switched at any time as the user directs.

According to the first aspect or any implementation thereof, the first interface is a split-screen interface: one part of the split-screen interface includes the second preview interface, and another part includes the third preview interface. In this way, the camera calling method in the embodiments of this application can be applied to split-screen scenarios, where multiple applications are contained in different parts of the split-screen interface and display the images captured by the camera in real time.

According to the first aspect or any implementation thereof, the second preview interface and/or the third preview interface is a floating interface. In this way, the camera calling method in the embodiments of this application can be applied to multi-application floating-window scenarios, where multiple applications are contained in different floating interfaces and display the images captured by the camera in real time.

According to the first aspect or any implementation thereof, displaying the first interface at the second moment in response to the received first operation includes: the electronic device receives a first sub-operation and displays a sidebar on the first interface, where the sidebar includes an application icon of the second application; the electronic device receives a second sub-operation on the application icon of the second application and displays the first interface at the second moment. In this way, the electronic device can provide a sidebar so that the user can start the second application from it, enabling multiple applications to call the camera simultaneously.

According to the first aspect or any implementation thereof, the first application is a camera application, and the second application is any one of the following: a video call application, a live streaming application, or an application with a code scanning function.
In a second aspect, an embodiment of this application provides a camera calling system. The system includes a first electronic device and a second electronic device, where the first electronic device exchanges data with the second electronic device through a first connection and includes a first camera. The first electronic device is configured to display a first preview interface of a first application, where the first preview interface displays a first interface sent by the second electronic device, and the first interface includes the image captured by the first camera at a first moment. The first electronic device is further configured to display a second interface at a second moment in response to a received first operation, where the first moment and the second moment are different moments; the second interface includes a second preview interface of the first application and a third preview interface of a second application; the second preview interface displays a third interface sent by the second electronic device, and the third interface and the third preview interface include the image captured by the first camera at the second moment. In this way, in a scenario where multiple applications on the first electronic device call the camera simultaneously, the first application can also send the image corresponding to at least one application to the second electronic device to achieve multi-device collaboration, and the second electronic device can likewise use the images captured by the camera of the first electronic device.

According to the second aspect, the second electronic device is configured to: receive the image, captured by the first camera at the first moment, sent by the first electronic device; display that image in the first interface; and send the first interface to the first electronic device through the first connection. In this way, the second electronic device can use the image captured by the camera of the first electronic device and display the acquired image in the first interface.

According to the second aspect or any implementation thereof, the second electronic device is configured to: receive the image, captured by the first camera at the first moment, sent by the first electronic device; generate a first interface containing that image, where the second electronic device is in a screen-off state; and send the first interface to the first electronic device through the first connection. In this way, even with its screen off, the second electronic device can use the image captured by the first electronic device, and the first electronic device displays the first interface that the second electronic device generated based on the acquired image.

According to the second aspect or any implementation thereof, the first application is a multi-screen collaboration application.

According to the second aspect or any implementation thereof, the first preview interface is a floating window.
In a third aspect, an embodiment of this application provides an electronic device. The electronic device includes a memory and a processor coupled to each other, where the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the following steps: displaying a first preview interface of a first application, where the first preview interface displays the image captured by a first camera at a first moment; in response to a received first operation, displaying a first interface at a second moment, where the first moment and the second moment are different moments, and the first interface includes a second preview interface of the first application and a third preview interface of a second application, where the second preview interface and the third preview interface display the image captured by the first camera at the second moment.

According to the third aspect, the second preview interface includes a first recording duration, which indicates the recording duration of the first application since recording started.

According to the third aspect or any implementation thereof, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: acquiring first configuration information of the first application; configuring a first session and saving the association among the first session, the first application, and the first configuration information; acquiring second configuration information of the second application; configuring a second session and saving the association among the second session, the second application, and the second configuration information.

According to the third aspect or any implementation thereof, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: acquiring the image captured by the first camera at the second moment; based on the first session, determining the first application and the first configuration information associated with the first session, and based on the second session, determining the second application and the second configuration information associated with the second session; obtaining a first sub-image according to the image captured by the first camera at the second moment and the first configuration information, and obtaining a second sub-image according to that image and the second configuration information; displaying the first sub-image on the second preview interface and the second sub-image on the third preview interface.

According to the third aspect or any implementation thereof, the first configuration information indicates a first resolution, the second configuration information indicates a second resolution, and the first resolution is the same as or different from the second resolution.

According to the third aspect or any implementation thereof, the first configuration information indicates a first zoom factor, the second configuration information indicates a second zoom factor, and the first zoom factor is the same as or different from the second zoom factor.

According to the third aspect or any implementation thereof, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: in response to a received second operation, acquiring third configuration information of the first application, where the third configuration information indicates a third zoom factor that is different from the first zoom factor; saving the association among the first session, the first application, and the third configuration information; acquiring the image captured by the first camera at a third moment; based on the first session, determining the first application and the third configuration information associated with the first session, and based on the second session, determining the second application and the second configuration information associated with the second session; obtaining a third sub-image according to the image captured at the third moment and the third configuration information, and a fourth sub-image according to that image and the second configuration information; displaying the third sub-image on the second preview interface and the fourth sub-image on the third preview interface.

According to the third aspect or any implementation thereof, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: in response to a received third operation, displaying a second interface at a fourth moment, where the second interface includes a fourth preview interface of the first application and a fifth preview interface of the first application; the fourth preview interface displays the image captured by the first camera at the fourth moment, and the fifth preview interface displays the image captured by the second camera at the fourth moment, where the first camera is different from the second camera.

According to the third aspect or any implementation thereof, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: in response to a received fourth operation, displaying, on the second preview interface, the image captured by the first camera at a fifth moment, and displaying, on the third preview interface, the image captured by the second camera at the fifth moment, where the second camera is different from the first camera.

According to the third aspect or any implementation thereof, the first camera is a first front camera and the second camera is a first rear camera; or the first camera is a first front camera and the second camera is a second front camera, where the first front camera is different from the second front camera; or the first camera is a first rear camera and the second camera is a second rear camera, where the first rear camera is different from the second rear camera.

According to the third aspect or any implementation thereof, the third preview interface further includes a camera switching option, and the fourth operation indicates an operation on the camera switching option.

According to the third aspect or any implementation thereof, the first interface is a split-screen interface: one part of the split-screen interface includes the second preview interface, and another part includes the third preview interface.

According to the third aspect or any implementation thereof, the second preview interface and/or the third preview interface is a floating interface.

According to the third aspect or any implementation thereof, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: receiving a first sub-operation and displaying a sidebar on the first interface, where the sidebar includes an application icon of the second application; receiving a second sub-operation on the application icon of the second application and displaying the first interface at the second moment.

According to the third aspect or any implementation thereof, the first application is a camera application, and the second application is any one of the following: a video call application, a live streaming application, or an application with a code scanning function.
The third aspect and any implementation of the third aspect correspond to the first aspect and any implementation of the first aspect, respectively. For the technical effects corresponding to the third aspect and any of its implementations, reference may be made to the technical effects of the first aspect and its implementations, which are not repeated here.

In a fourth aspect, an embodiment of this application provides a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method in the first aspect or any possible implementation of the first aspect.

In a fifth aspect, an embodiment of this application provides a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method in the second aspect or any possible implementation of the second aspect.

In a sixth aspect, an embodiment of this application provides a computer program, where the computer program includes instructions for executing the method in the first aspect or any possible implementation of the first aspect.

In a seventh aspect, an embodiment of this application provides a computer program, where the computer program includes instructions for executing the method in the second aspect or any possible implementation of the second aspect.

In an eighth aspect, an embodiment of this application provides a chip. The chip includes a processing circuit and transceiver pins, where the transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the first aspect or any possible implementation of the first aspect to control the receive pin to receive signals and control the transmit pin to transmit signals.

In a ninth aspect, an embodiment of this application provides a chip. The chip includes a processing circuit and transceiver pins, where the transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the second aspect or any possible implementation of the second aspect to control the receive pin to receive signals and control the transmit pin to transmit signals.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of an exemplary application scenario;

FIG. 2 is a schematic diagram of an exemplary application scenario;

FIG. 3 is a schematic structural diagram of an exemplary electronic device;

FIG. 4 is a schematic structural diagram of an exemplary electronic device;

FIG. 5 is a schematic diagram of the software structure of an exemplary electronic device;

FIG. 6 is a schematic structural diagram of an exemplary electronic device;

FIG. 7 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 8 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 9 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 10 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 11 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 12 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 13 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 14 is a schematic diagram of module interaction according to an embodiment of this application;

FIG. 15a to FIG. 15l are schematic diagrams of exemplary application scenarios;

FIG. 16a to FIG. 16d are schematic diagrams of exemplary module interaction;

FIG. 17 is a schematic flowchart of a camera calling method according to an embodiment of this application;

FIG. 18 is a schematic structural diagram of an apparatus according to an embodiment of this application.
Detailed Description

The technical solutions in the embodiments of this application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.

The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone.

The terms "first" and "second" in the specification and claims of the embodiments of this application are used to distinguish different objects, not to describe a particular order of objects. For example, a first target object and a second target object are used to distinguish different target objects, not to describe a particular order of target objects.

In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable or advantageous over other embodiments or designs; rather, such words are intended to present related concepts in a concrete manner.

In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more. For example, multiple processing units means two or more processing units, and multiple systems means two or more systems.
FIG. 1 is a schematic diagram of an exemplary application scenario. Referring to FIG. 1, in a live streaming scenario, a user uses multiple mobile phones (phone 1, phone 2, phone 3, and phone 4) to log in to different live streaming platforms (also called live streaming applications or short video applications) to stream. During streaming, the display windows of the live streaming applications on phones 1 to 4 display the images captured by their respective cameras. Because the distance and angle between each phone and the user differ, the angle and size of the user in the images displayed by phones 1 to 4 also differ. Since a phone's camera (front or rear) can only be called by one application at a time, a user who needs to stream on multiple platforms simultaneously can only use the approach shown in FIG. 1, that is, logging in to different platforms with multiple phones.

FIG. 2 is a schematic diagram of an application scenario of a camera calling method provided in an embodiment of this application. Referring to FIG. 2, in the embodiments of this application, a user can log in to multiple live streaming platforms on a tablet. For example, the tablet's display window includes multiple floating windows, each of which displays the interface of a corresponding live streaming platform, and the image displayed in each platform's interface is the image captured by the tablet's camera. It should be noted that in the embodiments of this application, applications (or application clones) that are calling the camera are all displayed in the foreground; for example, each application's window is a floating window, and the floating windows may not overlap at all or may partially overlap. If one application's floating window is completely overlapped by another's, that application may still be considered to be displayed in the foreground, that is, it can display the images captured by the camera in real time. Optionally, if an application's floating window is completely overlapped by another's, the occluded application may also pause its camera call, that is, freeze its picture, and resume displaying the real-time camera images once part or all of its floating window is no longer occluded by other applications' floating windows.
FIG. 3 is a schematic structural diagram of the electronic device 100. Optionally, the electronic device 100 may be a terminal, also called a terminal device; the terminal may be a device with a camera, such as a cellular phone, a tablet (pad), a wearable device, or an Internet of Things device, which is not limited in this application. It should be noted that the structural diagram of the electronic device 100 is applicable to the phones in FIG. 1 and to the tablet in FIG. 2. It should further be noted that the electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different component configuration. The various components shown in FIG. 3 may be implemented in hardware including one or more signal processing and/or application-specific integrated circuits, in software, or in a combination of hardware and software.

The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.

It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.

The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals based on instruction opcodes and timing signals to control instruction fetching and execution.

A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.

In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.

The wireless communication function of the electronic device 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.

The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single communication band or multiple communication bands. Different antennas can also be reused to improve antenna utilization; for example, the antenna 1 can be reused as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.

The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110; in some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.

The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium-to-high-frequency signal; the demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or videos through the display screen 194. In some embodiments, the modem processor may be an independent device; in other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.

The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be transmitted from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.

In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel, which may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flex light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.

In the embodiments of this application, the display screen 194 can display a shooting preview interface, a video recording preview interface, a live streaming preview interface, a code scanning preview interface, and the like, and can also display a video playback interface during video playback. It should be noted that in the embodiments of this application, a preview interface (such as a shooting preview interface or a live streaming preview interface) refers to an interface through which the user can view, via the display screen 194, the images captured by the camera in real time. For example, taking the camera application as an example, after the camera application starts, the recording preview interface displayed on the display screen 194 shows the preview image currently captured by the camera; after the camera application starts recording in response to a user operation, the recording preview interface shows the recorded image currently captured by the camera.

The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.

The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP, which converts it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.

The camera 193 is used to capture still images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then transmits the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.

The camera 193 may be located in an edge area of the electronic device, may be an under-display camera, or may be a pop-up camera. The camera 193 may include a rear camera and may also include a front camera. The embodiments of this application do not limit the specific position and form of the camera 193. The electronic device 100 may include cameras with one or more focal lengths; for example, cameras with different focal lengths may include a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a panoramic camera, and the like.

FIG. 4 is a schematic diagram of the positions of the cameras 193 on a mobile phone when the electronic device 100 is a phone. Referring to FIG. 4(1), the upper part of the phone's display screen (near the top edge area) is provided with front cameras; there may be one or more front cameras, and in this embodiment of the application, the phone includes two front cameras. It should be noted that the camera layout shown in FIG. 4(1) (for example, the horizontal arrangement and spacing) is only an illustrative example and is not limited by this application. Referring to FIG. 4(2), the back of the phone (the side opposite the display screen) is provided with one or more rear cameras; for example, the phone in FIG. 4(2) includes four rear cameras, which can be regarded as a rear camera module or as four separate cameras. The four cameras may include, but are not limited to, a wide-angle camera, an ultra-wide-angle camera, a panoramic camera, and the like, which is not limited by this application.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.

The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.

The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos in the external memory card.

The internal memory 121 can be used to store computer-executable program code, which includes instructions. The processor 110 executes the camera calling method in the embodiments of this application by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system and applications required by at least one function (such as a sound playback function and an image playback function), and the data storage area can store data created during the use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS).

The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.

The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.

The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can listen to music or hands-free calls through the speaker 170A.

The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or voice message, the voice can be heard by placing the receiver 170B close to the ear.

The microphone 170C, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and the like.

The headset jack 170D is used to connect a wired headset. The headset jack 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The software system of the electronic device 100 may use a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. This embodiment of the present invention uses the Android system with a layered architecture as an example to describe the software structure of the electronic device 100.

FIG. 5 is a block diagram of the software structure of the electronic device 100 according to an embodiment of this application.

The layered architecture of the electronic device 100 divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer (HAL), and the kernel layer.

The application layer may include a series of application packages.

As shown in FIG. 5, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.

The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer, including various components and services to support developers' Android development. The application framework layer includes some predefined functions. As shown in FIG. 5, the application framework layer may include a view system, a window manager, a resource manager, a content provider, a notification manager, a camera service, a multimedia management module, and the like.

The window manager is used to manage window programs. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and so on.

The content provider is used to store and retrieve data and make it accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and the like.

The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views; for example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.

The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.

The notification manager enables applications to display notification information in the status bar and can be used to convey notification-type messages that can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, and so on. The notification manager may also present notifications in the form of charts or scroll bar text in the status bar at the top of the system, such as notifications from applications running in the background, or notifications that appear on the screen in the form of dialog windows, for example prompting text information in the status bar, emitting a prompt sound, vibrating the electronic device, or flashing the indicator light.

The camera service is used to call the camera (including the front camera and/or the rear camera) in response to requests from applications.

The multimedia management module is used to process images based on the configuration of the camera service; the specific processing will be described in detail in the embodiments below.

The system library and runtime layer includes system libraries and the Android runtime. The system libraries may include multiple functional modules, such as a browser kernel, a 3D graphics library (e.g., OpenGL ES), and a font library. The browser kernel is responsible for interpreting web page syntax (such as HTML and JavaScript, applications of the standard generalized markup language) and rendering (displaying) web pages. The 3D graphics library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like; the font library is used to implement input of different fonts. The Android runtime includes a core library and a virtual machine and is responsible for the scheduling and management of the Android system. The core library contains two parts: the functional functions that the Java language needs to call, and the Android core library. The application layer and the application framework layer run in the virtual machine, which executes their Java files as binary files. The virtual machine performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.

It can be understood that the components included in the system framework layer, the system library, and the runtime layer shown in FIG. 5 do not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or have a different component arrangement.

The HAL layer is an interface layer between the operating system kernel and the hardware circuits. The HAL layer includes, but is not limited to, an audio hardware abstraction layer (Audio HAL) and a camera hardware abstraction layer (Camera HAL). The Audio HAL is used to process audio streams, for example performing noise reduction and directional enhancement; the Camera HAL is used to process image streams.

The kernel layer is the layer between the hardware and the software layers above. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver. The hardware may include devices such as the camera, the display screen, the microphone, the processor, and the memory.

In the embodiments of this application, the display screen in the hardware can display the shooting preview interface, the video recording preview interface, and the shooting interface during recording; the camera in the hardware can be used to capture images; and the microphone in the hardware can be used to collect sound signals and generate analog audio electrical signals.
FIG. 6 is a schematic diagram of an exemplary application scenario. Referring to FIG. 6(1), taking the electronic device being a mobile phone as an example, the display interface of the phone includes one or more controls, including but not limited to a battery icon, a network icon, and application icons. The user can tap the short video application A icon 601 on the display interface to start short video application A. In response to the user's tap, the phone starts short video application A and displays its application interface, as shown in FIG. 6(2). Referring to FIG. 6(2), the application interface of short video application A includes one or more controls, for example a recording option 607, a camera switching option 609, and a zoom option 605. After short video application A starts, the application interface includes a preview interface that displays the image captured by the camera (for example, the front camera), which may be called a preview image. After the user taps the recording option 607, short video application A starts recording (or streaming), and the preview interface continues to display the images captured by the camera (which may also be called recorded images); meanwhile, short video application A can upload the captured images to the live streaming platform server and/or store them locally. Optionally, during recording, the preview interface further includes a recording duration 603, which is the time from the recording start moment (i.e., after the user taps the recording option 607) to the current moment. The recording durations shown in the figures of this application are only illustrative examples and are not limited by this application, and this will not be repeated below. It should be noted that, to avoid repetition, the descriptions of the zoom options, recording options, camera switching options, and recording durations of the electronic devices below may all refer to the descriptions in FIG. 6 and will not be repeated.

For example, the camera switching option 609 is used to switch cameras. For instance, short video application A enables the front camera by default after starting, that is, the preview interface displays the image captured by the front camera; if the user taps the camera switching option 609, short video application A calls the rear camera, and the preview interface displays the image captured by the rear camera. The embodiments of this application only use the recording function of the short video application as an example; in other embodiments, this application is likewise applicable to functions or applications such as video calls and live streaming.

For example, the zoom option 605 is used to adjust the shooting focal length so that the preview interface displays the image within the zoom range corresponding to the zoom factor. Optionally, in the embodiments of this application, the initial zoom factor defaults to 1x.

Still referring to FIG. 6(2), in the embodiments of this application, while the user is recording with short video application A, the user can swipe from the right edge (or the left edge) toward the center of the display window and pause; a sidebar 611 will then be displayed at the right edge (or the left edge) of the application interface of short video application A, as shown in FIG. 6(3). Referring to FIG. 6(3), the sidebar includes one or more controls, for example the icons of an email application, a memo application, a gallery application, a file management application, and short video application B (icon 611a), and may also include an add option used to add the icon of a specified application to the sidebar. It should be noted that the names, number, and positions of the applications in FIG. 6(3) are only illustrative examples and are not limited by this application.

Continuing to refer to FIG. 6(3), the user can tap or drag the icon 611a of short video application B in the sidebar to start short video application B. For example, the user can drag the icon of short video application B to the lower half of the phone's display window and release it. In response to the user's operation, the phone splits its display window into display window 613 and display window 615, as shown in FIG. 6(4). Referring to FIG. 6(4), the phone's display window 613 is used to display the application interface of short video application A, and display window 615 is used to display the application interface of short video application B.

Still referring to FIG. 6(4), for the controls included in the application interface of short video application A, reference may be made to the above, which is not repeated here. The application interface of short video application B (which may also be called a preview interface) includes one or more controls, such as a recording option, a camera switching option, and a zoom option; for details, refer to the description of the application interface of short video application A, which is not repeated here. The preview interface of short video application B displays the image captured by the camera (which may be the front or rear camera; the embodiments of this application all take the front camera being opened by default as an example). The user can tap the recording option on the application interface of short video application B, and in response, short video application B starts recording, that is, it displays the images captured by the front camera in the preview interface, and saves the images and/or transmits them to a server.

In an example where the camera of an electronic device can only be called by one application, if the operation in FIG. 6(3) occurs, the display picture of the application that previously called the camera will freeze. Taking FIG. 6(4) as an example, after the user opens short video application B, short video application B calls the camera, that is, it acquires and displays the images captured by the camera, while short video application A can no longer call the camera. Therefore, the picture of short video application A freezes: display window 613 only shows the last image acquired by short video application A before short video application B started (i.e., called the camera), and no longer shows the images subsequently captured by the camera. Optionally, after the user opens short video application B, which calls the camera by default after starting, the phone's screen may also show a prompt that the camera cannot be called. Accordingly, if the user needs to record with short video application A and short video application B simultaneously, the only option is the approach in FIG. 1, that is, logging in to different applications on multiple electronic devices and recording.

An embodiment of this application provides a camera calling method. Specifically, the camera of the electronic device in the embodiments of this application can be called by one or more applications simultaneously, and each of the one or more applications can acquire and display the images captured by the camera. That is, with the camera calling method in the embodiments of this application, short video application A and short video application B in FIG. 6(4) can call the camera simultaneously and display, in their preview interfaces, the images captured by the camera in real time.

It should be noted that the embodiments of this application only use short video application A, short video application B, and the camera application as examples. In other embodiments, the specific implementations of this application are likewise applicable to other scenarios requiring camera calls in which multiple applications share the camera, such as short video applications, the scan-to-pay function in payment applications, video calls, and the video call function in chat applications, and this application will not repeat them.

One possible application scenario is: short video application A calls the phone's front camera while short video application B calls the phone's rear camera. Another possible application scenario is: a video call application calls the phone's front camera to conduct a video call with other contacts while, without interrupting the video call, a payment application calls the phone's rear camera and uses the scan-to-pay function to complete a shopping payment.
Scenario 1

The specific implementations of the embodiments of this application are described in detail below with reference to FIG. 7 to FIG. 14. Specifically, the process by which an application (for example, short video application A or short video application B) calls the camera can be divided into two parts. The first part is the creation process, which can also be understood as the preparation process, in which each module creates corresponding instances and exchanges control information. The second part is the recording process, in which each module, or the instances in each module, processes the images captured by the camera. It should be noted that an "instance" in the embodiments of this application can also be understood as program code or process code running in a process.

The creation process of short video application A calling the camera is described in detail below with reference to the schematic interaction flow of the modules shown in FIG. 7. Referring to FIG. 7, the process specifically includes:

S101: Short video application A calls the camera service, and the camera service performs corresponding processing.
For example, after short video application A starts (for example, through the process shown in FIG. 6(1) and FIG. 6(2)), short video application A calls the camera service; for example, short video application A sends a request message to the camera service. The request message may include, but is not limited to: the application ID of short video application A (which may be, for example, the application package name), the PID (process identification number), and the configuration information of short video application A (which may also be called requirement information; to distinguish it from the configuration information of short video application B below, the configuration information of short video application A is hereinafter referred to as configuration 1). For example, the configuration may include the resolution (e.g., 1080*720) corresponding to the image displayed by short video application A.
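The S101 request message can be modeled as a small structure. The sketch below is only an illustrative model based on the description; the field names, the example package name, and the PID value are assumptions, not the actual framework message:

```python
# Illustrative model of the S101 request message carried from the application
# to the camera service. Field names and example values are assumptions.

from dataclasses import dataclass, field

@dataclass
class CameraRequest:
    app_id: str                                 # e.g., the application package name
    pid: int                                    # process identification number
    config: dict = field(default_factory=dict)  # "configuration 1": requirement info

req = CameraRequest(app_id="com.example.shortvideo.a",  # hypothetical package name
                    pid=4321,                           # hypothetical PID
                    config={"resolution": (1080, 720)})
assert req.config["resolution"] == (1080, 720)
```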
可选地,请求消息中也可以不包括应用ID,例如,请求消息中包括配置信息,示例性的,摄像头服务可通过与应用层的接口,获取到接收到的请求消息对应的应用的应用ID以及PID。
如图8所示为摄像头服务与多媒体管理模块之间的交互示意图,参照图8,示例性的,摄像头服务进行相应处理的步骤包括:
创建Camera Service(相机服务)实例、Carema Device Client(相机设备客户端)实例、Carema3Deivce(相机设备,其中,数字3表示摄像头服务的版本号,可随版本更新)实例、Camera3Stream(相机数据流)实例。具体的,摄像头服务响应于短视频应用A的请求,创建上述各实例。下面对各实例的功能进行详细说明:
Camera Service实例,用于为应用层的应用提供API接口,并基于应用(例如短视频应用A)的请求,创建对应的会话(Session)。以短视频应用A为例,Camera Service可基于API接口,接收到短视频应用A输入的请求(包括应用的ID和配置1等),Camera Service可基于短视频应用A的请求,创建对应的会话(例如,会话的标识信息为Session1),并将短视频应用A的应用的ID、配置1和会话的标识信息(即Session1)输出至Carema Device Client实例。
Carema Device Client实例,可以看作为摄像头服务的客户端,主要用于为摄像头服务提供E接口与其它模块进行数据交互。Carema Device Client实例保存应用ID与Session1的对应关系,并将短视频应用A的应用的ID、配置1和Session1输出至Carema3Deivce实例。
Camera3Device实例,用于为HAL层提供接口,以及数据(例如图像)透传。具体的,Camera3Device实例基于Camera Device Client实例输入的短视频应用A的应用ID、配置1和Session1,记录各信息之间的对应关系,并将配置1和Session1输出至多媒体管理模块,以及将应用ID和Session1输出至Camera3Stream实例。
Camera3Stream实例,用于对图像进行相应处理。具体的,Camera3Stream实例将Camera3Device实例输入的短视频应用A的应用ID和Session1对应保存。
S102,摄像头服务向多媒体管理模块输出Session1和配置1,多媒体管理模块进行相应处理。
示例性的,如上文所述,Camera3Device实例将配置1和Session1输出至多媒体管理模块。
仍参照图8,示例性的,多媒体管理模块中包括Media Center Mrg(多媒体管理中心)模块(也可以称为子模块或单元),负责提供接口以及与外部模块交互的逻辑;Media Device Mrg(多媒体设备管理中心)模块负责配置信息、Session等信息的保存、增加和删除等操作;Media Stream Mrg(多媒体数据管理中心)模块负责数据转化、分辨率适配和数据分发等。举例说明,Media Center Mrg模块接收到Camera3Device实例输入的配置1和Session1,Media Center Mrg模块将Session1和配置1输出至Media Device Mrg模块。示例性的,Media Device Mrg模块记录Session1和配置1的对应关系。Media Stream Mrg模块主要用于对图像进行处理,具体处理过程将在下面的录制过程中详细说明。
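上述Media Device Mrg模块对Session与配置信息的保存、增加和删除操作,可用如下示意性代码(Python)简要示意。需要说明的是,类名MediaDeviceMgr及方法名register、unregister、get_config均为本文为便于说明而假设的,并非实际实现:

```python
class MediaDeviceMgr:
    """示意:保存 Session 与配置信息(如分辨率)的对应关系。"""

    def __init__(self):
        self._configs = {}  # session_id -> 配置信息

    def register(self, session_id, config):
        # 对应 S102:记录 Session1 与配置1 的对应关系
        self._configs[session_id] = config

    def unregister(self, session_id):
        # 对应后文 S402:删除 Session 及其关联的配置
        self._configs.pop(session_id, None)

    def get_config(self, session_id):
        # 录制过程中,Media Stream Mrg 可据此检索配置
        return self._configs.get(session_id)


mgr = MediaDeviceMgr()
mgr.register("Session1", {"resolution": (1080, 720)})
print(mgr.get_config("Session1"))
```

该示意代码仅表达"以Session为键保存配置、供录制过程检索"的对应关系,省略了实际实现中的进程间通信与并发保护等细节。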
S103,摄像头服务调用Camera Hal。
S104,Camera Hal调用内核层中的摄像头驱动。
S105,摄像头驱动调用前置摄像头。
示例性的,摄像头服务调用Camera Hal,Camera Hal进行相应处理,例如建立对应的实例。示例性的,Camera Hal调用摄像头驱动,摄像头驱动进行相应处理,例如建立对应的实例。
示例性的,前置摄像头响应于摄像头驱动的调用,开始采集图像。需要说明的是,在创建过程中,Camera Hal、摄像头驱动中的各实例或模块对数据(例如图像)进行相应处理,具体处理过程可参照已有技术实施例中的技术方案,本申请不再赘述。
下面结合图9所示的各模块的交互流程示意图,对短视频应用A在调用摄像头的过程中的录制过程进行详细说明,参照图9,具体包括:
S201,摄像头将采集到的图像输出至摄像头驱动。
S202,摄像头驱动将图像输出至Camera Hal。
S203,Camera Hal将图像输出至摄像头服务。
示例性的,摄像头驱动获取前置摄像头采集的图像,并将前置摄像头的图像输出至Camera Hal。Camera Hal将前置摄像头采集的图像输出至摄像头服务。
S204,摄像头服务将图像和Session1输出至多媒体管理模块,多媒体管理模块进行相应处理,并将处理后的图像和Session1输出至摄像头服务。
如图10所示为摄像头服务与多媒体管理模块之间的交互示意图,参照图10,示例性的,Camera3Device实例接收到Camera Hal输入的图像,Camera3Device实例检测当前存储的Session,即当前存储有Session1及其它信息(包括应用ID和配置1),Camera3Device实例将图像和Session1输出至多媒体管理模块。
仍参照图10,示例性的,多媒体管理模块中的Media Stream Mrg模块基于接收到的图像和Session1,从Media Device Mrg模块中获取与Session1对应的配置1,Media Stream Mrg模块可基于配置1对图像进行处理,例如调整分辨率等,Media Stream Mrg模块将处理后的图像和Session1输出至摄像头服务。
在一种可能的实现方式中,摄像头服务在检测到当前只存在Session1及对应的信息的情况下,可不与多媒体管理模块进行交互,而直接执行S205,即将图像输出至短视频应用A。
在另一种可能的实现方式中,多媒体管理模块也可以以应用ID为检索信息,确定对应的配置,例如,在控制流传输阶段(例如S102)中,摄像头服务可以向多媒体管理模块输出应用ID和配置1,相应的,在S204中,多媒体管理模块可基于应用ID,确定对应的配置1,并根据配置1,对图像进行处理。可选地,摄像头服务也可以向多媒体管理模块输出Session1、应用ID和配置1,多媒体管理模块可基于Session1和应用ID中的至少一个检索信息,确定对应的配置1,本申请实施例中仅以Session与配置之间的关联关系为例进行说明,具体配置与检索信息(包括Session、ID、PID等可以唯一确定配置与应用的对应关系的信息)的关联关系的建立,可根据实际需求进行设置,本申请不做限定,下文中不再重复说明。
S205,摄像头服务将图像输出至短视频应用A。
仍参照图10,示例性的,Camera3Device实例接收到多媒体管理模块输入的图像(即处理后的图像)和Session1,Camera3Device实例将图像和Session1输出至Camera3Stream实例。示例性的,Camera3Stream实例基于已记录的Session1与短视频应用A的应用ID的对应关系,将图像输出至短视频应用A。
下面结合图11所示的各模块的交互流程示意图,对短视频应用A和短视频应用B在调用摄像头的过程中,短视频应用B对应的创建过程进行详细说明,参照图11,具体包括:
S301,短视频应用B调用摄像头服务,摄像头服务进行相应处理。
示例性的,短视频应用B启动后(例如图6(3)和图6(4)中所示的过程),短视频应用B调用摄像头服务,例如,短视频应用B向摄像头服务发送请求消息,请求消息中可包括但不限于:短视频应用B的应用ID、短视频应用B的配置信息(以下简称配置2)等。示例性的,配置2可以包括短视频应用B显示的图像对应的分辨率为1280*720。
如图12所示为摄像头服务与多媒体管理模块之间的交互示意图,参照图12,示例性的,摄像头服务进行相应处理的步骤包括:
Camera Service实例接收到短视频应用B的请求后,将短视频应用B的应用ID、配置2输出至Camera Device Client实例。Camera Device Client实例响应于Camera Service实例输入的短视频应用B的应用ID、配置2,创建对应的会话(会话的标识信息为Session2)。示例性的,Camera Device Client实例将短视频应用B的应用ID、配置2和Session2输出至Camera3Device实例。示例性的,Camera3Device实例保存短视频应用B的应用ID、配置2和Session2的对应关系,并将短视频应用B的应用ID和Session2输出至Camera3Stream实例,以及将Session2和配置2输出至多媒体管理模块。
需要说明的是,图12中未描述的部分与图8中的描述相同或相似,此处不再赘述。
S302,摄像头服务向多媒体管理模块输出Session2和配置2,多媒体管理模块进行相应处理。
示例性的,如上文所述,Camera3Device实例将配置2和Session2输出至多媒体管理模块。
仍参照图12,示例性的,Media Center Mrg模块接收到Camera3Device实例输入的配置2和Session2,Media Center Mrg模块将Session2和配置2输出至Media Device Mrg模块。示例性的,Media Device Mrg模块记录Session2和配置2的对应关系。
示例性的,由于摄像头服务检测到已调用摄像头,摄像头服务在本次创建过程中不再执行S103~S105所述的步骤。例如,摄像头服务检测已建立的Session(例如可以生成Session列表,列表中记录有Session与应用ID、配置等信息的对应关系),确定已存在Session1,摄像头服务可确定当前存在与Session1对应的应用正在调用摄像头,则摄像头服务不再重复调用下层模块(例如Camera Hal模块)。
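上述"通过Session列表判断摄像头是否已被调用、避免重复调用下层模块"的逻辑,可用如下示意性代码(Python)简要示意。其中类名CameraServiceSketch、方法名request以及hal_open回调均为本文为说明而假设的,并非实际实现:

```python
class CameraServiceSketch:
    """示意:摄像头服务通过 Session 列表判断摄像头是否已被调用。

    hal_open 回调代表对下层 Camera Hal 模块的调用。
    """

    def __init__(self, hal_open):
        self._sessions = {}   # session_id -> (应用ID, 配置)
        self._hal_open = hal_open
        self._next_id = 1

    def request(self, app_id, config):
        # 创建 Session 前检查:列表非空说明已有应用在调用摄像头,
        # 此时不再重复调用下层模块(对应 S103~S105 被跳过)
        first_caller = not self._sessions
        session_id = "Session%d" % self._next_id
        self._next_id += 1
        self._sessions[session_id] = (app_id, config)
        if first_caller:
            self._hal_open()
        return session_id


calls = []
svc = CameraServiceSketch(hal_open=lambda: calls.append("open"))
svc.request("短视频应用A", {"resolution": (1080, 720)})
svc.request("短视频应用B", {"resolution": (1280, 720)})
print(calls)  # 摄像头只被真正打开一次
```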
需要说明的是,如图11所示,在短视频应用B对应的创建过程执行的过程中,短视频应用A仍然在进行录制过程,即短视频应用A通过摄像头驱动、Camera Hal、摄像头服务、多媒体管理模块获取前置摄像头采集的图像,并显示图像。
下面结合图13所示的各模块的交互流程示意图,对短视频应用A和短视频应用B在调用摄像头的过程中,短视频应用A和短视频应用B的录制过程进行详细说明,参照图13,具体包括:
摄像头服务获取前置摄像头采集的图像,具体流程可参照S201~S203,此处不赘述。
继续参照图13,示例性的,摄像头服务与多媒体管理模块进行交互,以获取多媒体管理模块生成的两份图像(包括图像A和图像B),并将图像A输出至短视频应用A,将图像B输出至短视频应用B。短视频应用A可在录制界面显示图像A,短视频应用B可在录制界面显示图像B。
下面结合图14所示的摄像头服务与多媒体管理模块之间的交互示意图,对上文所述的摄像头服务与多媒体管理模块之间的交互过程进行详细说明,参照图14,示例性的,Camera3Device实例接收到Camera Hal输入的图像,Camera3Device实例检测当前存储的Session,即当前存储有Session1及其它信息(包括短视频应用A的应用ID和配置1),和,Session2及其它信息(包括短视频应用B的应用ID和配置2)。Camera3Device实例将图像、Session1和Session2输出至多媒体管理模块。
继续参照图14,示例性的,多媒体管理模块中的Media Stream Mrg模块基于接收到的图像、Session1和Session2,从Media Device Mrg模块中获取与Session1对应的配置1,和,与Session2对应的配置2。Media Stream Mrg模块将图像进行复制,得到两个图像。Media Stream Mrg模块基于配置1对其中一个图像进行处理,例如将图像的分辨率调整为1080*720,得到图像A。以及,Media Stream Mrg模块基于配置2对另一个图像进行处理,例如将图像的分辨率调整为1280*720,得到图像B。Media Stream Mrg模块将Session1和图像A,以及,Session2和图像B对应输出至摄像头服务。
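上述"将一帧图像复制成多份、并按各Session对应的配置分别处理"的过程,可用如下示意性代码(Python)简要示意。其中函数名distribute_frame、adapt_resolution以及以字典表示"图像帧"均为本文为说明而假设的简化,实际实现需要对像素数据做真正的缩放处理:

```python
def adapt_resolution(frame, resolution):
    # 简化的"处理":仅替换帧的分辨率字段,实际实现需对像素做缩放
    out = dict(frame)  # 复制一份,不修改原始帧
    out["resolution"] = resolution
    return out


def distribute_frame(frame, session_configs):
    """将一帧图像复制成多份,并按各 Session 的配置分别处理。"""
    results = {}
    for session_id, config in session_configs.items():
        results[session_id] = adapt_resolution(frame, config["resolution"])
    return results


frame = {"data": "raw", "resolution": (1920, 1080)}
out = distribute_frame(frame, {
    "Session1": {"resolution": (1080, 720)},   # 短视频应用A
    "Session2": {"resolution": (1280, 720)},   # 短视频应用B
})
print(out["Session1"]["resolution"], out["Session2"]["resolution"])
```

该示意代码仅表达"一帧入、多帧出,每份按各自配置处理"的分发关系,与图14中复制图像并分别得到图像A、图像B的描述相对应。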
仍参照图14,示例性的,Camera3Device实例接收到多媒体管理模块输入的Session1和图像A,以及,Session2和图像B。Camera3Device实例将Session1和图像A,以及,Session2和图像B对应输出至Camera3Stream实例。示例性的,Camera3Stream实例基于已记录的Session1与短视频应用A的应用ID的对应关系,将图像A输出至短视频应用A,并基于Session2与短视频应用B的应用ID的对应关系,将图像B输出至短视频应用B。
需要说明的是,在本申请的实施例中,各模块或实例之间进行交互时,可通过反馈ACK信息,以告知对端已成功接收到信息或数据。举例说明,短视频应用A向摄像头服务发送调用请求后,摄像头服务在接收到该请求后,向短视频应用A反馈ACK信息,以指示摄像头服务已成功接收到该请求。本申请的实施例中的各模块交互示意图中仅示出数据或信息(主要是指请求或控制信息)的流向,未示出ACK信息的流向,下文中不再重复说明。
进一步需要说明的是,本申请实施例中仅以配置信息包括分辨率为例进行说明,在其他实施例中,配置信息还可以包括其它信息,例如变焦倍数,举例说明,配置1指示变焦倍数为3倍,配置2指示变焦倍数为5倍,多媒体模块可基于配置信息指示的变焦倍数,对图像进行变焦处理,并将变焦处理后的图像和Session对应输出至摄像头服务。当然,配置信息还可以是其它例如美颜、图像色调等,本申请不做限定。
在一种可能的实现方式中,本申请实施例中的配置信息可以是初始设置的,也可以是动态变换的。举例说明,参照图6(4),在短视频应用A和短视频应用B同时调用前置摄像头时,短视频应用A和短视频应用B的初始配置信息均指示变焦倍数为1倍,可选地,用户可点击(或拖动)短视频应用A中的变焦选项605,以选择3倍变焦,短视频应用A可基于用户操作,向摄像头服务下发更新的配置信息,更新的配置信息指示变焦倍数为3倍,摄像头服务可将更新的配置信息输出至多媒体管理模块,相应的,多媒体管理模块更新存储的配置信息,并基于更新的配置信息,对图像进行相应处理,即将摄像头在1倍变焦倍数对应的变焦范围内采集的图像进行3倍变焦处理,并通过摄像头服务输出至短视频应用A。示例性的,短视频应用B的配置信息不变,即指示的变焦倍数仍为1倍,短视频应用B的预览界面显示的图像即为摄像头采集的对应于1倍变焦倍数的变焦范围内的图像。
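上述"配置信息动态更新、且互不影响"的过程,可用如下示意性代码(Python)简要示意。其中configs字典、update_config与process_frame函数均为本文为说明而假设的,变焦处理在此仅以标记表示:

```python
configs = {
    "Session1": {"zoom": 1},  # 短视频应用A 的初始配置
    "Session2": {"zoom": 1},  # 短视频应用B 的初始配置
}


def update_config(session_id, new_config):
    # 应用下发更新的配置信息,多媒体管理模块更新存储的配置
    configs[session_id].update(new_config)


def process_frame(frame, session_id):
    # 按该 Session 当前配置的变焦倍数处理图像(此处仅作标记)
    return {"frame": frame, "zoom": configs[session_id]["zoom"]}


update_config("Session1", {"zoom": 3})  # 用户在应用A中选择3倍变焦
print(process_frame("raw", "Session1")["zoom"],
      process_frame("raw", "Session2")["zoom"])
```

可见Session1对应的图像改按3倍变焦处理,而Session2的配置维持不变,与上文"一个应用更新变焦、另一个应用不受影响"的描述相对应。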
在另一种可能的实现方式中,仍参照图6(4),若用户点击短视频应用A中的变焦选项605,短视频应用A可基于用户操作,向摄像头服务下发更新的变焦倍数(例如3倍变焦),一个示例中,摄像头服务可将短视频应用A和短视频应用B的配置信息均进行更改,更改为3倍变焦倍数,并输出至多媒体管理模块,多媒体管理模块可更新两个应用对应的配置信息。另一个示例中,摄像头服务通过Camera Hal将短视频应用A指示的变焦倍数下发至摄像头驱动,摄像头驱动可控制摄像头(例如前置摄像头)变焦,并采集变焦范围(3倍变焦倍数对应的变焦范围)内的图像,相应的,多媒体管理模块可对摄像头采集的变焦范围内的图像进行相应处理。也就是说,在本实施例中,若多个应用同时调用摄像头时,一个应用对应的配置信息变换,则其它应用的配置信息也随之变化,例如短视频应用B和短视频应用A显示的图像从1倍变焦变换为3倍变焦对应的图像。
场景二
场景一所述的实施例是电子设备(例如手机)中的多应用(例如短视频应用A和短视频应用B)同时调用摄像头的过程。下面结合图15a~图15l对多电子设备协同的场景下的摄像头调用过程进行详细说明。
参照图15a,示例性的,手机的显示界面显示的是手机的主页面,主页面上包括一个或多个控件,具体描述可参照上文,此处不赘述。平板的显示界面上包括一个或多个控件,例如应用图标、电量图标等。示例性的,用户可从平板的上边缘向下滑动,平板响应于用户的操作行为,在显示界面上的上边缘区域显示下拉通知栏1501。下拉通知栏1501中包括一个或多个控件,例如可以包括时间栏、Wi-Fi设置选项、蓝牙设置选项、移动数据设置选项、静音设置选项、自动旋转设置选项以及多屏协同选项1501a等。需要说明的是,图15a及下面的附图中所涉及的平板的显示界面显示的控件的名称和数量,以及下拉通知栏中的控件的名称和数量仅为示意性举例,本申请不做限定。
继续参照图15a,示例性的,用户可点击下拉通知栏1501中的多屏协同选项1501a,平板响应于用户的操作行为,在显示界面上显示提示框,如图15b所示。参照图15b,示例性的,平板的显示界面的右边缘区域显示提示框1503,提示框包括提示信息,提示信息用于指示平板当前正在启动多屏协同功能。提示框中还包括“取消”选项1503a和“同意”选项1503b。若用户点击“取消”选项1503a,则该提示框1503消失。示例性的,用户点击“同意”选项1503b,平板响应于用户的操作行为,对附近可建立多屏协同连接的电子设备(例如手机)进行扫描,并对扫描到的电子设备发起多屏协同连接请求,如图15c所示。参照图15c,示例性的,平板扫描到手机后,平板的显示界面的提示框1503中显示提示信息,提示信息用于指示发现可用于进行多屏协同的手机,例如,显示“连接您的手机”的字样,以指示当前期望与手机建立多屏协同。示例性的,提示框中还可以包括其它提示信息或选项,例如,若发现的手机不是用户想要进行多屏协同的电子设备,则用户可点击提示框中的“扫码连接”选项,以通过扫码的方式,与指定的电子设备建立多屏协同。示例性的,提示框中还可以包括“取消”选项1503a,用于取消多屏协同功能。
仍参照图15c,示例性的,平板向手机发送多屏协同请求后,手机侧的显示界面上(例如显示界面的下半部区域)显示提示框1505,提示框1505中可以包括但不限于:待建立多屏协同连接的设备的图标1505a、“取消”选项1505b和“连接”选项1505c。“取消”选项1505b用于取消当前连接建立流程,并且取消提示框显示。示例性的,用户点击“连接”选项1505c,手机响应于用户的操作行为,与平板建立多屏协同连接。具体连接建立过程可参照多屏协同技术的具体实施例,本申请不再赘述。
参照图15d,示例性的,手机与平板建立多屏协同连接后,平板的显示界面上(可以是平板上的任意区域)显示多屏协同窗口1507,多屏协同窗口1507上显示手机的显示界面,也就是说,手机的显示界面上包括的所有控件和图像均会在该多屏协同窗口上实时显示。举例说明,手机可将手机的部分或全部显示界面发送至平板,平板在多屏协同窗口1507中显示手机发送的部分或全部显示界面。
示例性的,用户点击平板的多屏协同窗口中显示手机的显示界面上的短视频应用A的图标1507a,平板接收用户操作,并将用户操作(包括用户操作对应的压力值和位置坐标等)发送至手机,手机可获取到用户在平板的多屏协同窗口上的操作行为,手机响应于用户的操作行为,在手机的显示界面上显示短视频应用A的应用界面。示例性的,手机将当前显示的界面,即包含短视频应用A的应用界面的界面发送至平板,相应的,平板基于手机发送的界面,在多屏协同窗口1507中同样显示手机的当前界面,即短视频应用A的应用界面,如图15e(1)所示。可选地,在本申请实施例中,在多屏协同场景下,短视频应用A的应用界面(包括手机侧的界面和平板侧的多屏协同窗口1507中的界面)默认调用平板的摄像头(例如平板的前置摄像头或后置摄像头,本申请实施例中以默认调用平板的前置摄像头为例),也就是说,短视频应用A的应用界面(包括手机侧的界面和平板侧的多屏协同窗口1507中的界面)中显示的是平板的前置摄像头采集的图像。举例说明,平板将前置摄像头实时采集的图像发送至手机,手机对图像进行相应处理后,向平板发送包括前置摄像头采集的图像的短视频应用A的应用界面,平板可响应于接收到的包括前置摄像头采集的图像的短视频应用A的应用界面,在多屏协同窗口1507中显示接收到的界面。
可选地,参照图15e(2),用户可从手机的上边缘向下滑动以在手机的显示界面的上部显示下拉通知栏1510,示例性的,下拉通知栏1510包括菜单栏和通知栏,通知栏中包括“已连接“Huawei平板””的提示信息,以提示当前连接的多屏协同的对端设备,示例性的,通知栏中可包括“断开”选项,用于指示断开与对端设备的多屏协同连接,通知栏中还可以包括“录制平板屏幕”的选项,用于指示对平板中的多屏协同窗口1507中的操作和显示界面进行录制,可选地,通知栏中还包括“音视频切换到手机”选项1510a,用于指示调用手机的摄像头,也就是说,若用户点击该选项,手机响应于用户操作,调用手机的摄像头,即手机的短视频应用A的预览界面和平板的多屏协同窗口1507中的短视频应用A的预览界面中显示的均为手机的摄像头采集到的图像。需要说明的是,用户点击“音视频切换到手机”选项1510a后,该选项内容变为“音视频切换到平板”选项,也就是说,用户再次点击该选项后,摄像头切换为平板的摄像头。
结合图15d和图15e,示例性的,多屏协同应用调用前置摄像头,具体创建流程和录制流程可参照场景一中的相关描述,此处不赘述。示例性的,多屏协同应用将获取到的前置摄像头采集的图像通过多屏协同连接传输至手机,手机的短视频应用A的应用界面显示平板侧发送的图像,示例性的,平板的多屏协同窗口上显示的手机的显示界面与手机同步,即如图15e所示。可选地,在多屏协同的过程中,手机可以息屏,以降低手机功耗。
继续参照图15e,示例性的,用户可点击平板的显示界面上显示的短视频应用B的图标,平板响应于用户的操作行为,启动短视频应用B,并在显示界面上显示短视频应用B的应用界面,如图15f所示。可选地,短视频应用B的应用界面可以与多屏协同窗口部分重叠、或者不重叠,本申请不做限定。
参照图15f,示例性的,平板的左侧区域显示多屏协同窗口1507,右侧区域显示短视频应用B的应用界面1511,多屏协同窗口1507与短视频应用B的应用界面1511不重叠或部分重叠。短视频应用B的应用界面1511上包括一个或多个控件,具体描述可参照上文,此处不再赘述。
结合图15e和图15f,示例性的,多屏协同应用与短视频应用B共同调用前置摄像头,具体创建流程和录制流程可参照场景一中的相关描述,此处不赘述。示例性的,短视频应用B将获取到的图像在平板的短视频应用B的应用界面上显示,多屏协同应用将获取到的图像通过多屏协同连接传输至手机。
在一种可能的实现方式中,同一电子设备中的不同应用还可以同时调用前置摄像头和后置摄像头。举例说明的,参照图15g,在手机和平板进行多屏协同的过程中,用户可点击平板的短视频应用B的应用界面中的“摄像头切换”选项1513,用于切换摄像头,即使用平板的后置摄像头。参照图15h,示例性的,平板响应于用户的操作行为,调用后置摄像头,并且,短视频应用B的应用界面(即预览界面)1511中显示平板的后置摄像头采集的图像,同时,多屏协同窗口1507中的短视频应用A的预览界面显示的仍为平板的前置摄像头采集到的图像。
结合图15g和图15h,如图16a所示为各模块的交互示意图,参照图16a,具体步骤包括:
S401,短视频应用B调用摄像头服务,摄像头服务进行相应处理。
示例性的,短视频应用B向摄像头服务发送请求消息,请求消息中包括但不限于:短视频应用B的应用ID、短视频应用B对应的配置3(配置3可以与配置2相同或不同,本申请不做限定)。
该步骤中未描述部分与S301的相关内容相同或相似,此处不再赘述。
S402,摄像头服务向多媒体管理模块输出Session3和配置3,并指示多媒体管理模块删除Session2和配置2,多媒体管理模块进行相应处理。
示例性的,以摄像头服务和多媒体管理模块已存储的信息包括Session1和配置1、Session2和配置2为例,其中,Session1对应于多屏协同应用,即Session1为多屏协同应用在创建过程中生成的,Session2对应于短视频应用B,即Session2为短视频应用B在创建过程中生成的。
示例性的,摄像头服务将Session3和配置3输出至多媒体管理模块,并指示多媒体管理模块删除已存储的Session2及与Session2相关联的其它信息,即配置2。多媒体管理模块基于摄像头服务的指示,保存Session3和配置3的对应关系,并删除Session2和配置2。
该步骤中未描述部分与S302的相关内容相同或相似,此处不再赘述。
S403,摄像头服务调用Camera Hal。
S404,Camera Hal调用摄像头驱动。
S405,摄像头驱动调用后置摄像头。
需要说明的是,在短视频应用B调用后置摄像头的过程中,多屏协同应用仍然在进行录制过程,即获取前置摄像头采集的图像。
进一步需要说明的是,在多屏协同应用与短视频应用B的录制过程中,摄像头驱动将Session1和前置摄像头采集的图像对应输出至Camera Hal,并经过Camera Hal将Session1和前置摄像头采集的图像传输至摄像头服务。相应的,摄像头驱动将Session3和后置摄像头采集的图像对应输出至Camera Hal,并经过Camera Hal将Session3和后置摄像头采集的图像传输至摄像头服务。录制过程的其它未描述部分与场景一中短视频应用A调用前置摄像头的录制过程相同或相似,此处不再赘述。
进一步需要说明的是,在多屏协同应用调用前置摄像头和短视频应用B调用后置摄像头的过程中,多媒体管理模块基于Session1对应的配置1,对前置摄像头采集的图像进行相应处理,以及,基于Session3对应的配置3,对后置摄像头采集的图像进行相应处理。
参照图15i(1),示例性的,用户可在平板的显示界面上从右边缘向中心滑动,平板响应于用户的操作行为,在右边缘区域显示侧边栏1515(具体描述可参照上文,此处不赘述)。如图15i(2)所示,示例性的,用户点击(也可以是拖动)侧边栏中的相机应用图标1515a,平板响应于用户的操作行为,将右侧区域进行分屏,包括显示窗口1511和显示窗口1517,其中,显示窗口1511用于显示短视频应用B的应用界面,显示窗口1517用于显示相机应用的应用界面,如图15j所示。参照图15j,示例性的,相机应用的应用界面包括一个或多个控件,例如可以包括拍摄模式选项1519,拍摄模式选项1519进一步包括多个子模式,例如可以包括夜景模式选项、录像模式选项、拍摄模式选项、双景拍摄模式选项以及更多选项。示例性的,在本申请实施例中,以相机应用启动后默认进入拍照模式,并且调用前置摄像头为例进行说明。
结合图15i和图15j,如图16b为各模块的交互示意图,参照图16b,具体包括:
S501,相机应用调用摄像头服务,摄像头服务进行相应处理。
示例性的,相机应用向摄像头服务发送请求消息,请求消息中包括但不限于:相机应用的应用ID、相机应用对应的配置4(配置4可以与配置1、配置3相同或不同,本申请不做限定)。
该步骤中未描述部分与S301的相关内容相同或相似,此处不再赘述。
S502,摄像头服务向多媒体管理模块输出Session4和配置4,多媒体管理模块进行相应处理。
示例性的,多媒体管理模块当前存储的信息包括:Session1和配置1、Session3和配置3以及Session4和配置4,以及各Session与配置之间的对应关系。
该步骤中未描述部分可参照S302的相关描述,此处不赘述。
示例性的,在录制过程中,多媒体管理模块将前置摄像头采集的图像进行复制,并基于配置1对其中一个图像进行相应处理,生成图像A,基于配置4对另一个图像进行相应处理,生成图像B,摄像头服务可将图像A输出至多屏协同应用,并将图像B输出至相机应用。以及,多媒体管理模块基于配置3对后置摄像头采集的图像进行相应处理,生成图像C,摄像头服务将图像C输出至短视频应用B。具体细节可参照上文中的描述,此处不再赘述。
参照图15k(1),示例性的,用户可点击相机应用的应用界面上的双景拍摄模式选项1519a,平板响应于用户的操作行为,将显示窗口1517进一步进行分屏显示,包括显示窗口1521和显示窗口1523,如图15k(2)所示。示例性的,显示窗口1521和显示窗口1523包括相机应用的预览界面,显示窗口1521的预览界面以及显示窗口1523的预览界面均显示前置摄像头采集的图像。示例性的,显示窗口1521和显示窗口1523中还包括一个或多个控件,例如可以包括“摄像头切换”选项和“变焦”选项1523a。示例性的,用户可点击显示窗口1523中的“变焦”选项1523a,例如选择3倍变焦模式,则显示窗口1523当前显示的是前置摄像头在3倍变焦模式下采集的图像。
结合图15k,示例性的,利用前置摄像头实现双景拍摄的过程中,相机应用需要分别调用前置摄像头中的前置摄像头1和前置摄像头2(前置摄像头1和前置摄像头2的描述可参照图4)。举例说明,假设相机应用在图15j所示的拍摄过程中使用的是前置摄像头1,并且,多屏协同应用同样使用的是前置摄像头1。相机应用切换到双景拍摄模式后,相机应用需要调用前置摄像头2,示例性的,相机应用向摄像头服务发送调用请求,摄像头服务响应于接收到的调用请求,生成Session5,并保存Session5、配置5及应用ID的对应关系,以及,摄像头服务将Session5和配置5输出至多媒体管理模块。未描述部分与上文中各创建过程类似,此处不再赘述。
下面结合图16c所示为各模块的交互示意图对录制过程进行说明,参照图16c,为区分前置摄像头1和前置摄像头2获取到的图像,以前置摄像头1采集到的图像为前置图像1、前置摄像头2采集到的图像为前置图像2为例进行说明,示例性的,前置摄像头1将采集到的前置图像1通过摄像头驱动、Camera Hal输出至摄像头服务,前置摄像头2将采集到的前置图像2通过摄像头驱动、Camera Hal输出至摄像头服务,后置摄像头将采集到的后置图像通过摄像头驱动、Camera Hal输出至摄像头服务。
继续参照图16c,摄像头服务与多媒体模块进行交互,示例性的,摄像头服务将前置图像1、Session1和Session4对应输出至多媒体管理模块,相应的,多媒体管理模块将前置图像1进行复制,并基于Session1对应的配置1对其中一个图像进行处理,生成前置图像A,并且,基于Session4对应的配置4对另外一个图像进行处理,生成前置图像B。示例性的,摄像头服务将前置图像2和Session5对应输出至多媒体管理模块,相应的,多媒体管理模块基于Session5对应的配置5对前置图像2进行处理,生成前置图像C。示例性的,摄像头服务将后置图像和Session3对应输出至多媒体管理模块,相应的,多媒体管理模块基于Session3对应的配置3对后置图像进行处理,生成后置图像A。需要说明的是,摄像头服务向多媒体管理模块输出上述信息的步骤不分先后。
仍参照图16c,示例性的,多媒体管理模块将前置图像A和Session1、前置图像B和Session4、前置图像C和Session5以及后置图像A和Session3对应输出至摄像头服务。摄像头服务可基于各Session与应用ID的对应关系,将前置图像A输出至多屏协同应用,将后置图像A输出至短视频应用B,将前置图像B和前置图像C输出至相机应用。相应的,相机应用可在显示窗口1521中显示前置图像B,在显示窗口1523中显示前置图像C。
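上述"基于各Session与应用ID的对应关系分发图像"的逻辑,可用如下示意性代码(Python)简要示意。其中SESSION_TO_APP映射取自上文示例,函数名route_images为本文为说明而假设的:

```python
SESSION_TO_APP = {
    "Session1": "多屏协同应用",
    "Session3": "短视频应用B",
    "Session4": "相机应用",
    "Session5": "相机应用",
}


def route_images(outputs):
    """outputs: {session_id: 图像},按 Session 与应用 ID 的对应关系汇总分发结果。"""
    delivered = {}
    for session_id, image in outputs.items():
        app_id = SESSION_TO_APP[session_id]
        delivered.setdefault(app_id, []).append(image)
    return delivered


result = route_images({
    "Session1": "前置图像A",
    "Session3": "后置图像A",
    "Session4": "前置图像B",
    "Session5": "前置图像C",
})
print(result["相机应用"])  # 相机应用同时收到两路前置图像
```

该示意代码表达的是:同一应用可对应多个Session(例如相机应用在双景拍摄模式下),分发时按应用ID汇总各Session的处理结果。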
参照图15l(1),示例性的,用户可点击相机应用的应用界面上,显示窗口1521中的“摄像头切换”选项1521a,平板响应于用户的操作行为进行相应处理。参照图15l(2),当前平板的显示方式包括:多屏协同窗口1507中显示的是手机中的短视频应用A的应用界面(即预览界面),并且,该应用界面中显示的是平板的前置摄像头采集的图像。显示窗口1511中显示的是平板的短视频应用B的应用界面,并且,该应用界面中显示的是平板的后置摄像头采集的图像。显示窗口1521显示的是平板中的相机应用的应用界面(即预览界面),并且,该应用界面显示的是平板的后置摄像头采集的图像(在未进行拍摄或录制之前,显示的为摄像头采集的预览图像)。显示窗口1523显示的是平板中的相机应用的应用界面,并且,该应用界面显示的是平板的前置摄像头在3倍变焦模式下采集的图像。
结合图15l,图16d为示例性示出的各模块的交互示意图,示例性的,图16d具体示出的是各模块在录制过程的交互示意图,需要说明的是,在创建过程中,由于相机应用不再调用前置摄像头2,转为调用后置摄像头,因此,摄像头服务和多媒体管理模块删除已保存的Session5及对应的信息(例如配置5),并建立Session6及对应的信息(例如配置6)之间的对应关系,其中,Session6即为基于相机应用调用后置摄像头时创建的,具体创建过程可参照上述各实例中的创建过程,此处不再赘述。
参照图16d,为区分前置摄像头采集的图像(可以理解为是前置摄像头1)和后置摄像头采集的图像,以下将前置摄像头采集的图像称为前置图像,将后置摄像头采集的图像称为后置图像,示例性的,前置摄像头采集的前置图像以及后置摄像头采集的后置图像经过摄像头驱动、Camera Hal输出至摄像头服务。摄像头服务将前置图像、Session1和Session4,以及,后置图像、Session3和Session6输出至多媒体管理模块。其中,Session1是基于多屏协同应用调用前置摄像头创建的,Session3是基于短视频应用B调用后置摄像头时创建的,Session4是基于相机应用在双景拍摄模式下调用前置摄像头创建的,Session6是基于相机应用在双景拍摄模式下调用后置摄像头时创建的。相应的,多媒体模块基于配置1对前置图像进行相应处理,生成前置图像A,基于配置3对后置图像进行处理,生成后置图像A,基于配置4对前置图像进行处理,生成前置图像B,基于配置6对后置图像进行处理,生成后置图像B。多媒体管理模块将Session1和前置图像A、Session3和后置图像A、Session4和前置图像B、以及Session6和后置图像B及各Session与图像的对应关系输出至摄像头服务。摄像头服务可基于各Session与应用ID的对应关系,将前置图像A输出至多屏协同应用,将后置图像A输出至短视频应用B,将前置图像B输出至相机应用,将后置图像B输出至相机应用。
需要说明的是,本申请实施例中示出的各窗口的排布方式仅为示意性举例,本申请不做限定。
进一步需要说明的是,本申请实施例中仅以相机应用的双景拍摄模式为例进行说明,实际上相机应用或其它具有拍摄功能的应用还可以具有三景拍摄模式或四景拍摄模式等,例如,在相机应用在拍摄过程中触发三景拍摄模式时,相机应用的应用界面可以包括三个显示窗口,一个显示窗口用于显示后置摄像头在广角模式拍摄的画面,另一个显示窗口用于显示后置摄像头在变焦模式拍摄的画面,第三个显示窗口用于显示前置摄像头拍摄的画面,本申请对此不做限定。
进一步需要说明的是,图15a~图15l所示的各示例同样可应用于单一电子设备中,例如,手机中的多个应用可同时调用前置摄像头和/或后置摄像头,本申请不做限定。
在一种可能的实现方式中,本申请实施例中的摄像头调用方法还可以应用于应用分身调用不同摄像头的场景,举例说明,手机可响应于用户操作,启动应用分身功能,主页面可显示有即时通信应用图标以及即时通信应用分身图标,用户点击即时通信应用图标,以启动即时通信应用,即时通信应用启动后,即时通信应用可响应于接收到的用户操作,启动拍摄功能,并调用前置摄像头,以及在即时通信应用的应用界面中显示前置摄像头实时采集到的图像。在即时通信应用显示前置摄像头采集的图像的过程中,用户可通过操作侧边栏等方式,启动即时通信应用分身,例如,点击侧边栏中包括的即时通信应用分身图标,手机可分屏显示即时通信应用的应用界面和即时通信应用分身的应用界面。需要说明的是,即时通信应用和即时通信应用分身可具有不同的账号,也就是说,用户可通过账号A登录即时通信应用,并通过账号B登录即时通信应用分身,示例性的,即时通信应用分身启动后,可响应于接收到的用户操作,启动拍摄功能,并调用前置摄像头,以及在即时通信应用分身界面也显示前置摄像头和/或后置摄像头实时采集的图像,即本申请实施例中描述的两个不同应用(例如抖音应用和快手应用)调用同一个摄像头的方法,也适用于使用不同账号登录的两个同一应用(例如微信应用及微信应用的分身)的场景,具体实现方式可参照上述方法实施例的相关步骤,此处不再赘述。
如图17所示为本申请实施例提供的一种摄像头调用方法的流程示意图,参照图17,具体包括:
S601,电子设备显示第一应用的第一预览界面,第一预览界面显示第一摄像头在第一时刻采集的图像。
示例性的,第一应用和第二应用可以为视频通话应用、直播应用、具有扫码功能的应用中的任意一种。
当第一应用为短视频应用A时,第一应用的第一预览界面可参照图6(2)示出的短视频应用A的应用界面。
当第一应用为多屏协同应用时,第一应用的第一预览界面可参照图15e(1)示出的应用界面1507。
S602,电子设备响应于接收到的第一操作,在第二时刻显示第一界面;第一时刻与第二时刻为不同时刻;第一界面包括第一应用的第二预览界面和第二应用的第三预览界面;其中,第二预览界面和第三预览界面显示第一摄像头在第二时刻采集的图像。
示例性的,电子设备中的第一应用在调用第一摄像头的过程中,电子设备可响应于接收到的第一操作,显示包含第一应用的第二预览界面和第二应用的第三预览界面的第一界面。其中,第二预览界面和第三预览界面均显示第一摄像头实时采集的图像。
示例性的,第一操作可以包括第一子操作和第二子操作,以图6为例,第一子操作为从屏幕的右边缘向左侧滑动(或者也可以是从屏幕的左边缘向右侧滑动)的操作,以调用侧边栏,相应的,电子设备基于第一子操作,在第一界面显示侧边栏,侧边栏包括第二应用的应用图标,如图6(3)所示。第二子操作为对第二应用的应用图标的第二子操作,相应的,电子设备响应于第二子操作,在第二时刻显示第一界面,如图6(4)所示。
示例性的,第一界面为分屏界面,分屏界面的一个界面中包括第二预览界面,分屏界面的另一个界面中包括第三预览界面。举例说明,如图6所示,分屏界面包括界面613和界面615,其中,界面613中包括第一应用(例如短视频应用A)的预览界面,界面615中包括第二应用(例如短视频应用B)的预览界面。
示例性的,第二预览界面,和/或,第三预览界面为悬浮界面。举例说明,如图2所示,电子设备(即平板)的显示界面显示多个悬浮界面(也可以称为悬浮窗口),每个悬浮界面包括一个应用的预览界面,例如可以包括短视频应用A的预览界面和短视频应用B的预览界面。
示例性的,第二预览界面包括第一录制时长,第一录制时长用于指示第一应用从开始录制时刻起的录制时长。举例说明,如图6所示,在图6(2)中,第一应用在拍摄过程中,显示录制时长,在图6(3)中,电子设备启动第二应用,第二应用调用前置摄像头,并显示前置摄像头实时采集的图像,同时,第一应用仍然在继续拍摄并记录录制时长。
在上述方法实施例的基础上,电子设备在显示第一应用的第一预览界面之前,还包括如下步骤:获取第一应用的第一配置信息;配置第一会话,并保存第一会话、第一应用和第一配置信息的关联关系;在第二时刻显示第一界面,还包括:获取第二应用的第二配置信息;配置第二会话,并保存第二会话、第二应用和第二配置信息的关联关系。
示例性的,参照图11,电子设备中的摄像头服务获取第一应用下发的第一配置信息(即配置1),并配置第一会话(即Session1)。摄像头服务保存Session1和配置1之间的关联关系,并且,将关联关系发送给多媒体管理模块,多媒体管理模块同样保存Session1和配置1之间的关联关系。电子设备启动第二应用,摄像头服务获取第二应用下发的第二配置信息(即配置2),并配置第二会话(即Session2)。摄像头服务保存Session2和配置2之间的关联关系,并且,将关联关系发送给多媒体管理模块,多媒体管理模块同样保存Session2和配置2之间的关联关系。
在上述方法实施例的基础上,电子设备在第二时刻显示第一界面,包括:获取第一摄像头在第二时刻采集到的图像;基于第一会话,确定与第一会话关联的第一应用和第一配置信息,以及基于第二会话,确定与第二会话关联的第二应用和第二配置信息;根据第一摄像头在第二时刻采集到的图像和第一配置信息,获取第一子图像,以及,根据第一摄像头在第二时刻采集到的图像和第二配置信息,获取第二子图像;在第二预览界面显示第一子图像,并且,在第三预览界面显示第二子图像。
示例性的,参照图14,电子设备中的多媒体管理模块可基于Session1和配置1以及Session2和配置2的对应关系,确定当前有两个应用在调用摄像头,即生成两个图像,示例性的,多媒体管理模块可基于配置1对图像进行处理,得到图像A(即第一子图像),并基于配置2对图像进行处理,得到图像B(即第二子图像)。电子设备的第一应用的第二预览界面(例如图6中的613)可显示图像A,第二应用的第三预览界面(例如图6中的615)可显示图像B。
在一种可能的实现方式中,第一配置信息用于指示第一分辨率,第二配置信息用于指示第二分辨率,第一分辨率与第二分辨率相同或不同。
在一种可能的实现方式中,第一配置信息用于指示第一变焦倍数,第二配置信息用于指示第二变焦倍数,第一变焦倍数与第二变焦倍数相同或不同。
在上述方法实施例的基础上,电子设备响应于接收到的第二操作,获取第一应用的第三配置信息,第三配置信息用于指示第三变焦倍数,第三变焦倍数与第一变焦倍数不同。电子设备保存第一会话、第一应用和第三配置信息的关联关系;电子设备获取第一摄像头在第三时刻采集到的图像;电子设备基于第一会话,确定与第一会话关联的第一应用和第三配置信息,以及基于第二会话,确定与第二会话关联的第二应用和第二配置信息;电子设备根据第一摄像头在第三时刻采集到的图像和第三配置信息,获取第三子图像,以及,根据第一摄像头在第三时刻采集到的图像和第二配置信息,获取第四子图像;电子设备在第二预览界面显示第三子图像,并且,在第三预览界面显示第四子图像。
示例性的,仍参照图6,在第一应用(例如短视频应用A)和第二应用(例如短视频应用B)同时调用第一摄像头(例如前置摄像头)时,用户可点击短视频应用A的应用界面613上的变焦选项605,以调节第一应用的图像的变焦倍数(即第三变焦倍数)。示例性的,短视频应用A可向摄像头服务下发当前变焦倍数对应的配置信息(即第三配置信息)。摄像头服务与多媒体管理模块可进行相应处理,以更新第一应用与配置信息之间的关联关系,并基于新的配置信息,即第三配置信息,对摄像头采集到的图像进行处理,以及,基于第二应用的第二配置信息对摄像头采集到的图像进行处理,也就是说,在第一应用的配置信息,例如变焦倍数(也可以是分辨率)改变后,第二应用的配置信息可维持不变,即第一应用显示的是更新后的变焦倍数对应的图像,第二应用显示的是原变焦倍数对应的图像。
在上述方法实施例的基础上,方法还包括:电子设备响应于接收到的第三操作,在第四时刻显示第二界面;第二界面包括第一应用的第四预览界面和第一应用的第五预览界面;其中,第四预览界面显示第一摄像头在第四时刻采集到的图像,并且,第五预览界面显示第二摄像头在第四时刻采集到的图像,第一摄像头与第二摄像头不相同。
示例性的,第一应用的第五预览界面可以是第一应用的应用分身的界面。
示例性的,第四预览界面和第五预览界面还可以是第一应用的双景拍摄功能的两个界面,如图15k所示,例如第四预览界面可以是界面1521,第五预览界面可以是界面1523。
在上述方法实施例的基础上,方法还包括:电子设备响应于接收到的第四操作,在第二预览界面显示第一摄像头在第五时刻采集到的图像,并且,第三预览界面显示第二摄像头在第五时刻采集到的图像,第二摄像头与第一摄像头不相同。
示例性的,如图15g所示,第三预览界面(例如界面1511)包括摄像头切换选项1513,第四操作用于指示对摄像头切换选项的操作,即用户点击摄像头切换选项1513,如图15h所示,第二预览界面(例如多屏协同应用的应用界面1507)显示平板的前置摄像头实时采集的图像,第三预览界面(例如短视频应用B的应用界面1511)显示平板的后置摄像头实时采集的图像。
在一种可能的实现方式中,第一摄像头为第一前置摄像头,第二摄像头为第一后置摄像头。
示例性的,如图15h所示,多屏协同应用调用平板的前置摄像头,短视频应用B调用平板的后置摄像头。
在一种可能的实现方式中,第一摄像头为第一前置摄像头,第二摄像头为第二前置摄像头,第一前置摄像头与第二前置摄像头不同。
示例性的,如图15k所示,多屏协同应用调用平板的其中一个前置摄像头,相机应用调用平板的两个前置摄像头,以实现双景拍摄。
示例性的,本申请实施例中,多屏协同应用可以调用其中一个前置摄像头,相机应用可调用另一个前置摄像头。可选地,两个前置摄像头的变焦倍数可以不相同。例如,一个前置摄像头为广角摄像头,多屏协同应用中显示的图像即为广角前置摄像头采集的图像,另一个前置摄像头为多倍变焦摄像头,相机应用显示的图像即为多倍变焦摄像头采集的图像。
在一种可能的实现方式中,第一摄像头为第一后置摄像头,第二摄像头为第二后置摄像头,第一后置摄像头与第二后置摄像头不同。示例性的,同一个应用或者不同的应用,可同时调用摄像头中的多个后置摄像头。
可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
一个示例中,图18示出了本申请实施例的一种装置1800的示意性框图。装置1800可包括:处理器1801和收发器/收发管脚1802,可选地,还包括存储器1803。
装置1800的各个组件通过总线1804耦合在一起,其中总线1804除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图中将各种总线都称为总线1804。
可选地,存储器1803可以用于前述方法实施例中的指令。该处理器1801可用于执行存储器1803中的指令,并控制接收管脚接收信号,以及控制发送管脚发送信号。
装置1800可以是上述方法实施例中的电子设备或电子设备的芯片。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的摄像头调用方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的摄像头调用方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的摄像头调用方法。
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
本申请各个实施例的任意内容,以及同一实施例的任意内容,均可以自由组合。对上述内容的任意组合均在本申请的范围之内。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。
结合本申请实施例公开内容所描述的方法或者算法的步骤可以硬件的方式来实现,也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(Random Access Memory,RAM)、闪存、只读存储器(Read Only Memory,ROM)、可擦除可编程只读存储器(Erasable Programmable ROM,EPROM)、电可擦可编程只读存储器(Electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(CD-ROM)或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。

Claims (23)

  1. 一种摄像头调用方法,其特征在于,包括:
    显示第一应用的第一预览界面,所述第一预览界面显示第一摄像头在第一时刻采集的图像;
    响应于接收到的第一操作,在第二时刻显示第一界面;所述第一时刻与所述第二时刻为不同时刻;
    所述第一界面包括所述第一应用的第二预览界面和第二应用的第三预览界面;其中,所述第二预览界面和所述第三预览界面显示所述第一摄像头在所述第二时刻采集的图像。
  2. 根据权利要求1所述的方法,其特征在于,所述第二预览界面包括第一录制时长,所述第一录制时长用于指示所述第一应用从开始录制时刻起的录制时长。
  3. 根据权利要求1所述的方法,其特征在于,所述显示第一应用的第一预览界面之前,还包括:
    获取所述第一应用的第一配置信息;
    配置第一会话,并保存所述第一会话、所述第一应用和所述第一配置信息的关联关系;
    所述在第二时刻显示第一界面,还包括:
    获取所述第二应用的第二配置信息;
    配置第二会话,并保存所述第二会话、所述第二应用和所述第二配置信息的关联关系。
  4. 根据权利要求3所述的方法,其特征在于,所述在第二时刻显示第一界面,包括:
    获取所述第一摄像头在所述第二时刻采集到的图像;
    基于所述第一会话,确定与所述第一会话关联的所述第一应用和所述第一配置信息,以及基于所述第二会话,确定与所述第二会话关联的所述第二应用和所述第二配置信息;
    根据所述第一摄像头在所述第二时刻采集到的图像和所述第一配置信息,获取第一子图像,以及,根据第一摄像头在所述第二时刻采集到的图像和所述第二配置信息,获取第二子图像;
    在所述第二预览界面显示所述第一子图像,并且,在所述第三预览界面显示所述第二子图像。
  5. 根据权利要求3所述的方法,其特征在于,所述第一配置信息用于指示第一分辨率,所述第二配置信息用于指示第二分辨率,所述第一分辨率与所述第二分辨率相同或不同。
  6. 根据权利要求3所述的方法,其特征在于,所述第一配置信息用于指示第一变焦倍数,所述第二配置信息用于指示第二变焦倍数,所述第一变焦倍数与所述第二变焦倍数相同或不同。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    响应于接收到的第二操作,获取所述第一应用的第三配置信息,所述第三配置信息用于指示第三变焦倍数,所述第三变焦倍数与所述第一变焦倍数不同;
    保存所述第一会话、所述第一应用和所述第三配置信息的关联关系;
    获取所述第一摄像头在第三时刻采集到的图像;
    基于所述第一会话,确定与所述第一会话关联的所述第一应用和所述第三配置信息,以及基于所述第二会话,确定与所述第二会话关联的所述第二应用和所述第二配置信息;
    根据所述第一摄像头在所述第三时刻采集到的图像和所述第三配置信息,获取第三子图像,以及,根据第一摄像头在所述第三时刻采集到的图像和所述第二配置信息,获取第四子图像;
    在所述第二预览界面显示所述第三子图像,并且,在所述第三预览界面显示所述第四子图像。
  8. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    响应于接收到的第三操作,在第四时刻显示第二界面;
    所述第二界面包括所述第一应用的第四预览界面和所述第一应用的第五预览界面;其中,所述第四预览界面显示所述第一摄像头在所述第四时刻采集到的图像,并且,所述第五预览界面显示第二摄像头在所述第四时刻采集到的图像,所述第一摄像头与所述第二摄像头不相同。
  9. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    响应于接收到的第四操作,在所述第二预览界面显示所述第一摄像头在第五时刻采集到的图像,并且,所述第三预览界面显示第二摄像头在所述第五时刻采集到的图像,所述第二摄像头与所述第一摄像头不相同。
  10. 根据权利要求8或9所述的方法,其特征在于,
    所述第一摄像头为第一前置摄像头,所述第二摄像头为第一后置摄像头;
    或者,
    所述第一摄像头为第一前置摄像头,所述第二摄像头为第二前置摄像头,所述第一前置摄像头与所述第二前置摄像头不同;
    或者,
    所述第一摄像头为第一后置摄像头,所述第二摄像头为第二后置摄像头,所述第一后置摄像头与所述第二后置摄像头不同。
  11. 根据权利要求9所述的方法,其特征在于,所述第三预览界面还包括摄像头切换选项,所述第四操作用于指示对所述摄像头切换选项的操作。
  12. 根据权利要求1所述的方法,其特征在于,所述第一界面为分屏界面,所述分屏界面的一个界面中包括所述第二预览界面,所述分屏界面的另一个界面中包括所述第三预览界面。
  13. 根据权利要求1所述的方法,其特征在于,所述第二预览界面,和/或,所述第三预览界面为悬浮界面。
  14. 根据权利要求1所述的方法,其特征在于,所述响应于接收到的第一操作,在第二时刻显示第一界面,包括:
    接收第一子操作,在所述第一界面显示侧边栏,所述侧边栏包括所述第二应用的应用图标;
    接收对所述第二应用的应用图标的第二子操作,在所述第二时刻显示所述第一界面。
  15. 根据权利要求1所述的方法,其特征在于,所述第一应用为相机应用,所述第二应用为以下任意一种:
    视频通话应用、直播应用、具有扫码功能的应用。
  16. 一种摄像头调用系统,其特征在于,包括:第一电子设备和第二电子设备,所述第一电子设备通过第一连接与所述第二电子设备进行数据交互,所述第一电子设备包括第一摄像头;
    第一电子设备,用于:
    显示第一应用的第一预览界面,在所述第一预览界面中显示第二电子设备发送的第一界面,所述第一界面中包括所述第一摄像头在第一时刻采集的图像;
    所述第一电子设备,还用于:
    响应于接收到的第一操作,在第二时刻显示第二界面;所述第一时刻与所述第二时刻为不同时刻;其中,所述第二界面包括所述第一应用的第二预览界面和第二应用的第三预览界面;所述第二预览界面中显示所述第二电子设备发送的第三界面,所述第三界面和所述第三预览界面包括所述第一摄像头在所述第二时刻采集的图像。
  17. 根据权利要求16所述的系统,其特征在于,所述第二电子设备,用于:
    接收所述第一电子设备发送的所述第一摄像头在第一时刻采集的图像;
    在所述第一界面中显示所述第一摄像头在第一时刻采集的图像;
    通过所述第一连接,向所述第一电子设备发送所述第一界面。
  18. 根据权利要求16所述的系统,其特征在于,所述第二电子设备,用于:
    接收所述第一电子设备发送的所述第一摄像头在第一时刻采集的图像;
    生成包含所述第一摄像头在第一时刻采集的图像的第一界面;其中,所述第二电子设备处于息屏状态;
    通过所述第一连接,向所述第一电子设备发送所述第一界面。
  19. 根据权利要求16所述的系统,其特征在于,所述第一应用为多屏协同应用。
  20. 根据权利要求16所述的系统,其特征在于,所述第一预览界面为悬浮窗口。
  21. 一种电子设备,其特征在于,包括:存储器和处理器,所述存储器和所述处理器耦合;所述存储器存储有程序指令,所述程序指令由所述处理器执行时,使得所述电子设备执行如权利要求1-15中任意一项所述的摄像头调用方法。
  22. 一种计算机可读存储介质,其特征在于,包括计算机程序,当所述计算机程序在电子设备上运行时,使得所述电子设备执行如权利要求1-15中任意一项所述的摄像头调用方法。
  23. 一种计算机可读存储介质,其特征在于,包括计算机程序,当所述计算机程序在电子设备上运行时,使得所述电子设备执行如权利要求16-20中的第一电子设备或第二电子设备执行的步骤。
PCT/CN2021/131243 2020-11-20 2021-11-17 摄像头调用方法、系统及电子设备 WO2022105803A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21893950.2A EP4231614A4 (en) 2020-11-20 2021-11-17 CAMERA CALLING METHOD AND SYSTEM, AND ELECTRONIC DEVICE
US18/037,719 US20230403458A1 (en) 2020-11-20 2021-11-17 Camera Invocation Method and System, and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011315380.X 2020-11-20
CN202011315380.XA CN114554000B (zh) 2020-11-20 2020-11-20 摄像头调用方法、系统、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022105803A1 true WO2022105803A1 (zh) 2022-05-27

Family

ID=81659730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131243 WO2022105803A1 (zh) 2020-11-20 2021-11-17 摄像头调用方法、系统及电子设备

Country Status (4)

Country Link
US (1) US20230403458A1 (zh)
EP (1) EP4231614A4 (zh)
CN (2) CN116886810A (zh)
WO (1) WO2022105803A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115484403A (zh) * 2022-08-08 2022-12-16 荣耀终端有限公司 录像方法和相关装置
CN116048436A (zh) * 2022-06-17 2023-05-02 荣耀终端有限公司 应用界面显示方法、电子设备及存储介质

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN117714854A (zh) * 2022-09-02 2024-03-15 华为技术有限公司 摄像头调用方法、电子设备、可读存储介质和芯片

Citations (5)

Publication number Priority date Publication date Assignee Title
JPH0746568A (ja) * 1993-07-29 1995-02-14 Canon Inc 情報処理装置
CN101594510A (zh) * 2009-06-23 2009-12-02 腾讯科技(深圳)有限公司 一种实现摄像头资源共享的方法及系统
CN107483812A (zh) * 2017-08-02 2017-12-15 深圳依偎控股有限公司 一种多平台并行直播的方法及装置
CN107786794A (zh) * 2016-08-25 2018-03-09 三星电子株式会社 向应用提供由图像传感器获取的图像的电子装置和方法
CN110753187A (zh) * 2019-10-31 2020-02-04 芋头科技(杭州)有限公司 一种摄像头的控制方法及设备

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10423194B2 (en) * 2017-01-09 2019-09-24 Samsung Electronics Co., Ltd. Electronic device and image capture method thereof
CN111818669B (zh) * 2020-06-04 2022-09-02 青岛海信移动通信技术股份有限公司 移动终端及其数据传输方法

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JPH0746568A (ja) * 1993-07-29 1995-02-14 Canon Inc 情報処理装置
CN101594510A (zh) * 2009-06-23 2009-12-02 腾讯科技(深圳)有限公司 一种实现摄像头资源共享的方法及系统
CN107786794A (zh) * 2016-08-25 2018-03-09 三星电子株式会社 向应用提供由图像传感器获取的图像的电子装置和方法
CN107483812A (zh) * 2017-08-02 2017-12-15 深圳依偎控股有限公司 一种多平台并行直播的方法及装置
CN110753187A (zh) * 2019-10-31 2020-02-04 芋头科技(杭州)有限公司 一种摄像头的控制方法及设备

Non-Patent Citations (1)

Title
See also references of EP4231614A4

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN116048436A (zh) * 2022-06-17 2023-05-02 荣耀终端有限公司 应用界面显示方法、电子设备及存储介质
CN116048436B (zh) * 2022-06-17 2024-03-08 荣耀终端有限公司 应用界面显示方法、电子设备及存储介质
CN115484403A (zh) * 2022-08-08 2022-12-16 荣耀终端有限公司 录像方法和相关装置
CN115484403B (zh) * 2022-08-08 2023-10-24 荣耀终端有限公司 录像方法和相关装置

Also Published As

Publication number Publication date
EP4231614A4 (en) 2024-04-10
CN116886810A (zh) 2023-10-13
EP4231614A1 (en) 2023-08-23
US20230403458A1 (en) 2023-12-14
CN114554000A (zh) 2022-05-27
CN114554000B (zh) 2023-06-20

Similar Documents

Publication Publication Date Title
WO2020238871A1 (zh) 一种投屏方法、系统及相关装置
WO2021051989A1 (zh) 一种视频通话的方法及电子设备
WO2022105803A1 (zh) 摄像头调用方法、系统及电子设备
EP4002829A1 (en) Image processing method, electronic device and cloud server
CN113687803A (zh) 投屏方法、投屏源端、投屏目的端、投屏系统及存储介质
WO2022179405A1 (zh) 一种投屏显示方法及电子设备
WO2022143077A1 (zh) 一种拍摄方法、系统及电子设备
WO2022121775A1 (zh) 一种投屏方法及设备
WO2022017393A1 (zh) 显示交互系统、显示方法及设备
CN112527174B (zh) 一种信息处理方法及电子设备
WO2021190344A1 (zh) 多屏幕显示电子设备和电子设备的多屏幕显示方法
CN112527222A (zh) 一种信息处理方法及电子设备
US20230208790A1 (en) Content sharing method, apparatus, and system
WO2022127632A1 (zh) 一种资源管控方法及设备
US20230016178A1 (en) Data sharing and instruction operation control method and system
WO2022166521A1 (zh) 跨设备的协同拍摄方法、相关装置及系统
WO2022160985A1 (zh) 一种分布式拍摄方法,电子设备及介质
WO2022127670A1 (zh) 一种通话方法、相关设备和系统
WO2022222773A1 (zh) 拍摄方法、相关装置及系统
WO2023005900A1 (zh) 一种投屏方法、电子设备及系统
US20240134591A1 (en) Projection display method and electronic device
WO2023273460A1 (zh) 一种投屏显示方法及电子设备
WO2024022307A1 (zh) 一种投屏方法及电子设备
WO2023236939A1 (zh) 应用组件交互方法及相关设备
WO2022206702A1 (zh) 蓝牙通信方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893950

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021893950

Country of ref document: EP

Effective date: 20230517

NENP Non-entry into the national phase

Ref country code: DE