WO2024087900A1 - Camera switching method and related electronic device - Google Patents

Camera switching method and related electronic device

Info

Publication number
WO2024087900A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
electronic device
camera module
request message
Prior art date
Application number
PCT/CN2023/117484
Other languages
English (en)
French (fr)
Inventor
迪清华
Original Assignee
荣耀终端有限公司
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2024087900A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display

Definitions

  • the present application relates to the field of camera switching, and in particular to a camera switching method and related electronic equipment.
  • a virtual camera refers to a camera of another electronic device that an electronic device calls to capture images. For example, there are electronic devices A and B, and a communication connection can be established between electronic devices A and B.
  • the camera of electronic device A can be used to capture the image of user A, thereby sending the image of user A to user B.
  • electronic device A can also call the camera of electronic device B to capture the image of user A, thereby sending the image of user A to user B.
  • the camera of electronic device A is a physical camera
  • the camera of electronic device B is a virtual camera.
  • the embodiments of the present application provide a camera switching method and related electronic equipment, which solve the problem of long delay when a user switches between front and rear cameras of a virtual camera.
  • an embodiment of the present application provides a camera switching method, which is applied to a first electronic device, and the method includes: the first electronic device displays a first interface, the first interface includes a preview area and a switching control, the preview area displays a first image, the first image is an image captured by a first camera, and the first camera is a camera of the first electronic device; at a first moment, in response to a first input operation, the first electronic device switches the camera that captures the image to a second camera, and the second camera is a camera of the second electronic device; at a second moment, the first electronic device displays a second image in the preview area, and the second image is an image captured by the second camera; in the process of capturing images through the second camera, the data frames cached in the buffer carry image capture parameters, and the second image is captured by the second camera according to the parameter values in the image capture parameters; at a third moment, a second input operation on the switching control is detected; at a fourth moment, in response to the second input operation, the first electronic device displays a third image in the preview area, the third image is an image captured by a third camera, and the third camera is a camera of the second electronic device that is different from the second camera.
  • each data frame cached in the buffer carries the image capture parameters, which ensures that when the electronic device switches from the second camera to the third camera and checks the integrity of the data frames in the buffer, all the data frames in the buffer are complete. Therefore, when switching from the second camera to the third camera, the electronic device does not need to reserve buffer time to wait for data frames carrying the image capture parameters to be cached in the buffer before switching. In this way, the camera switching efficiency of the electronic device is greatly improved.
  • the image acquisition parameters include at least one of the following: a maximum frame rate, an exposure compensation value, and an encoding method. In this way, the first camera can collect, expose and encode the image according to the maximum frame rate, exposure compensation value and encoding method in the image acquisition parameters, so as to avoid the collected image failing to meet the requirements of the electronic device and causing image distortion.
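The relationship between a buffered data frame and its image acquisition parameters can be pictured with a minimal sketch (Python; the field and function names are illustrative, not from the patent, which does not prescribe any data layout):

```python
from dataclasses import dataclass
from typing import Optional

# The three parameters named in the text: maximum frame rate,
# exposure compensation value, and encoding method.
@dataclass
class ImageAcquisitionParams:
    max_frame_rate: int          # e.g. 30 (Hz)
    exposure_compensation: int   # exposure compensation value (EV steps)
    encoding: str                # e.g. "H.264"

@dataclass
class DataFrame:
    image: bytes
    params: Optional[ImageAcquisitionParams] = None

def buffer_is_complete(buffer: list) -> bool:
    """Integrity check: a frame is 'complete' only if it carries params."""
    return all(frame.params is not None for frame in buffer)

buf = [DataFrame(b"...", ImageAcquisitionParams(30, 0, "H.264"))]
print(buffer_is_complete(buf))  # True: switching can proceed without waiting
```

When every cached frame carries the parameters, the integrity check passes immediately, which is the mechanism the text credits for the reduced switching delay.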
  • the second electronic device is an electronic device that establishes a multi-screen collaborative connection with the first electronic device.
  • the first electronic device includes a first application, a camera service module, a physical camera module, and a virtual camera module. Before the first electronic device displays the first interface, the method further includes: detecting a third input operation, and starting the first application; the camera service module sends a first request message to the physical camera module; when detecting that image acquisition parameters exist in the first request message, the physical camera module stores the image acquisition parameters in the virtual camera module; the physical camera module calls the first camera to acquire the first image according to the first request message; the physical camera module returns a first data frame to the first application, and the first data frame includes the first image and the image acquisition parameters.
  • after receiving the first request message, the physical camera module stores the image acquisition parameters in the virtual camera module when image acquisition parameters exist in the first request message. In this way, even if there are no image acquisition parameters in the request messages that the virtual camera module receives while it is working, the virtual camera module can add the image acquisition parameters to the data frame when returning the data frame, ensuring the integrity of the data frames it returns.
  • the camera service module will not leave buffer time to wait for the virtual camera module to return complete data frames, but will switch the camera immediately, thereby reducing the delay time of switching cameras.
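One way to read the store-and-supplement behavior described above is the following sketch (illustrative class and method names, not from the patent): the physical camera module stashes any parameters it sees in a request, and the virtual camera module attaches that stashed copy to every frame it returns.

```python
class VirtualCameraModule:
    """Proxy for the remote device's camera; keeps the last-seen params."""
    def __init__(self):
        self.stored_params = None

    def return_frame(self, image):
        # Even if the current request carried no parameters, attach the
        # stored copy so every returned data frame is complete.
        return {"image": image, "params": self.stored_params}

class PhysicalCameraModule:
    def __init__(self, virtual):
        self.virtual = virtual

    def handle_request(self, request):
        params = request.get("params")
        if params is not None:
            # Store the parameters in the virtual camera module for later.
            self.virtual.stored_params = params
        return {"image": b"local-frame", "params": params}

virtual = VirtualCameraModule()
physical = PhysicalCameraModule(virtual)
physical.handle_request({"params": {"max_frame_rate": 30}})
frame = virtual.return_frame(b"remote-frame")
print(frame["params"])  # params stashed earlier by the physical module
```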
  • the first electronic device switches the camera for capturing images to the second camera, specifically including: the first application sends a first switching instruction to the camera service module; the camera service module sends a first indication message to the physical camera module, the first indication message is used to instruct the physical camera module to forward the second request message to the virtual camera module; the camera service module sends a second request message to the physical camera module; the physical camera module sends the second request message to the virtual camera module; the virtual camera module calls the second camera to capture the second image according to the second request message; the virtual camera module returns a second data frame to the first application, the second data frame including the second image and image acquisition parameters; the virtual camera module caches the second data frame in a buffer.
  • the second data frames cached in the buffer all carry image acquisition parameters.
  • before the virtual camera module calls the second camera to capture the second image according to the second request message, the method further includes: the virtual camera module detects whether there are image acquisition parameters in the second request message; if so, the virtual camera module replaces the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the second request message.
  • after responding to the second input operation on the switching control, the method further includes: the camera service module determines whether the data frames in the buffer all carry image acquisition parameters; if it determines that they do, the camera service module sends a third request message to the physical camera module; the physical camera module sends the third request message to the virtual camera module; the virtual camera module calls the third camera to acquire a third image based on the third request message; the virtual camera module returns a third data frame to the camera service module, and the third data frame includes the third image and the image acquisition parameters.
  • before the virtual camera module calls the third camera to capture the third image based on the third request message, the method further includes: the virtual camera module detects whether there are image acquisition parameters in the third request message; if present, the virtual camera module replaces the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the third request message.
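The gating and parameter-replacement logic in the last two paragraphs can be sketched as follows (the dictionary keys and function names are illustrative assumptions, not from the patent):

```python
def maybe_replace_params(virtual_store, request):
    """If the request carries params, they supersede the stored copy."""
    if request.get("params") is not None:
        virtual_store["params"] = request["params"]

def switch_virtual_camera(buffer, send_request):
    """Forward the switch request only if every buffered frame is complete."""
    if all(frame.get("params") is not None for frame in buffer):
        send_request({"target": "third_camera"})
        return True
    return False  # incomplete frames would force a wait before switching

sent = []
buf = [{"image": b"a", "params": {"max_frame_rate": 30}}]
print(switch_virtual_camera(buf, sent.append))  # True: no extra wait needed

store = {"params": {"max_frame_rate": 30}}
maybe_replace_params(store, {"params": {"max_frame_rate": 60}})
print(store["params"])  # the request's params replaced the stored ones
```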
  • an embodiment of the present application provides an electronic device, which includes: one or more processors, a display screen and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to enable the electronic device to execute: controlling the display screen to display a first interface, the first interface includes a preview area and a switching control, the preview area displays a first image, the first image is an image captured by a first camera, and the first camera is a camera of the first electronic device; at a first moment, in response to a first input operation, switching the camera that captures the image to a second camera, and the second camera is a camera of the second electronic device; at a second moment, controlling the display screen to display a second image in the preview area, and the second image is an image captured by the second camera; in the process of capturing images through the second camera, the data frames cached in the buffer carry image capture parameters; at a third moment, in response to a second input operation on the switching control, at a fourth moment, controlling the display screen to display a third image in the preview area, the third image being an image captured by a third camera of the second electronic device that is different from the second camera.
  • the one or more processors call the computer instructions to cause the electronic device to execute: detecting a third input operation, starting a first application; sending a first request message to a physical camera module through a camera service module; upon detecting that image acquisition parameters exist in the first request message, storing the image acquisition parameters in the virtual camera module through the physical camera module; calling the first camera through the physical camera module to acquire a first image according to the first request message; and returning a first data frame to the first application through the physical camera module, the first data frame including a first image and image acquisition parameters.
  • the one or more processors call the computer instructions to cause the electronic device to execute: sending a first switching instruction to the camera service module through a first application; sending a first indication message to the physical camera module through the camera service module, the first indication message being used to instruct the physical camera module to forward a second request message to the virtual camera module; sending a second request message to the physical camera module through the camera service module; sending the second request message to the virtual camera module through the physical camera module; calling the second camera to capture a second image according to the second request message through the virtual camera module; returning a second data frame to the first application through the virtual camera module, the second data frame including a second image and image acquisition parameters; and caching the second data frame in a buffer through the virtual camera module.
  • the one or more processors call the computer instructions to cause the electronic device to execute: detecting through the virtual camera module whether there are image acquisition parameters in the second request message; if so, replacing the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the second request message through the virtual camera module.
  • the one or more processors call the computer instructions to cause the electronic device to execute: determining through the camera service module whether the data frames in the buffer all carry image acquisition parameters; if determined to be yes, sending a third request message to the physical camera module through the camera service module; sending the third request message to the virtual camera module through the physical camera module; calling the third camera through the virtual camera module to acquire a third image based on the third request message; and returning a third data frame to the camera service module through the virtual camera module, wherein the third data frame includes the third image and the image acquisition parameters.
  • the one or more processors call the computer instructions to cause the electronic device to execute: detecting through the virtual camera module whether there are image acquisition parameters in the third request message; if so, replacing the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the third request message through the virtual camera module.
  • an embodiment of the present application provides an electronic device, comprising: a touch screen, a camera, one or more processors and one or more memories; the one or more processors are coupled to the touch screen, the camera, and the one or more memories, and the one or more memories are used to store computer program code, and the computer program code includes computer instructions; when the one or more processors execute the computer instructions, the electronic device executes the method described in the first aspect or any possible implementation of the first aspect.
  • an embodiment of the present application provides a chip system, which is applied to an electronic device, and the chip system includes one or more processors, which are used to call computer instructions to enable the electronic device to execute the method described in the first aspect or any possible implementation method of the first aspect.
  • an embodiment of the present application provides a computer program product comprising instructions, which, when executed on an electronic device, enables the electronic device to execute the method described in the first aspect or any possible implementation of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, comprising instructions, which, when executed on an electronic device, causes the electronic device to execute the method described in the first aspect or any possible implementation of the first aspect.
  • FIG. 1A-FIG. 1I are scene example diagrams of a group of camera switching methods provided in an embodiment of the present application.
  • FIG. 2 is a flow chart of a camera switching method provided in an embodiment of the present application.
  • FIG. 3A-FIG. 3K are scene example diagrams of another set of camera switching methods provided in an embodiment of the present application.
  • FIG. 4 is a flowchart of video recording provided by an embodiment of the present application.
  • FIG. 5A-FIG. 5B are flow charts of another camera switching method provided in an embodiment of the present application.
  • FIG. 6A-FIG. 6B are flow charts of another camera switching method provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the software structure of the electronic device 100 provided in an embodiment of the present application.
  • a unit can be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, and/or distributed between two or more computers.
  • these units can be executed from various computer-readable media having various data structures stored thereon.
  • Units can communicate, for example, through local and/or remote processes based on signals having one or more data packets (e.g., data from one unit interacting with another unit in a local system, a distributed system, and/or across a network such as the Internet, interacting with other systems by way of the signals).
  • a virtual camera refers to a camera of another electronic device that an electronic device calls to capture images.
  • For example, there are electronic devices A and B, and a communication connection can be established between electronic devices A and B.
  • the camera of electronic device A can be used to capture the image of user A, thereby sending the image of user A to user B.
  • electronic device A can also call the camera of electronic device B to capture the image of user A, thereby sending the image of user A to user B.
  • the camera of electronic device A is a physical camera
  • the camera of electronic device B is a virtual camera.
  • Multi-screen collaboration is a distributed technology that can achieve cross-system and cross-device collaboration. After multiple devices are connected, resource sharing and collaborative operation can be achieved. Take the multi-screen collaboration between a smartphone and a tablet as an example. After the tablet turns on the multi-screen collaboration function, the smartphone can connect to the tablet. After the connection is successful, the smartphone can transmit data to the tablet.
  • Scenario 1: Electronic device 100 and electronic device 200 first establish a multi-screen collaborative connection. After the multi-screen collaborative connection is established, electronic device 100 starts the camera application. At this time, the image captured by the camera of electronic device 100 is displayed on the interface of electronic device 100. When electronic device 100 switches the video and audio to electronic device 200, the image captured by the camera of electronic device 200 is displayed on the user interface of electronic device 100.
  • the electronic device 100 is a smart phone and the electronic device 200 is a tablet.
  • the electronic device 100 includes a first camera 1011, which is a rear camera of the electronic device 100.
  • the image range that can be captured by the first camera 1011 of the electronic device 100 is a sector-shaped area 101, and within the image capture range, object 1 is included.
  • the electronic device 200 includes a second camera 1012 and a third camera 1013, the second camera 1012 is a rear camera of the electronic device 200, and the third camera 1013 is a front camera of the electronic device 200.
  • the image range that can be captured by the second camera 1012 of the electronic device 200 is a sector-shaped area 102, and object 2 is included in the sector-shaped area 102.
  • the image range that can be captured by the third camera 1013 of the electronic device 200 is a sector-shaped area 103, and object 3 is included in the sector-shaped area 103.
  • the user interface 20 of the electronic device 200 includes functional icons such as a tablet collaboration icon 201, a mobile phone collaboration icon 202, and a computer collaboration icon 203.
  • the electronic device 200 displays a user interface 21 as shown in FIG. 1C, and the user interface 21 includes a QR code 205.
  • the electronic device 100 scans the QR code 205 through an application with a code scanning function (e.g., a code scanning function in a camera application built in the electronic device 100, or a code scanning function in a browser built in the electronic device 100).
  • the electronic device 100 displays a connection prompt information box 104 as shown in FIG. 1D on the user interface 10.
  • the connection prompt information box 104 displays a “disconnect” icon and a “switch audio and video to tablet” icon, and the connection prompt information box 104 is used to indicate that the electronic device 100 has successfully established a multi-screen collaborative connection with the electronic device 200.
  • the embodiments described in the above-mentioned Figures 1B-1D exemplarily introduce the process of establishing a multi-screen collaborative connection between the electronic device 100 and the electronic device 200.
  • the multi-screen collaborative connection between the electronic device 100 and the electronic device 200 can also be established in other ways. For example, after the electronic device 200 detects the input operation on the mobile phone collaboration icon, it can directly search for surrounding electronic devices and broadcast a message requesting a connection to the searched electronic devices. After an electronic device that receives the request message confirms the connection, the electronic device 200 establishes a multi-screen collaborative connection with it.
  • the embodiment of the present application exemplifies the manner in which a multi-screen collaborative connection is established between the electronic device 100 and the electronic device 200, and should not constitute any limitation on the scope of protection of the present application.
  • the embodiment of the present application does not limit the initiating device of the multi-screen collaborative connection.
  • the initiating device of the multi-screen collaboration is the electronic device 200, and the initiating device of the multi-screen collaboration can also be the electronic device 100.
  • FIG. 1E shows the main interface of the electronic device 100, which includes other application icons such as a camera icon 111 and a gallery icon 112.
  • When an input operation (e.g., a single click) on the camera icon 111 is detected, the electronic device 100 displays a shooting interface as shown in FIG. 1F in response to the operation.
  • the shooting interface includes a preview frame 121, and the preview frame 121 displays an image captured by the first camera of the electronic device 100, and the object 1 is displayed in the image.
  • When the electronic device 100 detects an input operation on the shooting interface (for example, sliding down from the top of the screen), in response to the operation, the electronic device 100 displays the user interface 10 shown in FIG. 1G.
  • the user interface 10 includes a connection prompt information box 104.
  • the connection prompt information box 104 displays “Multi-screen collaborative connection successful”, a “disconnect” icon, and a “switch audio and video to tablet” icon.
  • the connection prompt information box 104 is used to indicate that the electronic device 100 and the electronic device 200 have a multi-screen collaborative connection.
  • When an input operation (e.g., a single click) on the “switch audio and video to tablet” icon is detected, the electronic device 100 switches the first camera for capturing images to the second camera of the electronic device 200.
  • FIG. 1H is a shooting interface of the electronic device 100 after the electronic device 100 switches the camera.
  • the preview box 121 of the shooting interface displays the image captured by the second camera of the electronic device 200, and object 2 is included in the image.
  • the electronic device 100 switches from the second camera of the electronic device 200 to the third camera, and displays the shooting interface shown in FIG. 1I.
  • the image captured by the third camera is displayed in the preview box 121 of the shooting interface, and the object 3 is displayed in the image.
  • FIG. 1E-1I exemplarily introduce the application scenario in which the electronic device 100 uses the camera of the electronic device 200 to capture images after a multi-screen collaborative connection is established between the electronic device 100 and the electronic device 200.
  • the first camera is a physical camera
  • the electronic device 100 uses the second camera or the third camera of the electronic device 200 to capture images
  • the second camera and the third camera are virtual cameras.
  • the physical camera and the virtual camera are defined relative to the executing device: the camera of the electronic device itself is a physical camera, while a camera of another electronic device that the electronic device calls is a virtual camera.
  • Figure 2 is a flow chart of a camera switching method provided in an embodiment of the present application, and the specific process is as follows:
  • Step 201: The second electronic device detects a first operation, responds to the first operation, and activates a multi-screen collaboration function.
  • the second electronic device may be the electronic device 200 in the embodiment of FIG. 1A
  • the first operation may be an input operation on the mobile phone collaboration icon 202 in FIG. 1B .
  • Step 202: The first electronic device detects a second operation, responds to the second operation, and establishes a multi-screen collaborative connection with the second electronic device.
  • the second electronic device can generate a QR code, and the second electronic device can establish a multi-screen collaborative connection with the electronic device that scans and parses the QR code.
  • the second electronic device can also search for nearby electronic devices and broadcast indication information requesting to establish a multi-screen collaborative connection. After an electronic device receives the indication information and sends a message to the second electronic device confirming the establishment of the connection, the second electronic device can establish a multi-screen collaborative connection with that electronic device.
  • the embodiment of the present application exemplifies one of the ways in which the second electronic device establishes a multi-screen collaborative connection with other electronic devices, and does not limit the way in which the second electronic device establishes a multi-screen collaborative connection with other electronic devices.
  • the first electronic device may be the electronic device 100 in the embodiment of FIG. 1B
  • the second operation may be a scanning operation of the QR code 205 by the electronic device 100 in the embodiment of FIG. 1B.
  • Step 203: The second electronic device establishes a multi-screen collaborative connection with the first electronic device, and the first electronic device displays a connection prompt information box, wherein the connection prompt information box includes a first control, and the connection prompt information box is used to indicate that the second electronic device has established a multi-screen collaborative connection with the first electronic device.
  • connection prompt information box may be the connection prompt information box 104 in FIG. 1D
  • the first control may be the “switch audio and video to tablet” icon in the connection prompt information box 104 .
  • Steps 201-203 describe the process of establishing a multi-screen collaborative connection between the second electronic device and the first electronic device. It should be understood that steps 201-203 are only an exemplary description of one of the ways to establish a multi-screen collaborative connection between the second electronic device and the first electronic device, namely: the second electronic device serves as the initiating device of the multi-screen collaborative connection, and the first electronic device serves as the receiving device. In some embodiments, the first electronic device can also serve as the initiating device of the multi-screen collaborative connection, and the second electronic device as the receiving device. The embodiments of the present application do not limit this. Steps 201-203 are optional.
  • Step 204: The first electronic device detects a first input operation and starts a first application.
  • the first application may be a camera application corresponding to the camera icon 111 in the embodiment of FIG. 1E
  • the first input operation may be an input operation detected for the camera icon 111 in FIG. 1E
  • the first application may also be an application with a shooting function, such as WeChat, a browser, etc.
  • Step 205: The first electronic device displays a first interface, where the first interface includes a preview area and a switch control, and the preview area displays an image captured by a first camera, where the first camera is a camera of the first electronic device.
  • the first interface may be the shooting interface in the embodiment of FIG. 1F above
  • the preview area may be the preview box 121 described in the embodiment of FIG. 1F above
  • the switching control may be the switching control 1214 in the above shooting interface
  • the first camera may be the first camera of the electronic device 100 in the embodiment of FIG. 1A above.
  • After the first electronic device starts the first application, it also starts the first camera.
  • Taking the first camera of the first electronic device being a rear camera as an example, after the first electronic device starts the rear camera, the rear camera collects images and the collected images are displayed on the first interface.
  • Step 206: At a first moment, the first electronic device displays a connection prompt information box, where the connection prompt information box includes a first control.
  • Step 207: The first electronic device detects a second input operation on the first control, and in response to the second input operation, switches the first camera to a second camera, where the second camera is a camera of the second electronic device.
  • the first control may be the “switch audio and video to tablet” icon in the embodiment of FIG. 1G
  • the second input operation may be the input operation for the “switch audio and video to tablet” icon in the embodiment of FIG. 1G .
  • the first electronic device switches the first camera to the second camera of the second electronic device.
  • in some embodiments, after the switch, the first camera no longer captures images and the second camera captures images.
  • in other embodiments, the first camera continues to capture images, but the images captured by the first camera are not displayed on the first interface; instead, the images captured by the second camera are displayed on the first interface.
  • the embodiments of the present application are not limited to this.
  • Step 208: At the second moment, the first electronic device displays the image captured by the second camera in the preview area of the first interface.
  • the image captured by the first camera is no longer displayed in the preview area of the first interface, and the image captured by the second camera is displayed.
  • Step 209: At the third moment, after the first electronic device detects the third input operation on the switching icon on the first interface, in response to the third input operation, at the fourth moment, the image captured by the third camera is displayed in the preview area of the first interface.
  • the third camera is a camera of the second electronic device, and the third camera is different from the second camera.
  • Steps 201 to 209 illustrate the process in which the first electronic device, while using its first camera to capture images, switches to the second camera of the second electronic device to capture images, and then switches from the second camera to the third camera.
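The preview-source switching in steps 204-209 can be condensed into a small sketch (illustrative names; the actual flow routes requests through the camera service, physical, and virtual camera modules described earlier):

```python
class FirstDevice:
    """Steps 205-209: the preview source moves from the local (physical)
    camera to the remote (virtual) rear camera, then to the remote front one."""
    def __init__(self, local_cam, remote_cams):
        self.preview_source = local_cam       # step 205: first camera
        self.remote = remote_cams

    def on_first_control(self):
        # step 207: second input operation on the first control
        self.preview_source = self.remote["rear"]    # second camera

    def on_switch_control(self):
        # step 209: third input operation on the switching icon
        self.preview_source = self.remote["front"]   # third camera

dev = FirstDevice("local_rear", {"rear": "remote_rear", "front": "remote_front"})
dev.on_first_control()
dev.on_switch_control()
print(dev.preview_source)  # remote_front
```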
  • Scenario 2: After the electronic device 100 establishes a multi-screen collaborative connection with the electronic device 200, the electronic device 100 uses the camera of the electronic device 200 to record a video, and the frame rate of the video is a first frame rate. After the recording is completed, a first video file is generated, and the video frame rate of the first video file is the first frame rate. After the electronic device 100 modifies the video frame rate to the second frame rate, it uses the camera of the electronic device 200 to record a second video. After the recording is completed, a second video file is generated, and the frame rate of the second video file is the second frame rate. The frame rates of these two video files are different. Below, the above process is exemplarily illustrated in conjunction with FIG. 3A-FIG. 3K.
  • An example diagram of the electronic device 100 and the electronic device 200 is shown in FIG. 1A.
  • FIG. 3A shows a preview interface of the electronic device 100, which includes a preview box 301, a frame rate information box 302, and a start recording control 303.
  • the preview box 301 is used to display the image captured by the camera
  • the frame rate information box 302 is used to display the frame rate of the video shooting.
  • the frame rate of the current video recording is 30 Hz.
  • the image currently displayed in the preview box 301 is an image captured by the first camera of the electronic device 100, and the image includes an object 1.
  • When the electronic device 100 detects an input operation for the preview interface (for example, sliding down from the top of the screen), in response to the operation, the electronic device 100 displays a user interface as shown in FIG. 3B.
  • the user interface includes a connection prompt information box 104.
  • the connection prompt information box 104 includes a "disconnect” icon and a "switch audio and video to tablet” icon, and the connection prompt information box 104 is used to indicate that the electronic device 100 and the electronic device 200 have a multi-screen collaborative connection.
  • When an input operation (e.g., a single click) for the "switch audio and video to tablet" icon is detected, in response to the operation, the electronic device 100 switches the first camera to the camera of the electronic device 200.
  • FIG. 3C shows a preview interface after the electronic device 100 switches the camera.
  • the image captured by the second camera of the electronic device 200 is displayed in the preview box 301, and the image includes the object 2.
  • When an input operation (e.g., a single click) for the start recording control 303 is detected, in response to the operation, the electronic device 100 starts recording a video and displays a video recording interface as shown in FIG. 3D.
  • the video recording interface shown in FIG. 3D includes a recording time display box 305, a frame rate information box 302, and a stop recording control 304. It can be seen from the recording time display box 305 and the frame rate information box 302 that the recording time of the current video is 30 seconds and the video frame rate is 30 Hz. When an input operation (e.g., a single click) for the stop recording control 304 is detected, in response to the operation, the electronic device 100 saves the current video file, which is a first video file, and the frame rate of the first video file is 30 Hz.
  • the electronic device 100 may then display a preview interface as shown in FIG. 3E, because at this time the electronic device 100 has switched the camera to the second camera of the electronic device 200. Therefore, the image captured by the second camera is displayed in the preview box 301, and the object 2 is included in the image.
  • When the electronic device 100 detects an input operation for the preview interface (for example, sliding down from the top of the screen), in response to the operation, the electronic device 100 displays a user interface as shown in FIG. 3F.
  • the user interface includes a connection prompt information box 104.
  • the connection prompt information box 104 includes a "disconnect” icon and a "switch audio and video to mobile phone” icon.
  • the connection prompt information box 104 is used to indicate that the electronic device 100 and the electronic device 200 have a multi-screen collaborative connection.
  • When the electronic device 100 detects an input operation for the "switch audio and video to mobile phone" icon (for example, a single click), in response to the operation, the electronic device 100 switches the second camera to the first camera of the electronic device 100.
  • FIG. 3G shows a preview interface after the electronic device 100 switches the camera.
  • the image captured by the first camera of the electronic device 100 is displayed in the preview box 301, and the image includes the object 1.
  • When an input operation (e.g., a single click) for the frame rate information box 302 is detected, in response to the operation, the electronic device 100 changes the video frame rate.
  • the embodiment of the present application only exemplarily illustrates one way of changing the frame rate, and the embodiment of the present application does not limit the way of changing the video frame rate.
  • the frame rate information box 302 shows that the video frame rate is 60 Hz.
  • When the electronic device 100 detects an input operation on the preview interface (e.g., sliding down from the top of the screen), in response to the operation, the electronic device 100 displays the user interface shown in FIG.
  • the user interface includes a connection prompt information box 104.
  • the connection prompt information box 104 includes a “disconnect” icon and a “switch audio and video to tablet” icon, and the connection prompt information box 104 is used to indicate that the electronic device 100 and the electronic device 200 have a multi-screen collaborative connection.
  • When an input operation (e.g., a single click) for the "switch audio and video to tablet" icon is detected, in response to the operation, the electronic device 100 switches the first camera to the camera of the electronic device 200.
  • FIG. 3J shows a preview interface after the electronic device 100 switches the camera.
  • the image captured by the second camera of the electronic device 200 is displayed in the preview box 301, and the image includes the object 2.
  • When an input operation (e.g., a single click) for the start recording control 303 is detected, in response to the operation, the electronic device 100 starts recording a video and displays the video recording interface shown in FIG. 3K.
  • the video recording interface shown in FIG. 3K includes a recording time display box 305, a frame rate information box 302, and a stop recording control 304. It can be seen from the recording time display box 305 and the frame rate information box 302 that the recording time of the current video is 30 seconds and the video frame rate is 60 Hz. When an input operation (e.g., a single click) for the stop recording control 304 is detected, in response to the operation, the electronic device 100 saves the current video file, which is a second video file, and the frame rate of the second video file is 60 Hz.
  • Below, the process of the first electronic device performing video recording in the embodiments of FIGS. 3A-3K above is exemplarily described.
  • the first electronic device may be the electronic device 100 in the above-mentioned Figures 3A-3K embodiments
  • the second electronic device may be the electronic device 200 in the above-mentioned Figures 3A-3K.
  • a multi-screen collaborative connection has been established between the first electronic device and the second electronic device.
  • Figure 4 is a flowchart of a video recording method provided in an embodiment of the present application, and the specific process is as follows:
  • Step 401 A first electronic device displays a first recording interface, wherein the first recording interface includes a second control and a preview area, wherein the preview area currently displays an image captured by a first camera, and the first camera is a camera of the first electronic device.
  • the first recording interface may be the preview interface described in the embodiment of FIG. 3A
  • the second control may be the start recording control 303 in the embodiment of FIG. 3A.
  • the frame rate of the video recording of the first electronic device is the first frame rate
  • the first frame rate may be the default frame rate of the first electronic device (eg, 30 Hz).
  • the first application can be an application with a video recording function, such as a camera application, WeChat, and other applications.
  • Step 402 At a first moment, a first input operation is detected, and in response to the first input operation, the first electronic device switches the first camera to a second camera, where the second camera is a camera of the second electronic device.
  • the first input operation may be an input operation on the preview interface and an input operation on the “switch audio and video to tablet” icon in the embodiments of FIGS. 3A and 3B .
  • Step 403 At the second moment, the first electronic device displays a first recording interface, where the first recording interface includes a second control and a preview area, where the preview area displays an image captured by the second camera.
  • the first recording interface may be the preview interface shown in FIG. 3C above.
  • Step 404 a second input operation directed to a second control is detected, and in response to the second input operation, the first electronic device records a first video, wherein the frame rate of the first video is a first frame rate.
  • the second input operation may be an input operation for the start recording control 303 in the embodiment of FIG. 3C.
  • Step 405 At a third moment, a third input operation is detected.
  • the first electronic device stops recording the video and saves the first video file.
  • the frame rate of the first video file is the first frame rate.
  • the third input operation may be an input operation for the stop recording control 304 in the embodiment of FIG. 3D .
  • Step 406 At a fourth moment, a fourth input operation is detected, and in response to the fourth input operation, the first electronic device switches the second camera to the first camera.
  • the fourth input operation may be an input operation on the preview interface (eg, sliding down from the top of the screen) and an input operation on the “switch audio and video to mobile phone” icon in the embodiments of FIGS. 3E-3F .
  • Step 407 At the fifth moment, the first electronic device displays a second recording interface, where the second recording interface includes a second control and a preview area, where the preview area displays an image captured by the first camera.
  • the second recording interface may be the preview interface shown in FIG. 3G above.
  • Step 408 a fifth input operation is detected, and in response to the fifth input operation, the first electronic device adjusts the video frame rate to a second frame rate, where the second frame rate is different from the first frame rate.
  • the fifth input operation may be an input operation on the frame rate information box 302 in the embodiment of FIG. 3G .
  • Step 409 At a sixth moment, a sixth input operation is detected, and in response to the sixth input operation, the first electronic device switches the first camera to the second camera.
  • the sixth input operation may be the input operation on the preview interface and the input operation on the “switch audio and video to tablet” icon in FIG. 3H to FIG. 3I .
  • Step 410 At the seventh moment, the first electronic device displays a second recording interface, where the second recording interface includes a second control and a preview area, where the preview area displays an image captured by the second camera.
  • the second recording interface may be the preview interface in the embodiment of FIG. 3J above.
  • Step 411 a seventh input operation for the second control is detected, and in response to the seventh input operation, the first electronic device starts recording a second video, wherein the frame rate of the second video is a second frame rate.
  • the seventh input operation may be an input operation for starting a recording control in the embodiment of FIG. 3J above.
  • Step 412 At the eighth moment, an eighth input operation is detected.
  • the first electronic device stops recording the video and saves a second video file.
  • the frame rate of the second video file is a second frame rate, which is different from the first frame rate.
  • the eighth input operation may be the input operation for the stop recording control 304 detected in FIG. 3K .
  • In the above embodiments, a virtual camera is used to capture images; that is, the first electronic device switches its own camera (the first camera) to the camera (the second camera) of the second electronic device to capture images, thereby realizing the use of the camera of another electronic device.
  • The following describes how the functional modules interact with each other so that the image captured by the camera of the second electronic device can be transmitted to the first electronic device.
  • the second electronic device is an electronic device that has established a multi-screen collaborative connection with the first electronic device.
  • the first electronic device can be the electronic device 100 in the embodiment of Figure 1A above
  • the second electronic device can be the electronic device 200 in the embodiment of Figure 1A above.
  • the first electronic device includes a camera service module (CameraServe), a physical camera module, a virtual camera module and a first application.
  • the camera service module is located in the application framework layer in the Android software architecture
  • the physical camera module and the virtual camera module are located in the hardware abstraction layer (HAL layer) in the Android software framework, and the process of the first electronic device calling the virtual camera to capture an image is as follows:
  • Step 501 The camera service module sends a first configuration request to the physical camera module.
  • the first electronic device starts the physical camera module so that the physical camera module can configure relevant parameters.
  • the first electronic device may start a first application, which is an application that has a function of calling a camera to capture images, such as a camera application, WeChat, etc.
  • a trigger instruction may be sent to the camera service module, and the trigger instruction is used to trigger the camera service module to send a first configuration request to the physical camera module.
  • Step 502 The camera service module sends a first request message to the physical camera module.
  • the first request message is used to instruct the physical camera module to call the first camera to capture an image
  • the first camera is a camera of the first electronic device
  • the first request message includes identification information
  • the camera service module needs to send a request message before requesting each frame of image. After receiving the corresponding request message, the physical camera module/virtual camera module will call the camera to capture the image and return the captured image.
  • the camera service module may send a first indication message to the physical camera module.
  • the first indication message may include identification information of the first camera.
  • the first indication message is used to instruct the physical camera module to capture images using the first camera and upload the images captured by the first camera.
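The per-frame request/response exchange described above can be sketched as follows. This is a minimal Python illustration; the message and module types, field names, and `handle_request` interface are assumptions for demonstration, not the actual HAL interfaces.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequestMessage:
    request_id: int                        # identification information of the request
    camera_id: str                         # identification information of the camera to use
    capture_params: Optional[dict] = None  # image acquisition parameters (may be absent)

@dataclass
class DataFrame:
    request_id: int                        # echoes the request's identification information
    image: bytes                           # the captured image frame
    capture_params: Optional[dict] = None  # image acquisition parameters

class PhysicalCameraModule:
    """Returns exactly one data frame for each request message it receives."""
    def handle_request(self, request: RequestMessage) -> DataFrame:
        image = self._capture(request.camera_id)
        return DataFrame(request.request_id, image, request.capture_params)

    def _capture(self, camera_id: str) -> bytes:
        return f"frame-from-{camera_id}".encode()  # stand-in for real image capture

# The camera service module sends a request message before each frame
# and receives the corresponding data frame in return.
module = PhysicalCameraModule()
frame = module.handle_request(RequestMessage(1, "first_camera"))
```

The key point is the one-request-one-frame pairing: each returned data frame carries the identification information of the request that produced it.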
  • Step 503 The physical camera module calls the first camera to capture the first image according to the first request message.
  • Step 504 The physical camera module sends a first data frame to the camera service module.
  • the first data frame includes a first image and image acquisition parameters.
  • the image acquisition parameters may be Setting data, which includes metadata such as maximum frame rate, exposure compensation, and encoding method.
  • Step 505 The camera service module sends the first data frame to the first application.
  • the camera service module may upload the received first data frame to the first application, so that the first application may obtain a first image based on the first data frame and display the first image on the first interface.
  • the first application may be a camera application corresponding to the camera icon in the embodiment of FIG. 1E
  • the first interface may be a shooting interface in the embodiment of FIG. 1F .
  • the camera service module may also store the received first data frame in a buffer.
  • the buffer is used to temporarily store the data frames received by the camera service module, and the data frames include images captured by the camera. Since the space of the buffer is limited, the buffer uses a first-in-first-out method to store data frames, that is, when the number of data frames stored in the buffer is greater than or equal to the upper limit value N of the data frames stored in the buffer, the buffer will clear the previously stored data frames to leave storage space for storing new data frames.
  • For example, assuming that N is 4, when the camera service module receives data frame 5 sent by the physical camera module, the camera service module needs to clear data frame 1 from the buffer before storing data frame 5 in the buffer.
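The first-in-first-out eviction behavior of the buffer can be illustrated with a short sketch, assuming for illustration an upper limit N of 4 data frames (the class name is hypothetical):

```python
from collections import deque

class FrameBuffer:
    """FIFO buffer: once N data frames are stored, the oldest one is cleared."""
    def __init__(self, n: int):
        # deque with maxlen automatically evicts the oldest item when full
        self.frames = deque(maxlen=n)

    def store(self, frame: str) -> None:
        self.frames.append(frame)

buf = FrameBuffer(4)
for i in range(1, 6):              # store data frames 1..5
    buf.store(f"data frame {i}")

# Data frame 1 was cleared to leave room for data frame 5.
print(list(buf.frames))  # ['data frame 2', 'data frame 3', 'data frame 4', 'data frame 5']
```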
  • Step 506 The first application obtains a first image based on the first data frame, and displays the first image on a first interface.
  • Step 507 At a first moment, a first input operation is detected, and the first application sends a first switching instruction to the camera service module.
  • Step 508 The camera service module sends second indication information to the physical camera module.
  • the second indication information is used to instruct the physical camera module to send the request message sent by the camera service module to the virtual camera module.
  • Step 509 the camera service module sends a second request message to the physical camera module, where the second request message is used to instruct the virtual camera module to call the second camera to capture a second image.
  • the second request message is used to instruct the virtual camera module to call the second camera to capture the second image.
  • the second request message includes identification information of the second request message and may also include identification information of the second camera, where the second camera is a camera of the second electronic device.
  • The physical camera module can detect whether the virtual camera module is in a working state. If the virtual camera module is not in a working state, the physical camera module can use the first camera to capture images according to the second request message and return the corresponding data frame to the camera service module. Only when the virtual camera module is in a working state is the second request message sent by the camera service module forwarded to the virtual camera module.
  • In this way, when the first electronic device switches the first camera to the camera of the second electronic device for the first time, it can effectively avoid the situation where the camera is switched while the virtual camera module is still starting up, which would result in no camera capturing images, making the first electronic device unable to capture images and causing a screen interruption on the first interface of the first electronic device.
  • Step 510 The physical camera module sends a second request message to the virtual camera module.
  • Step 511 The virtual camera module calls the second camera to capture the second image according to the second request message.
  • After receiving the second request message sent by the physical camera module, the virtual camera module calls the second camera to capture the second image.
  • Step 512 The virtual camera module sends a second data frame to the camera service module.
  • the second data frame includes the second image frame captured by the second camera and identification information of the second request message.
  • the virtual camera module will parse the second request message. If the second request message includes image acquisition parameters, the image acquisition parameters in the second request message received this time will be added to the second data frame, and the image acquisition parameters will be saved. If the image acquisition parameters are not parsed in the second request message received this time, the image acquisition parameters in the second request message previously received will be added to the second data frame. If the virtual camera module does not store the image acquisition parameters, the image acquisition parameters will not be added to the second data frame.
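The parameter-fallback logic described above can be sketched as follows. This is a hypothetical Python illustration; the class, method, and field names are assumptions, not the actual module interface.

```python
from typing import Optional

class VirtualCameraModule:
    """Builds data frames, falling back to previously saved acquisition parameters."""
    def __init__(self):
        self.saved_params: Optional[dict] = None  # last image acquisition parameters seen

    def build_data_frame(self, request_id: int, image: bytes,
                         request_params: Optional[dict]) -> dict:
        if request_params is not None:
            self.saved_params = request_params  # save for later requests
            params = request_params
        else:
            params = self.saved_params          # may still be None if nothing was saved
        frame = {"request_id": request_id, "image": image}
        if params is not None:
            frame["params"] = params            # omit params only when none are stored
        return frame

vm = VirtualCameraModule()
first = vm.build_data_frame(1, b"img1", {"max_frame_rate": 30})
later = vm.build_data_frame(2, b"img2", None)  # this request carries no parameters
# 'later' still carries the previously saved parameters, keeping the frame complete
```

If the module has never seen image acquisition parameters, the returned frame omits them entirely, which is exactly the "incomplete data frame" case discussed later.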
  • Step 513 The camera service module sends the second data frame to the first application.
  • the camera service module may upload the received second data frame to the first application, so that the first application may obtain the second image based on the second data frame and display the second image on the first interface.
  • the camera service module may also store the received second data frame in a buffer.
  • Step 514 At a second moment, a second input operation is detected, and the first application sends a second switching instruction to the camera service module.
  • the second switching instruction is used to instruct the camera service module to switch the second camera used to capture images to the third camera of the second electronic device.
  • the second switching instruction may include identification information of the third camera.
  • the second input operation may be an input operation for the “switch audio and video to tablet” icon in the embodiment of FIG. 1G .
  • Step 515 The camera service module sends third indication information to the physical camera module.
  • the third indication information is used to instruct the physical camera module to send the request message sent by the camera service module to the virtual camera module.
  • Step 516 The camera service module sends a third request message to the physical camera module, where the third request message is used to instruct the virtual camera module to call a third camera to capture a third image.
  • the third request message is used to instruct the virtual camera module to call the third camera to capture the third image.
  • the third request message includes identification information of the third request message and may also include identification information of the third camera, where the third camera is the camera of the second electronic device.
  • Step 517 The physical camera module sends the third request message to the virtual camera module.
  • Step 518 The virtual camera module calls the third camera to capture the third image according to the third request message.
  • the virtual camera module calls the third camera to start collecting the third image.
  • the third camera is the camera of the second electronic device.
  • Step 519 The virtual camera module sends a third data frame to the camera service module.
  • the third data frame includes a third image frame captured by the third camera and identification information of the third request message.
  • the virtual camera module will parse the third request message. If the third request message includes image acquisition parameters, the image acquisition parameters in the third request message received this time will be added to the third data frame, and the image acquisition parameters will be saved. If the image acquisition parameters are not parsed in the third request message received this time, the image acquisition parameters in the third request message previously received will be added to the third data frame. If the virtual camera module does not store the image acquisition parameters, the image acquisition parameters will not be added to the third data frame.
  • Step 520 The camera service module sends the third data frame to the first application.
  • the camera service module may upload the received third data frame to the first application, so that the first application may obtain a third image based on the third data frame and display the third image on the first interface.
  • the camera service module may also store the received third data frame in a buffer.
  • FIG. 5A-FIG. 5B illustrate the interaction process of each module in the process of switching the virtual camera when the first electronic device is capturing images.
  • the request message corresponding to the first frame image sent by the camera service module to the physical camera module includes image acquisition parameters.
  • Each time the physical camera module receives a request message, it parses the image acquisition parameters in the request message and stores them. Before the physical camera module returns a data frame to the camera service module, it adds the most recently stored image acquisition parameters to the data frame and then returns the data frame.
  • Similarly, when the virtual camera module receives a request message sent by the camera service module, it also parses the image acquisition parameters in the request message and stores them, so that before returning a data frame to the camera service module, it can add the image acquisition parameters to the data frame to ensure the integrity of the returned data frame.
  • the image acquisition parameters can be Setting data
  • the Setting data includes metadata such as maximum frame rate, exposure compensation data, and encoding method.
  • the physical camera module can capture images according to the metadata in the Setting parameters. For example, the physical camera module can capture images according to the maximum frame rate in the Setting, perform exposure compensation on the captured images according to the exposure compensation data in the Setting, or encode the captured images according to the encoding method in the Setting.
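How the capture pipeline might apply such Setting metadata can be sketched as below. This is a toy illustration; the field names and the pixel-level exposure adjustment are assumptions for demonstration only.

```python
def apply_settings(raw_pixels: list, settings: dict) -> dict:
    """Apply Setting-style metadata to a captured frame (illustrative only)."""
    max_fps = settings.get("max_frame_rate", 30)   # frame-rate cap for capture
    ev = settings.get("exposure_compensation", 0)  # exposure compensation offset
    codec = settings.get("encoding", "jpeg")       # encoding method

    # Toy exposure compensation: shift each pixel value, clamped to the 8-bit range.
    compensated = [max(0, min(255, p + ev)) for p in raw_pixels]
    return {"pixels": compensated, "fps": max_fps, "codec": codec}

frame = apply_settings([10, 250], {"max_frame_rate": 60, "exposure_compensation": 8})
# frame["pixels"] == [18, 255]; frame["fps"] == 60; frame["codec"] == "jpeg"
```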
  • In some cases, the camera service module adds image acquisition parameters only to the request message corresponding to the first frame image.
  • When the physical camera module receives the first-frame image request message sent by the camera service module, it stores the image acquisition parameters in the request message.
  • However, at this time the image acquisition parameters are not stored in the virtual camera module, and the request messages subsequently sent by the camera service module do not include image acquisition parameters. As a result, the virtual camera module subsequently has no image acquisition parameters. Therefore, when the camera switches to the second camera or the third camera of the second electronic device, if the request message sent by the camera service module does not include image acquisition parameters, the data frame returned by the virtual camera module does not include them either. This makes the data frame returned by the virtual camera module an incomplete data frame.
  • Before the first electronic device switches the second camera used for capturing images to the third camera, the camera service module checks the integrity of the data frames cached in the buffer. If the camera service module detects incomplete data frames (data frames without image acquisition parameters) in the buffer, it suspends camera switching for a first duration, so as to wait for the virtual camera module to upload complete data frames within that duration. The camera service module may suspend switching by not sending an indication message for switching the camera to the physical camera module within the first duration, or by not sending request messages within the first duration. After the first duration, the camera service module switches the camera regardless of whether the virtual camera module has sent a complete data frame. The length of the first duration can be determined based on the number of incomplete data frames in the buffer: the greater the number of incomplete data frames, the longer the first duration (for example, the first duration is generally 5 to 15 seconds).
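The integrity check and the scaling of the first duration with the number of incomplete data frames might look like the sketch below. The 5-15 second bounds follow the text above, but the linear scaling formula and all names are assumptions for illustration.

```python
def switch_delay_seconds(buffer_frames: list, min_wait: int = 5, max_wait: int = 15) -> int:
    """Return how long to suspend camera switching, based on buffer integrity."""
    # A data frame without image acquisition parameters is incomplete.
    incomplete = sum(1 for f in buffer_frames if "params" not in f)
    if incomplete == 0:
        return 0                                 # all frames complete: switch immediately
    # More incomplete frames -> longer first duration, capped at max_wait.
    return min(max_wait, min_wait + incomplete)

cached = [{"image": b"a"},                       # incomplete
          {"image": b"b", "params": {}},         # complete
          {"image": b"c"}]                       # incomplete
delay = switch_delay_seconds(cached)             # 2 incomplete frames -> 7 seconds
```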
  • As a result, when the first electronic device switches from a virtual camera (for example, the second camera in the embodiments of FIGS. 5A-5B above) to another camera, a long delay occurs, so the user cannot switch cameras quickly, which reduces the user experience; that is, in the embodiment of FIG. 2 above, the difference between the fourth moment and the third moment is greater than or equal to the first duration.
  • To this end, the embodiment of the present application proposes another camera switching method: each time the physical camera module receives a request message sent by the camera service module, it parses whether the request message contains image acquisition parameters. If so, the physical camera module stores the image acquisition parameters in the virtual camera module. In this way, when the virtual camera module returns a data frame, even if the request message it receives contains no image acquisition parameters, the virtual camera module can add its stored image acquisition parameters to the data frame to ensure the integrity of the data frame it returns, thereby reducing the delay when the first electronic device switches from the virtual camera to another camera.
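The improvement just described, in which the physical camera module pushes parsed parameters into the virtual camera module on every request, can be sketched as follows (hypothetical class and method names; only the control flow is taken from the text):

```python
from typing import Optional

class VirtualCameraModule:
    def __init__(self):
        self.saved_params: Optional[dict] = None

    def store_params(self, params: dict) -> None:
        self.saved_params = params  # written by the physical camera module

class PhysicalCameraModule:
    def __init__(self, virtual_module: VirtualCameraModule):
        self.virtual_module = virtual_module

    def on_request(self, request_params: Optional[dict]) -> None:
        # On every request message: if it carries image acquisition parameters,
        # copy them into the virtual camera module so that the virtual camera
        # module can always return complete data frames later.
        if request_params is not None:
            self.virtual_module.store_params(request_params)

vm = VirtualCameraModule()
pm = PhysicalCameraModule(vm)
pm.on_request({"max_frame_rate": 30})  # first-frame request carries parameters
pm.on_request(None)                    # later requests carry none
# vm.saved_params is still {"max_frame_rate": 30}
```

Because the parameters are seeded into the virtual camera module before any camera switch, the buffer never accumulates incomplete data frames and the first-duration wait is avoided.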
  • FIGS. 6A-6B show a flowchart of another camera switching method provided in an embodiment of the present application, and the specific process is as follows:
  • Step 601 The camera service module sends a first configuration request to the physical camera module.
  • Step 602 The camera service module sends a first request message to the physical camera module.
  • For steps 601 and 602, please refer to steps 501 and 502 in the embodiment of FIG. 5A; they will not be described in detail here.
  • Step 603 When the physical camera module detects that the image acquisition parameters exist in the first request message, the physical camera module stores the image acquisition parameters in the virtual camera module.
  • After receiving the first request message, the physical camera module parses the first request message to determine whether it contains image acquisition parameters. If so, the physical camera module stores the image acquisition parameters in the virtual camera module. In this way, when the camera used for capturing images is switched from the camera of the first electronic device to the camera of the second electronic device, even if the request message received by the virtual camera module contains no image acquisition parameters, the virtual camera module can add the image acquisition parameters stored by the physical camera module to the data frame to ensure the integrity of the data frame it returns. Thus, when the first electronic device switches the camera used for capturing images from the camera of the second electronic device to another camera, the camera switching delay can be reduced and the user experience improved.
  • When the first request message received by the physical camera module includes image acquisition parameters and the virtual camera module is not in a working state, if image acquisition parameters already exist in the virtual camera module, the physical camera module can replace the image acquisition parameters in the virtual camera module with those in the currently received first request message.
  • Step 604 The physical camera module calls the first camera to capture the first image according to the first request message.
  • Step 605 The physical camera module sends a first data frame to the camera service module.
  • Step 606 The camera service module sends the first data frame to the first application.
  • Step 607 The first application obtains a first image based on the first data frame, and displays the first image on a first interface.
  • Step 608 At a first moment, a first input operation is detected, and the first application sends a first switching instruction to the camera service module.
  • Step 609 The camera service module sends second indication information to the physical camera module.
  • Step 610 The camera service module sends a second request message to the physical camera module.
  • the second request message is used to instruct the virtual camera module to call the second camera to capture the second image.
  • steps 604 to 610 please refer to steps 503 to 509 in the embodiment of FIG. 5A , which will not be described in detail here.
  • Step 611: The physical camera module sends the second request message to the virtual camera module.
  • The virtual camera module parses the second request message to determine whether it contains image acquisition parameters. If so, the virtual camera module updates its stored image acquisition parameters to the image acquisition parameters in the second request message.
  • Step 612: The virtual camera module calls the second camera to capture the second image according to the second request message.
  • Step 613: The virtual camera module sends a second data frame to the camera service module.
  • The second data frame includes a second image frame captured by the second camera, identification information of the second request message, and image capture parameters.
  • Step 614: The camera service module sends the second data frame to the first application.
  • Step 615: At a second moment, a second input operation is detected, and the first application sends a second switching instruction to the camera service module.
  • Step 616: The camera service module sends third indication information to the physical camera module.
  • For details of steps 614 to 616, please refer to steps 513 to 514 in the embodiment of FIG. 5B, which will not be described again here.
  • Step 617: The camera service module sends a third request message to the physical camera module, where the third request message is used to instruct the virtual camera module to call a third camera to capture a third image.
  • The third request message includes identification information of the third request message and may also include identification information of the third camera, where the third camera is a camera of the second electronic device.
  • Step 618: The physical camera module sends the third request message to the virtual camera module.
  • The virtual camera module parses the third request message to determine whether it contains image acquisition parameters. If so, the virtual camera module updates its stored image acquisition parameters to the image acquisition parameters in the third request message.
  • Step 619: The virtual camera module calls the third camera to capture the third image according to the third request message.
  • Step 620: The virtual camera module sends a third data frame to the camera service module.
  • The third data frame includes a third image frame captured by the third camera, identification information of the third request message, and image capture parameters.
  • Step 621: The camera service module sends the third data frame to the first application.
  • The camera service module may upload the received third data frame to the first application, so that the first application can obtain a third image based on the third data frame and display the third image on the first interface.
  • When the first electronic device starts to capture images with a camera, the physical camera module, after receiving the request message sent by the camera service module and obtaining the image capture parameters in the request message, stores the image capture parameters in the virtual camera module.
  • When the virtual camera module calls the camera of the second electronic device to capture images, it can add the image capture parameters previously stored for it by the physical camera module to the data frame, thereby ensuring the integrity of the data frames it returns. This avoids the situation in which, because a data frame previously returned by the virtual camera module is incomplete, switching from the camera of the second electronic device to another camera takes a long time, which degrades the user experience.
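The fallback that keeps every returned data frame complete can be sketched as below. The function and field names are illustrative assumptions, not the actual module interface.

```python
# Illustrative sketch: when a request reaching the virtual camera module
# carries no image acquisition parameters, the module fills the returned
# data frame from the parameters previously cached for it, so the frame
# is always complete. Field names are hypothetical.

def build_data_frame(request, cached_params, image_frame):
    params = request.get("image_acquisition_params")
    if params is None:
        params = cached_params  # fall back to the cached parameters
    return {
        "image_frame": image_frame,
        "request_id": request["id"],
        "image_acquisition_params": params,
    }

cached = {"max_fps": 30, "exposure_compensation": 0}
frame = build_data_frame({"id": 7}, cached, b"frame-bytes")
print(frame["image_acquisition_params"])
# {'max_fps': 30, 'exposure_compensation': 0}
```

If the request does carry parameters, they take precedence over the cached copy, mirroring the update rule in the steps above.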
  • FIG. 7 is a schematic diagram of the hardware structure of the electronic device 100 provided in the embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 122, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • The structure illustrated in this embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in FIG. 7, or combine certain components, or separate certain components, or arrange the components differently.
  • the components shown in FIG. 7 may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • Different processing units may be independent devices or integrated in one or more processors.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the same device as at least some of the modules of the processor 110.
  • The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as Wi-Fi networks), Bluetooth (BT), BLE broadcasting, global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the frequency of the electromagnetic wave signal and performs filtering, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, modulate the frequency of the signal, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the electronic device 100 implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens and light is transmitted through the lens to the camera's photosensitive element, which converts the light signal into an electrical signal and transmits it to the ISP to be processed into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise and brightness of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP can be disposed in the camera 193.
  • the digital signal processor is used to process digital signals, and can process not only digital image signals but also other digital signals. For example, when the electronic device 100 is selecting a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • The NPU is a neural network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the electronic device 100 can be implemented, for example, image recognition, face recognition, speech recognition, and text understanding.
  • the electronic device 100 can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music or listen to a hands-free call through the speaker 170A.
  • The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal.
  • Voice can be heard by placing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with their mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • In some embodiments, the electronic device 100 can be provided with at least one microphone 170C. In other embodiments, the electronic device 100 can be provided with two microphones 170C, which can not only collect sound signals but also implement a noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A can be disposed on the display screen 194 .
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor, and the electronic device 100 can detect the opening and closing of the flip leather case by using the magnetic sensor 180D.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in all directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and is applied to applications such as horizontal and vertical screen switching and pedometers.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photography, fingerprint call answering, etc.
  • The touch sensor 180K is also called a "touch panel".
  • The touch sensor 180K can be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch-controlled screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K can also be set on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can obtain a vibration signal. In some embodiments, the bone conduction sensor 180M can obtain a vibration signal of a vibrating bone block of a human vocal part.
  • the software system of the electronic device 100 can adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes the Android system of the layered architecture as an example to exemplify the software structure of the electronic device 100.
  • The electronic device may include: an application layer, an application framework layer, a hardware abstraction layer (HAL), and a kernel layer.
  • The application layer may include a series of application packages. As shown in FIG. 8, the application packages may include applications such as a camera application, a gallery, a first application, a navigation application, a code scanning application, and a background code scanning application.
  • The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer.
  • The application framework layer includes some predefined functions. As shown in FIG. 8, the application framework layer may include a window manager, a media recorder, a camera service module, and the like.
  • the window manager is used to manage window programs.
  • The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • the camera service module is used to send a request message to the physical camera module or the virtual camera module to request the physical camera module or the virtual camera module to call the camera to capture an image and return the image.
  • the hardware abstraction layer may include multiple functional modules, such as a physical camera module, a virtual camera module, etc.
  • The physical camera module is used to call the camera of the electronic device 100 to collect images and return the collected images to the camera service module, and to store the image collection parameters in the virtual camera module.
  • the virtual camera module is used to call the virtual camera to capture images and return the captured images to the camera service module.
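As a rough sketch of the division of labor just described, the camera service can be pictured as dispatching each capture request either to the physical camera module (local camera) or, after a switch, to the virtual camera module (the other device's camera). The names below are hypothetical stand-ins, not the actual Android HAL interface.

```python
# Hypothetical sketch of request routing between the two HAL modules
# described above; not the actual Android HAL interface.

def capture_via(module_name, request):
    # Stand-in for the physical and virtual camera modules: each returns
    # a data frame for the given request.
    return {"request_id": request["id"], "source": module_name}

def camera_service_dispatch(request, use_virtual_camera):
    # The camera service picks the target module based on whether the
    # camera currently collecting images is local or remote.
    target = "virtual_camera_module" if use_virtual_camera else "physical_camera_module"
    return capture_via(target, request)

print(camera_service_dispatch({"id": 1}, use_virtual_camera=False)["source"])
# physical_camera_module
```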
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, wireless, microwave, etc.) means.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server or data center that includes one or more available media integrated.
  • The available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, or a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)), etc.
  • The aforementioned storage medium includes media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

This application provides a camera switching method and related electronic device. The method includes: a first electronic device displays a first interface, where the first interface includes a preview area and a switching control, the preview area displays a first image, the first image is an image captured by a first camera, and the first camera is a camera of the first electronic device; the first image is captured by the first camera according to parameter values in image acquisition parameters; at a first moment, in response to a first input operation, the first electronic device switches the camera used to capture images to a second camera, where the second camera is a camera of a second electronic device; at a second moment, the first electronic device displays a second image in the preview area, where the second image is an image captured by the second camera; at a third moment, a second input operation on the switching control is responded to; at a fourth moment, the first electronic device displays a third image in the preview area, where the third image is an image captured by a third camera.

Description

A camera switching method and related electronic device

This application claims priority to Chinese patent application No. 202211322379.9, entitled "一种摄像头切换方法及相关电子设备" ("A Camera Switching Method and Related Electronic Device"), filed with the China National Intellectual Property Administration on October 27, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of camera switching, and in particular to a camera switching method and related electronic device.

Background

Virtual camera: a virtual camera refers to the camera of another electronic device that an electronic device calls to capture images. For example, there are electronic device A and electronic device B, and a communication connection can be established between them. When user A uses communication software on electronic device A for video communication with user B, the camera of electronic device A can be used to capture images of user A, which are then sent to user B. Alternatively, the camera of electronic device B can be used to capture images of user A, which are then sent to user B. While user A uses electronic device A for video communication with user B, the camera of electronic device A is a physical camera, and the camera of electronic device B is a virtual camera.

How to reduce the delay when a user switches cameras before and after using a virtual camera is a problem of growing concern to engineers.

Summary

Embodiments of this application provide a camera switching method and related electronic device, which solve the problem of excessive delay when a user switches cameras before and after using a virtual camera.

In a first aspect, an embodiment of this application provides a camera switching method, applied to a first electronic device. The method includes: the first electronic device displays a first interface, where the first interface includes a preview area and a switching control, the preview area displays a first image, the first image is an image captured by a first camera, and the first camera is a camera of the first electronic device; at a first moment, in response to a first input operation, the first electronic device switches the camera used to capture images to a second camera, where the second camera is a camera of a second electronic device; at a second moment, the first electronic device displays a second image in the preview area, where the second image is an image captured by the second camera; during image capture by the second camera, the data frames cached in the buffer carry image acquisition parameters, and the first image is captured by the first camera according to parameter values in the image acquisition parameters; at a third moment, a second input operation on the switching control is responded to; at a fourth moment, the first electronic device displays a third image in the preview area, where the third image is an image captured by a third camera, and the third camera is a camera of the second electronic device.
In the above embodiment, each data frame cached in the buffer while the second camera captures images carries image acquisition parameters. This ensures that when the electronic device switches from the second camera to the third camera and checks the integrity of the data frames in the buffer, all the data frames in the buffer are complete. Therefore, when switching from the second camera to the third camera, the electronic device does not reserve buffering time to wait for a data frame carrying image acquisition parameters to be cached in the buffer before switching cameras. This greatly improves the efficiency of camera switching.
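The buffer completeness check described above could look like the following minimal sketch; the frame layout and function name are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch of the buffer completeness check described above:
# the switch from the second to the third camera proceeds immediately
# only if every cached data frame carries image acquisition parameters.

def can_switch_immediately(frame_buffer):
    return all(
        frame.get("image_acquisition_params") is not None
        for frame in frame_buffer
    )

frame_buffer = [
    {"image_frame": b"f1", "image_acquisition_params": {"max_fps": 30}},
    {"image_frame": b"f2", "image_acquisition_params": {"max_fps": 30}},
]
print(can_switch_immediately(frame_buffer))  # True
```

Because every cached frame carries parameters, the check passes and no buffering time needs to be reserved before the switch.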
With reference to the first aspect, in a possible implementation, the image acquisition parameters include at least one of the following: a maximum frame rate of image capture, an exposure compensation value, and an encoding mode. In this way, the first camera can capture, expose, and encode images according to the maximum frame rate, exposure compensation value, and encoding mode in the image acquisition parameters, preventing the captured images from failing to meet the requirements of the electronic device and thus being distorted.

With reference to the first aspect, in a possible implementation, the second electronic device is an electronic device that has established a multi-screen collaboration connection with the first electronic device.

With reference to the first aspect, in a possible implementation, the first electronic device includes a first application, a camera service module, a physical camera module, and a virtual camera module, and before the first electronic device displays the first interface, the method further includes: a third input operation is detected, and the first application starts; the camera service module sends a first request message to the physical camera module; when it is detected that image acquisition parameters exist in the first request message, the physical camera module stores the image acquisition parameters in the virtual camera module; the physical camera module calls the first camera to capture the first image according to the first request message; the physical camera module returns a first data frame to the first application, where the first data frame includes the first image and the image acquisition parameters. In the above embodiment, after receiving the first request message, if image acquisition parameters exist in the first request message, the physical camera module stores them in the virtual camera module. In this way, even if a request message received while the virtual camera module is working contains no image acquisition parameters, the virtual camera module can add the image acquisition parameters to each data frame it returns, ensuring the integrity of the returned data frames. When the electronic device switches from the second camera to the third camera, since the data frames cached in the buffer are all complete, the camera service module does not reserve buffering time to wait for the virtual camera module to return a complete data frame, but switches the camera immediately, thereby reducing the camera-switching delay.

With reference to the first aspect, in a possible implementation, the first electronic device switching the camera used to capture images to the second camera specifically includes: the first application sends a first switching instruction to the camera service module; the camera service module sends first indication information to the physical camera module, where the first indication information is used to instruct the physical camera module to forward a second request message to the virtual camera module; the camera service module sends the second request message to the physical camera module; the physical camera module sends the second request message to the virtual camera module; the virtual camera module calls the second camera to capture the second image according to the second request message; the virtual camera module returns a second data frame to the first application, where the second data frame includes the second image and the image acquisition parameters; the virtual camera module caches the second data frame in the buffer.

With reference to the first aspect, in a possible implementation, the second data frames cached in the buffer all carry the image acquisition parameters.

With reference to the first aspect, in a possible implementation, before the virtual camera module calls the second camera to capture the second image according to the second request message, the method further includes: the virtual camera module detects whether image acquisition parameters exist in the second request message; if so, the virtual camera module replaces the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the second request message.

With reference to the first aspect, in a possible implementation, after responding to the second input operation on the switching control, the method further includes: the camera service module determines whether all the data frames in the buffer carry image acquisition parameters; if so, the camera service module sends a third request message to the physical camera module; the physical camera module sends the third request message to the virtual camera module; the virtual camera module calls the third camera to capture the third image based on the third request message; the virtual camera module returns a third data frame to the camera service module, where the third data frame includes the third image and image acquisition information.

With reference to the first aspect, in a possible implementation, before the virtual camera module calls the third camera to capture the third image based on the third request message, the method further includes: the virtual camera module detects whether image acquisition parameters exist in the third request message; if so, the virtual camera module replaces the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the third request message.
In a second aspect, an embodiment of this application provides an electronic device, including: one or more processors, a display screen, and a memory. The memory is coupled to the one or more processors and is used to store computer program code, where the computer program code includes computer instructions. The one or more processors invoke the computer instructions to cause the electronic device to: control the display screen to display a first interface, where the first interface includes a preview area and a switching control, the preview area displays a first image, the first image is an image captured by a first camera, and the first camera is a camera of the first electronic device; at a first moment, in response to a first input operation, switch the camera used to capture images to a second camera, where the second camera is a camera of a second electronic device; at a second moment, control the display screen to display a second image in the preview area, where the second image is an image captured by the second camera; during image capture by the second camera, the data frames cached in the buffer carry image acquisition parameters, and the first image is captured by the second camera according to parameter values in the image acquisition parameters; at a third moment, respond to a second input operation on the switching control; at a fourth moment, display a third image in the preview area, where the third image is an image captured by a third camera, and the third camera is a camera of the second electronic device.

With reference to the second aspect, in a possible implementation, the one or more processors invoke the computer instructions to cause the electronic device to: detect a third input operation and start a first application; send a first request message to the physical camera module through the camera service module; when it is detected that image acquisition parameters exist in the first request message, store the image acquisition parameters in the virtual camera module through the physical camera module; call the first camera through the physical camera module to capture the first image according to the first request message; and return a first data frame to the first application through the physical camera module, where the first data frame includes the first image and the image acquisition parameters.

With reference to the second aspect, in a possible implementation, the one or more processors invoke the computer instructions to cause the electronic device to: send a first switching instruction to the camera service module through the first application; send first indication information to the physical camera module through the camera service module, where the first indication information is used to instruct the physical camera module to forward a second request message to the virtual camera module; send the second request message to the physical camera module through the camera service module; send the second request message to the virtual camera module through the physical camera module; call the second camera through the virtual camera module to capture the second image according to the second request message; return a second data frame to the first application through the virtual camera module, where the second data frame includes the second image and the image acquisition parameters; and cache the second data frame in the buffer through the virtual camera module.

With reference to the second aspect, in a possible implementation, the one or more processors invoke the computer instructions to cause the electronic device to: detect, through the virtual camera module, whether image acquisition parameters exist in the second request message; if so, replace, through the virtual camera module, the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the second request message.

With reference to the second aspect, in a possible implementation, the one or more processors invoke the computer instructions to cause the electronic device to: determine, through the camera service module, whether all the data frames in the buffer carry image acquisition parameters; if so, send a third request message to the physical camera module through the camera service module; send the third request message to the virtual camera module through the physical camera module; call the third camera through the virtual camera module to capture the third image based on the third request message; and return a third data frame to the camera service module through the virtual camera module, where the third data frame includes the third image and image acquisition information.

With reference to the second aspect, in a possible implementation, the one or more processors invoke the computer instructions to cause the electronic device to: detect, through the virtual camera module, whether image acquisition parameters exist in the third request message; if so, replace, through the virtual camera module, the image acquisition parameters stored in the virtual camera module with the image acquisition parameters in the third request message.

In a third aspect, an embodiment of this application provides an electronic device, including: a touchscreen, a camera, one or more processors, and one or more memories. The one or more processors are coupled to the touchscreen, the camera, and the one or more memories. The one or more memories are used to store computer program code, where the computer program code includes computer instructions. When the one or more processors execute the computer instructions, the electronic device is caused to perform the method described in the first aspect or any possible implementation of the first aspect.

In a fourth aspect, an embodiment of this application provides a chip system, applied to an electronic device. The chip system includes one or more processors, and the processors are used to invoke computer instructions to cause the electronic device to perform the method described in the first aspect or any possible implementation of the first aspect.

In a fifth aspect, an embodiment of this application provides a computer program product containing instructions. When the computer program product runs on an electronic device, the electronic device is caused to perform the method described in the first aspect or any possible implementation of the first aspect.

In a sixth aspect, an embodiment of this application provides a computer-readable storage medium including instructions. When the instructions run on an electronic device, the electronic device is caused to perform the method described in the first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings

FIG. 1A to FIG. 1I are diagrams of example scenarios of a group of camera switching methods provided in embodiments of this application;

FIG. 2 is a flowchart of a camera switching method provided in an embodiment of this application;

FIG. 3A to FIG. 3K are diagrams of example scenarios of another group of camera switching methods provided in embodiments of this application;

FIG. 4 is a flowchart of video recording provided in an embodiment of this application;

FIG. 5A to FIG. 5B are flowcharts of another camera switching method provided in an embodiment of this application;

FIG. 6A to FIG. 6B are flowcharts of another camera switching method provided in an embodiment of this application;

FIG. 7 is a schematic diagram of the hardware structure of an electronic device 100 provided in an embodiment of this application;

FIG. 8 is a schematic diagram of the software structure of the electronic device 100 provided in an embodiment of this application.

Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. Reference to "an embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor does it refer to independent or alternative embodiments that are mutually exclusive with other embodiments. Those skilled in the art understand, both explicitly and implicitly, that the embodiments described herein can be combined with other embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without creative effort fall within the protection scope of this application.

The terms "first", "second", "third", and the like in the specification, claims, and accompanying drawings of this application are used to distinguish different objects, rather than to describe a specific order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, something that includes a series of steps or units optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to these processes, methods, products, or devices.

The accompanying drawings show only parts relevant to this application rather than all of the content. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes operations (or steps) as sequential processing, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations can be rearranged. The processing can be terminated when its operations are completed, but it can also have additional steps not included in the drawings. The processing can correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.

The terms "component", "module", "system", "unit", and the like used in this specification denote computer-related entities: hardware, firmware, combinations of hardware and software, software, or software in execution. For example, a unit can be, but is not limited to, a process running on a processor, a processor, an object, an executable file, a thread of execution, a program, and/or something distributed across two or more computers. In addition, these units can execute from various computer-readable media having various data structures stored thereon. The units can communicate by way of local and/or remote processes based on, for example, a signal having one or more data packets (for example, data from a second unit interacting with another unit in a local system or a distributed system, and/or interacting with other systems by way of the signal across a network such as the Internet).

Below, some terms used in the embodiments of this application are introduced.

1. Virtual camera: a virtual camera refers to the camera of another electronic device that an electronic device calls to capture images. For example, there are electronic device A and electronic device B, and a communication connection can be established between them. When user A uses communication software on electronic device A for video communication with user B, the camera of electronic device A can be used to capture images of user A, which are then sent to user B. Alternatively, the camera of electronic device B can be used to capture images of user A, which are then sent to user B. While user A uses electronic device A for video communication with user B, the camera of electronic device A is a physical camera, and the camera of electronic device B is a virtual camera.

2. Multi-screen collaboration: the multi-screen collaboration function is a distributed technology that enables cross-system, cross-device collaboration. After multiple devices are connected, resources can be shared and operations coordinated. Taking multi-screen collaboration between a smartphone and a tablet as an example, after the tablet enables the multi-screen collaboration function, the smartphone can connect to the tablet, and after the connection succeeds, the smartphone and the tablet can transmit data to each other.
Below, taking different scenarios as examples, the camera switching method provided in the embodiments of this application is described by way of example.

Scenario 1: The electronic device 100 and the electronic device 200 first establish a multi-screen collaboration connection. After the connection is established, the electronic device 100 starts the camera application, and at this time an image captured by the camera of the electronic device 100 is displayed on the interface of the electronic device 100. After the electronic device 100 switches audio and video to the electronic device 200, an image captured by the camera of the electronic device 200 is displayed on the user interface of the electronic device 100.

Below, the above process is described by way of example with reference to FIG. 1A to FIG. 1I. Assume that in FIG. 1A to FIG. 1I, the electronic device 100 is a smartphone and the electronic device 200 is a tablet.

FIG. 1A is an example diagram of the electronic device 100 and the electronic device 200. The electronic device 100 includes a first camera 1011, which is the rear camera of the electronic device 100. The image range that the first camera 1011 can capture is a sector area 101, and this image capture range includes object 1. The electronic device 200 includes a second camera 1012 and a third camera 1013; the second camera 1012 is the rear camera of the electronic device 200, and the third camera 1013 is the front camera of the electronic device 200. The image range that the second camera 1012 can capture is a sector area 102, which includes object 2. The image range that the third camera 1013 can capture is a sector area 103, which includes object 3.

FIG. 1B shows a user interface 20 of the electronic device 200, which includes function icons such as a tablet collaboration icon 201, a phone collaboration icon 202, and a computer collaboration icon 203. After an input operation (for example, a tap) on the phone collaboration icon 202 is detected, in response to the operation, the electronic device 200 displays the user interface 21 shown in FIG. 1C, which includes a QR code 205. The electronic device 100 then scans the QR code 205 with an application having a code-scanning function (for example, the code-scanning function in the built-in camera application or browser of the electronic device 100).

The electronic device 100 displays a connection prompt box 104, shown in FIG. 1D, on the user interface 10. The connection prompt box 104 displays a "Disconnect" icon and a "Switch audio and video to tablet" icon, and is used to indicate that the multi-screen collaboration connection between the electronic device 100 and the electronic device 200 has succeeded.

The embodiments described in FIG. 1B to FIG. 1D above introduce, by way of example, the process of establishing a multi-screen collaboration connection between the electronic device 100 and the electronic device 200. It should be understood that the connection can also be established in other ways. For example, after detecting an input operation on the phone collaboration icon, the electronic device 200 can directly search for surrounding electronic devices and broadcast a connection request message to the found devices; after a device that receives the request message confirms the connection, the electronic device 200 establishes a multi-screen collaboration connection with it. The embodiments of this application describe the way of establishing the multi-screen collaboration connection between the electronic device 100 and the electronic device 200 by way of example only, which should not constitute any limitation on the protection scope of this application. In addition, the embodiments of this application do not limit which device initiates the multi-screen collaboration connection: in the embodiments of FIG. 1B to FIG. 1D above, the initiating device is the electronic device 200, but the initiating device can also be the electronic device 100.

FIG. 1E shows the home screen of the electronic device 100, which includes a camera icon 111, a gallery icon 112, and other application icons. When an input operation (for example, a tap) on the camera icon 111 is detected, in response to the operation, the electronic device 100 displays the shooting interface shown in FIG. 1F.

As shown in FIG. 1F, the shooting interface includes a preview frame 121, which displays an image captured by the first camera of the electronic device 100; object 1 is displayed in the image. At time T1, when the electronic device 100 detects an input operation on the shooting interface (for example, sliding down from the top of the screen), in response to the operation, the electronic device 100 displays the user interface 10 shown in FIG. 1G.

As shown in FIG. 1G, the user interface 10 includes the connection prompt box 104, which displays "Multi-screen collaboration connected" as well as a "Disconnect" icon and a "Switch audio and video to tablet" icon; the connection prompt box 104 is used to indicate that a multi-screen collaboration connection exists between the electronic device 100 and the electronic device 200. When the electronic device 100 detects an input operation (for example, a tap) on the "Switch audio and video to tablet" icon, in response to the operation, the electronic device 100 switches the camera used to capture images from the first camera to the second camera of the electronic device 200.

FIG. 1H shows the shooting interface of the electronic device 100 after the camera is switched. As shown in FIG. 1H, the preview frame 121 of the shooting interface displays an image captured by the second camera of the electronic device 200, which includes object 2. After an input operation (for example, a tap) on the camera switching control 1214 on the shooting interface is detected, in response to the operation, the electronic device 100 switches from the second camera of the electronic device 200 to the third camera and displays the shooting interface shown in FIG. 1I. As shown in FIG. 1I, the preview frame 121 displays an image captured by the third camera, which shows object 3.

The embodiments in FIG. 1E to FIG. 1I above introduce, by way of example, the application scenario in which, after a multi-screen collaboration connection is established between the electronic device 100 and the electronic device 200, the electronic device 100 uses the camera of the electronic device 200 to capture images. It should be understood that in the above embodiments, when the electronic device 100 captures images with its first camera, the first camera is a physical camera; when the electronic device 100 captures images with the second or third camera of the electronic device 200, the second and third cameras are virtual cameras. In addition, "physical camera" and "virtual camera" are relative to the executing subject: for an electronic device running an application that captures images with a camera, the device's own camera is a physical camera, and a camera of another electronic device that it calls is a virtual camera.
Below, with reference to FIG. 2, the process in which the first electronic device switches to a camera of the second electronic device for image capture is described by way of example. Referring to FIG. 2, FIG. 2 is a flowchart of a camera switching method provided in an embodiment of this application. The specific process is as follows:

Step 201: The second electronic device detects a first operation and, in response to the first operation, enables the multi-screen collaboration function.

For example, the second electronic device can be the electronic device 200 in the embodiment of FIG. 1A above, and the first operation can be the input operation on the phone collaboration icon 202 in FIG. 1B above.

Step 202: The first electronic device detects a second operation and, in response to the second operation, establishes a multi-screen collaboration connection with the second electronic device.

Optionally, after enabling the multi-screen collaboration function, the second electronic device can generate a QR code and establish a multi-screen collaboration connection with an electronic device that scans and parses the QR code. Alternatively, after enabling the multi-screen collaboration function, the second electronic device can search for nearby electronic devices and broadcast indication information requesting establishment of the multi-screen collaboration function; after an electronic device receives the indication information and sends a message confirming the connection to the second electronic device, the second electronic device can establish a multi-screen collaboration connection with that device. The embodiments of this application describe one of the ways in which the second electronic device establishes a multi-screen collaboration connection with other electronic devices by way of example, and do not limit the way the connection is established.

Specifically, the first electronic device can be the electronic device 100 in the embodiment of FIG. 1B above, and the second operation can be the operation of the electronic device 100 scanning the QR code 205 in the embodiment of FIG. 1B above.

Step 203: The second electronic device establishes a multi-screen collaboration connection with the first electronic device, and the first electronic device displays a connection prompt box. The connection prompt box includes a first control and is used to indicate that the multi-screen collaboration connection between the second electronic device and the first electronic device has been established.

For example, the connection prompt box can be the connection prompt box 104 in FIG. 1D above, and the first control can be the "Switch audio and video to tablet" icon in the connection prompt box 104.

Steps 201 to 203 above describe the process of establishing the multi-screen collaboration connection between the second electronic device and the first electronic device. It should be understood that steps 201 to 203 only describe one way of establishing the connection by way of example, namely with the second electronic device as the initiating device and the first electronic device as the accepting device. In some embodiments, the first electronic device can also be the initiating device and the second electronic device the accepting device, which is not limited in the embodiments of this application. Steps 201 to 203 can also be optional steps.
Step 204: The first electronic device detects a first input operation, and the first application starts.

For example, the first application can be the camera application corresponding to the camera icon 111 in the embodiment of FIG. 1E above, and the first input operation can be the input operation on the camera icon 111 detected in FIG. 1E above. The first application can also be another application with a shooting function, for example, WeChat or a browser.

Step 205: The first electronic device displays a first interface. The first interface includes a preview area and a switching control, the preview area displays an image captured by a first camera, and the first camera is a camera of the first electronic device.

For example, the first interface can be the shooting interface in the embodiment of FIG. 1F above, the preview area can be the preview frame 121 described in the embodiment of FIG. 1F above, the switching control can be the switching control 1214 in the above shooting interface, and the first camera can be the first camera of the electronic device 100 in the embodiment of FIG. 1A above.

Specifically, after starting the first application, the first electronic device also starts the first camera. Taking the first camera of the first electronic device being the rear camera as an example, after the first electronic device starts the rear camera, the rear camera captures images, and the captured images are displayed on the first interface.

Step 206: At a first moment, the first electronic device displays a connection prompt box, and the connection prompt box includes a first control.

Step 207: The first electronic device detects a second input operation on the first control and, in response to the second input operation, switches the first camera to a second camera, where the second camera is a camera of the second electronic device.

For example, the first control can be the "Switch audio and video to tablet" icon in the embodiment of FIG. 1G above, and the second input operation can be the input operation on the "Switch audio and video to tablet" icon in the embodiment of FIG. 1G above.

Specifically, after the first electronic device detects the second input operation on the first control, the first electronic device switches the first camera to the second camera of the second electronic device. In this case, the first camera no longer captures images and the second camera captures images. Alternatively, the first camera continues to capture images, but the images captured by the first camera are not displayed on the first interface; the images captured by the second camera are displayed on the first interface. This is not limited in the embodiments of this application.

Step 208: At a second moment, the first electronic device displays an image captured by the second camera in the preview area of the first interface.

Specifically, after the first electronic device switches the first camera to the second camera, the image captured by the first camera is no longer displayed in the preview area of the first interface; the image captured by the second camera is displayed instead.

Step 209: At a third moment, after the first electronic device detects a third input operation on the switching icon on the first interface, in response to the third input operation, at a fourth moment, an image captured by a third camera is displayed in the preview area of the first interface, where the third camera is a camera of the second electronic device and is different from the second camera.

Steps 201 to 209 above describe the process in which the first electronic device, while capturing images with its first camera, switches the first camera to the second camera of the second electronic device to capture images, and then switches the second camera to the third camera.
Scenario 2: After the electronic device 100 establishes a multi-screen collaboration connection with the electronic device 200, the electronic device 100 uses the camera of the electronic device 200 to record video at a first frame rate. After the recording is completed, a first video file is generated, and the video frame rate of the first video file is the first frame rate. After the electronic device 100 changes the video frame rate to a second frame rate, it uses the camera of the electronic device 200 to record a second video. After the recording is completed, a second video file is generated, and the frame rate of the second video file is the second frame rate. The frame rates of the two video files are different. Below, the above process is described by way of example with reference to FIG. 3A to FIG. 3K.

For the description of establishing the multi-screen collaboration connection between the electronic device 100 and the electronic device 200, reference can be made to the embodiments of FIG. 1A to FIG. 1D above, which will not be repeated here.

An example diagram of the electronic device 100 and the electronic device 200 is shown in FIG. 1A above.

FIG. 3A shows the preview interface of the electronic device 100, which includes a preview frame 301, a frame rate information box 302, and a start-recording control 303. The preview frame 301 is used to display images captured by the camera, and the frame rate information box 302 is used to display the video shooting frame rate; as shown in FIG. 3A, the current video recording frame rate is 30. The image currently displayed in the preview frame 301 is an image captured by the first camera of the electronic device 100, which includes object 1. When the electronic device 100 detects an input operation on the preview interface (for example, sliding down from the top of the screen), in response to the operation, the electronic device 100 displays the user interface shown in FIG. 3B.

As shown in FIG. 3B, the user interface includes the connection prompt box 104, which includes a "Disconnect" icon and a "Switch audio and video to tablet" icon and is used to indicate that a multi-screen collaboration connection exists between the electronic device 100 and the electronic device 200. When the electronic device 100 detects an input operation (for example, a tap) on the "Switch audio and video to tablet" icon, in response to the operation, the electronic device 100 switches the first camera to the camera of the electronic device 200.

FIG. 3C shows the preview interface of the electronic device 100 after the camera is switched. At this time, the preview frame 301 displays an image captured by the second camera of the electronic device 200, which includes object 2. When an input operation (for example, a tap) on the start-recording control 303 is detected, in response to the operation, the electronic device 100 starts recording video and displays the video recording interface shown in FIG. 3D.

The video recording interface shown in FIG. 3D includes a recording time display box 305, the frame rate information box 302, and a stop-recording control 304. From the recording time display box 305 and the frame rate information box 302, it can be seen that the current recording time is 30 seconds and the video frame rate is 30 Hz. When an input operation (for example, a tap) on the stop-recording control 304 is detected, in response to the operation, the electronic device 100 saves the current video file, which is the first video file; the frame rate of the first video file is 30 Hz.

After the video recording ends, the electronic device 100 can display the preview interface shown in FIG. 3E. Since the electronic device 100 has switched the camera to the second camera of the electronic device 200, the preview frame 301 displays an image captured by the second camera, which includes object 2.

At time T1, the electronic device 100 detects an input operation on the preview interface (for example, sliding down from the top of the screen) and, in response to the operation, displays the user interface shown in FIG. 3F. As shown in FIG. 3F, the user interface includes the connection prompt box 104, which includes a "Disconnect" icon and a "Switch audio and video to phone" icon and is used to indicate that a multi-screen collaboration connection exists between the electronic device 100 and the electronic device 200. When the electronic device 100 detects an input operation (for example, a tap) on the "Switch audio and video to phone" icon, in response to the operation, the electronic device 100 switches from the second camera to the first camera of the electronic device 100.

FIG. 3G shows the preview interface of the electronic device 100 after the camera is switched. At this time, the preview frame 301 displays an image captured by the first camera of the electronic device 100, which includes object 1. When an input operation (for example, a tap) on the frame rate information box 302 is detected, in response to the operation, the electronic device 100 changes the video frame rate. It should be understood that the embodiments of this application only describe one way of changing the frame rate by way of example and do not limit the way the video frame rate is changed.

As shown in FIG. 3H, the frame rate information box 302 shows that the video frame rate is now 60 Hz. When the electronic device 100 detects an input operation on the preview interface (for example, sliding down from the top of the screen), in response to the operation, the electronic device 100 displays the user interface shown in FIG. 3I.

As shown in FIG. 3I, the user interface includes the connection prompt box 104, which includes a "Disconnect" icon and a "Switch audio and video to tablet" icon and is used to indicate that a multi-screen collaboration connection exists between the electronic device 100 and the electronic device 200. When the electronic device 100 detects an input operation (for example, a tap) on the "Switch audio and video to tablet" icon, in response to the operation, the electronic device 100 switches the first camera to the camera of the electronic device 200.

FIG. 3J shows the preview interface of the electronic device 100 after the camera is switched. At this time, the preview frame 301 displays an image captured by the second camera of the electronic device 200, which includes object 2. When an input operation (for example, a tap) on the start-recording control is detected, in response to the operation, the electronic device 100 starts recording video and displays the video recording interface shown in FIG. 3K.

The video recording interface shown in FIG. 3K includes the recording time display box 305, the frame rate information box 302, and the stop-recording control 304. From the recording time display box 305 and the frame rate information box 302, it can be seen that the current recording time is 30 seconds and the video frame rate is 60 Hz. When an input operation (for example, a tap) on the stop-recording control 304 is detected, in response to the operation, the electronic device 100 saves the current video file, which is the second video file; the frame rate of the second video file is 60 Hz.
下面,结合图4,对上述图3A-图3K实施例中,第一电子设备进行视频录制的流程进行示例性说明。示例性的,第一电子设备可以为上述图3A-图3K实施例中的电子设备100,第二电子设备可以为上述图3A-图3K中的电子设备200。第一电子设备与第二电子设备之间已建立了多屏协同连接。请参见图4,图4是本申请实施例提供的一种视频录制的流程图,具体流程如下:
步骤401:第一电子设备显示第一录制界面,所述第一录制界面包括第二控件和预览区域,所述预览区域当前显示第一摄像头采集的图像,第一摄像头为第一电子设备的第一摄像头。
示例性的，第一录制界面可以为上述图3A实施例所述的预览界面，第二控件可以为上述图3A实施例中的开始录制控件303。此时，第一电子设备视频录制的帧率为第一帧率，第一帧率可以为第一电子设备的默认帧率（例如，30Hz）。
具体地,电子设备在启动第一应用后,显示第一录制界面。第一应用可以为具备录制视频功能的应用,例如,相机应用、微信等应用。
步骤402:在第一时刻,检测到第一输入操作,响应第一输入操作,第一电子设备将第一摄像头切换为第二摄像头,第二摄像头为第二电子设备的摄像头。
示例性的,第一输入操作可以为上述图3A-图3B实施例中,针对预览界面的输入操作以及针对“音视频切换到平板”图标的输入操作。
步骤403:在第二时刻,第一电子设备显示第一录制界面,所述第一录制界面包括第二控件和预览区域,所述预览区域显示第二摄像头采集的图像。
示例性的,第一录制界面可以为上述图3C所示的预览界面。
步骤404:检测到针对第二控件的第二输入操作,响应第二输入操作,第一电子设备录制第一视频,所述第一视频的帧率为第一帧率。
示例性的，第二输入操作可以为上述图3C实施例中针对开始录制控件303的输入操作。
步骤405:在第三时刻,检测到第三输入操作,响应第三输入操作,第一电子设备停止录制视频,保存第一视频文件,第一视频文件的帧率为第一帧率。
示例性的,第三输入操作可以为上述图3D实施例中,针对停止录制控件304的输入操作。
步骤406：在第四时刻，检测到第四输入操作，响应第四输入操作，第一电子设备将第二摄像头切换为第一摄像头。
示例性的,第四输入操作可以为上述图3E-图3F实施例中,针对预览界面的输入操作(例如,从屏幕顶部下滑)以及针对“切换音视频到手机”图标的输入操作。
步骤407:在第五时刻,第一电子设备显示第二录制界面,所述第二录制界面包括第二控件和预览区域,所述预览区域显示第一摄像头采集的图像。
示例性的,第二录制界面可以为上述图3G所示的预览界面。
步骤408:检测到第五输入操作,响应第五输入操作,第一电子设备将视频帧率调整为第二帧率,第二帧率与第一帧率不同。
示例性的,第五输入操作可以为上述图3G实施例中,针对帧率信息框302的输入操作。
步骤409:在第六时刻,检测到第六输入操作,响应第六输入操作,第一电子设备将第一摄像头切换为第二摄像头。
示例性的,第六输入操作可以为上述图3H-图3I中,针对预览界面的输入操作以及针对“音视频切换到平板”图标的输入操作。
步骤410:在第七时刻,第一电子设备显示第二录制界面,所述第二录制界面包括第二控件和预览区域,所述预览区域显示第二摄像头采集的图像。
示例性的,第二录制界面可以为上述图3J实施例中的预览界面。
步骤411:检测到针对第二控件的第七输入操作,响应第七输入操作,第一电子设备开始录制第二视频,所述第二视频的帧率为第二帧率。
示例性的,第七输入操作可以为上述图3J实施例中,针对开始录制控件的输入操作。
步骤412:在第八时刻,检测到第八输入操作,响应第八输入操作,第一电子设备停止录制视频,保存第二视频文件,第二视频文件的帧率为第二帧率,第二帧率与第一帧率不同。
示例性的,第八输入操作可以为上述图3K中,检测到针对停止录制控件304的输入操作。
在上述场景1和场景2的实施例中,都使用了虚拟相机来采集图像,即:第一电子设备将其本身的摄像头(第一摄像头)切换为第二电子设备的摄像头(第二摄像头)来采集图像,从而实现对其它电子设备摄像头的使用。在第一电子设备使用虚拟相机的过程中,各功能模块之间会进行交互,使得第二电子设备摄像头采集的图像能够传送给第一电子设备。
下面,结合图5A-图5B,对上述图2实施例中,第一电子设备切换第二电子设备的摄像头采集图像的过程中,各个功能模块之间的交互流程进行说明。其中,第二电子设备是与第一电子设备建立了多屏协同连接的电子设备。示例性的,第一电子设备可以为上述图1A实施例中的电子设备100,第二电子设备可以为上述图1A实施例中的电子设备200。
第一电子设备包括相机服务模块(CameraService)、物理相机模块、虚拟相机模块以及第一应用。其中，相机服务模块位于安卓软件架构中的应用程序框架层，物理相机模块和虚拟相机模块位于安卓软件架构中的硬件抽象层(HAL层)，第一电子设备调用虚拟相机采集图像的流程如下所示：
步骤501:相机服务模块向物理相机模块发送第一配置请求。
具体的,第一电子设备在启动摄像头之前,会启动物理相机模块,以便物理相机模块配置相关参数。
可选的，在相机服务模块发送第一配置请求之前，第一电子设备可以启动第一应用，第一应用为具备调用摄像头采集图像功能的应用，例如，相机应用、微信等。第一应用启动后，可以向相机服务模块发送触发指令，该触发指令用于触发相机服务模块向物理相机模块发送第一配置请求。
步骤502:相机服务模块向物理相机模块发送第一请求消息。
具体的,第一请求消息用于指示物理相机模块调用第一摄像头采集图像,第一摄像头为第一电子设备的摄像头,第一请求消息包括标识信息。
相机服务模块在请求每帧图像之前,都需要发送请求消息,物理相机模块/虚拟相机模块在接收到对应的请求消息后,才会调用摄像头采集图像,并返回采集的图像。
可选的,在相机服务模块向物理相机模块发送第一请求消息之前,相机服务模块可以向物理相机模块发送第一指示信息,第一指示信息可以包括第一摄像头的标识信息,第一指示信息用于指示物理相机模块使用第一摄像头采集图像,并将第一摄像头采集的图像上传。
步骤503:物理相机模块根据第一请求消息,调用第一摄像头采集第一图像。
步骤504:物理相机模块向相机服务模块发送第一数据帧。
具体的,第一数据帧中包括第一图像和图像采集参数。
物理相机模块每次在接收到第一请求消息后，会解析第一请求消息。若在第一请求消息中包括图像采集参数，会在第一数据帧中添加本次接收的第一请求消息中的图像采集参数。若在本次接收的第一请求消息中未解析到图像采集参数，则将存储的之前接收的第一请求消息中的图像采集参数添加进第一数据帧中。其中，图像采集参数可以为Setting数据，在Setting数据中包括最大帧率、曝光补偿、编码方式等元数据。
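上述物理相机模块解析请求消息并补全图像采集参数的逻辑，可以用如下示意代码概括（仅为说明用的草图，类名、字段名均为假设，并非真实的HAL接口实现）：

```python
class PhysicalCameraModule:
    """示意性的物理相机模块：解析请求消息，并在返回的数据帧中补全图像采集参数。"""

    def __init__(self):
        # 保存最近一次从请求消息中解析到的图像采集参数（Setting数据）
        self.stored_settings = None

    def handle_request(self, request):
        # 若本次请求消息中携带图像采集参数，则更新本地保存的参数
        if request.get("settings") is not None:
            self.stored_settings = request["settings"]
        image = self.capture(request)
        # 返回的数据帧中携带图像、请求标识以及（本次或之前保存的）图像采集参数
        return {"image": image, "settings": self.stored_settings,
                "request_id": request["id"]}

    def capture(self, request):
        # 示意：按请求调用第一摄像头采集一帧图像
        return f"image-for-{request['id']}"


phys = PhysicalCameraModule()
f1 = phys.handle_request({"id": 1, "settings": {"max_fps": 30}})
f2 = phys.handle_request({"id": 2})  # 本次请求未携带参数，沿用已保存的参数
print(f2["settings"])  # {'max_fps': 30}
```

该草图体现的设计点是：请求消息中的图像采集参数是可选的，数据帧的完整性由模块内保存的最新参数兜底。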
步骤505:相机服务模块将第一数据帧发送给第一应用。
具体的,相机服务模块可以将其接收的第一数据帧上传给第一应用,以便第一应用可以基于第一数据帧获取第一图像,并将第一图像显示在第一界面上。
示例性的，第一应用可以为上述图1E实施例中相机图标对应的相机应用，第一界面可以为上述图1F实施例中的拍摄界面。
可选的,相机服务模块还可以将接收的第一数据帧存储到缓冲区(Buffer)中。其中,Buffer用于暂时存储相机服务模块接收的数据帧,在数据帧中包括摄像头采集的图像。由于Buffer的空间有限,因此,Buffer采用先进先出的方式来存储数据帧,即:当Buffer中存储的数据帧的数量大于或等于Buffer存储的数据帧的上限值N之后,Buffer会清除先前存储的数据帧,以留出存储空间来存储新的数据帧。
示例性的,假设Buffer能够存储的数据帧的上限值为4,按照时间从先到后的顺序,当前Buffer中已存储了数据帧1、数据帧2、数据帧3以及数据帧4。当相机服务模块接收到物理相机模块发送的数据帧5时,相机服务模块需要将数据帧1从Buffer中清除,才能将数据帧5存入Buffer中。
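上述缓冲区先进先出的存储方式可以用如下示意代码表示（仅为说明用的草图，其中上限值4对应上文示例）：

```python
from collections import deque


class FrameBuffer:
    """示意性的数据帧缓冲区：采用先进先出的方式存储数据帧。"""

    def __init__(self, capacity):
        # maxlen 使 deque 在存满后自动清除最早存入的数据帧，为新帧留出空间
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        self.frames.append(frame)

    def contents(self):
        return list(self.frames)


# 对应上文示例：上限值为4，已存入数据帧1~数据帧4，再存入数据帧5
buf = FrameBuffer(capacity=4)
for i in range(1, 5):
    buf.push(f"数据帧{i}")
buf.push("数据帧5")
print(buf.contents())  # 数据帧1被清除，保留数据帧2~数据帧5
```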
步骤506:第一应用基于第一数据帧得到第一图像,并将第一图像显示在第一界面上。
步骤507:在第一时刻,检测到第一输入操作,第一应用向相机服务模块发送第一切换指令。
步骤508:相机服务模块向物理相机模块发送第二指示信息。
具体的,第二指示信息用于指示物理相机模块将相机服务模块发送的请求消息发送给虚拟相机模块。
步骤509:相机服务模块向物理相机模块发送第二请求消息,所述第二请求消息用于指示虚拟相机模块调用第二摄像头采集第二图像。
具体的,第二请求消息用于指示虚拟相机模块调用第二摄像头采集第二图像。第二请求消息包括第二请求消息的标识信息,也可以包括第二摄像头的标识信息,第二摄像头为第二电子设备的摄像头。
可选的，物理相机模块在接收到第二请求消息之后，可以检测虚拟相机模块是否处于工作状态。若虚拟相机模块不处于工作状态，物理相机模块可以根据第二请求消息，使用第一摄像头采集图像，并向相机服务模块返回相应的数据帧；在下一次检测到虚拟相机模块处于工作状态的情况下，再将相机服务模块发送的第二请求消息发送给虚拟相机模块。这样，第一电子设备在将第一摄像头首次切换到第二电子设备的摄像头的过程中，可以有效避免在虚拟相机模块启动过程中切换摄像头而造成没有摄像头采集图像，进而使得第一电子设备的第一界面中出现画面中断的情况。
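上述兜底逻辑（虚拟相机模块未就绪时仍由物理相机模块采集图像，避免画面中断）可以用如下示意代码表示（仅为说明用的草图，命名均为假设）：

```python
class CameraPipeline:
    """示意性草图：根据虚拟相机模块的工作状态决定由谁响应请求消息。"""

    def __init__(self):
        self.virtual_ready = False  # 虚拟相机模块是否处于工作状态

    def dispatch(self, request_id):
        if self.virtual_ready:
            # 虚拟相机模块已就绪：转发请求，由第二摄像头采集图像
            return ("virtual", f"frame-{request_id}")
        # 未就绪：仍用第一摄像头采集图像，保证第一界面不出现画面中断
        return ("physical", f"frame-{request_id}")


pipe = CameraPipeline()
print(pipe.dispatch(1))   # 虚拟相机模块未就绪，物理相机模块兜底
pipe.virtual_ready = True
print(pipe.dispatch(2))   # 就绪后，请求转发给虚拟相机模块
```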
步骤510:物理相机模块将第二请求消息发送给虚拟相机模块。
步骤511:虚拟相机模块根据第二请求消息,调用第二摄像头采集第二图像。
具体地,在接收到物理相机模块发送的第二请求消息后,虚拟相机模块调用第二摄像头采集第二图像。
步骤512:虚拟相机模块向相机服务模块发送第二数据帧。
具体的,在第二数据帧中包括第二摄像头采集的第二图像帧、第二请求消息的标识信息。
可选的，虚拟相机模块在接收到物理相机模块发送的第二请求消息后，会对第二请求消息进行解析。若在第二请求消息中包括图像采集参数，会在第二数据帧中添加本次接收的第二请求消息中的图像采集参数，并保存该图像采集参数。若在本次接收的第二请求消息中未解析到图像采集参数，则将存储的之前接收的第二请求消息中的图像采集参数添加进第二数据帧中。若虚拟相机模块未存有图像采集参数，则在第二数据帧中不添加图像采集参数。
步骤513:相机服务模块将第二数据帧发送给第一应用。
具体的,相机服务模块可以将其接收的第二数据帧上传给第一应用,以便第一应用可以基于第二数据帧获取第二图像,并将第二图像显示在第一界面上。
可选的，相机服务模块还可以将接收的第二数据帧存储到缓冲区(Buffer)中。
步骤514:在第二时刻,检测到第二输入操作,第一应用向相机服务模块发送第二切换指令。
具体的，第二切换指令用于指示相机服务模块将采集图像的第二摄像头切换为第二电子设备的第三摄像头。可选的，在第二切换指令中可以包括第三摄像头的标识信息。
示例性的,第二输入操作可以为上述图1G实施例中,针对“音视频切换到平板”图标的输入操作。
步骤515:相机服务模块向物理相机模块发送第三指示信息。
具体的,第三指示信息用于指示物理相机模块将相机服务模块发送的请求消息发送给虚拟相机模块。
步骤516:相机服务模块向物理相机模块发送第三请求消息,所述第三请求消息用于指示虚拟相机模块调用第三摄像头采集第三图像。
具体的,第三请求消息用于指示虚拟相机模块调用第三摄像头采集第三图像。第三请求消息包括第三请求消息的标识信息,也可以包括第三摄像头的标识信息,第三摄像头为第二电子设备的摄像头。
步骤517:物理相机模块将第三请求消息发送给虚拟相机模块。
步骤518:虚拟相机模块根据第三请求消息,调用第三摄像头采集第三图像。
具体地,在接收到物理相机模块发送的第三请求消息后,虚拟相机模块调用第三摄像头开始采集第三图像。第三摄像头为第二电子设备的摄像头。
步骤519:虚拟相机模块向相机服务模块发送第三数据帧。
具体的，在第三数据帧中包括第三摄像头采集的第三图像帧、第三请求消息的标识信息。
可选的，虚拟相机模块在接收到物理相机模块发送的第三请求消息后，会对第三请求消息进行解析。若在第三请求消息中包括图像采集参数，会在第三数据帧中添加本次接收的第三请求消息中的图像采集参数，并保存该图像采集参数。若在本次接收的第三请求消息中未解析到图像采集参数，则将存储的之前接收的第三请求消息中的图像采集参数添加进第三数据帧中。若虚拟相机模块未存有图像采集参数，则在第三数据帧中不添加图像采集参数。
步骤520:相机服务模块将第三数据帧发送给第一应用。
具体的,相机服务模块可以将其接收的第三数据帧上传给第一应用,以便第一应用可以基于第三数据帧获取第三图像,并将第三图像显示在第一界面上。
可选的，相机服务模块还可以将接收的第三数据帧存储到缓冲区(Buffer)中。
上述图5A-图5B实施例对第一电子设备在采集图像时，切换虚拟相机的过程中各模块的交互流程进行了说明。第一电子设备在启动摄像头采集图像时，相机服务模块向物理相机模块发送的首帧图像对应的请求消息中包括图像采集参数。物理相机模块每次接收到请求消息后，会解析请求消息中的图像采集参数并存储；当物理相机模块向相机服务模块返回数据帧之前，会将其存储的最新的图像采集参数添加到数据帧中，再返回数据帧。同理，当虚拟相机模块接收到相机服务模块发送的请求消息时，也会解析请求消息中的图像采集参数，并存储该图像采集参数，以便向相机服务模块返回数据帧之前，在数据帧中添加图像采集参数，保证返回的数据帧的完整性。其中，图像采集参数可以为Setting(设置)数据，在Setting数据中包括最大帧率、曝光补偿数据、编码方式等元数据。物理相机模块在接收到Setting参数后，可以根据Setting参数中的元数据采集图像。例如，物理相机模块可以按照Setting中的最大帧率采集图像，也可以按照Setting中的曝光补偿数据对采集的图像进行曝光补偿，还可以根据Setting中的编码方式对采集的图像进行编码等。
在一些实施例中，相机服务模块仅在首帧图像对应的请求消息中添加图像采集参数。当物理相机模块接收到相机服务模块发送的首帧图像请求消息后，存储该请求消息中的图像采集参数。这时，虚拟相机模块中未存储图像采集参数，且相机服务模块后续发送的请求消息中不包括图像采集参数，这就造成虚拟相机模块后续一直没有图像采集参数。因此，当摄像头切换到第二电子设备的第二摄像头或第三摄像头时，在相机服务模块发送的请求消息中不包括图像采集参数的情况下，虚拟相机模块返回的数据帧中都不包括图像采集参数。这就使得，虚拟相机模块传回的数据帧是不完整的数据帧。
当第一电子设备将采集图像的第二摄像头切换为第三摄像头之前，相机服务模块会对缓冲区中缓存的数据帧的完整性进行检查。相机服务模块若检测到缓冲区中存在不完整的数据帧（即没有图像采集参数的数据帧），会在第一时长内暂停摄像头的切换，以便在第一时长内等待虚拟相机模块上传完整的数据帧。相机服务模块暂停摄像头切换的方式可以为：在第一时长内不向物理相机模块发送切换摄像头的指示信息，或者，在第一时长内不发送请求消息。在第一时长后，不管虚拟相机模块是否发送了完整的数据帧，相机服务模块都会切换摄像头。第一时长的长度可以根据缓冲区中不完整数据帧的数量决定，不完整数据帧的数量越多，第一时长越长（例如，第一时长一般为5~15秒）。
在这种情况下,当第一电子设备从虚拟摄像头(例如上述图5A-图5B实施例中的第二摄像头)切换到其他摄像头时,会发生较长时间的延迟,导致用户不能很快地切换摄像头,从而降低用户体验,即在上述图2的实施例中,第四时刻与第三时刻的差值大于或等于第一时长。
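上述完整性检查与暂停切换的过程可以用如下示意代码概括（仅为说明用的草图：按不完整数据帧数量折算第一时长的具体方式为假设，原文仅说明数量越多时长越长，一般为5~15秒）：

```python
def switch_delay_seconds(buffered_frames, base=5, per_incomplete=2, cap=15):
    """示意：根据缓冲区中不完整数据帧（缺少图像采集参数的数据帧）的数量，
    估算切换摄像头前需要暂停等待的第一时长。折算系数均为说明用的假设。"""
    incomplete = sum(1 for f in buffered_frames if f.get("settings") is None)
    if incomplete == 0:
        return 0  # 数据帧均完整，无需暂停，可立即切换摄像头
    # 不完整数据帧越多，第一时长越长，并限制在上限以内
    return min(base + per_incomplete * incomplete, cap)


frames = [{"image": "a", "settings": {"max_fps": 30}},
          {"image": "b", "settings": None},
          {"image": "c", "settings": None}]
print(switch_delay_seconds(frames))  # 两帧不完整 -> 5 + 2*2 = 9 秒
print(switch_delay_seconds([{"image": "a", "settings": {"max_fps": 30}}]))  # 0
```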
因此,针对上述问题,本申请实施例提出了另一种摄像头的切换方法,该方法为:物理相机模块在每次接收到相机服务模块发送的请求消息后,物理相机模块解析该请求消息中是否存在图像采集参数。若存在,物理相机模块将该图像采集参数存入虚拟相机模块中。这样,当虚拟相机模块返回数据帧时,在其接收到的请求消息中没有图像采集参数时,虚拟相机模块也可以将其存储的图像采集参数添加到数据帧中,保证其返回的数据帧的完整性,从而减少第一电子设备从虚拟摄像头切换到其他摄像头的延迟时间。
下面,结合图6A-图6B,对上述方法的流程进行说明。请参见图6A-图6B,图6A-图6B为本申请实施例提供的另一种摄像头切换方法的流程图,具体流程如下:
步骤601:相机服务模块向物理相机模块发送第一配置请求。
步骤602:相机服务模块向物理相机模块发送第一请求消息。
步骤601-步骤602请参见上述图5A实施例中的步骤501-步骤502,在此不再赘述。
步骤603:物理相机模块在检测到第一请求消息中存在图像采集参数的情况下,将图像采集参数存储到虚拟相机模块中。
具体的,物理相机模块在接收到第一请求消息后,会解析第一请求消息,判断第一请求消息中是否存在图像采集参数。若存在,物理相机模块将图像采集参数存储到虚拟相机模块中。这样,当将采集图像的摄像头由第一电子设备的摄像头切换为第二电子设备的摄像头时,虚拟相机模块接收到的请求消息中,如果不存在图像采集参数,虚拟相机模块也可以将物理相机模块为其存储的图像采集参数添加到数据帧中,确保其返回的数据帧的完整性。这样,当第一电子设备将采集图像的摄像头由第二电子设备的摄像头切换到其它摄像头时,可以减少摄像头切换的延迟时间,提高用户体验。
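上述改进方法的核心（物理相机模块将图像采集参数预先存入虚拟相机模块，使虚拟相机模块返回的数据帧始终完整）可以用如下示意代码表示（仅为说明用的草图，类与字段命名均为假设）：

```python
class VirtualCameraModule:
    """示意性的虚拟相机模块：返回数据帧时补上已存储的图像采集参数。"""

    def __init__(self):
        self.stored_settings = None

    def handle_request(self, request):
        # 若请求消息中携带图像采集参数，则更新存储的参数
        if request.get("settings") is not None:
            self.stored_settings = request["settings"]
        # 数据帧中始终携带（本次或预先存入的）图像采集参数
        return {"image": f"remote-image-{request['id']}",
                "settings": self.stored_settings}


class PhysicalCameraModule:
    """示意：收到携带图像采集参数的请求后，将参数同步存入虚拟相机模块。"""

    def __init__(self, virtual):
        self.virtual = virtual

    def handle_request(self, request):
        if request.get("settings") is not None:
            # 改进点：把图像采集参数预先存入虚拟相机模块
            self.virtual.stored_settings = request["settings"]
        return {"image": f"local-image-{request['id']}",
                "settings": request.get("settings")}


virt = VirtualCameraModule()
phys = PhysicalCameraModule(virt)
phys.handle_request({"id": 1, "settings": {"max_fps": 30}})  # 首帧请求携带参数
frame = virt.handle_request({"id": 2})  # 切换后的请求不携带参数
print(frame["settings"])  # 虚拟相机模块仍能返回完整数据帧
```

这样，即使切换摄像头后相机服务模块下发的请求消息中不含图像采集参数，虚拟相机模块返回的数据帧也不会缺少该参数，从而避免切换时的完整性检查引入延迟。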
在一些实施例中,当物理相机模块接收到的第一请求消息中包括图像采集参数,虚拟相机模块未处于工作状态时,若虚拟相机模块中已存在图像采集参数,物理相机模块可以将虚拟相机模块中的图像采集参数替换为物理相机模块当前接收的第一请求消息中的图像采集参数。
步骤604:物理相机模块根据第一请求消息,调用第一摄像头采集第一图像。
步骤605:物理相机模块向相机服务模块发送第一数据帧。
步骤606:相机服务模块将第一数据帧发送给第一应用。
步骤607:第一应用基于第一数据帧得到第一图像,并将第一图像显示在第一界面上。
步骤608:在第一时刻,检测到第一输入操作,第一应用向相机服务模块发送第一切换指令。
步骤609:相机服务模块向物理相机模块发送第二指示信息。
步骤610：相机服务模块向物理相机模块发送第二请求消息，所述第二请求消息用于指示虚拟相机模块调用第二摄像头采集第二图像。
步骤604-步骤610请参见上述图5A实施例中的步骤503-步骤509,在此不再赘述。
步骤611:物理相机模块将第二请求消息发送给虚拟相机模块。
具体的,虚拟相机模块在接收到第二请求消息后,解析第二请求消息,判断第二请求消息中是否存在图像采集参数。若存在,虚拟相机模块将其存储的图像采集参数更新为所述第二请求消息中的图像采集参数。
步骤612:虚拟相机模块根据第二请求消息,调用第二摄像头采集第二图像。
步骤613:虚拟相机模块向相机服务模块发送第二数据帧。
具体的,在第二数据帧中包括第二摄像头采集的第二图像帧、第二请求消息的标识信息以及图像采集参数。
步骤614:相机服务模块将第二数据帧发送给第一应用。
步骤615:在第二时刻,检测到第二输入操作,第一应用向相机服务模块发送第二切换指令。
步骤616:相机服务模块向物理相机模块发送第三指示信息。
步骤614-步骤616请参见上述图5B实施例中的步骤513-步骤515，在此不再赘述。
步骤617:相机服务模块向物理相机模块发送第三请求消息,所述第三请求消息用于指示虚拟相机模块调用第三摄像头采集第三图像。
具体的,第三请求消息用于指示虚拟相机模块调用第三摄像头采集第三图像。第三请求消息包括第三请求消息的标识信息,也可以包括第三摄像头的标识信息,第三摄像头为第二电子设备的摄像头。
步骤618:物理相机模块将第三请求消息发送给虚拟相机模块。
具体的,虚拟相机模块在接收到第三请求消息后,解析第三请求消息,判断第三请求消息中是否存在图像采集参数。若存在,虚拟相机模块将其存储的图像采集参数更新为所述第三请求消息中的图像采集参数。
步骤619:虚拟相机模块根据第三请求消息,调用第三摄像头采集第三图像。
步骤620:虚拟相机模块向相机服务模块发送第三数据帧。
具体的,在第三数据帧中包括第三摄像头采集的第三图像帧、第三请求消息的标识信息以及图像采集参数。
步骤621:相机服务模块将第三数据帧发送给第一应用。
具体的,相机服务模块可以将其接收的第三数据帧上传给第一应用,以便第一应用可以基于第三数据帧获取第三图像,并将第三图像显示在第一界面上。
在上述图6A-图6B实施例中，在第一电子设备启动摄像头采集图像时，物理相机模块在接收到相机服务模块下发的请求消息后，在得到请求消息中的图像采集参数后，会将图像采集参数存入虚拟相机模块，使得虚拟相机模块中存储了图像采集参数。这样，即使后续相机服务模块下发的请求消息中不存在图像采集参数，虚拟相机模块在调用第二电子设备的摄像头采集图像时，也可以在数据帧中添加之前物理相机模块为其存储的图像采集参数，保证其传回的数据帧的完整性。避免了第一电子设备在将第二电子设备的摄像头切换为其它摄像头的过程中，由于之前虚拟相机模块传回的数据帧不完整而导致摄像头切换时间长、降低用户体验的问题。
下面对电子设备100的结构进行介绍。请参阅图7,图7是本申请实施例提供的电子设备100的硬件结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器122,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是，本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中，电子设备100可以包括比图7所示更多或更少的部件，或者组合某些部件，或者拆分某些部件，或者不同的部件布置。图7所示的部件可以以硬件，软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如Wi-Fi网络)，蓝牙(BlueTooth,BT)，BLE广播，全球导航卫星系统(global navigation satellite system,GNSS)，调频(frequency modulation,FM)，近距离无线通信技术(near field communication,NFC)，红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波，将电磁波信号调频以及滤波处理，将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号，对其进行调频，放大，经天线2转为电磁波辐射出去。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极管(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，电子设备100可以包括1个或N个显示屏194，N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号、降噪、还可以识别声音来源,实现定向录音功能等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。
在本申请实施例中,电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
如图8所示，该电子设备可包括：应用程序层、应用程序框架层、硬件抽象层(hardware abstraction layer,HAL)及内核层(kernel)。其中：
应用程序层可以包括一系列应用程序包。如图8所示,应用程序包可以包括相机应用,图库,第一应用,导航,扫码应用、后台扫码程序等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。
应用程序框架层包括一些预先定义的函数。如图8所示,应用程序框架层可以包括窗口管理器,媒体记录器,相机服务模块等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小，判断是否有状态栏，锁定屏幕，截取屏幕等。
相机服务模块用于向物理相机模块或虚拟相机模块发送请求消息,以请求物理相机模块或虚拟相机模块调用摄像头采集图像,并返回图像。
硬件抽象层可以包括多个功能模块。例如,物理相机模块、虚拟相机模块等。
物理相机模块用于调用电子设备100的摄像头采集图像，并将采集的图像返回给相机服务模块。此外，物理相机模块还用于将图像采集参数存储到虚拟相机模块中。
虚拟相机模块用于调用虚拟相机采集图像,并将采集的图像返回给相机服务模块。
需要说明的是，对于上述方法实施例，为了简单描述，将其表述为一系列的动作组合，但本领域技术人员应当知悉，本发明并不受所描述的动作顺序的限制。其次，本领域技术人员也应当知悉，说明书中所述的实施例均属优选实施例，所涉及的动作并不一定是本发明所必须的。
本申请的实施方式可以任意进行组合,以实现不同的技术效果。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线）或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如，DVD）、或者半导体介质（例如固态硬盘Solid State Disk）等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。
总之,以上所述仅为本发明技术方案的实施例,并非用于限定本发明的保护范围。凡根据本发明的揭露,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (19)

  1. 一种摄像头切换方法,其特征在于,应用于第一电子设备,所述方法包括:
    所述第一电子设备显示第一界面,所述第一界面包括预览区域和切换控件,所述预览区域显示第一图像,所述第一图像为第一摄像头采集的图像,所述第一摄像头为所述第一电子设备的摄像头;所述第一图像是所述第一摄像头根据图像采集参数中的参数值采集的;
    在第一时刻,响应于第一输入操作,所述第一电子设备将采集图像的摄像头切换为第二摄像头,所述第二摄像头为第二电子设备的摄像头;
    在第二时刻，所述第一电子设备在所述预览区域上显示第二图像，所述第二图像为所述第二摄像头采集的图像；在通过所述第二摄像头采集图像的过程中，缓存到缓冲区中的数据帧携带有图像采集参数；
    在第三时刻,响应于针对所述切换控件的第二输入操作;
    在第四时刻,所述第一电子设备在所述预览区域上显示第三图像,所述第三图像为第三摄像头采集的图像,所述第三摄像头为所述第二电子设备的摄像头。
  2. 如权利要求1所述的方法,其特征在于,所述图像采集参数包括以下至少一项:
    采集图像的最大帧率、曝光补偿数值、编码方式。
  3. 如权利要求1所述的方法,其特征在于,所述第二电子设备是与所述第一电子设备建立多屏协同连接的电子设备。
  4. 如权利要求1-3任一项所述的方法,其特征在于,所述第一电子设备包括第一应用、相机服务模块、物理相机模块、虚拟相机模块,所述第一电子设备显示第一界面之前,还包括:
    检测到第三输入操作,所述第一应用启动;
    所述相机服务模块向所述物理相机模块发送第一请求消息;
    在检测到所述第一请求消息中存在图像采集参数的情况下,所述物理相机模块将所述图像采集参数存入所述虚拟相机模块中;
    所述物理相机模块根据所述第一请求消息调用所述第一摄像头采集第一图像;
    所述物理相机模块向所述第一应用返回第一数据帧,所述第一数据帧包括所述第一图像和图像采集参数。
  5. 如权利要求4所述的方法,其特征在于,所述第一电子设备将采集图像的摄像头切换为第二摄像头,具体包括:
    所述第一应用向所述相机服务模块发送第一切换指令;
    所述相机服务模块向所述物理相机模块发送第一指示信息,所述第一指示信息用于指示所述物理相机模块将所述相机服务模块发送的请求消息转发给所述虚拟相机模块;
    所述相机服务模块向所述物理相机模块发送第二请求消息;
    所述物理相机模块将所述第二请求消息发送给所述虚拟相机模块;
    所述虚拟相机模块根据所述第二请求消息调用所述第二摄像头采集第二图像;
    所述虚拟相机模块向所述第一应用返回第二数据帧,所述第二数据帧包括所述第二图像和图像采集参数;
    所述虚拟相机模块将所述第二数据帧缓存在所述缓冲区中。
  6. 如权利要求5所述的方法,其特征在于,缓存在所述缓冲区中的第二数据帧都携带图像采集参数。
  7. 如权利要求5所述的方法,其特征在于,所述虚拟相机模块根据所述第二请求消息调用所述第二摄像头采集第二图像之前,还包括:
    所述虚拟相机模块检测所述第二请求消息中是否存在图像采集参数;
    若存在,所述虚拟相机模块将存储在所述虚拟相机模块中的图像采集参数替换为所述第二请求消息中的图像采集参数。
  8. 如权利要求5所述的方法,其特征在于,所述响应于针对所述切换控件的第二输入操作之后,还包括:
    所述相机服务模块判断所述缓冲区中的数据帧是否都携带图像采集参数;
    若判断为是,所述相机服务模块向所述物理相机模块发送第三请求消息;
    所述物理相机模块向所述虚拟相机模块发送所述第三请求消息;
    所述虚拟相机模块基于所述第三请求消息调用所述第三摄像头采集第三图像;
    所述虚拟相机模块向所述相机服务模块返回第三数据帧，所述第三数据帧包括所述第三图像和图像采集参数。
  9. 如权利要求8所述的方法,其特征在于,所述虚拟相机模块基于所述第三请求消息调用所述第三摄像头采集第三图像之前,还包括:
    所述虚拟相机模块检测所述第三请求消息中是否存在图像采集参数;
    若存在,所述虚拟相机模块将存储在所述虚拟相机模块中的图像采集参数替换为所述第三请求消息中的图像采集参数。
  10. 一种电子设备,其特征在于,包括:存储器、处理器和触控屏;其中:
    所述触控屏用于显示内容;
    所述存储器,用于存储计算机程序,所述计算机程序包括程序指令;
    所述处理器用于调用所述程序指令,使得所述电子设备执行如权利要求1-9任一项所述的方法。
  11. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时,实现如权利要求1-9任意一项所述的方法。
  12. 一种摄像头切换方法,其特征在于,应用于第一电子设备,所述第一电子设备包括第一应用、相机服务模块、物理相机模块、虚拟相机模块,所述方法包括:
    检测到第三输入操作,所述第一应用启动;
    所述相机服务模块向所述物理相机模块发送第一请求消息;
    在检测到所述第一请求消息中存在图像采集参数的情况下,所述物理相机模块将所述图像采集参数存入所述虚拟相机模块中;
    所述物理相机模块根据所述第一请求消息调用所述第一摄像头采集第一图像;
    所述物理相机模块向所述第一应用返回第一数据帧,所述第一数据帧包括所述第一图像和图像采集参数;
    显示第一界面,所述第一界面包括预览区域和切换控件,所述预览区域显示第一图像,所述第一图像为第一摄像头采集的图像,所述第一摄像头为所述第一电子设备的摄像头;所述第一图像是所述第一摄像头根据图像采集参数中的参数值采集的;
    在第一时刻,响应于第一输入操作,将采集图像的摄像头切换为第二摄像头,所述第二摄像头为第二电子设备的摄像头;
    在第二时刻，在所述预览区域上显示第二图像，所述第二图像为所述第二摄像头采集的图像；在通过所述第二摄像头采集图像的过程中，缓存到缓冲区中的数据帧携带有图像采集参数；
    在第三时刻,响应于针对所述切换控件的第二输入操作;
    在第四时刻,在所述预览区域上显示第三图像,所述第三图像为第三摄像头采集的图像,所述第三摄像头为所述第二电子设备的摄像头。
  13. 如权利要求12所述的方法,其特征在于,所述图像采集参数包括以下至少一项:
    采集图像的最大帧率、曝光补偿数值、编码方式。
  14. 如权利要求12所述的方法,其特征在于,所述第二电子设备是与所述第一电子设备建立多屏协同连接的电子设备。
  15. 如权利要求12-14任一项所述的方法,其特征在于,所述将采集图像的摄像头切换为第二摄像头,具体包括:
    所述第一应用向所述相机服务模块发送第一切换指令;
    所述相机服务模块向所述物理相机模块发送第一指示信息,所述第一指示信息用于指示所述物理相机模块将所述相机服务模块发送的请求消息转发给所述虚拟相机模块;
    所述相机服务模块向所述物理相机模块发送第二请求消息;
    所述物理相机模块将所述第二请求消息发送给所述虚拟相机模块;
    所述虚拟相机模块根据所述第二请求消息调用所述第二摄像头采集第二图像;
    所述虚拟相机模块向所述第一应用返回第二数据帧,所述第二数据帧包括所述第二图像和图像采集参数;
    所述虚拟相机模块将所述第二数据帧缓存在所述缓冲区中。
  16. 如权利要求15所述的方法,其特征在于,缓存在所述缓冲区中的第二数据帧都携带图像采集参数。
  17. 如权利要求15所述的方法,其特征在于,所述虚拟相机模块根据所述第二请求消息调用所述第二摄像头采集第二图像之前,还包括:
    所述虚拟相机模块检测所述第二请求消息中是否存在图像采集参数;
    若存在,所述虚拟相机模块将存储在所述虚拟相机模块中的图像采集参数替换为所述第二请求消息中的图像采集参数。
  18. 如权利要求15所述的方法,其特征在于,所述响应于针对所述切换控件的第二输入操作之后,还包括:
    所述相机服务模块判断所述缓冲区中的数据帧是否都携带图像采集参数;
    若判断为是,所述相机服务模块向所述物理相机模块发送第三请求消息;
    所述物理相机模块向所述虚拟相机模块发送所述第三请求消息;
    所述虚拟相机模块基于所述第三请求消息调用所述第三摄像头采集第三图像;
    所述虚拟相机模块向所述相机服务模块返回第三数据帧，所述第三数据帧包括所述第三图像和图像采集参数。
  19. 如权利要求18所述的方法,其特征在于,所述虚拟相机模块基于所述第三请求消息调用所述第三摄像头采集第三图像之前,还包括:
    所述虚拟相机模块检测所述第三请求消息中是否存在图像采集参数;
    若存在,所述虚拟相机模块将存储在所述虚拟相机模块中的图像采集参数替换为所述第三请求消息中的图像采集参数。
PCT/CN2023/117484 2022-10-27 2023-09-07 一种摄像头切换方法及相关电子设备 WO2024087900A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211322379.9 2022-10-27
CN202211322379.9A CN115379126B (zh) 2022-10-27 2022-10-27 一种摄像头切换方法及相关电子设备

Publications (1)

Publication Number Publication Date
WO2024087900A1 true WO2024087900A1 (zh) 2024-05-02

Family

ID=84073674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/117484 WO2024087900A1 (zh) 2022-10-27 2023-09-07 一种摄像头切换方法及相关电子设备

Country Status (2)

Country Link
CN (2) CN115379126B (zh)
WO (1) WO2024087900A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379126B (zh) * 2022-10-27 2023-03-31 荣耀终端有限公司 一种摄像头切换方法及相关电子设备
CN116347015B (zh) * 2023-05-26 2023-10-20 深圳市拔超科技股份有限公司 一种基于多个usb摄像头平滑切换的系统、方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887620A (zh) * 2021-01-28 2021-06-01 维沃移动通信有限公司 视频拍摄方法、装置及电子设备
WO2021220892A1 (ja) * 2020-04-27 2021-11-04 富士フイルム株式会社 画像処理装置、画像処理方法、及びプログラム
CN114466131A (zh) * 2020-11-10 2022-05-10 荣耀终端有限公司 一种跨设备的拍摄方法及相关设备
CN114500822A (zh) * 2020-11-11 2022-05-13 华为技术有限公司 控制相机的方法与电子设备
CN114866681A (zh) * 2021-02-04 2022-08-05 华为技术有限公司 跨设备的协同拍摄方法、相关装置及系统
CN115379126A (zh) * 2022-10-27 2022-11-22 荣耀终端有限公司 一种摄像头切换方法及相关电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10168882B2 (en) * 2013-06-09 2019-01-01 Apple Inc. Device, method, and graphical user interface for switching between camera interfaces
CN113422903B (zh) * 2021-06-16 2023-01-03 荣耀终端有限公司 拍摄模式切换方法、设备、存储介质
CN113426117B (zh) * 2021-06-23 2024-03-01 网易(杭州)网络有限公司 虚拟相机拍摄参数获取方法、装置、电子设备和存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUNTHA RAMESH; HARIHARAN BALAJI; RANGAN P. VENKAT: "CMAC: Collaborative multi agent computing for multi perspective multi screen immersive e-learning system", 2016 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATIONS AND INFORMATICS (ICACCI), IEEE, 21 September 2016 (2016-09-21), pages 1212 - 1218, XP032989952, DOI: 10.1109/ICACCI.2016.7732210 *

Also Published As

Publication number Publication date
CN115379126A (zh) 2022-11-22
CN115379126B (zh) 2023-03-31
CN117956269A (zh) 2024-04-30
