WO2022143077A1 - A shooting method, system, and electronic device


Info

Publication number
WO2022143077A1
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
image processing
image data
mobile phone
processing tasks
Prior art date
Application number
PCT/CN2021/136767
Other languages
English (en)
French (fr)
Inventor
冷烨 (Leng Ye)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21913818.7A (published as EP4246957A4)
Publication of WO2022143077A1


Classifications

    • H04N23/661: Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/662: Transmitting camera control signals through networks by using master/slave camera arrangements for affecting the control of camera image capture, e.g. placing the camera in a desirable condition to capture a desired image
    • H04N1/00167: Connection or combination of a still picture apparatus with another apparatus: processing or editing
    • H04N1/00172: Digital image input directly from a still digital camera or from a storage medium mounted in a still digital camera
    • H04N1/32545: Distributing a job or task among a plurality of input devices or a plurality of output devices
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N5/772: Interface circuits between a recording apparatus and a television camera, the recording apparatus and the television camera being placed in the same enclosure
    • H04N21/41407: Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/4223: Client input peripherals: cameras
    • H04N2201/0084: Digital still camera

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a photographing method, system, and electronic device.
  • electronic devices such as mobile phones are equipped with cameras, and the electronic devices can realize photographing, video recording and other shooting functions through the cameras.
  • the electronic device may turn on its own camera to collect image data, and the image data collected by the camera may be referred to as raw image data.
  • the electronic device can also use a corresponding image processing algorithm to perform image processing on the original image data, and output the target image data after image processing.
  • in some scenarios, the user may need to switch the shooting function of electronic device 1 to be implemented in electronic device 2, i.e., a distributed shooting scenario.
  • the mobile phone can interact with the TV and use the camera of the TV to shoot.
  • the shooting capabilities of different electronic devices may be different.
  • the image processing algorithm supported by the electronic device 1 is different from the image processing algorithm supported by the electronic device 2.
  • how to switch the shooting function of one electronic device to another electronic device to achieve a better shooting effect has become an urgent problem to be solved.
  • the present application provides a shooting method, system and electronic device, which can switch the shooting function of one electronic device to another electronic device in a distributed shooting scene, so as to achieve better shooting effect and improve user experience.
  • the present application provides a shooting method, comprising: when a first device uses a camera of a second device to shoot, the first device may determine a first shooting strategy during the shooting process, where the first shooting strategy includes X image processing tasks to be executed by the first device and Y image processing tasks to be executed by the second device, X and Y both being integers greater than or equal to 0; further, the first device can send a first shooting instruction to the second device according to the first shooting strategy, where the first shooting instruction is used to trigger the second device to perform the above Y image processing tasks on the collected raw image data in response to the first shooting instruction, obtaining first image data; after the first device receives the first image data sent by the second device, the first device may perform the above X image processing tasks on the first image data according to the first shooting strategy to obtain second image data; further, the first device may display the second image data on the display interface.
  • that is to say, in a distributed shooting scenario, the first device can determine the shooting strategy during the shooting process in real time in combination with the shooting capability of the second device (i.e., the slave device).
  • Each image processing task is assigned to the corresponding electronic device for execution.
  • in this way, the first device can switch its shooting function to other devices according to the image processing capabilities of those devices, and the first device can cooperate with other devices more efficiently and flexibly to realize the distributed shooting function, so as to achieve a better shooting effect in the distributed shooting scene and, at the same time, provide a better shooting experience for the user.
  • in a possible implementation, the first device can assign the image processing tasks that need to be performed to the first device (i.e., the master device) and the second device (i.e., the slave device), making full use of the device capabilities of both so that the master device and the slave device cooperate to complete the image processing tasks during the shooting process.
  • in a possible implementation, the method further includes: the first device may acquire a shooting capability parameter of the second device, where the shooting capability parameter is used to indicate the image processing capability of the second device; for example, the shooting capability parameter may include the image processing algorithms supported by the second device. In this case, the first device assigning the above N image processing tasks that need to be performed (i.e., the X tasks and Y tasks above) to the first device and the second device specifically includes: the first device assigns the N image processing tasks to the first device and the second device according to the shooting capability parameter, so that the second device performs the related image processing tasks it can support while the first device also performs the related image processing tasks it can support, improving the image processing efficiency of the distributed shooting process.
  • in a possible implementation, for a first image processing task among the N image processing tasks, the first device assigning the first image processing task to the first device or the second device according to the shooting capability parameter specifically includes: if the shooting capability parameter of the second device indicates that the second device is capable of performing the first image processing task, the first device can assign the first image processing task to the second device; that is, image processing tasks that the second device (the slave device) is capable of executing can be assigned to the slave device for execution, thereby reducing the processing load of the first device (the master device).
  • alternatively, if the second device can execute the first image processing task at a faster processing speed, the first device may assign the first image processing task to the second device. That is to say, the first device can allocate an image processing task to the device with the faster processing speed, so as to improve the processing efficiency of the subsequent image processing process.
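  • As an illustration of these allocation rules, the following is a minimal sketch in Python (hypothetical names such as ShootingCapability and assign_tasks; the patent does not specify any implementation) of how a master device might split the N image processing tasks between itself and a slave device:

```python
from dataclasses import dataclass, field

@dataclass
class ShootingCapability:
    # image processing tasks the device's algorithms support, e.g. {"beauty"}
    supported_tasks: set = field(default_factory=set)
    # estimated per-task processing speed (higher is faster); illustrative only
    speed: dict = field(default_factory=dict)

def assign_tasks(tasks, master_cap, slave_cap):
    """Split N task names into X master tasks and Y slave tasks."""
    master_tasks, slave_tasks = [], []
    for task in tasks:
        on_master = task in master_cap.supported_tasks
        on_slave = task in slave_cap.supported_tasks
        if on_slave and not on_master:
            slave_tasks.append(task)            # only the slave supports it
        elif on_master and not on_slave:
            master_tasks.append(task)           # only the master supports it
        elif on_master and on_slave:
            # both support the task: assign it to the faster device
            if slave_cap.speed.get(task, 0) > master_cap.speed.get(task, 0):
                slave_tasks.append(task)
            else:
                master_tasks.append(task)
    return master_tasks, slave_tasks            # the "first shooting strategy"

master = ShootingCapability({"focus", "beauty"}, {"beauty": 1.0})
slave = ShootingCapability({"beauty", "face_detection"}, {"beauty": 2.0})
print(assign_tasks(["focus", "beauty", "face_detection"], master, slave))
# (['focus'], ['beauty', 'face_detection'])
```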
  • in a possible implementation, the method further includes: the first device may create a corresponding hardware abstraction module (for example, a DMSDP HAL), where the hardware abstraction module has the image processing capability of the second device; the method further includes: the first device receives the first image data sent by the second device through the hardware abstraction module. That is to say, the first device and the second device can send and receive data through the DMSDP HAL.
  • in a possible implementation, the HAL of the first device may also include a Camera HAL, that is, the HAL corresponding to the camera of the first device; in this case, the first device performing the X image processing tasks on the first image data specifically includes: the first device performs X1 image processing tasks on the first image data through the above hardware abstraction module to obtain third image data; the hardware abstraction module can then send the third image data to the Camera HAL, and the Camera HAL performs the remaining X2 image processing tasks on the third image data to obtain the second image data.
  • that is to say, since the hardware abstraction module of the first device has the processing capability of the second device, the hardware abstraction module can be used to execute the image processing tasks supported by the second device, while the traditional Camera HAL in the first device can be used to execute the image processing tasks supported by the first device.
  • in a possible implementation, the first device performing the X image processing tasks on the first image data according to the first shooting strategy to obtain the second image data specifically includes: the hardware abstraction module of the first device can directly send the received first image data to the Camera HAL, and the Camera HAL performs the X image processing tasks on the first image data to obtain the second image data.
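  • To make this routing concrete, here is a simplified sketch (hypothetical class names; not Huawei's actual DMSDP HAL or Camera HAL interfaces) in which the hardware abstraction module runs X1 tasks with slave-side algorithms and then forwards the intermediate result to the Camera HAL for the remaining X2 tasks:

```python
class CameraHal:
    """Traditional Camera HAL holding the master device's own algorithms."""
    def __init__(self, master_algorithms):
        self.algorithms = master_algorithms  # task name -> image function

    def process(self, image, tasks):
        for task in tasks:                   # remaining X2 tasks
            image = self.algorithms[task](image)
        return image                         # "second image data"

class DmsdpHal:
    """Hardware abstraction module mirroring the slave device's capability."""
    def __init__(self, slave_algorithms, camera_hal):
        self.algorithms = slave_algorithms
        self.camera_hal = camera_hal

    def process(self, first_image, tasks_x1, tasks_x2):
        image = first_image
        for task in tasks_x1:                # X1 tasks using slave algorithms
            image = self.algorithms[task](image)
        # "third image data" is forwarded to the Camera HAL for the X2 tasks
        return self.camera_hal.process(image, tasks_x2)

camera = CameraHal({"focus": lambda img: img + "+focus"})
dmsdp = DmsdpHal({"beauty": lambda img: img + "+beauty"}, camera)
print(dmsdp.process("first_image", ["beauty"], ["focus"]))
# first_image+beauty+focus
```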
  • in a possible implementation, the above method further includes: after the first device detects a preset operation input by the user, the first device updates the current shooting mode or shooting option in response to the preset operation. For example, the user can switch the shooting mode from the photographing mode to the video recording mode. For another example, the user can select or cancel shooting options such as beauty and filters. Subsequently, based on the updated shooting mode or shooting option, the first device may continue to determine a corresponding shooting strategy according to the above method so as to cooperate with the second device to shoot.
  • the above shooting modes may include preview (also known as preview mode), photographing (also known as photographing mode), video recording (also known as video recording mode), portrait (also known as portrait mode) or slow motion (also known as slow motion mode) and other shooting modes.
  • the above shooting options may include shooting options such as beauty, filters, focus adjustment or exposure adjustment.
  • in a possible implementation, the method further includes: in response to a first operation input by the user, the first device may display a candidate device list that includes the second device; in response to the user's operation of selecting the second device in the candidate device list, the first device may instruct the second device to turn on a camera and start collecting raw image data.
  • in a possible implementation, the above method further includes: the first device receives a video recording operation input by the user; in response to the video recording operation, the first device may determine a second shooting strategy and a third shooting strategy, where the second shooting strategy includes K image processing tasks that need to be performed on preview stream data (these K image processing tasks are performed by the first device), the third shooting strategy includes W image processing tasks that need to be performed on video stream data (these W image processing tasks are performed by the second device), and K and W are both integers greater than or equal to 0; further, the first device can send a second shooting instruction to the second device according to the second shooting strategy and the third shooting strategy, where the second shooting instruction is used to trigger the second device to directly send the collected first preview stream data to the first device and, at the same time, to perform the above W image processing tasks on the collected first video stream data (the first preview stream data and the first video stream data are both raw image data collected by the second device) and send the resulting second video stream data to the first device; in this way, after the first device receives the first preview stream data collected by the second device, the first device can perform the K image processing tasks on the first preview stream data to obtain second preview stream data and then display the second preview stream data on the display interface; after the first device receives the second video stream data sent by the second device, the first device saves the second video stream data as a video.
  • in a possible implementation, the method further includes: the first device may acquire a shooting capability parameter of the second device, where the shooting capability parameter is used to indicate the image processing capability of the second device; in this case, the first device determining the second shooting strategy and the third shooting strategy includes: the first device determines the second shooting strategy and the third shooting strategy according to the above shooting capability parameter.
  • in a second aspect, the present application provides a shooting method, comprising: a first device receives a video recording operation input by a user; in response to the video recording operation, the first device may determine a second shooting strategy and a third shooting strategy, where the second shooting strategy includes K image processing tasks that need to be performed on preview stream data (performed by the first device), the third shooting strategy includes W image processing tasks that need to be performed on video stream data (performed by the second device), and K and W are both integers greater than or equal to 0; further, the first device can send a second shooting instruction to the second device according to the second shooting strategy and the third shooting strategy, where the second shooting instruction is used to trigger the second device to directly send the collected first preview stream data to the first device while performing the above W image processing tasks on the collected first video stream data (the first preview stream data and the first video stream data are both raw image data collected by the second device) and sending the resulting second video stream data to the first device; in this way, after the first device receives the first preview stream data collected by the second device, the first device can perform the K image processing tasks on the first preview stream data to obtain second preview stream data, which the first device then displays on the display interface; after the first device receives the second video stream data sent by the second device, the first device saves the second video stream data as a video.
  • in this way, in the video recording mode, the first device can assign the image processing tasks for the preview stream data to itself, so that the second device can directly send the generated preview stream data to the first device for image processing, meeting the high real-time requirement of the preview stream data.
  • at the same time, the second device can perform image processing on the video stream data, which has a lower real-time requirement, so that while the first device performs image processing on the preview stream data, the second device simultaneously performs image processing on the video stream data. In other words, the master device and the slave device can perform image processing on different image data streams in parallel, improving the resource utilization of each device in the distributed shooting scenario and, accordingly, the processing efficiency of the entire shooting process.
  • in addition, the slave device can send the different image data streams to the master device in a time-shared, segmented manner, reducing the transmission pressure on network bandwidth and further improving the processing efficiency of the entire shooting process; a sketch of the master-device side of this parallel design follows.
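  • A minimal sketch of the master-device side (the queue-based transport and threading are assumptions; the patent does not prescribe them): preview frames received from the slave are processed and displayed immediately, while video stream data already processed by the slave is saved independently:

```python
import queue
import threading

preview_in = queue.Queue()   # first preview stream data (raw) from the slave
video_in = queue.Queue()     # second video stream data (already processed)

def preview_loop(k_tasks, display):
    while True:
        frame = preview_in.get()
        for task in k_tasks:         # the K preview tasks run on the master
            frame = task(frame)
        display(frame)               # low-latency preview path

def record_loop(save):
    while True:
        chunk = video_in.get()       # the slave already ran the W video tasks
        save(chunk)

def start(k_tasks, display, save):
    # the two loops run in parallel, mirroring the master/slave parallelism
    threading.Thread(target=preview_loop, args=(k_tasks, display),
                     daemon=True).start()
    threading.Thread(target=record_loop, args=(save,), daemon=True).start()
```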
  • in a possible implementation, the method further includes: the first device acquires a shooting capability parameter of the second device, where the shooting capability parameter is used to indicate the image processing capability of the second device; in this case, the first device determining the second shooting strategy and the third shooting strategy specifically includes: the first device can determine the second shooting strategy and the third shooting strategy according to the shooting capability parameter, so as to use the device capability of the slave device to cooperate with the master device in completing the image processing tasks during the shooting process.
  • in a possible implementation, the above method further includes: the first device receives an operation of the user updating the current shooting mode or shooting option; in response to the updated shooting mode or shooting option, the first device determines the N image processing tasks to be performed; further, the first device can assign the N image processing tasks to the first device and the second device to obtain a first shooting strategy, where the first shooting strategy includes X image processing tasks that the first device needs to perform and Y image processing tasks that the second device needs to perform; further, the first device can send a first shooting instruction to the second device according to the first shooting strategy, triggering the second device to perform the above Y image processing tasks on the collected raw image data in response to the first shooting instruction to obtain first image data; subsequently, after the first device receives the first image data sent by the second device, the first device may perform the above X image processing tasks on the first image data according to the first shooting strategy to obtain second image data; and the first device may display the second image data on the display interface.
  • in a possible implementation, the method further includes: the first device may acquire a shooting capability parameter of the second device, where the shooting capability parameter is used to indicate the image processing capability of the second device; for example, the shooting capability parameter may include the image processing algorithms supported by the second device. In this case, the first device assigning the above N image processing tasks to the first device and the second device specifically includes: the first device assigns the N image processing tasks to the first device and the second device according to the shooting capability parameter, so that the second device performs the related image processing tasks it can support while the first device also performs the related image processing tasks it can support, improving the image processing efficiency of the distributed shooting process.
  • in a possible implementation, for a first image processing task among the N image processing tasks, the first device assigning the first image processing task to the first device or the second device according to the shooting capability parameter specifically includes: if the shooting capability parameter of the second device indicates that the second device is capable of performing the first image processing task, the first device can assign the first image processing task to the second device; that is, image processing tasks that the second device (the slave device) is capable of executing can be assigned to the slave device for execution, thereby reducing the processing load of the first device (the master device).
  • alternatively, if the second device can execute the first image processing task at a faster processing speed, the first device may assign the first image processing task to the second device. That is to say, the first device can allocate an image processing task to the device with the faster processing speed, so as to improve the processing efficiency of the subsequent image processing process.
  • in a possible implementation, the HAL of the first device may likewise be provided with the above hardware abstraction module, the Camera HAL, and other modules.
  • in a third aspect, the present application provides a shooting method, including: the second device can turn on a camera in response to an instruction of the first device to start collecting raw image data; when the second device receives a first shooting instruction sent by the first device, where the first shooting instruction indicates that the second device needs to perform Y image processing tasks (Y being an integer greater than or equal to 0), the second device can perform the Y image processing tasks on the collected raw image data to obtain first image data; furthermore, the second device can send the first image data to the first device, and the first device continues to perform the related image processing tasks on the first image data, thereby realizing cooperative shooting between the master device and the slave device.
  • in a possible implementation, before the second device turns on the camera in response to the instruction of the first device and starts to collect raw image data, the method further includes: the second device establishes a network connection with the first device; the second device sends its shooting capability parameter to the first device, where the shooting capability parameter is used to indicate the image processing capability of the second device.
  • in this way, the master device (i.e., the first device) can learn the image processing capability of the slave device (i.e., the second device) and assign image processing tasks between the two devices accordingly, improving the processing efficiency of the entire shooting process.
  • in a possible implementation, the method further includes: after the second device turns on the camera in response to the instruction of the first device and starts to collect raw image data, the second device can receive a second shooting instruction sent by the first device, where the second shooting instruction is used to indicate that the current shooting mode is video recording and includes the W image processing tasks that need to be performed on the video data stream, W being an integer greater than or equal to 0; further, in response to the second shooting instruction, the second device can copy the collected raw image data into two channels to obtain first video stream data and first preview stream data; subsequently, the second device can send the first preview stream data to the first device, which performs the related image processing tasks on the first preview stream data; at the same time, the second device can perform the W image processing tasks on the first video stream data to obtain second video stream data and send the second video stream data to the first device.
  • in a fourth aspect, the present application provides a shooting method, including: the second device can turn on a camera in response to an instruction of the first device to start collecting raw image data; further, the second device can receive a second shooting instruction sent by the first device, where the second shooting instruction is used to indicate that the current shooting mode is video recording and includes the W image processing tasks that need to be performed on the video data stream, W being an integer greater than or equal to 0; further, in response to the second shooting instruction, the second device can copy the collected raw image data into two channels to obtain first video stream data and first preview stream data; subsequently, the second device can send the first preview stream data to the first device, which performs the related image processing tasks on the first preview stream data; at the same time, the second device can perform the W image processing tasks on the first video stream data to obtain second video stream data and send the second video stream data to the first device.
  • the second device and the first device can simultaneously perform image processing on the acquired image data stream in a parallel manner, thereby improving the processing efficiency of the entire shooting process.
  • in addition, the slave device can send the different image data streams to the master device in a time-shared, segmented manner, reducing the transmission pressure on network bandwidth and further improving the processing efficiency of the entire shooting process; a sketch of the slave-device side follows.
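  • On the slave side, the duplication into two channels described above might look like this sketch (camera_frames and send_to_master are hypothetical stand-ins for the camera feed and the network transport):

```python
import copy

def slave_video_mode(camera_frames, w_tasks, send_to_master):
    """Copy each raw frame into a preview channel and a video channel."""
    for raw in camera_frames:
        preview = raw                       # first preview stream data: sent as-is
        video = copy.copy(raw)              # first video stream data: processed here
        send_to_master("preview", preview)  # master runs its K preview tasks on this
        for task in w_tasks:                # the W video tasks assigned to the slave
            video = task(video)
        send_to_master("video", video)      # second video stream data
```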
  • in a possible implementation, the method further includes: the second device can receive a first shooting instruction sent by the first device, where the first shooting instruction is used to indicate that the current shooting mode is a mode such as photographing or previewing and can carry the Y image processing tasks that the second device needs to perform; further, in response to the first shooting instruction, the second device can perform the above Y image processing tasks on the collected raw image data to obtain first image data and send the first image data to the first device.
  • similarly, before cooperating with the first device to shoot, the second device can also establish a network connection with the first device and send its own shooting capability parameter to the first device; this application does not impose any restriction on this.
  • in a fifth aspect, the present application provides an electronic device (such as the above first device), comprising: a display screen, a communication module, one or more processors, one or more memories, and one or more computer programs; the processor is coupled to the communication module, the display screen, and the memory, and the above one or more computer programs are stored in the memory; when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs the shooting method in any of the above aspects.
  • in a sixth aspect, the present application provides an electronic device (such as the above second device), comprising: a communication module, one or more processors, one or more memories, and one or more computer programs; the processor is coupled to both the communication module and the memory, and the above one or more computer programs are stored in the memory; when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs the shooting method in any of the above aspects.
  • in a seventh aspect, the present application provides a distributed shooting system, including the first device in the fifth aspect and the second device in the sixth aspect. When the first device uses the camera of the second device to shoot: the second device can turn on the camera in response to the instruction of the first device and start collecting raw image data; the first device can determine the first shooting strategy during the shooting process, where the first shooting strategy includes X image processing tasks that the first device needs to perform and Y image processing tasks that the second device needs to perform (X and Y both being integers greater than or equal to 0); the first device sends a first shooting instruction to the second device according to the first shooting strategy, so that the second device performs the Y image processing tasks on the collected raw image data in response to the first shooting instruction to obtain first image data; further, the second device can send the first image data to the first device; after the first device receives the first image data, it can perform the X image processing tasks on the first image data according to the first shooting strategy to obtain second image data; finally, the first device may display the second image data on the display interface.
  • in an eighth aspect, the present application provides a computer-readable storage medium comprising computer instructions that, when executed on the above first device or second device, cause the first device or the second device to execute the shooting method described in any of the above aspects.
  • in a ninth aspect, the present application provides a computer program product that, when run on the above first device or second device, causes the first device or the second device to execute the shooting method described in any of the above aspects.
  • FIG. 1 is a schematic diagram of the architecture of a distributed shooting system provided by an embodiment of the present application.
  • FIG. 2 is schematic diagram 1 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 3 is schematic diagram 2 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 4 is an interaction schematic diagram of a shooting method provided by an embodiment of the present application.
  • FIG. 5 is schematic structural diagram 1 of an electronic device provided by an embodiment of the present application.
  • FIG. 6 is schematic diagram 3 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 7 is schematic diagram 4 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 8 is schematic diagram 5 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 9 is schematic diagram 6 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 10 is schematic diagram 7 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 11 is schematic diagram 8 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 12 is schematic diagram 9 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 13 is schematic diagram 10 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 14 is schematic diagram 11 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 15 is schematic diagram 12 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 16 is schematic diagram 13 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 17 is schematic diagram 14 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 18 is schematic diagram 15 of an application scenario of a shooting method provided by an embodiment of the present application.
  • FIG. 19 is schematic structural diagram 2 of an electronic device provided by an embodiment of the present application.
  • FIG. 20 is schematic structural diagram 3 of an electronic device provided by an embodiment of the present application.
  • the distributed photographing system 200 may include a master device (master) 101 and N slave devices (slaves) 102 , where N is an integer greater than 0. Communication between the master device 101 and any one of the slave devices 102 may be performed in a wired manner or wirelessly.
  • a wired connection may be established between the master device 101 and the slave device 102 using a universal serial bus (USB).
  • alternatively, the master device 101 and the slave device 102 can establish wireless connections using the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth, wireless fidelity (Wi-Fi), NFC, voice over Internet protocol (VoIP), or communication protocols that support a network slicing architecture.
  • one or more cameras can be set in both the master device 101 and the slave device 102 .
  • in the distributed shooting scenario, the master device 101 can use the camera on the slave device 102 to collect image data, distributing the photographing, video recording, and other shooting functions of the master device 101 to one or more slave devices 102 and thereby realizing a cross-device distributed shooting function.
  • for example, the master device 101 may specifically be a mobile phone, a tablet computer, a TV (also referred to as a smart TV, a smart screen, or a large-screen device), a laptop, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device (such as a smart watch, smart glasses, a smart helmet, or a smart bracelet), an in-vehicle device, a virtual reality device, or another electronic device with a shooting function; this embodiment of the present application does not impose any limitation on this.
  • a camera application for realizing a shooting function can be installed in the mobile phone.
  • the mobile phone can open its own camera to start collecting image data, and display the image data in the preview frame 202 of the preview interface 201 in real time.
  • after obtaining the raw image data, the mobile phone can perform one or more image processing tasks on the raw image data.
  • the mobile phone may use a preset exposure algorithm to perform image processing task 1 on the original image data, so as to adjust the exposure of the original image data.
  • the mobile phone may use a preset face detection algorithm to perform image processing task 2 on the original image data, so as to recognize the face image in the original image data.
  • the mobile phone may use a preset beauty algorithm to perform image processing task 3 on the original image data, so as to perform beauty processing on the face image in the original image data.
  • the mobile phone can also perform image processing tasks such as image stabilization, focus, soft focus, blur, filters, AR special effects, smile detection, skin color adjustment or scene recognition on the raw image data.
  • image processing tasks may be performed by the mobile phone in response to user settings, or may be automatically performed by the mobile phone by default, which is not limited in this embodiment of the present application.
  • the mobile phone can perform image processing tasks corresponding to the beauty function in response to the user's operation of turning on the beauty function. For another example, after the mobile phone obtains the above-mentioned original image data, although the user does not input an operation to adjust the focus, the mobile phone can still automatically perform the image processing task of the focusing function.
  • in some shooting scenarios, the mobile phone needs to perform multiple image processing tasks. For example, if the mobile phone needs to perform image processing task 1 and image processing task 2, the mobile phone can first perform image processing task 1 on the acquired original image data to obtain processed first image data. The mobile phone may then perform image processing task 2 on the first image data to obtain processed second image data. Subsequently, the mobile phone may display the second image data in the preview frame 202 of the preview interface 201, presenting the captured image after image processing to the user.
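  • Expressed as code, this chained processing is simply function composition over the frame; a sketch with placeholder task functions (the real algorithms are device-specific and not given by the patent):

```python
def adjust_exposure(image):
    # image processing task 1: placeholder for the real exposure algorithm
    return image

def detect_faces(image):
    # image processing task 2: placeholder for the real detection algorithm
    return image

def process(raw_image, tasks):
    """Run each image processing task on the output of the previous one."""
    data = raw_image
    for task in tasks:
        data = task(data)   # first image data, then second image data, ...
    return data             # the frame shown in preview frame 202

preview_frame = process("raw", [adjust_exposure, detect_faces])
```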
  • in this embodiment of the present application, the mobile phone may set a switch button 203 in the preview interface 201 of the camera application.
  • the user can click the switch button 203 to query one or more electronic devices that can currently collect image data.
  • in response, the mobile phone can display, in a dialog 301, one or more currently searched candidate devices that can collect image data. For example, a server can record whether each electronic device has a shooting function. The mobile phone can then query the server for electronic devices with a shooting function that are logged into the same account as the mobile phone (for example, a Huawei account) and display the queried electronic devices as candidate devices in the dialog 301.
  • alternatively, the phone can search for electronic devices located in the same Wi-Fi network as the phone. Further, the mobile phone can send a query request to each electronic device in the same Wi-Fi network, and each electronic device that receives the query request can send the mobile phone a response message indicating whether it has a shooting function. The mobile phone can then determine, according to the received response messages, the electronic devices with a shooting function in the current Wi-Fi network and display them as candidate devices in the dialog 301.
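  • This query/response exchange could be as simple as the sketch below (the message format is an assumption; the patent does not specify the discovery protocol):

```python
import json

def build_query():
    return json.dumps({"type": "capability_query"})

def handle_query(has_shooting_function):
    """Runs on each Wi-Fi device that receives the phone's query request."""
    return json.dumps({"type": "capability_response",
                       "shooting_function": has_shooting_function})

def collect_candidates(responses):
    """Phone side: keep only devices that report a shooting function."""
    return [device for device, reply in responses.items()
            if json.loads(reply)["shooting_function"]]

replies = {"TV 1": handle_query(True),
           "Watch 2": handle_query(True),
           "Speaker": handle_query(False)}
print(collect_candidates(replies))   # ['TV 1', 'Watch 2']
```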
  • an application for managing smart home devices in the home can be installed in the mobile phone.
  • the user can add one or more smart home devices in the smart home application, so that the smart home device added by the user is associated with the mobile phone.
  • for example, a two-dimensional code containing device information such as a device identifier can be set on a smart home device; after the user scans the two-dimensional code with the smart home application of the mobile phone, the corresponding smart home device can be added to the smart home application, thereby establishing an association between the smart home device and the mobile phone.
  • when one or more smart home devices added in the smart home application go online, for example, when the mobile phone detects the Wi-Fi signal sent by an added smart home device, the mobile phone can display the smart home device as a candidate device in the dialog 301, prompting the user to choose the corresponding smart home device to complete the shooting function of the mobile phone.
  • each candidate device in the dialog 301 can also perform one or more image processing tasks on the raw image data collected by the camera according to the above-mentioned method.
  • the difference is that the image processing capabilities of different electronic devices may be different, that is, the image processing tasks supported by different electronic devices may be different.
  • the TV 1 may support the image processing task of the zoom function, but not the image processing task of the beauty function.
  • if the candidate devices searched by the mobile phone include TV 1, watch 2, and mobile phone 3, the user can choose from TV 1, watch 2, and mobile phone 3 the device to which the shooting function of the mobile phone will be switched this time.
  • the mobile phone can use TV 1 as a slave device for switching the shooting function of the mobile phone this time, and establish a network connection with TV 1.
  • the mobile phone can establish a Wi-Fi connection with the TV 1 through a router, or the mobile phone can directly establish a Wi-Fi P2P connection with the TV 1, or the mobile phone can directly establish a mobile network connection with the TV 1.
  • the mobile network includes but is not limited to mobile networks supporting 2G, 3G, 4G, 5G, and subsequent standard protocols.
  • the mobile phone can obtain the shooting capability parameter of the TV 1 from the TV 1 , and the shooting capability parameter is used to indicate one or more image processing tasks supported by the TV 1 .
  • for example, the shooting capability parameters of TV 1 may specifically include the algorithms corresponding to the image processing tasks supported by TV 1; for instance, image processing task A corresponds to algorithm 1 of the beauty function, and image processing task B corresponds to algorithm 2 of the face detection function.
  • of course, the shooting capability parameter of TV 1 may further include the number of cameras in TV 1 and the field of view (FOV), aperture size, resolution, and the like of each camera. In this way, the mobile phone can determine the specific image processing tasks supported by TV 1 according to the shooting capability parameter of TV 1.
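  • The shooting capability parameter exchanged here could be serialized as a simple structure like the sketch below (the field names and JSON encoding are illustrative assumptions, not the patent's wire format):

```python
import json

tv1_capability = {
    "device": "TV 1",
    "cameras": [
        {"id": 0, "fov_degrees": 90, "aperture": "f/2.0",
         "resolution": [3840, 2160]},
    ],
    # supported image processing tasks and the algorithm backing each one
    "image_processing": {
        "beauty": "algorithm_1",
        "face_detection": "algorithm_2",
    },
}

payload = json.dumps(tv1_capability)  # sent to the phone over the network

def supports(capability_json, task_name):
    """Phone-side check used when building the shooting strategy."""
    return task_name in json.loads(capability_json)["image_processing"]

print(supports(payload, "beauty"))    # True
print(supports(payload, "zoom"))      # False
```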
  • subsequently, the mobile phone can instruct TV 1 to turn on its camera to collect original image data; the mobile phone can then determine the shooting strategy for the shooting process and, according to that shooting strategy, cooperate with TV 1 to perform the relevant image processing on the collected original image data.
  • the mobile phone may determine that the image processing task 1 corresponding to the focusing function needs to be performed on the raw image data. If the shooting capability parameter of the TV 1 indicates that the TV 1 does not support the image processing task 1, the mobile phone can set in the shooting strategy 1 that the mobile phone performs the image processing task 1 on the original image data. Furthermore, the mobile phone can send a shooting instruction 1 to the TV 1, and the shooting instruction 1 is used to instruct the TV 1 to send the collected original image data to the mobile phone for image processing. Furthermore, the TV 1 can respond to the shooting instruction 1 and send the original image data collected by the camera in real time to the mobile phone.
  • after receiving the original image data sent by TV 1, the mobile phone can perform image processing task 1 on the original image data according to shooting strategy 1 to obtain processed image data 1. Subsequently, the mobile phone may display image data 1 in the preview frame 202 of the preview interface 201, thereby presenting the focused preview image to the user.
  • the above example describes the case where the camera application needs to perform image processing task 1 on the original image data in the preview mode. In general, the shooting modes may include preview (also known as preview mode), photographing (also known as photographing mode), video recording (also known as video recording mode), portrait (also known as portrait mode), slow motion (also known as slow motion mode), and other shooting modes.
  • the image processing tasks that the mobile phone needs to perform may include multiple image processing tasks such as exposure enhancement, face beautification, and soft focus.
  • the user can also manually set the shooting options in the shooting process.
  • the above-mentioned shooting options may include beauty, filters, or focus adjustment.
  • for example, if the user turns on the beauty function, the mobile phone can determine that it also needs to perform image processing task 2 corresponding to the beauty function on the original image data.
  • the mobile phone can determine a new shooting strategy, such as the shooting strategy 2, in combination with the shooting capability parameters of the TV 1 .
  • the image processing task 1 and the image processing task 2 can be allocated to the mobile phone and/or the TV 1 for execution according to the shooting capability of the TV 1 .
  • the mobile phone can send a new shooting instruction to the TV 1 according to the shooting strategy 2, so that the mobile phone can cooperate with the TV 1 to perform relevant image processing on the collected raw image data according to their shooting capabilities.
  • the master device 101 (such as the above-mentioned mobile phone) can combine the shooting capability of the slave device 102 (such as the above-mentioned TV 1) to determine the shooting strategy in the shooting process in real time.
  • One or more image processing tasks are assigned to corresponding electronic devices for execution.
  • in this way, the master device 101 can switch its own shooting function to the slave device 102 according to the image processing capability of the slave device 102, and the master device 101 can cooperate with the slave device 102 more efficiently and flexibly to realize the distributed shooting function, achieving a better shooting effect in the distributed shooting scene while providing a better shooting experience for the user.
  • FIG. 5 shows a schematic structural diagram of the mobile phone.
  • the mobile phone may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, Speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the mobile phone.
  • the mobile phone may include more or less components than shown, or some components may be combined, or some components may be separated, or different component arrangements.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices, or can be integrated in one or more processors.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
  • the wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the mobile phone.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 can provide wireless communication solutions applied on the mobile phone, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the antenna 1 of the mobile phone is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the mobile phone can communicate with the network and other devices through wireless communication technology.
  • the mobile phone realizes the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the handset may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the mobile phone can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the mobile phone may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the mobile phone can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • the speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the mobile phone can listen to music through the speaker 170A, or listen to hands-free calls.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be received by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mike" or a "mic", is used to convert sound signals into electrical signals.
  • the user can make a sound with the mouth close to the microphone 170C, so as to input the sound signal into the microphone 170C.
  • the mobile phone may be provided with at least one microphone 170C.
  • the mobile phone may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function.
  • three, four or more microphones 170C can be set on the mobile phone to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the sensor module 180 may include a pressure sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the mobile phone may also include a charging management module, a power management module, a battery, a button, an indicator, and one or more SIM card interfaces, which are not limited in this embodiment of the present application.
  • the software system of the above mobile phone may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of a mobile phone.
  • in other operating systems (such as the Hongmeng system, the Linux system, etc.), as long as the functions implemented by each functional module are similar to the embodiments of the present application, they fall within the scope of the claims of the present application and their equivalent technologies.
  • FIG. 6 is a block diagram of a software structure of a mobile phone according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, which are, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer (HAL), and the kernel layer.
  • the application layer can include a series of application packages.
  • applications such as call, memo, browser, contact, gallery, calendar, map, bluetooth, music, video, and short message can be installed in the application layer.
  • an application with a shooting function, for example, a camera application, may be installed in the application layer.
  • when another application needs to use the shooting function, the camera application may also be called to realize the shooting function.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • a camera service (CameraService) is set in the application framework layer.
  • the camera application can start CameraService by calling the preset API.
  • CameraService can interact with the Camera HAL in the hardware abstraction layer (HAL) during operation.
  • the Camera HAL is responsible for interacting with the hardware devices (such as cameras) that realize the shooting function in the mobile phone.
  • on the one hand, the Camera HAL hides the implementation details of the relevant hardware devices (such as specific image processing algorithms); on the other hand, it can provide the Android system with interfaces for calling the relevant hardware devices.
  • the camera application may send relevant control instructions (such as preview, zoom in, photographing or video recording instructions) issued by the user to the CameraService.
  • CameraService can send received control commands to Camera HAL, so that Camera HAL can call the camera driver in the kernel layer according to the received control commands, and drive hardware devices such as cameras to collect raw image data in response to the control commands.
  • the camera can transmit each frame of raw image data collected to the Camera HAL through the camera driver at a certain frame rate.
  • for the transfer process of the control instruction inside the operating system, reference may be made to the specific transfer process of the control flow in FIG. 6.
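  • As an illustrative sketch of this call path (not part of the claimed design): on a standard Android device, an application-layer client reaches CameraService through the public Camera2 API. CameraManager and openCamera below are real framework APIs; the callback bodies are placeholder assumptions.

    import android.annotation.SuppressLint;
    import android.content.Context;
    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CameraManager;
    import android.os.Handler;

    // Minimal sketch: opening a camera from the application layer. CameraManager
    // forwards the request to CameraService, which in turn drives the Camera HAL
    // and the kernel camera driver, as described above.
    class CameraOpener {
        @SuppressLint("MissingPermission") // assumes the CAMERA permission is granted
        static void openFirstCamera(Context context, Handler handler)
                throws CameraAccessException {
            CameraManager manager =
                    (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
            String cameraId = manager.getCameraIdList()[0]; // first available camera
            manager.openCamera(cameraId, new CameraDevice.StateCallback() {
                @Override public void onOpened(CameraDevice camera) { /* start preview */ }
                @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
                @Override public void onError(CameraDevice camera, int error) { camera.close(); }
            }, handler);
        }
    }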
  • after the CameraService receives the above control instruction, it can determine the shooting strategy at this time according to the received control instruction, and the specific image processing tasks that need to be performed on the original image data are set in the shooting strategy. For example, in preview mode, CameraService can set the default image processing task 1 in the shooting strategy to implement the face detection function. For another example, if the user enables the beauty function in the preview mode, the CameraService may also set image processing task 2 in the shooting strategy to implement the beauty function. Furthermore, CameraService can send the determined shooting strategy to the Camera HAL, as pictured in the sketch below.
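  • The shooting strategy just described can be pictured as a small data structure listing the image processing tasks derived from the latest control instruction. The Java sketch below is purely illustrative; the class name and task names are assumptions, not the actual CameraService implementation.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical model of a shooting strategy: the control instruction that
    // triggered it, plus the image processing tasks to run on the raw image data.
    class ShootingStrategy {
        final String triggeringInstruction;                // e.g. "PREVIEW"
        final List<String> imageProcessingTasks = new ArrayList<>();

        ShootingStrategy(String instruction) { this.triggeringInstruction = instruction; }

        // Preview mode enables face detection by default; enabling the beauty
        // function appends a second task, mirroring the example in the text.
        static ShootingStrategy forPreview(boolean beautyEnabled) {
            ShootingStrategy s = new ShootingStrategy("PREVIEW");
            s.imageProcessingTasks.add("FACE_DETECTION");  // image processing task 1
            if (beautyEnabled) {
                s.imageProcessingTasks.add("BEAUTY");      // image processing task 2
            }
            return s;
        }
    }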
  • the image processing process of the mobile phone is generally completed in the Camera HAL, that is, the Camera HAL has the image processing capability of the mobile phone.
  • the Camera HAL receives the original image data collected by the camera, it can perform corresponding image processing tasks on the above-mentioned original image data according to the shooting strategy issued by CameraService, and obtain the target image data after image processing.
  • Camera HAL can perform image processing task 1 above using a preset face detection algorithm.
  • the Camera HAL may use a preset beauty algorithm to perform the above image processing task 2.
  • the Camera HAL can report the obtained target image data to the camera application through CameraService, and the camera application can display the target image data in the display interface, or the camera application can save the target image data in the form of photos or videos inside the mobile phone.
  • FIG. 6 For the transfer process of image data (for example, original image data and target image data) within the operating system, reference may be made to the specific transfer process of the data flow in FIG. 6 .
  • the application framework layer may also include a window manager, a content provider, a view system, a resource manager, a notification manager, and the like, which are not limited in this embodiment of the present application.
  • the above-mentioned window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • the above content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system described above can be used to build the display interface of an application.
  • Each display interface can consist of one or more controls.
  • controls may include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
  • the above resource managers provide various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager described above enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll-bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of dialog windows, for example, prompting text information in the status bar, playing a sound, vibrating, or flashing the indicator light.
  • the Android runtime includes core libraries and virtual machines. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (eg, OpenGL ES), and a 2D graphics engine (eg, SGL).
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is located under the HAL and is the layer between hardware and software.
  • the kernel layer at least includes a display driver, a camera driver, an audio driver, a sensor driver, and the like, which are not limited in this embodiment of the present application.
  • a device virtualization (DeviceVirtualization) application for realizing the distributed shooting function may be installed in the application layer of the mobile phone, which may be referred to as a DV application hereinafter.
  • the DV application can run in the mobile phone as a system application.
  • the functions implemented by the DV application can also be resident and run in the mobile phone in the form of system services.
  • the DV application in the mobile phone can establish a network connection with another electronic device serving as a slave device of the mobile phone.
  • the network connection established between the mobile phone and the slave device may specifically refer to a network connection of a service channel (ie, a service connection).
  • for example, before the mobile phone establishes the above-mentioned network connection with the slave device, the mobile phone may have established a connection with the slave device through the Wi-Fi network; the connection at this time refers to the connection of the data channel (ie, a data connection).
  • the mobile phone and the slave device can establish a service connection on the basis of the established data connection.
  • the above-mentioned network connection may be a P2P connection based on TCP (transmission control protocol) or UDP (user datagram protocol), which is not limited in this embodiment of the present application.
  • the DV application can obtain the shooting capability parameter of the slave device based on the network connection, and the shooting capability parameter is used to indicate one or more image processing tasks supported by the slave device.
  • the shooting capability parameter may include a specific image processing algorithm supported by the slave device, so as to reflect the specific shooting capability of the slave device.
  • the DV application can call the preset interface of the HAL, and input the acquired shooting capability parameters into the preset interface, thereby creating a hardware abstraction module corresponding to the slave device in the HAL.
  • the hardware abstraction module created by the DV application according to the shooting capability parameters of the slave device is called the DMSDP (distributed mobile sensing development platform) HAL, which may also be called a virtual Camera HAL.
  • the DMSDP HAL does not correspond to the actual hardware device of the mobile phone, but corresponds to the slave device currently connected to the mobile phone.
  • the mobile phone can be used as the master device to send and receive data with the slave device through the DMSDP HAL, and the slave device can be used as a virtual device of the mobile phone to cooperate with the slave device to complete various services in the distributed shooting scene.
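  • One possible shape of the shooting capability parameter, and of the preset interface the DV application calls to create the hardware abstraction module, is sketched below; DmsdpHalFactory and all field names are hypothetical and chosen only for illustration.

    import java.util.List;

    // Hypothetical shooting capability parameter of a slave device, covering the
    // hardware side (cameras, resolution) and the software side (algorithms).
    class ShootingCapability {
        String deviceId;                   // identifies the slave device
        int cameraCount;                   // hardware shooting capability
        int[] maxResolution;               // e.g. {3840, 2160}
        List<String> supportedAlgorithms;  // e.g. "FACE_RECOGNITION", "AUTO_FOCUS"
    }

    // Hypothetical preset interface: the DV application feeds the capability
    // parameters in, and a DMSDP HAL proxying the slave device comes out.
    interface DmsdpHalFactory {
        Object createDmsdpHal(ShootingCapability capability);
    }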
  • the DV application of the mobile phone can also obtain the audio capability parameters of the slave device (such as the audio playback delay, the audio sampling rate, or the number of sound channels, etc.) and the display capability parameters (such as the screen resolution, the codec algorithm of display data, etc.).
  • the slave device can also send the relevant capability parameters to the DV application of the mobile phone.
  • the DV application can input the capability parameters related to the slave device into the preset interface, so as to create a hardware abstraction module corresponding to the slave device in the HAL, such as a DMSDP HAL.
  • the DMSDP HAL not only has the image processing capability of the slave device, but also has the audio and display capabilities of the slave device, so that the slave device can act as a virtual device of the mobile phone and cooperate with the mobile phone to complete various services in distributed scenarios.
  • in addition to creating a corresponding DMSDP HAL for the slave device of the mobile phone in the HAL, the DV application can also send the shooting capability parameter of the slave device to the CameraService for saving, that is, register the shooting capability of the current slave device in the CameraService.
  • the camera application can send control commands such as preview, zoom, photo or video to CameraService.
  • after the CameraService receives the control command, it can determine one or more image processing tasks to be executed subsequently according to the control command, for example, image processing task 1 corresponding to the beauty function, image processing task 2 corresponding to the anti-shake function, or image processing task 3 corresponding to the face detection function, etc.
  • CameraService can set a corresponding shooting strategy in combination with the current shooting capability parameters of the slave device, and in the shooting strategy, allocate the above-mentioned one or more image processing tasks to the mobile phone and the slave device for execution.
  • CameraService can set the slave device to perform image processing task 1 in the shooting policy, and set the mobile phone to perform image processing task 2 and image processing task 3.
  • CameraService can also set all image processing tasks in the shooting strategy to be independently completed by the mobile phone or the slave device, which is not limited in this embodiment of the present application.
  • the CameraService of the mobile phone can issue the above-mentioned shooting strategy to the DMSDP HAL, and send a shooting instruction to the slave device through the DMSDP HAL, instructing the slave device to execute the image processing task assigned to the slave device in the above-mentioned shooting strategy .
  • after the slave device uses its camera to collect the original image data, it can respond to the shooting instruction by performing the corresponding image processing task on the original image data, and send the processed image data (for example, the first image data) to the DMSDP HAL of the mobile phone. Since the DMSDP HAL of the mobile phone has obtained the above shooting strategy, the DMSDP HAL can determine, according to the shooting strategy, whether the image processing process for the first image data is completed by the DMSDP HAL or by the Camera HAL.
  • the DMSDP HAL can perform the corresponding image processing task on the first image data according to the above shooting strategy, and finally obtain the second image data.
  • alternatively, the DMSDP HAL can send the received first image data to the Camera HAL, and the Camera HAL performs the corresponding image processing task on the first image data according to the above-mentioned shooting strategy, and finally obtains the second image data.
  • the DMSDP HAL can first send the received first image data to the CameraService, and then the CameraService sends the first image data to the Camera HAL for image processing (the data flow is not shown in Figure 8) . That is to say, the DMSDP HAL in the mobile phone can directly exchange image data with the Camera HAL, or the DMSDP HAL in the mobile phone can exchange image data through CameraService.
  • the DMSDP HAL (or Camera HAL) of the mobile phone can send the second image data after image processing to the CameraService, and then the CameraService sends the second image data to the camera application, so that the camera application can display the second image data on the display interface or store it in the mobile phone.
  • the DV application of the mobile phone can obtain the shooting capability parameters of the slave device. Furthermore, the DV application can create a corresponding DMSDP HAL in the HAL according to the shooting capability parameter, so that the DMSDP HAL in the mobile phone has the image processing capability of the slave device. In addition, the DV application can register the shooting capability of the slave device in the CameraService, so that the CameraService can determine the shooting strategy in the shooting process in real time according to the shooting capability of the slave device, and allocate the image processing tasks to be performed to the mobile phone and the slave device through the shooting strategy. implement.
  • the mobile phone and the slave device can perform corresponding image processing on the original image data according to their own shooting capabilities, so that the mobile phone and the slave device can cooperate more efficiently and flexibly to realize the distributed shooting function. It achieves better shooting effects in distributed shooting scenarios, and provides users with a better shooting experience.
  • the DMSDP HAL created by the DV application of the mobile phone in the HAL can be dynamically updated.
  • when the slave device of the mobile phone changes (for example, when the slave device is switched from a TV to a watch), or when the shooting capability of the slave device changes (for example, when the image processing algorithm is updated after a version upgrade of the slave device), the slave device can dynamically send the latest shooting capability parameters to the mobile phone.
  • the DV application of the mobile phone can update the above-mentioned DMSDP HAL according to the latest shooting capability parameters, so that the DMSDP HAL matches the shooting capability of the slave device.
  • the DV application of the mobile phone can register the latest shooting capability parameters in the CameraService, so that the CameraService can update the current shooting strategy according to the latest shooting capability parameters during the shooting process.
  • the above is exemplified by taking one slave device of the mobile phone in the distributed shooting scenario as an example.
  • the mobile phone may also distribute its own shooting function to multiple slave devices for implementation.
  • the DV application of the mobile phone can establish a network connection with the slave device 1 and the slave device 2 respectively, and obtain the shooting capability parameters of each slave device. Further, the DV application can create a DMSDP HAL corresponding to each slave device in the HAL according to the shooting capability parameters of each slave device. For example, the DV application may create a DMSDP HAL1 in the HAL according to the shooting capability parameter 1 of the slave device 1, and may create a DMSDP HAL2 in the HAL according to the shooting capability parameter 2 of the slave device 2.
  • the CameraService can customize the corresponding shooting strategy according to the shooting capability parameters of each slave device during the shooting process.
  • each slave device can perform related image processing tasks according to the corresponding shooting strategy based on its own shooting capabilities, and finally send the processed image data to the mobile phone through the corresponding DMSDP HAL, thereby realizing the distributed shooting function across devices.
  • a function button for realizing a distributed shooting function may be set in the mobile phone. For example, as shown in (a) of FIG. 9 , if it is detected that the user opens the camera application of the mobile phone, the mobile phone can open its own camera to start shooting, and the mobile phone can display the preview interface 801 of the camera application.
  • the preview interface 801 is provided with a function button 802 of a distributed shooting function. If the user wishes to use the cameras of other electronic devices to take pictures, the user can click the function button 802 .
  • the mobile phone may also set the function button 802 of the distributed shooting function in the control center, the pull-down menu, the negative one-screen menu, or other applications (eg, a video calling application, the camera application) of the mobile phone, which is not limited in this embodiment of the present application.
  • the mobile phone can display the control center 803 in response to the user's operation of opening the control center, and the control center 803 is provided with the above-mentioned function buttons 802 . If the user wishes to use the cameras of other electronic devices to take pictures, the user can click the function button 802 .
  • the DV application can trigger the mobile phone to search for one or more nearby candidate devices with a shooting function. And, as shown in FIG. 10 , the mobile phone can display one or more candidate devices found in the dialog box 901 .
  • the mobile phone may query the server for an electronic device that is logged in to the same account as the mobile phone and has a photographing function, and displays the electronic device obtained as a candidate device in the dialog box 901 .
  • the mobile phone may automatically trigger the distributed shooting function without setting the above-mentioned function button 802 .
  • when the mobile phone is running a video call application, if a video call request from a contact is detected, the mobile phone can automatically search for one or more nearby candidate devices with a shooting function.
  • alternatively, when it is detected that the user has opened the camera application in the mobile phone, the mobile phone can also automatically search for one or more nearby candidate devices with a shooting function. And, as shown in FIG. 10, the mobile phone can display the one or more candidate devices found in the dialog box 901.
  • the user can select the slave device in the dialog 901 to cooperate with the mobile phone to realize the distributed shooting function this time.
  • if the mobile phone detects that the user selects the TV 902 in the dialog box 901, it means that the user wishes to use the camera of the TV 902 to shoot.
  • the DV application of the mobile phone can use the TV 902 as a slave device of the mobile phone to establish a network connection with the mobile phone.
  • the DV application acquires the shooting capability parameters of the TV 902 from the TV 902 based on the network connection.
  • the architecture of the operating system in the television 902 is similar to the architecture of the operating system in the mobile phone.
  • the application layer of the TV 902 may install a proxy application, and the proxy application is used to send and receive data with other devices (eg, mobile phones).
  • the proxy application may also run in the TV 902 in the form of an SDK (Software Development Kit, software development kit) or a system service.
  • the CameraService is provided in the application framework layer of the TV 902 .
  • the HAL of the TV 902 is provided with a Camera HAL, and the Camera HAL of the TV 902 corresponds to a hardware device (eg, a camera) in the TV 902 for capturing image data.
  • the DV application of the mobile phone can send an acquisition request for the shooting capability parameter to the proxy application of the TV 902 .
  • the proxy application of the TV 902 can obtain the shooting capability parameters of the TV 902 from the CameraService of the TV 902, and send the shooting capability parameters of the TV 902 to the DV application of the mobile phone.
  • the shooting capability parameter of the television 902 is used to indicate one or more image processing tasks supported by the television 902 .
  • the shooting capability parameters of the TV 902 may include one or more image processing algorithms such as a face recognition algorithm and an auto-focus algorithm supported by the TV 902 .
  • the shooting capability parameter of the TV 902 may be related to the hardware shooting capability of the TV 902 (such as the number of cameras, the resolution, the model of the image processor, etc.), or may be related to the software shooting capability of the TV 902 (for example, the image processing algorithm), those skilled in the art may set the above-mentioned shooting capability parameters according to actual experience or actual application scenarios, which are not limited in the embodiments of the present application.
  • after the DV application of the mobile phone obtains the shooting capability parameters of the TV 902, still as shown in FIG. 11, the DV application of the mobile phone can create a DMSDP HAL corresponding to the TV 902 in the HAL according to the shooting capability parameters, so that the DMSDP HAL has the image processing capability of the slave device, and the mobile phone can subsequently send and receive data with the TV 902 through the DMSDP HAL.
  • the DV application of the mobile phone can also register the shooting capability parameters of the TV 902 in the CameraService of the mobile phone.
  • the CameraService of the mobile phone can continuously receive the control commands issued by the camera application during the running process of the camera application. For example, when the camera application is opened, the camera application can send a preview command to CameraService to trigger the camera application to enter preview mode. For another example, when the camera application detects that the user selects the button of the recording mode, the camera application may send a recording instruction to the CameraService to trigger the camera application to enter the recording mode. In different shooting modes, the camera application can also send control commands corresponding to different shooting functions to CameraService.
  • the camera application can send a beauty command to the CameraService.
  • the camera application may send a control instruction for adding the filter 1 to the CameraService.
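  • In the standard Android Camera2 API, the preview and recording modes mentioned above correspond to different capture request templates; in the sketch below, createCaptureRequest, TEMPLATE_PREVIEW, and TEMPLATE_RECORD are real framework names, while the surrounding wiring is elided.

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.view.Surface;

    // Sketch: the template chosen by the application is part of the control
    // information that eventually reaches CameraService.
    class RequestFactory {
        static CaptureRequest buildRequest(CameraDevice camera, Surface target,
                                           boolean recording) throws CameraAccessException {
            CaptureRequest.Builder builder = camera.createCaptureRequest(
                    recording ? CameraDevice.TEMPLATE_RECORD
                              : CameraDevice.TEMPLATE_PREVIEW);
            builder.addTarget(target); // preview surface or recorder surface
            return builder.build();
        }
    }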
  • the CameraService of the mobile phone can determine the current shooting strategy according to the control instruction issued by the latest camera application and combined with the shooting capability parameter of the current slave device (ie, the TV 902 ).
  • the CameraService of the mobile phone may first determine N (N is an integer greater than 0) image processing tasks to be executed subsequently according to the control command issued by the latest camera application. For example, if the most recent control command issued by the camera application is a preview command, the CameraService of the mobile phone can determine the image processing task A that needs to perform autofocus in the preview mode. For another example, if the mobile phone has the face recognition function enabled by default in the preview mode, the CameraService of the mobile phone can also determine the image processing task B that needs to perform face recognition. For another example, if it is detected in the preview mode that the user has enabled the beauty function, the CameraService of the mobile phone may also determine the image processing task C that needs to perform the beauty function.
  • the CameraService of the mobile phone can allocate the above N image processing tasks to the mobile phone and the TV 902 in combination with the shooting capabilities of the mobile phone and the current slave device (ie, the TV 902 ).
  • the CameraService of the mobile phone can determine the image processing tasks specifically supported by the TV 902 through the shooting capability parameter of the TV 902, and the CameraService of the mobile phone can also obtain the image processing tasks specifically supported by the mobile phone itself.
  • the CameraService of the mobile phone can assign the image processing task A to the device supporting the image processing task A. For example, if the mobile phone supports the image processing task A, but the TV 902 does not support the image processing task A, the CameraService of the mobile phone can assign the image processing task A to the mobile phone for execution.
  • the CameraService of the mobile phone can assign the image processing task A to any device in the mobile phone or the TV 902 .
  • the CameraService of the mobile phone can calculate the time T1 for the mobile phone to perform the image processing task A, and the time T2 for the TV 902 to perform the image processing task A. If T1≤T2, the CameraService of the mobile phone can assign the image processing task A to the mobile phone; correspondingly, if T1>T2, the CameraService of the mobile phone can assign the image processing task A to the TV 902. That is to say, the CameraService of the mobile phone can assign the image processing task A to the device with the faster processing speed, so as to improve the processing efficiency of the subsequent image processing process.
  • the CameraService of the mobile phone can also determine the device that subsequently performs the image processing task A according to the current loads of the mobile phone and the TV 902. For example, if the CameraService of the mobile phone has already assigned the above image processing task B and image processing task C to the mobile phone for execution, then continuing to assign the image processing task A to the mobile phone would make the load of the mobile phone much greater than the load of the TV 902, leaving the mobile phone overloaded and the TV 902 underused. At this time, the CameraService of the mobile phone can assign the image processing task A to the TV 902 for execution. That is to say, the CameraService of the mobile phone can assign the image processing task A to a relatively idle device, so as to improve the processing efficiency of the subsequent image processing process. The two rules are condensed in the sketch below.
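  • The two rules above (prefer the faster device; when the estimated times tie, prefer the relatively idle device) condense into a few lines. This is an illustrative sketch with assumed time and load inputs, not the actual allocation logic of CameraService.

    // Hypothetical allocation heuristic for a single image processing task.
    // The time and load estimates are assumed to come from the registered
    // shooting capability parameters and runtime statistics.
    class TaskAllocator {
        enum Device { PHONE, TV }

        static Device assignTask(long phoneTimeMs, long tvTimeMs,
                                 double phoneLoad, double tvLoad) {
            if (phoneTimeMs != tvTimeMs) {
                return phoneTimeMs < tvTimeMs ? Device.PHONE : Device.TV; // compare T1 and T2
            }
            return phoneLoad <= tvLoad ? Device.PHONE : Device.TV;        // balance the load
        }
    }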
  • in some embodiments, the CameraService of the mobile phone may assign image processing tasks supported by the TV 902 to the mobile phone for execution. For example, when the load of the TV 902 is relatively high and the load of the mobile phone is relatively low, the CameraService of the mobile phone can assign an image processing task supported by the TV 902 to the mobile phone for execution. Since the DMSDP HAL of the mobile phone has the image processing capability of the TV 902, even if the CameraService of the mobile phone assigns an image processing task supported by the TV 902 to the mobile phone, the mobile phone can complete the image processing task in the DMSDP HAL.
  • generally, the mobile phone as the master device has the capability to execute the above N image processing tasks, while the slave device of the mobile phone does not necessarily have the capability to perform these N image processing tasks. That is to say, for a certain image processing task among the above-mentioned N image processing tasks, there is generally no situation in which neither the mobile phone nor the TV 902 supports it.
  • the CameraService of the mobile phone can also assign the image processing task B and the image processing task C to the mobile phone or the TV 902 for execution according to the above method.
  • those skilled in the art can set corresponding algorithms (for example, a bin packing algorithm, a first-fit algorithm, or a best-fit algorithm) to allocate the N image processing tasks that need to be performed during shooting to the mobile phone and the TV 902, so that the mobile phone and the TV 902 can use their own shooting capabilities to the greatest extent to perform image processing on the subsequently collected original image data, thereby improving the efficiency and speed of image processing when the mobile phone and the TV 902 shoot cooperatively; a first-fit sketch follows below.
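  • A minimal first-fit sketch under assumed inputs: each of the N image processing tasks is placed on the first device that supports it and still has spare capacity; all names are illustrative.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // First-fit allocation: "supported" maps each device to the tasks it can
    // run; "load" counts the tasks already placed on it; "capacity" is an
    // assumed per-device ceiling.
    class FirstFitAllocator {
        static Map<String, String> allocate(List<String> tasks,
                                            Map<String, Set<String>> supported,
                                            Map<String, Integer> load,
                                            int capacity) {
            Map<String, String> plan = new HashMap<>(); // task -> device
            for (String task : tasks) {
                for (String device : supported.keySet()) {
                    if (supported.get(device).contains(task)
                            && load.getOrDefault(device, 0) < capacity) {
                        plan.put(task, device);
                        load.merge(device, 1, Integer::sum);
                        break; // first fit: stop at the first suitable device
                    }
                }
            }
            return plan;
        }
    }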
  • the CameraService of the mobile phone may allocate all image processing tasks (ie, the above N image processing tasks) to one device in the mobile phone or the TV 902 for completion. For example, when the TV 902 does not have the ability to execute the above-mentioned image processing task A, image processing task B and image processing task C, the CameraService of the mobile phone can assign all the above-mentioned image processing task A, image processing task B and image processing task C to For the mobile phone, at this time, the TV 902 only needs to use the camera to collect the original image data, and does not need to perform image processing on the collected original image data.
  • the allocation result can be output to the DMSDP HAL of the mobile phone as the current shooting strategy 1.
  • the DMSDP HAL of the mobile phone can send a shooting instruction 1 to the TV 902 according to the shooting strategy 1, instructing the TV 902 to execute the image processing task assigned to the TV 902 according to the shooting strategy 1.
  • the DMSDP HAL of the mobile phone can carry the identification of the image processing task A that needs to be performed by the TV 902 in the shooting strategy 1 in the shooting instruction 1, and send the shooting instruction 1 to the proxy application of the TV 902.
  • after the proxy application of the TV 902 receives the above-mentioned shooting instruction 1, on the one hand, it can call its own camera to start collecting the original image data.
  • the proxy application of the TV 902 can send the shooting instruction 1 to the CameraService of the TV 902, and the CameraService of the TV 902 can determine that the image processing task A needs to be performed on the original image data according to the identification of the image processing task A in the shooting instruction 1.
  • the CameraService of the TV 902 may issue an instruction 1 for executing the image processing task A to the Camera HAL of the TV 902 (the instruction 1 and the shooting instruction 1 may be the same or different).
  • the Camera HAL of the television 902 can perform image processing task A on the original image data to obtain processed first image data. Furthermore, the Camera HAL of the TV 902 can report the first image data to the CameraService of the TV 902, and then the CameraService of the TV 902 uploads the first image data to the proxy application of the TV 902, and finally the proxy application of the TV 902 can send the first image data The data is sent to the DMSDP HAL of the phone.
  • when the shooting instruction 1 does not instruct the TV 902 to perform any image processing task, it means that the image processing tasks that need to be performed at this time are all completed by the mobile phone. In this case, the CameraService of the TV 902 can directly send the original image data reported by the Camera HAL as the first image data to the proxy application of the TV 902; at this time, the first image data is the same as the original image data.
  • the above is described by taking the example in which the proxy application of the TV 902 starts to collect the original image data after receiving the above-mentioned shooting instruction 1. It can be understood that the TV 902 can also automatically open its own camera to start collecting original image data, which is not limited in this embodiment of the present application.
  • the DMSDP HAL of the mobile phone can execute the image processing task assigned to the mobile phone according to the above-mentioned shooting strategy 1.
  • the above shooting strategy 1 requires the mobile phone to perform image processing task B and image processing task C, wherein the image processing task B is an image processing task supported by the TV 902, and the image processing task C is an image processing task supported by the mobile phone.
  • the DMSDP HAL of the mobile phone can perform image processing task B on the received first image data according to shooting strategy 1, and obtain processed second image data. Further, the DMSDP HAL of the mobile phone can send the second image data to the Camera HAL of the mobile phone.
  • the DMSDP HAL of the mobile phone can first send the second image data to the CameraService of the mobile phone, and then the CameraService of the mobile phone sends the second image data to the Camera HAL of the mobile phone (this data is not shown in FIG. 11 ). flow direction).
  • the Camera HAL of the mobile phone can perform the image processing task C on the received second image data according to the shooting strategy 1, and obtain the processed third image data.
  • the Camera HAL of the mobile phone can report the third image data to the CameraService of the mobile phone, and then the CameraService of the mobile phone uploads the third image data to the camera application of the mobile phone.
  • the above-mentioned third image data is the image data obtained by the mobile phone and the TV 902 after co-processing in response to the preview instruction issued by the camera application.
  • the camera application of the mobile phone can display the third image data in the preview frame 802 of the preview interface 801 .
  • the user can view the image captured by the TV 902 in the preview interface 801 of the mobile phone, and the image presented in the preview frame 802 is the image processed by the mobile phone and the TV 902 collaboratively.
  • in some embodiments, if both the image processing task B and the image processing task C that the shooting strategy 1 requires the mobile phone to perform are image processing tasks supported by the TV 902, after the DMSDP HAL of the mobile phone obtains the first image data from the TV 902, it can directly execute the image processing task B and the image processing task C on the received first image data according to the above-mentioned shooting strategy 1, and send the image data after image processing to the CameraService of the mobile phone, without the need for image processing by the Camera HAL of the mobile phone.
  • correspondingly, if both the image processing task B and the image processing task C are image processing tasks supported by the mobile phone, after the DMSDP HAL of the mobile phone obtains the first image data, it can directly send the first image data to the Camera HAL of the mobile phone, and the Camera HAL of the mobile phone performs the image processing task B and the image processing task C on the received first image data and sends the processed image data to the CameraService of the mobile phone.
  • in this way, in the distributed shooting scene, the image processing tasks suitable for the master device (for example, the above-mentioned mobile phone) can be assigned to the master device for completion, and the image processing tasks suitable for the slave device (for example, the above-mentioned TV 902) can be assigned to the slave device for completion, so that the shooting capabilities of the master device and the slave device are fully utilized to perform the corresponding image processing tasks, and the resource utilization of each device in the distributed shooting scenario is improved, thereby improving the processing efficiency of the entire shooting process and achieving better shooting effects.
  • subsequently, in the preview mode, the mobile phone and the TV 902 can continue to perform the corresponding image processing tasks on the original image data collected by the TV 902 according to the above shooting strategy 1, and the mobile phone finally displays the image data after all image processing in the preview frame 802 of the preview interface 801.
  • if a new control command from the user is detected, the mobile phone can determine a new shooting strategy according to the new control command, and trigger the mobile phone and the TV 902 to perform the corresponding image processing tasks on the original image data collected by the TV 902 according to the new shooting strategy.
  • when the camera application in the mobile phone detects that the user clicks the photographing button in the preview interface 801, the camera application may send a corresponding photographing instruction to the CameraService of the mobile phone.
  • the CameraService of the mobile phone can determine M (M is an integer greater than 0) image processing tasks to be executed subsequently in response to the photographing instruction. For example, in addition to performing the image processing task A, the image processing task B, and the image processing task C in the preview mode, the CameraService of the mobile phone determines that the image processing task D for exposure adjustment needs to be performed subsequently.
  • the CameraService of the mobile phone can combine the shooting capabilities of the mobile phone and the current slave device (ie, the TV 902) to assign the image processing task A, the image processing task B, the image processing task C, and the image processing task D to the mobile phone and the TV 902.
  • the allocation result can be output to the DMSDP HAL of the mobile phone as the current shooting strategy 2.
  • the CameraService of the mobile phone allocates image processing task A and image processing task B supported by the TV 902 to the TV 902, and allocates image processing task C and image processing task D supported by the mobile phone to the mobile phone.
  • the DMSDP HAL of the mobile phone can send the shooting instruction 2 to the proxy application of the TV 902 according to the shooting strategy 2, instructing the TV 902 to execute the image processing task A and the image processing task B assigned to the TV 902 in the shooting strategy 2.
  • the proxy application of the TV 902 can send the shooting instruction 2 to the CameraService of the TV 902, and the CameraService of the TV 902 can determine, according to the shooting instruction 2, that the image processing task A and the image processing task B need to be performed on the original image data. Furthermore, the CameraService of the TV 902 can issue the instruction 2 for executing the image processing task A and the image processing task B to the Camera HAL of the TV 902 (the instruction 2 and the shooting instruction 2 may be the same or different).
  • the Camera HAL of the TV 902 can perform the image processing task A and the image processing task B on the raw image data to obtain the processed fourth image data.
  • the Camera HAL of the TV 902 can report the fourth image data to the CameraService of the TV 902, and then the CameraService of the TV 902 uploads the fourth image data to the proxy application of the TV 902. Finally, the proxy application of the TV 902 can send the fourth image data to the DMSDP HAL of the mobile phone.
  • the DMSDP HAL of the mobile phone can send the fourth image data to the Camera HAL of the mobile phone.
  • the Camera HAL of the mobile phone can perform image processing task C and image processing task D on the received fourth image data to obtain processed fifth image data.
  • the Camera HAL of the mobile phone can report the fifth image data to the CameraService of the mobile phone, and then the CameraService of the mobile phone uploads the fifth image data to the camera application of the mobile phone.
  • the camera application of the mobile phone can respond to the user's operation of clicking the photographing button in the preview interface 801 to save the fifth image data in the gallery of the mobile phone in the form of a photo to complete the photographing operation.
  • in this way, the shooting capabilities of the mobile phone (ie, the master device) and the TV 902 (ie, the slave device) are used to perform the corresponding image processing tasks on the collected raw image data, so that in the distributed shooting scene, the resource utilization rate of each device is improved, and the processing efficiency of the entire shooting process is also improved accordingly.
  • the TV 902 can process the raw image data collected in real time into multiple image data streams.
  • the TV 902 can copy the captured raw image data into two channels: one channel of raw image data can be used as the preview stream data to display the preview screen in the camera application during the video recording process, and the other channel of raw image data can be used as the video stream data to make the final video file.
  • the user can switch the shooting mode of the camera application in the mobile phone to the video recording mode.
  • in the recording mode, if it is detected that the user clicks the recording button 1502 in the preview interface 1501, the mobile phone can cooperate with the TV 902 to complete this recording operation.
  • the camera application of the mobile phone can send a corresponding recording instruction to the CameraService of the mobile phone. Similar to the process shown in FIG. 12 and FIG. 14, after the CameraService of the mobile phone receives the video recording instruction, it can combine the shooting capability parameters of the TV 902 to allocate the image processing tasks that need to be performed during video recording to the mobile phone and the TV 902, thereby generating the corresponding shooting strategy 3.
  • the shooting strategy 3 may include a shooting strategy A for the preview stream data and a shooting strategy B for the video stream data.
  • shooting strategy A may include X (X is an integer greater than 0) image processing tasks that need to be executed on preview stream data
  • shooting strategy B may include Y (Y is an integer greater than 0) image processing tasks that need to be executed on the video stream data.
  • the above-mentioned X image processing tasks may be the same as or different from the Y image processing tasks.
  • for example, both the shooting strategy A and the shooting strategy B may include the image processing task 1 of the beauty function.
  • the CameraService of the mobile phone can additionally add the image processing task 2 of AI recognition to the shooting strategy A, so as to present the AI recognition result to the user in real time in the preview screen of the video.
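  • Shooting strategy 3 can be pictured as two task lists keyed by stream type; the sketch below is a hypothetical illustration of that split, with the AI recognition task appearing only on the preview side as described above.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical two-stream shooting strategy for the recording mode:
    // strategy A applies to the preview stream, strategy B to the video stream.
    class RecordingStrategy {
        static Map<String, List<String>> shootingStrategy3() {
            Map<String, List<String>> strategy = new HashMap<>();
            strategy.put("PREVIEW_STREAM",
                    Arrays.asList("BEAUTY", "AI_RECOGNITION")); // strategy A (X tasks)
            strategy.put("VIDEO_STREAM",
                    Arrays.asList("BEAUTY"));                   // strategy B (Y tasks)
            return strategy;
        }
    }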
  • since the mobile phone as the master device generally has the capability to perform each image processing task determined by the CameraService, and the processing speed and processing performance of the mobile phone are relatively strong, the CameraService of the mobile phone can assign the X image processing tasks in the shooting strategy A to the mobile phone for execution.
  • the TV 902 can directly send the generated preview stream data to the mobile phone for image processing during subsequent video recording, so as to ensure the high demand for real-time performance of the preview stream data.
  • the CameraService of the mobile phone can assign the Y image processing tasks in the shooting strategy B to the TV 902 for execution. In this way, while the mobile phone performs image processing on the preview stream data, the TV 902 can simultaneously perform image processing on the video stream data, so that the mobile phone and the TV 902 achieve higher shooting efficiency during cooperative shooting.
  • the CameraService of the mobile phone can also assign the image processing tasks in the shooting strategy B that are not supported by the TV 902 to the mobile phone for execution, which is not limited in this embodiment of the present application.
  • the CameraService of the mobile phone can send the generated shooting strategy 3 (that is, the above-mentioned shooting strategy A and shooting strategy B) to the DMSDP HAL of the mobile phone.
  • the DMSDP HAL of the mobile phone can send a shooting instruction 3 to the proxy application of the TV 902 according to the shooting strategy 3, instructing the TV 902 to switch the shooting mode to the video recording mode.
  • in this way, the TV 902 can perform the corresponding image processing task on the video stream data according to the shooting strategy B in the shooting strategy 3, and the TV 902 can directly send the preview stream data to the mobile phone without performing image processing on the preview stream data.
  • after receiving the shooting instruction 3, the proxy application of the TV 902 can instruct the Camera HAL of the TV 902 to copy the original image data collected by the camera into two channels: one channel is the video stream data 1, and the other channel is the preview stream data 1.
  • the proxy application of the TV 902 can deliver the shooting instruction 3 to the Camera HAL of the TV 902 through CameraService (the flow of the shooting instruction 3 inside the TV 902 is not shown in FIG. 17 ). In this way, after the Camera HAL of the TV 902 obtains the preview stream data 1, the preview stream data 1 can be directly sent to the DMSDP HAL of the mobile phone through the CameraService and the proxy application of the TV 902.
  • when the Camera HAL of the TV 902 obtains the video stream data 1, it can perform the corresponding image processing task on the video stream data 1 according to the shooting strategy B in the above-mentioned shooting strategy 3, and obtain the processed video stream data (that is, the video stream data 2). Furthermore, the Camera HAL of the TV 902 can send the video stream data 2 to the DMSDP HAL of the mobile phone through the CameraService and the proxy application of the TV 902.
  • the DMSDP HAL of the mobile phone can send the preview stream data 1 to the Camera HAL of the mobile phone, and the Camera HAL of the mobile phone executes the image processing task assigned to the mobile phone according to the shooting strategy A in the above-mentioned shooting strategy 3, and obtains the processed preview stream data (ie, the preview stream data 2).
  • the Camera HAL of the mobile phone can report the preview stream data 2 to the camera application of the mobile phone through CameraService, and the camera application displays the preview stream data 2 in the preview interface 1501 shown in FIG. 16 in real time.
  • Because shooting strategy 3 specifies that the mobile phone does not need to perform image processing on the video stream data, the DMSDP HAL of the mobile phone can report the received video stream data 2 to the camera application of the mobile phone through CameraService. After receiving video stream data 2, the camera application can save it in a video format in the gallery of the mobile phone to complete this video recording operation. A master-side sketch of this routing follows.
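The master-side counterpart can be sketched in the same spirit: preview frames pass through the locally assigned task chain before display, while already-processed video frames go straight to storage. The Consumer callbacks below merely stand in for CameraService reporting data to the camera application and are not real framework APIs.

```java
// Hypothetical master-side sketch: the routing entry point receives
// streams from the slave, runs strategy A's tasks on preview frames,
// and forwards fully processed video frames for saving.
import java.util.List;
import java.util.function.Consumer;
import java.util.function.UnaryOperator;

class MasterStreamRouter {

    private final List<UnaryOperator<byte[]>> previewTasks; // strategy A tasks
    private final Consumer<byte[]> display;                  // preview interface
    private final Consumer<byte[]> saveVideo;                // gallery writer

    MasterStreamRouter(List<UnaryOperator<byte[]>> previewTasks,
                       Consumer<byte[]> display,
                       Consumer<byte[]> saveVideo) {
        this.previewTasks = previewTasks;
        this.display = display;
        this.saveVideo = saveVideo;
    }

    // Called for each frame the slave device sends over the network.
    void onFrameFromSlave(String streamName, byte[] frame) {
        if ("preview".equals(streamName)) {
            // Preview stream data 1 -> preview stream data 2.
            for (UnaryOperator<byte[]> task : previewTasks) {
                frame = task.apply(frame);
            }
            display.accept(frame);
        } else {
            // Video stream data 2 arrives fully processed; save it.
            saveVideo.accept(frame);
        }
    }
}
```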
  • It can be seen that when the TV 902 (that is, the slave device) produces multiple image data streams during shooting, the mobile phone (that is, the master device) can use the slave device's shooting capability to assign the image processing tasks of different image data streams to different devices.
  • In this way, the master device and the slave device can perform image processing on their respective image data streams in parallel, which improves the resource utilization of each device in a distributed shooting scenario and correspondingly improves the processing efficiency of the entire shooting process.
  • Moreover, the slave device can send different image data streams to the master device in a time-division, segmented manner. For example, while the slave device performs image processing on the video stream data, it can send the preview stream data to the master device, which processes the preview stream data; subsequently, the slave device sends the processed video stream data to the master device. In this way, the slave device does not need to send multiple image data streams to the master device at once, which reduces the network bandwidth pressure during image data stream transmission and further improves the processing efficiency of the entire shooting process. A small sketch of this staggered uplink follows.
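One way to picture this staggered, time-division transfer is a small uplink that sends preview frames immediately and queues processed video segments for a background sender. The queue capacity and the Link interface are illustrative assumptions, not details from the embodiments.

```java
// Hypothetical sketch of a time-division, segmented uplink: preview
// frames are latency-critical and go out at once, while processed video
// segments are queued and drained in the background, so both streams
// are not pushed over the network at the same instant.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SegmentedUplink {

    interface Link {
        void send(String streamName, byte[] data);
    }

    private final Link link;
    private final BlockingQueue<byte[]> videoSegments =
            new LinkedBlockingQueue<>(64); // illustrative capacity

    SegmentedUplink(Link link) {
        this.link = link;
        Thread drainer = new Thread(this::drainVideo, "video-uplink");
        drainer.setDaemon(true);
        drainer.start();
    }

    // Preview frames: send immediately to keep the preview real-time.
    void onPreviewFrame(byte[] frame) {
        link.send("preview", frame);
    }

    // Processed video segments: tolerate delay, so enqueue for later.
    void onProcessedVideoSegment(byte[] segment) throws InterruptedException {
        videoSegments.put(segment);
    }

    private void drainVideo() {
        try {
            while (true) {
                link.send("video", videoSegments.take());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```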
  • The above embodiment is illustrated by taking the TV 902 as the slave device of the mobile phone as an example. It can be understood that when the slave device of the mobile phone is changed to another electronic device in the distributed shooting scenario, the mobile phone can still use the new slave device's shooting capability, according to the above method, to determine the corresponding shooting strategy in real time during shooting. The mobile phone and the slave device then each perform the corresponding image processing on the collected raw image data according to their own shooting capabilities, thereby implementing the multi-device collaborative shooting function in the distributed shooting scenario more efficiently and flexibly.
  • In addition, the above embodiments take the mobile phone as the master device in the distributed shooting scenario as an example. It can be understood that the master device may also be a tablet computer, a TV, or another electronic device with the above shooting function; the embodiments of the present application impose no restriction on this.
  • It should be noted that the above embodiments use the Android system as an example to describe the specific method by which the functional modules implement the distributed shooting function. It can be understood that corresponding functional modules may also be provided in other operating systems (for example, the HarmonyOS system) to implement the above method. As long as the functions implemented by the devices and functional modules are similar to those in the embodiments of the present application, they fall within the scope of the claims of the present application and their technical equivalents.
  • As shown in FIG. 19, an embodiment of the present application discloses an electronic device, which may be the above master device (for example, a mobile phone).
  • The electronic device may specifically include: a touchscreen 1901, which includes a touch sensor 1906 and a display screen 1907; one or more processors 1902; a memory 1903; a communication module 1908; one or more cameras 1909; one or more application programs (not shown); and one or more computer programs 1904. These components may be connected through one or more communication buses 1905.
  • The above one or more computer programs 1904 are stored in the memory 1903 and configured to be executed by the one or more processors 1902. The one or more computer programs 1904 include instructions, which can be used to perform the relevant steps performed by the master device in the above embodiments.
  • As shown in FIG. 20, an embodiment of the present application discloses an electronic device, which may be the above slave device (for example, a speaker).
  • The electronic device may specifically include: one or more processors 2002; a memory 2003; a communication module 2006; one or more application programs (not shown); one or more cameras 2001; and one or more computer programs 2004. These components may be connected through one or more communication buses 2005.
  • Of course, the slave device may also be provided with components such as a touchscreen; this is not limited in the embodiments of the present application.
  • The above one or more computer programs 2004 are stored in the memory 2003 and configured to be executed by the one or more processors 2002. The one or more computer programs 2004 include instructions, which can be used to perform the relevant steps performed by the slave device in the above embodiments.
  • Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The above integrated units may be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Such a computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes media that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present application provides a shooting method, system, and electronic device, relating to the field of terminals, which can switch the shooting function of one electronic device to another electronic device in a distributed shooting scenario to achieve a better shooting effect. The method includes: when a first device shoots using the camera of a second device, the first device determines a first shooting strategy, the first shooting strategy including X image processing tasks to be performed by the first device and Y image processing tasks to be performed by the second device; the first device can then send a first shooting instruction to the second device, so that the second device, in response to the first shooting instruction, performs the above Y image processing tasks on the collected raw image data to obtain first image data; after receiving the first image data sent by the second device, the first device can perform the above X image processing tasks on the first image data to obtain second image data, and display the second image data on a display interface.

Description

一种拍摄方法、系统及电子设备
本申请要求于2020年12月29日提交国家知识产权局、申请号为202011608325.X、申请名称为“一种拍摄方法、系统及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子设备领域,尤其涉及一种拍摄方法、系统及电子设备。
背景技术
目前,手机等电子设备中均安装有摄像头,电子设备通过摄像头可以实现拍照、录像等拍摄功能。例如,电子设备可以打开自身的摄像头采集图像数据,摄像头采集到的图像数据可称为原始图像数据。如果电子设备在拍摄时开启了柔焦、美颜或滤镜等功能,则电子设备还可以使用相应的图像处理算法对原始图像数据进行图像处理,输出经图像处理后的目标图像数据。
当一个用户或家庭中具备多个具有拍摄功能的电子设备时,可能会出现用户需要将电子设备1的拍摄功能切换至电子设备2中实现的分布式拍摄场景。例如,用户使用手机进行视频通话时,手机可以通过与电视交互,使用电视的摄像头进行拍摄。在这种分布式拍摄场景中,不同电子设备的拍摄能力可能不同,例如,电子设备1支持的图像处理算法与电子设备2支持的图像处理算法不同。此时,如何将一个电子设备的拍摄功能切换至另一电子设备中实现较好的拍摄效果成为亟需解决的问题。
发明内容
本申请提供一种拍摄方法、系统及电子设备,可以在分布式拍摄场景中将一个电子设备的拍摄功能切换至另一电子设备中,实现较好的拍摄效果,提高用户的使用体验。
为达到上述目的,本申请采用如下技术方案:
第一方面,本申请提供一种拍摄方法,包括:当第一设备使用第二设备的摄像头进行拍摄时,第一设备可以在拍摄过程中确定第一拍摄策略,第一拍摄策略包括第一设备需要执行的X个图像处理任务,以及第二设备需要执行的Y个图像处理任务,X和Y均为大于或等于0的整数;进而,第一设备可按照第一拍摄策略向第二设备发送第一拍摄指令,第一拍摄指令用于触发第二设备可响应于第一拍摄指令对采集到的原始图像数据执行上述Y个图像处理任务,得到第一图像数据;当第一设备接收到第二设备发送的第一图像数据后,第一设备可按照第一拍摄策略对第一图像数据执行上述X个图像处理任务,得到第二图像数据;进而,第一设备可在显示界面中显示第二图像数据。
也就是说,在分布式拍摄场景中,第一设备作为主设备可以结合第二设备(即从设备)的拍摄能力实时的确定拍摄过程中的拍摄策略,通过拍摄策略将需要执行的一个或多个图像处理任务分配给相应的电子设备执行。这样一来,第一设备能够根据其他设备的图像处理能力将自身的拍摄功能切换至其他设备上实现,第一设备可与其他设备更加高效、灵活的协同实现分布式拍摄功能,从而在分布式拍摄场景中实现较好的拍摄效果,同时为用户 提供较好的拍摄使用体验。
在一种可能的实现方式中,第一设备确定第一拍摄策略,具体包括:响应于当前的拍摄模式和用户选择的拍摄选项,第一设备可确定需要执行的N个图像处理任务,N=X+Y;进而,第一设备可将这N个图像处理任务分配给第一设备和第二设备,得到上述第一拍摄策略。这样,在拍摄每一帧图像数据时第一设备可将需要执行的图像处理任务分配给第一设备(即主设备)和第二设备(即从设备),充分利用从设备的设备能力与主设备协同完成拍摄过程中的图像处理任务。
在一种可能的实现方式中,在第一设备确定第一拍摄策略之前,还包括:第一设备可获取第二设备的拍摄能力参数,该拍摄能力参数用于指示第二设备的图像处理能力,例如,该拍摄能力参数可以包括第二设备支持的图像处理算法等;此时,第一设备将上述N个图像处理任务分配给第一设备和第二设备,具体包括:第一设备可以按照上述拍摄能力参数,将N个图像处理任务分配给第一设备和第二设备,使得第二设备可以执行自身能够支持的相关图像处理任务,同时,第一设备也可以执行自身能够支持的相关图像处理任务,提高分布式拍摄过程的图像处理效率。
在一种可能的实现方式中,以上述N个图像处理任务包括第一图像处理任务举例,其中,第一设备按照拍摄能力参数,将第一图像处理任务分配给第一设备或第二设备,具体包括:若第二设备的拍摄能力参数指示第二设备有能力执行第一图像处理任务,则第一设备可将第一图像处理任务分配给第二设备;也就是说,当第二设备(即从设备)有能力执行的图像处理任务可以分配给从设备执行,减轻第一设备(即主设备)的处理负荷。
或者;当第一设备和第二设备均有能力执行第一图像处理任务时,若第二设备的拍摄能力参数指示第二设备执行第一图像处理任务的时间短于第一设备执行第一图像处理任务的时间,则第一设备可将第一图像处理任务分配给第二设备。也就是说,第一设备可将图像处理任务分配给处理速度更快的设备,以提高后续图像处理过程的处理效率。
在一种可能的实现方式中,在第一设备获取第二设备的拍摄能力参数之后,还包括:第一设备可按照拍摄能力参数在第一设备的HAL中创建对应的硬件抽象模块(例如,DMSDP HAL),该硬件抽象模块具有第二设备的图像处理能力;上述方法还包括:第一设备通过该硬件抽象模块接收第二设备发送的第一图像数据。也就是说,第一设备与第二设备之间可以通过DMSDP HAL进行数据收发。
在一种可能的实现方式中,第一设备的HAL中还可以包括Camera HAL,即与第一设备的摄像头对应的HAL;上述分配给第一设备的X个图像处理任务可以包括第二设备支持的X1个图像处理任务和第一设备支持的X2个图像处理任务,X1+X2=X;其中,第一设备按照第一拍摄策略对第一图像数据执行X个图像处理任务,得到第二图像数据,具体包括:第一设备通过上述硬件抽象模块对第一图像数据执行X1个图像处理任务,得到第三图像数据;进而,上述硬件抽象模块可将第三图像数据发送至Camera HAL,由Camera HAL对第三图像数据执行X2个图像处理任务,得到第二图像数据。
也就是说,由于第一设备的硬件抽象模块具有第二设备的处理能力,因此,该硬件抽象模块可用于执行第二设备支持的图像处理任务;而第一设备中传统的Camera HAL可用于执行第一设备支持的图像处理任务。
在一种可能的实现方式中,如果上述X个图像处理任务均为第一设备支持的图像处理 任务,则第一设备在按照第一拍摄策略对第一图像数据执行X个图像处理任务,得到第二图像数据时,具体包括:第一设备的硬件抽象模块可直接将接收到的第一图像数据发送至Camera HAL;由Camera HAL对第一图像数据执行X个图像处理任务,得到第二图像数据。
在一种可能的实现方式中,上述方法还包括:当第一设备检测到用户输入的预设操作后,第一设备响应于预设操作更新当前的拍摄模式或拍摄选项。例如,用户可以将拍摄模式从拍照模式切换为录像等。又例如,用户可以选择或撤销美颜、滤镜等拍摄选项。后续,基于更新后的拍摄模式或拍摄选项,第一设备可继续按照上述方法确定对应的拍摄策略与第二设备协同进行拍摄。
其中,上述拍摄模式可以包括预览(也可称为预览模式)、拍照(也可称为拍照模式)、录像(也可称为录像模式)、人像(也可称为人像模式)或慢动作(也可称为慢动作模式)等拍摄模式。上述拍摄选项可以包括美颜、滤镜、焦距调整或曝光度调整等拍摄选项。
在一种可能的实现方式中,在第一设备使用第二设备的摄像头进行拍摄之前,还包括:响应于用户的输入的第一操作,第一设备可显示候选设备列表,该候选设备列表中包括第二设备;响应于用户在候选设备列表选择第二设备的操作,第一设备可指示第二设备启动摄像头开始采集原始图像数据。
在一种可能的实现方式中,当第一设备使用第二设备的摄像头进行拍摄时,上述方法还包括:第一设备接收用户输入的录像操作;响应于录像操作,第一设备可确定第二拍摄策略和第三拍摄策略,其中,第二拍摄策略包括需要对预览流数据执行的K个图像处理任务,这K个图像处理任务由第一设备执行,第三拍摄策略包括需要对录像流数据执行的W个图像处理任务,这W个图像处理任务由第二设备执行,K和W均为大于或等于0的整数;进而,第一设备可按照第二拍摄策略和第三拍摄策略向第二设备发送第二拍摄指令,第二拍摄指令用于触发第二设备将采集到的第一预览流数据直接发送给第一设备,而对采集到的第一录像流数据(第一预览流数据和第一录像流数据为第二设备采集到的原始图像数据)执行上述W个图像处理任务,并将得到的第二录像流数据发送给第一设备;这样,当第一设备接收到第二设备采集到的第一预览流数据后,第一设备可对第一预览流数据执行K个图像处理任务,得到第二预览流数据;再由第一设备在显示界面中显示第二预览流数据;当第一设备接收到第二设备发送的第二录像流数据后,第一设备将第二录像流数据保存为视频。
在一种可能的实现方式中,在第一设备确定第二拍摄策略和第三拍摄策略之前,还包括:第一设备可获取第二设备的拍摄能力参数,该拍摄能力参数用于指示第二设备的图像处理能力;此时,第一设备确定第二拍摄策略和第三拍摄策略,包括:第一设备根据上述拍摄能力参数确定第二拍摄策略和第三拍摄策略。
第二方面,本申请提供一种拍摄方法,包括:第一设备接收用户输入的录像操作;响应于录像操作,第一设备可确定第二拍摄策略和第三拍摄策略,其中,第二拍摄策略包括需要对预览流数据执行的K个图像处理任务,这K个图像处理任务由第一设备执行,第三拍摄策略包括需要对录像流数据执行的W个图像处理任务,这W个图像处理任务由第二设备执行,K和W均为大于或等于0的整数;进而,第一设备可按照第二拍摄策略和第三拍摄策略向第二设备发送第二拍摄指令,第二拍摄指令用于触发第二设备将采集到的第一 预览流数据直接发送给第一设备,而对采集到的第一录像流数据(第一预览流数据和第一录像流数据为第二设备采集到的原始图像数据)执行上述W个图像处理任务,并将得到的第二录像流数据发送给第一设备;这样,当第一设备接收到第二设备采集到的第一预览流数据后,第一设备可对第一预览流数据执行K个图像处理任务,得到第二预览流数据;再由第一设备在显示界面中显示第二预览流数据;当第一设备接收到第二设备发送的第二录像流数据后,第一设备将第二录像流数据保存为视频。
这样一来,在产生多路图像数据流的场景中,第一设备可以将预览流数据的图像处理任务分配给第一设备,使得第二设备可将产生的预览流数据直接发送给第一设备进行图像处理,以保证预览流数据对实时性的较高需求。而第二设备可对实时性需求不高的录像流数据进行图像处理,使得第一设备在对预览流数据进行图像处理的同时,第二设备可以同时对录像流数据进行图像处理。
也就是说,主设备和从设备可以以并行的方式同时对不同的图像数据流进行图像处理,使得分布式拍摄场景下各个设备的资源利用率提高,整个拍摄过程的处理效率也相应提高。同时,由于主设备和从设备可以同时对获取到的图像数据流进行图像处理,使得从设备可以分时、分段的将不同的图像数据流发送给主设备,从而降低图像数据流传输时的网络带宽压力,进一步提升整个拍摄过程的处理效率。
示例性的,在录像场景下,第一设备确定拍摄策略、进行图像处理等具体方法可参见第一方面的相关描述。例如,在第一设备确定第二拍摄策略和第三拍摄策略之前,还包括:第一设备获取第二设备的拍摄能力参数,该拍摄能力参数用于指示第二设备的图像处理能力;此时,第一设备确定第二拍摄策略和第三拍摄策略,具体包括:第一设备可以根据拍摄能力参数确定第二拍摄策略和第三拍摄策略,从而利用从设备的设备能力与主设备协同完成拍摄过程中的图像处理任务。
在一种可能的实现方式中,当第一设备使用第二设备的摄像头进行拍摄时,上述方法还包括:第一设备接收用户更新当前的拍摄模式或拍摄选项的操作;响应于更新后的拍摄模式或拍摄选项,第一设备确定需要执行的N个图像处理任务;进而,第一设备可将这N个图像处理任务分配给第一设备和第二设备,得到第一拍摄策略,第一拍摄策略中包括第一设备需要执行的X个图像处理任务,以及第二设备需要执行的Y个图像处理任务;进而,第一设备可按照第一拍摄策略向第二设备发送第一拍摄指令,触发第二设备响应于第一拍摄指令对采集到的原始图像数据执行上述Y个图像处理任务,得到第一图像数据;后续,当第一设备接收到第二设备发送的第一图像数据后,第一设备可按照第一拍摄策略对第一图像数据执行上述X个图像处理任务,得到第二图像数据;并且,第一设备可在显示界面中显示第二图像数据。
在一种可能的实现方式中,在第一设备确定第一拍摄策略之前,还包括:第一设备可获取第二设备的拍摄能力参数,该拍摄能力参数用于指示第二设备的图像处理能力,例如,该拍摄能力参数可以包括第二设备支持的图像处理算法等;此时,第一设备将上述N个图像处理任务分配给第一设备和第二设备,具体包括:第一设备可以按照上述拍摄能力参数,将N个图像处理任务分配给第一设备和第二设备,使得第二设备可以执行自身能够支持的相关图像处理任务,同时,第一设备也可以执行自身能够支持的相关图像处理任务,提高分布式拍摄过程的图像处理效率。
在一种可能的实现方式中,以上述N个图像处理任务包括第一图像处理任务举例,其中,第一设备按照拍摄能力参数,将第一图像处理任务分配给第一设备或第二设备,具体包括:若第二设备的拍摄能力参数指示第二设备有能力执行第一图像处理任务,则第一设备可将第一图像处理任务分配给第二设备;也就是说,当第二设备(即从设备)有能力执行的图像处理任务可以分配给从设备执行,减轻第一设备(即主设备)的处理负荷。
或者;当第一设备和第二设备均有能力执行第一图像处理任务时,若第二设备的拍摄能力参数指示第二设备执行第一图像处理任务的时间短于第一设备执行第一图像处理任务的时间,则第一设备可将第一图像处理任务分配给第二设备。也就是说,第一设备可将图像处理任务分配给处理速度更快的设备,以提高后续图像处理过程的处理效率。
另外,与第一方面提供的拍摄方法类似的,在第二方面提供的拍摄方法中,第一设备的HAL中也可以设置硬件抽象模块和Camera HAL等模块,硬件抽象模块和Camera HAL等模块的具体工作原理可参见第一方面的相关描述,故此处不再赘述。
第三方面,本申请提供一种拍摄方法,包括:第二设备可响应于第一设备的指示打开摄像头开始采集原始图像数据;当第二设备接收到第一设备发送的第一拍摄指令后,如果第一拍摄指令用于指示第二设备需要执行Y个图像处理任务,Y为大于或等于0的整数,则第二设备可对采集到的原始图像数据执行Y个图像处理任务,得到第一图像数据;进而,第二设备可向第一设备发送第一图像数据,由第一设备继续对第一图像数据进行相关的图像处理任务,实现主设备和从设备协同拍摄的功能。
在一种可能的实现方式中,在第二设备响应于第一设备的指示打开摄像头开始采集原始图像数据之前,还包括:第二设备与第一设备建立网络连接;第二设备可将第二设备的拍摄能力参数发送至第一设备,拍摄能力参数用于指示第二设备的图像处理能力。后续,主设备(即第一设备)可以利用分布式拍摄场景中从设备(即第二设备)的拍摄能力,将适合从设备执行的图像处理任务分配给从设备完成,将适合主设备执行的图像处理任务分配给主设备完成,提升整个拍摄过程的处理效率。
在一种可能的实现方式中,在第二设备响应于第一设备的指示打开摄像头开始采集原始图像数据之后,还包括:第二设备可响应于第一设备的指示打开摄像头开始采集原始图像数据;进而,第二设备可接收第一设备发送的第二拍摄指令,第二拍摄指令用于指示当前的拍摄模式为录像,第二拍摄指令中包括需要对录像数据流执行的W个图像处理任务,W为大于或等于0的整数;进而,响应于第二拍摄指令,第二设备可将采集到的原始图像数据复制为两路,得到第一录像流数据和第一预览流数据;后续,第二设备可将第一预览流数据发送至第一设备,由第一设备对第一预览流数据执行相关的图像处理任务;同时,第二设备可对第一录像流数据执行W个图像处理任务,得到第二录像流数据,并将第二录像流数据发送至第一设备。
第四方面,本申请提供一种拍摄方法,包括:第二设备可响应于第一设备的指示打开摄像头开始采集原始图像数据;进而,第二设备可接收第一设备发送的第二拍摄指令,第二拍摄指令用于指示当前的拍摄模式为录像,第二拍摄指令中包括需要对录像数据流执行的W个图像处理任务,W为大于或等于0的整数;进而,响应于第二拍摄指令,第二设备可将采集到的原始图像数据复制为两路,得到第一录像流数据和第一预览流数据;后续,第二设备可将第一预览流数据发送至第一设备,由第一设备对第一预览流数据执行相关的 图像处理任务;同时,第二设备可对第一录像流数据执行W个图像处理任务,得到第二录像流数据,并将第二录像流数据发送至第一设备。这样,第二设备和第一设备可以以并行的方式同时对获取到的图像数据流进行图像处理,提高整个拍摄过程的处理效率。同时,由于主设备和从设备可以同时对获取到的图像数据流进行图像处理,使得从设备可以分时、分段的将不同的图像数据流发送给主设备,从而降低图像数据流传输时的网络带宽压力,进一步提升整个拍摄过程的处理效率。
在一种可能的实现方式中,在第二设备响应第一设备的指示打开摄像头开始采集原始图像数据之后,还包括:第二设备可接收第一设备发送的第一拍摄指令,第一拍摄指令用于指示当前的拍摄模式为拍照或预览等拍摄模式,第一拍摄指令中可以携带第二设备需要执行的Y个图像处理任务;进而,第二设备可响应该第一拍摄指令,对采集到的原始图像数据执行上述Y个图像处理任务,得到第一图像数据,并将第一图像数据发送至第一设备。
与第三方面中第二设备的处理过程类似的,在录像、拍照或预览等拍摄模式下,第二设备也可以与第一设备建立网络连接,并将自身的拍摄能力参数发送至第一设备,本申请对此不做任何限制。
第五方面,本申请提供一种电子设备(例如上述第一设备),包括:显示屏、通信模块、一个或多个处理器、一个或多个存储器、以及一个或多个计算机程序;其中,处理器与通信模块、显示屏以及存储器均耦合,上述一个或多个计算机程序被存储在存储器中,当电子设备运行时,该处理器执行该存储器存储的一个或多个计算机程序,以使电子设备执行上述任一方面中的拍摄方法。
第六方面,本申请提供一种电子设备(例如上述第二设备),包括:通信模块、一个或多个处理器、一个或多个存储器、以及一个或多个计算机程序;其中,处理器与通信模块、存储器均耦合,上述一个或多个计算机程序被存储在存储器中,当电子设备运行时,该处理器执行该存储器存储的一个或多个计算机程序,以使电子设备执行上述任一方面中的拍摄方法。
第七方面,本申请提供一种分布式拍摄系统,包括上述第五方面中的第一设备以及上述第六方面中的第二设备。其中,当第一设备使用第二设备的摄像头进行拍摄时,第二设备可响应于第一设备的指示打开摄像头开始采集原始图像数据,并且,第一设备可以在拍摄过程中确定第一拍摄策略,第一拍摄策略包括第一设备需要执行的X个图像处理任务,以及第二设备需要执行的Y个图像处理任务,X和Y均为大于或等于0的整数;进而,第一设备可按照第一拍摄策略向第二设备发送第一拍摄指令,以使得第二设备响应于第一拍摄指令对采集到的原始图像数据执行Y个图像处理任务,得到第一图像数据;进而,第二设备可将第一图像数据发送至第一设备;当第一设备接收到第一图像数据后,可按照第一拍摄策略对第一图像数据执行X个图像处理任务,得到第二图像数据;最终,第一设备可在显示界面中显示第二图像数据。
第八方面,本申请提供一种计算机可读存储介质,包括计算机指令,当计算机指令在上述第一设备或第二设备上运行时,使得第一设备或第二设备执行上述任一方面中所述的拍摄方法。
第九方面,本申请提供一种计算机程序产品,当计算机程序产品在上述第一设备或第二设备上运行时,使得第一设备或第二设备执行上述任一方面中所述的拍摄方法。
可以理解地,上述各个方面所提供的电子设备、分布式拍摄系统、计算机可读存储介质以及计算机程序产品均应用于上文所提供的对应方法,因此,其所能达到的有益效果可参考上文所提供的对应方法中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种分布式拍摄系统的架构示意图;
图2为本申请实施例提供的一种拍摄方法的应用场景示意图一;
图3为本申请实施例提供的一种拍摄方法的应用场景示意图二;
图4为本申请实施例提供的一种拍摄方法的交互示意图;
图5为本申请实施例提供的一种电子设备的结构示意图一;
图6为本申请实施例提供的一种拍摄方法的应用场景示意图三;
图7为本申请实施例提供的一种拍摄方法的应用场景示意图四;
图8为本申请实施例提供的一种拍摄方法的应用场景示意图五;
图9为本申请实施例提供的一种拍摄方法的应用场景示意图六;
图10为本申请实施例提供的一种拍摄方法的应用场景示意图七;
图11为本申请实施例提供的一种拍摄方法的应用场景示意图八;
图12为本申请实施例提供的一种拍摄方法的应用场景示意图九;
图13为本申请实施例提供的一种拍摄方法的应用场景示意图十;
图14为本申请实施例提供的一种拍摄方法的应用场景示意图十一;
图15为本申请实施例提供的一种拍摄方法的应用场景示意图十二;
图16为本申请实施例提供的一种拍摄方法的应用场景示意图十三;
图17为本申请实施例提供的一种拍摄方法的应用场景示意图十四;
图18为本申请实施例提供的一种拍摄方法的应用场景示意图十五;
图19为本申请实施例提供的一种电子设备的结构示意图二;
图20为本申请实施例提供的一种电子设备的结构示意图三。
具体实施方式
下面将结合附图对本实施例的实施方式进行详细描述。
本申请实施例提供的一种拍摄方法,可应用于图1所示的分布式拍摄系统200中。如图1所示,该分布式拍摄系统200中可以包括主设备(master)101和N个从设备(slave)102,N为大于0的整数。主设备101与任意一个从设备102之间可通过有线的方式通信,也可以通过无线的方式通信。
示例性的,主设备101与从设备102之间可以使用通用串行总线(universal serial bus,USB)建立有线连接。又例如,主设备101与从设备102之间可以通过全球移动通讯系统(global system for mobile communications,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址接入(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE)、蓝牙、无线保真(wireless fidelity,Wi-Fi)、NFC、基于互联网协议的语音通话(voice over Internet protocol,VoIP)、支持网络切片架构的通信协议建立无线连接。
其中,主设备101和从设备102中均可设置一个或多个摄像头。主设备101可使用从设备102上的摄像头采集图像数据,从而将主设备101的拍照、录像等拍摄功能分布至一个或多个从设备102中实现,从而实现跨设备的分布式拍摄功能。
示例性的,主设备101(或从设备102)具体可以为手机、平板电脑、电视(也可称为智能电视、智慧屏或大屏设备)、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备(例如智能手表,智能眼镜,智能头盔,智能手环)、车载设备、虚拟现实设备等具有拍摄功能的电子设备,本申请实施例对此不做任何限制。
以手机为主设备101举例,手机中可以安装用于实现拍摄功能的相机应用。如图2所示,检测到用户打开相机应用后,手机可打开自身的摄像头开始采集图像数据,并将图像数据实时显示在预览界面201的预览框202中。
一般,手机获取到摄像头采集到的图像数据(可称为原始图像数据)后,可以对原始图像数据执行一个或多个图像处理任务。例如,手机可使用预设的曝光算法对原始图像数据执行图像处理任务1,以调整原始图像数据的曝光度。又例如,手机可使用预设的人脸检测算法对原始图像数据执行图像处理任务2,以识别原始图像数据中的人脸图像。又例如,手机可使用预设的美颜算法对原始图像数据执行图像处理任务3,从而对原始图像数据中的人脸图像进行美颜处理。
当然,手机还可以对原始图像数据执行防抖、对焦、柔焦、虚化、滤镜、AR特效、笑脸检测、肤色调整或场景识别等相关功能的图像处理任务。这些图像处理任务可以是手机响应用户的设置执行的,也可以是手机默认自动执行的,本申请实施例对此不做任何限制。
例如,如果用户在相机应用中开启了美颜功能,则手机可获取到上述原始图像数据后,可响应用户打开美颜功能的操作执行与美颜功能对应的图像处理任务。又例如,手机获取到上述原始图像数据后,虽然用户没有输入调整焦距的操作,手机仍可自动执行对焦功能的图像处理任务。
在一些场景下,手机需要执行的图像处理任务可以有多个。例如,以手机需要执行图像处理任务1和图像处理任务2举例,手机可以先对获取到的原始图像数据执行图像处理任务1,得到处理后的第一图像数据。进而,手机可对第一图像数据执行图像处理任务2,得到处理后的第二图像数据。后续,手机可将第二图像数据显示在预览界面201的预览框202中,从而向用户呈现经过图像处理后的拍摄画面。
在本申请实施例中,手机可以在相机应用的预览界面201中设置按钮203。当用户希望将手机的拍摄功能(也可称为相机功能)切换至其他设备中实现时,可点击切换按钮203查询当前可以采集图像数据一个或多个电子设备。
示例性的,手机检测到用户点击切换按钮203后,如图3所示,手机可在对话框301中显示当前手机搜索到的可以采集图像数据一个或多个候选设备。例如,服务器中可以记录每个电子设备是否具有拍摄功能。那么,手机可以在服务器中查询与手机登录同一账号(例如华为账号)的具有拍摄功能的电子设备。进而,手机可将查询到的电子设备作为候选设备显示在对话框301中。
或者,手机可以搜索与手机位于同一Wi-Fi网络中的电子设备。进而,手机可向同一Wi-Fi网络中的各个电子设备发送查询请求,触发接收到查询请求的电子设备可向手机发 送响应消息,响应消息中可以指示自身是否具有拍摄功能。那么,手机可以根据接收到的响应消息确定出当前Wi-Fi网络中具有拍摄功能的电子设备。进而,手机可将具有拍摄功能的电子设备作为候选设备显示在对话框301中。
又或者,手机中可安装用于管理家庭内智能家居设备(例如电视、空调、音箱或冰箱等)的应用。以智能家居应用举例,用户可以在智能家居应用中添加一个或多个智能家居设备,使得用户添加的智能家居设备与手机建立关联。例如,智能家居设备上可以设置包含设备标识等设备信息的二维码,用户使用手机的智能家居应用扫描该二维码后,可将对应的智能家居设备添加至智能家居应用中,从而建立智能家居设备与手机的关联关系。在本申请实施例中,当智能家居应用中添加的一个或多个智能家居设备上线时,例如,当手机检测到已添加智能家居设备发送的Wi-Fi信号时,手机可将该智能家居设备作为候选设备显示在对话框301中,提示用户选择使用相应的智能家居设备完成手机的拍摄功能。
与手机处理上述原始图像数据类似的,对话框301中的各个候选设备也可以按照上述方法对摄像头采集到的原始图像数据执行一个或多个图像处理任务。不同的是,不同电子设备所具有的图像处理的能力可能不同,也就是说,不同电子设备所支持的图像处理任务可能不同。例如,电视1可能支持变焦功能的图像处理任务,但不支持美颜功能的图像处理任务。
仍如图3所示,以手机搜索到的候选设备包括电视1、手表2以及手机3举例,用户可以在电视1、手表2以及手机3中选择本次将手机的拍摄功能具体切换至哪个设备中完成。例如,如果检测到用户选择电视1,则如图4所示,手机可将电视1作为本次手机切换拍摄功能的从设备,与电视1建立网络连接。例如,手机可通过路由器与电视1建立Wi-Fi连接,或者,手机可直接与电视1建立Wi-Fi P2P连接,或者,手机可直接与电视1建立移动网络连接,该移动网络包括但不限于支持2G,3G,4G,5G以及后续标准协议的移动网络。
进而,仍如图4所示,手机可从电视1获取电视1的拍摄能力参数,该拍摄能力参数用于指示电视1所支持的一个或多个图像处理任务。示例性的,电视1的拍摄能力参数具体可以包括电视1所支持的图像处理任务对应的算法,例如,图像处理任务A与美颜功能的算法1对应,图像处理任务B与人脸检测功能的算法2对应。或者,电视1的拍摄能力参数还可以包括电视1中摄像头的数目,每个摄像头的FOV(field of view,视场角)、光圈大小、分辨率等。这样,手机根据电视1的拍摄能力参数可以确定出电视1支持的具体图像处理任务。
仍如图4所示,手机获取到电视1的拍摄能力参数后,一方面,手机可指示电视1打开自身的摄像头采集原始图像数据,另一方面,手机可基于电视1的拍摄能力参数实时确定拍摄过程中的拍摄策略,进而根据该拍摄策略与电视1协同对采集到的原始图像数据进行相关图像处理。
例如,当手机中的相机应用处于预览模式时,手机可确定需要对原始图像数据执行与对焦功能对应的图像处理任务1。如果电视1的拍摄能力参数指示电视1不支持图像处理任务1,则手机可在拍摄策略1中设置由手机对原始图像数据执行图像处理任务1。进而,手机可以向电视1发送拍摄指令1,拍摄指令1用于指示电视1将采集到的原始图像数据发送至手机进行图像处理。进而,电视1可响应拍摄指令1,将摄像头实时采集到的原始 图像数据发送给手机。手机接收到来自电视发送的原始图像数据后,可按照拍摄策略1对原始图像数据执行图像处理任务1,得到处理后的图像数据1。后续,手机可将图像数据1显示在预览界面201的预览框202中,从而向用户呈现经过对焦后的预览画面。
上述实施例中是以相机应用在预览模式下需要对原始图像数据执行图像处理任务1举例说明的,在一些实施例中,手机的拍摄模式可以有多种。例如,拍摄模式可以包括预览(也可称为预览模式)、拍照(也可称为拍照模式)、录像(也可称为录像模式)、人像(也可称为人像模式)或慢动作(也可称为慢动作模式)等拍摄模式。在不同拍摄模式下手机需要对原始图像数据执行的图像处理任务可以有多个。例如,在人像模式下,手机需要执行的图像处理任务可以包括曝光增强、美颜、柔焦等多个图像处理任务。另外,在某一拍摄模式下,用户还以手动设置拍摄过程中的拍摄选项。例如,上述拍摄选项可以包括美颜、滤镜或焦距调整等。仍以上述预览模式举例,如果检测到用户打开相机应用中美颜的拍摄选项,则除了上述图像处理任务1外,手机可确定还需要对原始图像数据执行与美颜功能对应的图像处理任务2。
此时,手机可结合电视1的拍摄能力参数确定新的拍摄策略,例如拍摄策略2。手机在拍摄策略2中可根据电视1的拍摄能力将图像处理任务1和图像处理任务2分配给手机和/或电视1执行。进而,手机可根据拍摄策略2向电视1发送新的拍摄指令,使得手机可协同电视1按照各自的拍摄能力对采集到的原始图像数据进行相关图像处理。
也就是说,在分布式拍摄场景中,主设备101(例如上述手机)可以结合从设备102(例如上述电视1)的拍摄能力实时的确定拍摄过程中的拍摄策略,通过拍摄策略将需要执行的一个或多个图像处理任务分配给相应的电子设备执行。这样一来,主设备101能够根据从设备102拍摄时的图像处理能力将自身的拍摄功能切换至从设备102上实现,主设备101可与从设备102更加高效、灵活的协同实现分布式拍摄功能,从而在分布式拍摄场景中实现较好的拍摄效果,同时为用户提供较好的拍摄使用体验。
其中,主设备101结合从设备102的拍摄能力确定拍摄过程中拍摄策略的具体细节将在后续实施例中详细阐述,故此处不予赘述。
示例性的,以手机作为上述分布式拍摄系统200中的主设备201举例,图5示出了手机的结构示意图。
手机可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180等。
可以理解的是,本发明实施例示意的结构并不构成对手机的具体限定。在本申请另一些实施例中,手机可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或 多个处理器中。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
手机的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。
移动通信模块150可以提供应用在手机上的包括2G/3G/4G/5G等无线通信的解决方案。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在手机上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
在一些实施例中,手机的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得手机可以通过无线通信技术与网络以及其他设备通信。
手机通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,手机可以包括1个或N个显示屏194,N为大于1的正整数。
手机可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,手机可以包括1个或N个摄像头193,N为大于1的正整数。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储手机使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
手机可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。手机可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当手机接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。手机可以设置至少一个麦克风170C。在另一些实施例中,手机可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,手机还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
传感器模块180中可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。
当然,手机还可以包括充电管理模块、电源管理模块、电池、按键、指示器以及1个或多个SIM卡接口等,本申请实施例对此不做任何限制。
上述手机的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明手机的软件结构。当然,在其他操作系统(例如鸿蒙系统、Linux系统等)中,只要各个功能模块实现的功能和本申请的实施例类似,即属于本申请权利要求及其等同技术的范围之内。
图6是本申请实施例的手机的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为五层,从上至下分别为应用程序层,应 用程序框架层,安卓运行时(Android runtime)和系统库,HAL(hardware abstraction layer,硬件抽象层)层以及内核层。
应用程序层可以包括一系列应用程序包。
如图6所示,应用程序层中可以安装通话,备忘录,浏览器,联系人,图库,日历,地图,蓝牙,音乐,视频,短信息等应用。
在本申请实施例中,应用程序层中可以安装具有拍摄功能的应用,例如,相机应用。当然,其他应用需要使用拍摄功能时,也可以调用相机应用实现拍摄功能。
应用程序框架层为应用程序层的应用程序提供应用编程接口(应用lication programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图6所示,以相机应用举例,应用程序框架层中设置有相机服务(CameraService)。相机应用可通过调用预设的API启动CameraService。CameraService在运行过程中可以与HAL(hardware abstraction layer,硬件抽象层)中的Camera HAL交互。其中,Camera HAL负责与手机中实现拍摄功能的硬件设备(例如摄像头)进行交互,Camera HAL一方面隐藏了相关硬件设备的实现细节(例如具体的图像处理算法),另一方面可向Android系统提供调用相关硬件设备的接口。
示例性的,相机应用可将用户下发的相关控制指令(例如预览、放大、拍照或录像指令)发送至CameraService。一方面,CameraService可将接收到的控制指令发送至Camera HAL,使得Camera HAL可根据接收到的控制指令调用内核层中的相机驱动,驱动摄像头等硬件设备响应该控制指令采集原始图像数据。例如,摄像头可按照一定的帧率,将采集到的每一帧原始图像数据通过相机驱动传递给Camera HAL。其中,控制指令在操作系统内部的传递过程可参见图6中控制流的具体传递过程。
另一方面,CameraService接收到上述控制指令后,可根据接收到的控制指令确定此时的拍摄策略,拍摄策略中设置了需要对原始图像数据执行的具体图像处理任务。例如,在预览模式下,CameraService可在拍摄策略中设置默认的图像处理任务1用于实现人脸检测功能。又例如,如果在预览模式下用户开启了美颜功能,则CameraService还可以在拍摄策略中设置图像处理任务2用于实现美颜功能。进而,CameraService可将确定出的拍摄策略发送至Camera HAL。
后续,手机的图像处理过程一般在Camera HAL中完成,即Camera HAL具有手机的图像处理能力。当Camera HAL接收到摄像头采集到的原始图像数据后,可根据CameraService下发的拍摄策略对上述原始图像数据执行相应的图像处理任务,得到图像处理后的目标图像数据。例如,Camera HAL可使用预设的人脸检测算法执行上述图像处理任务1。又例如,Camera HAL可使用预设的美颜算法执行上述图像处理任务2。进而,Camera HAL可将得到的目标图像数据通过CameraService上报给相机应用,相机应用可将该目标图像数据显示在显示界面中,或者,相机应用可以照片或视频的形式将该目标图像数据保存在手机内。其中,图像数据(例如原始图像数据和目标图像数据)在操作系统内部的传递过程可参见图6中数据流的具体传递过程。
另外,应用程序框架层还可以包括窗口管理器,内容提供器,视图系统,资源管理器,通知管理器等,本申请实施例对此不做任何限制。
例如,上述窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是 否有状态栏,锁定屏幕,截取屏幕等。上述内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。上述视图系统可用于构建应用程序的显示界面。每个显示界面可以由一个或多个控件组成。一般而言,控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、微件(Widget)等界面元素。上述资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。上述通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,振动,指示灯闪烁等。
如图6所示,Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
其中,表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
内核层位于HAL之下,是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动等,本申请实施例对此不做任何限制。
基于图6所示的Android系统的软件架构,在本申请实施例中,如图7所示,可在手机的应用程序层中安装用于实现分布式拍摄功能的设备虚拟化(DeviceVirtualization)应用,后续可称为DV应用。DV应用可作为系统应用常驻在手机中运行。或者,也可将DV应用实现的功能以系统服务的形式常驻在手机中运行。
当手机需要使用其他电子设备的摄像头实现分布式拍摄功能时,手机中的DV应用可将该其他电子设备作为手机的从设备与手机建立网络连接。其中,手机与从设备建立的网络连接具体可以是指业务通道的网络连接(即业务连接)。例如,在手机与从设备建立上述网络连接之前,手机可能已经与从设备通过Wi-Fi网络建立了连接,此时的连接是指数据通道的连接(即数据连接)。当手机与从设备建立上述网络连接后,手机可与从设备在已经建立的数据连接的基础上建立业务连接。例如,上述网络连接可以是基于TCP(transmission control protocol,传输控制协议)或UDP(user datagram protocol,用户数据报协议)的P2P连接,本申请实施例对此不做任何限制。
仍如图7所示,手机的DV应用与从设备建立网络连接后,DV应用可基于该网络连接获取从设备的拍摄能力参数,该拍摄能力参数用于指示从设备支持的一个或多个图像处理任务。例如,该拍摄能力参数中可以包括从设备支持的具体图像处理算法,从而反映出从设备的具体拍摄能力。进而,DV应用可调用HAL的预设接口,向预设接口中输入获取到的拍摄能力参数,从而在HAL中创建与从设备对应的硬件抽象模块。示例性的,本申请实施例中可将DV应用按照从设备的拍摄能力参数创建的硬件抽象模块称为DMSDP(Distributed Mobile Sensing Development Platform,分布式移动传感开发平台)HAL,也可称为虚拟Camera HAL。与传统的Camera HAL不同的是,DMSDP HAL并不与手机实际的硬件设备相对应,而是与手机当前连接的从设备对应。手机可作为主设备通过DMSDP HAL与从设备进行数据收发,将从设备作为手机的一个虚拟设备,与从设备协同完成分布式拍摄场景中的各项业务。
在一些实施例中,手机的DV应用还可以获取从设备的音频能力参数(例如音频播放时延、音频采样率或声音通道数目等)、显示能力参数(例如屏幕分辨率、显示数据的编解码算法等)。当然,如果从设备还具有其他能力(例如打印能力等),则从设备也可将相关的能力参数发送给手机的DV应用。与上述拍摄能力参数类似的,DV应用可以将与从设备相关的能力参数均输入至预设接口中,从而在HAL中创建与从设备对应的硬件抽象模块,例如DMSDP HAL。此时,DMSDP HAL不仅具有从设备的图像处理能力,还具有从设备的音频、显示等能力,使得从设备可以作为手机的虚拟设备与手机协同完成分布式场景中的各项业务。
另外,仍如图7所示,在本申请实施例中,DV应用除了在HAL中为手机的从设备创建对应的DMSDP HAL之外,还可以将从设备的拍摄能力参数发送至CameraService进行保存,也即在CameraService中注册当前从设备的拍摄能力。
当手机运行相机应用时,相机应用可将预览、变焦、拍照或录像等控制指令下发给CameraService。CameraService接收到控制指令后,可以根据该控制指令确定后续需要执行的一个或多个图像处理任务,例如,与美颜功能对应的图像处理任务1,与防抖功能对应的图像处理任务2或者与人脸检测功能对应的图像处理任务3等。进而,CameraService可以结合当前从设备的拍摄能力参数设置对应的拍摄策略,在拍摄策略中将上述一个或多个图像处理任务分配给手机和从设备执行。例如,CameraService可在拍摄策略中设置从设备执行图像处理任务1,并设置手机执行图像处理任务2和图像处理任务3。当然,CameraService也可以在拍摄策略中设置所有的图像处理任务由手机或从设备独立完成,本申请实施例对此不做任何限制。
进而,仍如图7所示,手机的CameraService可将上述拍摄策略下发给DMSDP HAL,并通过DMSDP HAL向从设备发送拍摄指令,指示从设备执行上述拍摄策略中为从设备分配的图像处理任务。这样,如图8所示,从设备使用摄像头采集到原始图像数据后,可响应该拍摄指令对原始图像数据执行对应的图像处理任务,并将处理后的图像数据(例如第一图像数据)发送给手机的DMSDP HAL。由于手机的DMSDP HAL已经获取到上述拍摄策略,因此,DMSDP HAL根据该拍摄策略可以确定出对第一图像数据的图像处理过程由DMSDP HAL完成或者由Camera HAL完成。
例如,如果拍摄策略中为手机分配的图像处理任务为从设备支持的图像处理任务,由 于创建DMSDP HAL时已经将从设备的拍摄能力参数输入给DMSDP HAL,即DMSDP HAL有能力处理从设备支持的图像处理任务,此时,DMSDP HAL可按照上述拍摄策略对第一图像数据执行对应的图像处理任务,最终得到第二图像数据。
又例如,如果拍摄策略中为手机分配的图像处理任务为手机支持的图像处理任务,此时,Camera HAL有能力处理手机支持的图像处理任务,因此,如图8中的虚线所示,DMSDPHAL可将接收到的第一图像数据发送给Camera HAL,由Camera HAL按照上述拍摄策略对第一图像数据执行对应的图像处理任务,最终得到第二图像数据。作为一种可能的实现方式,DMSDP HAL可将接收到的第一图像数据先发送给CameraService,再由CameraService将第一图像数据发送给Camera HAL进行图像处理(图8中未示出该数据流向)。也就是说,手机中的DMSDP HAL可直接与Camera HAL交互图像数据,或者,手机中的DMSDP HAL可通过CameraService交互图像数据。
后续,仍如图8所示,手机的DMSDP HAL(或者Camera HAL)可将图像处理后的第二图像数据发送至CameraService,再由CameraService将第二图像数据发送至相机应用,使得相机应用可以将第二图像数据显示在显示界面中或者保存在手机内。
可以看出,在上述分布式拍摄场景中,当手机需要使用从设备的摄像头进行拍摄时,手机的DV应用可获取从设备的拍摄能力参数。进而,DV应用可按照该拍摄能力参数在HAL创建对应的DMSDP HAL,使得手机中的DMSDP HAL具有从设备的图像处理能力。并且,DV应用可在CameraService中注册从设备的拍摄能力,使得CameraService能够根据从设备的拍摄能力实时确定拍摄过程中的拍摄策略,通过该拍摄策略将需要执行的图像处理任务分配给手机和从设备执行。这样,从设备采集到原始图像数据后,手机和从设备可按照自身的拍摄能力对原始图像数据进行相应的图像处理,使得手机可与从设备更加高效、灵活的协同实现分布式拍摄功能,从而在分布式拍摄场景中实现较好的拍摄效果,同时为用户提供较好的拍摄使用体验。
需要说明的是,手机的DV应用在HAL创建的DMSDP HAL是可以动态更新的。当手机的从设备发生变化(例如,从设备由电视切换为手表时),或者,当从设备的拍摄能力发生变化(例如,从设备进行版本升级后更新了图像处理算法时),从设备可动态的向手机发送最新的拍摄能力参数。进而,手机的DV应用可根据最新的拍摄能力参数更新上述DMSDP HAL,使得DMSDP HAL与从设备的拍摄能力相匹配。同时,手机的DV应用可将最新的音频能力参数注册至CameraService中,使得CameraService可以在拍摄过程中按照最新的拍摄能力参数更新当前的拍摄策略。
另外,上述实施例是以分布式拍摄场景中手机的从设备数量为1个举例说明的,在一些实施例中,手机还可以将自身的拍摄功能分布至多个从设备上实现。
例如,当手机的从设备包括从设备1和从设备2时,与上述实施例类似的,手机的DV应用可以分别与从设备1和从设备2建立网络连接,并获取每个从设备的音频能力参数。进而,DV应用可根据每个从设备的音频能力参数在HAL中创建与每个从设备对应的DMSDP HAL。例如,DV应用可按照从设备1的拍摄能力参数1在HAL创建DMSDP HAL1,并且,DV应用可按照从设备2的拍摄能力参数2在HAL创建DMSDP HAL2。进而,手机将从设备1和从设备2的拍摄能力注册至CameraService后,CameraService可在拍摄过程中根据每个从设备的拍摄能力参数定制对应的拍摄策略。这样,每个从设备均可 基于自身的拍摄能力按照对应的拍摄策略执行相关图像处理任务,最终通过对应的DMSDP HAL将处理后的图像数据发送给手机,从而实现跨设备的分布式拍摄功能。
仍以手机为主设备举例,以下将结合具体示例阐述本申请实施例提供的一种在分布式拍摄场景下的拍摄方法。
在本申请实施例中,可以在手机中设置用于实现分布式拍摄功能的功能按钮。例如,如图9中的(a)所示,如果检测到用户打开手机的相机应用,则手机可打开自身的摄像头开始进行拍摄,并且,手机可显示相机应用的预览界面801。预览界面801中设置有分布式拍摄功能的功能按钮802。如果用户希望使用其他电子设备的摄像头进行拍摄,可点击功能按钮802。
或者,手机还可以将分布式拍摄功能的功能按钮802设置在手机的控制中心、下拉菜单、负一屏菜单或其他应用(例如视频通话应用,相机应用)中,本申请实施例对此不做任何限制。例如,如图9中的(b)所示,手机可响应用户打开控制中心的操作显示控制中心803,控制中心803中设置有上述功能按钮802。如果用户希望使用其他电子设备的摄像头进行拍摄,可点击功能按钮802。
示例性的,手机检测到用户点击上述功能按钮802后,DV应用可以触发手机搜索附近具有拍摄功能的一个或多个候选设备。并且,如图10所示,手机可将搜索到的一个或多个候选设备显示在对话框901中。例如,手机可在服务器中查询与手机登录同一账号且具有拍摄功能的电子设备,并将查询到的电子设备作为候选设备显示在对话框901中。
又或者,在一些实施例中,手机也可以不设置上述功能按钮802便可自动触发分布式拍摄功能。例如,手机在运行视频通话应用时,如果检测到联系人发来的视频通话请求,则手机可自动搜索附近具有拍摄功能的一个或多个候选设备。又例如,当检测到用户打开手机中的相机应用时,手机也可自动搜索附近具有拍摄功能的一个或多个候选设备。并且,如图10所示,手机可将搜索到的一个或多个候选设备显示在对话框901中。
以对话框901中的候选设备包括电视902、电视903以及手表904举例,用户可在对话框901中选择本次与手机协同实现分布式拍摄功能的从设备。例如,如果手机检测到用户选择对话框901中的电视902,说明用户希望使用电视902的摄像头进行拍摄。此时,手机的DV应用可将电视902作为手机的从设备与手机建立网络连接。进而,如图11所示,DV应用基于该网络连接从电视902获取电视902的拍摄能力参数。
仍如图11所示,电视902中操作系统的架构与手机中操作系统的架构类似。电视902的应用程序层可以安装代理应用,代理应用用于与其他设备(例如手机)进行数据收发。或者,代理应用也可以SDK(Software Development Kit,软件开发工具包)或系统服务的形式运行在电视902中。电视902的应用程序框架层中设置有CameraService。电视902的HAL中设置有Camera HAL,电视902的Camera HAL与电视902中用于拍摄图像数据的硬件设备(例如摄像头)对应。
仍如图11所示,手机与电视902建立网络连接后,手机的DV应用可向电视902的代理应用发送拍摄能力参数的获取请求。进而,响应该获取请求,电视902的代理应用可从电视902的CameraService中获取电视902的拍摄能力参数,并将电视902的拍摄能力参数发送给手机的DV应用。其中,电视902的拍摄能力参数用于指示电视902所支持的一项或多项图像处理任务。例如,电视902的拍摄能力参数可以包括电视902支持的人脸识 别算法、自动对焦算法等一项或多项图像处理算法。其中,电视902的拍摄能力参数可与电视902的硬件拍摄能力(例如摄像头的数目、分辨率、图像处理器的型号等)相关,也可以与电视902的软件拍摄能力(例如电视902中设置的图像处理算法)相关,本领域技术人员可以根据实际经验或实际应用场景设置上述拍摄能力参数,本申请实施例对此不做任何限制。
手机的DV应用获取到电视902的拍摄能力参数后,仍如图11所示,手机的DV应用可按照该拍摄能力参数在HAL中创建与电视902对应的DMSDP HAL,使得DMSDP HAL具有从设备的图像处理能力,并且,手机后续可通过DMSDP HAL与电视902进行数据收发。
并且,手机的DV应用获取到电视902的拍摄能力参数后,手机的DV应用还可以将电视902的拍摄能力参数注册在手机的CameraService中。手机的CameraService在相机应用运行的过程中可以不断接收相机应用下发的控制指令。例如,当相机应用被打开时,相机应用可以向CameraService发送预览指令,触发相机应用进入预览模式。又例如,当相机应用检测到用户选择录像模式的按钮时,相机应用可以向CameraService发送录像指令,触发相机应用进入录像模式。在不同的拍摄模式下,相机应用还可以向CameraService发送不同拍摄功能对应的控制指令。例如,在预览模式下如果检测到用户开启美颜功能,则相机应用可以向CameraService发送美颜指令。又例如,在录像模式下如果检测到用户打开滤镜1,则相机应用可以向CameraService发送添加滤镜1的控制指令。
在本申请实施例中,手机的CameraService可以根据最近一次相机应用下发的控制指令,结合当前从设备(即电视902)的拍摄能力参数确定当前的拍摄策略。
示例性的,如图12所示,手机的CameraService可以先根据最近一次相机应用下发的控制指令,确定后续需要执行的N(N为大于0的整数)个图像处理任务。例如,最近一次相机应用下发的控制指令为预览指令,则手机的CameraService可以确定在预览模式下需要执行自动对焦的图像处理任务A。又例如,如果在预览模式下手机默认开启了人脸识别功能,则手机的CameraService还可以确定需要执行人脸识别的图像处理任务B。又例如,如果在预览模式下检测到用户开启了美颜功能,则手机的CameraService还可以确定需要执行美颜功能的图像处理任务C。
进而,仍如图12所示,手机的CameraService可以结合手机和当前从设备(即电视902)的拍摄能力,将上述N个图像处理任务分配给手机和电视902。以上述N个图像处理任务包括图像处理任务A、图像处理任务B以及图像处理任务C举例,手机的CameraService通过电视902的拍摄能力参数可以确定电视902具体支持的图像处理任务,并且,手机的CameraService可以获取到手机自身具体支持的图像处理任务。
进而,对于上述图像处理任务A,如果手机和电视902中有一个支持图像处理任务A,则手机的CameraService可将图像处理任务A分配给支持图像处理任务A的设备。例如,手机支持图像处理任务A,而电视902不支持图像处理任务A,则手机的CameraService可将图像处理任务A分配给手机执行。
或者,仍以上述图像处理任务A举例,如果手机和电视902均支持图像处理任务A,则手机的CameraService可将图像处理任务A分配给手机或电视902中的任意设备。例如,手机的CameraService可以计算手机执行图像处理任务A的耗时T1,以及电视902执行图 像处理任务A的耗时T2。如果T1<T2,则手机的CameraService可将图像处理任务A分配给手机;相应的,如果T1>T2,则手机的CameraService可将图像处理任务A分配给电视902。也就是说,手机的CameraService可将图像处理任务A分配给处理速度更快的设备,以提高后续图像处理过程的处理效率。
又例如,如果手机和电视902均支持图像处理任务A,则手机的CameraService还可以根据手机和电视902当前的负载确定后续执行图像处理任务A的具体设备。例如,如果手机的CameraService已经将上述图像处理任务B以及图像处理任务C分配给手机执行,此时,如果将图像处理任务A继续分配给手机执行,则手机的负载将会远大于电视902的负载,导致手机的负载过高而电视902的负载过低。此时,手机的CameraService可将图像处理任务A分配给电视920执行。也就是说,手机的CameraService可将图像处理任务A分配给较为空闲的设备,以提高后续图像处理过程的处理效率。
在一些实施例中,手机的CameraService可能会将电视902支持图像处理任务分配给手机执行。例如,当电视902的负载较高而手机的负载较低时,手机的CameraService可将电视902支持图像处理任务分配给手机执行。由于手机的DMSDP HAL具有电视902的图像处理能力,因此,即使手机的CameraService将电视902支持图像处理任务分配给手机,手机也可以在DMSDP HAL中完成该图像处理任务。
另外,由于手机的Camera HAL具有手机自身(即主设备)的图像处理能力,且手机的DMSDP HAL具有电视902(即从设备)的图像处理能力,因此,手机作为主设备一般具有执行上述N个图像处理任务的能力,而手机的从设备(即电视902)不一定具有执行这N个图像处理任务的能力。也就是说,对于上述N个图像处理任务中的某一图像处理任务,一般不会出手机和电视902均不支持的情况。
类似的,手机的CameraService还可以按照上述方法将图像处理任务B和图像处理任务C分配给手机或电视902执行。可以理解的是,本领域技术人员可以设置相应的算法(例如装箱算法、首次适应算法或最佳适应算法)将拍摄时需要执行的N个图像处理任务分配给手机和电视902,使得手机和电视902能够最大程度的利用自身的拍摄能力对后续采集到的原始图像数据进行图像处理,提高手机与电视902协同拍摄时进行图像处理的效率和速度。
在一些实施例中,手机的CameraService有可能将所有的图像处理任务(即上述N个图像处理任务)全部分配给手机或电视902中的一个设备完成。例如,当电视902不具有执行上述图像处理任务A、图像处理任务B以及图像处理任务C的能力时,手机的CameraService可以将上述图像处理任务A、图像处理任务B以及图像处理任务C全部分配给手机,此时,电视902只需要使用摄像头采集原始图像数据,不需要对采集到的原始图像数据进行图像处理。
最终,仍如图12所示,手机的CameraService确定出上述N个图像处理任务中每个图像处理任务的分配结果后,可将该分配结果作为当前的拍摄策略1输出给手机的DMSDP HAL。
仍如图11所示,手机的DMSDP HAL接收到上述拍摄策略1后,可根据拍摄策略1向电视902发送拍摄指令1,指示电视902按照拍摄策略1执行分配给电视902的图像处理任务。例如,手机的DMSDP HAL可将拍摄策略1中需要电视902执行的图像处理任务 A的标识携带在拍摄指令1中,并将拍摄指令1发送给电视902的代理应用。
仍如图11所示,电视902的代理应用接收到上述拍摄指令1后,一方面可以调用自身的摄像头开始采集原始图像数据。另一方面,电视902的代理应用可将拍摄指令1发送至电视902的CameraService,电视902的CameraService根据拍摄指令1中图像处理任务A的标识可以确定需要对原始图像数据执行图像处理任务A。进而,电视902的CameraService可向电视902的Camera HAL下发执行图像处理任务A的指令1(指令1与拍摄指令1可以相同或不同)。这样,当电视902的摄像头将采集到的原始图像数据上报给电视902的Camera HAL后,电视902的Camera HAL可以对该原始图像数据执行图像处理任务A,得到处理后的第一图像数据。进而,电视902的Camera HAL可将第一图像数据上报给电视902的CameraService,再由电视902的CameraService将第一图像数据上传给电视902的代理应用,最终电视902的代理应用可将第一图像数据发送至手机的DMSDP HAL。
在一些实施例中,当拍摄指令1中没有指示电视902执行任何图像处理任务时,说明此时需要执行的图像处理任务均由手机完成。那么,电视的CameraService可以直接将Camera HAL上报的原始图像数据作为第一图像数据发送给电视902的代理应用,此时,第一图像数据与原始图像数据相同。
另外,上述实施例中是以电视902的代理应用接收到上述拍摄指令1后开始采集原始图像数据举例说明的,可以理解的是,电视902也可以在与手机建立网络连接后,自动打开自身的摄像头开始采集原始图像数据,本申请实施例对此不做任何限制。
仍如图11所示,手机的DMSDP HAL接收到电视902发来的第一图像数据后,可按照上述拍摄策略1执行分配给手机的图像处理任务。例如,上述拍摄策略1中需要手机执行图像处理任务B和图像处理任务C,其中,图像处理任务B为电视902支持的图像处理任务,图像处理任务C为手机支持的图像处理任务。此时,手机的DMSDP HAL可按照拍摄策略1对接收到的第一图像数据执行图像处理任务B,得到处理后的第二图像数据。进而,手机的DMSDP HAL可将第二图像数据发送给手机的Camera HAL。作为一种可能的实现方式,手机的DMSDP HAL可将第二图像数据先发送给手机的CameraService,再由手机的CameraService将第二图像数据发送给手机的Camera HAL(图11中未示出该数据流向)。这样,手机的Camera HAL可按照拍摄策略1对接收到的第二图像数据执行图像处理任务C,得到处理后的第三图像数据。后续,手机的Camera HAL可将第三图像数据上报给手机的CameraService,再由手机的CameraService将第三图像数据上传给手机的相机应用。
上述第三图像数据是手机和电视902响应相机应用下发的预览指令通过协同处理后得到的图像数据。如图13所示,手机的相机应用接收到上述第三图像数据后,可将第三图像数据显示在预览界面801的预览框802中。此时,用户在手机的预览界面801中可以观看到电视902采集到的图像,并且,预览框802中呈现的图像为手机和电视902协同进行图像处理后的图像。
在一些实施例中,如果拍摄策略1中需要手机执行图像处理任务B为电视902支持的图像处理任务,并且,拍摄策略1中需要手机执行图像处理任务C也为电视902支持的图像处理任务,则手机的DMSDP HAL获取到来自电视902的第一图像数据后,可直接按照 上述拍摄策略1对接收到的第一图像数据执行图像处理任务B和图像处理任务C,并将图像处理后的图像数据发送给手机的CameraService,无需再由手机的Camera HAL进行图像处理。
或者,如果拍摄策略1中需要手机执行图像处理任务B为手机支持的图像处理任务,并且,拍摄策略1中需要手机执行图像处理任务C也为手机支持的图像处理任务,则手机的DMSDP HAL获取到来自电视902的第一图像数据后,可直接将第一图像数据发送至手机的Camera HAL,由手机的Camera HAL对接收到的第一图像数据执行图像处理任务B和图像处理任务C,并将图像处理后的图像数据发送给手机的CameraService。
可以看出,在本申请实施例中,主设备(例如上述手机)可以利用分布式拍摄场景中从设备(例如上述电视902)的拍摄能力,将适合从设备执行的图像处理任务分配给从设备完成,将适合主设备执行的图像处理任务分配给主设备完成。这样,可以充分利用主设备和从设备的拍摄能力执行相应的图像处理任务,提升在分布式拍摄场景下各个设备的资源利用率,从而提升整个拍摄过程的处理效率,实现更好的拍摄效果。
仍以图13所示的分布式拍摄场景举例,手机的相机应用在预览模式下如果没有接收到用户触发的新的控制指令,例如,添加滤镜、变焦或者AI识别等控制指令,则手机与电视902可继续按照上述拍摄策略1,对电视902采集到的原始图像数据执行相应的图像处理任务,由手机最终将经过所有图像处理后的图像数据显示在预览界面801的预览框802中。
在一些实施例中,如果手机的相机应用在预览模式下接收到用户触发的新的控制指令,则手机可根据新的控制指令确定新的拍摄策略,并触发手机和电视902按照新的拍摄策略对电视902采集到的原始图像数据执行相应的图像处理任务。
示例性的,如果手机中的相机应用检测到用户点击预览界面801中的拍照按钮,则相机应用可向手机的CameraService发送对应的拍照指令。如图14所示,手机的CameraService接收到拍照指令后,可以响应该拍照指令确定后续需要执行的M(N为大于0的整数)个图像处理任务。例如,除了需要执行预览模式下的图像处理任务A、图像处理任务B以及图像处理任务C之外,手机的CameraService确定后续还需要执行曝光度调整的图像处理任务D。
进而,仍如图14所示,手机的CameraService可以结合手机和当前从设备(即电视902)的拍摄能力,将图像处理任务A、图像处理任务B、图像处理任务C以及图像处理任务D分配给手机和电视902。其中,分配各个图像处理任务的过程可参见图12的相关描述,故此处不再赘述。
最终,仍如图14所示,手机的CameraService确定出上述M个图像处理任务中每个图像处理任务的分配结果后,可将该分配结果作为当前的拍摄策略2输出给手机的DMSDP HAL。
例如,在拍摄策略2中,手机的CameraService将电视902支持的图像处理任务A和图像处理任务B分配给电视902,将手机支持的图像处理任务C和图像处理任务D分配给手机。此时,如图15所示,手机的DMSDP HAL接收到上述拍摄策略2后,可根据拍摄策略2向电视902的代理应用发送拍摄指令2,指示电视902按照拍摄策略2执行分配给电视902的图像处理任务A和图像处理任务B。
仍如图15所示,电视902的代理应用接收到上述拍摄指令2后,可将拍摄指令2发送至电视902的CameraService,电视902的CameraService根据拍摄指令2可确定需要对原始图像数据执行图像处理任务A和图像处理任务B。进而,电视902的CameraService可向电视902的Camera HAL下发执行图像处理任务A和图像处理任务B的指令2(指令2与拍摄指令2可以相同或不同))。这样,当电视902的摄像头将采集到的原始图像数据上报给电视902的Camera HAL后,电视902的Camera HAL可以对该原始图像数据执行图像处理任务A和图像处理任务B,得到处理后的第四图像数据。进而,电视902的Camera HAL可将第四图像数据上报给电视902的CameraService,再由电视902的CameraService将第四图像数据上传给电视902的代理应用,最终,电视902的代理应用可将第四图像数据发送至手机的DMSDP HAL。
仍如图15所示,手机的DMSDP HAL接收到电视902发来的第四图像数据后,如果上述拍摄策略2中分配给手机的图像处理任务C和图像处理任务D均为手机支持的图像处理任务,则手机的DMSDP HAL可将第四图像数据发送至手机的Camera HAL。手机的Camera HAL可对接收到的第四图像数据执行图像处理任务C和图像处理任务D,得到处理后的第五图像数据。进而,手机的Camera HAL可将第五图像数据上报给手机的CameraService,再由手机的CameraService将第五图像数据上传给手机的相机应用。
手机的相机应用接收到第五图像数据后,可响应用户点击预览界面801中拍照按钮的操作,将第五图像数据以照片的形式保存在手机的图库中,完成本次拍照操作。
同样,在本次拍摄过程中,同时利用了手机(即主设备)和电视902(即从设备)的拍摄能力对采集到的原始图像数据执行了相应的图像处理任务,使得分布式拍摄场景下各个设备的资源利用率提高,整个拍摄过程的处理效率也相应提高。
仍以电视902为手机的从设备举例,在分布式拍摄场景中,电视902可将实时采集到的原始图像数据处理为多路图像数据流。例如,在录像场景下,电视902使用自身的摄像头采集到原始图像数据后,可将采集到的原始图像数据复制为两路,其中一路原始图像数据可作为预览流数据用于相机应用在录像过程中显示预览画面,另一路原始图像数据可作为录像数据流用于制作最终的视频文件。
示例性的,如图16所示,用户可以将手机中相机应用的拍摄模式切换为录像模式。在录像模式下,如果检测到用户点击预览界面1501中的录像按钮1502,则手机可以与电视902协同完成本次录像操作。
如图17所示,手机的相机应用检测到用户点击录像按钮1502的操作后,可向手机的CameraService发送对应的录像指令。与图12和图14所示的过程类似的,手机的CameraService接收到录像指令后,可结合电视902的拍摄能力参数,将录像时需要执行的图像处理任务分配给手机和电视902,从而生成对应的拍摄策略3。
不同的是,拍摄策略3中可以包括对预览流数据的拍摄策略A以及对录像流数据的拍摄策略B。如图18所示,拍摄策略A中可以包括需要对预览流数据执行的X(X为大于0的整数)个图像处理任务,拍摄策略B中可以包括需要对录像流数据执行的Y(Y为大于0的整数)个图像处理任务。上述X个图像处理任务可以与Y个图像处理任务相同或不同。例如,如果用户在使用相机应用录像时开启了美颜功能,则拍摄策略A和拍摄策略B中可以包括美颜功能的图像处理任务1。又例如,手机的CameraService可以在拍摄策略A 中额外增加AI识别的图像处理任务2,从而在录像的预览画面中实时向用户呈现AI识别的结果。
仍如图18所示,手机作为主设备一般具有能力执行CameraService确定出的各个图像处理任务,且手机的处理速度和处理性能相对较高,因此,手机的CameraService可将拍摄策略A中的X个图像处理任务分配给手机执行。这样一来,后续录像时电视902可将产生的预览流数据直接发送给手机进行图像处理,以保证预览流数据对实时性的较高需求。
而对于实时性需求不高的录像流数据,如果电视902的拍摄能力参数指示电视902有能力执行上述Y个图像处理任务,则手机的CameraService可将拍摄策略B中的Y个图像处理任务分配给电视902执行。这样一来,手机在对预览流数据进行图像处理的同时,电视902可以同时对录像流数据进行图像处理,使得手机与电视902在协同拍摄时的拍摄效率更高。
当然,如果上述Y个图像处理任务中包括电视902不支持的图像处理任务,则手机的CameraService也可在拍摄策略B中将电视902不支持的图像处理任务分配给手机执行,本申请实施例对此不做任何限制。
仍如图17所示,手机的CameraService可将生成的拍摄策略3(即上述拍摄策略A和拍摄策略B)发送给手机的DMSDP HAL。手机的DMSDP HAL接收到上述拍摄策略3后,可根据拍摄策略3向电视902的代理应用发送拍摄指令3,指示电视902将拍摄模式切换为录像模式,在录像模式下,电视902可按照拍摄策略3中的拍摄策略B对录像流数据执行对应的图像处理任务,并且,电视902可直接将预览流数据发送给手机,无需对预览流数据进行图像处理。
仍如图17所示,电视902的代理应用接收到上述拍摄指令3后,可指示电视902的Camera HAL将摄像头将采集到的原始图像数据复制为两路,一路为录像流数据1,另一路为预览流数据1。并且,电视902的代理应用可将拍摄指令3通过CameraService下发给电视902的Camera HAL(图17中未示出拍摄指令3在电视902内部的流向)。这样,当电视902的Camera HAL获取到预览流数据1后,可直接将预览流数据1通过电视902的CameraService和代理应用发送至手机的DMSDP HAL。当电视902的Camera HAL获取到录像流数据1后,可按照上述拍摄指令3中的拍摄指令B对录像流数据1执行相应的图像处理任务,得到处理后的录像流数据(即录像流数据2)。进而,电视902的Camera HAL可将录像流数据2通过电视902的CameraService和代理应用发送至手机的DMSDP HAL。
仍如图17所示,手机的DMSDP HAL接收到电视902发来的预览流数据1后,可将预览流数据1发送至手机的Camera HAL,由手机的Camera HAL按照上述拍摄策略3中的拍摄策略A执行分配给手机的图像处理任务,得到处理后的预览流数据(即预览流数据2)。手机的Camera HAL可将预览流数据2通过CameraService上报给手机的相机应用,由相机应用将预览流数据2实时显示在图16所示的预览界面1501中。
并且,手机的DMSDP HAL接收到电视902发来的录像流数据2后,由于拍摄策略3中设置了手机无需手机对录像流数据进行图像处理,因此,手机的DMSDP HAL可将接收到的录像流数据2通过CameraService上报给手机的相机应用。手机的相机应用接收到录像流数据2后,可将录像流数据2以视频的格式保存在手机的图库中,完成本次录像操作。
可以看出,当拍摄过程中电视902(即从设备)产生多路图像数据流时,手机(即主设 备)可利用从设备的拍摄能力将不同图像数据流的图像处理任务分配给不同设备完成。这样,主设备和从设备可以以并行的方式同时对获取到的图像数据流进行图像处理,使得分布式拍摄场景下各个设备的资源利用率提高,整个拍摄过程的处理效率也相应提高。
同时,由于主设备和从设备可以同时对获取到的图像数据流进行图像处理,使得从设备可以分时、分段的将不同的图像数据流发送给主设备。例如,从设备在对录像流数据进行图像处理的同时,可以将预览流数据发送给主设备,由主设备对预览流数据进行图像处理。后续,从设备再将图像处理后的录像流数据发送给主设备。这样,从设备不需要统一将多路图像数据流一起发送给主设备,从而降低图像数据流传输时的网络带宽压力,进一步提升整个拍摄过程的处理效率。
上述实施例是以手机的从设备为电视902举例说明的,可以理解的是,当手机的从设备在分布式拍摄场景下更新为其他电子设备时,手机仍然可以按照上述方法利用新的从设备的拍摄能力在拍摄过程中实时确定对应的拍摄策略,使得手机和从设备可按照自身的拍摄能力对采集到的原始图像数据进行相应的图像处理,从而更加高效、灵活的实现实现分布式拍摄场景下多设备的协同拍摄功能。
另外,上述实施例中是以手机为分布式拍摄场景中的主设备举例说明的,可以理解的是,分布式拍摄场景中的主设备还可以是平板电脑、电视等具有上述拍摄功能的电子设备,本申请实施例对此不做任何限制。
需要说明的是,上述实施例中是以Android系统为例阐述的各个功能模块之间实现分布式拍摄功能的具体方法,可以理解的是,也可以在其他操作系统(例如鸿蒙系统等)中设置相应的功能模块实现上述方法。只要各个设备和功能模块实现的功能和本申请的实施例类似,即属于本申请权利要求及其等同技术的范围之内。
如图19所示,本申请实施例公开了一种电子设备,该电子设备可以为上述主设备(例如手机)。该电子设备具体可以包括:触摸屏1901,所述触摸屏1901包括触摸传感器1906和显示屏1907;一个或多个处理器1902;存储器1903;通信模块1908;一个或多个摄像头1909;一个或多个应用程序(未示出);以及一个或多个计算机程序1904,上述各器件可以通过一个或多个通信总线1905连接。其中,上述一个或多个计算机程序1904被存储在上述存储器1903中并被配置为被该一个或多个处理器1902执行,该一个或多个计算机程序1904包括指令,该指令可以用于执行上述实施例中主设备执行的相关步骤。
如图20所示,本申请实施例公开了一种电子设备,该电子设备可以为上述从设备(例如音箱)。该电子设备具体可以包括:一个或多个处理器2002;存储器2003;通信模块2006;一个或多个应用程序(未示出);一个或多个摄像头2001;以及一个或多个计算机程序2004,上述各器件可以通过一个或多个通信总线2005连接。当然,从设备中也可以设置触摸屏等器件,本申请实施例对此不做任何限制。其中,上述一个或多个计算机程序2004被存储在上述存储器2003中并被配置为被该一个或多个处理器2002执行,该一个或多个计算机程序2004包括指令,该指令可以用于执行上述实施例中从设备执行的相关步骤。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前 述方法实施例中的对应过程,在此不再赘述。
在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种拍摄方法,其特征在于,包括:
    当第一设备使用第二设备的摄像头进行拍摄时,所述第一设备确定第一拍摄策略,所述第一拍摄策略包括所述第一设备需要执行的X个图像处理任务,以及所述第二设备需要执行的Y个图像处理任务,X和Y均为大于或等于0的整数;
    所述第一设备按照所述第一拍摄策略向所述第二设备发送第一拍摄指令,所述第一拍摄指令用于触发所述第二设备响应于所述第一拍摄指令对采集到的原始图像数据执行所述Y个图像处理任务,得到第一图像数据;
    当所述第一设备接收到所述第二设备发送的所述第一图像数据后,所述第一设备按照所述第一拍摄策略对所述第一图像数据执行所述X个图像处理任务,得到第二图像数据;
    所述第一设备在显示界面中显示所述第二图像数据。
  2. 根据权利要求1所述的方法,其特征在于,所述第一设备确定第一拍摄策略,包括:
    响应于当前的拍摄模式和用户选择的拍摄选项,所述第一设备确定需要执行的N个图像处理任务,N=X+Y;
    所述第一设备将所述N个图像处理任务分配给所述第一设备和所述第二设备,得到所述第一拍摄策略。
  3. 根据权利要求2所述的方法,其特征在于,在所述第一设备确定第一拍摄策略之前,还包括:
    所述第一设备获取所述第二设备的拍摄能力参数,所述拍摄能力参数用于指示所述第二设备的图像处理能力;
    其中,所述第一设备将所述N个图像处理任务分配给所述第一设备和所述第二设备,包括:
    所述第一设备按照所述拍摄能力参数,将所述N个图像处理任务分配给所述第一设备和所述第二设备。
  4. 根据权利要求3所述的方法,其特征在于,所述N个图像处理任务包括第一图像处理任务;
    其中,所述第一设备按照所述拍摄能力参数,将所述第一图像处理任务分配给所述第一设备或所述第二设备,包括:
    若所述拍摄能力参数指示所述第二设备有能力执行所述第一图像处理任务,则所述第一设备将所述第一图像处理任务分配给所述第二设备;或者;
    若所述拍摄能力参数指示所述第二设备执行所述第一图像处理任务的时间短于所述第一设备执行所述第一图像处理任务的时间,则所述第一设备将所述第一图像处理任务分配给所述第二设备。
  5. 根据权利要求3或4所述的方法,其特征在于,在所述第一设备获取所述第二设备的拍摄能力参数之后,还包括:
    所述第一设备按照所述拍摄能力参数在所述第一设备的HAL中创建硬件抽象模块,所述硬件抽象模块具有所述第二设备的图像处理能力;所述方法还包括:
    所述第一设备通过所述硬件抽象模块接收所述第二设备发送的所述第一图像数据。
  6. 根据权利要求5所述的方法,其特征在于,所述第一设备的HAL中还包括相机抽象 模块Camera HAL;所述X个图像处理任务中包括所述第二设备支持的X1个图像处理任务和所述第一设备支持的X2个图像处理任务,X1+X2=X;
    其中,所述第一设备按照所述第一拍摄策略对所述第一图像数据执行所述X个图像处理任务,得到所述第二图像数据,包括:
    所述第一设备通过所述硬件抽象模块对所述第一图像数据执行所述X1个图像处理任务,得到第三图像数据;
    所述硬件抽象模块将所述第三图像数据发送至所述Camera HAL;
    所述第一设备通过所述Camera HAL对所述第三图像数据执行所述X2个图像处理任务,得到所述第二图像数据。
  7. 根据权利要求5所述的方法,其特征在于,所述第一设备的HAL中还包括相机抽象模块Camera HAL;所述X个图像处理任务均为所述第一设备支持的图像处理任务;
    其中,所述第一设备按照所述第一拍摄策略对所述第一图像数据执行所述X个图像处理任务,得到所述第二图像数据,包括:
    所述第一设备的所述硬件抽象模块将所述第一图像数据发送至所述Camera HAL;
    所述第一设备通过所述Camera HAL对所述第一图像数据执行所述X个图像处理任务,得到所述第二图像数据。
  8. 根据权利要求2-7中任一项所述的方法,其特征在于,所述方法还包括:
    当所述第一设备检测到用户输入的预设操作后,所述第一设备响应于所述预设操作更新当前的拍摄模式或拍摄选项。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,当第一设备使用第二设备的摄像头进行拍摄时,还包括:
    所述第一设备接收用户输入的录像操作;
    响应于所述录像操作,所述第一设备确定第二拍摄策略和第三拍摄策略,所述第二拍摄策略包括需要对预览流数据执行的K个图像处理任务,所述K个图像处理任务由所述第一设备执行,所述第三拍摄策略包括需要对录像流数据执行的W个图像处理任务,所述W个图像处理任务由所述第二设备执行,K和W均为大于或等于0的整数;
    所述第一设备按照所述第二拍摄策略和所述第三拍摄策略向所述第二设备发送第二拍摄指令,第二拍摄指令用于触发所述第二设备对采集到的第一录像流数据执行所述W个图像处理任务,得到第二录像流数据;
    当所述第一设备接收到所述第二设备采集到的第一预览流数据后,所述第一设备对所述第一预览流数据执行所述K个图像处理任务,得到第二预览流数据;所述第一设备在显示界面中显示所述第二预览流数据;所述第一预览流数据和所述第一录像流数据为所述第二设备采集到的原始图像数据;
    当所述第一设备接收到所述第二设备发送的所述第二录像流数据后,所述第一设备将所述第二录像流数据保存为视频。
  10. 根据权利要求9所述的方法,其特征在于,在所述第一设备确定第二拍摄策略和第三拍摄策略之前,还包括:
    所述第一设备获取所述第二设备的拍摄能力参数,所述拍摄能力参数用于指示所述第二设备的图像处理能力;
    其中,所述第一设备确定第二拍摄策略和第三拍摄策略,包括:
    所述第一设备根据所述拍摄能力参数确定第二拍摄策略和第三拍摄策略。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,在第一设备使用第二设备的摄像头进行拍摄之前,还包括:
    响应于用户的输入的第一操作,所述第一设备显示候选设备列表,所述候选设备列表中包括所述第二设备;
    响应于用户在所述候选设备列表选择所述第二设备的操作,所述第一设备指示所述第二设备启动摄像头开始采集原始图像数据。
  12. 一种拍摄方法,其特征在于,包括:
    第二设备响应于第一设备的指示打开摄像头开始采集原始图像数据;
    所述第二设备接收所述第一设备发送的第一拍摄指令,所述第一拍摄指令用于指示所述第二设备需要执行Y个图像处理任务,Y为大于或等于0的整数;
    响应于所述第一拍摄指令,所述第二设备对采集到的原始图像数据执行所述Y个图像处理任务,得到第一图像数据;
    所述第二设备向所述第一设备发送所述第一图像数据。
  13. 根据权利要求12所述的方法,其特征在于,在第二设备响应于第一设备的指示打开摄像头开始采集原始图像数据之前,还包括:
    所述第二设备与所述第一设备建立网络连接;
    所述第二设备将所述第二设备的拍摄能力参数发送至所述第一设备,所述拍摄能力参数用于指示所述第二设备的图像处理能力。
  14. 根据权利要求12或13所述的方法,其特征在于,在第二设备响应于第一设备的指示打开摄像头开始采集原始图像数据之后,还包括:
    所述第二设备接收所述第一设备发送的第二拍摄指令,所述第二拍摄指令用于指示当前的拍摄模式为录像,所述第二拍摄指令中包括需要对录像数据流执行的W个图像处理任务,W为大于或等于0的整数;
    响应于所述第二拍摄指令,所述第二设备将采集到的原始图像数据复制为两路,得到第一录像流数据和第一预览流数据;
    所述第二设备将所述第一预览流数据发送至所述第一设备;
    所述第二设备对所述第一录像流数据执行所述W个图像处理任务,得到第二录像流数据;所述第二设备将所述第二录像流数据发送至所述第一设备。
  15. 一种电子设备,其特征在于,所述电子设备为第一设备,所述第一设备包括:
    显示屏;
    一个或多个处理器;
    存储器;
    通信模块;
    其中,所述存储器中存储有一个或多个计算机程序,所述一个或多个计算机程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求1-11中任一项所述第一设备执行的一种拍摄方法。
  16. 一种电子设备,其特征在于,所述电子设备为第二设备,所述第二设备包括:
    一个或多个摄像头;
    一个或多个处理器;
    存储器;
    通信模块;
    其中,所述存储器中存储有一个或多个计算机程序,所述一个或多个计算机程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求12-14中任一项所述第二设备执行的一种拍摄方法。
  17. 一种分布式拍摄系统,其特征在于,所述系统包括如权利要求15所述的电子设备,以及如权利要求16所述的电子设备。
  18. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-11或12-14中任一项所述的一种拍摄方法。
  19. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-11或12-14中任一项所述的一种拍摄方法。
PCT/CN2021/136767 2020-12-29 2021-12-09 一种拍摄方法、系统及电子设备 WO2022143077A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21913818.7A EP4246957A4 (en) 2020-12-29 2021-12-09 PHOTOGRAPHY METHOD, SYSTEM AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011608325.X 2020-12-29
CN202011608325.XA CN114697527B (zh) 2020-12-29 2020-12-29 一种拍摄方法、系统及电子设备

Publications (1)

Publication Number Publication Date
WO2022143077A1 true WO2022143077A1 (zh) 2022-07-07

Family

ID=82131577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/136767 WO2022143077A1 (zh) 2020-12-29 2021-12-09 一种拍摄方法、系统及电子设备

Country Status (3)

Country Link
EP (1) EP4246957A4 (zh)
CN (2) CN114697527B (zh)
WO (1) WO2022143077A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116347212A (zh) * 2022-08-05 2023-06-27 荣耀终端有限公司 一种自动拍照方法及电子设备

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117850989A (zh) * 2022-09-30 2024-04-09 华为技术有限公司 一种服务调用方法、系统和电子设备
CN115623331A (zh) * 2022-10-12 2023-01-17 维沃移动通信有限公司 对焦控制方法、装置、电子设备及存储介质
CN118433523A (zh) * 2022-11-21 2024-08-02 荣耀终端有限公司 一种图像处理方法和电子设备
CN116703692B (zh) * 2022-12-30 2024-06-07 荣耀终端有限公司 一种拍摄性能优化方法和装置
CN118741301A (zh) * 2023-03-28 2024-10-01 荣耀终端有限公司 一种图像拍摄方法、电子设备及系统
CN116647753B (zh) * 2023-07-27 2023-10-10 新唐信通(浙江)科技有限公司 一种无线通信控制装置及控制方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201608325U (zh) 2010-03-15 2010-10-13 徐孟跃 一种插头
CN103024266A (zh) * 2012-11-15 2013-04-03 北京百度网讯科技有限公司 移动终端的拍摄优化方法、系统和装置
CN104427228A (zh) * 2013-08-22 2015-03-18 展讯通信(上海)有限公司 协作拍摄系统及其拍摄方法
US20160077422A1 (en) * 2014-09-12 2016-03-17 Adobe Systems Incorporated Collaborative synchronized multi-device photography
CN112004076A (zh) * 2020-08-18 2020-11-27 Oppo广东移动通信有限公司 数据处理方法、控制终端、ar终端、ar系统及存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3937332B2 (ja) * 2003-03-11 2007-06-27 ソニー株式会社 撮影システム
CN110798586A (zh) * 2010-02-19 2020-02-14 株式会社尼康 电子设备
US20140146172A1 (en) * 2011-06-08 2014-05-29 Omron Corporation Distributed image processing system
KR20140141383A (ko) * 2013-05-31 2014-12-10 삼성전자주식회사 협동 촬영하는 전자 장치 및 그 제어 방법
KR20150027934A (ko) * 2013-09-04 2015-03-13 삼성전자주식회사 다각도에서 촬영된 영상을 수신하여 파일을 생성하는 전자 장치 및 방법
CN104915107A (zh) * 2013-11-27 2015-09-16 深圳市金立通信设备有限公司 一种媒体拍摄方法及终端
JP6354442B2 (ja) * 2014-08-12 2018-07-11 カシオ計算機株式会社 撮像装置、制御方法及びプログラム
JP2016130925A (ja) * 2015-01-14 2016-07-21 レノボ・シンガポール・プライベート・リミテッド 複数の電子機器が連携動作をする方法、電子機器およびコンピュータ・プログラム
CN106775902A (zh) * 2017-01-25 2017-05-31 北京奇虎科技有限公司 一种图像处理的方法和装置、移动终端
CN108965691B (zh) * 2018-06-12 2021-03-02 Oppo广东移动通信有限公司 摄像头控制方法、装置、移动终端及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201608325U (zh) 2010-03-15 2010-10-13 徐孟跃 一种插头
CN103024266A (zh) * 2012-11-15 2013-04-03 北京百度网讯科技有限公司 移动终端的拍摄优化方法、系统和装置
CN104427228A (zh) * 2013-08-22 2015-03-18 展讯通信(上海)有限公司 协作拍摄系统及其拍摄方法
US20160077422A1 (en) * 2014-09-12 2016-03-17 Adobe Systems Incorporated Collaborative synchronized multi-device photography
CN112004076A (zh) * 2020-08-18 2020-11-27 Oppo广东移动通信有限公司 数据处理方法、控制终端、ar终端、ar系统及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4246957A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116347212A (zh) * 2022-08-05 2023-06-27 荣耀终端有限公司 一种自动拍照方法及电子设备
CN116347212B (zh) * 2022-08-05 2024-03-08 荣耀终端有限公司 一种自动拍照方法及电子设备

Also Published As

Publication number Publication date
EP4246957A1 (en) 2023-09-20
CN114697527A (zh) 2022-07-01
CN114697527B (zh) 2023-04-18
EP4246957A4 (en) 2024-03-27
CN116405773A (zh) 2023-07-07

Similar Documents

Publication Publication Date Title
WO2022143077A1 (zh) 一种拍摄方法、系统及电子设备
EP4030276B1 (en) Content continuation method and electronic device
JP7355941B2 (ja) 長焦点シナリオにおける撮影方法および端末
JP2022549157A (ja) データ伝送方法及び関連装置
CN112398855B (zh) 应用内容跨设备流转方法与装置、电子设备
WO2021121052A1 (zh) 一种多屏协同方法、系统及电子设备
WO2022121775A1 (zh) 一种投屏方法及设备
JP7369281B2 (ja) デバイス能力スケジューリング方法および電子デバイス
WO2022105803A1 (zh) 摄像头调用方法、系统及电子设备
CN112394895A (zh) 画面跨设备显示方法与装置、电子设备
WO2022143883A1 (zh) 一种拍摄方法、系统及电子设备
WO2022017393A1 (zh) 显示交互系统、显示方法及设备
WO2022160985A1 (zh) 一种分布式拍摄方法,电子设备及介质
WO2022179405A1 (zh) 一种投屏显示方法及电子设备
WO2022127661A1 (zh) 应用共享方法、电子设备和存储介质
WO2023005900A1 (zh) 一种投屏方法、电子设备及系统
CN110413383B (zh) 事件处理方法、装置、终端及存储介质
WO2022222773A1 (zh) 拍摄方法、相关装置及系统
WO2022156721A1 (zh) 一种拍摄方法及电子设备
WO2022110939A1 (zh) 一种设备推荐方法及电子设备
WO2024041006A1 (zh) 一种控制摄像头帧率的方法及电子设备
WO2023160224A1 (zh) 一种拍摄方法及相关设备
WO2024140757A1 (zh) 跨设备分屏方法及相关装置
WO2022111701A1 (zh) 投屏方法及系统
WO2022206659A1 (zh) 一种投屏方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913818

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021913818

Country of ref document: EP

Effective date: 20230612

NENP Non-entry into the national phase

Ref country code: DE