CN115514882A - Distributed shooting method, electronic device and medium

Info

Publication number: CN115514882A
Application number: CN202210973275.8A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 冯可荣
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by: Huawei Technologies Co Ltd
Priority: CN202210973275.8A; priority claimed from CN202110131870.2A (CN114845035B)
Legal status: Pending
Prior art keywords: shooting, camera, mobile phone, image data, application

Classifications

    • H04M1/72415: User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality by interfacing with external accessories, for remote control of appliances (H04M1/00 Substation equipment, e.g. for use by subscribers; H04M1/72 Mobile telephones; Cordless telephones)
    • H04M1/72439: User interfaces specially adapted for cordless or mobile telephones, with interactive means for internal management of messages, for image or video messaging
    • H04N23/60: Control of cameras or camera modules (H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof)

Abstract

A distributed shooting method, an electronic device, and a medium can reduce the network bandwidth occupied by data transmission between a remote device and a local device when distributed shooting is implemented. In the method, a first device can receive a first operation in which a user selects to shoot synchronously with a second device. In response to the first operation, the first device may begin capturing first image data and may further instruct the second device to begin capturing image data. Thereafter, the first device may receive second image data from the second device, the second image data including an original image captured by a camera of the second device. Finally, the first device may display a first shot picture corresponding to the first image data in a first window and a second shot picture corresponding to the second image data in a second window. The first window and the second window are located in the same display interface.

Description

Distributed shooting method, electronic device and medium
Technical Field
Embodiments of this application relate to the technical field of photographing, and in particular to a distributed shooting method, an electronic device, and a medium.
Background
With the development of internet technology, multiple electronic devices that include cameras may form a distributed camera system. For example, the electronic device may be a mobile phone, a tablet computer, a smart watch, a smart television, or the like. In the distributed camera system, one electronic device (referred to as a local device) can not only present to a user the picture captured by its own camera, but can also obtain and display the pictures captured by the cameras of other electronic devices (referred to as remote devices).
Specifically, the local device may obtain the picture of the remote device in the following manner: (1) the local device sends a control command to the remote device, instructing the remote device to capture an image and return a preview result to the local device; (2) after receiving the control command, the remote device captures an image and performs image signal processing (ISP) on the captured image to obtain the preview result; and (3) the remote device sends the preview result to the local device.
In this cross-device image transmission process, the remote device sends the processed frames (namely, the preview result) to the local device, which increases the data volume, occupies a larger network bandwidth, and wastes network resources.
Disclosure of Invention
This application provides a distributed shooting method, an electronic device, and a medium, which can reduce the network bandwidth occupied by data transmission between a remote device and a local device when distributed shooting is implemented.
In a first aspect, this application provides a distributed shooting method applied to a first device. In the method, the first device can receive a first operation in which a user selects to shoot synchronously with a second device. In response to the first operation, the first device may begin capturing first image data and instruct the second device to begin capturing image data. Thereafter, the first device may receive second image data from the second device, the second image data including the original image. Finally, the first device may display a first shot picture corresponding to the first image data in a first window and a second shot picture corresponding to the second image data in a second window. The first window and the second window are located in the same display interface.
It should be noted that the above original image may also be referred to as a RAW image, where RAW can be understood as "unprocessed". That is, the above original image is an image that has not been processed by the second device. Compared with an image processed by the second device, the data amount of the original image is small. Therefore, the network bandwidth occupied by the cross-device distributed shooting service can be reduced.
In a possible design of the first aspect, the above method may further include: the first device transmits the first image data to the second device, the first image data including an original image. This original image (also referred to as a RAW image) is an image that has not been processed by the first device.
In this way, not only can the first device simultaneously display the shot pictures of the first device and the second device, but the second device can also simultaneously display the shot pictures of both devices.
In another possible design of the first aspect, after the first device receives the first operation in which the user selects to shoot synchronously with the second device, the method of this application may further include: the first device acquires a shooting capability parameter of the second device, where the shooting capability parameter indicates the image processing algorithms supported by the second device.
Then, the first device can perform image processing on the first image data according to the shooting capability parameter of the first device to obtain the first shot picture, and perform image processing on the second image data according to the shooting capability parameter of the second device to obtain the second shot picture. The shooting capability parameter of the first device indicates the image processing algorithms supported by the first device.
It should be understood that when the first device performs image processing on the RAW image captured by the second device based on the algorithm capability of the second device (i.e., the image processing algorithms indicated by the shooting capability parameter of the second device), it can obtain the same or a similar effect as if the second device had processed the RAW image itself. Thus, the picture effect of the second device can be reproduced on the first device.
In another possible design of the first aspect, the first device may perform image processing on the first image data according to the shooting capability parameter of the first device to obtain the first shot picture, and perform image processing on the second image data also according to the shooting capability parameter of the first device to obtain the second shot picture. The shooting capability parameter of the first device indicates the image processing algorithms supported by the first device.
It should be understood that because the first device performs image processing on the RAW images captured by both the first device and the second device based on the algorithm capability of the first device, the image effect of the second device's shot picture can be made consistent with that of the first device's shot picture.
In addition, in this design, the first device does not need to acquire and store the shooting capability parameter of the second device. Therefore, the network bandwidth occupied by the cross-device distributed shooting service can be further reduced, and the storage space of the first device can be saved.
In another possible design of the first aspect, instructing the second device to start capturing image data includes: the first device sends a shooting instruction to the second device.
The shooting instruction includes a preset flag. The shooting instruction instructs the second device to capture image data, and the preset flag instructs the second device to transmit the original image it captures to the first device. After receiving a shooting instruction that includes the preset flag, the second device does not process the captured original image; instead, it passes the captured original image through to the first device.
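As a concrete illustration of this design, the following is a minimal sketch, not the patent's implementation, of what such an instruction and its handling on the second device could look like. ShootCommand, rawPassthrough, and the helper methods are names invented here for illustration.

```java
// Hypothetical sketch of a shooting instruction carrying the preset flag and of
// how the second device could handle it. ShootCommand, rawPassthrough, and the
// helper methods are invented names; the patent does not publish this interface.
import java.util.function.Consumer;

record ShootCommand(boolean rawPassthrough) {} // rawPassthrough plays the role of the preset flag

final class SecondDeviceCamera {
    void onShootCommand(ShootCommand cmd) {
        startCapture(frame -> {
            if (cmd.rawPassthrough()) {
                sendToFirstDevice(frame);                 // pass the RAW frame through unmodified
            } else {
                sendToFirstDevice(processWithIsp(frame)); // conventional path: process locally first
            }
        });
    }

    // Stubs standing in for the camera pipeline and the network link.
    private void startCapture(Consumer<byte[]> onFrame) { /* drives the camera */ }
    private byte[] processWithIsp(byte[] raw) { return raw; }
    private void sendToFirstDevice(byte[] data) { /* write to the link to the first device */ }
}
```

The effect of the flag is that the ISP step is skipped entirely on the second device, so only RAW bytes cross the network.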
In another possible design of the first aspect, the first device may receive a second operation before receiving the first operation. The second operation is the user clicking a function button for distributed shooting, where the function button is included in at least one of a preview interface of a camera application of the first device, a chat interface of a video communication application of the first device, a control center of the first device, a pull-down menu, or the minus-one screen (the leftmost home screen). In response to the second operation of clicking the function button, the first device displays a candidate device list in the preview interface. The candidate device list includes the second device. The first operation is the user selecting the second device in the candidate device list.
In another possible design of the first aspect, the synchronous shooting includes at least one of synchronous video recording, synchronous photographing, synchronous live streaming, or a synchronous video call. That is, the method of this application may be applied when the first device records video, takes photos, live-streams, or makes a video call together with another device.
In another possible design of the first aspect, after the first device receives the first operation in which the user selects to shoot synchronously with the second device, the method of this application further includes: the first device creates a control session and a data session between the first device and the second device.
The control session is used to transmit control commands between the first device and the second device, where the control commands include the shooting instruction that instructs the second device to capture images; the data session is used to transmit the second image data from the second device.
While the first device and the second device execute the cross-device distributed shooting service, the transmission paths of the control stream and the data stream can thus be kept separate. This avoids the control stream and the data stream competing with each other for bandwidth.
In a second aspect, this application provides an electronic device, which is the first device, including: one or more cameras, one or more processors, a display screen, a memory, and a communication module. The cameras, the display screen, the memory, and the communication module are coupled to the processor.
The memory is used to store computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method according to the first aspect and any of its possible designs.
In a third aspect, this application provides a chip system applied to an electronic device including a display screen, a memory, and a communication module. The chip system includes one or more interface circuits and one or more processors, interconnected through lines. The interface circuit is configured to receive a signal from the memory of the electronic device and send the signal to the processor, the signal comprising the computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any one of its possible designs. The electronic device is the first device.
In a fourth aspect, this application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect and any one of its possible designs.
In a fifth aspect, this application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect and any one of its possible designs.
In a sixth aspect, this application provides a distributed shooting system, including the first device according to the second aspect and any one of its possible designs, and the second device that cooperates with the first device in the method of the first aspect and any one of its possible designs.
It can be understood that for the beneficial effects achievable by the electronic device of the second aspect, the chip system of the third aspect, the computer storage medium of the fourth aspect, the computer program product of the fifth aspect, and the distributed shooting system of the sixth aspect, reference may be made to the beneficial effects of the first aspect and any one of its possible designs, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of a distributed shooting system provided in an embodiment of this application;
fig. 2A is a functional schematic diagram of a distributed shooting method provided in an embodiment of this application;
fig. 2B is a schematic diagram of a hardware structure of a mobile phone provided in an embodiment of this application;
fig. 2C is a schematic diagram of a software architecture of the local device 101 or the remote device 102;
fig. 2D is a schematic diagram of a software architecture of the local device 101 and the remote device 102;
fig. 2E is a schematic diagram of a software architecture of the local device 101 and the remote device 102 provided in an embodiment of this application;
fig. 2F is a simplified diagram of the software architecture shown in fig. 2E;
fig. 3A is a schematic diagram of a distributed shooting interface provided in an embodiment of this application;
fig. 3B is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 4 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 5 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 6A is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 6B is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 7 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 8 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 9 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 10 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 11 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 12 is a schematic diagram of a data structure of a shooting capability parameter provided in an embodiment of this application;
fig. 13 is a schematic diagram of a data transmission flow between a local device and a remote device provided in an embodiment of this application;
fig. 14 is a schematic diagram of a data processing and transmission flow in a remote device provided in an embodiment of this application;
fig. 15 is a schematic diagram of another distributed shooting interface provided in an embodiment of this application;
fig. 16 is a schematic diagram of a data transmission flow between a local device and a remote device provided in an embodiment of this application;
fig. 17 is a schematic structural diagram of a chip system provided in an embodiment of this application.
Detailed Description
In the following, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, "a plurality" means two or more unless otherwise specified.
The distributed shooting method provided in the embodiments of this application can be applied to the distributed shooting system 100 shown in fig. 1. As shown in fig. 1, the distributed shooting system 100 may include a local device 101 and N remote devices 102, where N is an integer greater than 0. The local device 101 and any one of the remote devices 102 may communicate with each other in a wired or wireless manner. The local device 101 is the first device, and the remote device 102 is the second device.
Illustratively, a wired connection may be established between the local device 101 and the remote device 102 using a universal serial bus (USB). For another example, a wireless connection may be established between the local device 101 and the remote device 102 through the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), 5G and subsequent standard protocols, Bluetooth, wireless fidelity (Wi-Fi), NFC, voice over Internet protocol (VoIP), or a communication protocol supporting network slicing.
In some embodiments, the local device 101 may communicate directly with the remote device 102 using the above wireless connection. In other embodiments, the local device 101 may communicate with the remote device 102 through a cloud server using the above wireless connection, which is not limited in the embodiments of this application.
One or more cameras may be disposed in each of the local device 101 and the remote device 102. The local device 101 may use its own camera to capture image data, and the remote device 102 may do the same. In the embodiments of this application, the local device 101 and the remote device 102 may use their own cameras to capture image data simultaneously; the remote device 102 may transmit its image data to the local device 101, and the local device 101 displays the image data from both devices at the same time, thereby implementing the cross-device distributed shooting function.
As shown in fig. 2A, the local device 101 may include a preset application (APP), with which the local device 101 can implement the above cross-device distributed shooting function. For example, the preset application may be a system camera application or a third-party camera application. In the embodiments of this application, the system camera application and third-party camera applications are collectively referred to as the camera application. A system application may also be referred to as an embedded application, i.e., an application program provided as part of the implementation of an electronic device (e.g., the local device 101 or the remote device 102). A third-party application may also be referred to as a downloadable application, i.e., an application that may provide its own internet protocol multimedia subsystem (IMS) connection. A downloadable application may be pre-installed in the electronic device or may be downloaded by a user and installed in the electronic device.
Specifically, as shown in fig. 2A, the local device 101 may connect to the remote device 102 and, through the connection between the local device 101 and the remote device 102, control the remote device 102 to open its camera. As shown in fig. 2A, the local device 101 may preview the images captured by the remote device 102 and thereby control the preview effect. As shown in fig. 2A, the local device 101 may control the remote device 102 to take photos and control the photographing effect of the remote device 102; the local device 101 may likewise control the remote device 102 to record video and control the video recording effect of the remote device 102. When the above cross-device distributed shooting function is implemented, as shown in fig. 2A, the local device 101 may display the shot pictures of both the local device 101 and the remote device 102. In this way, the user can see on the local device 101 not only the shot picture of the local device 101 but also the shot picture of the remote device 102. Of course, the local device 101 may also control the remote device 102 to close its camera through the connection between the two devices.
The local device 101 (or the remote device 102) may specifically be a mobile phone, a tablet computer, a television (also referred to as a smart television, smart screen, or large-screen device), a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a camera (such as a digital camera), a video camera (such as a digital video camera), a netbook, a personal digital assistant (PDA), a wearable electronic device (e.g., a smart watch, smart band, or smart glasses), a vehicle-mounted device, a virtual reality device, or another electronic device with a shooting function, which is not limited in the embodiments of this application.
Illustratively, taking a mobile phone as the local device 101 in the distributed shooting system 100, fig. 2B shows a schematic structural diagram of the mobile phone. As shown in fig. 2B, the mobile phone may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, and the like.
It should be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone. In other embodiments of this application, the mobile phone may include more or fewer components than shown, combine certain components, split certain components, or use a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to a mobile phone. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
In some embodiments, antenna 1 of the mobile phone is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the mobile phone can communicate with networks and other devices through wireless communication technologies.
The mobile phone realizes the display function through the GPU, the display screen 194, the application processor and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. In some embodiments, the handset may include 1 or N display screens 194, N being a positive integer greater than 1.
The mobile phone can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like. The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the mobile phone may include 1 or N cameras 193, where N is a positive integer greater than 1.
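To make the RAW-to-standard-format step concrete, the following is an illustrative sketch, not taken from the patent, of the simplest possible demosaicing of a Bayer RAW frame into packed RGB pixels. A real ISP/DSP pipeline additionally performs black-level correction, white balance, noise reduction, tone mapping, and more; the sketch assumes an RGGB Bayer layout and even frame dimensions.

```java
// Illustrative sketch (not from the patent): the kind of conversion an ISP/DSP
// performs, turning a Bayer-pattern RAW frame into RGB. This is nearest-neighbor
// demosaicing only, assuming an RGGB layout and even width/height.
public final class SimpleDemosaic {
    /** raw: width*height sensor values (RGGB pattern). Returns packed 0xRRGGBB pixels. */
    public static int[] toRgb(int[] raw, int width, int height) {
        int[] rgb = new int[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int x0 = x & ~1, y0 = y & ~1;            // top-left of the 2x2 Bayer cell
                int r = raw[y0 * width + x0];            // R at (even row, even column)
                int g = raw[y0 * width + x0 + 1];        // G at (even row, odd column)
                int b = raw[(y0 + 1) * width + x0 + 1];  // B at (odd row, odd column)
                rgb[y * width + x] = (clamp(r) << 16) | (clamp(g) << 8) | clamp(b);
            }
        }
        return rgb;
    }
    private static int clamp(int v) { return Math.min(255, Math.max(0, v)); }
}
```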
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes the instructions stored in the internal memory 121 to perform the various functional applications and data processing of the mobile phone. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area can store data created during use of the mobile phone (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The mobile phone can implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
Of course, the mobile phone may further include a charging management module, a power management module, a battery, keys, an indicator, one or more SIM card interfaces, and the like, which are not limited in the embodiments of this application.
Fig. 2C is a block diagram of a software structure of a mobile phone according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime and system library layer, a hardware abstraction layer (HAL), and a kernel layer. It should be noted that although the embodiments of this application take the Android system as an example, the solution of this application can also be implemented in other operating systems (for example, HarmonyOS, iOS, etc.), as long as the functions implemented by the respective functional modules are similar to those in the embodiments of this application.
The application layer may include a series of application packages.
As shown in fig. 2C, applications such as Call, Memo, Browser, Contacts, Gallery, Calendar, Map, Bluetooth, Music, Video, and Messaging may be installed in the application layer.
In the embodiment of the present application, an application having a shooting function, for example, a camera application, may be installed in the application layer. Of course, when other applications need to use the shooting function, the camera application may also be called to implement the shooting function.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
For example, the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, and the like, which is not limited in the embodiments of this application.
For example, the window manager described above is used to manage window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, take screenshots, and the like. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like. The view system can be used to construct the display interface of an application. Each display interface may be composed of one or more controls. Generally, a control may include an interface element such as an icon, button, menu, tab, text box, dialog box, status bar, navigation bar, or widget. The resource manager provides various resources, such as localized strings, icons, pictures, layout files, and video files, to the applications. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that automatically disappear after a short stay without user interaction, for example, to notify that a download is complete or to give a message alert. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text messages may be prompted in the status bar, a prompt tone emitted, the device vibrated, or an indicator light flashed.
As shown in fig. 2C, the Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library can support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is located below the HAL and is the layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and the like, which are not limited in the embodiments of this application.
In the embodiments of this application, taking the Camera application as an example, as shown in fig. 2C, a Camera Service is provided in the application framework layer. The Camera application may start the Camera Service by calling a preset API. During operation, the Camera Service may interact with the Camera HAL in the hardware abstraction layer (HAL). The Camera HAL is responsible for interacting with the hardware devices (for example, the camera) that implement the shooting function in the mobile phone; on the one hand, the Camera HAL hides the implementation details of the related hardware devices (for example, specific image processing algorithms), and on the other hand, it provides the Android system with interfaces for calling the related hardware devices.
For example, when running, the Camera application may send control commands issued by the user (e.g., preview, zoom, photograph, or video commands) to the Camera Service. On the one hand, the Camera Service can send the received control command to the Camera HAL, so that the Camera HAL can call the camera driver in the kernel layer according to the received control command; the camera driver then drives hardware devices such as the camera to capture image data in response to the control command. For example, the camera may transmit each captured frame of image data to the Camera HAL through the camera driver at a certain frame rate. The process of passing the control command inside the operating system corresponds to the control-flow path shown in fig. 2C.
On the other hand, after receiving the control command, the Camera Service can determine a shooting strategy according to the command; the shooting strategy specifies the image processing tasks that need to be performed on the captured image data. For example, in preview mode, the Camera Service may set image processing task 1 in the shooting strategy to implement a face detection function. For another example, if the user turns on the beautification function in preview mode, the Camera Service may also set image processing task 2 in the shooting strategy to implement the beautification function. Then, the Camera Service may send the determined shooting strategy to the Camera HAL.
After the Camera HAL receives each frame of image data captured by the camera, it can execute the corresponding image processing tasks on the image data according to the shooting strategy issued by the Camera Service, obtaining each processed frame of the shot picture. For example, the Camera HAL may perform image processing task 1 on each received frame of image data according to shooting strategy 1 to obtain each corresponding frame of the shot picture. After shooting strategy 1 is updated to shooting strategy 2, the Camera HAL can execute image processing task 2 and image processing task 3 on each received frame according to shooting strategy 2 to obtain each corresponding frame of the shot picture.
Subsequently, the Camera HAL may report each processed frame of the shot picture to the Camera application through the Camera Service, and the Camera application may display each frame in the display interface, or store each frame in the mobile phone in the form of a photo or video. The process of passing the shot picture inside the operating system corresponds to the data-flow path shown in fig. 2C.
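As a concrete illustration of this control-flow/data-flow pattern, the following minimal sketch uses Android's public camera2 API rather than the internal framework interfaces described above: the repeating capture request plays the role of the control command that the Camera Service forwards to the Camera HAL, and the frames delivered to the Surface are the returning data stream. Permission handling, threading, and error handling are simplified, and the sketch assumes at least one camera is present.

```java
// Minimal camera2 preview sketch: the capture request is the "control command";
// processed frames come back to previewSurface as the "data stream".
import android.content.Context;
import android.hardware.camera2.*;
import android.view.Surface;
import java.util.Collections;

public class PreviewStarter {
    /** Starts a repeating preview request; frames are delivered to previewSurface. */
    public static void startPreview(Context ctx, Surface previewSurface) throws Exception {
        CameraManager manager = (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0]; // assumes at least one camera
        manager.openCamera(cameraId, new CameraDevice.StateCallback() { // requires CAMERA permission
            @Override public void onOpened(CameraDevice device) {
                try {
                    device.createCaptureSession(Collections.singletonList(previewSurface),
                        new CameraCaptureSession.StateCallback() {
                            @Override public void onConfigured(CameraCaptureSession session) {
                                try {
                                    CaptureRequest.Builder b =
                                        device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                                    b.addTarget(previewSurface);
                                    // The control command: Camera Service forwards it to the
                                    // Camera HAL, which drives the camera via the kernel driver.
                                    session.setRepeatingRequest(b.build(), null, null);
                                } catch (CameraAccessException e) { e.printStackTrace(); }
                            }
                            @Override public void onConfigureFailed(CameraCaptureSession s) {}
                        }, null);
                } catch (CameraAccessException e) { e.printStackTrace(); }
            }
            @Override public void onDisconnected(CameraDevice device) { device.close(); }
            @Override public void onError(CameraDevice device, int error) { device.close(); }
        }, null);
    }
}
```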
Referring to fig. 2D, a schematic diagram of the software architecture of the local device 101 and the remote device 102 is shown. With reference to the software architecture shown in fig. 2D, the embodiments of this application illustrate a scheme by which the local device 101 and the remote device 102 implement cross-device distributed shooting.
The local device 101 has device discovery and remote registration functions. For example, the local device 101 may discover remote devices 102, such as the television 1, watch 2, and mobile phone 3 shown in fig. 4. When the user selects television 1 as the remote device 102 for cross-device distributed shooting with the local device 101 (for example, the user selects television 1 shown in fig. 4), the local device 101 may register the remote device 102 (e.g., television 1) on the local device 101.
For example, the local device 101 may create, at its hardware abstraction layer (HAL), a Distributed Mobile Sensing Development Platform (DMSDP) HAL for the remote device 102, which may also be referred to as a virtual Camera HAL. Unlike the conventional Camera HAL in the local device 101, the DMSDP HAL does not correspond to actual hardware of the local device 101 but to the remote device 102 currently connected to it. The DMSDP HAL shown in fig. 2D is a HAL created by the local device 101 according to the shooting capability parameter of the remote device 102. The local device 101 may exchange data with the remote device 102 through the DMSDP HAL, and use the remote device 102 as a virtual device of the local device 101 to cooperate with it in completing the cross-device distributed shooting service.
The following describes the specific process by which the local device 101 and the remote device 102 cooperatively complete the cross-device distributed shooting service. As shown in fig. 2D, the flow may include steps (1) to (10).
Step (1): the application layer of the local device 101 issues a control command for controlling the camera of the remote device 102 to the camera service of the service layer. Step (2): the service layer of the local device 101 sends the control command to the DMSDP HAL of the HAL. Step (3): the DMSDP HAL of the local device 101 transmits the control command to the remote device 102. Step (4): after receiving the control command, the remote device 102 transmits it to the service layer. Step (5): the service layer of the remote device 102 transmits the control command to the Camera HAL of the HAL. The Camera HAL of the remote device 102 corresponds to the actual hardware of the remote device 102; thus, the HAL of the remote device 102 may invoke its lower layers (e.g., the kernel layer) to execute the corresponding shooting task (e.g., start a camera, switch cameras, or adjust camera parameters) according to the control command. The data transmitted in steps (1) to (5) may be referred to as the control stream.
Step (6): the HAL of the remote device 102 uploads the preview stream to the camera service of the service layer. The preview stream includes one or more frames of preview images captured by the camera at the bottom layer of the remote device 102 and processed by the image signal processor (ISP) at the bottom layer. Step (7): the service layer of the remote device 102 transmits the preview stream to the application layer of the remote device 102. Step (8): the application layer of the remote device 102 transmits the preview stream to the DMSDP HAL of the local device 101. Step (9): the DMSDP HAL of the local device 101 transmits the preview stream to the camera service of the service layer. Step (10): the service layer of the local device 101 reports the preview stream to the application layer. In this manner, the camera application of the local device 101 may present the preview stream from the remote device 102 to the user. Of course, the camera application of the local device 101 may also present the preview stream from the local device 101 to the user; for the specific flow, refer to the related descriptions of the conventional technology, which are not repeated here. The data transmitted in steps (6) to (10) may be referred to as the preview stream or the data stream.
It should be noted that, as shown in fig. 2D, the service layer of the local device 101 or the remote device 102 may be included in the framework layer, i.e., the service layer may be implemented in the framework layer. Of course, the service layer of the local device 101 or the remote device 102 may also be independent of the framework layer, which is not limited in the embodiments of this application.
On the one hand, the cross-device distributed shooting service can be completed by executing the above flow. However, during cross-device transmission of the preview stream, the preview stream sent from the remote device 102 to the local device 101 consists of frames that have already undergone image processing (i.e., a preview result that can be displayed directly); its data volume is therefore larger, a large network bandwidth must be occupied, and network resources are wasted.
On the other hand, image processing performance differs between devices. For example, the remote device 102 may perform layer processing with lower performance than the local device 101, so the remote device 102 outputs images less efficiently. When the shot pictures from the local device 101 and the remote device 102 are displayed on the local device 101 at the same time, a large time delay results. In addition, the above flow does not separate the transmission paths of the control stream and the data stream (or the preview stream), so the control stream and the data stream may occupy each other's bandwidth.
An embodiment of this application provides a distributed shooting method. In the process in which the local device 101 and the remote device 102 execute the cross-device distributed shooting service, the local device 101 may obtain, directly from the remote device 102, the original image captured by the remote device 102. This original image may also be referred to as a RAW image, where RAW can be understood as "unprocessed". That is, the original image captured by the remote device 102 is an image that has not been processed by the remote device 102. In the following embodiments, RAW images are used to represent original images when describing the method of the embodiments of this application.
The data amount of a RAW image is small compared with an image processed by the remote device 102. Therefore, the network bandwidth occupied by the cross-device distributed shooting service can be reduced.
Moreover, the local device 101 processes the RAW images from the remote device 102 itself, which avoids the increase in time delay caused by a large performance difference between the two devices. With this scheme, the time delay of the distributed shooting service can be reduced. Further, in the process in which the local device 101 and the remote device 102 execute the cross-device distributed shooting service, the transmission paths of the control stream and the data stream can be separated. This avoids the control stream and the data stream occupying each other's bandwidth.
Please refer to fig. 2E, which illustrates a schematic software architecture of the local device 101 and the remote device 102 according to an embodiment of this application. As shown in fig. 2E, the local device 101 may include: an application layer 200, a framework layer 201, a service layer 201a, and a HAL 202; the remote device 102 may include: an application layer 210, a framework layer 211, a service layer 211a, and a HAL 212. The service layer may be implemented in the framework layer; for example, as shown in fig. 2E, the service layer 201a may be implemented in the framework layer 201, and the service layer 211a in the framework layer 211.
The application layer may include a series of application packages. For example, the application layer 200 may include a plurality of applications, such as a camera application and a Dv application 200a. Of course, when other applications need to use the shooting function, they may also call the camera application to implement it. As another example, the application layer 210 includes a camera proxy service 210a, which supports the remote device 102 in cooperating with the local device 101 to complete the distributed shooting service. The camera proxy service 210a may also be referred to as a camera proxy application, which is not limited in the embodiments of this application.
In addition, in this application, a device virtualization (Dv) application 200a for implementing the distributed shooting function may be installed in the application layer 200. The Dv application 200a may run in the local device 101 as a resident system application. Alternatively, the functions implemented by the Dv application 200a may reside in the local device 101 in the form of a system service.
The framework layer provides APIs and a programming framework for the applications of the application layer. For example, the framework layer 201 and the framework layer 211 may each provide Camera API 2.0. For other functions of the framework layer 201 and the framework layer 211, refer to the detailed description of the application framework layer in fig. 2C, which is not repeated here.
As shown in fig. 2E, taking the Camera application as an example, a Camera kit 201c is provided in the framework layer 201. The Camera kit 201c can package a plurality of camera modes, for example, the photographing mode, video recording mode, and dual-view mode shown in fig. 2E.
As shown in fig. 2E, a Camera Service 201a1 may be provided in the service layer 201a. The Camera application may start the Camera Service 201a1 by calling a preset API (e.g., Camera API 2.0). A Camera HAL 202a and a DMSDP HAL 202b are provided in the HAL 202. During operation, the Camera Service 201a1 may interact with the Camera HAL 202a and/or the DMSDP HAL 202b of the HAL 202.
The Camera HAL 202a is responsible for interacting with the hardware devices (e.g., the camera) that implement the shooting function in the local device 101; on the one hand, the Camera HAL 202a hides the implementation details of the related hardware devices (e.g., specific image processing algorithms), and on the other hand, it provides the Android system with interfaces for calling the related hardware devices.
It should be noted that when the Camera application runs in the application layer 200, for the interaction flow between the application layer 200 and the framework layer 201, the service layer 201a, the Camera HAL 202a, the DMSDP HAL 202b, and the like, refer to the description of the corresponding software modules in fig. 2C in the foregoing embodiments, which is not repeated here.
The framework layer 201 may further include a Dv kit 201b for implementing the distributed shooting function. The Dv kit 201b may be called by the Dv application 200a provided at the application layer 200 to implement the functions of discovering and connecting to the remote device 102. Alternatively, the functions implemented by the Dv application 200a may reside in the mobile phone in the form of a system service.
When the mobile phone needs to use the camera of another electronic device to implement the distributed shooting function, in a camera mode provided by the Camera kit 201c (such as the dual-view mode), the local device 101 may discover and connect to the remote device 102 through the Dv kit 201b. After the local device 101 establishes a connection with the remote device 102, the local device 101 may create a DMSDP HAL 202b for the remote device 102 in its HAL 202, and the remote device 102 may be registered with the local device 101. The DMSDP HAL 202b shown in fig. 2E is a HAL created by the local device 101 according to the shooting capability parameter of the remote device 102.
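A hypothetical sketch of this registration step follows. DmsdpHal, RemoteDevice, and the serialized capability blob are illustrative assumptions; the patent does not publish this interface.

```java
// Hypothetical sketch of the local device creating a DMSDP HAL entry for a newly
// connected remote device and registering it by device id. All names are invented.
import java.util.HashMap;
import java.util.Map;

final class VirtualCameraRegistry {
    record RemoteDevice(String deviceId, String address) {}

    /** A virtual camera HAL backed by the network link to the remote device. */
    static final class DmsdpHal {
        final RemoteDevice remote;
        final String capabilityParams; // serialized shooting capability parameter (assumed format)
        DmsdpHal(RemoteDevice remote, String capabilityParams) {
            this.remote = remote;
            this.capabilityParams = capabilityParams;
        }
    }

    private final Map<String, DmsdpHal> halByDevice = new HashMap<>();

    /** Called after the Dv kit has discovered and connected the remote device. */
    DmsdpHal register(RemoteDevice remote, String capabilityParams) {
        DmsdpHal hal = new DmsdpHal(remote, capabilityParams); // no local hardware behind it
        halByDevice.put(remote.deviceId(), hal);
        return hal;
    }
}
```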
The remote device 102 may also include a Dv application and a Dv kit. For the functions of the Dv application and the Dv kit in the remote device 102, refer to the descriptions of the functions of the Dv application 200a and the Dv kit 201b in the embodiments of this application, which are not repeated here.
As also shown in fig. 2E, in addition to creating the corresponding DMSDP HAL 202b for the remote device 102 in the HAL 202, the local device 101 may send the shooting capability parameter of the remote device 102 to the Camera Service 201a1 for saving; that is, the shooting capability parameter of the current remote device 102 (including device capability information and algorithm capability information) is registered in the Camera Service 201a1. The device capability information indicates the hardware capability of the remote device 102 to capture images, and the algorithm capability information indicates the algorithm capability of the remote device 102 to perform image processing on the captured images. The capability information of the remote device 102 may be registered or mapped to the Camera Service 201a1 by the DMSDP HAL 202b. Further, the DMSDP HAL 202b may also be configured with a "peer Meta synchronization mapping" function, where peer Meta refers to a tag in the remote device 102 that indicates the shooting capability parameter of the remote device 102.
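The following hypothetical sketch shows one way the registered shooting capability parameter could be structured, split into the device capability information and algorithm capability information described above. All field names are invented for illustration; fig. 12 of the patent shows the actual schematic data structure.

```java
// Hypothetical sketch of a shooting capability parameter: hardware capability
// plus supported image-processing algorithms. Field names are assumptions.
import java.util.List;

record DeviceCapability(List<String> supportedResolutions, // e.g. "1920x1080"
                        int maxFrameRate,
                        int cameraCount) {}

record AlgorithmCapability(List<String> supportedAlgorithms) {} // e.g. "face_detect", "beauty"

record ShootingCapabilityParams(String deviceId,
                                DeviceCapability deviceInfo,
                                AlgorithmCapability algorithmInfo) {}
```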
The DMSDP HAL 202b may also be referred to as a virtual Camera HAL or DMSDP Camera HAL. Unlike the conventional Camera HAL, the DMSDP HAL 202b does not correspond to actual hardware of the local device 101 but to the remote device 102. As shown in fig. 2F, the remote device 102 (e.g., a television, a tablet, or a mobile phone) may serve as a virtual camera of the local device 101. The local device 101 (e.g., the device in the dashed box shown in fig. 2F) may exchange data with the remote device 102 through the DMSDP HAL 202b, and use the remote device 102 as its virtual camera to cooperate with it in completing various services in the distributed shooting scenario.
In some embodiments, the Dv application 200a of the local device 101 may further obtain, through the Dv kit 201b, the audio capability parameters (e.g., audio playback delay, audio sampling rate, or number of sound channels) and the display capability parameters (e.g., screen resolution, codec algorithms for display data) of the remote device 102. Of course, if the remote device 102 has other capabilities (e.g., printing capability), it may also send the related capability parameters to the Dv application 200a of the local device 101. In this case, the DMSDP HAL 202b has not only the image processing capability of the remote device 102 but also its audio and display capabilities, so that the remote device 102 can, as a virtual device of the local device 101, cooperate with it to complete various services in distributed scenarios.
Different from the scheme shown in fig. 2D, after the local device 101 establishes a connection with the remote device 102, two camera sessions (Camera Sessions) can be created. For example, the local device 101 may create, through the DMSDP HAL 202b, a control session (Control Session) for transmitting a control flow and a data session (Data Session) for transmitting a data flow (i.e., the preview flow).
Besides creating the corresponding DMSDP HAL 202b for the remote device 102 in the HAL, the local device 101 may send the shooting capability parameter of the remote device 102 to the Camera Service 201a1 for storage; that is, the current shooting capability of the remote device 102 is registered in the Camera Service 201a1.
When the mobile phone runs the camera application, the Camera Service 201a1 can determine a shooting policy in real time during shooting according to the control commands issued by the camera application (for example, preview, zoom, or video-recording commands), combined with the shooting capability of the remote device 102. For example, the Camera Service may set, in the shooting policy, the image processing tasks that the mobile phone needs to execute and the image processing tasks that the remote device 102 needs to execute, according to the shooting capability parameters of the remote device. Furthermore, the Camera Service can send, through the DMSDP HAL 202b over the Control Session, a shooting instruction corresponding to the shooting policy to the remote device 102, triggering the remote device 102 to execute its image processing tasks.
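As a sketch of how such a policy split might look (reusing the hypothetical ShootingCapability type from the earlier sketch; the task names and the planning rule are illustrative assumptions, not the patented algorithm):

```java
// Hypothetical sketch: split image-processing tasks between the local and
// remote device according to the registered capability parameters.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class ShootingPolicy {
    final List<String> localTasks = new ArrayList<>();
    final List<String> remoteTasks = new ArrayList<>();
}

class PolicyPlanner {
    ShootingPolicy plan(List<String> requiredTasks, ShootingCapability remoteCap,
                        Set<String> localAlgorithms) {
        ShootingPolicy policy = new ShootingPolicy();
        for (String task : requiredTasks) {
            // Illustrative rule: run a task remotely if the remote device
            // supports it, otherwise keep it on the local device.
            if (remoteCap.algorithmCapability.containsKey(task)) {
                policy.remoteTasks.add(task);
            } else if (localAlgorithms.contains(task)) {
                policy.localTasks.add(task);
            }
        }
        return policy;
    }
}
```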
Then, after the shooting instruction is transmitted to the Camera HAL 212a of the remote device 102, the Camera HAL 212a of the remote device 102 may invoke a Camera driver in the kernel layer according to the received shooting instruction, and drive a hardware device such as a Camera to collect image data in response to the shooting instruction. For example, the Camera may transmit each frame of image data collected to the Camera HAL through the Camera driver at a certain frame rate.
After the Camera HAL 212a of the remote device 102 receives each frame of image data collected by the camera, it transmits a data stream including the RAW images collected by the camera to the upper layers. After the application layer 210 of the remote device 102 receives this data stream, the camera proxy service 210a (which may also be referred to as a camera proxy application) of the application layer 210 may send the data stream including the RAW images to the local device 101 through the DMSDP HAL 202b. For example, the camera proxy service 210a may transmit the data stream to the local device 101 through the Data Session provided by the DMSDP HAL 202b.
After the DMSDP HAL 202b of the local device 101 receives the data stream including the RAW images, it may call a relevant post-processing algorithm according to the shooting capability parameter stored in the Camera Service 201a1, and perform image processing on each RAW image to obtain the corresponding frame of shot picture. Subsequently, the DMSDP HAL 202b can report each image-processed frame to the camera application through the Camera Service 201a1, and the camera application can display each frame of the shot picture in the display interface.
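A minimal sketch of this receive path, assuming a single post-processing stage (all names below are hypothetical):

```java
// Hypothetical sketch of the local receive path: each RAW frame from the
// Data Session is post-processed and then reported to the camera application.
import java.util.function.Consumer;

interface PostProcessor {
    byte[] process(byte[] rawFrame); // RAW in, displayable frame out
}

class DmsdpReceivePath {
    private final PostProcessor postProcessor;    // chosen from stored capability tags
    private final Consumer<byte[]> cameraAppSink; // reports frames up via Camera Service

    DmsdpReceivePath(PostProcessor postProcessor, Consumer<byte[]> cameraAppSink) {
        this.postProcessor = postProcessor;
        this.cameraAppSink = cameraAppSink;
    }

    void onRawFrame(byte[] rawFrame) {
        byte[] shotPicture = postProcessor.process(rawFrame); // post-processing (algorithm)
        cameraAppSink.accept(shotPicture);                    // displayed by the camera application
    }
}
```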
Thus, when the local device 101 and the remote device 102 implement the distributed shooting function, the local device 101 may perform corresponding image processing on the image data according to the shooting capability of the local device 101 and the shooting capability of the remote device 102 based on the shooting policy, so as to implement a better shooting effect in the distributed shooting scene.
In some embodiments, the local device 101 may also send the RAW image acquired by the local device 101 to the remote device 102. The Camera Service 211a1 of the remote device 102 may also store the shooting capability parameter of the local device 101. The remote device 102 may also call a related algorithm according to the shooting capability parameter of the local device 101, and perform image processing on the RAW image from the local device 101 to obtain a corresponding shot picture of each frame. The remote device 102 may call a related algorithm according to the shooting capability parameter of the remote device 102 to perform image processing on the RAW image acquired by the remote device 102, so as to obtain a corresponding shooting picture of each frame. In this way, the remote device 102 can also simultaneously display the shooting picture of the home device 101 and the shooting picture of the remote device 102. By adopting the scheme, the cross-device distributed shooting function can be realized on the local device 101 and the remote device 102 at the same time.
The following illustrates an application scenario of the method provided in the embodiment of the present application. The method of the embodiment of the present application can be applied to at least the following application scenarios (1) to (2).
Application scenario (1): a cross-device multi-view shooting scene.
For example, the cross-device multi-view shooting scene may be a cross-device dual-view shooting scene, which may include a cross-device dual-view photographing scene and a cross-device dual-view video-recording scene. The following embodiments take the cross-device dual-view photographing scene as an example.
Illustratively, the preset application may be a camera application for implementing a photographing function. Taking the mobile phone 0 as the local device 101 as an example, the mobile phone 0 may be installed with a camera application, through which the mobile phone 0 (i.e., the local device 101) may perform cross-device multi-view shooting with the remote device 102. As shown in fig. 3A, after detecting that the user opens the camera application, the mobile phone 0 may turn on its own camera to start collecting image data, and display a corresponding shot picture in real time in the preview frame 302 of the preview interface 301 according to the collected image data.
The shooting picture displayed in the preview frame 302 by the mobile phone 0 may be different from the image data acquired by the mobile phone 0 using the camera. For example, after the mobile phone 0 acquires the image data acquired by the camera, image processing tasks such as anti-shake, focusing, soft focus, blurring, filtering, beautifying, face detection, or AR recognition may be performed on the acquired image data to obtain a captured image after image processing. Further, the mobile phone 0 can display the photographed image after the image processing in the preview frame 302.
Similarly, the remote device 102 may also turn on its own camera to start collecting image data, and perform image processing tasks such as anti-shake, focusing, soft focus, blurring, filtering, beautifying, face detection, or AR recognition on the collected image data to obtain a shot picture after image processing, which is not limited in this embodiment of the present application.
In the embodiment of the present application, as shown in fig. 3A, the mobile phone 0 may set a function button 303 for multi-view shooting across devices in a preview interface 301 of a camera application. When the user wants to see the multi-view images captured by the mobile phone 0 and the remote device 102 on the mobile phone 0, the function button 303 can be clicked to start the cross-device multi-view capturing function.
Alternatively, the mobile phone 0 may further set the function button 304 for the synchronous shooting function in a control center, a pull-down menu, a negative screen (the leftmost home-screen page), or other applications (e.g., a video call application) of the mobile phone, which is not limited in this embodiment of the present application. For example, as shown in fig. 3B, the mobile phone may display the control center 305 in response to the user's operation of opening the control center, and the control center 305 is provided with the function button 304. If the user wishes to use the mobile phone together with another electronic device for synchronous shooting, the function button 304 may be clicked.
For example, after the mobile phone 0 detects that the user clicks the function button 303 or the function button 304, as shown in fig. 4, the mobile phone 0 may display, in a dialog box 401, one or more candidate devices that can acquire image data and have been found by the current search of the mobile phone 0.
For example, the cloud server may record whether each electronic device has a shooting function. The mobile phone 0 may then query the cloud server for electronic devices with a shooting function that are logged in to the same account (for example, a Huawei account) as the mobile phone 0. Further, the mobile phone 0 may display the queried electronic devices as candidate devices in the dialog box 401.
Alternatively, the mobile phone 0 may search for electronic devices located in the same Wi-Fi network as the mobile phone 0. Furthermore, the mobile phone 0 may send a query request to each electronic device in the same Wi-Fi network, and an electronic device that receives the query request may send a response message to the mobile phone 0, where the response message indicates whether that electronic device has a shooting function. The mobile phone 0 may then determine, according to the received response messages, the electronic devices in the current Wi-Fi network that have a shooting function, and display them as candidate devices in the dialog box 401.
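For illustration, this query/response discovery on a shared Wi-Fi network could be sketched as follows; the message fields and the Peer interface are assumptions, not the actual discovery protocol:

```java
// Hypothetical sketch of discovering shooting-capable devices on the same
// Wi-Fi network via a query/response exchange.
import java.util.ArrayList;
import java.util.List;

class DiscoveryReply {
    String deviceId;
    boolean hasCamera; // whether the replying device has a shooting function
}

class CandidateDiscovery {
    interface Peer {
        DiscoveryReply query(); // send a query request, wait for the response
    }

    // Keeps only the peers that report a shooting function; these become the
    // candidate devices shown to the user (e.g., in dialog box 401).
    List<String> findShootingCapableDevices(List<Peer> peersOnSameWifi) {
        List<String> candidates = new ArrayList<>();
        for (Peer peer : peersOnSameWifi) {
            DiscoveryReply reply = peer.query();
            if (reply != null && reply.hasCamera) {
                candidates.add(reply.deviceId);
            }
        }
        return candidates;
    }
}
```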
Alternatively, the mobile phone 0 may be installed with an application for managing an in-home smart home device (e.g., a television, an air conditioner, a sound box, or a refrigerator). Taking the smart home application as an example, the user may add one or more smart home devices to the smart home application, so that the smart home devices added by the user are associated with the mobile phone 0. For example, a two-dimensional code including device information such as a device identifier may be set on the smart home device, and after the user scans the two-dimensional code using the smart home application of the mobile phone 0, the corresponding smart home device may be added to the smart home application, so as to establish an association relationship between the smart home device and the mobile phone 0. In this embodiment of the application, when one or more smart home devices added to the smart home application are online, for example, when the mobile phone 0 detects a Wi-Fi signal sent by the added smart home device, the mobile phone 0 may display the smart home device as a candidate device in the dialog box 401, and prompt the user to select to use a corresponding smart home device to perform synchronous shooting with the mobile phone 0.
As shown in fig. 4, the candidate devices searched by the mobile phone 0 include, for example, a television 1, a watch 2, and a mobile phone 3, and the user may select, from the television 1, the watch 2, and the mobile phone 3, one or more remote devices 102 that perform cross-device multi-view shooting with the mobile phone 0 this time. The embodiment of the application takes cross-device dual-view shooting as an example. For example, if it is detected that the user selects the television 1, the mobile phone 0 may use the television 1 as the remote device 102 and establish a network connection with it. For example, the mobile phone 0 may establish a Wi-Fi connection with the television 1 through a router, or directly establish a Wi-Fi P2P connection with the television 1; or the mobile phone 0 may directly establish a Bluetooth connection with the television 1; or the mobile phone 0 may directly establish another short-range wireless connection with the television 1, including but not limited to a near field communication (NFC) connection, an infrared connection, an ultra-wideband (UWB) connection, or a ZigBee connection; or the mobile phone 0 may establish a mobile network connection with the television 1, including but not limited to mobile networks supporting the 2G, 3G, 4G, 5G, and subsequent standard protocols.
In other embodiments, after the mobile phone 0 detects that the user clicks the function button 303, the mobile phone may search for one or more electronic devices with a shooting function according to the method described above. Further, the mobile phone 0 can automatically establish a network connection with the searched electronic device. At this time, the user does not need to manually select a specific device for establishing network connection with the mobile phone 0.
Still alternatively, the mobile phone may have established a network connection with one or more electronic devices having a shooting function before the user opens the camera application. For example, the mobile phone 0 has established a Bluetooth connection with a tablet computer before the camera application is opened. Subsequently, after the mobile phone 0 opens the camera application and displays the preview interface 301, if it detects that the user clicks the function button 303, the mobile phone 0 may skip searching for electronic devices with a shooting function and directly execute the following method.
After the mobile phone 0 establishes a network connection with the television 1, on one hand, as shown in fig. 5, the mobile phone 0 can turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain a shot picture 1. On the other hand, as shown in fig. 5, the mobile phone 0 may instruct the television 1 to turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain the shot picture 2. Subsequently, the television 1 may send the photographed picture 2 to the mobile phone 0. In this way, the mobile phone 0 can simultaneously display the shot picture 1 from the mobile phone 0 and the shot picture 2 from the television 1 in the display interface of the camera application.
As described in the foregoing embodiments, in the first case of the application scenario (1), i.e., the above cross-device multi-view shooting scenario, multiple cameras can be used to shoot the same shooting object from multiple views, improving the interest of shooting.
In a second case of the application scenario (1), in the above cross-device multi-view shooting scenario, a plurality of cameras may also be used to take a co-shot of a plurality of shooting objects (such as a co-shot of parents and children, or of friends). For example, assume that the local device 101 is the mobile phone 0 shown in fig. 5 and the remote device 102 is the mobile phone 3 shown in fig. 6A. The mobile phone 0 or the mobile phone 3 may shoot a plurality of shooting objects together through the preset application (e.g., a camera application or another application that can be used to shoot photos or videos).
For example, the preset application is a camera application. When a user wants to see, on the mobile phone 0, a co-shot picture from the mobile phone 0 and the mobile phone 3, the user can click the function button 303 or the function button 304 to start the cross-device multi-view co-shooting function.
For example, after the mobile phone 0 detects that the user clicks the function button 303 or the function button 304, as shown in fig. 4, the mobile phone 0 may display one or more candidate devices which may acquire image data and are searched by the current mobile phone 0 in a dialog box 401. As shown in fig. 4, the candidate devices searched by the mobile phone 0 include, for example, a television 1, a watch 2, and a mobile phone 3, and the user may select, from the television 1, the watch 2, and the mobile phone 3, a remote device 102 that performs cross-device multi-view shooting with the mobile phone 0 this time. For example, if the user is detected to select cell phone 3, cell phone 0 may establish a network connection with cell phone 3 as remote device 102 with cell phone 3.
After the mobile phone 0 and the mobile phone 3 establish a network connection, on one hand, as shown in fig. 6A, the mobile phone 0 may open its own camera to start collecting image data, and perform image processing on the collected image data to obtain the shot picture 1. On the other hand, as shown in fig. 6A, the mobile phone 0 may instruct the mobile phone 3 to turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain the shot picture 3. Subsequently, the mobile phone 3 can send the shot picture 3 to the mobile phone 0. In this way, the mobile phone 0 can simultaneously display the shot 1 from the mobile phone 0 and the shot 3 from the mobile phone 3 in the display interface of the camera application.
It should be noted that, unlike the first case, the mobile phone 0 may transmit the shot picture 1 to the mobile phone 3 during the process of performing the cross-device multi-scene co-shooting. As shown in fig. 6A, the mobile phone 3 may simultaneously display the shot screen 1 from the mobile phone 0 and the shot screen 3 from the mobile phone 3. At this time, the mobile phone 0 can be used as both the home device and the remote device. Specifically, the mobile phone 0 serves as a home device, the mobile phone 3 serves as a remote device, and the mobile phone 0 can simultaneously display the shot picture 1 of the mobile phone 0 and the shot picture 3 from the mobile phone 3. The mobile phone 3 is used as a home device, the mobile phone 0 is used as a remote device, and the mobile phone 3 can simultaneously display the shot picture 3 of the mobile phone 3 and the shot picture 1 from the mobile phone 0.
It should be noted that the above-described cross-device multi-view shooting scene may involve at least two devices; illustratively, three electronic devices may also apply the method of the embodiment of the present application to implement the cross-device multi-view shooting function. For example, the mobile phone 0 shown in fig. 6B may serve as the local device, with the mobile phone 3 and the television 1 as remote devices, so that the mobile phone 0 performs cross-device multi-view shooting with the mobile phone 3 and the television 1. As shown in fig. 6B, the mobile phone 0 can simultaneously display its own shot picture 1, the shot picture 3 from the mobile phone 3, and the shot picture 2 from the television 1.
Application scenario (2): the local device 101 calls the remote device 102 in a video call, live broadcast, photographing, or video recording scene. The local device 101 may call a camera of the remote device 102, or a camera and a display screen of the remote device 102, to assist the local device 101 in the video call, live broadcast, photographing, or video recording.
For example, in the embodiment of the present application, the application scenario (2) is described by taking, as an example, the local device 101 calling the remote device 102 for a video call. The preset application may be a video communication application (e.g., the WeChat (TM) application) for implementing video calls. Taking the mobile phone 0 as the local device 101 and the television 1 as the remote device 102 as an example, the mobile phone 0 may be installed with the video communication application.
In one implementation, the mobile phone 0 may call a camera and a display of the remote device 102 (e.g., the television 1) to assist the mobile phone 0 in the video call with the mobile phone 4. For example, as shown in fig. 7, the mobile phone 0 displays a video call interface 701, and a function button 702 for calling another device to assist the mobile phone 0 in performing a video call is arranged in the video call interface 701. The function button 702 may be clicked when the user wishes the cell phone 0 to invoke another device to assist the cell phone 0 in video call.
In another implementation, before requesting a video call with the mobile phone 4, the mobile phone 0 may invoke a camera and a display of the remote device 102 (e.g., the television 1) to assist the mobile phone 0 in the video call with the mobile phone 4. For example, as shown in fig. 8, in response to the user clicking a "video call" option 801 in a chat interface, the mobile phone 0 may display a confirmation window 802, which requests the user to confirm whether to use a large screen to assist the mobile phone in the video call. The "yes" button in the confirmation window 802 may be clicked when the user wishes the mobile phone 0 to invoke another device to assist in the video call.
Illustratively, the mobile phone 0 may search for candidate devices upon detecting that the user has clicked the function button 702 or the "yes" button in the confirmation window 802, and may display, for example, the dialog box 401 shown in fig. 4. For the specific method of searching for candidate devices and displaying the dialog box 401, reference may be made to the detailed description in the foregoing embodiments, which is not repeated here.
When the mobile phone 0 detects that the user selects the television 1, the mobile phone 0 may establish a network connection with the television 1 by using the television 1 as the remote device 102. After the mobile phone 0 establishes network connection with the television 1, on one hand, as shown in fig. 9, the mobile phone 0 can receive the shot picture b from the mobile phone 4 and transmit the shot picture b to the television 1. On the other hand, as shown in fig. 9, the mobile phone 0 may instruct the television 1 to turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain a shot picture c. Subsequently, the television 1 may send the shot picture c to the mobile phone 0, and the mobile phone 0 transmits the shot picture c to the mobile phone 4. Thus, the television 1 can simultaneously display the shot picture b from the mobile phone 4 and the shot picture c from the television 1; the mobile phone 4 may simultaneously display the shot screen c from the television 1 and the shot screen b from the mobile phone 4.
In other embodiments, the mobile phone 0 calls the camera and the display screen of the television 1 to assist the mobile phone 0 to display the shot picture b from the mobile phone 4 and/or the shot picture c from the television 1 in the process of performing the video call with the mobile phone 4.
Or, the mobile phone 0 calls the camera and the display screen of the television 1 to assist the mobile phone 0 to display the preset interface in the process of carrying out the video call with the mobile phone 4. The preset interface may be the main interface 901 shown in fig. 9. Alternatively, the preset interface may be a preset picture or a preset animation.
Or, the mobile phone 0 calls the camera and the display screen of the television 1 to assist the mobile phone 0 in the video call with the mobile phone 4 while the mobile phone 0 remains in a screen-off (black screen) state. In this way, the power consumption of the mobile phone 0 can be reduced.
It should be noted that, since the shooting angles of the mobile phone 0 and the television 1 are different, the shooting picture of the mobile phone 0 may be different from the shooting picture of the television 1. For example, the shot screen a shown in fig. 7 is shot by the mobile phone 0, the shot screen c shown in fig. 9 is shot by the television 1, and the shot screen a is different from the shot screen c.
In any of the above application scenarios, the home device 101 may use and control the camera of the remote device 102, and the camera of the remote device 102 is used to assist the home device 101 to complete a shooting task, a video task, or other related tasks. Of course, the remote device 102 may also use and control the camera of the local device 101, and the camera of the local device 101 is used to assist the remote device 102 to complete the above task. For example, as shown in fig. 6A, the mobile phone 3 may simultaneously display the shooting screen 1 from the mobile phone 0 and the shooting screen 3 from the mobile phone 3.
In the embodiment of the present application, the local device 101 and the remote device 102 may perform shooting in a distributed shooting scenario (e.g., multi-shot shooting). In the following embodiments, the method according to the embodiments of the present application is described in detail by taking the home device 101 as a mobile phone as an example.
For example, as shown in the above embodiments, the local device 101 (e.g., a mobile phone) may provide a dual-view mode (including a dual-view photographing mode and a dual-view recording mode). As shown in fig. 10, the mobile phone may set a function button 1001 and a function button 1002 for cross-device multi-shot in a preview interface of the camera application. Specifically, the function button 1001 is used to trigger the mobile phone to enter a multi-view photographing mode, and the function button 1002 is used to trigger the mobile phone to enter a multi-view recording mode. When a user wants to see, on the mobile phone, multi-view pictures shot by the mobile phone and the remote device 102, the function button 1001 or the function button 1002 can be clicked to start the cross-device multi-view shooting function. In the embodiment of the present application, the user's click operation on the function button 1001, the function button 1002, the function button 303, or the like is the second operation.
For example, after the mobile phone detects that the user clicks the function button 1001, a preset application (e.g., a camera application) of the mobile phone may trigger the mobile phone to search for one or more candidate devices with a shooting function nearby. For example, a preset application of a mobile phone may discover (i.e., search) one or more candidate devices having a photographing function through the Dv kit.
Also, as shown in fig. 11, the handset may display the searched one or more candidate devices in a dialog 1101. For example, the mobile phone may query the server for an electronic device having a shooting function and registered with the same account as the mobile phone, and display the queried electronic device as a candidate device in the dialog 1101. In the embodiment of the present application, a click operation of a candidate device in the dialog 1101 and the dialog 401 (also referred to as a candidate device list) by a user is a first operation.
Taking the example that the candidate devices in the dialog box 1101 include the television 1102, the television 1103, and the watch 1104, the user can select, in the dialog box 1101, the remote device that currently cooperates with the mobile phone to implement the synchronous shooting function. For example, if the mobile phone detects that the user selects the television 1102 in the dialog box 1101, it indicates that the user wishes to shoot using the mobile phone and the television 1102 simultaneously. At this time, the Dv kit of the mobile phone may use the television 1102 as a remote device of the mobile phone and establish a network connection with it.
After the handset establishes a network connection with the television 1102, the television 1102 may register with the HAL of the handset. Specifically, the mobile phone may obtain the shooting capability parameter of the television 1102 from the television 1102 based on the network connection; and creates a corresponding DMSDP HAL at the HAL according to the shooting capability parameters of the television 1102.
Wherein the shooting capability parameter of the television 1102 is used to reflect the specific shooting capability of the television 1102. The shooting capability parameter may include device capability information, which indicates the hardware capability of the television 1102 to capture images. For example, the device capability information may indicate parameters such as the number of cameras in the television 1102, the resolution of the cameras, or the model of the image processor. The device capability information may be used by the mobile phone to determine a shooting policy for the television 1102.
The shooting capability parameter of the television 1102 may be stored in a Camera Service of the mobile phone. Illustratively, the shooting capability parameter of the tv 1102 may be stored in a Camera Service of the mobile phone in the form of a Tag (Tag).
In the embodiment of the present application, the device capability information is taken as an example here, and a storage format of the shooting capability parameter of the television 1102 in the Camera Service of the mobile phone is described. For example, please refer to fig. 12, which shows a schematic diagram of a format of device capability information of a television 1102 stored in a Camera Service of a mobile phone in an embodiment of the present application.
The Camera Service of the mobile phone may use the three-layer data structure shown in fig. 12 to store the device capability information of the television 1102, storing each piece of device capability information in segments. The Section_name shown in fig. 12 is a storage address of the television 1102; a Tag_name is one of a plurality of tag names under one Section_name address; and a Tag_index is an index of the capability information of a tag name, used to indicate the storage address of the specific capability information.
For example, the device capability information of the television 1102 may include the 5 Section_names shown in fig. 12: com.huawei.capture.metadata; com.huawei.device.capabilities; android.huawei.device.parameters; android.huawei.stream.info; android.
Take one Section_name shown in fig. 12 (e.g., com.huawei.device.capabilities) as an example. As shown in fig. 12, the device capability information of the television 1102 may further include the tag names of the multiple tags stored at the storage address corresponding to this Section_name, such as device_sensor_position, hidden_camera_id, colorBarCheckUnsupport, smoothZoomSupport, and tofType. For example, device_sensor_position represents a sensor of a camera of the television 1102.
Take one Tag_name shown in fig. 12 as an example. As shown in fig. 12, the device capability information of the television 1102 may further include the indexes of the capability information of that tag name, such as CAMERA_HUAWEI_DEVICE_CAPABILITIES_START and CAMERA_HUAWEI_DEVICE_CAPABILITIES_END.
Wherein CAMERA_HUAWEI_DEVICE_CAPABILITIES_START can indicate the start address at which the capability information of one tag name of the television 1102 is stored, and CAMERA_HUAWEI_DEVICE_CAPABILITIES_END can indicate the end address. Based on the start address and the end address, the mobile phone can query the various device capabilities of the television 1102.
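The three-layer structure can be modeled, purely for illustration, as a nested lookup table; the real Camera Service stores metadata in native tag tables, so the Java types below are hypothetical:

```java
// Hypothetical model of the three-layer tag store: Section_name -> Tag_name
// -> (start, end) index range into a flat capability-storage area.
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class TagStore {
    private final Map<String, Map<String, int[]>> sections = new HashMap<>();
    private final byte[] capabilityBlob = new byte[4096]; // flat storage area

    void register(String sectionName, String tagName, int start, int end) {
        sections.computeIfAbsent(sectionName, k -> new HashMap<>())
                .put(tagName, new int[] { start, end });
    }

    // Reads the capability value between its start and end addresses,
    // mirroring the CAMERA_HUAWEI_DEVICE_CAPABILITIES_START/END indexes.
    byte[] read(String sectionName, String tagName) {
        int[] range = sections.get(sectionName).get(tagName);
        return Arrays.copyOfRange(capabilityBlob, range[0], range[1]);
    }
}
// e.g. store.register("com.huawei.device.capabilities", "device_sensor_position", 0, 16);
```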
It should be understood that, as shown in fig. 13, the TAG stored in the Camera Service (i.e., the shooting capability parameter mentioned above) may be combined with a post-processing (algorithm) module in the mobile phone to process the RAW image returned by the television 1102. The specific method by which the mobile phone processes the RAW image returned by the television 1102 in combination with the post-processing (algorithm) module according to the TAG is described in detail in the following embodiments and is not repeated here.
Subsequently, when the mobile phone runs the camera application, on one hand, the mobile phone may call its own camera to obtain each frame of shot picture, and on the other hand, the mobile phone may instruct the television 1102 to obtain a RAW image, and send the obtained RAW image to the camera application of the mobile phone through the DMSDP HAL 202b. The mobile phone may call its own camera to obtain each frame of shot picture according to a certain frame rate, and instruct the television 1102 to obtain RAW images according to the same frame rate.
The mobile phone may send a shooting instruction to the television 1102 through the DMSDP HAL to instruct the television 1102 to acquire RAW images. For example, as shown in fig. 13, the application layer 200 of the mobile phone may transmit the shooting instruction to the DMSDP HAL 202b of the HAL 202 through the framework layer 201 and the service layer 201a; the DMSDP HAL 202b of the HAL 202 may then perform step a of transmitting the shooting instruction to the television 1102. After the television 1102 receives the shooting instruction transmitted by the mobile phone through the DMSDP HAL 202b, the camera proxy service 210a of the application layer 210 may add a preset flag to the shooting instruction. The preset flag instructs the device to directly acquire the RAW data collected by the underlying camera device and pass it through, unprocessed, to the camera proxy service 210a of the application layer 210. For example, as shown in fig. 13, the camera proxy service 210a may transmit the shooting instruction with the preset flag to the Camera HAL 212a of the HAL 212 through the framework layer 211 and the service layer 211a. Upon receiving the shooting instruction, the Camera HAL 212a in the HAL 212 may call the camera device of the kernel layer 213 to execute it: the camera device collects a RAW image, and step b is performed to pass the RAW data through to the camera proxy service 210a of the application layer 210. Then, the camera proxy service 210a performs step c, transmitting the RAW image to the DMSDP HAL 202b of the mobile phone.
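The pass-through decision on the remote device side can be sketched as below; the field names and the RemoteCameraHal class are hypothetical illustrations of the preset-flag behavior, not the actual HAL interface:

```java
// Hypothetical sketch: the camera proxy service sets a preset flag, and the
// remote Camera HAL returns the RAW frame untouched for flagged instructions.
class ShootingInstruction {
    String command;         // e.g. "preview" or "record"
    boolean rawPassThrough; // the "preset flag" described above
}

class RemoteCameraHal {
    byte[] execute(ShootingInstruction instruction, byte[] sensorRawFrame) {
        if (instruction.rawPassThrough) {
            return sensorRawFrame;         // step b: RAW passes through unprocessed
        }
        return ispProcess(sensorRawFrame); // normal pipeline when no flag is set
    }

    private byte[] ispProcess(byte[] raw) {
        // stand-in for the device's usual DSP/ISP image processing
        return raw.clone();
    }
}
```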
The camera device of the kernel layer 213 may include the photosensitive device, the digital signal processor (DSP), and the image signal processor (ISP) shown in fig. 14. The photosensitive device, which may include a lens and a sensor (Sensor), is used to collect RAW images. The DSP samples the multiple frames of RAW images collected by the photosensitive device and then transmits the sampled RAW images to the ISP. In general, the ISP would perform image processing on the RAW images from the DSP. In the embodiment of the present application, however, the ISP does not perform image processing on the RAW images from the DSP, but instead passes them through to the Camera HAL 212a of the HAL 212.
It should be noted that step b in fig. 13 points directly from the kernel layer 213 to the application layer 210 only to illustrate that the television 1102 does not perform image processing on the RAW image; it does not mean that the RAW image can be transmitted from the kernel layer 213 to the application layer 210 without passing through the Camera HAL 212a of the HAL 212, the service layer 211a, and the framework layer 211. A RAW image captured by the camera device of the kernel layer 213 still needs to be transmitted to the application layer 210 through the Camera HAL 212a, the service layer 211a, and the framework layer 211. However, as shown in fig. 14, the Camera HAL 212a, the service layer 211a, and the framework layer 211 may simply pass the RAW image through without processing it.
In the embodiment of the present application, what the television 1102 transmits to the mobile phone is not a processed shot picture that can be displayed directly, but a RAW image that has not undergone image processing. After the television 1102 uses its camera to acquire a RAW image, it can transmit the RAW image directly to the mobile phone without performing image processing on it. The RAW image is an image that has not been processed by the television 1102, and its data amount is small compared with an image processed by the television 1102. Therefore, the network bandwidth occupied by the cross-device distributed shooting service can be reduced.
Then, the mobile phone can perform image processing on the RAW image from the television 1102 to obtain a corresponding shot picture. In this way, the camera application of the mobile phone can acquire not only each frame of shot picture from the mobile phone but also each frame of shot picture of the television 1102. Furthermore, the camera application can synchronously display the shooting picture of the mobile phone and the shooting picture of the television 1102 in a display interface of the camera application, so that a cross-device distributed shooting function is realized. For example, as shown in fig. 15, a preview interface of a camera application of a mobile phone includes a shooting screen 1501 of the mobile phone and a shooting screen 1502 of a television 1102.
The mobile phone processes the RAW images from the television 1102, so that the increased time delay caused by a large performance difference between the two devices can be avoided. By adopting this scheme, the time delay of the distributed shooting service can be reduced.
In the embodiment of the present application, after the mobile phone establishes a connection with the television 1102, two camera sessions (Camera Sessions) may be created based on the DMSDP HAL 202b shown in fig. 13: a control session (Control Session) for transmitting a control stream (such as the above shooting instruction) and a data session (Data Session) for transmitting a data stream (such as the above RAW data). The Control Session and the Data Session may correspond to different transmission paths (or transmission pipelines). When the DMSDP HAL 202b of the mobile phone executes step a shown in fig. 13, it transmits the shooting instruction to the camera proxy service 210a of the television 1102 through the transmission pipeline of the Control Session. When the camera proxy service 210a of the television 1102 executes step c shown in fig. 13, it transmits the data stream (e.g., the RAW data) to the DMSDP HAL 202b of the mobile phone through the transmission pipeline of the Data Session.
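A sketch of this two-session arrangement, with the transport deliberately abstracted away (the class names and the println stand-in are illustrative assumptions):

```java
// Hypothetical sketch: one session per stream type, each bound to its own
// transmission pipeline so control and data never share a channel.
class CameraSession {
    private final String pipeName;

    CameraSession(String pipeName) { this.pipeName = pipeName; }

    void send(byte[] payload) {
        // stand-in for writing to this session's dedicated transport pipe
        System.out.println(payload.length + " bytes sent on " + pipeName);
    }
}

class DmsdpSessions {
    final CameraSession controlSession = new CameraSession("control"); // step a: shooting instructions
    final CameraSession dataSession = new CameraSession("data");       // step c: RAW data stream
}
```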
When the mobile phone and the television 1102 execute the cross-device distributed shooting service, the transmission paths of the control stream and the data stream are thus kept separate, which avoids the control stream and the data stream occupying each other's bandwidth.
In some embodiments, the shooting capability parameters of the television 1102 may also include algorithm capability information, which is used to indicate the algorithm capability of the television 1102 to perform image processing on a captured image. For example, the algorithm capability information may indicate one or more image processing algorithms supported by the television 1102, such as a face recognition algorithm or an auto-focus algorithm. That is, the mobile phone may also synchronize the post-processing algorithms of the television 1102 into the mobile phone. For example, as shown in fig. 13, the service layer 201a of the mobile phone retains the post-processing algorithm of the television 1102.
In this embodiment, the service layer 201a of the mobile phone may call the retained post-processing algorithm of the television 1102, based on the algorithm capability information of the television 1102, to perform image processing on the RAW image from the DMSDP HAL 202b. The mobile phone can then synchronously render the images acquired by the mobile phone and the post-processed images from the television 1102 based on their timestamps, obtaining and displaying the shot picture 1501 of the mobile phone and the shot picture 1502 of the television 1102 shown in fig. 15.
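The timestamp-based synchronized rendering could look like the following sketch; pairing by the closest earlier timestamp is an assumption for illustration, and the same pairing would apply in the later embodiment where the mobile phone's own post-processing algorithm is used instead:

```java
// Hypothetical sketch: pair each local frame with the remote frame whose
// timestamp is the closest earlier one, then render both windows together.
import java.util.TreeMap;

class SyncRenderer {
    private final TreeMap<Long, byte[]> remoteFrames = new TreeMap<>(); // timestamp -> frame

    void onRemoteFrame(long timestampMs, byte[] processedFrame) {
        remoteFrames.put(timestampMs, processedFrame);
    }

    void onLocalFrame(long timestampMs, byte[] localFrame) {
        Long match = remoteFrames.floorKey(timestampMs); // closest earlier remote frame
        if (match != null) {
            renderBoth(localFrame, remoteFrames.remove(match));
        }
    }

    private void renderBoth(byte[] localShot, byte[] remoteShot) {
        // stand-in for drawing shot pictures 1501 and 1502 in one interface
    }
}
```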
It should be noted that because the mobile phone calls the retained post-processing algorithm of the television 1102, based on the algorithm capability information of the television 1102, to perform image processing on the RAW image acquired by the television 1102, the same or a similar effect as image processing on the television 1102 side can be obtained.
In other words, even though the RAW image collected by the television 1102 is image-processed on the mobile phone side, the processed image can achieve the same effect as image processing on the television 1102 side. That is to say, the method of the embodiment of the present application not only reduces the network bandwidth occupied by the cross-device distributed shooting service, but also restores the image effect of the remote device.
In other embodiments, the shooting capability parameters of the television 1102 may not include algorithm capability information. That is, the mobile phone may not synchronize the post-processing algorithms of the television 1102 to the mobile phone. For example, as shown in fig. 16, the post-processing algorithm of the television 1102 is not retained in the service layer 201a of the mobile phone.
In this embodiment, the service layer 201a of the mobile phone may invoke the mobile phone's own post-processing algorithm, based on the algorithm capability information of the mobile phone, to perform image processing on the RAW image from the DMSDP HAL 202b. The mobile phone can then synchronously render the images acquired by the mobile phone and the post-processed images from the television 1102 based on their timestamps, obtaining and displaying the shot picture 1501 of the mobile phone and the shot picture 1502 of the television 1102 shown in fig. 15.
It should be noted that when the mobile phone calls its own post-processing algorithm to perform image processing on the RAW image acquired by the television 1102, the exact effect of image processing on the television 1102 side cannot be restored; however, because both the image acquired by the mobile phone and the RAW image acquired by the television 1102 are processed with the mobile phone's post-processing algorithm, the image effects of the mobile phone's shot picture and the television 1102's shot picture can be kept consistent.
Moreover, under the condition that the algorithm processing capability of the mobile phone is better than that of the television 1102, compared with the case that the post-processing algorithm of the television 1102 is called to perform image processing on the RAW image acquired by the television 1102, the post-processing algorithm of the mobile phone is called to perform image processing on the RAW image, and the image effect of the shot picture of the television 1102 displayed by the mobile phone can be improved.
Further, a post-processing algorithm of the mobile phone is used for image processing of the RAW image acquired by the television 1102, and the post-processing algorithm of the television 1102 does not need to be reserved in the mobile phone. Therefore, the storage space of the mobile phone can be saved.
In addition, in the embodiment, a mobile phone is taken as an example of the home terminal device in the distributed shooting scene, and it can be understood that the home terminal device in the distributed shooting scene may also be an electronic device with the shooting function, such as a tablet computer and a television, and the embodiment of the present application does not limit this.
It should be noted that, in the embodiment, a specific method for implementing a distributed shooting function among the function modules is described by taking the Android system as an example, and it can be understood that corresponding function modules may also be set in other operating systems to implement the method. As long as the functions implemented by the respective devices and functional modules are similar to the embodiments of the present application, they are within the scope of the claims of the present application and their equivalents.
In this embodiment of the application, the image data acquired by the local device 101 is first image data, and the shot picture of the local device 101 is a first shot picture displayed in a first window. The image data acquired by the remote device 102 is second image data, and the shot picture of the remote device 102 is a second shot picture displayed in a second window. The second image data includes a RAW image.
For example, the first photographing screen may be the photographing screen 1 shown in fig. 5, and the second photographing screen may be the photographing screen 2 shown in fig. 5. The shot screen 1 and the shot screen 2 shown in fig. 5 are displayed on the same display interface.
For another example, the first photographing screen may be the photographing screen 1 shown in fig. 6A, and the second photographing screen may be the photographing screen 3 shown in fig. 6A. The shot screen 1 and the shot screen 3 shown in fig. 6A are displayed on the same display interface on both the mobile phone 0 and the mobile phone 3.
Other embodiments of the present application provide an electronic device, which may include: the touch screen, memory, and one or more processors described above. The touch screen, memory and processor are coupled. The memory is for storing computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform various functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device can refer to the structure of the mobile phone shown in fig. 2B.
The embodiment of the present application further provides a chip system, as shown in fig. 17, the chip system 1700 includes at least one processor 1701 and at least one interface circuit 1702. The processor 1701 and the interface circuit 1702 may be interconnected by wires. For example, the interface circuit 1702 may be used to receive signals from other devices, such as a memory of an electronic device. As another example, the interface circuit 1702 may be used to send signals to other devices, such as the processor 1701. Illustratively, the interface circuit 1702 may read instructions stored in memory and send the instructions to the processor 1701. The instructions, when executed by the processor 1701, may cause the electronic device to perform the various steps in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium includes computer instructions, and when the computer instructions are run on the electronic device, the electronic device is enabled to execute each function or step executed by the mobile phone in the foregoing method embodiment.
The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute each function or step executed by the mobile phone in the above method embodiments.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may, in essence or in the part contributing to the prior art, be embodied in whole or in part in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A distributed shooting method, comprising:
a first device receives a first operation in which a user selects to perform synchronous shooting with a second device;
in response to the first operation, the first device starts to acquire first image data, and the first device instructs the second device to start to acquire image data through a wireless connection;
the first device receives second image data from the second device, wherein the second image data comprises an original image acquired by a camera of the second device;
and the first device displays a first shooting picture corresponding to the first image data in a first window, and displays a second shooting picture corresponding to the second image data in a second window, wherein the first window and the second window are located in the same display interface.
2. The method of claim 1, further comprising:
the first device sends the first image data to the second device, and the first image data comprises an original image collected by a camera of the first device.
3. The method according to claim 1 or 2, wherein after the first device receives the first operation in which the user selects to perform synchronous shooting with the second device, the method further comprises:
the first device acquires a shooting capability parameter of the second device, wherein the shooting capability parameter of the second device is used for indicating an image processing algorithm supported by the second device;
wherein before the first device displays the first shot picture corresponding to the first image data in a first window and displays the second shot picture corresponding to the second image data in a second window, the method further comprises:
the first device performs image processing on the first image data according to the shooting capability parameter of the first device to obtain a first shooting picture, wherein the shooting capability parameter of the first device is used for indicating an image processing algorithm supported by the first device;
and the first device performs image processing on the second image data according to the shooting capability parameter of the second device to obtain the second shooting picture.
4. The method according to claim 1 or 2, wherein before the first device displays the first shot corresponding to the first image data in a first window and displays the second shot corresponding to the second image data in a second window, the method further comprises:
the first device performs image processing on the first image data according to the shooting capability parameter of the first device to obtain a first shooting picture, wherein the shooting capability parameter of the first device is used for indicating an image processing algorithm supported by the first device;
and the first device performs image processing on the second image data according to the shooting capability parameter of the first device to obtain the second shooting picture.
5. The method according to any one of claims 1-4, wherein the first device instructing the second device to start acquiring image data comprises:
the first device sends a shooting instruction to the second device;
the shooting instruction comprises a preset mark, the shooting instruction is used for indicating the second device to collect image data, and the preset mark is used for indicating the second device to transmit a RAW image collected by the second device to the first device.
6. The method according to any one of claims 1-5, wherein before the first device receives the first operation in which the user selects to perform synchronous shooting with the second device, the method further comprises:
the first device receives a second operation, wherein the second operation is a click operation of a user on a function button for realizing distributed shooting, and at least one of a preview interface of a camera application of the first device, a chat interface of a video communication application of the first device, a control center of the first device, a pull-down menu or a negative screen of the first device comprises the function button;
in response to a second operation for clicking the function button, the first device displays a candidate device list in the preview interface, wherein the candidate device list comprises the second device;
wherein the first operation is an operation of a user selecting the second device in the candidate device list.
7. The method according to any one of claims 1-6, wherein the synchronous shooting comprises at least one of synchronous video recording, synchronous photographing, synchronous live broadcasting, or synchronous video calling.
8. The method according to any one of claims 1-7, wherein after the first device receives the first operation in which the user selects to perform synchronous shooting with the second device, the method further comprises:
the first device creating a control session and a data session of the first device with the second device;
wherein the control session is used for transmitting a control command between the first device and the second device, the control command comprises a shooting instruction for instructing the second device to acquire an image, and the data session is used for transmitting the second image data from the second device.
9. An electronic device, wherein the electronic device is a first device, the electronic device comprising: one or more cameras, one or more processors, a display screen, a memory, and a communication module, wherein the one or more cameras, the display screen, the memory, the communication module, and the one or more processors are coupled;
wherein the memory is configured to store computer program code, the computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method according to any one of claims 1-8.
10. A computer-readable storage medium having instructions stored therein which, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1-8.
CN202210973275.8A 2021-01-30 2021-01-30 Distributed shooting method, electronic device and medium Pending CN115514882A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210973275.8A CN115514882A (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic device and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210973275.8A CN115514882A (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic device and medium
CN202110131870.2A CN114845035B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110131870.2A Division CN114845035B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN115514882A (en) 2022-12-23

Family

ID=82561398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210973275.8A Pending CN115514882A (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic device and medium

Country Status (2)

Country Link
CN (1) CN115514882A (en)
WO (1) WO2022160985A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320783B (en) * 2022-09-14 2023-11-14 荣耀终端有限公司 Method for capturing images in video and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336677A (en) * 2013-06-25 2013-10-02 北京小米科技有限责任公司 Method, device and system for outputting images to display equipment
US20130329124A1 (en) * 2012-06-08 2013-12-12 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN104284234A (en) * 2014-10-17 2015-01-14 惠州Tcl移动通信有限公司 Method and system for sharing synchronous images among plurality of terminals
CN105493621A (en) * 2014-08-04 2016-04-13 华为技术有限公司 Terminal, server, and terminal control method
CN108718383A (en) * 2018-04-24 2018-10-30 天津字节跳动科技有限公司 Cooperate with image pickup method, device, storage medium and terminal device
CN109769087A (en) * 2017-11-09 2019-05-17 中兴通讯股份有限公司 Image pickup method, device and the mobile terminal remotely taken a group photo
CN110224804A (en) * 2018-03-01 2019-09-10 国民技术股份有限公司 Data transfer control method, terminal, base station and computer storage medium
CN110602805A (en) * 2019-09-30 2019-12-20 联想(北京)有限公司 Information processing method, first electronic device and computer system
CN110944109A (en) * 2018-09-21 2020-03-31 华为技术有限公司 Photographing method, device and equipment
CN111083379A (en) * 2019-12-31 2020-04-28 维沃移动通信(杭州)有限公司 Shooting method and electronic equipment

Also Published As

Publication number Publication date
CN114845035A (en) 2022-08-02
WO2022160985A1 (en) 2022-08-04

Similar Documents

Publication Publication Date Title
WO2022257977A1 (en) Screen projection method for electronic device, and electronic device
CN110958475A (en) Cross-device content projection method and electronic device
CN114697527B (en) Shooting method, system and electronic equipment
CN112130788A (en) Content sharing method and device
US11895713B2 (en) Data sharing and instruction operation control method and system
CN114554000A (en) Camera calling method and system and electronic equipment
WO2022156721A1 (en) Photographing method and electronic device
CN116489268A (en) Equipment identification method and related device
WO2022143883A1 (en) Photographing method and system, and electronic device
WO2022166521A1 (en) Cross-device collaborative photographing method, related apparatus, and system
WO2022160985A1 (en) Distributed photographing method, electronic device, and medium
CN113747056A (en) Photographing method and device and electronic equipment
WO2023231697A1 (en) Photographing method and related device
WO2022222773A1 (en) Image capture method, and related apparatus and system
CN114466131B (en) Cross-device shooting method and related device
CN114845035B (en) Distributed shooting method, electronic equipment and medium
CN114928898A (en) Method and device for establishing session based on WiFi direct connection
CN111131019B (en) Multiplexing method and terminal for multiple HTTP channels
CN114584817B (en) Screen projection method and system
CN114827439A (en) Panoramic image shooting method and electronic equipment
WO2022206769A1 (en) Method for combining content, electronic device, and system
WO2022179327A1 (en) Content storage method, electronic device, and system
CN115914983A (en) Data interaction method, electronic device and computer-readable storage medium
CN114615362A (en) Camera control method, device and storage medium
CN116419362A (en) Data transmission method, terminal device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination