CN114845035B - Distributed shooting method, electronic equipment and medium - Google Patents

Distributed shooting method, electronic equipment and medium

Info

Publication number
CN114845035B
CN114845035B (application CN202110131870.2A)
Authority
CN
China
Prior art keywords
shooting
camera
mobile phone
image data
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110131870.2A
Other languages
Chinese (zh)
Other versions
CN114845035A (en)
Inventor
冯可荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110131870.2A priority Critical patent/CN114845035B/en
Priority to CN202210973275.8A priority patent/CN115514882B/en
Priority to PCT/CN2021/137917 priority patent/WO2022160985A1/en
Publication of CN114845035A publication Critical patent/CN114845035A/en
Application granted granted Critical
Publication of CN114845035B publication Critical patent/CN114845035B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72415User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories for remote control of appliances
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A distributed shooting method, an electronic device, and a medium, which can reduce the network bandwidth occupied by data transmission between a remote device and a local device when distributed shooting is implemented. In the method, the first device may receive a first operation in which a user selects to perform synchronous shooting with a second device. In response to the first operation, the first device may begin acquiring first image data, and the first device may also instruct the second device to begin acquiring image data. Thereafter, the first device may receive second image data from the second device, the second image data including an original image captured by a camera of the second device. Finally, the first device may display a first shot corresponding to the first image data in a first window, and display a second shot corresponding to the second image data in a second window. The first window and the second window are located on the same display interface.

Description

Distributed shooting method, electronic equipment and medium
Technical Field
The embodiment of the application relates to the technical field of photographing, in particular to a distributed photographing method, electronic equipment and a medium.
Background
With the development of internet technology, a plurality of electronic devices including video cameras may constitute a distributed camera system. For example, the electronic device may be a mobile phone, a tablet computer, a smart watch, a smart television, or the like. In the distributed camera system, one electronic device (called a local device) can not only present a picture taken by a camera of the electronic device to a user, but also acquire and display pictures taken by cameras of other electronic devices (called remote devices).
Specifically, the local device may acquire the picture of the remote device as follows: (1) the local device sends a control command to the remote device, instructing the remote device to collect an image and return a preview result to the local device; (2) after receiving the control command, the remote device collects an image and performs image signal processing (ISP) on the collected image to obtain the preview result; (3) the remote device sends the preview result to the local device.
In this cross-device image data transmission process, the remote device sends the processed picture (namely the preview result) to the local device. This increases the data amount, so a larger network bandwidth needs to be occupied and network resources are wasted.
Disclosure of Invention
The application provides a distributed shooting method, electronic equipment and a medium, which can reduce network bandwidth occupied by data transmission between remote equipment and local equipment in the process of realizing distributed shooting.
In a first aspect, the present application provides a distributed photographing method, which is applied to a first device. In the method, the first device may receive a first operation that a user selects to perform synchronous shooting with the second device. In response to the first operation, the first device may begin acquiring first image data and instruct the second device to begin acquiring image data. Thereafter, the first device may receive second image data from the second device, the second image data comprising the original image. Finally, the first device may display a first shot corresponding to the first image data in the first window, and display a second shot corresponding to the second image data in the second window. The first window and the second window are located on the same display interface.
Note that the above original image may also be referred to as a RAW image, where RAW means "unprocessed". That is, the above original image is an image that has not been processed by the second device. The data amount of the original image is smaller than that of an image processed by the second device. In this way, the network bandwidth occupied by the cross-device distributed shooting service can be reduced.
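For illustration only, the first-aspect flow on the first device can be sketched as follows. The patent does not provide code, so all interfaces, classes, and method names in this Java sketch are hypothetical stand-ins for the devices' real camera and transport components:

```java
// Illustrative only: these interfaces stand in for real camera and transport components,
// which the patent does not specify at code level. All names are hypothetical.
interface LocalCamera { void startCapture(); byte[] nextRawFrame(); }
interface RemotePeer  { void sendCaptureInstruction(boolean rawPassThrough); byte[] nextRawFrame(); }
interface Window      { void show(byte[] processedFrame); }

public class SyncShootingController {
    private final LocalCamera localCamera;
    private final RemotePeer remotePeer;
    private final Window firstWindow;
    private final Window secondWindow;

    public SyncShootingController(LocalCamera camera, RemotePeer peer, Window first, Window second) {
        this.localCamera = camera;
        this.remotePeer = peer;
        this.firstWindow = first;
        this.secondWindow = second;
    }

    // First operation: the user chose synchronous shooting with the second device.
    public void onSyncShootingSelected() {
        localCamera.startCapture();              // start acquiring first image data
        remotePeer.sendCaptureInstruction(true); // instruct the second device; request RAW pass-through
    }

    // Called once per display refresh; both windows belong to the same display interface.
    public void renderOnce() {
        firstWindow.show(process(localCamera.nextRawFrame()));  // first shot in the first window
        secondWindow.show(process(remotePeer.nextRawFrame()));  // second shot in the second window
    }

    private byte[] process(byte[] raw) {
        // Placeholder for local image processing of a RAW frame.
        return raw;
    }
}
```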
In one possible design manner of the first aspect, the method may further include: the first device transmits first image data to the second device, the first image data including an original image. The original image (also referred to as a RAW image) is an image that has not been processed by the first device.
Thus, not only the first device can simultaneously display the photographing screen of the first device and the photographing screen of the second device, but also the second device can simultaneously display the photographing screen of the first device and the photographing screen of the second device.
In another possible design manner of the first aspect, after the first device receives the first operation that the user selects to perform synchronous shooting with the second device, the method of the present application may further include: the first device obtains a shooting capability parameter of the second device, where the shooting capability parameter is used to indicate an image processing algorithm supported by the second device.
Then, the first device can perform image processing on the first image data according to shooting capability parameters of the first device to obtain a first shooting picture; and carrying out image processing on the second image data according to the shooting capability parameters of the second equipment to obtain a second shooting picture. The shooting capability parameter of the first device is used to indicate an image processing algorithm supported by the first device.
It should be understood that, the first device performs image processing on the RAW image acquired by the second device based on the algorithm capability of the second device (i.e., the image processing algorithm supported by the second device indicated by the shooting capability parameter of the second device), which may obtain the same or similar effect as the image processing performed on the RAW image by the second device. In this way, the picture effect of the second device can be restored at the first device.
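As an illustrative sketch of this design, the shooting capability parameter could be modeled as a per-device list of supported image processing algorithms, with RAW frames from each source processed using that source's own algorithm list. The class and field names below, and the algorithm strings, are assumptions; the patent only states that the parameter indicates which algorithms a device supports:

```java
import java.util.List;

// Hypothetical representation of a shooting capability parameter: the patent only states
// that it indicates the image processing algorithms supported by a device.
class ShootingCapability {
    final String deviceId;
    final List<String> supportedAlgorithms; // e.g. "face-detect", "beauty" (illustrative names)

    ShootingCapability(String deviceId, List<String> supportedAlgorithms) {
        this.deviceId = deviceId;
        this.supportedAlgorithms = supportedAlgorithms;
    }
}

class DualSourceProcessor {
    private final ShootingCapability firstDevice;
    private final ShootingCapability secondDevice;

    DualSourceProcessor(ShootingCapability firstDevice, ShootingCapability secondDevice) {
        this.firstDevice = firstDevice;
        this.secondDevice = secondDevice;
    }

    // RAW from the first device is processed with the first device's own algorithms; RAW from
    // the second device is processed with the algorithms the peer reported, so the result
    // approximates what the peer itself would have rendered.
    byte[] processFirst(byte[] raw)  { return applyAlgorithms(raw, firstDevice.supportedAlgorithms); }
    byte[] processSecond(byte[] raw) { return applyAlgorithms(raw, secondDevice.supportedAlgorithms); }

    private byte[] applyAlgorithms(byte[] raw, List<String> algorithms) {
        // Placeholder: a real pipeline would dispatch each named algorithm to an ISP or
        // post-processing step; here the frame is returned unchanged.
        return raw;
    }
}
```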
In another possible design manner of the first aspect, the first device may perform image processing on the first image data according to a shooting capability parameter of the first device, to obtain the first shooting picture; and carrying out image processing on the second image data according to the shooting capability parameters of the first equipment to obtain the second shooting picture. The shooting capability parameter of the first device is used for indicating an image processing algorithm supported by the first device.
It should be understood that, based on the algorithmic capability of the first device, the first device performs image processing on RAW images acquired by the first device and the second device, so that the image effect of the shot image of the second device and the shot image of the first device can be consistent.
In addition, in the design manner, the first device does not need to acquire and store the shooting capability parameters of the second device. In this way, the network bandwidth occupied by the distributed shooting service across the devices can be further reduced, and the storage space of the first device can be saved.
In another possible design manner of the first aspect, the first device instructs the second device to start collecting image data, including: the first device sends a shooting instruction to the second device.
The shooting instruction includes a preset mark. The shooting instruction is used to instruct the second device to collect image data, and the preset mark is used to instruct the second device to transmit the original image collected by the second device to the first device. After the second device receives the shooting instruction including the preset mark, the second device does not process the collected original image; instead, it passes the collected original image through to the first device.
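A minimal sketch of what such a shooting instruction might look like on the wire is given below. The opcode value, field layout, and class name are assumptions made for illustration; the patent only requires that the instruction carries a preset mark:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical wire format for the shooting instruction; the patent only requires that the
// instruction carries a preset mark telling the second device to pass RAW frames through.
class ShootingInstruction {
    static final byte OPCODE_START_CAPTURE = 0x01; // illustrative opcode

    final boolean rawPassThrough; // the "preset mark"

    ShootingInstruction(boolean rawPassThrough) {
        this.rawPassThrough = rawPassThrough;
    }

    byte[] encode() throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeByte(OPCODE_START_CAPTURE);
        out.writeBoolean(rawPassThrough); // when true, the second device skips its own ISP processing
        return buffer.toByteArray();
    }
}
```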
In another possible design of the first aspect, the first device may receive a second operation before the first device receives the first operation. The second operation is a click operation performed by the user on a function button for implementing distributed shooting, where the function button is included in at least one of a preview interface of a camera application of the first device, a chat interface of a video communication application of the first device, a control center of the first device, a drop-down menu, or the leftmost home screen (the minus-one screen). In response to the second operation of clicking the function button, the first device displays a candidate device list in the preview interface. The candidate device list includes the second device. The first operation is an operation in which the user selects the second device in the candidate device list.
In another possible design manner of the first aspect, the synchronous shooting includes at least one of synchronous video recording, synchronous shooting, synchronous live broadcasting or synchronous video call. That is, the method of the present application may be applied to the process of video recording, photographing, live broadcasting, or video communication with other devices.
In another possible design manner of the first aspect, after the first device receives the first operation that the user selects to perform synchronous shooting with the second device, the method of the present application further includes: the first device creates a control session and a data session of the first device with the second device.
The control session is used for transmitting a control command between the first device and the second device, the control command comprises a shooting instruction for instructing the second device to acquire an image, and the data session is used for transmitting the second image data from the second device.
In the process in which the first device and the second device execute the cross-device distributed shooting service, the transmission paths of the control flow and the data flow can be distinguished. In this way, the problem of the control flow and the data flow competing with each other for bandwidth can be avoided.
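For illustration, the control session and the data session could be realized as two independent transport connections, as in the sketch below. The use of TCP sockets and separate ports is an assumption for the example and is not specified by the patent:

```java
import java.io.IOException;
import java.net.Socket;

// Illustrative separation of the control session and the data session as two independent
// connections; the Socket transport and the separate ports are assumptions, not patent text.
class CameraSessions implements AutoCloseable {
    private final Socket controlSession; // carries control commands, e.g. shooting instructions
    private final Socket dataSession;    // carries the second image data (RAW frames) back

    CameraSessions(String peerAddress, int controlPort, int dataPort) throws IOException {
        this.controlSession = new Socket(peerAddress, controlPort);
        this.dataSession = new Socket(peerAddress, dataPort);
    }

    Socket control() { return controlSession; }

    Socket data() { return dataSession; }

    @Override
    public void close() throws IOException {
        controlSession.close();
        dataSession.close();
    }
}
```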
In a second aspect, the present application provides an electronic device, which is a first device, including: one or more cameras, one or more processors, a display screen, memory, and a communication module. The camera, the display screen, the memory, the communication module and the processor are coupled.
Wherein the memory is for storing computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method according to the first aspect and any one of its possible designs.
In a third aspect, the present application provides a chip system for use in an electronic device comprising a display screen, a memory and a communication module; the system-on-chip includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a circuit; the interface circuit is configured to receive a signal from a memory of the electronic device and to send the signal to the processor, the signal including computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any one of its possible designs. The electronic device is a first device.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any one of its possible designs.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect and any one of its possible designs.
In a sixth aspect, the present application provides a distributed photographing system, which includes the first device according to the second aspect and any possible design manner, and the second device according to the first aspect and any possible design manner.
It will be appreciated that the advantages achieved by the electronic device according to the second aspect, the chip system according to the third aspect, the computer storage medium according to the fourth aspect, the computer program product according to the fifth aspect, and the distributed photographing system according to the sixth aspect provided above may refer to the advantages in any one of the possible designs of the first aspect and the advantages will not be repeated herein.
Drawings
Fig. 1 is a schematic diagram of a distributed shooting system according to an embodiment of the present application;
Fig. 2A is a functional schematic diagram of a distributed shooting method according to an embodiment of the present application;
Fig. 2B is a schematic hardware structure of a mobile phone according to an embodiment of the present application;
Fig. 2C is a schematic diagram of a software architecture of the local device 101 or the remote device 102;
Fig. 2D is a schematic diagram of a software architecture of the local device 101 and the remote device 102;
Fig. 2E is a schematic software architecture diagram of a local device 101 and a remote device 102 according to an embodiment of the present application;
Fig. 2F is a simplified diagram of the software architecture shown in Fig. 2E;
Fig. 3A is a schematic diagram of a distributed shooting interface according to an embodiment of the present application;
Fig. 3B is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 4 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 5 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 6A is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 6B is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 7 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 8 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 9 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 10 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 11 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a data structure of shooting capability parameters according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a data transmission flow between a local device and a remote device according to an embodiment of the present application;
Fig. 14 is a schematic diagram of a data processing and transmission flow in a remote device according to an embodiment of the present application;
Fig. 15 is a schematic diagram of another distributed shooting interface according to an embodiment of the present application;
Fig. 16 is a schematic diagram of a data transmission flow between a local device and a remote device according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The distributed photographing method provided by the embodiment of the application can be applied to the distributed photographing system 100 shown in fig. 1. As shown in fig. 1, the distributed photographing system 100 may include a home terminal apparatus 101 and N remote terminal apparatuses 102, where N is an integer greater than 0. The local end device 101 and any one of the remote end devices 102 can communicate in a wired manner or can communicate in a wireless manner. Where the home device 101 is a first device and the remote device 102 is a second device.
Illustratively, a wired connection may be established between the local device 101 and the remote device 102 using a universal serial bus (USB). As another example, a wireless connection between the local device 101 and the remote device 102 may be established using global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), 5G and subsequent standard protocols, Bluetooth, wireless fidelity (Wi-Fi), NFC, voice over internet protocol (VoIP), or a communication protocol supporting a network slicing architecture.
In some embodiments, the local device 101 may communicate directly with the remote device 102 using the wireless connection described above. In other embodiments, the local device 101 may communicate with the remote device 102 through a cloud server using the wireless connection described above, which is not limited by the embodiments of the present application.
Wherein, one or more cameras can be arranged in the local end device 101 and the remote end device 102. The local device 101 may use its own camera to collect image data, and the remote device 102 may also use its own camera to collect image data. In the embodiment of the present application, the local device 101 and the remote device 102 may collect image data by using their own cameras at the same time, and the remote device 102 may send the image data to the local device 101, where the local device 101 displays the image data from the local device 101 and the remote device 102 at the same time, so as to implement a cross-device distributed photographing function.
As shown in fig. 2A, the local device 101 may include a preset application (APP). The local device 101 may implement the above cross-device distributed shooting function through this application. The preset application may be, for example, a system camera application or a third-party camera application. In the embodiments of the present application, the system camera application and a third-party camera application are collectively referred to as the camera application. A system application may also be referred to as an embedded application, which is an application program provided as part of the implementation of an electronic device (e.g., the local device 101 or the remote device 102). A third-party application may also be referred to as a downloadable application. A downloadable application is an application that can provide its own internet protocol multimedia subsystem (IMS) connection. A downloadable application may be pre-installed in the electronic device, or may be downloaded by a user and installed in the electronic device.
Specifically, as shown in fig. 2A, the local device 101 may be connected to the remote device 102, and may control the remote device 102 to turn on the camera of the remote device 102 through the connection between the local device 101 and the remote device 102. As shown in fig. 2A, the local device 101 may implement image preview of the remote device 102 and control the preview effect. As shown in fig. 2A, the local device 101 may control the remote device 102 to take a photo and control the photographing effect of the remote device 102. As shown in fig. 2A, the local device 101 may control the remote device 102 to record video and control the video recording effect of the remote device 102. When the above cross-device distributed shooting function is implemented, the local device 101 may display the shot pictures of the local device 101 and the remote device 102, as shown in fig. 2A. In this way, the user can see on the local device 101 not only the shot picture of the local device 101 but also the shot picture of the remote device 102. Of course, the local device 101 may also control the remote device 102 to turn off the camera of the remote device 102 through the connection between the local device 101 and the remote device 102.
By way of example, the local device 101 (or the remote device 102) may be a mobile phone, a tablet computer, a television (also referred to as a smart television, a smart screen, or a large-screen device), a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a camera (such as a digital camera), a video camera (such as a digital video camera), a netbook, a personal digital assistant (PDA), a wearable electronic device (such as a smart watch, a smart band, or smart glasses), a vehicle-mounted device, a virtual reality device, or the like that has a shooting function, which is not limited in the embodiments of the present application.
For example, taking a mobile phone as the local device 101 in the distributed photographing system 100, fig. 2B shows a schematic structural diagram of the mobile phone. As shown in fig. 2B, the handset may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, and the like.
It will be appreciated that the structure illustrated in the embodiments of the present application is not limited to a specific configuration of the mobile phone. In other embodiments of the application, the handset may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to a cell phone. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone, including wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
In some embodiments, the antenna 1 and the mobile communication module 150 of the handset are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the handset can communicate with a network and other devices through wireless communication technology.
The cell phone implements display functions through the GPU, the display 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the handset may include 1 or N display screens 194, N being a positive integer greater than 1.
The cell phone may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like. The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the handset may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect to an external memory card, such as a Micro SD card, to extend the memory capabilities of the handset. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the cellular phone and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the handset (e.g., audio data, phonebook, etc.), etc. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The handset may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The sensor module 180 may include therein a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
Of course, the mobile phone may further include a charging management module, a power management module, a battery, a key, an indicator, 1 or more SIM card interfaces, and the embodiment of the present application is not limited in this respect.
Fig. 2C is a software architecture block diagram of a mobile phone according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, which are, from top to bottom, an application layer, an application framework layer, Android Runtime and system libraries, a hardware abstraction layer (HAL), and a kernel layer. It should be understood that the Android system is used here for illustration; in other operating systems (e.g., HarmonyOS, iOS, etc.), the scheme of the present application can be implemented as long as the functions implemented by the functional modules are similar to those of the embodiments of the present application.
The application layer may include a series of application packages.
As shown in fig. 2C, applications such as conversation, memo, browser, contacts, gallery, calendar, map, bluetooth, music, video, short message, etc. may be installed in the application layer.
In the embodiment of the present application, an application having a photographing function, for example, a camera application, may be installed in the application layer. Of course, when other applications need to use the shooting function, the camera application may also be called to realize the shooting function.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
For example, the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, etc., to which embodiments of the present application are not limited in any way.
For example, the window manager described above is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like. The content provider is used to store and retrieve data and make it accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, and the like. The view system described above may be used to build the display interface of an application. Each display interface may be composed of one or more controls. In general, controls may include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets. The resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages, which may automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, a text message is presented in the status bar, a prompt tone is emitted, the device vibrates, or an indicator light blinks.
As shown in FIG. 2C, android runtime includes a core library and virtual machines. Android runtime is responsible for scheduling and management of the android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL).
The surface manager is used for managing the display subsystem and providing fusion of 2D and 3D layers for a plurality of application programs. Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer, which is located below the HAL, is the layer between hardware and software. The kernel layer at least comprises display drive, camera drive, audio drive, sensor drive, etc., which is not limited in this embodiment of the present application.
In an embodiment of the present application, still as shown in fig. 2C, taking the camera application as an example, a camera service (CameraService) is provided in the application framework layer. The camera application may start CameraService by calling a preset API. During operation, CameraService can interact with the Camera HAL in the HAL (hardware abstraction layer). The Camera HAL is responsible for interacting with the hardware devices (such as the camera) that implement the shooting function in the mobile phone. On the one hand, the Camera HAL hides the implementation details of the related hardware devices (such as specific image processing algorithms); on the other hand, it can provide the Android system with interfaces for calling the related hardware devices.
For example, when the camera application is running, it may send the control commands issued by the user (e.g., preview, zoom, photographing, or video recording instructions) to CameraService. On the one hand, CameraService may send the received control command to the Camera HAL, so that the Camera HAL can call the camera driver in the kernel layer according to the received control command, and the camera driver drives hardware devices such as the camera to capture image data in response to the control command. For example, the camera may transmit each captured frame of image data to the Camera HAL at a certain frame rate through the camera driver. For the transfer process of control commands inside the operating system, refer to the specific transfer process of the control flow in fig. 2C.
On the other hand, after CameraService receives the control command, it may determine the current shooting policy according to the received control command; the shooting policy specifies the image processing tasks to be performed on the captured image data. For example, in preview mode, CameraService may set image processing task 1 in the shooting policy for implementing the face detection function. For another example, if the user turns on the beauty function in preview mode, CameraService may also set image processing task 2 in the shooting policy for implementing the beauty function. Further, CameraService may send the determined shooting policy to the Camera HAL.
After the Camera HAL receives each frame of image data captured by the camera, it can perform the corresponding image processing tasks on the image data according to the shooting policy issued by CameraService, so as to obtain each frame of the shot picture after image processing. For example, the Camera HAL may perform image processing task 1 on each received frame of image data according to shooting policy 1, so as to obtain the corresponding shot picture of each frame. When shooting policy 1 is updated to shooting policy 2, the Camera HAL may perform image processing task 2 and image processing task 3 on each received frame of image data according to shooting policy 2, so as to obtain the corresponding shot picture of each frame.
Subsequently, the Camera HAL may report each frame of the image-processed shot picture to the camera application through CameraService, and the camera application may display each frame of the shot picture in the display interface, or save each frame of the shot picture in the mobile phone as a photo or a video. For the transfer process of the shot picture inside the operating system, refer to the specific transfer process of the data stream in fig. 2C.
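The shooting-policy mechanism described in the preceding paragraphs can be sketched as a list of per-frame image processing tasks that the camera service selects and the HAL-side pipeline applies. The following Java fragment is illustrative only; it does not reflect the real CameraService or Camera HAL interfaces, and the task contents are placeholders:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Rough model of the shooting policy described above: the camera service selects which image
// processing tasks apply (e.g. face detection in preview, beauty when enabled), and the
// HAL-side pipeline runs them on every frame. Names and task contents are illustrative.
class ShootingPolicy {
    private final List<UnaryOperator<byte[]>> imageProcessingTasks = new ArrayList<>();

    ShootingPolicy addTask(UnaryOperator<byte[]> task) {
        imageProcessingTasks.add(task);
        return this;
    }

    // Applied by the HAL-side pipeline to each received frame of image data.
    byte[] applyTo(byte[] frame) {
        byte[] out = frame;
        for (UnaryOperator<byte[]> task : imageProcessingTasks) {
            out = task.apply(out);
        }
        return out;
    }

    // Example: build the policy for preview mode.
    static ShootingPolicy previewPolicy(boolean beautyEnabled) {
        ShootingPolicy policy = new ShootingPolicy()
                .addTask(frame -> frame); // image processing task 1: face detection (placeholder)
        if (beautyEnabled) {
            policy.addTask(frame -> frame); // image processing task 2: beauty filter (placeholder)
        }
        return policy;
    }
}
```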
Referring to fig. 2D, a software architecture diagram of the local device 101 and the remote device 102 is shown. The embodiment of the present application herein describes a scheme in which the local device 101 and the remote device 102 implement cross-device distributed shooting in conjunction with the software architecture shown in fig. 2D.
The home device 101 has a device discovery and remote registration function. For example, the local device 101 may discover the remote device 102, such as the television 1, watch 2, and cell phone 3 shown in fig. 4. When the user selects the television 1 as the remote device 102 and performs cross-device distributed photographing with the local device 101 (e.g., the user selects the television 1 shown in fig. 4), the local device 101 may register the remote device 102 (e.g., the television 1) with the local device 101.
For example, the local device 101 may create a distributed mobile sensing development platform (DMSDP) HAL, also referred to as a virtual Camera HAL, for the remote device 102 at the hardware abstraction layer (HAL) of the local device 101. Unlike the conventional Camera HAL in the local device 101, the DMSDP HAL does not correspond to an actual hardware device of the local device 101, but corresponds to the remote device 102 currently connected to the local device 101. The DMSDP HAL shown in fig. 2D is a HAL created by the local device 101 according to the shooting capability parameters of the remote device 102. The local device 101 can transmit and receive data to and from the remote device 102 through the DMSDP HAL, and use the remote device 102 as a virtual device of the local device 101 to complete the cross-device distributed shooting service in cooperation with the remote device 102.
The following describes a specific flow of the distributed shooting service performed by the local device 101 and the remote device 102 in cooperation. As shown in fig. 2D, the process may include steps ① - ⑩.
Step ①: the application layer of the local device 101 issues control commands for controlling the camera of the remote device 102 to the camera service of the service layer. Step ②: the service layer of the home terminal apparatus 101 sends the control command to DMSDP HAL of the HAL. Step ③: DMSDP HAL of the home device 101 transmits control commands to the remote device 102. Step ④: after receiving the control command, the remote device 102 transmits the control command to the service layer. Step ⑤: the service layer of the remote device 102 transmits control commands to the CAMERA HAL of the HAL. Wherein CAMERA HAL of the remote device 102 corresponds to the actual hardware device of the remote device 102. In this way, the HAL of the remote device 102 may call its bottom layer (e.g., kernel layer Kernel) to execute the corresponding shooting task (e.g., start the camera, switch the camera, or adjust the relevant parameters of the camera) according to the control command. The data transmitted in steps ① - ⑤ above may be referred to as a control flow.
Step ⑥: the HAL of the remote device 101 uploads the preview stream to the camera service of the service layer. The preview stream includes one or more preview images acquired by the underlying Camera device of the remote device 102 and processed by an underlying image signal processor (IMAGE SIGNAL processor, ISP). Step ⑦: the service layer of the remote device 101 transfers the preview stream to the application layer of the remote device 102. Step ⑧: the application layer of the remote device 101 transmits the preview stream to DMSDP HAL of the local device 101. Step ⑨: DMSDP HAL of the home device 101 transmits the preview stream to the camera service of the service layer. Step ⑩: the service layer of the home terminal apparatus 101 reports the preview stream to the application layer. In this way, the camera application of the home device 101 may present the user with a preview stream from the remote device 102 (e.g., shot 2, shot 3, or shot c described above). Of course, the camera application of the home terminal apparatus 101 may also present the preview stream from the home terminal apparatus 101 to the user, and the specific flow may refer to the related description of the conventional technology, which is not repeated herein. The data transmitted in the above steps ⑥ to ⑩ may be referred to as a preview stream or a data stream.
It should be noted that, as shown in fig. 2D, the service layer of the local device 101 or the remote device 102 may be included in the framework layer, and the service layer may be implemented in the framework layer. Of course, the service layer of the local device 101 or the remote device 102 may also be independent of the framework layer, which is not limited by the embodiment of the present application.
On the one hand, although the cross-device distributed shooting service can be completed by executing the above flow, in the above cross-device preview stream transmission process, the preview stream transmitted from the remote device 102 to the local device 101 is a finished picture obtained through image processing (i.e., a preview result that can be displayed directly). Its data amount is larger, so a larger network bandwidth needs to be occupied and network resources are wasted.
On the other hand, image processing performance differs from device to device. For example, the image processing performance of the remote device 102 may be lower than that of the local device 101, so the remote device 102 produces processed frames less efficiently. A larger delay is then needed for the local device 101 to simultaneously display the shot pictures from the local device 101 and the remote device 102. In addition, the above flow does not distinguish between the transmission paths of the control stream and the data stream (or preview stream), and the control stream and the data stream may compete with each other for bandwidth.
The embodiment of the application provides a distributed shooting method, in the process that a local device 101 and a remote device 102 execute a cross-device distributed shooting service, the local device 101 can directly acquire an original image acquired by the remote device 102 from the remote device 102. The original image may also be referred to as a RAW image. RAW may be translated into "unprocessed". That is, the original image acquired by the remote device 102 is an image that has not been processed by the remote device 102. In the following embodiments, RAW images are used to represent original images, and the method according to the embodiments of the present application is described.
Wherein the amount of data of the RAW image is smaller compared to the image processed by the remote device 102. In this way, the network bandwidth occupied by the distributed photographing traffic across devices can be reduced.
Moreover, the local device 101 processes the RAW image from the remote device 102, so that the problem of increased delay caused by a large performance difference between the two devices can be avoided. With this scheme, the delay of the distributed shooting service can be reduced. Further, in the process in which the local device 101 and the remote device 102 execute the cross-device distributed shooting service, the transmission paths of the control stream and the data stream can be distinguished. In this way, the problem of the control stream and the data stream competing with each other for bandwidth can be avoided.
Fig. 2E is a schematic diagram of a software architecture of a local device 101 and a remote device 102 according to an embodiment of the present application. As shown in fig. 2E, the home terminal apparatus 101 may include: an application layer 200, a framework layer 201, a service layer 201a, and a HAL 202; the remote device 102 may include: an application layer 210, a framework layer 211, a service layer 211a and a HAL 212. Wherein the service layer may be implemented in a framework layer. For example, as shown in fig. 2E, service layer 201a may be implemented in framework layer 201 and service layer 211a may be implemented in framework layer 211.
The application layer may include a series of application packages, among other things. For example, the application layer 200 may include a plurality of applications, such as a camera application and Dv application 200a. Of course, when other applications need to use the shooting function, the camera application may also be called to realize the shooting function. As another example, the application layer 210 includes a camera proxy service 210a for supporting the remote device 102 to complete distributed photographing services in cooperation with the local device 101. The camera proxy service 210a may also be referred to as a camera proxy application, as embodiments of the application are not limited in this regard.
In the present application, a device virtualization (device virtualization, dv) application 200a for implementing a distributed photographing function may be installed in the application layer 200. The Dv application 200a may be running as a system application resident in the home device 101. Alternatively, the functions implemented by the Dv application 200a may be executed in the local device 101 in the form of a system service.
The framework layer provides APIs and programming frameworks for the applications of the application layer. For example, both the framework layer 201 and the framework layer 211 may provide Camera API 2.0. For the framework layer 201 and the framework layer 211, refer to the detailed description of the application framework layer in fig. 2C in the embodiments of the present application, which is not repeated here.
As shown in fig. 2E, taking the camera application as an example, CameraKit 201c is provided in the framework layer 201. CameraKit 201c may encapsulate a plurality of camera modes, for example, the photographing mode, video recording mode, and dual-view mode shown in fig. 2E.
As shown in fig. 2E, a camera service (CameraService) 201a1 may be provided in the service layer 201a. The camera application may start CameraService 201a1 by calling a preset API (e.g., Camera API 2.0). A Camera HAL 202a and a DMSDP HAL 202b are provided in the HAL 202. During operation, CameraService 201a1 can interact with the Camera HAL 202a and/or the DMSDP HAL 202b in the HAL 202.
The Camera HAL 202a is responsible for interacting with the hardware devices (such as the camera) that implement the shooting function in the local device 101. On the one hand, the Camera HAL 202a hides the implementation details of the related hardware devices (such as specific image processing algorithms); on the other hand, it can provide the Android system with interfaces for calling the related hardware devices.
It should be noted that, when the camera application in the application layer 200 is running, for the interaction flow between the application layer 200 and the framework layer 201, the service layer 201a, the Camera HAL 202a, the DMSDP HAL 202b, etc., refer to the description of the corresponding software modules in fig. 2C in the above embodiment, which is not repeated here.
The framework layer 201 may further include a Dv kit 201b for implementing the distributed shooting function. The Dv kit 201b may be invoked by the Dv application 200a provided in the application layer 200 to implement the functions of discovering and connecting to the remote device 102. Alternatively, the functions implemented by the Dv application 200a may also run as a system service resident in the mobile phone.
When the mobile phone needs to use the camera of another electronic device to implement the distributed shooting function, the local device 101 may discover and connect to the remote device 102 through the Dv kit 201b in a camera mode (such as the dual-view mode) provided by CameraKit 201c. After the local device 101 establishes a connection with the remote device 102, the local device 101 may create a DMSDP HAL 202b for the remote device 102 in the HAL 202 of the local device 101, and the remote device 102 may be registered with the local device 101. The DMSDP HAL 202b shown in fig. 2E is a HAL created by the local device 101 according to the shooting capability parameters of the remote device 102.
The remote device 102 may also include a Dv application and a Dv kit. For the functions of the Dv application and the Dv kit in the remote device 102, refer to the description of the functions of the Dv application 200a and the Dv kit 201b in the embodiments of the present application, which is not repeated here.
As also shown in fig. 2E, in addition to creating the corresponding DMSDP HAL 202b in the HAL 202 for the remote device 102, the shooting capability parameters of the remote device 102 may also be sent to CameraService 201a1 for saving, i.e., the current shooting capability parameters of the remote device 102 (including device capability information and algorithm capability information) are registered in CameraService 201a1. The device capability information is used to indicate the hardware capability of the remote device 102 for capturing images, and the algorithm capability information is used to indicate the algorithm capability of the remote device 102 for performing image processing on captured images. The capability information of the remote device 102 may be registered or mapped to CameraService 201a1 through the DMSDP HAL 202b. Further, the DMSDP HAL 202b may also provide a "peer Meta synchronization mapping" function, where the peer Meta refers to a tag in the remote device 102 used to indicate the shooting capability parameters of the remote device 102.
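A minimal sketch of registering the peer's capability tags with the local camera service is shown below. Since the real DMSDP HAL and CameraService interfaces are internal and not disclosed at code level, the registry class and its method names are assumptions made for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of registering a remote device's shooting capability with the local
// camera service; the real DMSDP HAL / CameraService interfaces are internal, so this
// registry and its method names are assumptions.
class RemoteCapabilityRegistry {
    // deviceId -> capability tags (device capability information + algorithm capability information)
    private final Map<String, Map<String, String>> capabilities = new ConcurrentHashMap<>();

    void register(String deviceId, Map<String, String> capabilityTags) {
        capabilities.put(deviceId, capabilityTags); // "register/map" the peer Meta tags locally
    }

    Map<String, String> lookup(String deviceId) {
        return capabilities.get(deviceId);
    }
}
```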
Note that DMSDP HAL 202b may also be referred to as a virtual CAMERA HAL or DMSDP CAMERA HAL. Unlike a conventional CAMERA HAL, DMSDP HAL 202b does not correspond to an actual hardware device of the local device 101, but rather to the remote device 102. As shown in fig. 2F, the remote device 102 (e.g., a television, tablet, mobile phone, etc. that includes a camera) may act as a virtual camera of the local device 101. The local device 101 (e.g., through the virtual framework shown in fig. 2F) may exchange data with the remote device 102 through DMSDP HAL 202b, and use the remote device 102 as a virtual camera of the local device 101 to cooperate with the remote device 102 to complete each service in the distributed shooting scene.
In some embodiments, the Dv application 200a of the local device 101 may also obtain, through the Dv kit 201b, audio capability parameters (e.g., audio playback delay, audio sampling rate, or number of sound channels) and display capability parameters (e.g., screen resolution, codec algorithms for display data) of the remote device 102. Of course, if the remote device 102 also has other capabilities (e.g., printing capabilities), the remote device 102 may also send the relevant capability parameters to the Dv application 200a of the local device 101. In this case, DMSDP HAL 202b carries not only the image processing capability of the remote device 102, but also the audio, display, and other capabilities of the remote device 102, so that the remote device 102 can serve as a virtual device of the local device 101 and cooperate with the local device 101 to complete various services in the distributed scenario.
Unlike the scheme shown in fig. 2D, after the local device 101 establishes a connection with the remote device 102, two camera sessions (Camera Session) may be created. For example, the local device 101 may create, through DMSDP HAL 202b, a control session (Control Session) for transmitting a control stream and a data session (Data Session) for transmitting a data stream (i.e., a preview stream).
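For illustration, the two sessions can be thought of as two independent transport channels, for example as in the following sketch. The plain-socket transport, the class name DmsdpSessions, and the two-port layout are assumptions made for this example; the patent does not specify the underlying transport used by the control session and the data session.

    import java.io.IOException;
    import java.net.Socket;

    final class DmsdpSessions {
        private final Socket controlSession; // carries control commands (preview, zoom, record ...)
        private final Socket dataSession;    // carries the RAW image data stream

        DmsdpSessions(String remoteHost, int controlPort, int dataPort) throws IOException {
            this.controlSession = new Socket(remoteHost, controlPort);
            this.dataSession = new Socket(remoteHost, dataPort);
        }

        void sendShootingInstruction(byte[] command) throws IOException {
            controlSession.getOutputStream().write(command);   // control stream path
        }

        int readRawFrame(byte[] buffer) throws IOException {
            return dataSession.getInputStream().read(buffer);  // data stream path, kept separate
        }
    }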
In addition to creating the corresponding DMSDP HAL 202b for the remote device 102 in the HAL, the shooting capability parameter of the remote device 102 may be sent to CAMERA SERVICE 201a1 for saving, that is, the shooting capability of the current remote device 102 is registered in CAMERA SERVICE 201a1.
When the mobile phone runs the camera application, CAMERA SERVICE 201a1 can determine, in real time, the shooting strategy in the shooting process according to the control command (such as a preview, zoom-in, or video recording instruction) issued by the camera application, in combination with the shooting capability of the remote device 102. For example, CAMERA SERVICE 201a1 may set, in the shooting policy, the image processing tasks that the mobile phone needs to perform and the image processing tasks that the remote device 102 needs to perform according to the shooting capability parameters of the slave device. Further, CAMERA SERVICE 201a1 may use DMSDP HAL 202b to send a shooting instruction corresponding to the shooting policy to the remote device 102 through the Control Session, to trigger the remote device 102 to execute the corresponding image processing task.
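A hedged sketch of such a shooting policy decision is shown below. The policy fields and the simple capability check are illustrative assumptions; the actual policy logic in CAMERA SERVICE may take more factors (for example, resolution or frame rate) into account.

    import java.util.ArrayList;
    import java.util.List;

    final class ShootingPolicy {
        final List<String> localTasks = new ArrayList<>();   // run on the mobile phone
        final List<String> remoteTasks = new ArrayList<>();  // sent to remote device 102 over the control session
    }

    final class PolicyPlanner {
        ShootingPolicy plan(List<String> requestedTasks, List<String> remoteAlgorithms) {
            ShootingPolicy policy = new ShootingPolicy();
            for (String task : requestedTasks) {
                if (remoteAlgorithms.contains(task)) {
                    policy.remoteTasks.add(task);  // remote device supports this algorithm
                } else {
                    policy.localTasks.add(task);   // fall back to the local post-processing module
                }
            }
            return policy;
        }
    }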
After the shooting instruction is transmitted to CAMERA HAL 212a of the remote device 102, CAMERA HAL 212a of the remote device 102 can call the camera driver in the kernel layer according to the received shooting instruction, and drive a hardware device such as the camera to acquire image data in response to the shooting instruction. For example, the camera may transmit each frame of acquired image data to CAMERA HAL 212a through the camera driver at a certain frame rate.
After CAMERA HAL 212a of the remote device 102 receives each frame of image data collected by the camera, a data stream including the RAW image collected by the camera is transmitted to an upper layer. After the application layer 210 of the remote device 102 receives the data stream including the RAW image, the camera proxy service 210a (which may also be referred to as a camera proxy application) of the application layer 210 may send the data stream including the RAW image to the local device 101 through DMSDP HAL 202b. For example, the camera proxy service 210a may send the data stream including the RAW image to the local device 101 through the Data Session provided by DMSDP HAL 202b.
After receiving the data stream including the RAW image, DMSDP HAL 202b of the local device 101 may call a related post-processing algorithm according to the shooting capability parameter stored in CAMERA SERVICE 201a1, and perform image processing on the RAW image to obtain each corresponding frame of shooting picture. Subsequently, DMSDP HAL 202b may report each frame of the image-processed shooting picture to the camera application through CAMERA SERVICE 201a1, and the camera application may display each frame of the image-processed shooting picture on the display interface.
In this way, when the local device 101 and the remote device 102 implement the distributed shooting function, the local device 101 may perform, based on the above shooting policy, corresponding image processing on the image data according to the shooting capability of the local device 101 and the shooting capability of the remote device 102, so as to achieve a better shooting effect in the distributed shooting scene.
In some embodiments, the local device 101 may also send the RAW image acquired by the local device 101 to the remote device 102. The CAMERA SERVICE of the remote device 102 may also store the shooting capability parameters of the local device 101. The remote device 102 may then call a related algorithm according to the shooting capability parameter of the local device 101, and perform image processing on the RAW image from the local device 101 to obtain each corresponding frame of shooting picture. The remote device 102 may also call a related algorithm according to the shooting capability parameter of the remote device 102 to perform image processing on the RAW image acquired by the remote device 102, so as to obtain each corresponding frame of shooting picture. In this way, the remote device 102 can also display the shooting picture of the local device 101 and the shooting picture of the remote device 102 at the same time. By adopting this scheme, the cross-device distributed shooting function can be implemented on the local device 101 and the remote device 102 at the same time.
The following illustrates application scenarios of the method provided by the embodiments of the present application. The method of the embodiments of the present application is at least applicable to the following application scenario (1) and application scenario (2).
Application scenario (1): a cross-device multi-view shooting scenario.
For example, the cross-device multi-view shooting scenario may be a cross-device dual-view shooting scenario. The cross-device dual-view shooting scenario may include a cross-device dual-view photographing scenario and a cross-device dual-view video recording scenario. In the following embodiments, the cross-device dual-view shooting scenario is described by taking the cross-device dual-view photographing scenario as an example.
The preset application may be, for example, a camera application for implementing a photographing function. Taking the mobile phone 0 as the local device 101 as an example, a camera application may be installed in the mobile phone 0. The mobile phone 0 (i.e., the local device 101) may perform cross-device multi-view shooting with the remote device 102 through the camera application. As shown in fig. 3A, after detecting that the user opens the camera application, the mobile phone 0 may turn on its own camera to start collecting image data, and display a corresponding shooting picture in real time in the preview box 302 of the preview interface 301 according to the collected image data.
The shot image displayed by the mobile phone 0 in the preview box 302 may be different from the image data acquired by the mobile phone 0 using the camera. For example, after the mobile phone 0 obtains the image data collected by the camera, image processing tasks such as anti-shake, focusing, soft focus, blurring, filtering, beautifying, face detection or AR recognition can be performed on the collected image data, so as to obtain a photographed image after image processing. Further, the mobile phone 0 may display the photographed image after the image processing in the preview box 302.
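The chain of image processing tasks described above can be sketched as a simple pipeline, as in the following illustrative example. The interfaces and class names are assumptions made for this example and do not correspond to actual framework APIs.

    import java.util.List;

    interface FrameProcessor {
        byte[] process(byte[] frame);
    }

    final class ProcessingPipeline {
        private final List<FrameProcessor> stages; // e.g. [antiShake, focusAid, beautify, faceDetect]

        ProcessingPipeline(List<FrameProcessor> stages) {
            this.stages = stages;
        }

        byte[] run(byte[] rawFrame) {
            byte[] current = rawFrame;
            for (FrameProcessor stage : stages) {
                current = stage.process(current); // each task transforms the frame in turn
            }
            return current; // the shooting picture displayed in the preview box
        }
    }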
Similarly, the remote device 102 may also turn on its own camera to start collecting image data, and perform image processing tasks such as anti-shake, focusing, soft focus, blurring, filtering, beautifying, face detection or AR recognition on the collected image data to obtain a photographed image after image processing, which is not limited in the embodiment of the present application.
In an embodiment of the present application, as shown in fig. 3A, the mobile phone 0 may set a function button 303 for cross-device multi-view shooting in the preview interface 301 of the camera application. When the user wishes to see, on the mobile phone 0, the multi-view picture shot by the mobile phone 0 and the remote device 102, the function button 303 can be clicked to turn on the cross-device multi-view shooting function.
Alternatively, the mobile phone 0 may set a function button 304 for the synchronous shooting function in the control center, a drop-down menu, the negative one screen, or another application (for example, a video call application) of the mobile phone, which is not limited in any way in the embodiment of the present application. For example, as shown in fig. 3B, the mobile phone may display the control center 305 in response to an operation of the user opening the control center, and the control center 305 is provided with the above function button 304. If the user wishes to use the mobile phone with another electronic device for synchronous shooting, the function button 304 may be clicked.
For example, after the mobile phone 0 detects that the user clicks the function button 303 or the function button 304, as shown in fig. 4, the mobile phone 0 may display, in a dialog box 401, one or more candidate devices that the mobile phone 0 has currently searched for and that can collect image data.
For example, whether each electronic device has a photographing function may be recorded in a cloud server. The mobile phone 0 may then query the cloud server for electronic devices that log in to the same account as the mobile phone 0 and have a shooting function. Further, the mobile phone 0 may display the queried electronic devices as candidate devices in the dialog box 401.
Alternatively, the mobile phone 0 may search for electronic devices that are in the same Wi-Fi network as the mobile phone 0. Furthermore, the mobile phone 0 may send a query request to each electronic device in the same Wi-Fi network, and an electronic device that receives the query request may send a response message to the mobile phone 0, where the response message may indicate whether the electronic device has a shooting function. Then, the mobile phone 0 can determine, according to the received response messages, the electronic devices with the shooting function in the current Wi-Fi network. Further, the mobile phone 0 may display the electronic devices having the shooting function as candidate devices in the dialog box 401.
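A hypothetical sketch of this query/response exchange is given below. The message strings, the broadcast port, and the use of UDP datagrams are assumptions made for illustration; the patent does not define a concrete discovery protocol.

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    final class CaptureCapabilityProbe {
        static final int PROBE_PORT = 50505; // assumed port

        // Local device: broadcast a query asking whether peers have a shooting function.
        static void sendQuery(DatagramSocket socket) throws IOException {
            socket.setBroadcast(true);
            byte[] query = "CAPTURE_CAPABILITY_QUERY".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(query, query.length,
                    InetAddress.getByName("255.255.255.255"), PROBE_PORT));
        }

        // Candidate device: reply with whether it has a shooting function.
        static void reply(DatagramSocket socket, InetAddress requester, boolean hasCamera) throws IOException {
            byte[] response = ("CAPTURE_CAPABILITY_RESPONSE:" + hasCamera).getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(response, response.length, requester, PROBE_PORT));
        }
    }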
Alternatively, an application for managing smart home devices in the home (such as televisions, air conditioners, sound boxes, or refrigerators) may be installed in the mobile phone 0. Taking a smart home application as an example, the user may add one or more smart home devices to the smart home application, so that the smart home devices added by the user are associated with the mobile phone 0. For example, a two-dimensional code containing device information such as a device identifier may be set on the smart home device; after the user scans the two-dimensional code with the smart home application of the mobile phone 0, the corresponding smart home device can be added to the smart home application, thereby establishing the association between the smart home device and the mobile phone 0. In the embodiment of the present application, when one or more smart home devices added to the smart home application are online, for example, when the mobile phone 0 detects a Wi-Fi signal sent by a smart home device that has been added, the mobile phone 0 may display the smart home device as a candidate device in the dialog box 401, and prompt the user to select the corresponding smart home device to perform synchronous shooting with the mobile phone 0.
As shown in fig. 4, the candidate devices searched for by the mobile phone 0 include, for example, the television 1, the watch 2, and the mobile phone 3, and the user may select, from the television 1, the watch 2, and the mobile phone 3, one or more remote devices 102 that perform cross-device multi-view shooting with the mobile phone 0. The embodiment of the application takes cross-device dual-view shooting as an example. For example, if it is detected that the user selects the television 1, the mobile phone 0 may use the television 1 as the remote device 102 and establish a network connection with the television 1. For example, the mobile phone 0 may establish a Wi-Fi connection with the television 1 through a router; or the mobile phone 0 may directly establish a Wi-Fi P2P connection with the television 1; or the mobile phone 0 may directly establish a Bluetooth connection with the television 1; or the mobile phone 0 may directly establish another short-range wireless connection with the television 1, including but not limited to a near field communication (NFC) connection, an infrared connection, an ultra wideband (UWB) connection, or a ZigBee connection; or the mobile phone 0 may establish a mobile network connection with the television 1, where the mobile network includes but is not limited to mobile networks supporting the 2G, 3G, 4G, 5G, and subsequent standard protocols.
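As one possible way to set up the Wi-Fi P2P connection mentioned above, the following sketch uses the standard Android WifiP2pManager API. It is an illustrative example rather than the patent's implementation, and permission handling and broadcast-receiver registration are omitted.

    import android.content.Context;
    import android.net.wifi.p2p.WifiP2pConfig;
    import android.net.wifi.p2p.WifiP2pManager;
    import android.os.Looper;

    public class P2pConnector {
        public static void connectTo(Context context, String deviceAddress) {
            WifiP2pManager manager = (WifiP2pManager) context.getSystemService(Context.WIFI_P2P_SERVICE);
            WifiP2pManager.Channel channel = manager.initialize(context, Looper.getMainLooper(), null);

            WifiP2pConfig config = new WifiP2pConfig();
            config.deviceAddress = deviceAddress; // MAC address of the selected remote device

            manager.connect(channel, config, new WifiP2pManager.ActionListener() {
                @Override public void onSuccess() { /* connection request sent; wait for the P2P broadcast */ }
                @Override public void onFailure(int reason) { /* e.g. fall back to a router-based Wi-Fi connection */ }
            });
        }
    }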
In other embodiments, after the mobile phone 0 detects that the user clicks the function button 303, the mobile phone may search for one or more electronic devices having a photographing function according to the above method. Furthermore, the mobile phone 0 can automatically establish network connection with the searched electronic device. At this time, the user does not need to manually select a specific device that establishes a network connection with the mobile phone 0.
Still alternatively, before the user opens the camera application, the mobile phone may already have established a network connection with one or more electronic devices having a shooting function. For example, the mobile phone 0 has established a Bluetooth connection with a tablet computer before the user opens the camera application in the mobile phone 0. Subsequently, after the mobile phone 0 opens the camera application and displays the preview interface 301 of the camera application, if it is detected that the user clicks the function button 303, the mobile phone 0 may no longer search for electronic devices having a shooting function, but may perform the following method.
After the mobile phone 0 and the television 1 are connected in a network, on one hand, as shown in fig. 5, the mobile phone 0 may open its own camera to start collecting image data, and perform image processing on the collected image data to obtain a shooting picture 1. On the other hand, as shown in fig. 5, the mobile phone 0 may instruct the television 1 to turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain a shooting picture 2. Subsequently, the television 1 may transmit the shot 2 to the mobile phone 0. In this way, the mobile phone 0 can simultaneously display the photographed screen 1 from the mobile phone 0 and the photographed screen 2 from the television 1 in the display interface of the camera application.
As described in the above embodiment, in the first case of application scenario (1), that is, the above cross-device multi-view shooting scenario, multi-view shooting of the same shooting object may be implemented by using multiple cameras, which can make shooting more interesting.
In the second case of application scenario (1), in the above cross-device multi-view shooting scenario, multiple cameras may also be used to shoot multiple shooting objects at the same moment (such as a parent-child co-shooting or a co-shooting among friends). For example, assume that the local device 101 is the mobile phone 0 shown in fig. 5, and the remote device 102 is the mobile phone 3 shown in fig. 6A. The mobile phone 0 or the mobile phone 3 may be used to shoot multiple shooting objects through the preset application (such as a camera application or another application that can be used to take photos or videos).
For example, the preset application is a camera application. When the user wants to see the pictures of the mobile phone 0 and the mobile phone 3 at the same time on the mobile phone 0, the user can click the function button 303 or the function button 304 to start the cross-device multi-view shooting function.
For example, after the mobile phone 0 detects that the user clicks the function button 303 or the function button 304, as shown in fig. 4, the mobile phone 0 may display, in a dialog box 401, one or more candidate devices that the mobile phone 0 has currently searched for and that can collect image data. As shown in fig. 4, the candidate devices searched for by the mobile phone 0 include, for example, the television 1, the watch 2, and the mobile phone 3, and the user may select, from the television 1, the watch 2, and the mobile phone 3, a remote device 102 that performs cross-device multi-view shooting with the mobile phone 0. For example, if it is detected that the user selects the mobile phone 3, the mobile phone 0 may use the mobile phone 3 as the remote device 102 and establish a network connection with the mobile phone 3.
After the mobile phone 0 and the mobile phone 3 are connected through the network, on one hand, as shown in fig. 6A, the mobile phone 0 may open its own camera to start collecting image data, and perform image processing on the collected image data to obtain a shooting picture 1. On the other hand, as shown in fig. 6A, the mobile phone 0 may instruct the mobile phone 3 to turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain the shot picture 3. Subsequently, the mobile phone 3 may transmit the shot 3 to the mobile phone 0. In this way, the mobile phone 0 can simultaneously display the photographed screen 1 from the mobile phone 0 and the photographed screen 3 from the mobile phone 3 in the display interface of the camera application.
It should be noted that, unlike the first case, the mobile phone 0 may send the shot 1 to the mobile phone 3 during the process of performing cross-device multi-view shooting. As shown in fig. 6A, the mobile phone 3 may display the shot screen 1 from the mobile phone 0 and the shot screen 3 from the mobile phone 3 at the same time. At this time, the mobile phone 0 may serve as both the local device and the remote device. Specifically, the mobile phone 0 is used as a local device, the mobile phone 3 is used as a remote device, and the mobile phone 0 can simultaneously display the shooting picture 1 of the mobile phone 0 and the shooting picture 3 from the mobile phone 3. The mobile phone 3 serves as a local terminal device, the mobile phone 0 serves as a remote terminal device, and the mobile phone 3 can simultaneously display a shooting picture 3 of the mobile phone 3 and a shooting picture 1 from the mobile phone 0.
It should be noted that the above cross-device multi-view shooting scene may be a scene in which at least two devices perform cross-device multi-view shooting. The method of the embodiment of the application can also be applied to three electronic devices to implement the cross-device multi-view shooting function. For example, the mobile phone 0 shown in fig. 6B may be used as the local device, and the mobile phone 3 and the television 1 may be used as remote devices; the mobile phone 0 can perform cross-device multi-view shooting with the mobile phone 3 and the television 1. As shown in fig. 6B, the mobile phone 0 can simultaneously display the shooting picture 1 of the mobile phone 0, the shooting picture 3 from the mobile phone 3, and the shooting picture 2 from the television 1.
Application scenario (2): a scenario in which the local device 101 invokes the remote device 102 to perform a video call, live broadcast, photographing, or video recording. The local device 101 may invoke the camera of the remote device 102, or the camera and the display screen of the remote device 102, to assist the local device 101 in performing the video call, live broadcast, photographing, or video recording.
In an exemplary embodiment of the present application, application scenario (2) is described herein by taking an example in which the local device 101 invokes the remote device 102 to perform a video call. The preset application may be a video communication application (e.g., a WeChat™ application) for implementing a video call. Taking the mobile phone 0 as the local device 101 and the television 1 as the remote device 102 as an example, the video communication application may be installed in the mobile phone 0.
In one implementation, the mobile phone 0 may invoke a camera and a display of the remote device 102 (such as the television 1) during the video call with the mobile phone 4, so as to assist the mobile phone 0 in performing the video call with the mobile phone 4. For example, as shown in fig. 7, the mobile phone 0 displays a video call interface 701, and a function button 702 for calling other devices to assist the mobile phone 0 in performing video call is provided in the video call interface 701. The function button 702 may be clicked when the user wishes the handset 0 to invoke other devices to assist the handset 0 in video telephony.
In another implementation, the mobile phone 0 may invoke the camera and the display screen of the remote device 102 (e.g., the television 1) before requesting to make a video call with the mobile phone 4, to assist the mobile phone 0 in making the video call with the mobile phone 4. For example, as shown in fig. 8, in response to the user clicking the "video call" option 801 in the chat interface, the mobile phone 0 may display a confirmation window 802, where the confirmation window 802 is used to request the user to confirm whether to use the large screen to assist the mobile phone in the video call. The user may click the "yes" button in the confirmation window 802 when the user wishes the mobile phone 0 to invoke another device to assist the mobile phone 0 in the video call.
For example, the mobile phone 0 may search for candidate devices upon detecting that the user clicks the function button 702 or the "yes" button in the confirmation window 802, and may display, for example, the dialog box 401 shown in fig. 4. For the specific method for the mobile phone to search for candidate devices and display the dialog box 401, refer to the detailed description in the above embodiment; details are not repeated here.
When the mobile phone 0 detects that the user selects the television 1, the mobile phone 0 can use the television 1 as the remote device 102 to establish network connection with the television 1. After the mobile phone 0 establishes a network connection with the television 1, on the one hand, as shown in fig. 9, the mobile phone 0 may receive the shot b from the mobile phone 4 and transmit the shot b to the television 1. On the other hand, as shown in fig. 9, the mobile phone 0 may instruct the television 1 to turn on its own camera to start collecting image data, and perform image processing on the collected image data to obtain a shooting picture c. Subsequently, the television 1 may transmit the shot c to the mobile phone 0, and the mobile phone 0 transmits the shot c to the mobile phone 4. Thus, the television 1 can simultaneously display the photographed screen b from the mobile phone 4 and the photographed screen c from the television 1; the mobile phone 4 may display the photographed screen c from the television 1 and the photographed screen b from the mobile phone 4 at the same time.
In other embodiments, in the process in which the mobile phone 0 invokes the camera and the display screen of the television 1 to assist the mobile phone 0 in performing the video call with the mobile phone 4, the mobile phone 0 may also display the shooting picture b from the mobile phone 4 and/or the shooting picture c from the television 1.
Alternatively, in the process in which the mobile phone 0 invokes the camera and the display screen of the television 1 to assist the mobile phone 0 in performing the video call with the mobile phone 4, the mobile phone 0 may display a preset interface. The preset interface may be the main interface 901 shown in fig. 9; or the preset interface may be a preset picture, a preset animation, or the like.
Alternatively, in the process in which the mobile phone 0 invokes the camera and the display screen of the television 1 to assist the mobile phone 0 in performing the video call with the mobile phone 4, the screen of the mobile phone 0 may be turned off. In this way, the power consumption of the mobile phone 0 can be reduced.
Since the viewing angles of the mobile phone 0 and the television 1 are different, the shooting picture of the mobile phone 0 may be different from the shooting picture of the television 1. For example, the shooting picture a shown in fig. 7 is captured by the mobile phone 0, the shooting picture c shown in fig. 9 is captured by the television 1, and the shooting picture a is different from the shooting picture c.
In any of the above application scenarios, the local device 101 may use and control the camera of the remote device 102, and use the camera of the remote device 102 to assist the local device 101 in completing a shooting task, a video task, or other related tasks. Of course, the remote device 102 may also use and control the camera of the local device 101, and use the camera of the local device 101 to assist the remote device 102 in accomplishing the above task. For example, as shown in fig. 6A, the mobile phone 3 may display the shot screen 1 from the mobile phone 0 and the shot screen 3 from the mobile phone 3 at the same time.
In an embodiment of the present application, the local device 101 and the remote device 102 may perform shooting in a distributed shooting scene (such as multi-view shooting). In the following embodiments, the method according to the embodiments of the present application will be described in detail by taking the home terminal apparatus 101 as an example of a mobile phone.
For example, as shown in the above embodiment, the local device 101 (such as a mobile phone) may provide a dual-view mode (including a dual-view photographing mode and a dual-view video recording mode). As shown in fig. 10, the mobile phone may set a function button 1001 and a function button 1002 for cross-device multi-view shooting in the preview interface of the camera application. Specifically, the function button 1001 is used to trigger the mobile phone to enter a multi-view photographing mode, and the function button 1002 is used to trigger the mobile phone to enter a multi-view video recording mode. When the user wishes to see, on the mobile phone, the multi-view picture shot by the mobile phone and the remote device 102, the function button 1001 or the function button 1002 may be clicked to turn on the cross-device multi-view shooting function. In the embodiment of the present application, the click operation of the user on the function button 1001, the function button 1002, the function button 303, or the like is the second operation.
For example, after the mobile phone detects that the user clicks the function button 1001, a preset application (such as a camera application) of the mobile phone may trigger the mobile phone to search for one or more candidate devices having a shooting function nearby. For example, a preset application of the handset may discover (i.e., search for) one or more candidate devices with shooting functionality through Dv kit.
And, as shown in fig. 11, the handset may display the searched one or more candidate devices in a dialog 1101. For example, the mobile phone may query the server for an electronic device that is registered with the same account as the mobile phone and has a photographing function, and display the queried electronic device as a candidate device in the dialog 1101. In the embodiment of the present application, the click operation of the candidate device in the dialog 1101 and dialog 401 (also referred to as a candidate device list) by the user is the first operation.
Taking the example that the candidate devices in the dialog 1101 include the tv 1102, the tv 1103 and the watch 1104, the user may select a remote device that cooperates with the mobile phone to implement the synchronous shooting function in the dialog 1101. For example, if the handset detects that the user selects the television 1102 in the dialog 1101, it is indicated that the user wishes to take a photograph simultaneously using the handset and the television 1102. At this time, the Dv kit of the mobile phone may establish a network connection between the tv 1102 as a remote device of the mobile phone and the mobile phone.
After the handset establishes a network connection with the television 1102, the television 1102 may register with the HAL of the handset. Specifically, the mobile phone may obtain, from the tv 1102, shooting capability parameters of the tv 1102 based on the network connection; and creates a corresponding DMSDP HAL at the HAL according to the shooting capability parameters of the television 1102.
The shooting capability parameter of the television 1102 is used to reflect the specific shooting capability of the television 1102. The shooting capability parameter may include device capability information. The device capability information is used to indicate the hardware capability of the television 1102 to capture images. For example, the device capability information may indicate parameters such as the number of cameras in the television 1102, the resolution of the cameras, or the model of the image processor. The device capability information may be used by the mobile phone to determine the shooting strategy of the television 1102.
The shooting capability parameters of the television 1102 may be stored in the CAMERA SERVICE of the mobile phone. Illustratively, the shooting capability parameters of the television 1102 may be stored in the handset CAMERA SERVICE in the form of a Tag (Tag).
The embodiment of the present application uses device capability information as an example, and describes a storage format of shooting capability parameters of the tv 1102 in CAMERA SERVICE of the mobile phone. For example, please refer to fig. 12, which illustrates a schematic diagram of a format of device capability information of the tv 1102 stored in CAMERA SERVICE of the mobile phone according to an embodiment of the present application.
The three-layer data structure shown in fig. 12 can be used in CAMERA SERVICE of the mobile phone to store the device capability information of the tv 1102. Wherein the handset may store each device capability information of the television 1102 in segments at CAMERA SERVICE. The section_name shown in fig. 12 is a storage address of the tv 1102; tag_name is a plurality of Tag names under one address of the section_name; tag_index is an index of capability information of a Tag name. The index is used to indicate the memory address of the specific capability information.
For example, as shown in fig. 12, the device capability information of the television 1102 may include the following five Section_names: com.huawei.capture.metadata; com.huawei.device.capabilities; android.huawei.device.parameters; android.huawei.stream.info; android.huawei.stream.parameters.
Take one Section_name shown in fig. 12 (e.g., com.huawei.device.capabilities) as an example. As shown in fig. 12, the device capability information of the television 1102 may further include the Tag names of the plurality of tags stored in the storage address corresponding to the Section_name, such as device_sensor_position, hidden_camera_id, colorBarCheckUnsupport, smoothZoomSupport, and tofType. For example, device_sensor_position represents a sensor of a camera of the television 1102.
Take one Tag name shown in fig. 12 as an example. As shown in fig. 12, the device capability information of the television 1102 may further include the index of the capability information of the Tag name, such as CAMERA_HUAWEI_DEVICE_CAPABILITIES_START and CAMERA_HUAWEI_DEVICE_CAPABILITIES_END.
CAMERA_HUAWEI_DEVICE_CAPABILITIES_START may indicate the start address at which the capability information of a Tag name is stored in the television 1102, and CAMERA_HUAWEI_DEVICE_CAPABILITIES_END may indicate the end address at which the capability information of a Tag name is stored in the television 1102. Based on the start address and the end address, the mobile phone may query the various device capabilities of the television 1102.
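For illustration, the three-layer Section_name / Tag_name / Tag_index structure of fig. 12 can be sketched as nested maps, as shown below. The container classes here are assumptions introduced for this example; the concrete section and tag names come from the description above.

    import java.util.HashMap;
    import java.util.Map;

    final class TagIndex {
        final int start; // e.g. the START address of the capability information
        final int end;   // e.g. the END address of the capability information
        TagIndex(int start, int end) { this.start = start; this.end = end; }
    }

    final class RemoteCapabilityStore {
        // Section_name -> (Tag_name -> Tag_index)
        private final Map<String, Map<String, TagIndex>> sections = new HashMap<>();

        void putTag(String sectionName, String tagName, TagIndex index) {
            sections.computeIfAbsent(sectionName, k -> new HashMap<>()).put(tagName, index);
        }

        TagIndex lookup(String sectionName, String tagName) {
            return sections.getOrDefault(sectionName, Map.of()).get(tagName);
        }
    }

    // Example usage with names from the description:
    // store.putTag("com.huawei.device.capabilities", "device_sensor_position",
    //              new TagIndex(startAddress, endAddress));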
It should be understood that, as shown in fig. 13, the TAG stored in CAMERA SERVICE (i.e., the shooting capability parameter described above) may be combined with the post-processing (algorithm) module in the mobile phone to process the RAW image returned by the television 1102. The specific method for the mobile phone to process the RAW image returned by the television 1102 according to the TAG and the post-processing (algorithm) module is described in detail in the following embodiments, and is not described herein.
Subsequently, when the mobile phone runs the camera application, on one hand, the mobile phone can call its own camera to acquire each frame of shooting picture; on the other hand, the mobile phone can instruct the television 1102 to acquire the RAW image and send the acquired RAW image to the camera application of the mobile phone through DMSDP HAL 202b. The mobile phone may call its own camera at a certain frame rate to obtain each frame of shooting picture, and instruct the television 1102 to acquire the RAW image at the same frame rate.
The mobile phone may send a shooting instruction to the television 1102 through DMSDP HAL 202b to instruct the television 1102 to acquire a RAW image. For example, as shown in fig. 13, the application layer 200 of the mobile phone may transmit the shooting instruction to DMSDP HAL 202b of the HAL 202 through the framework layer 201 and the service layer 201a; DMSDP HAL 202b of the HAL 202 may perform step a to transmit the shooting instruction to the television 1102. After the television 1102 receives the shooting instruction transmitted by the mobile phone through DMSDP HAL 202b, the camera proxy service 210a of the application layer 210 may add a preset flag to the shooting instruction. The preset flag is used to indicate that the RAW data collected by the Camera device at the bottom layer is to be directly acquired and passed through to the camera proxy service 210a of the application layer 210. For example, as shown in fig. 13, the camera proxy service 210a of the application layer 210 may add the preset flag to the shooting instruction, and transmit the shooting instruction to which the preset flag is added to CAMERA HAL 212a in the HAL 212 through the framework layer 211 and the service layer 211a. After CAMERA HAL 212a in the HAL 212 receives the shooting instruction, the Camera device of the kernel layer 213 may be called to execute the shooting instruction. For example, CAMERA HAL 212a in the HAL 212 may invoke the Camera device in the kernel layer 213 to capture a RAW image, and perform step b to pass the RAW data through to the camera proxy service 210a of the application layer 210. Step c is then performed by the camera proxy service 210a of the application layer 210 to transmit the RAW image to DMSDP HAL 202b of the mobile phone.
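The steps a to c described above can be summarized, from the remote device's point of view, with the following hypothetical sketch. All class, interface, and flag names are assumptions made for illustration; the patent does not define the concrete form of the preset flag.

    import java.util.HashSet;
    import java.util.Set;
    import java.util.function.Consumer;

    final class ShootingInstruction {
        private final Set<String> flags = new HashSet<>();
        void addFlag(String flag) { flags.add(flag); }
        boolean hasFlag(String flag) { return flags.contains(flag); }
    }

    interface LocalCameraPipeline {          // wraps CAMERA HAL 212a and the kernel-layer Camera device
        void start(ShootingInstruction instruction, Consumer<byte[]> rawFrameSink);
    }

    interface DataSession {                  // the data-stream pipe back to DMSDP HAL 202b of the mobile phone
        void sendRawFrame(byte[] rawFrame);
    }

    final class CameraProxyService {
        static final String FLAG_RAW_PASSTHROUGH = "raw_passthrough"; // the "preset flag"

        private final LocalCameraPipeline pipeline;
        private final DataSession dataSession;

        CameraProxyService(LocalCameraPipeline pipeline, DataSession dataSession) {
            this.pipeline = pipeline;
            this.dataSession = dataSession;
        }

        // Step a: the shooting instruction arrives over the control session.
        void onShootingInstruction(ShootingInstruction instruction) {
            instruction.addFlag(FLAG_RAW_PASSTHROUGH);                 // ask the HAL/ISP not to post-process
            pipeline.start(instruction, dataSession::sendRawFrame);    // steps b and c: pass RAW frames through
        }
    }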
The Camera device of the kernel layer 213 may include the photosensitive device, the digital signal processor (DSP), and the image processor (ISP) shown in fig. 14. The photosensitive device may include a lens, a sensor (Sensor), and the like, and is used for acquiring RAW images. The digital signal processor (DSP) is used for sampling and processing the multiple frames of RAW images acquired by the photosensitive device, and then transmits the sampled RAW images to the image processor (ISP). In general, the image processor (ISP) can perform image processing on the RAW image from the DSP. However, in the embodiment of the present application, the ISP does not perform image processing on the RAW image from the DSP, but passes the RAW image from the DSP through to CAMERA HAL 212a in the HAL 212.
It should be noted that step b in fig. 13 points directly from the kernel layer 213 to the application layer 210 only to illustrate that the RAW image is not image-processed before it reaches the application layer 210; it does not indicate that the RAW image can be transmitted directly from the kernel layer 213 to the application layer 210 without passing through CAMERA HAL 212a in the HAL 212, the service layer 211a, and the framework layer 211. The RAW image acquired by the Camera device in the kernel layer 213 is transferred to the application layer 210 through CAMERA HAL 212a in the HAL 212, the service layer 211a, and the framework layer 211. However, as shown in fig. 14, CAMERA HAL 212a in the HAL 212, the service layer 211a, and the framework layer 211 only transmit the RAW image without performing any processing on it.
It should be noted that, in the embodiment of the present application, what the television 1102 transmits to the mobile phone is a RAW image that has not been image-processed, rather than a processed, directly displayable shooting picture. After the television 1102 collects the RAW image with the camera, the RAW image can be directly transmitted to the mobile phone without being image-processed. The RAW image is an image that has not been processed by the television 1102, and the data amount of the RAW image is smaller than that of an image processed by the television 1102. In this way, the network bandwidth occupied by the cross-device distributed shooting service can be reduced.
Then, the mobile phone can perform image processing on the RAW image from the television 1102 to obtain a corresponding shot picture. Thus, the camera application of the mobile phone can obtain not only each frame shot from the mobile phone, but also each frame shot of the television 1102. Furthermore, the camera application can synchronously display the shooting picture of the mobile phone and the shooting picture of the television 1102 in a display interface of the camera application, so as to realize a cross-device distributed shooting function. For example, as shown in fig. 15, a preview interface of a camera application of a mobile phone includes a shot screen 1501 of the mobile phone and a shot screen 1502 of a television 1102.
Because the mobile phone performs image processing on the RAW image from the television 1102, the problem of increased time delay caused by a large performance difference between the devices at the two ends can be avoided. By adopting this scheme, the time delay of the distributed shooting service can be reduced.
In addition, in the embodiment of the present application, after the mobile phone establishes a connection with the television 1102, two camera sessions (Camera Session) may be created based on DMSDP HAL 202b shown in fig. 13, such as a control session (Control Session) for transmitting a control stream (such as the above shooting instruction) and a data session (Data Session) for transmitting a data stream (such as the above RAW data). The Control Session and the Data Session may correspond to different transmission paths (also referred to as transmission pipes). When DMSDP HAL 202b of the mobile phone performs step a shown in fig. 13, it may transmit the shooting instruction to the camera proxy service 210a of the television 1102 through the transmission pipe of the Control Session. When the camera proxy service 210a of the television 1102 performs step c shown in fig. 13, it may transmit the data stream (such as the RAW data) to DMSDP HAL 202b of the mobile phone through the transmission pipe of the Data Session.
In this way, the transmission paths of the control stream and the data stream are distinguished in the process in which the mobile phone and the television 1102 execute the cross-device distributed shooting service, so that the problem of the control stream and the data stream occupying each other's bandwidth can be avoided.
In some embodiments, the shooting capability parameters of the television 1102 may further include algorithm capability information. The algorithm capability information is used to indicate the algorithm capability of the television 1102 to perform image processing on the captured image. For example, the algorithm capability information may indicate one or more image processing algorithms supported by the television 1102, such as a face recognition algorithm or an auto-focus algorithm. That is, the mobile phone may also synchronize the post-processing algorithm of the television 1102 into the mobile phone. For example, as shown in fig. 13, the service layer 201a of the mobile phone retains the post-processing algorithm of the television 1102.
In this embodiment, the service layer 201a of the mobile phone may invoke the retained post-processing algorithm of the television 1102 to perform image processing on the RAW image from DMSDP HAL 202b based on the algorithm capability information of the television 1102. Then, the mobile phone can synchronously render, based on the timestamp, the image acquired by the mobile phone and the post-processed image from the television 1102, so as to obtain and display the shooting picture 1501 of the mobile phone and the shooting picture 1502 of the television 1102 shown in fig. 15.
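One simple way to pair the two frame streams by timestamp is sketched below. This is an assumed mechanism for illustration, not the patent's exact synchronization method; the tolerance value and the nearest-neighbour matching are design choices made only for this example.

    import java.util.Map;
    import java.util.TreeMap;

    final class FrameSynchronizer {
        private final TreeMap<Long, byte[]> remoteFrames = new TreeMap<>();  // timestamp -> remote shooting picture
        private final long toleranceMs;

        FrameSynchronizer(long toleranceMs) { this.toleranceMs = toleranceMs; }

        void addRemote(long timestampMs, byte[] frame) { remoteFrames.put(timestampMs, frame); }

        // Returns the remote frame closest in time to a local frame, or null if none is close enough.
        byte[] matchRemote(long localTimestampMs) {
            Map.Entry<Long, byte[]> floor = remoteFrames.floorEntry(localTimestampMs);
            Map.Entry<Long, byte[]> ceil = remoteFrames.ceilingEntry(localTimestampMs);
            Map.Entry<Long, byte[]> best = floor;
            if (ceil != null && (best == null
                    || Math.abs(ceil.getKey() - localTimestampMs) < Math.abs(best.getKey() - localTimestampMs))) {
                best = ceil;
            }
            return (best != null && Math.abs(best.getKey() - localTimestampMs) <= toleranceMs)
                    ? best.getValue() : null;
        }
    }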
It should be noted that, based on the algorithm capability information of the television 1102, the mobile phone invokes the retained post-processing algorithm of the television 1102 to perform image processing on the RAW image collected by the television 1102, so as to obtain the same or a similar effect as that of performing the image processing on the television 1102 side.
That is, even though the RAW image acquired by the television 1102 is image-processed on the mobile phone side, the processed image can achieve the same effect as an image processed on the television 1102 side. In other words, by adopting the method of the embodiment of the application, the network bandwidth occupied by the cross-device distributed shooting service can be reduced, and the image effect of the remote device can still be restored.
In other embodiments, the shooting capability parameters of the television 1102 may not include the algorithm capability information. That is, the mobile phone may not synchronize the post-processing algorithm of the television 1102 into the mobile phone. For example, as shown in fig. 16, the post-processing algorithm of the television 1102 is not retained in the service layer 201a of the mobile phone.
In this embodiment, the service layer 201a of the mobile phone may invoke the post-processing algorithm of the mobile phone to perform image processing on the RAW image from DMSDP HAL 202b based on the algorithm capability information of the mobile phone. Then, the mobile phone can synchronously render, based on the timestamp, the image acquired by the mobile phone and the post-processed image from the television 1102, so as to obtain and display the shooting picture 1501 of the mobile phone and the shooting picture 1502 of the television 1102 shown in fig. 15.
It should be noted that, although invoking the post-processing algorithm of the mobile phone to perform image processing on the RAW image collected by the television 1102 cannot restore the image processing effect of the television 1102 side, the image collected by the mobile phone and the RAW image collected by the television 1102 are both processed by the post-processing algorithm of the mobile phone, so that the image effect of the shooting picture of the mobile phone is consistent with that of the shooting picture of the television 1102.
In addition, when the algorithm processing capability of the mobile phone is better than that of the television 1102, compared with invoking the post-processing algorithm of the television 1102 to process the RAW image collected by the television 1102, invoking the post-processing algorithm of the mobile phone to process the RAW image can improve the image effect of the shooting picture of the television 1102 displayed by the mobile phone.
Further, when the post-processing algorithm of the mobile phone is used to perform image processing on the RAW image collected by the television 1102, the post-processing algorithm of the television 1102 does not need to be retained in the mobile phone. In this way, the memory space of the mobile phone can be saved.
In addition, the above embodiments take a mobile phone as an example of the local device in the distributed shooting scene. It may be understood that the local device in the distributed shooting scene may also be another electronic device having a shooting function, such as a tablet computer or a television, which is not limited in the embodiments of the present application.
It should be noted that, in the above embodiments, the specific method for implementing the distributed shooting function among the functional modules is described by taking the Android system as an example. It may be understood that the above method may also be implemented by setting corresponding functional modules in other operating systems. As long as the functions implemented by the devices and the functional modules are similar to those in the embodiments of the present application, they fall within the scope of the claims of the present application and their equivalents.
In the embodiment of the present application, the image data collected by the home terminal apparatus 101 is first image data, and the shot image of the home terminal apparatus 101 is a first shot image, and the first shot image is displayed in a first window. The image data collected by the remote device 102 is second image data, and the photographed picture of the remote device 102 is a second photographed picture, which is displayed in a second window. The second image data includes a RAW image.
For example, the first shot may be shot 1 shown in fig. 5, and the second shot may be shot 2 shown in fig. 5. The photographing screen 1 and the photographing screen 2 shown in fig. 5 are displayed on the same display interface.
For another example, the first shooting picture may be the shooting picture 1 shown in fig. 6A, and the second shooting picture may be the shooting picture 3 shown in fig. 6A. The shooting picture 1 and the shooting picture 3 shown in fig. 6A are displayed on the same display interface on both the mobile phone 0 and the mobile phone 3.
Other embodiments of the present application provide an electronic device that may include: the touch screen, memory and one or more processors described above. The touch screen, memory, and processor are coupled. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the mobile phone shown in fig. 2B.
Embodiments of the present application also provide a chip system, as shown in FIG. 17, the chip system 1700 includes at least one processor 1701 and at least one interface circuit 1702. The processor 1701 and the interface circuit 1702 may be interconnected by wires. For example, the interface circuit 1702 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 1702 may be used to send signals to other devices, such as the processor 1701. The interface circuit 1702 may, for example, read instructions stored in a memory and send the instructions to the processor 1701. The instructions, when executed by the processor 1701, may cause the electronic device to perform the various steps described in the embodiments above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
The embodiment of the application also provides a computer storage medium, which comprises computer instructions, when the computer instructions run on the electronic equipment, the electronic equipment is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The embodiment of the application also provides a computer program product which, when run on a computer, causes the computer to execute the functions or steps executed by the mobile phone in the above method embodiment.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for instructing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. A distributed photographing method, comprising:
The method comprises the steps that a first device receives a first operation that a user selects to synchronously shoot with a second device;
Responsive to the first operation, the first device begins to acquire first image data, and the first device instructs the second device to begin acquiring image data;
The first device receives second image data from the second device, wherein the second image data comprises an original image acquired by a camera of the second device;
the first device displays a first shooting picture corresponding to the first image data on a first window, and displays a second shooting picture corresponding to the second image data on a second window, wherein the first window and the second window are positioned on the same display interface;
After the first device receives a first operation that the user selects to perform synchronous shooting with the second device, the method further comprises:
the first device creates a control session and a data session of the first device and the second device;
Wherein the control session is for transmitting a control command between the first device and the second device, the control command including a photographing instruction for instructing the second device to collect an image, the data session being for transmitting the second image data from the second device;
After the first device receives a first operation that the user selects to perform synchronous shooting with the second device, the method further comprises:
The first device obtains shooting capability parameters of the second device, wherein the shooting capability parameters of the second device are used for indicating an image processing algorithm supported by the second device;
Wherein, before the first device displays the first shooting picture corresponding to the first image data on the first window and displays the second shooting picture corresponding to the second image data on the second window, the method further comprises:
the first device performs image processing on the first image data according to shooting capability parameters of the first device to obtain the first shooting picture, wherein the shooting capability parameters of the first device are used for indicating an image processing algorithm supported by the first device;
The first device performs image processing on the second image data according to shooting capability parameters of the second device to obtain a second shooting picture;
The method further comprises the steps of: the first device sends the first image data to the second device, wherein the first image data comprises an original image acquired by a camera of the first device.
2. The method of claim 1, wherein the first device instructs the second device to begin acquiring image data, comprising:
the first device sends a shooting instruction to the second device;
the shooting instruction comprises a preset mark, the shooting instruction is used for indicating the second equipment to collect image data, and the preset mark is used for indicating the second equipment to transmit RAW images collected by the second equipment to the first equipment.
3. The method according to claim 1 or 2, wherein before the first device receives the first operation that the user selects to take a synchronization shot with the second device, the method further comprises:
The first device receives a second operation, wherein the second operation is a click operation of a function button for realizing distributed shooting by a user, and at least one of a preview interface of a camera application of the first device, a chat interface of a video communication application of the first device, a control center of the first device, a drop-down menu or a negative screen comprises the function button;
In response to a second operation for clicking the function button, the first device displays a candidate device list in the preview interface, wherein the candidate device list comprises the second device;
Wherein the first operation is an operation in which a user selects the second device in the candidate device list.
4. The method of any of claims 1-3, wherein the synchronous shooting comprises at least one of synchronous video recording, synchronous photographing, synchronous live broadcasting, or synchronous video calling.
5. An electronic device, wherein the electronic device is a first device, the electronic device comprising: one or more cameras, one or more processors, a display screen, a memory, and a communication module; the camera, the display screen, the memory, the communication module and the processor are coupled;
wherein the memory is for storing computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method of any of claims 1-4.
6. A computer readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the method of any of claims 1-4.
CN202110131870.2A 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium Active CN114845035B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110131870.2A CN114845035B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium
CN202210973275.8A CN115514882B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium
PCT/CN2021/137917 WO2022160985A1 (en) 2021-01-30 2021-12-14 Distributed photographing method, electronic device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110131870.2A CN114845035B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210973275.8A Division CN115514882B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114845035A (en) 2022-08-02
CN114845035B (en) 2024-04-26

Family

ID=82561398

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110131870.2A Active CN114845035B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium
CN202210973275.8A Active CN115514882B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210973275.8A Active CN115514882B (en) 2021-01-30 2021-01-30 Distributed shooting method, electronic equipment and medium

Country Status (2)

Country Link
CN (2) CN114845035B (en)
WO (1) WO2022160985A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320783B (en) * 2022-09-14 2023-11-14 荣耀终端有限公司 Method for capturing images in video and electronic equipment
CN115514898A (en) * 2022-11-17 2022-12-23 深圳开鸿数字产业发展有限公司 Photographing method, terminal device and storage medium
WO2024145063A1 (en) * 2022-12-29 2024-07-04 Meta Platforms, Inc. Methods, apparatuses and computer program products for providing virtual cameras for hardware inputs
CN117707242A (en) * 2023-07-11 2024-03-15 荣耀终端有限公司 Temperature control method and related device

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5796964A (en) * 1996-01-16 1998-08-18 International Business Machines Method for modifying an existing computer bus to enhance system performance
CN101404726A (en) * 2008-10-20 2009-04-08 深圳华为通信技术有限公司 Control method, system and apparatus for far-end camera
CN101446890A (en) * 2007-11-29 2009-06-03 株式会社瑞萨科技 Stream processing apparatus, method for stream processing and data processing system
CN103336677A (en) * 2013-06-25 2013-10-02 北京小米科技有限责任公司 Method, device and system for outputting images to display equipment
CN104284234A (en) * 2014-10-17 2015-01-14 惠州Tcl移动通信有限公司 Method and system for sharing synchronous images among plurality of terminals
CN105493621A (en) * 2014-08-04 2016-04-13 华为技术有限公司 Terminal, server, and terminal control method
WO2017107629A1 (en) * 2015-12-24 2017-06-29 努比亚技术有限公司 Mobile terminal, data transmission system and shooting method of mobile terminal
CN108718383A (en) * 2018-04-24 2018-10-30 天津字节跳动科技有限公司 Cooperate with image pickup method, device, storage medium and terminal device
CN109361869A (en) * 2018-11-28 2019-02-19 维沃移动通信(杭州)有限公司 A kind of image pickup method and terminal
CN110224804A (en) * 2018-03-01 2019-09-10 国民技术股份有限公司 Data transfer control method, terminal, base station and computer storage medium
CN110602805A (en) * 2019-09-30 2019-12-20 联想(北京)有限公司 Information processing method, first electronic device and computer system
WO2020057661A1 (en) * 2018-09-21 2020-03-26 华为技术有限公司 Image capturing method, device, and apparatus
CN111083379A (en) * 2019-12-31 2020-04-28 维沃移动通信(杭州)有限公司 Shooting method and electronic equipment
WO2020159154A1 (en) * 2018-02-08 2020-08-06 Samsung Electronics Co., Ltd. Method for encoding images and corresponding terminals
CN111788603A (en) * 2018-02-23 2020-10-16 三星电子株式会社 Electronic device and method for correcting an image corrected with a first image processing scheme in an external electronic device with a second image processing scheme
CN111860530A (en) * 2020-07-31 2020-10-30 Oppo广东移动通信有限公司 Electronic equipment, data processing method and related device
CN111988528A (en) * 2020-08-31 2020-11-24 北京字节跳动网络技术有限公司 Shooting method, shooting device, electronic equipment and computer-readable storage medium
CN112004076A (en) * 2020-08-18 2020-11-27 Oppo广东移动通信有限公司 Data processing method, control terminal, AR system, and storage medium
CN112292847A (en) * 2018-06-25 2021-01-29 索尼公司 Image processing apparatus, mobile apparatus, method, and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013254432A (en) * 2012-06-08 2013-12-19 Canon Inc Image processing apparatus and image processing method
CN109769087A (en) * 2017-11-09 2019-05-17 中兴通讯股份有限公司 Image pickup method, device and the mobile terminal remotely taken a group photo
KR102482860B1 (en) * 2018-01-02 2022-12-30 삼성전자 주식회사 Method for processing image based on context information and electronic device using the same

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5796964A (en) * 1996-01-16 1998-08-18 International Business Machines Method for modifying an existing computer bus to enhance system performance
CN101446890A (en) * 2007-11-29 2009-06-03 株式会社瑞萨科技 Stream processing apparatus, method for stream processing and data processing system
CN101404726A (en) * 2008-10-20 2009-04-08 深圳华为通信技术有限公司 Control method, system and apparatus for far-end camera
CN103336677A (en) * 2013-06-25 2013-10-02 北京小米科技有限责任公司 Method, device and system for outputting images to display equipment
CN105493621A (en) * 2014-08-04 2016-04-13 华为技术有限公司 Terminal, server, and terminal control method
CN104284234A (en) * 2014-10-17 2015-01-14 惠州Tcl移动通信有限公司 Method and system for sharing synchronous images among plurality of terminals
WO2017107629A1 (en) * 2015-12-24 2017-06-29 努比亚技术有限公司 Mobile terminal, data transmission system and shooting method of mobile terminal
WO2020159154A1 (en) * 2018-02-08 2020-08-06 Samsung Electronics Co., Ltd. Method for encoding images and corresponding terminals
CN111788603A (en) * 2018-02-23 2020-10-16 三星电子株式会社 Electronic device and method for correcting an image corrected with a first image processing scheme in an external electronic device with a second image processing scheme
CN110224804A (en) * 2018-03-01 2019-09-10 国民技术股份有限公司 Data transfer control method, terminal, base station and computer storage medium
CN108718383A (en) * 2018-04-24 2018-10-30 天津字节跳动科技有限公司 Cooperate with image pickup method, device, storage medium and terminal device
CN112292847A (en) * 2018-06-25 2021-01-29 索尼公司 Image processing apparatus, mobile apparatus, method, and program
WO2020057661A1 (en) * 2018-09-21 2020-03-26 华为技术有限公司 Image capturing method, device, and apparatus
CN110944109A (en) * 2018-09-21 2020-03-31 华为技术有限公司 Photographing method, device and equipment
CN109361869A (en) * 2018-11-28 2019-02-19 维沃移动通信(杭州)有限公司 A kind of image pickup method and terminal
CN110602805A (en) * 2019-09-30 2019-12-20 联想(北京)有限公司 Information processing method, first electronic device and computer system
CN111083379A (en) * 2019-12-31 2020-04-28 维沃移动通信(杭州)有限公司 Shooting method and electronic equipment
CN111860530A (en) * 2020-07-31 2020-10-30 Oppo广东移动通信有限公司 Electronic equipment, data processing method and related device
CN112004076A (en) * 2020-08-18 2020-11-27 Oppo广东移动通信有限公司 Data processing method, control terminal, AR system, and storage medium
CN111988528A (en) * 2020-08-31 2020-11-24 北京字节跳动网络技术有限公司 Shooting method, shooting device, electronic equipment and computer-readable storage medium

Also Published As

Publication number Publication date
WO2022160985A1 (en) 2022-08-04
CN115514882A (en) 2022-12-23
CN115514882B (en) 2024-06-14
CN114845035A (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN114845035B (en) Distributed shooting method, electronic equipment and medium
WO2022257977A1 (en) Screen projection method for electronic device, and electronic device
CN110381195A (en) A kind of throwing screen display methods and electronic equipment
CN114697527B (en) Shooting method, system and electronic equipment
CN114697732B (en) Shooting method, shooting system and electronic equipment
CN113835649B (en) Screen projection method and terminal
CN112130788A (en) Content sharing method and device
CN114598414B (en) Time slice configuration method and electronic equipment
US11895713B2 (en) Data sharing and instruction operation control method and system
CN113395364B (en) Access method of application server and terminal
CN115225753A (en) Shooting method, related device and system
CN114201130A (en) Screen projection method and device and storage medium
WO2022156721A1 (en) Photographing method and electronic device
CN115145518A (en) Display method, electronic equipment and system
CN114928898B (en) Method and device for establishing session based on WiFi direct connection
WO2022161058A1 (en) Photographing method for panoramic image, and electronic device
CN116723415A (en) Thumbnail generation method and terminal equipment
CN116056053A (en) Screen projection method, electronic device, system and computer readable storage medium
CN111131019B (en) Multiplexing method and terminal for multiple HTTP channels
CN114567871A (en) File sharing method and device, electronic equipment and readable storage medium
CN114007202A (en) Method for establishing binding relationship and related equipment
CN114584817B (en) Screen projection method and system
WO2024159925A1 (en) Screen mirroring method, screen mirroring system, and electronic device
WO2022206769A1 (en) Method for combining content, electronic device, and system
CN115914983A (en) Data interaction method, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant