CN114697732A - Shooting method, system and electronic equipment


Info

Publication number
CN114697732A
Authority
CN
China
Prior art keywords
equipment
shooting
mobile phone
picture
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011630348.0A
Other languages
Chinese (zh)
Inventor
冯可荣 (Feng Kerong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011630348.0A (published as CN114697732A)
Priority to CN202210973885.8A (published as CN115550597A)
Priority to PCT/CN2021/143005 (published as WO2022143883A1)
Publication of CN114697732A
Legal status: Pending

Classifications

    • H04N 7/14: Television systems for two-way working
    • H04N 7/15: Conference systems
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 9/453: Help systems
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/436: Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/439: Processing of audio elementary streams
    • H04N 21/462: Content or additional data management, e.g. controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a shooting method, a shooting system, and an electronic device, and relates to the field of photography. The method includes the following steps: a first device displays a first shot picture in its display interface, the first shot picture being captured by the first device's own camera; after the first device establishes network connections with N electronic devices, the first device instructs the N electronic devices to start capturing shot pictures, where N is an integer greater than 1; if the first device acquires a second shot picture captured by a second device, the first device identifies whether the second shot picture contains a preset sounding identifier, the sounding identifier indicating that a user is speaking; if the second shot picture contains the preset sounding identifier, meaning that the user of the second device is speaking, the first device displays the second shot picture in its display interface.

Description

Shooting method, system and electronic equipment
Technical Field
The present disclosure relates to the field of photography, and in particular, to a shooting method, a shooting system, and an electronic device.
Background
Currently, many users own multiple electronic devices with a shooting function at home or at work. In some scenarios, a user may need to view the shot pictures captured by different electronic devices.
For example, in a conference or a multi-party video call, each user may shoot with one electronic device (e.g., a mobile phone). When user A speaks through mobile phone 1, the other users may wish to receive the audio data and the shot picture from mobile phone 1 on their own devices. Later, when user B speaks through mobile phone 2, the other users may wish to receive the audio data and the shot picture from mobile phone 2 instead. In such a scenario, how to flexibly and accurately switch the shot pictures of different devices onto an electronic device for display becomes an urgent problem to be solved.
Disclosure of Invention
The application provides a shooting method, a shooting system, and an electronic device, so that the electronic device can flexibly and accurately switch the shot pictures of different devices onto itself for display, improving the user experience.
To achieve this, the following technical solutions are adopted in this application:
In a first aspect, the present application provides a shooting method, including: a first device displays a first shot picture in its display interface, the first shot picture being captured by the first device's own camera; after the first device establishes network connections with N electronic devices, the first device instructs the N electronic devices to start capturing shot pictures, where N is an integer greater than 1; if the first device acquires a second shot picture captured by a second device (one of the N electronic devices), the first device identifies whether the second shot picture contains a preset sounding identifier, the sounding identifier indicating that a user is speaking; if the second shot picture contains the preset sounding identifier, meaning that the user of the second device is speaking, the first device displays the second shot picture in the display interface. At this time, the first device may continue to display the first shot picture (i.e., its own shot picture) in the display interface, or may stop displaying it.
It can be seen that, in a distributed shooting scenario where the first device is connected to multiple electronic devices, when a user inputs an audio signal to one of them (for example, the second device), that device can let the first device know this by adding the preset sounding identifier to the shot pictures it captures. The first device can then display the shot picture carrying the sounding identifier in its display interface. In this way, as different users speak into their respective devices, the first device, acting as the master device, can flexibly and accurately switch the shot picture of whichever slave device currently has voice input onto its own display, so that users can focus on the picture of the device whose user is speaking, which improves the user experience.
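The master-side behavior described above amounts to a small routing rule: inspect each incoming frame for the sounding identifier and show the frame of whichever device is currently speaking. The following is a minimal sketch of that rule; all class, field, and method names are illustrative assumptions, not the patent's implementation:

```java
/** Illustrative frame wrapper: pixel data plus the flag set by the capturing device. */
class TaggedFrame {
    final String deviceId;
    final byte[] pixels;
    final boolean soundingIdentifier; // true if the sending device detected its user speaking

    TaggedFrame(String deviceId, byte[] pixels, boolean soundingIdentifier) {
        this.deviceId = deviceId;
        this.pixels = pixels;
        this.soundingIdentifier = soundingIdentifier;
    }
}

/** Master-side routing: show the frame of whichever slave device currently has voice input. */
class MasterDisplayController {
    private volatile String activeSpeakerDevice; // device whose frames fill the second window

    void onFrameReceived(TaggedFrame frame) {
        if (frame.soundingIdentifier) {
            activeSpeakerDevice = frame.deviceId; // switch the second window to this device
        }
        if (frame.deviceId.equals(activeSpeakerDevice)) {
            renderToSecondWindow(frame);
        }
        // Frames from devices without voice input are simply not rendered.
    }

    private void renderToSecondWindow(TaggedFrame frame) {
        /* hand the pixels to the UI layer; platform-specific */
    }
}
```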
In a possible implementation, a preset button may be provided in the display interface for displaying the shot pictures of multiple devices synchronously; the button may be named, for example, dual-view mode or multi-view mode. Taking the dual-view mode as an example, before the first device acquires the second shot picture captured by the second device, the method further includes: in response to the user selecting the preset button, the first device creates a first window and a second window in the display interface, where the first window displays the first shot picture captured by the first device and the second window displays the shot pictures captured by other devices. In this case, displaying the second shot picture in the display interface specifically means: the first device displays the second shot picture in the second window and the first shot picture in the first window. Thus, after the user selects the preset button, the first device can simultaneously display its own shot picture and the shot picture of whichever other device has voice input.
In a possible implementation, in response to the user selecting the preset button, the method further includes: the first device establishes network connections with the N electronic devices. If the first device establishes a network connection with a third device first, then before acquiring the second shot picture captured by the second device, the method further includes: the first device acquires a third shot picture captured by the third device, the third device being one of the N electronic devices other than the second device; the first device displays the third shot picture in the second window and the first shot picture in the first window. That is, after the user selects the preset button, the first device may first display in the second window the shot picture of the device connected first, and later, when a shot picture with voice input arrives (e.g., the second shot picture), switch the second window to that picture.
Of course, after the user selects the preset button, the first device may also display the shooting picture of one or more other devices of the N electronic devices in the second window, which is not limited in this application.
In a possible implementation, if the second shot picture contains the preset sounding identifier, the first device displays the second shot picture in the display interface as follows: the first device first judges whether the third shot picture has been displayed in the second window for longer than a preset time; if so, the first device displays the second shot picture in the second window.
That is, if the last switch of the picture in the second window happened less than the preset time ago, the second window has not shown its current picture for long; to avoid the poor experience of frequently switching pictures in the second window, the first device may keep displaying the third shot picture of the third device. If the picture in the second window has been displayed for longer than the preset time, the first device may switch the second window to the newly received shot picture carrying the sounding identifier.
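The minimum-display-time rule can be expressed as a timestamp check performed before any window switch. A sketch under the same illustrative assumptions (the patent does not specify the preset time; MIN_DISPLAY_MS here is an assumed value):

```java
/** Anti-flicker gate for the second window: refuse switches that come too soon. */
class DebouncedSwitcher {
    private static final long MIN_DISPLAY_MS = 3_000; // assumed; the patent leaves the preset time unspecified
    private String currentDevice;
    private long lastSwitchAt;

    /** Returns true if the second window should switch to the newly speaking device. */
    synchronized boolean maySwitchTo(String speakingDevice) {
        long now = System.currentTimeMillis();
        if (speakingDevice.equals(currentDevice)) {
            return false; // already showing this device
        }
        if (now - lastSwitchAt < MIN_DISPLAY_MS) {
            return false; // last switch too recent; keep the current picture
        }
        currentDevice = speakingDevice;
        lastSwitchAt = now;
        return true;
    }
}
```

Calling a gate like this from the frame handler keeps the second window stable even when two users speak in quick alternation.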
In a possible implementation, after the first device displays the second shot picture in the second window, the method further includes: the first device acquires a third shot picture captured by the third device; if the third shot picture does not contain the sounding identifier, the first device continues to display the second shot picture in the second window and does not display the third shot picture. In this way, shot pictures from slave devices without voice input are not shown in the display interface of the first device (i.e., the master device); the master device flexibly and accurately switches onto its display only the shot pictures of slave devices that have voice input, so the user can focus on the picture of the device whose user is speaking.
In a possible implementation, after the first device acquires the third shot picture captured by the third device, the method further includes: if the third shot picture contains the sounding identifier, the first device may display the third shot picture in the second window, in which case the second window no longer displays the second shot picture of the second device; or, if the third shot picture contains the sounding identifier, the first device may create a third window in the display interface and display the third shot picture there, in which case the second window may continue to display the second shot picture of the second device.
In a possible implementation manner, the second window may include a prompt message, where the prompt message is used to prompt that the user corresponding to the second window is speaking.
In a possible implementation, if the second shot picture contains the preset sounding identifier, the first device displays the second shot picture in the display interface as follows: the first device switches the first shot picture in the display interface to the second shot picture; at this time, the first device's own shot picture is no longer displayed.
In a possible implementation, after the first device establishes network connections with the N electronic devices, the method further includes: the first device detects whether a user is inputting an audio signal to it; if so, the first device adds the sounding identifier to the first shot picture it captures; the first device then sends the first shot picture carrying the sounding identifier to the N connected electronic devices, so that they can display the first shot picture according to the sounding identifier. In this way, the slave devices connected to the first device can also display the shot picture carrying the sounding identifier in their display interfaces, so that every user in the distributed shooting scenario can focus on the picture of the device whose user is speaking.
In a possible implementation, the first device detects whether a user is inputting an audio signal to it as follows: while capturing the first shot picture with its camera, the first device, like the slave devices, uses its microphone to detect audio signals; when an audio signal is detected, the first device obtains the M first shot pictures corresponding to the detected audio signal, where M is an integer greater than 1; if the mouth-shape features in the M first shot pictures match the detected audio signal, the first device determines that a user is inputting the audio signal to it. This way of detecting voice input does not require training and storing the user's voiceprint or facial features in advance, and does not require interaction with a server, which reduces implementation complexity and cost.
In a possible implementation, obtaining the M first shot pictures corresponding to the detected audio signal specifically means: when the first device detects that the loudness of the audio signal is greater than a preset value, it obtains the M first shot pictures corresponding to that audio signal, which improves the accuracy of determining that a user is speaking to the first device.
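A possible shape for this detection logic is a loudness gate followed by a mouth-movement check over a rolling window of M frames. The sketch below is a skeleton only: the mouth-shape matching step is left as a placeholder, since the patent does not fix a particular algorithm, and M and the loudness threshold are assumed values:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Heuristic local-speaker detection: loudness gate plus mouth-movement check over M frames. */
class LocalSpeakerDetector {
    private static final int M = 10;                       // frames to correlate with the audio burst (assumed)
    private static final double LOUDNESS_THRESHOLD = 0.2;  // assumed normalized RMS threshold

    private final Deque<byte[]> recentFrames = new ArrayDeque<>(M);

    void onFrame(byte[] frame) {
        if (recentFrames.size() == M) recentFrames.removeFirst();
        recentFrames.addLast(frame); // keep a rolling window of the latest M frames
    }

    boolean isLocalUserSpeaking(double audioLoudness) {
        if (audioLoudness <= LOUDNESS_THRESHOLD) return false; // ignore faint/background sound
        if (recentFrames.size() < M) return false;             // not enough frames yet
        return mouthMovementMatchesAudio(recentFrames);
    }

    /** Placeholder: a real system would run face/landmark detection and compare lip motion
        against the audio envelope; the patent does not specify the matching algorithm. */
    private boolean mouthMovementMatchesAudio(Deque<byte[]> frames) {
        return true;
    }
}
```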
In a possible implementation, after the first device detects whether a user is inputting an audio signal to it, the method further includes: if a user is inputting an audio signal to the first device, meaning that the person speaking is the user of the first device, the first device may stop displaying the second shot picture of the second device. Of course, the first device may also continue to display it, which is not limited in this application.
In a second aspect, the present application provides a shooting method, including: after a second device establishes a network connection with a first device, the second device captures shot pictures with its camera and detects audio signals with its microphone; when the second device detects that a user is inputting an audio signal to it, the second device adds a preset sounding identifier to the captured shot picture, the sounding identifier indicating that a user is speaking; the second device then sends the shot picture carrying the sounding identifier to the first device, so that the first device can display it in its display interface. In this way, as different users speak into their respective slave devices, the first device, acting as the master device, can flexibly and accurately switch the shot picture of whichever slave device currently has voice input onto its own display, so that users can focus on that picture, which improves the user experience.
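Putting the second aspect together, the slave side reduces to: capture a frame, decide whether the local user is speaking, tag the frame, and send it. The sketch below reuses the LocalSpeakerDetector from the earlier sketch and assumes a simple length-prefixed wire format; none of this is specified by the patent:

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

/** Slave-side loop: capture a frame, detect local voice input, tag, and send to the master. */
class SlaveStreamer {
    private final LocalSpeakerDetector detector = new LocalSpeakerDetector();
    private final DataOutputStream out;

    SlaveStreamer(Socket toMaster) throws IOException {
        this.out = new DataOutputStream(toMaster.getOutputStream());
    }

    void onCapturedFrame(byte[] jpeg, double currentLoudness) throws IOException {
        detector.onFrame(jpeg);
        boolean speaking = detector.isLocalUserSpeaking(currentLoudness);
        // Assumed wire format: 1 flag byte (the sounding identifier) + length + JPEG payload.
        out.writeBoolean(speaking);
        out.writeInt(jpeg.length);
        out.write(jpeg);
        out.flush();
    }
}
```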
In a possible implementation, after the second device starts capturing shot pictures with its camera and detecting audio signals with its microphone, the method further includes: when the second device detects an audio signal, it obtains the M shot pictures corresponding to the detected audio signal, where M is an integer greater than 1; if the mouth-shape features in the M shot pictures match the detected audio signal, the second device determines that a user is inputting the audio signal to it.
In a possible implementation, obtaining the M shot pictures corresponding to the detected audio signal means: when the second device detects that the loudness of the audio signal is greater than a preset value, it obtains the M shot pictures corresponding to that audio signal.
In a possible implementation, after the second device establishes the network connection with the first device, the method further includes: the second device acquires a shot picture of the first device; if that shot picture contains the sounding identifier, the second device displays it in its own display interface. That is, the second device also displays shot pictures carrying the sounding identifier, so that users in the distributed shooting scenario can focus on the picture of the device whose user is speaking.
In a possible implementation, the display interface of the second device may also include a first window and a second window, and displaying the shot picture of the first device means: the second device displays the shot picture of the first device in the first window and its own shot picture in the second window.
In a third aspect, the present application provides an electronic device (e.g., the first device described above), comprising: a display screen, a communication module, one or more processors, one or more cameras, one or more memories, and one or more computer programs; wherein, the processor is coupled with the communication module, the display screen and the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the shooting method in any one of the above aspects.
In a fourth aspect, the present application provides an electronic device (e.g., the second device described above), comprising: a display screen, a communication module, one or more processors, one or more cameras, one or more memories, and one or more computer programs; wherein, the processor is coupled with both the communication module and the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the shooting method in any one of the above aspects.
In a fifth aspect, the present application provides a distributed shooting system, including the first device of the third aspect and the second device of the fourth aspect. The first device displays a first shot picture, captured by itself, in its display interface; after establishing network connections with N electronic devices, the first device instructs them to start capturing shot pictures, where N is an integer greater than 1; while the second device captures shot pictures with its camera, it detects audio signals with its microphone; when the second device detects that a user is inputting an audio signal to it, it adds a preset sounding identifier to the captured shot picture (e.g., a second shot picture), the sounding identifier indicating that a user is speaking; the second device sends the second shot picture carrying the sounding identifier to the first device; after the first device acquires the second shot picture, if it contains the preset sounding identifier, the first device displays it in the display interface.
In a sixth aspect, the present application provides a computer-readable storage medium, comprising computer instructions, which, when run on the first device or the second device, cause the first device or the second device to perform the shooting method of any one of the above aspects.
In a seventh aspect, the present application provides a computer program product, when running on the first device or the second device, causing the first device or the second device to execute the shooting method in any one of the above aspects.
In an eighth aspect, the present application provides a graphical user interface (GUI) on an electronic device having a display screen, a camera, a memory, and one or more processors configured to execute one or more computer programs stored in the memory, the GUI including the GUI displayed when the electronic device performs the method of the first aspect or any implementation thereof, or of the second aspect or any implementation thereof.
It can be understood that the electronic devices, the distributed shooting system, the computer-readable storage medium, and the computer program product provided above are all used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of a distributed shooting system according to an embodiment of this application;
fig. 2 is a first application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 3 is a second application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 4 is a third application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 5 is a first schematic structural diagram of an electronic device according to an embodiment of this application;
fig. 6 is a schematic architecture diagram of an operating system in an electronic device according to an embodiment of this application;
fig. 7 is a fourth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 8 is a fifth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 9 is a sixth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 10 is a seventh application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 11 is an eighth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 12 is a ninth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 13 is a tenth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 14 is an eleventh application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 15 is a twelfth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 16 is a thirteenth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 17 is a fourteenth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 18 is a fifteenth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 19 is a sixteenth application-scenario diagram of a shooting method according to an embodiment of this application;
fig. 20 is a second schematic structural diagram of an electronic device according to an embodiment of this application;
fig. 21 is a third schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The shooting method provided by the embodiments of this application can be applied to the distributed shooting system 200 shown in fig. 1. As shown in fig. 1, the distributed shooting system 200 may include a master device 101 and N slave devices 102, where N is an integer greater than 0. The master device 101 and any slave device 102 may communicate with each other by wire or wirelessly.
Illustratively, a wired connection may be established between the master device 101 and the slave device 102 using a universal serial bus (USB). For another example, the master device 101 and the slave device 102 may establish a wireless connection through global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth, wireless fidelity (Wi-Fi), near field communication (NFC), voice over Internet protocol (VoIP), or a communication protocol supporting a network slicing architecture.
One or more cameras may be disposed in both the master device 101 and the slave device 102.
The master device 101 may capture image data (also referred to as a shot picture) using its own camera, and so may the slave device 102. For example, the slave device 102 may capture image data with its own camera and transmit it to the master device 101, which displays the image data from the slave device 102. Alternatively, the master device 101 and the slave device 102 may capture image data with their own cameras simultaneously; the slave device 102 transmits its image data to the master device 101, and the master device 101 displays the image data of both devices at the same time, thereby implementing a cross-device distributed shooting function.
For example, the master device 101 (or the slave device 102) may specifically be an electronic device with a shooting function, such as a mobile phone, a tablet computer, a television (also referred to as a smart TV, smart screen, or large-screen device), a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device (e.g., a smart watch, smart glasses, a smart helmet, or a smart band), a vehicle-mounted device, or a virtual reality device, which is not limited in this embodiment.
The following takes a mobile phone as the master device 101 as an example. An application with a shooting function, such as a camera application, a live-streaming application, or a video call application, may be installed in the mobile phone. As shown in fig. 2, taking the camera application as an example, after detecting that the user opens the camera application, the mobile phone may turn on its camera to start capturing a shot picture and display it in real time in a preview frame 202 of a preview interface 201. Still taking the camera application as an example, as shown in fig. 2, the mobile phone may provide a preset button 203 in the preview interface 201 of the camera application, which triggers the mobile phone to acquire and display the shot picture captured by a slave device. For example, the preset button 203 may trigger the mobile phone to display both its own shot picture and that of the slave device in the preview interface 201. As another example, the preset button 203 may trigger the mobile phone to switch its own shot picture in the preview interface 201 to the slave device's shot picture, in which case the mobile phone may not display its own picture.
For example, after detecting that the user clicks the preset button 203, the mobile phone may search for one or more electronic devices that currently have a shooting function. For example, a server may record whether each electronic device has a shooting function, and the mobile phone may then query the server for electronic devices with a shooting function that are logged in to the same account (e.g., the same Huawei account) as the mobile phone.
Alternatively, the mobile phone may search for electronic devices located in the same Wi-Fi network as itself. The mobile phone can send a query request to each electronic device in the Wi-Fi network, triggering each device that receives the request to return a response message indicating whether that device has a shooting function. The mobile phone can thus determine, from the received response messages, which electronic devices in the current Wi-Fi network have a shooting function.
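The patent does not specify how the query request and response messages are transported. One common way to realize this kind of capability discovery on a Wi-Fi network is a UDP broadcast, sketched below with an assumed message format and port:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Broadcast a capability query and collect "I have a camera" replies (assumed protocol). */
class CameraDeviceDiscovery {
    static final int PORT = 50505; // arbitrary port chosen for this sketch

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.setSoTimeout(2_000); // stop listening after 2 s of silence

            byte[] query = "HAS_CAMERA?".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(query, query.length,
                    InetAddress.getByName("255.255.255.255"), PORT));

            byte[] buf = new byte[256];
            while (true) {
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(reply); // each responder announces its shooting capability
                } catch (java.net.SocketTimeoutException end) {
                    break;
                }
                String msg = new String(reply.getData(), 0, reply.getLength(), StandardCharsets.UTF_8);
                if (msg.startsWith("CAMERA:")) {
                    System.out.println("Found camera device at " + reply.getAddress() + " -> " + msg);
                }
            }
        }
    }
}
```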
Suppose the electronic devices with a shooting function found by the mobile phone include television 1, watch 2, and tablet computer 3. The mobile phone can take these three electronic devices as slave devices and establish a network connection with each of them. For example, the mobile phone may establish a Wi-Fi connection or a Wi-Fi P2P connection with television 1, watch 2, and tablet computer 3, or establish a mobile network connection with them directly, the mobile network including but not limited to those supporting 2G, 3G, 4G, 5G, and subsequent standard protocols.
Of course, the mobile phone may also automatically search for and connect to one or more electronic devices when the user opens the camera application or triggers a certain option in it, which is not limited in this embodiment of the application.
As a possible implementation manner, an application for managing the smart home devices (such as a television, an air conditioner, a sound box, or a refrigerator) in a home can be installed in the mobile phone. Taking the smart home application as an example, a user may add one or more smart home devices to the smart home application, so that the smart home devices added by the user are associated with a mobile phone. For example, a two-dimensional code including device information such as a device identifier may be set on the smart home device, and after the user scans the two-dimensional code using the smart home application of the mobile phone, the corresponding smart home device may be added to the smart home application, so as to establish an association relationship between the smart home device and the mobile phone. In this embodiment of the application, when one or more smart home devices added to the smart home application are online, for example, when the mobile phone detects a Wi-Fi signal sent by the added smart home device, the mobile phone may establish a network connection with the smart home device.
For example, if the mobile phone establishes a network connection with television 1 first, it may send a shooting instruction to television 1, so that television 1 starts capturing a shot picture with its own camera in response. As shown in fig. 3, television 1 may transmit the shot picture 301 it captures in real time to the mobile phone, and the mobile phone can display the shot picture 301 from television 1 in the preview interface 201 of the camera application. In this way, in response to the user clicking the preset button 203, the mobile phone can switch the shot picture of a slave device (e.g., television 1) onto the mobile phone (i.e., the master device) for display. Of course, after establishing a network connection with watch 2 (or tablet computer 3), the mobile phone may likewise instruct it to start capturing shot pictures and send them to the mobile phone, without necessarily displaying them.
Subsequently, while capturing shot pictures with their cameras, television 1, watch 2, and tablet computer 3 can detect in real time whether a user is inputting an audio signal, for example using their own microphones. Illustratively, if tablet computer 3 detects that a user is inputting an audio signal to it, the user of tablet computer 3 is speaking. At this time, as shown in fig. 4, tablet computer 3 may add the preset sounding identifier to the captured shot picture 401 and send the shot picture 401 carrying the sounding identifier to the mobile phone (i.e., the master device). After receiving the shot picture 401, the mobile phone recognizes that the sounding identifier is set in it and thus learns that the user of tablet computer 3 is speaking. As also shown in fig. 4, the mobile phone may then display the shot picture 401 from tablet computer 3 in the preview interface 201 of the camera application, and may stop displaying the shot picture from television 1 there.
That is, in a distributed shooting scenario, the slave devices connected to the master device (e.g., the above mobile phone) can detect in real time whether a user is inputting an audio signal to them. When a user speaks into a slave device (e.g., tablet computer 3), that device actively adds the preset sounding identifier to the shot pictures it captures. When the master device receives such a shot picture, it learns from the sounding identifier that a user is speaking into that slave device, and displays the picture carrying the identifier in its display interface. In this way, as different users speak into their respective slave devices, the master device can flexibly and accurately switch the shot picture of whichever slave device currently has voice input onto its own display, so that users can focus on that picture, which improves the user experience.
In addition, besides sending the shot picture carrying the sounding identifier to the mobile phone (i.e., the master device), tablet computer 3 may also send it to the other slave devices (e.g., television 1 and/or watch 2), so that they too can display it and all users in the distributed shooting scenario can focus on the picture of the device whose user is speaking.
In other embodiments, after the master device (e.g., the above-mentioned mobile phone) establishes a connection with each slave device, it may also continue to use its own camera to capture a shot picture. At this time, the mobile phone may set two windows in the preview interface 201, where one window is used to display the captured picture sent by the slave device (e.g., the tablet pc 3) that has the user uttered according to the method, and the other window displays the captured picture captured by the mobile phone (i.e., the master device), which is not limited in this embodiment of the present application.
Illustratively, taking a mobile phone as the master device 101 in the distributed shooting system 200, fig. 5 shows a schematic structural diagram of the mobile phone.
The handset may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone. In other embodiments of this application, the mobile phone may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thus improving system efficiency.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to a mobile phone. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to a mobile phone, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
In some embodiments, the handset antenna 1 is coupled to the mobile communication module 150 and the handset antenna 2 is coupled to the wireless communication module 160 so that the handset can communicate with the network and other devices via wireless communication techniques.
The mobile phone realizes the display function through the GPU, the display screen 194, the application processor and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone may include 1 or N display screens 194, where N is a positive integer greater than 1.
The mobile phone can realize shooting functions through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor and the like.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the handset may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, a phone book and the like) created in the use process of the mobile phone. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The mobile phone can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals. The mobile phone can play music or output the audio of a hands-free call through the speaker 170A.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the mobile phone receives a call or voice information, the receiver 170B can be close to the ear to receive voice.
The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal. The mobile phone may be provided with at least one microphone 170C. In other embodiments, the mobile phone may be provided with two microphones 170C, which, in addition to collecting sound signals, can implement a noise reduction function. In other embodiments, the mobile phone may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect wired headphones. It may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
Certainly, the mobile phone may further include a charging management module, a power management module, a battery, a key, an indicator, 1 or more SIM card interfaces, and the like, which is not limited in this embodiment of the present application.
The software system of the mobile phone may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiments of this application take an Android system with a layered architecture as an example to illustrate the software structure of the mobile phone. Of course, other operating systems (e.g., HarmonyOS, Linux, etc.) also fall within the scope of the claims of this application and their equivalents, as long as the functions implemented by their functional modules are similar to those in the embodiments of this application.
Fig. 6 is a block diagram of a software structure of a mobile phone according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into five layers: the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer (HAL), and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 6, applications (applications) such as call, memo, browser, contact, gallery, calendar, map, bluetooth, music, video, and short message may be installed in the application layer.
In the embodiment of the present application, an application having a shooting function, such as a camera application, a video call application, or a live broadcast application, may be installed in the application layer. Of course, when another application needs to use the shooting function, an application having the shooting function, such as a camera application, may be called to implement the shooting function.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 6, taking the camera application as an example, a camera service (CameraService) is provided in the application framework layer. The camera application can start the CameraService by calling a preset API. While running, the CameraService can interact with the Camera HAL in the hardware abstraction layer (HAL). The Camera HAL is responsible for interacting with the hardware devices (such as the camera) that implement the shooting function in the mobile phone; on the one hand, it hides the implementation details of the hardware (e.g., specific image processing algorithms), and on the other hand, it provides the Android system with interfaces for calling the hardware.
For example, the camera application may send control instructions issued by the user (e.g., preview, zoom, photograph, or record instructions) to the CameraService. On the one hand, the CameraService can pass the received control instruction to the Camera HAL, so that the Camera HAL can call the camera driver in the kernel layer accordingly, driving hardware such as the camera to capture raw image data in response to the instruction. For example, the camera may pass each frame of raw image data it captures to the Camera HAL through the camera driver at a certain frame rate. The path of a control instruction inside the operating system is shown as the control flow in fig. 6.
On the other hand, after receiving the control instruction, the CameraService can determine a shooting strategy according to it; the shooting strategy specifies the image processing tasks to be performed on the captured image data. For example, in preview mode, the CameraService may set image processing task 1 in the shooting strategy to implement face detection. As another example, if the user turns on the beauty function in preview mode, the CameraService may also set image processing task 2 in the shooting strategy to implement the beauty function. The CameraService can then send the determined shooting strategy to the Camera HAL.
After the Camera HAL receives each frame of image data acquired by the Camera, the Camera HAL can execute a corresponding image processing task on the image data according to a shooting strategy issued by the Camera service to obtain each frame of shot image after image processing. For example, Camera HAL may perform image processing task 1 on each frame of received image data according to shooting strategy 1, and obtain a corresponding shooting picture for each frame. After the shooting strategy 1 is updated to the shooting strategy 2, the Camera HAL may execute the image processing task 2 and the image processing task 3 on each frame of received image data according to the shooting strategy 2 to obtain each corresponding frame of shooting picture.
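As an illustration only (the class and task names below are assumptions, not structures defined in this application), a shooting strategy can be thought of as a list of per-frame image processing tasks that the CameraService hands to the Camera HAL:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShootingStrategy {
    // Illustrative task types corresponding to the examples above.
    public enum Task { FACE_DETECTION, BEAUTY_FILTER }

    private final List<Task> tasks = new ArrayList<>();

    public void addTask(Task task) { tasks.add(task); }

    // The Camera HAL would run these tasks on each received frame.
    public List<Task> getTasks() { return Collections.unmodifiableList(tasks); }
}
```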
Subsequently, Camera HAL may report each frame of captured image after image processing to a Camera application through Camera service, and the Camera application may display each frame of captured image in a display interface, or the Camera application may store each frame of captured image in a mobile phone in the form of a photo or a video. The process of transferring the shot picture inside the operating system can refer to the specific process of transferring the data stream in fig. 6.
In addition, the application framework layer may further include a window manager, a content provider, a view system, a resource manager, a notification manager, and the like, which is not limited in this embodiment of the present application.
For example, the window manager described above is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like. The view system described above can be used to build the display interface of an application. Each display interface may be composed of one or more controls. Generally, a control may include an interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, or a Widget. The resource manager provides various resources to the application, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables the application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without requiring user interaction, for example, to announce download completion or to provide message alerts. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or present a notification on the screen in the form of a dialog window. For example, it may prompt a text message in the status bar, emit a prompt tone, vibrate the device, or flash an indicator light.
As shown in fig. 6, the Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is located below the HAL and is the layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and the like, which is not limited in this embodiment of the present application.
Based on the software architecture of the Android system shown in fig. 6, in the embodiment of the present application, as shown in fig. 7, a device virtualization (device virtualization) application for implementing the distributed shooting function may be installed in the application layer of the mobile phone, hereinafter referred to as the DV application. The DV application may run resident in the mobile phone as a system application. Alternatively, the functions implemented by the DV application may run resident in the mobile phone in the form of a system service.
When the mobile phone needs to implement the distributed shooting function together with other electronic devices, the DV application in the mobile phone can establish a network connection between the mobile phone and one or more electronic devices serving as slave devices of the mobile phone. As also shown in fig. 7, after the DV application of the mobile phone establishes the network connection with a slave device, the DV application may obtain, based on the network connection, the shooting capability parameters of the slave device, where the shooting capability parameters are used to indicate the shooting capability of the slave device. For example, the shooting capability parameters may include the specific image processing algorithms supported by the slave device, the related hardware parameters of the camera in the slave device, and the like. Furthermore, the DV application may call a preset interface of the HAL and input the acquired shooting capability parameters into the preset interface, so as to create the HAL corresponding to the slave device in the HAL.
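Purely as an illustration (the field names are assumptions, not a format defined in this application), the shooting capability parameters a DV application receives from a slave device might be represented as follows:

```java
public class ShootingCapability {
    public String deviceId;               // identifies the slave device
    public String[] supportedAlgorithms;  // e.g. face detection, beauty filter
    public int maxPreviewWidth;           // hardware parameters of the slave camera
    public int maxPreviewHeight;
    public int maxFrameRate;
}
```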
In the embodiment of the present application, the HAL created by the DV application according to the shooting capability parameters of the slave device may be referred to as a DMSDP (Distributed Mobile Sensing Development Platform) HAL, or as a virtual Camera HAL. Unlike the conventional Camera HAL in the mobile phone, the DMSDP HAL does not correspond to actual hardware of the mobile phone, but to the slave device to which the mobile phone is currently connected. The mobile phone, as the master device, can transmit and receive data with the slave device through the DMSDP HAL, using the slave device as a virtual device of the mobile phone to cooperate in completing various services in the distributed shooting scene.
Still as shown in fig. 7, in addition to creating a corresponding DMSDP HAL for a slave device of a mobile phone in the HAL, the DV application may also send a shooting capability parameter of the slave device to the CameraService for saving, that is, register the shooting capability of the current slave device in the CameraService.
Subsequently, taking a camera application as an example, when the mobile phone runs the camera application, the CameraService can determine a shooting strategy in a shooting process in real time according to a control instruction (for example, an instruction of previewing, amplifying, recording and the like) issued by the camera application in combination with the shooting capability of the slave device. For example, the CameraService may set an image processing task that needs to be executed by the mobile phone and an image processing task that needs to be executed by the slave device in the shooting policy according to the shooting capability parameter of the slave device. Furthermore, the CameraService can send a shooting instruction corresponding to the shooting strategy to the slave device through the DMSDP HAL, and trigger the slave device to execute a corresponding image processing task.
Therefore, when the mobile phone and the slave device implement the distributed shooting function, each of them can perform the corresponding image processing on the image data according to its own shooting capability based on the shooting strategy, so that the mobile phone and the slave device cooperate to implement the distributed shooting function more efficiently and flexibly, achieving a better shooting effect in the distributed shooting scene.
In the embodiment of the present application, for example, the mobile phone 1 is taken as a master device, and the mobile phone 1 and a plurality of slave devices can perform synchronous shooting in a distributed shooting scene.
For example, if it is detected that the user opens the camera application of the mobile phone 1, the mobile phone 1 may turn on its own camera to start shooting, and, as shown in fig. 8, the mobile phone 1 may display the captured shooting picture in the preview interface 801 of the camera application. In the embodiment of the present application, a preset button 802 for shooting in synchronization with the slave devices may be provided in the preview interface 801, and the preset button 802 may be used to trigger the mobile phone 1 to display the shooting pictures of two or more electronic devices. For example, the name of the preset button 802 may be "dual-view mode", "triple-view mode", "multi-view mode", or the like. Taking the "dual-view mode" as an example, after the mobile phone 1 detects that the user selects the preset button 802 named "dual-view mode", the mobile phone 1 may be triggered to create two windows in the preview interface 801, where one window is used to display the shooting picture of the mobile phone 1 and the other window is used to display the shooting picture of another electronic device. It should be understood that if the mobile phone is wirelessly connected with N electronic devices having cameras, then when the user clicks the preset button 802 on the preview interface 801, the mobile phone 1 is triggered to create N+1 windows on the preview interface 801, where one window is used to display the shooting picture of the mobile phone 1 and the other N windows are used to display the shooting pictures of the N electronic devices. Of course, fewer than N+1 or more than N+1 windows may also be created to display the shooting pictures of the mobile phone 1 and of the N other electronic devices: if fewer than N+1 windows are created, the shooting pictures of some electronic devices may simply not be displayed; if more than N+1 windows are created, the extra windows may display other content or repeat the shooting picture of one or more of the electronic devices.
Of course, the mobile phone 1 may also set the preset button 802 for synchronous shooting with the slave device in a control center, a pull-down menu, a negative one-screen menu, or other applications (e.g., a video call application, a live broadcast application, etc.) of the mobile phone 1, which is not limited in this embodiment of the present application.
In the following embodiments, the preset button 802 named "dual-view mode" is used as an example. When the mobile phone 1 detects that the user clicks the preset button 802, the DV application of the mobile phone 1 may trigger the mobile phone 1 to search for one or more nearby electronic devices having a shooting function. Of course, the mobile phone 1 may also automatically search for one or more nearby electronic devices having a shooting function after the user opens the camera application. For example, if the DV application of the mobile phone 1 detects two electronic devices, namely the television 1 and the mobile phone 2, the DV application of the mobile phone 1 can establish network connections with the television 1 and the mobile phone 2, respectively, using them as two slave devices of the mobile phone 1. For example, the mobile phone 1 may establish a Wi-Fi P2P connection with the television 1, and the mobile phone 1 may establish a Wi-Fi P2P connection with the mobile phone 2. Further, the DV application of the mobile phone 1 can acquire the shooting capability parameter 1 of the television 1 based on the network connection with the television 1. Likewise, the DV application of the mobile phone 1 can acquire the shooting capability parameter 2 of the mobile phone 2 based on the network connection with the mobile phone 2.
Further, as shown in fig. 9, the DV application of the mobile phone 1 may create a corresponding DMSDP HAL in the HAL according to the acquired shooting capability parameter 1 and shooting capability parameter 2. For example, the DV application of the cell phone 1 may create the virtual HAL 1 corresponding to the television 1 according to the shooting capability parameter 1, and the DV application of the cell phone 1 may create the virtual HAL 2 corresponding to the cell phone 2 according to the shooting capability parameter 2. The virtual HAL 1 and the virtual HAL 2 may run as two functional modules in the DMSDP HAL. Subsequently, the mobile phone 1 can perform data interaction with the television 1 through the virtual HAL 1, and the mobile phone 1 can perform data interaction with the mobile phone 2 through the virtual HAL 2.
It can be understood that, when the mobile phone 1 detects a new slave device, a new virtual HAL may also be created in the DMSDP HAL in the manner described above. Alternatively, when the mobile phone 1 detects that the network connection with a certain slave device is disconnected, the corresponding virtual HAL may be deleted from the DMSDP HAL. Or, when the mobile phone 1 detects that the shooting capability parameter of a certain slave device has been updated, the DV application of the mobile phone 1 may update the corresponding virtual HAL according to the updated shooting capability parameter, which is not limited in this embodiment of the present application.
In other embodiments, after the DV application of the mobile phone 1 acquires the shooting capability parameters of the television 1 and the mobile phone 2, two DMSDP HALs may also be created in the HALs, where one DMSDP HAL corresponds to the television 1 and the other DMSDP HAL corresponds to the mobile phone 2, which is not limited in this embodiment.
Alternatively, after the mobile phone 1 searches for one or more nearby electronic devices having a shooting function, a corresponding device list may be displayed. In this way, the user can select one or more electronic devices, which are the slave devices of the mobile phone 1 this time, from the device list displayed by the mobile phone 1. Subsequently, the mobile phone 1 may respond to the selection operation of the user, establish a network connection with the corresponding electronic device according to the above method, and create a corresponding DMSDP HAL, which is not limited in this embodiment of the present application.
As also shown in fig. 9, a Camera HAL corresponding to the Camera of the mobile phone may also be provided in the HAL of the mobile phone 1. When detecting that the user opens the camera application, the mobile phone 1 may open its own camera to acquire image data. The Camera of the mobile phone 1 may report the acquired image data to the Camera HAL, and the Camera HAL processes the acquired image data into each corresponding frame of shot picture (i.e., shot picture 1). Further, Camera HAL may report the obtained shooting picture 1 to Camera service, and the Camera service reports the shooting picture 1 to the Camera application, so that the Camera application displays the shooting picture 1 in the display interface of the mobile phone 1.
In this embodiment of the application, after the camera application of the mobile phone 1 detects that the user clicks the preset button 802, as shown in fig. 10, the camera application may create a window 1201 and a window 1202 in the display interface. At this time, similar to the processing process of the mobile phone 1 in fig. 9, the Camera service of the mobile phone 1 may receive the shot picture 1 from the mobile phone 1 reported by the Camera HAL in real time, and the Camera service of the mobile phone 1 may report the shot picture 1 of the mobile phone 1 to the Camera application. At this time, the camera application may display the shooting screen 1 of the mobile phone 1 in the window 1201 (or the window 1202).
Meanwhile, after the mobile phone 1 establishes network connection with the current slave device, the camera application of the mobile phone 1 may also display a shot picture from the slave device in real time in another window (e.g., window 1202) of the display interface. For example, the mobile phone 1 may display a shooting screen of the first slave device establishing a network connection with the mobile phone 1 in the window 1202 by default.
Still taking the slave device of the mobile phone 1 as the television 1 and the mobile phone 2 for example, after the mobile phone 1 establishes a network connection with the television 1 and creates a corresponding virtual HAL 1 in the DMSDP HAL according to the shooting capability parameter of the television 1, the mobile phone 1 may send a shooting instruction 1 to the television 1. As also shown in fig. 9, the television 1 can acquire a photographed picture 2 in real time using a camera of the television 1 in response to the photographing instruction 1. Furthermore, the television 1 can send the shot picture 2 to the virtual HAL 1 of the mobile phone 1, and after receiving the shot picture 2 from the television 1, the virtual HAL 1 can report the shot picture 2 to the camera application through the CameraService of the mobile phone 1. Then, as shown in fig. 10, the camera application may display a photographic screen 2 from the television 1 in a window 1202. In this way, when the user starts the synchronous shooting function of the "dual-view mode" in the mobile phone 1, the mobile phone 1 can synchronously display the shooting picture of the mobile phone itself and the shooting picture of the first slave device connected to the mobile phone 1 in the display interface of the mobile phone 1.
Of course, the mobile phone 1 may be configured to display the shooting screen of the second (or nth) slave device connected to the mobile phone 1 in the window 1202. Alternatively, the mobile phone 1 may be configured to display a shooting screen of a specific slave device in the window 1202. Alternatively, the mobile phone 1 may display all the shot pictures of a plurality of slave devices connected to the mobile phone 1 on the display interface. Of course, the mobile phone 1 may not display any shooting picture of the slave device in the window 1202, which is not limited in this embodiment.
On the other hand, after the network connection between the mobile phone 1 and the television 1 is established, the mobile phone 1 also establishes a network connection with the mobile phone 2 and creates the corresponding virtual HAL 2 in the DMSDP HAL according to the shooting capability parameter of the mobile phone 2. However, since the mobile phone 2 is not the slave device that first established a network connection with the mobile phone 1, that is, the shooting picture of the mobile phone 2 does not need to be displayed in the window 1202, the mobile phone 1 may not send a shooting instruction to the mobile phone 2. In this case, the mobile phone 2 does not turn on its camera in response to a shooting instruction from the mobile phone 1 to acquire the corresponding shooting picture in real time, and the mobile phone 1 does not receive any shooting picture from the mobile phone 2.
Alternatively, in another embodiment, as shown in fig. 11, a control module 1 may be further disposed in the DMSDP HAL of the mobile phone 1, and the control module 1 may be configured to determine which slave device's shooting picture is sent to the CameraService. For example, when creating the virtual HALs corresponding to the respective slave devices, the DV application of the mobile phone 1 may store the correspondence between different slave devices and different virtual HALs in the control module 1 of the DMSDP HAL. As shown in table 1, the television 1 and the mobile phone 2 are the two slave devices to which the mobile phone 1 is currently connected. The television 1 corresponds to the virtual HAL 1; a camera A is disposed in the television 1, and the shooting capability parameter of the camera A is parameter 1. The mobile phone 2 corresponds to the virtual HAL 2; two cameras (i.e., a camera B and a camera C) are disposed in the mobile phone 2, the shooting capability parameter of the camera B is parameter 2, and the shooting capability parameter of the camera C is parameter 3.
TABLE 1

Slave device      Virtual HAL      Camera      Shooting capability parameter
Television 1      Virtual HAL 1    Camera A    Parameter 1
Mobile phone 2    Virtual HAL 2    Camera B    Parameter 2
                                   Camera C    Parameter 3
Then, if the mobile phone 1 first establishes a network connection with the television 1, the control module 1 may set a corresponding relationship between the window 1202 and the television 1, that is, at this time, a shooting picture of the television 1 needs to be displayed in the window 1202. Further, the mobile phone 1 can send a shooting command to the television 1 according to the above method, and instruct the television 1 to send the shooting picture 2 to the virtual HAL 1 of the mobile phone 1. As shown in fig. 11, after receiving the captured image 2 from the television 1, the virtual HAL 1 of the mobile phone 1 may send the captured image 2 to the control module 1, and since the control module 1 has set the correspondence between the window 1202 and the television 1, after recognizing that the captured image 2 is the captured image from the television 1, the control module 1 may report the captured image 2 to the camera application through the CameraService, and the camera application displays the captured image 2 from the television 1 in the window 1202.
Also, as shown in fig. 11, after the mobile phone 1 establishes a network connection with the mobile phone 2 and creates a corresponding virtual HAL 2 in the DMSDP HAL, the mobile phone 1 may also send a shooting instruction (i.e., shooting instruction 2) to the mobile phone 2. The mobile phone 2 can respond to the shooting instruction 2 and use the camera of the mobile phone 2 to obtain the corresponding shooting picture 3 in real time. Further, the mobile phone 2 can transmit the photographed picture 3 to the virtual HAL 2 of the mobile phone 1. After receiving the shot picture 3 from the mobile phone 2, the virtual HAL 2 may send the shot picture 3 to the control module 1, and since the control module 1 has set the correspondence between the window 1202 and the television 1, when the control module 1 recognizes that the shot picture 3 is not the shot picture from the television 1, the control module 1 does not report the shot picture 3 to the CameraService. For example, the control module 1 may discard or delete the shot picture 3 transmitted by the mobile phone 2. In this way, the camera application does not receive the shot picture 3 from the mobile phone 2, and does not display the shot picture 3 on the display interface of the mobile phone 1.
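The routing role of the control module described in the preceding paragraphs can be sketched as follows (a hypothetical illustration; the class and method names are assumptions, not code from this application):

```java
import java.util.HashMap;
import java.util.Map;

public class ControlModule {
    // Correspondence between display windows and slave devices, e.g. "window1202" -> "television1".
    private final Map<String, String> windowToDevice = new HashMap<>();

    public void bindWindow(String windowId, String deviceId) {
        windowToDevice.put(windowId, deviceId);
    }

    // Only frames from the slave device currently bound to the window are
    // reported to the CameraService; other frames are discarded.
    public boolean shouldReport(String windowId, String sourceDeviceId) {
        return sourceDeviceId.equals(windowToDevice.get(windowId));
    }
}
```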
Illustratively, when transmitting a shooting picture to the master device (e.g., the mobile phone 1), each slave device (e.g., the television 1 and the mobile phone 2) may transmit its own shooting picture in the form of data packets.
As shown in fig. 12, the data packet of the above-described photographed picture generally includes a header (head) and a data (data) portion. Wherein the data (data) portion includes specific image data in the photographic picture, such as YUV (Y for brightness and U and V for chroma) data. The header (head) includes parameters related to the data part, such as a name of a photographed picture, photographing time, codec information, and the like. In this embodiment, the slave device may carry its own device identifier in the header of the data packet. For example, the tv 1 may carry the device identifier of the tv 1 in the header of the packet corresponding to the photographed picture 2, and the mobile phone 2 may carry the device identifier of the mobile phone 2 in the header of the packet corresponding to the photographed picture 3. In this way, the control module 1 of the DMSDP HAL in the mobile phone 1 can recognize from which slave device the shot picture in the currently received packet is from by analyzing the packet header of the packet.
Alternatively, the television 1 may also carry the camera identification of the camera used to take the picture 2 in the header of the corresponding packet. In this way, the control module 1 of the DMSDP HAL in the mobile phone 1 can also know the specific camera used when the television 1 shoots the picture 2 by analyzing the packet header of the data packet.
Of course, the control module 1 of the DMSDP HAL in the mobile phone 1 may also determine the captured picture acquired from the virtual HAL 1 as the captured picture of the television 1, and correspondingly, may determine the captured picture acquired from the virtual HAL 2 as the captured picture of the mobile phone 2. Further alternatively, since the network connection 1 between the mobile phone 1 and the television 1 may be different from the network connection 2 between the mobile phone 1 and the mobile phone 2, the control module 1 of the DMSDP HAL in the mobile phone 1 may determine the shooting picture acquired from the network connection 1 as the shooting picture of the television 1, and may determine the shooting picture acquired from the network connection 2 as the shooting picture of the mobile phone 2.
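Illustratively, the packet layout described above might look like the following (field names are assumptions made for this sketch; the utteranceFlag field anticipates the utterance identifier discussed later):

```java
public class PicturePacket {
    public static class Header {
        public String pictureName;    // name of the shooting picture
        public long captureTimeMs;    // shooting time
        public String codecInfo;      // encoding/decoding information
        public String deviceId;       // which slave device sent this frame
        public String cameraId;       // optionally, which camera on that device
        public boolean utteranceFlag; // set when the slave detects its user speaking
    }
    public Header header;
    public byte[] yuvData; // image payload: Y = luminance, U/V = chrominance
}
```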
In the embodiment of the present application, still taking the television 1 and the mobile phone 2 as the slave devices of the mobile phone 1 as an example, while the television 1 and the mobile phone 2 use their own cameras to capture shooting pictures and send the shooting pictures to the mobile phone 1, each of them can also use its microphone to collect the audio signal input by the user, and then determine, based on the collected audio signal, whether a user is speaking into the slave device.
Taking the mobile phone 2 as an example, as shown in fig. 13, the architecture of the operating system in the mobile phone 2 is similar to that in the mobile phone 1. The application layer of the mobile phone 2 may install a proxy application, and the proxy application is used for receiving and sending data with other devices (for example, the mobile phone 1 or the television 1). Alternatively, the proxy application may be run in the handset 2 in the form of an SDK (Software Development Kit) or a system service. The CameraService may also be set in the application framework layer of the mobile phone 2. Camera HAL may be provided in HAL of mobile phone 2, and Camera HAL of mobile phone 2 corresponds to a Camera of mobile phone 2.
After the mobile phone 2 establishes network connection with the mobile phone 1, the proxy application of the mobile phone 2 may receive the shooting instruction sent by the mobile phone 1. In response to the shooting instruction, the agent application of the mobile phone 2 may call the Camera of the mobile phone 2 to capture a corresponding shooting picture (for example, the shooting picture 3) through its Camera service and Camera HAL. The Camera HAL of the mobile phone 2 can receive the shooting picture 3 sent by the Camera in real time, and further the Camera HAL of the mobile phone 2 can send the received shooting picture 3 to the agent application through the Camera service, and the agent application sends the shooting picture 3 of the mobile phone 2 to the virtual HAL 2 corresponding to the mobile phone 2 in the mobile phone 1, so that the mobile phone 1 can obtain the shooting picture of the mobile phone 2 in real time. Of course, after the Camera HAL of the mobile phone 2 receives the shot picture 3, the Camera HAL may also perform related image processing (for example, processing such as beautifying and focusing) on the shot picture 3, and then send the shot picture after the image processing to the agent application, which is not limited in this embodiment of the application.
As also shown in fig. 13, the operating system of the mobile phone 2 may also include related modules for implementing the audio function. For example, AudioRecord and AudioFlinger may be provided in the application framework layer of the mobile phone 2, and an Audio HAL may be provided in the HAL of the mobile phone 2. The Audio HAL corresponds to the audio input/output devices, such as the microphone and speaker, of the mobile phone 2. For example, the microphone of the mobile phone 2 may report the collected audio signal to the AudioFlinger. The AudioFlinger is used to perform processing such as mixing, resampling, and sound effect setting on the collected audio signal. The AudioRecord is used to obtain the corresponding audio signal from the AudioFlinger for audio recording. The AudioRecord and the CameraService may interact with the proxy application through a unified interface (e.g., the HwMediaProject interface). For example, the AudioRecord may report the recorded audio signal to the proxy application through the HwMediaProject interface, and the CameraService may report the shooting picture to the proxy application through the HwMediaProject interface.
In the embodiment of the present application, while the mobile phone 2 uses its camera to capture the shooting picture and sends the shooting picture to the mobile phone 1, the mobile phone 2 can also detect, through its microphone, whether the user is inputting an audio signal to the mobile phone 2. As also shown in fig. 13, when the microphone of the mobile phone 2 detects an audio signal, it may transmit the detected audio signal to the Audio HAL of the mobile phone 2, and the Audio HAL of the mobile phone 2 may transmit the received audio signal to the AudioFlinger. Further, the AudioRecord may obtain the audio signal from the AudioFlinger and report it to the proxy application through the HwMediaProject interface. Upon receiving the audio signal detected by the microphone, the agent application may determine whether the audio signal is a sound input by the user to the mobile phone 2.
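As a point of reference, the following is a minimal sketch of microphone capture through the framework's AudioRecord class named above (the sample rate and buffer size are illustrative, and the RECORD_AUDIO permission is required):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicCapture {
    public static AudioRecord start() {
        int sampleRate = 16000; // illustrative value
        int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        recorder.startRecording();
        return recorder; // the caller reads PCM frames via recorder.read(...)
    }
}
```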
Illustratively, voiceprint features of one or more legitimate users of the mobile phone 2 may be stored in the mobile phone 2. When the agent application of the mobile phone 2 acquires the audio signal currently detected by the microphone, it can extract the voiceprint features in the audio signal. Further, the proxy application of the mobile phone 2 may compare whether the extracted voiceprint features are the voiceprint features of a legitimate user. If they are, the proxy application of the mobile phone 2 can determine that the currently detected audio signal is a sound input by the user to the mobile phone 2. Otherwise, the proxy application of the mobile phone 2 may determine that the currently detected audio signal is not a sound input by the user to the mobile phone 2; it may be noise, or a sound the user input to another device.
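A hypothetical sketch of such a voiceprint comparison, assuming some model has already turned both the live audio and the stored legitimate-user samples into fixed-length embeddings (the embedding step itself is outside this sketch):

```java
public class VoiceprintChecker {
    // Returns true if the audio embedding matches any stored legitimate user.
    public boolean isLegitimateUser(float[] audio, float[][] storedUsers, float threshold) {
        for (float[] user : storedUsers) {
            if (cosine(audio, user) >= threshold) return true;
        }
        return false;
    }

    private static float cosine(float[] a, float[] b) {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (float) (dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9));
    }
}
```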
Or, as shown in fig. 13, the agent application of the mobile phone 2 may also receive the shooting picture reported by the CameraService while receiving the audio signal reported by the AudioRecord. Then, the agent application of the mobile phone 2 may determine, by combining the received shooting picture and audio signal, whether the currently received audio signal is a sound input by the user to the mobile phone 2. For example, as shown in fig. 14, for an audio signal and a shooting picture received at the same time, the agent application may use a face recognition algorithm to detect whether the corresponding shooting picture includes a face image. If a face image is included, the agent application may further extract the mouth shape features in the face image. Further, the agent application may determine whether a user is inputting sound to the mobile phone 2 by comparing whether the audio signal at the same time matches the mouth shape features. For example, if the agent application detects that the audio signal matches the corresponding mouth shape features for a consecutive 500 ms window, the agent application may determine that the currently detected audio signal is the sound input by the user to the mobile phone 2, that is, that the user of the mobile phone 2 is speaking.
Or, in addition to comparing whether the audio signal at the same time matches the mouth shape features, the agent application may also detect whether the loudness of the audio signal is greater than a preset value. For example, if the agent application detects that the audio signal matches the corresponding mouth shape features for a consecutive 500 ms window, and the loudness of the audio signal within that 500 ms is greater than the preset value, the agent application may determine that the currently detected audio signal is the sound input by the user to the mobile phone 2.
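Combining the two checks above, the per-frame decision logic might look like the following (a sketch under the stated assumptions; the mouth-shape match and loudness measurement are taken as already-computed inputs):

```java
public class SpeakerDetector {
    private static final long WINDOW_MS = 500; // the consecutive window from the text

    private long matchedSinceMs = -1;

    // Called once per synchronized audio/video frame.
    public boolean isUserSpeaking(long nowMs, boolean mouthShapeMatches,
                                  double loudness, double loudnessThreshold) {
        if (mouthShapeMatches && loudness > loudnessThreshold) {
            if (matchedSinceMs < 0) matchedSinceMs = nowMs;
            return nowMs - matchedSinceMs >= WINDOW_MS; // sustained for 500 ms
        }
        matchedSinceMs = -1; // any mismatch resets the window
        return false;
    }
}
```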
Of course, the mobile phone 2 may determine whether the currently detected audio signal is a sound input by the user to the mobile phone 2 by other methods. For example, the mobile phone 2 may request the server to determine whether the currently detected audio signal is a sound input by the user to the mobile phone 2, which is not limited in this embodiment.
In the embodiment of the present application, if the proxy application of the mobile phone 2 determines that the currently received audio signal is the sound input by the user to the mobile phone 2, it indicates that, among the plurality of slave devices of the mobile phone 1, a user is speaking while using the mobile phone 2. At this time, as shown in fig. 15, the agent application of the mobile phone 2 may add a preset utterance identifier to the shooting picture reported by the CameraService (i.e., the shooting picture 3), and then send the shooting picture 3 with the utterance identifier added to the virtual HAL 2 of the mobile phone 1. For example, the utterance identifier may be a specific flag bit, and the proxy application of the mobile phone 2 may add the utterance identifier to the header of the data packets of the shooting picture 3 received in real time. Subsequently, when the proxy application of the mobile phone 2 determines that no audio signal is input, or that the currently received audio signal is not a sound input by the user to the mobile phone 2, the proxy application of the mobile phone 2 may stop adding the utterance identifier to the data packets of the corresponding shooting picture 3.
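Reusing the hypothetical PicturePacket layout sketched earlier, the proxy application's tagging behavior could be as simple as:

```java
public class UtteranceTagger {
    private volatile boolean userSpeaking = false;

    // Updated by the voiceprint / mouth-shape detection described above.
    public void onVoiceDetectionResult(boolean speaking) {
        userSpeaking = speaking;
    }

    // Called for every outgoing frame; sets or clears the flag bit in the header.
    public void tagOutgoing(PicturePacket packet) {
        packet.header.utteranceFlag = userSpeaking;
    }
}
```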
As shown in fig. 15, the virtual HAL 2 of the mobile phone 1 can receive the data packets of the shooting picture 3 from the mobile phone 2 and then transmit them to the control module 1 of the DMSDP HAL. If the control module 1 of the DMSDP HAL parses the utterance identifier out of a data packet, indicating that a user was inputting voice to the mobile phone 2 when the shooting picture 3 was captured, the control module 1 may modify the correspondence between the window 1202 and the television 1 into a correspondence between the window 1202 and the mobile phone 2. Further, the control module 1 of the DMSDP HAL may report the shooting picture 3 of the mobile phone 2 to the CameraService of the mobile phone 1 and stop reporting the shooting picture 2 of the television 1. That is, by parsing the utterance identifier in the shooting pictures, the control module 1 of the DMSDP HAL in the mobile phone 1 can determine the slave device whose user is speaking, and then send the shooting picture of that slave device to the CameraService.
For example, when the packet header of a data packet includes the utterance identifier, the control module 1 of the DMSDP HAL may send the image data in the data part of that data packet to the CameraService of the mobile phone 1. Alternatively, the control module 1 of the DMSDP HAL may send both the image data of the data part and the utterance identifier to the CameraService of the mobile phone 1.
After the CameraService of the mobile phone 1 reports the shooting picture of the mobile phone 2 to the camera application, as shown in fig. 16, the camera application may display the received shooting picture 3 of the mobile phone 2 in the window 1202; at this time, the shooting picture 2 of the television 1 originally displayed in the window 1202 is switched to the shooting picture 3 of the mobile phone 2. Also, the camera application can still display the shooting picture 1 taken by the mobile phone 1 itself in the window 1201. That is, the mobile phone 1, as the master device, can display both the shooting picture taken by the mobile phone 1 itself and the shooting picture taken by the slave device whose user is speaking. In some embodiments, as also shown in fig. 16, if the camera application receives the shooting picture 3 of the mobile phone 2 carrying the utterance identifier, the camera application may further display a first prompt message 1601 in the window 1202 for prompting the user that the user of the mobile phone 2 corresponding to the window 1202 is speaking. The first prompt message 1601 may be text, an icon, an animation, a voice, or the like, which is not limited in this embodiment.
In addition, the mobile phone 2 can send the shot picture to the mobile phone 1 and simultaneously send the detected audio signal to the mobile phone 1 in real time. Subsequently, the mobile phone 1 can display the shooting picture of the mobile phone 2 in the window 1202 and play the audio received by the mobile phone 2. In this way, in a scene such as a conference or a video call, the mobile phone 1 as a master device can automatically present a user with a captured screen and audio contents of a slave device used by a speaking user.
In some embodiments, if the agent application of the mobile phone 2 determines that the currently received audio signal is not a sound input by the user to the mobile phone 2, or if the mobile phone 2 detects no audio signal, the mobile phone 2 does not need to add the utterance identifier to its shooting picture. In that case, after receiving the shooting picture sent by the mobile phone 2, the control module 1 of the DMSDP HAL in the mobile phone 1 does not report the shooting picture of the mobile phone 2 to the CameraService of the mobile phone 1, and the camera application does not display the shooting picture of the mobile phone 2 on the display interface.
In some embodiments, the television 1, as a slave device of the mobile phone 1, may also detect, according to the method described above, whether a user inputs an audio signal to the television 1. If the television 1 detects that a user inputs an audio signal to the television 1, the television 1 may also send a shooting picture with the utterance identifier added to the mobile phone 1 according to the above-described method. If the mobile phone 1 recognizes that the utterance identifier is added to the shooting picture of the television 1 but not to the shooting picture of the mobile phone 2, the mobile phone 1 may continue to display the shooting picture of the television 1 in the window 1202.
If the mobile phone 1 recognizes that the utterance identifier is added both to the shooting picture of the television 1 and to the shooting picture of the mobile phone 2, then, because the control module 1 of the DMSDP HAL in the mobile phone 1 records the correspondence between the television 1 and the window 1202, the control module 1 of the DMSDP HAL can continue to report the shooting picture of the television 1 to the CameraService of the mobile phone 1 and not report the shooting picture of the mobile phone 2. In this way, the camera application can continue to display the shooting picture of the television 1 in the window 1202.
Or, if the mobile phone 1 recognizes that the utterance identifier is added both to the shooting picture of the television 1 and to the shooting picture of the mobile phone 2, the control module 1 of the DMSDP HAL in the mobile phone 1 may report the shooting pictures of both slave devices to the CameraService. In this case, after receiving the shooting picture of the television 1 and the shooting picture of the mobile phone 2, the camera application may display the shooting picture of the television 1 in the window 1202, create a new window in the current display interface, and display the shooting picture of the mobile phone 2 in the newly created window. In this way, the shooting picture of each slave device that detects a user utterance can be displayed in the display interface of the master device.
In some embodiments, still taking the example in which the mobile phone 1 switches the shooting picture of the television 1 in the window 1202 to the shooting picture of the mobile phone 2, after the control module 1 of the DMSDP HAL in the mobile phone 1 updates the correspondence between the window 1202 and the television 1 to the correspondence between the window 1202 and the mobile phone 2, it may record the current time as the time at which the picture in the window 1202 was switched. Subsequently, if the control module 1 of the DMSDP HAL in the mobile phone 1 receives a shooting picture with the utterance identifier sent by another slave device (e.g., the television 1), the control module 1 of the DMSDP HAL may determine whether the time since the shooting picture in the window 1202 was last switched exceeds a preset time (e.g., 3 seconds). If the preset time has not been exceeded, indicating that the shooting picture in the window 1202 was switched only a short time ago, the mobile phone 1 may continue to display the shooting picture of the mobile phone 2 in the window 1202 to avoid the poor user experience caused by frequently switching the shooting picture in the window 1202. If the preset time has been exceeded, the mobile phone 1 may switch the shooting picture of the mobile phone 2 displayed in the window 1202 to the currently received shooting picture carrying the utterance identifier.
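The debounce behavior in this paragraph can be sketched as follows (hypothetical names, with 3 seconds as in the example above):

```java
public class WindowSwitcher {
    private static final long MIN_SWITCH_INTERVAL_MS = 3000; // preset time from the text

    private String currentDeviceId;
    private long lastSwitchMs;

    // Returns true if the window content was switched to the new device.
    public boolean trySwitch(String newDeviceId, long nowMs) {
        if (newDeviceId.equals(currentDeviceId)) return false;           // already shown
        if (nowMs - lastSwitchMs < MIN_SWITCH_INTERVAL_MS) return false; // too soon, avoid flicker
        currentDeviceId = newDeviceId;
        lastSwitchMs = nowMs;
        return true;
    }
}
```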
It can be seen that, in the above-described distributed shooting scenario, after the mobile phone 1 (i.e., the master device) establishes network connections with the slave devices (e.g., the above-described television 1 and mobile phone 2), if a certain slave device detects that a user is inputting an audio signal to it, that slave device can notify the master device to display its shooting picture by adding the utterance identifier to the shooting picture. Therefore, when different users use their respective slave devices to input audio signals, the master device can flexibly and accurately switch to displaying the shooting picture captured by the slave device currently receiving the user's voice input, so that users can focus on the shooting picture of the slave device being spoken into, improving the user experience.
Of course, after a slave device detects that a user inputs an audio signal to it, the slave device may also notify the master device to display the shooting picture of the slave device in other manners, which is not limited in this embodiment of the present application. For example, after the mobile phone 2 detects that the currently received audio signal is a sound input by the user to the mobile phone 2, the mobile phone 2 may send a first indication message to the mobile phone 1, where the first indication message is used to indicate that the user using the mobile phone 2 is speaking. Then, after receiving the first indication message sent by the mobile phone 2, the mobile phone 1 may display the subsequently received shooting picture of the mobile phone 2 in the display interface. Subsequently, if the mobile phone 2 does not receive an audio signal within a preset time, the mobile phone 2 may send a second indication message to the mobile phone 1, where the second indication message is used to indicate that the user using the mobile phone 2 has stopped speaking. Then, after receiving the second indication message sent by the mobile phone 2, the mobile phone 1 may stop displaying the shooting picture of the mobile phone 2 on the display interface. Alternatively, if the mobile phone 1 receives a first indication message sent by another slave device while displaying the shooting picture of the mobile phone 2, the mobile phone 1 may display the shooting picture of that other slave device and stop displaying the shooting picture of the mobile phone 2.
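For this alternative signaling, the two indication messages could be modeled as a simple pair of message types (the names are illustrative only, not defined in this application):

```java
public enum SpeakingIndication {
    USER_SPEAKING,  // first indication message: the user of this slave device is speaking
    USER_STOPPED    // second indication message: the user has stopped speaking
}
```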
In some embodiments, in addition to the mobile phone 1 (i.e., the master device) displaying, in its display interface, the shooting picture of the slave device receiving the user's voice input (e.g., the mobile phone 2 described above), other slave devices (e.g., the television 1) can also display the shooting picture of that slave device in their display interfaces.
Still taking an example that the mobile phone 2 detects that the user inputs the audio signal to the mobile phone 2, the mobile phone 2 may send the shot picture carrying the sounding marker to the mobile phone 1, and may also send the shot picture carrying the sounding marker to the television 1. As shown in fig. 17, the architecture of the operating system in the television 1 is similar to that in the handset 2. The application layer of the television 1 may install a proxy application, where the proxy application is used for sending and receiving data with other devices (for example, the mobile phone 1 or the mobile phone 2). The CameraService may also be set in the application framework layer of the television 1. Camera HAL may be provided in HAL of television 1, and Camera HAL of television 1 corresponds to a Camera of television 1. Furthermore, the application framework layer of the tv 1 may be provided with AudioRecord and AudioFlinger, and the HAL of the tv 1 may be provided with Audio HAL. The Audio HAL corresponds to Audio input/output devices such as a microphone and a speaker of the television 1.
The process of the television 1 serving as the slave device of the mobile phone 1 to send the shot picture to the mobile phone 1 is similar to the process of the mobile phone 2 to send the shot picture to the mobile phone 1, and the process of the television 1 detecting whether the user inputs the audio signal to the television 1 is similar to the process of the mobile phone 2 detecting whether the user inputs the audio signal to the mobile phone 2, so details are omitted here.
In contrast, as also shown in fig. 17, a camera application and a DV application can also be installed in the application layer of the television 1. The television 1 can establish network connections not only with the mobile phone 1 but also with the mobile phone 2. Furthermore, the television 1 can acquire the shooting capability parameters of the mobile phone 1 and the shooting capability parameters of the mobile phone 2. For example, the mobile phone 1 may send its own shooting capability parameters and the shooting capability parameters of the mobile phone 2 acquired by the mobile phone 1 to the DV application of the television 1. Further, the DV application of the television 1 may create a corresponding DMSDP HAL in the HAL of the television 1. For example, the DV application of the television 1 may create, in the DMSDP HAL, a virtual HAL 3 corresponding to the mobile phone 1 and a virtual HAL 4 corresponding to the mobile phone 2. Subsequently, the television 1 can receive data sent from the mobile phone 1 through the virtual HAL 3 and data sent from the mobile phone 2 through the virtual HAL 4. Similar to the control module 1 in the DMSDP HAL of the mobile phone 1, a control module (hereinafter referred to as control module 2) may be disposed in the DMSDP HAL of the television 1, and the control module 2 is used to determine which device's shooting picture is sent to the CameraService of the television 1.
Subsequently, as also shown in fig. 17, if another slave device of the mobile phone 1 (e.g., the mobile phone 2) detects that a user is inputting an audio signal to the mobile phone 2, the mobile phone 2 may also send the shooting picture carrying the utterance identifier (e.g., the shooting picture 3 described above) to the virtual HAL 4 of the television 1. Further, the virtual HAL 4 may report the received shooting picture 3 to the control module 2. Similar to the working principle of the control module 1, after the control module 2 receives the shooting picture 3 from the mobile phone 2, if it parses out the utterance identifier carried in the shooting picture, the control module 2 can report the shooting picture 3 of the mobile phone 2 to the CameraService of the television 1. Subsequently, the CameraService of the television 1 may report the shooting picture 3 of the mobile phone 2 to the camera application of the television 1, and the camera application of the television 1 displays the shooting picture 3 of the mobile phone 2 in the display interface of the television 1. In this way, the television 1, as a slave device of the mobile phone 1, can also present the shooting picture of the slave device (e.g., the mobile phone 2) to which the user input the voice.
For example, as shown in fig. 18 (a), after receiving the shooting picture 3 of the mobile phone 2 reported by the CameraService, the camera application of the television 1 may display only the shooting picture 3 of the mobile phone 2 in the display interface. At this time, if the CameraService of the television 1 receives the television's own shooting picture (for example, the shooting picture 2) reported by the Camera HAL of the television 1, the CameraService of the television 1 may report the shooting picture 2 to the proxy application of the television 1, and the proxy application of the television 1 sends the shooting picture 2 of the television 1 to the mobile phone 1. That is to say, the CameraService of the television 1 does not need to report the shooting picture 2 of the television 1 to the camera application of the television 1, and the camera application of the television 1 does not display the television's own shooting picture 2 on the display interface.
Or, in some scenes, the CameraService of the television 1 may receive the shot picture 2 of the television 1 reported by the Camera HAL of the television 1, or may receive the shot picture 3 of the mobile phone 2 reported by the DMSDP HAL of the television 1. At this time, the camera service of the television 1 may report the shooting pictures of both the devices to the camera application of the television 1. Furthermore, as shown in fig. 18 (b), the camera application of the television 1 may also create two windows (i.e., a window 1801 and a window 1802) in its display interface, and at this time, the camera application of the television 1 may display the captured picture 3 of the cell phone 2 in one of the windows (e.g., the window 1801), and the camera application of the television 1 may display the captured picture 2 of the television 1 itself in the other window (e.g., the window 1802). In this way, the television 1 may display a captured image of the slave device to which the user has input the voice while displaying a captured image captured by the television 1 itself.
It should be noted that the above embodiment takes the television 1 and the mobile phone 2 as the slave devices of the mobile phone 1, and when the user inputs voice to the mobile phone 2, the mobile phone 2 may transmit its shooting picture to the mobile phone 1 and the television 1 for display. It can be understood that the mobile phone 2 may likewise receive and display the shooting pictures of other slave devices (for example, the television 1) according to the above method, which is not limited in this embodiment of the present application.
In other embodiments, the mobile phone 1 as a main device may also use a microphone to collect an audio signal input by the user while using its own camera to collect a shot picture, and then determine whether the user is using the mobile phone 1 to generate a sound based on the collected audio signal. For the method for acquiring the audio signal by the mobile phone 1 and determining whether the user inputs the audio signal to the mobile phone 1 based on the acquired audio signal, reference may be made to the method for acquiring the audio signal by the mobile phone 2 and determining whether the user inputs the audio signal to the mobile phone 2 based on the acquired audio signal in the above embodiment, which is not repeated herein.
For example, if the mobile phone 1 determines that the user is currently inputting an audio signal to the mobile phone 1, the mobile phone 1 may continue to display its shooting picture in the window 1201 in which the shooting picture of the mobile phone 1 was originally displayed, as shown in (a) of fig. 19. At this time, the mobile phone 1 may not display any slave device's shooting picture in the window 1202. Of course, the mobile phone 1 may also display its shooting picture in full screen. Alternatively, as shown in fig. 19 (b), while the mobile phone 1 continues to display its shooting picture in the window 1201, the mobile phone 1 may display the shooting picture of a certain slave device in the window 1202. For example, the mobile phone 1 may continue to display in the window 1202 the shooting picture 2 of the television 1 that was displayed before the user input the audio signal to the mobile phone 1.
In addition, if the mobile phone 1 determines that the user is currently inputting an audio signal to the mobile phone 1, the mobile phone 1 may also add the utterance identifier to its shooting picture according to the above method. Further, the mobile phone 1 can send the shooting picture carrying the utterance identifier to the slave devices of the mobile phone 1, namely the television 1 and the mobile phone 2. In this way, the television 1 and the mobile phone 2 can display the shooting picture of the mobile phone 1 on their own display interfaces according to the above method. That is to say, in a distributed shooting scene, when a user inputs an audio signal to any one of the master device and the slave devices, that device can notify the other devices to display its shooting picture by adding the utterance identifier to the shooting picture, so that the users of the master device and the slave devices can all focus on the shooting picture of the device receiving the user's voice input, improving the user experience.
It can be understood that, although the foregoing embodiment takes the mobile phone 1 as the master device in the distributed shooting scene and the television 1 and the mobile phone 2 as the slave devices of the mobile phone 1 as an example, the master device and the slave devices in the distributed shooting scene may be any electronic devices having the shooting function described above, which is not limited in this embodiment of the present application.
It should be noted that, in the embodiment, the specific method for implementing the distributed shooting function among the functional modules is described by taking the Android system as an example. It can be understood that corresponding functional modules may also be set in other operating systems (e.g., the Hongmeng (HarmonyOS) system) to implement the method. As long as the functions implemented by the respective devices and functional modules are similar to the embodiments of the present application, they are within the scope of the claims of the present application and their equivalents.
As shown in fig. 20, an embodiment of the present application discloses an electronic device, which may be the above-mentioned master device (e.g., a mobile phone). The electronic device may specifically include: a touch screen 2001, the touch screen 2001 including a touch sensor 2006 and a display screen 2007; one or more processors 2002; a memory 2003; a communication module 2008; one or more cameras 2009; one or more application programs (not shown); and one or more computer programs 2004, where the above components may be connected via one or more communication buses 2005. The one or more computer programs 2004 are stored in the memory 2003 and configured to be executed by the one or more processors 2002, and the one or more computer programs 2004 include instructions that may be used to perform the steps performed by the master device (e.g., the mobile phone 1) in the foregoing embodiments.
As shown in fig. 21, an embodiment of the present application discloses an electronic device, which may be the above-mentioned slave device (e.g., a smart speaker). The electronic device may specifically include: a touch screen 2101, the touch screen 2101 including a touch sensor 2106 and a display screen 2107; one or more processors 2102; a memory 2103; a communication module 2108; one or more cameras 2109; one or more application programs (not shown); and one or more computer programs 2104, where the above components may be connected via one or more communication buses 2105. The one or more computer programs 2104 are stored in the memory 2103 and configured to be executed by the one or more processors 2102, and the one or more computer programs 2104 include instructions that may be used to perform the steps performed by a slave device (e.g., the television 1 or the mobile phone 2) in the foregoing embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example for illustration. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated herein.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solutions of the embodiments of the present application that essentially contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, and the like.
The above descriptions are merely specific implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the embodiments of the present application shall be covered by the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. A shooting method, characterized by comprising:
a first device displays a first shooting picture in a display interface, wherein the first shooting picture is a shooting picture collected by the first device;
after the first device establishes network connections with N electronic devices, the first device instructs the N electronic devices to start collecting shooting pictures, wherein N is an integer greater than 1;
the first device acquires a second shooting picture collected by a second device, wherein the second device is one of the N electronic devices;
if the second shooting picture contains a preset sounding identifier, the first device displays the second shooting picture in the display interface, wherein the sounding identifier is used for indicating that a user is sounding.
2. The method according to claim 1, wherein a preset button is provided in the display interface, and the preset button is used for synchronously displaying the shooting pictures of a plurality of devices;
before the first device acquires the second shooting picture collected by the second device, the method further comprises:
in response to an operation of the user selecting the preset button, the first device creates a first window and a second window in the display interface, wherein the first window is used for displaying the first shooting picture collected by the first device, and the second window is used for displaying shooting pictures collected by other devices;
wherein the first device displaying the second shooting picture in the display interface comprises:
the first device displays the second shooting picture in the second window, and displays the first shooting picture in the first window.
3. The method according to claim 2, wherein, in response to the operation of the user selecting the preset button, the method further comprises:
the first device establishes the network connections with the N electronic devices;
if the first device establishes a network connection with a third device first, before the first device acquires the second shooting picture collected by the second device, the method further comprises:
the first device acquires a third shooting picture collected by the third device, wherein the third device is one of the N electronic devices other than the second device;
the first device displays the third shooting picture in the second window, and displays the first shooting picture in the first window.
4. The method according to claim 3, wherein, if the second shooting picture contains the preset sounding identifier, the first device displaying the second shooting picture in the display interface comprises:
if the second shooting picture contains the preset sounding identifier, the first device determines whether the time for which the third shooting picture has been displayed in the second window exceeds a preset time;
if the preset time is exceeded, the first device displays the second shooting picture in the second window.
5. The method according to claim 3, further comprising, after the first device displays the second shooting picture in the second window:
the first device acquires a third shooting picture collected by the third device;
if the third shooting picture does not contain the sounding identifier, the first device continues to display the second shooting picture in the second window and does not display the third shooting picture.
6. The method according to claim 5, wherein, after the first device acquires the third shooting picture collected by the third device, the method further comprises:
if the third shooting picture contains the sounding identifier, the first device displays the third shooting picture in the second window; or,
if the third shooting picture contains the sounding identifier, the first device creates a third window in the display interface and displays the third shooting picture in the third window.
7. The method according to any one of claims 2-6, wherein the second window contains prompt information for prompting a user corresponding to the second window to speak.
8. The method according to claim 1, wherein, if the second shooting picture contains the preset sounding identifier, the first device displaying the second shooting picture in the display interface comprises:
if the second shooting picture contains the preset sounding identifier, the first device switches the first shooting picture to the second shooting picture in the display interface.
9. The method according to any one of claims 1-8, further comprising, after the first device establishes the network connections with the N electronic devices:
the first device detects whether a user is inputting an audio signal to the first device;
if a user is inputting an audio signal to the first device, the first device adds the sounding identifier to the collected first shooting picture;
the first device sends the first shooting picture carrying the sounding identifier to the N electronic devices, so that the N electronic devices display the first shooting picture according to the sounding identifier.
10. The method according to claim 9, wherein the first device detecting whether a user is inputting an audio signal to the first device comprises:
the first device uses a microphone to detect an audio signal while using a camera to collect the first shooting picture;
when the first device detects an audio signal, the first device acquires M first shooting pictures corresponding to the detected audio signal, wherein M is an integer greater than 1;
if mouth-shape features in the M first shooting pictures match the detected audio signal, the first device determines that a user is inputting an audio signal to the first device.
11. The method according to claim 10, wherein, when the first device detects an audio signal, acquiring the M first shooting pictures corresponding to the detected audio signal comprises:
when the first device detects an audio signal whose loudness is greater than a preset value, the first device acquires the M first shooting pictures corresponding to the detected audio signal.
12. The method according to any one of claims 9-11, further comprising, after the first device detects whether a user is inputting an audio signal to the first device:
if a user is inputting an audio signal to the first device, the first device stops displaying the second shooting picture.
13. A shooting method, characterized by comprising:
a second device establishes a network connection with a first device;
the second device uses a camera to collect a shooting picture and uses a microphone to detect an audio signal;
when the second device detects that a user is inputting an audio signal to the second device, the second device adds a preset sounding identifier to the collected shooting picture, wherein the sounding identifier is used for indicating that the user is sounding;
the second device sends the shooting picture carrying the sounding identifier to the first device.
14. The method according to claim 13, further comprising, after the second device collects the shooting picture using the camera and detects the audio signal using the microphone:
when the second device detects an audio signal, the second device acquires M shooting pictures corresponding to the detected audio signal, wherein M is an integer greater than 1;
if mouth-shape features in the M shooting pictures match the detected audio signal, the second device determines that a user is inputting an audio signal to the second device.
15. The method according to claim 14, wherein, when the second device detects an audio signal, acquiring the M shooting pictures corresponding to the detected audio signal comprises:
when the second device detects an audio signal whose loudness is greater than a preset value, the second device acquires the M shooting pictures corresponding to the detected audio signal.
16. The method according to any one of claims 13-15, further comprising, after the second device establishes the network connection with the first device:
the second device acquires a shooting picture of the first device;
if the shooting picture of the first device contains the sounding identifier, the second device displays the shooting picture of the first device in a display interface.
17. The method according to claim 16, wherein the display interface comprises a first window and a second window, and the second device displaying the shooting picture of the first device in the display interface comprises:
the second device displays the shooting picture of the first device in the first window, and displays the shooting picture of the second device in the second window.
18. An electronic device, wherein the electronic device is a first device, the first device comprising:
a display screen;
one or more cameras;
one or more processors;
a memory;
a communication module;
wherein the memory stores one or more computer programs, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the shooting method performed by the first device according to any one of claims 1-12.
19. An electronic device, wherein the electronic device is a second device, the second device comprising:
a display screen;
one or more cameras;
one or more processors;
a memory;
a communication module;
wherein the memory stores one or more computer programs, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the shooting method performed by the second device according to any one of claims 13-17.
20. A distributed shooting system, characterized in that the system comprises the electronic device according to claim 18 and the electronic device according to claim 19.
21. A computer-readable storage medium having instructions stored therein which, when run on an electronic device, cause the electronic device to perform the shooting method according to any one of claims 1-12 or 13-17.
22. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the shooting method according to any one of claims 1-12 or 13-17.
CN202011630348.0A 2020-12-30 2020-12-30 Shooting method, system and electronic equipment Pending CN114697732A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011630348.0A CN114697732A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment
CN202210973885.8A CN115550597A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment
PCT/CN2021/143005 WO2022143883A1 (en) 2020-12-30 2021-12-30 Photographing method and system, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011630348.0A CN114697732A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210973885.8A Division CN115550597A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment

Publications (1)

Publication Number Publication Date
CN114697732A true CN114697732A (en) 2022-07-01

Family

ID=82134853

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210973885.8A Pending CN115550597A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment
CN202011630348.0A Pending CN114697732A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210973885.8A Pending CN115550597A (en) 2020-12-30 2020-12-30 Shooting method, system and electronic equipment

Country Status (2)

Country Link
CN (2) CN115550597A (en)
WO (1) WO2022143883A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703692A (en) * 2022-12-30 2023-09-05 荣耀终端有限公司 Shooting performance optimization method and device
CN117880626A (en) * 2024-03-11 2024-04-12 珠海创能科世摩电气科技有限公司 Video monitoring method, device and system for power transmission line and storage medium
CN117880626B (en) * 2024-03-11 2024-05-24 珠海创能科世摩电气科技有限公司 Video monitoring method, device and system for power transmission line and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116366957B (en) * 2022-07-21 2023-11-14 荣耀终端有限公司 Virtualized camera enabling method, electronic equipment and cooperative work system
CN116033259A (en) * 2022-12-20 2023-04-28 浙江力石科技股份有限公司 Method, device, computer equipment and storage medium for generating short video

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080043144A1 (en) * 2006-08-21 2008-02-21 International Business Machines Corporation Multimodal identification and tracking of speakers in video
CN103795959A (en) * 2012-10-31 2014-05-14 三星Sds株式会社 Apparatus for multi-party video call, server for controlling multi-party video call, and method of displaying multi-party image
CN104010158A (en) * 2014-03-11 2014-08-27 宇龙计算机通信科技(深圳)有限公司 Mobile terminal and implementation method of multi-party video call
CN104301564A (en) * 2014-09-30 2015-01-21 成都英博联宇科技有限公司 Intelligent conference telephone with mouth shape identification
CN106791238A (en) * 2016-11-28 2017-05-31 努比亚技术有限公司 The call control method and device of MPTY conference system
CN107430858A (en) * 2015-03-20 2017-12-01 微软技术许可有限责任公司 The metadata of transmission mark current speaker
CN107682752A (en) * 2017-10-12 2018-02-09 广州视源电子科技股份有限公司 Method, apparatus, system, terminal device and the storage medium that video pictures are shown

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581608B (en) * 2012-07-20 2019-02-01 Polycom 通讯技术(北京)有限公司 Spokesman's detection system, spokesman's detection method and audio/video conferencingasystem figureu
CN105592286B (en) * 2014-10-22 2019-03-01 阿里巴巴集团控股有限公司 Instant messaging interface information processing method and processing device
EP3101838A1 (en) * 2015-06-03 2016-12-07 Thomson Licensing Method and apparatus for isolating an active participant in a group of participants

Also Published As

Publication number Publication date
CN115550597A (en) 2022-12-30
WO2022143883A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN110381197B (en) Method, device and system for processing audio data in many-to-one screen projection
CN110231905B (en) Screen capturing method and electronic equipment
CN112714214B (en) Content connection method, equipment, system, GUI and computer readable storage medium
CN112394895B (en) Picture cross-device display method and device and electronic device
CN110958475A (en) Cross-device content projection method and electronic device
CN112398855B (en) Method and device for transferring application contents across devices and electronic device
CN114040242B (en) Screen projection method, electronic equipment and storage medium
CN114697732A (en) Shooting method, system and electronic equipment
CN114697527B (en) Shooting method, system and electronic equipment
CN112995727A (en) Multi-screen coordination method and system and electronic equipment
CN113194242A (en) Shooting method in long-focus scene and mobile terminal
US20230308534A1 (en) Function Switching Entry Determining Method and Electronic Device
CN113691842A (en) Cross-device content projection method and electronic device
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
EP4254927A1 (en) Photographing method and electronic device
WO2022042769A2 (en) Multi-screen interaction system and method, apparatus, and medium
CN113391743B (en) Display method and electronic equipment
CN114845035B (en) Distributed shooting method, electronic equipment and medium
CN114567619B (en) Equipment recommendation method and electronic equipment
CN114244955B (en) Service sharing method and system, electronic device and computer readable storage medium
CN115883893A (en) Cross-device flow control method and device for large-screen service
CN114827098A (en) Method and device for close shooting, electronic equipment and readable storage medium
US20230275986A1 (en) Accessory theme adaptation method, apparatus, and system
CN116541589A (en) Broadcast record display method and related equipment
CN117014546A (en) Video recording method, audio recording method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination