WO2021121236A1 - Control method, electronic device, computer-readable storage medium, and chip - Google Patents

Control method, electronic device, computer-readable storage medium, and chip

Info

Publication number
WO2021121236A1
WO2021121236A1 PCT/CN2020/136645 CN2020136645W WO2021121236A1 WO 2021121236 A1 WO2021121236 A1 WO 2021121236A1 CN 2020136645 W CN2020136645 W CN 2020136645W WO 2021121236 A1 WO2021121236 A1 WO 2021121236A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
camera
shooting
cameras
photo
Application number
PCT/CN2020/136645
Other languages
English (en)
French (fr)
Inventor
蒋东生
杜明亮
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority to US 17/757,673 (granted as US 11991441 B2)
Priority to EP 20902272.2A (published as EP 4064683 A4)
Publication of WO2021121236A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • This application relates to the field of Internet of Things, and in particular to a control method, electronic equipment, computer-readable storage medium, and chip.
  • At present, the various terminal devices we use, such as mobile phones, tablets, surveillance cameras, TVs, car cameras, glasses, etc., can only be controlled individually based on manual selection by users, or remotely controlled by mobile phones connected to cameras or smart home products.
  • the present application provides a control method, an electronic device, a computer-readable storage medium, and a chip to perform collaborative photography through a camera of a second electronic device to improve the quality of photography.
  • an embodiment of the present invention provides a control method, including:
  • the first electronic device obtains a shooting instruction
  • according to the shooting instruction, the location information of the content to be shot corresponding to the shooting instruction is determined; based on the location information of the content to be shot, the camera most suitable for shooting the content to be shot is determined, from the at least two cameras that the first electronic device can control, as the target camera; and/or,
  • the shooting mode is determined based on the content to be shot, and from the at least two cameras that can be controlled, the camera containing the shooting mode is determined as the target camera; the at least two cameras include: a camera on the first electronic device and a camera on a second electronic device, where the first electronic device is different from the second electronic device;
  • the first electronic device controls the target camera to execute the shooting instruction to obtain image data collected by the target camera.
  • determining the camera with the most suitable position for photographing the content to be photographed from the at least two cameras that can be controlled as the target camera includes:
  • based on the location information of the content to be shot, the first electronic device determines, from the at least two cameras that it can control, the camera whose shooting range covers the location; if only one camera is determined, that camera is determined to be the target camera;
  • the shooting mode is determined based on the content to be shot; the camera containing the shooting mode is determined from at least two cameras that it can control, and if there is only one camera determined, the camera is determined to be the target camera.
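  • The target-camera selection described above can be illustrated with a minimal sketch: filter the controllable cameras by the location of the content to be shot and/or by the required shooting mode, and if exactly one candidate remains it becomes the target camera. The Python below is only an illustration under assumed data structures (the Camera class, location strings, and mode names are not part of the claimed method).

```python
from dataclasses import dataclass

@dataclass
class Camera:
    cam_id: str
    owner: str            # e.g. "mobile phone 34", "car 35"
    location: str         # area the camera's shooting range covers
    modes: tuple          # shooting modes the camera supports

def select_target_cameras(cameras, content_location=None, shooting_mode=None):
    """Filter controllable cameras by content location and/or shooting mode."""
    candidates = list(cameras)
    if content_location is not None:
        candidates = [c for c in candidates if c.location == content_location]
    if shooting_mode is not None:
        candidates = [c for c in candidates if shooting_mode in c.modes]
    return candidates   # if exactly one remains, it is the target camera

cams = [
    Camera("34a", "mobile phone 34", "living room", ("portrait", "landscape")),
    Camera("31a", "smart TV 31", "living room", ("portrait",)),
    Camera("35a", "car 35", "in car", ("normal",)),
]
print([c.cam_id for c in select_target_cameras(cams, content_location="in car")])  # ['35a']
```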
  • the first electronic device controls the target camera to execute the shooting instruction to obtain image data collected by the target camera includes:
  • the first electronic device sends a shooting request to the electronic device where the target camera is located, and receives image data sent by the electronic device where the target camera is located; or,
  • the first electronic device calls the target camera as a virtual camera of the first electronic device, and acquires image data collected by the virtual camera.
  • the first preset rule includes:
  • the method further includes: the first electronic device uses its distributed device virtualization module (MSDP) to virtualize the camera of the second electronic device as the virtual camera; the first electronic device calling the target camera as the virtual camera of the first electronic device to obtain the image data collected by the virtual camera includes:
  • the CaaS service queries MSDP whether there is a virtual camera, and when there is a virtual camera, the image data collected by the virtual camera is obtained through the camera interface.
  • an electronic device including:
  • one or more processors; and one or more memories in which one or more computer programs are stored;
  • the one or more computer programs include instructions, and when the instructions are executed by the first electronic device, the first electronic device is caused to execute the method described in any embodiment of the present invention.
  • an embodiment of the present invention provides an electronic device, including:
  • the first obtaining module is used to obtain shooting instructions
  • the first determining module is configured to determine, according to the shooting instruction, the location information of the content to be shot corresponding to the shooting instruction, and to determine, based on that location information and from the at least two cameras that the first electronic device can control, the camera whose position is most suitable for shooting the content to be shot as the target camera; and/or,
  • to determine the shooting mode based on the content to be shot, and to determine, from the at least two cameras that can be controlled, the camera containing the shooting mode as the target camera; the at least two cameras include: a camera on the first electronic device and a camera on a second electronic device, where the first electronic device is different from the second electronic device;
  • the control module is configured to control the target camera to execute the shooting instruction to obtain image data collected by the target camera.
  • an embodiment of the present invention provides a control method, including:
  • the first electronic device obtains a shooting instruction
  • in response to the shooting instruction, the first electronic device controls at least two cameras that it can control to execute the shooting instruction and obtain the collected image data, thereby obtaining at least two photos; the at least two cameras comprise a camera on the first electronic device and a camera on a second electronic device, where the first electronic device is different from the second electronic device;
  • the score value of the photo can be calculated by the following formula: E = a·x + b·y + c·z, where:
  • E represents the rating value of the photo;
  • the distance parameter x takes the best shooting distance of 50 cm (in physical distance) as its maximum value, and decreases gradually as the subject becomes farther or nearer;
  • a represents the weight value of the distance parameter x, and its value range is [0, 1];
  • the angle parameter y takes facing the camera directly as its maximum value, and decreases gradually with deflection about the three axes;
  • b represents the weight value of the angle parameter y, and its value range is [0, 1];
  • the aesthetic composition parameter z takes the maximum score given by the aesthetic composition scoring model as its maximum value, and decreases gradually;
  • c represents the weight value of the aesthetic composition parameter z, and its value range is [0, 1].
  • the score value of the photo can be calculated by the following formula: E = a·x + c·z, where:
  • E represents the rating value of the photo;
  • the distance parameter x takes the best shooting distance of 50 cm (in physical distance) as its maximum value, and decreases gradually as the subject becomes farther or nearer;
  • a represents the weight value of the distance parameter x, and its value range is [0, 1];
  • the aesthetic composition parameter z takes the maximum score given by the aesthetic composition scoring model as its maximum value, and decreases gradually;
  • c represents the weight value of the aesthetic composition parameter z, and its value range is [0, 1].
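  • As a minimal illustration of the weighted-sum scoring above (using the three-parameter form), the sketch below computes E from the normalized parameters; the function name, the normalization to [0, 1], and the example weight values are assumptions for illustration only.

```python
def photo_score(x, y, z, a=0.4, b=0.3, c=0.3):
    """Rating value E = a*x + b*y + c*z.

    x: distance parameter (1.0 at the ~50 cm optimum, decreasing farther/nearer)
    y: angle parameter (1.0 when the subject faces the camera, decreasing with deflection)
    z: aesthetic composition parameter (normalized output of the scoring model)
    a, b, c: weight values, each within [0, 1]
    """
    for w in (a, b, c):
        if not 0.0 <= w <= 1.0:
            raise ValueError("weights must lie in [0, 1]")
    return a * x + b * y + c * z

# Example: close subject, slightly off-axis, well composed.
print(photo_score(x=0.9, y=0.7, z=0.8))  # 0.81
```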
  • an embodiment of the present invention provides an electronic device, including: one or more processors; and one or more memories in which one or more computer programs are stored;
  • the one or more computer programs include instructions, and when the instructions are executed by the first electronic device, the first electronic device is caused to execute the method described in any embodiment of the present invention.
  • an embodiment of the present invention provides an electronic device, including:
  • the response module is configured to respond to the shooting instruction, so that the first electronic device controls at least two cameras that it can control to execute the shooting instruction and obtain the collected image data, thereby obtaining at least two photos;
  • the at least two cameras include: a camera on the first electronic device and a camera on a second electronic device, and the first electronic device is different from the second electronic device;
  • the scoring module is used to calculate the scoring value of each photo according to at least one of the distance between the camera and the object being photographed, the angle of the face, and the aesthetic composition parameter of each photo;
  • the determining module is configured to use a photo that meets the preset score value as the shooting result of the shooting instruction; and/or, when the shooting instruction is a video capture instruction, to perform video capture through the camera corresponding to the photo that meets the preset score value.
  • an embodiment of the present invention provides a computer-readable storage medium, including instructions, which, when executed on an electronic device, cause the electronic device to execute the method described in any embodiment of the present invention.
  • an embodiment of the present invention provides a computer program product, where the computer program product includes software code, and the software code is used to execute the method described in any embodiment of the present invention.
  • an embodiment of the present invention provides a chip containing instructions, which when the chip runs on an electronic device, causes the electronic device to execute the method according to any embodiment of the present invention.
  • In the embodiments of the present invention, the first electronic device can determine, from the at least two cameras that it can control and based on the position information of the content to be shot, the camera most suitable for shooting the content to be shot as the target camera; or the shooting mode is determined based on the content to be shot, and the camera containing the shooting mode is determined from the at least two cameras as the target camera, so as to control the target camera to execute the shooting instruction. Because the target camera may be a camera on any of at least two electronic devices, different cameras can be selected based on the location information and shooting mode of the content to be shot, not limited to the camera of the first electronic device itself, thereby achieving the technical effect of improving the quality of the collected image data.
  • FIG. 1 is a structural diagram of an electronic device in an embodiment of the present invention.
  • FIG. 2 is a software framework diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 3 is a framework diagram of a smart home system provided by an embodiment of the present invention.
  • FIG. 4 is a flowchart of a control method provided by an embodiment of the present invention.
  • FIG. 5 is a software framework diagram of virtualizing the cameras of other electronic devices as cameras of the electronic device 100 in an embodiment of the present invention.
  • FIG. 6 is an interface interaction diagram for processing photos in an embodiment of the present invention.
  • FIG. 7 is a flowchart of a control method provided by an embodiment of the present invention.
  • FIG. 8 is a flowchart of a control method provided by another embodiment of the present invention.
  • FIG. 9 is a flowchart of a control method provided by another embodiment of the present invention.
  • FIG. 10 is a schematic diagram of photos collected by different cameras in an embodiment of the present invention.
  • FIG. 11 is a flowchart of a control method provided by another embodiment of the present invention.
  • The terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
  • the electronic equipment is equipped with cameras, microphones, global positioning system (GPS) chips, various sensors (such as magnetic field sensors, gravity sensors, gyroscope sensors, etc.), and other devices, so as to sense the external environment, user actions, and so on.
  • according to the perceived external environment and the user's actions, the electronic device provides the user with a personalized and contextual business experience.
  • the camera can obtain rich and accurate information so that the electronic device can perceive the external environment and the user's actions.
  • the embodiments of the present application provide an electronic device, which can be implemented as any of the following devices including a camera: a mobile phone, a tablet computer (pad), a portable game console, a personal digital assistant (PDA), a notebook computer, a super Mobile personal computers (ultra mobile personal computers, UMPC), handheld computers, netbooks, vehicle-mounted media playback devices, wearable electronic devices, virtual reality (VR) terminal devices, augmented reality (AR) terminal devices, etc.
  • FIG. 1 shows a schematic diagram of the structure of an electronic device 100.
  • the electronic device 100 shown in FIG. 1 is only an example, and the electronic device 100 may have more or fewer components than those shown in FIG. 1, two or more components may be combined, or Can have different component configurations.
  • the various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2,
  • a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 2 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor; the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • For a detailed introduction of the software functions, please refer to the previous patent application CN201910430270.9.
  • the embodiment of the present invention provides a control method, which can be used for image capture or video capture.
  • the method is applied to an electronic device 100, and the electronic device 100 is used to invoke the camera function of other electronic devices associated with it, so as to use the camera function of those electronic devices to realize image collection.
  • Other electronic devices include, for example: electronic devices bound to the same account as the electronic device 100; electronic devices in the same local area network as the electronic device 100; electronic devices belonging to the same user as the electronic device 100; electronic devices belonging to the same family as the electronic device 100 (for example, if the user of the electronic device 100 is the mother of the family, other electronic devices include the electronic devices of the mother, the father, and the children); and other electronic devices that can be controlled by the electronic device 100, which are not listed in detail or restricted in this embodiment of the present invention. For example, if an electronic device is willing to share functions (for example, a camera), it can be registered in the cloud, and the cloud records the electronic devices with shared functions (for example: a shared camera, microphone, display screen, etc.); then, when the electronic device 100 needs to take a picture, it sends the current location and shooting requirements to the cloud, and the cloud selects a suitable electronic device for shooting and provides it to the electronic device 100.
  • the electronic device 100 may be an electronic device included in a smart home scene. Please refer to FIG. 3.
  • the smart home scene includes the following devices: a desktop computer 30 (including a camera, with camera ID 30a), a smart TV 31 (including a camera, with camera ID 31a), a PAD 32 (including a camera, with camera ID 32a), a smart watch 33 (including a camera, with camera ID 33a), a mobile phone 34 (including a front camera and a rear camera, with front camera ID 34a and rear camera ID 34b), and a car 35 (including five cameras, with camera IDs 35a, 35b, 35c, 35d, and 35e).
  • the electronic device 100 is, for example, a mobile phone 34.
  • the smart home scene may also include other devices.
  • the electronic device 100 may also be other devices in the smart home scene, and is usually an electronic device with relatively powerful computing capabilities in the smart home scene.
  • the electronic device 100 can be regarded as the master device, and the electronic device whose capabilities (for example: camera, microphone, display, etc.) are registered to the master device (electronic device 100) can be regarded as the controlled device.
  • the control method provided by the embodiments of the present invention can also be applied to the cloud. When an electronic device needs to take a photo or video, for example, after the user generates a shooting instruction, the instruction is sent to the cloud, and the cloud selects the electronic device most suitable for shooting based on the capabilities of each electronic device.
  • the method includes the following steps:
  • S400 The electronic device 100 receives a shooting instruction.
  • the shooting instruction is, for example, an instruction to take a photo
  • the shooting instruction may be a voice instruction, an instruction to trigger a camera button of a camera application, a preset gesture, and so on.
  • the user of the electronic device 100 can send a voice command to the electronic device 100; the voice command is, for example, "take a picture of me", "take a picture in the car", etc. The user can issue the voice command while the electronic device 100 is screen-locked, or can issue the voice command after the electronic device 100 is unlocked, and the electronic device 100 can respond to the voice command; or, the user can open the camera application of the electronic device 100 and use the camera application to issue a shooting instruction; or, the user can open an instant messaging application and click the camera button of the instant messaging application to trigger the shooting instruction, or click the video communication button of the instant messaging application to trigger the shooting instruction, and so on.
  • the electronic device 100 is, for example, the mobile phone 34 shown in FIG. 3.
  • the user taking a photo through the shooting button of the camera application is taken as an example for introduction.
  • the shooting instruction is issued by the user of the electronic device 100.
  • the electronic device 100 directly collects the shooting instruction. For another example: user A picks up the mobile phone 34, opens the vehicle monitoring application, and triggers the remote camera function, and the vehicle monitoring application of the electronic device 100 generates the shooting instruction based on the user operation, and so on. In another optional embodiment, the electronic device 100 receives a shooting instruction sent by another electronic device; for example, the user sends the voice command "take me a picture of the kitchen" to the car navigator, and after receiving the shooting instruction, the car navigator sends it to the electronic device 100, so that the shooting instruction is run based on the powerful computing capability of the electronic device 100.
  • after detecting the shooting instruction, the car navigator first determines whether it has the ability to process the shooting instruction; if it has the ability to process the shooting instruction, it processes the shooting instruction itself; otherwise, it sends the shooting instruction to the electronic device 100.
  • when the vehicle navigator detects the shooting instruction, it may determine whether it can respond to the shooting instruction and obtain the photo or video corresponding to the shooting instruction. If it can respond, it is considered to have the ability to process the shooting instruction; otherwise, it is considered not to have that ability.
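  • A minimal sketch of this "process locally if capable, otherwise forward to the master device" decision is shown below; the Device class and its methods are illustrative assumptions, not an actual device API.

```python
class Device:
    def __init__(self, name, has_camera):
        self.name = name
        self.has_camera = has_camera

    def can_process(self, instruction):
        # Simplified capability check: can this device produce the requested photo/video?
        return self.has_camera

    def execute(self, instruction):
        return f"{self.name} executes: {instruction}"

def handle_shooting_instruction(receiver, master, instruction):
    """Process the instruction locally when capable, otherwise forward it to the master device."""
    if receiver.can_process(instruction):
        return receiver.execute(instruction)
    return master.execute(instruction)

navigator = Device("car navigator", has_camera=False)
device_100 = Device("electronic device 100", has_camera=True)
print(handle_shooting_instruction(navigator, device_100, "take me a picture of the kitchen"))
```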
  • the electronic device 100 collects and obtains photos through at least two cameras of the electronic device 100 to obtain at least two photos.
  • the at least two cameras of the electronic device 100 include at least one of a physical camera and a virtual camera; there can be one or more physical cameras, and one or more virtual cameras.
  • the at least two cameras of the electronic device 100 include both physical cameras and virtual cameras, so that the camera itself and the cameras of other electronic devices can respond to the shooting instruction at the same time.
  • the physical camera of the electronic device 100 is a camera of the electronic device 100 itself, for example, the front camera or the rear camera of the electronic device 100;
  • the virtual camera of the electronic device 100 is a camera of another electronic device that has been virtualized as a camera of the electronic device 100. The electronic device 100 can trigger the operation of registering a virtual camera at various times; two of them are listed below for introduction, and of course, in the specific implementation process, it is not limited to the following two situations.
  • For example, the mobile phone 34 has two physical cameras (the front camera and the rear camera; the ID of the front camera is 34a, and the ID of the rear camera is 34b) and nine virtual cameras (camera IDs: 30a, 31a, 32a, 33a, 35a, 35b, 35c, 35d, 35e).
  • Before calling the function of a virtual camera, the electronic device 100 needs to register the camera of the other electronic device, whose collected data may be an image or a video, as a virtual camera. It can register the camera of another electronic device as a virtual camera at various times; two of them are enumerated below for introduction, and of course, in the specific implementation process, it is not limited to the following two opportunities.
  • After being powered on (or connecting to a router, or turning on the Bluetooth function), the electronic device 100 (as the master device) sends broadcast information through a short-range communication method (for example: Bluetooth, WIFI, etc.) to find other electronic devices within communication range, and those electronic devices send their device information to the electronic device 100.
  • The device information includes, for example, device capabilities (camera, microphone, display, etc.), device location, and device ID (identity document). If the electronic device 100 wants to register the camera of a corresponding electronic device as a virtual camera, it sends a request message to that electronic device.
  • If the corresponding electronic device agrees to use its camera as the virtual camera of the electronic device 100, that electronic device generates confirmation information (for example: by clicking a preset button, generating a preset voice instruction, making a preset gesture, etc.). After receiving the confirmation information, the electronic device 100 executes a virtualization operation of virtualizing the camera of the corresponding electronic device as a local camera. Similarly, the electronic device 100 can also virtualize other capabilities of other electronic devices (such as a microphone, a display, etc.) as native capabilities.
  • After other electronic devices are powered on (or connected to the router), they can also generate broadcast information to find the master control device (the electronic device 100). After the electronic device 100 is found, the electronic device 100 virtualizes their cameras as cameras of the electronic device 100. Similarly, the electronic device 100 may also virtualize other capabilities of other electronic devices as capabilities of the electronic device 100.
  • the electronic device 100 can search for other electronic devices registered with the same account as the electronic device 100, register the cameras of those electronic devices as cameras of the electronic device 100, and also register the other capabilities of those electronic devices as capabilities of the electronic device 100; after other electronic devices (controlled devices) register with the server through an account, they can also search for the master device registered with the same account and register their functions to that master control device.
  • the electronic device 100 can call the data collected by the cameras of other electronic devices; other capabilities of other electronic devices can be called by the electronic device 100 in a similar manner.
  • the electronic device 100 can call CaaS functions (for example: CaaS Kit).
  • CaaS: Communication as a Service.
  • the CaaS function includes many contents, such as call signaling, media transmission, CaaS service, etc.
  • the CaaS function is provided to the electronic device 100 to call through the CaaS service.
  • MSDP (Mobile Sensing Development Platform): registers the cameras of other electronic devices as virtual cameras of the hardware abstraction layer.
  • The application framework layer includes: the camera framework, used to provide camera functions to the outside; the camera interface, used to obtain the data collected by the camera; and MSDP, used to register virtual cameras, that is, to use the cameras of other electronic devices as virtual cameras of the hardware abstraction layer.
  • the hardware abstraction layer includes cameras (physical cameras and virtual cameras). Through the camera interface, both the physical cameras of the electronic device 100 (for example, the data of the front camera and of the rear camera) and the virtual cameras can be accessed.
  • the hardware abstraction layer is located between the system library and the kernel layer in the software system framework shown in Figure 2.
  • the electronic device 100 When the electronic device 100 needs to call the camera function of the CaaS, it first registers the CaaS service with the system.
  • the CaaS service queries the MSDP whether there is a virtual camera, and when there is a virtual camera, it obtains the virtual camera video data through the camera interface.
  • the virtual camera and the physical camera may have different tags, so the CaaS service can accurately obtain the video data of the virtual camera based on the tags of the virtual camera.
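  • The flow of querying for virtual cameras and reading their data through the camera interface can be sketched as below; the class and method names are assumptions for illustration and are not the real CaaS/MSDP APIs.

```python
class CameraInterface:
    """Common access point for physical and virtual cameras (illustrative)."""
    def __init__(self):
        self._cameras = {}  # camera_id -> {"virtual": bool, "source": callable}

    def register(self, camera_id, virtual, source):
        self._cameras[camera_id] = {"virtual": virtual, "source": source}

    def virtual_camera_ids(self):
        return [cid for cid, cam in self._cameras.items() if cam["virtual"]]

    def read_frame(self, camera_id):
        return self._cameras[camera_id]["source"]()

class CaaSService:
    def __init__(self, camera_interface):
        self.camera_interface = camera_interface

    def get_virtual_camera_frames(self):
        """Query whether virtual cameras exist; if so, pull their image data."""
        ids = self.camera_interface.virtual_camera_ids()
        return {cid: self.camera_interface.read_frame(cid) for cid in ids}

hal = CameraInterface()
hal.register("34a", virtual=False, source=lambda: "frame from local front camera")
hal.register("31a", virtual=True, source=lambda: "frame from smart TV camera")
print(CaaSService(hal).get_virtual_camera_frames())  # only the virtual camera's data
```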
  • other capabilities can be provided for the electronic device in a similar manner, and the embodiment of the present invention does not limit it.
  • each camera has a camera ID for the electronic device 100 to identify the identity of the camera.
  • the electronic device 100 can control all its cameras to shoot, or control some of its cameras to shoot.
  • the electronic device can select the target camera from all the cameras it contains in a variety of ways. Two of them are listed below for introduction. Of course, in the specific implementation process, it is not limited to the following two situations.
  • the electronic device 100 may first determine the location information of the content to be captured corresponding to the shooting instruction, and then determine the camera for shooting based on the location information. For example, the user is concerned that a window of the car is not closed or that a thief has gotten into the car, and therefore wants to check the situation in the car, so the shooting instruction "Look at the situation in the car" is generated.
  • After obtaining the shooting instruction, the electronic device 100 first determines through semantic analysis that the content to be captured is located in the car. In this case, the electronic device 100 determines, among all of its cameras (physical cameras and virtual cameras), the cameras located in the car, that is, cameras 35a, 35b, 35c, 35d, and 35e, and then controls these cameras to perform image collection.
  • the shooting instruction is "Hi, little E, take a photo for me"
  • the electronic device 100 can determine the cameras of the electronic devices within the preset distance range as the cameras currently used for collection.
  • the electronic device 100 may first determine its own area (for example, located in the living room) through a positioning device (or through analysis of collected environmental images), and then determine the camera of the electronic device in the area as the camera for collection.
  • the electronic devices currently located in the living room include: PAD32 and smart TV 31 (including the electronic device 100 itself)
  • the electronic device 100 can determine that the cameras 32a, 31a, 34a, and 34b are cameras for collection.
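  • The area/distance-based selection of collection cameras can be sketched as follows; the device positions, distance threshold, and data layout are illustrative assumptions.

```python
import math

def cameras_within_range(own_position, devices, max_distance_m=10.0):
    """Pick the cameras of devices within a preset distance of the electronic device 100."""
    selected = []
    for name, position, camera_ids in devices:
        if math.dist(own_position, position) <= max_distance_m:
            selected.extend(camera_ids)
    return selected

devices = [
    ("PAD 32", (2.0, 1.0), ["32a"]),
    ("smart TV 31", (4.0, 3.0), ["31a"]),
    ("desktop 30", (25.0, 8.0), ["30a"]),  # in another room, out of range
]
print(cameras_within_range((0.0, 0.0), devices))  # ['32a', '31a']
```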
  • the electronic device can match the voice information with preset user voiceprint information to verify the identity of the user who sent the voice information.
  • After the pickup of the electronic device 100 collects the user's voice information, the voice information is transmitted through the main board to the CPU or NPU (neural-network processing unit) for voice recognition and converted into a voice instruction that can be recognized by the electronic device 100.
  • the electronic device 100 may first obtain the location information of other electronic devices through the positioning device, and then control the cameras of the electronic devices within a preset distance range from it as collection cameras.
  • the distance is, for example, 10 meters, 20 meters, etc., which is not limited in the embodiment of the present invention.
  • the electronic device 100 can determine the content to be photographed in the photographing instruction; determine the photographing mode based on the content to be photographed; and determine the camera containing the photographing mode as the acquisition camera based on the photographing mode.
  • the electronic device 100 may first recognize the voice instruction, and then perform semantic analysis based on the recognized content so as to determine the content to be shot, such as a person, a landscape, a still life, and so on. For example: if the shooting instruction is "take me a photo", the content to be shot includes people; if the shooting instruction is "take a picture of the bedroom", the content to be shot includes still life; if the shooting instruction is "take a picture of the scenery in front of you", the content to be shot is scenery; and so on.
  • If the content to be shot is a person, the shooting mode of the camera is, for example, the portrait mode or the large aperture mode; if the content to be shot is a landscape, the shooting mode of the camera is, for example, the landscape mode.
  • If the determined shooting mode is the portrait mode, the electronic device 100 may first determine the cameras with the portrait mode, and then control these cameras to collect images; if the determined shooting mode is the landscape mode, the electronic device 100 may first determine the cameras with the landscape mode, and then control these cameras to perform image capture, and so on.
  • the shooting instruction may carry a shooting mode.
  • For example, the shooting mode may be the portrait mode, and the electronic device that receives the shooting instruction uses the portrait mode to capture photos and sends them to the electronic device 100; or the shooting mode may be the landscape mode, and the electronic device that receives the shooting instruction uses the landscape mode to capture photos and sends them to the electronic device 100.
  • If the indicated shooting mode is not available, the shooting mode closest to it is used to take pictures, or the shooting mode preferred by the user of the electronic device 100 is used to take pictures (for example: the default camera mode, the photo mode used most often in history, etc.).
  • When the electronic device 100 controls the camera to shoot, it can also inform the camera of shooting parameters, such as the size of the photo, the exposure, the shooting mode, and so on.
  • S420 The electronic device 100 obtains the score of each of the aforementioned at least two photos.
  • the electronic device 100 can send the at least two photos to the server, and after the server scores the at least two photos, the scores are returned to the electronic device 100; the electronic device 100 can also score the photos locally. How to obtain the score of each photo will be introduced later.
  • the electronic device 100 determines the photo finally provided to the user based on the scores of the at least two photos.
  • the electronic device 100 can directly provide the user with the photo with the highest score (or a photo whose score ranks within a preset top position, or whose score is greater than a preset value); for example, the photo with the score of 8.3 in Table 1 is directly provided to the user. The electronic device 100 can also determine the camera that took the photo with the highest score (or whose score ranks within a preset top position, or is greater than a preset value) as the capture camera, and re-capture a photo to provide to the user.
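  • Selecting the photo(s) to present, either the highest-ranked ones or those above a preset score, can be sketched as below; the tuple layout and threshold values are illustrative assumptions.

```python
def pick_photos(scored_photos, top_k=1, min_score=None):
    """Return the photos that satisfy the preset score condition.

    scored_photos: list of (camera_id, photo, score) tuples.
    top_k: keep the k highest-scoring photos when no threshold is given.
    min_score: keep every photo whose score is greater than this preset value.
    """
    ranked = sorted(scored_photos, key=lambda item: item[2], reverse=True)
    if min_score is not None:
        return [item for item in ranked if item[2] > min_score]
    return ranked[:top_k]

scores = [("34a", "photo_a.jpg", 6.1), ("31a", "photo_b.jpg", 8.3), ("32a", "photo_c.jpg", 7.0)]
print(pick_photos(scores))                  # [('31a', 'photo_b.jpg', 8.3)]
print(pick_photos(scores, min_score=6.5))   # photos scoring above the preset value
```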
  • the electronic device 100 can send photos to a server for processing, can be processed locally, or can be sent to other electronic devices for processing.
  • For example, the electronic device 100 wants to perform beautification processing on the photos through the Meitu software, but the Meitu software is not installed on the electronic device 100; if the electronic device 100 finds that the Meitu software is installed on the PAD 32, the electronic device 100 can process the photos through the PAD 32 and provide the processed photos to the user.
  • the electronic device 100 may prompt the user to borrow the Meitu software of another electronic device for processing. For example, as shown in FIG. 6, the user of the electronic device 100 clicks the edit button 60 (of course, the editing operation can also be triggered in other ways); after the electronic device 100 responds to the operation, it displays a selection menu 61.
  • the selection menu 61 displays multiple editing modes for the user to select the editing mode.
  • the editing mode can be an editing method provided by the electronic device 100 itself, for example, "local editing" 61a shown in FIG. 6; it can also be an editing method provided by another electronic device, for example, "Meitu software 1 is located in PAD" 61b shown in FIG. 6, which means that the photos can be processed by the Meitu software 1 installed on the PAD, or "Meitu software 2 is located on the desktop" 61c shown in FIG. 6, which means that the photos can be processed by the Meitu software 2 installed on the desktop computer.
  • When the electronic device 100 detects that the user selects the editing mode of another electronic device for photo processing, it can control the corresponding electronic device to start the corresponding application. For example, if the user selects "Meitu software 1 is located in PAD" 61b, the electronic device 100 controls the PAD to open Meitu software 1 and displays the processing interface of Meitu software 1 on the electronic device 100.
  • In this way, the electronic device 100 can process its photos through the image processing application of the PAD; or, when the electronic device 100 detects that the user selects the editing mode of another electronic device for photo processing, the electronic device 100 sends the photos to the corresponding electronic device, and controls the corresponding electronic device to open the application and open the photos in the application. For example, the electronic device 100 controls the PAD 32 to open Meitu software 1 and open the photos in Meitu software 1; the user then completes the processing of the photos on the PAD 32, and the processed photos are sent back to the electronic device 100.
  • all the photos can also be displayed on the display interface of the electronic device 100, and the user can select the favorite photo.
  • the at least two photos may also be stitched together and provided to the user, so that the user can obtain pictures from multiple angles at the same time when taking pictures.
  • each electronic device in the smart home system can serve as a master control device, so that after receiving a shooting instruction, each electronic device can respond to the shooting instruction and perform the foregoing steps.
  • some electronic devices in the smart home system are master devices, and some of the electronic devices are controlled devices. After the master device receives the shooting instruction, it directly responds to the shooting instruction and executes the foregoing steps; after a controlled device receives the shooting instruction, it sends the shooting instruction to the master control device, so that the master control device executes the aforementioned steps.
  • the mobile phone 34 is the main control device, and the smart watch is the controlled device.
  • the mobile phone 34 directly responds to the shooting instruction after receiving it, and the smart watch, after receiving the shooting instruction, sends the shooting instruction to the mobile phone 34, and the mobile phone 34 responds to the shooting instruction.
  • After the master device collects and obtains the photo, the photo can be stored locally on the master device or sent to the controlled device. Before the master device sends the photo to the controlled device, the display size of the photo can be adjusted to adapt it to the display unit of the controlled device.
  • In one embodiment, the electronic device 100 executes the foregoing steps; in another embodiment, after the electronic device 100 receives the shooting instruction, the electronic device 100 sends the shooting instruction to the server, and the server executes the steps performed by the electronic device 100 in S400-S430.
  • Through the above solution, taking pictures is not limited to the current electronic device, which solves the technical problem that using only the current electronic device to take pictures leads to a poor choice of angle and distance and therefore a poor photographing effect; the most suitable camera for image capture can be selected based on the scores of the photos taken by each camera, achieving the technical effect of improving the quality of the captured photos.
  • the selection is made directly by the electronic device (or the cloud server) without the need for the user to select manually, so the efficiency of selection is improved; in addition, in this solution, the powerful image algorithm capabilities and photographing mode advantages of electronic devices with strong processing capabilities can be used to assist a thin terminal (an electronic device with weak processing capability) to improve the shooting effect, thereby achieving the technical effect that high-quality photos can also be taken with a thin terminal.
  • the electronic device 100 can also use applications installed on other electronic devices to process data on the current electronic device (for example, to beautify photos), so as to achieve the technical effect of cooperating with various electronic devices and of using an application even when that application is not installed on the electronic device.
  • FIG. 7 Another embodiment of the present invention provides a control method. Please refer to FIG. 7, which includes the following steps:
  • S700 The electronic device 100 receives a shooting instruction.
  • the shooting instruction is, for example, a video capture instruction, and the manner of generating the shooting instruction is similar to that in S400, and will not be repeated here.
  • the shooting instruction can be used to capture video, and can also be used to communicate with another electronic device.
  • the user of the electronic device 100 generates a voice command "take me an activity video", in this case the electronic device 100 shoots a video through the shooting command;
  • another example is that the user of the electronic device 100 opens the instant messaging application and starts the video call function; after detecting the operation, the electronic device 100 activates the camera through a shooting instruction to make a video call with another electronic device.
  • the electronic device 100 collects and obtains photos through at least two cameras of the electronic device 100 to obtain at least two photos.
  • the at least two cameras of the electronic device 100 include at least one of a physical camera and a virtual camera; there can be one or more physical cameras, and one or more virtual cameras. This step is similar to S410 and will not be repeated here.
  • S720 The electronic device 100 determines the first camera based on the aforementioned at least two photos.
  • the electronic device 100 can obtain the scores of the aforementioned at least two photos, and then determine the first camera based on the scores of the at least two photos.
  • the specific determination method S420 has been introduced, so it will not be repeated here.
  • the electronic device 100 may also display the photos collected by each camera on the display unit of the electronic device 100, and prompt the user to select the photo she thinks is the best, and then use the camera corresponding to the photo selected by the user as the first camera.
  • the first camera can be one camera or multiple cameras. For example, the camera with the best shooting effect (the highest score) can be chosen as the first camera, or several cameras with different angles and better shooting results (scores greater than the preset value) can be chosen as the first cameras, so that multiple videos can be captured to provide the user with videos from different angles and give the user more choices.
  • the electronic device 100 in S710 may also control each camera to collect and obtain video.
  • In this case, the video scores (obtained by averaging the score of each frame of the video) or the video selected by the user may be used to determine the first camera.
  • the electronic device 100 controls the first camera to collect and obtain a video.
  • After the electronic device 100 controls the first camera to capture and obtain the video, the video may be directly used as the shooting result of the shooting instruction, or it may be processed first.
  • the electronic device 100 can also process the video with the help of applications contained in other electronic devices, which will not be repeated here.
  • While the electronic device 100 controls the first camera to perform video capture, the other cameras can be controlled to be in an on or off state, which is not limited in the embodiment of the present invention.
  • In one optional embodiment, the first camera is always used for video capture; in another optional embodiment, after the electronic device determines the first camera, if at least one of the captured content and the position of the first camera changes, that is, if the position of the captured content relative to the first camera changes, the camera used for video capture can be re-determined. The camera can be re-determined in multiple ways; two of them are listed below for introduction, and of course, in the specific implementation process, it is not limited to the following two situations.
  • the first one is to control other cameras to be always on. Every preset time interval (for example: 10 seconds, 20 seconds, 1 minute, etc.), take photos (or videos) through these cameras, and collect these electronic devices. The obtained photos (or videos) and the photos (or videos) collected by the first camera are scored separately. If the scores of the photos obtained by the first camera are still the highest (or still meet the conditions in S420 and S720), the The first camera is used as the video capture camera; if the score of the photo captured by other cameras is higher than the score of the photo captured by the first camera (or more in line with the conditions of S420 and S720 than the first camera), it will correspond The electronic device is set as a new camera for video capture.
  • For example, the camera selected by the electronic device 100 in the initial stage is the camera 35a, and it is assumed that the scores of the captured images are as shown in Table 3:
  • Since the photo captured by the camera 35c still has the highest score after 1 minute, the camera 35c is still used as the camera for video capture; after 2 minutes, the photo captured by the camera 35b has the highest score, so the camera 35b is used as the video capture camera, and the video data will subsequently be acquired by the camera 35b.
  • In this way, the video data finally collected by the electronic device 100 is video collected by at least two cameras. If the solution is used for video calls, then at different moments, the video received by the peer electronic device is video collected by different cameras.
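  • The first re-determination strategy, periodically re-scoring all cameras and switching when another camera scores higher, can be sketched as below; the capture and scoring callables are stubs standing in for the real image pipeline.

```python
def video_capture_with_reselection(cameras, capture_photo, score_photo,
                                   capture_clip, interval_s=10, rounds=3):
    """Record with the best-scoring camera, re-scoring every camera at a preset
    interval and switching when another camera scores higher."""
    def best_camera():
        scores = {cam: score_photo(capture_photo(cam)) for cam in cameras}
        return max(scores, key=scores.get), scores

    current, _ = best_camera()
    clips = []
    for _ in range(rounds):
        clips.append(capture_clip(current, interval_s))
        candidate, scores = best_camera()
        if scores[candidate] > scores[current]:
            current = candidate  # switch the video-capture camera
    return clips

# Toy usage with stubbed capture and scoring functions:
fake_scores = {"35a": 7.0, "35b": 8.5, "35c": 6.0}
clips = video_capture_with_reselection(
    cameras=["35a", "35b", "35c"],
    capture_photo=lambda cam: cam,                  # stand-in for a real capture
    score_photo=lambda photo: fake_scores[photo],   # stand-in for the scoring model
    capture_clip=lambda cam, s: f"{s}s clip from {cam}",
)
print(clips)  # every clip comes from camera 35b in this toy example
```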
  • the second type is to control other cameras except the first camera to stop collecting, and detect the movement amount of the first camera and the movement amount of the captured content at preset time intervals (such as 20 seconds, 1 minute, etc.).
  • If the amount of movement of the first camera is greater than a preset amount of movement (for example: 5 meters, 7 meters, etc.), or the amount of movement of the captured content is greater than the preset amount of movement, or the amount of movement of the captured content relative to the first camera is greater than the preset amount of movement,
  • the other cameras are controlled to enter the collection state and collect photos, and the scores of the photos obtained by each camera are compared to determine whether the video capture camera needs to be updated.
  • the determination method has been introduced previously, so it will not be repeated here.
  • For example, the user is in the living room at the beginning, and the first camera is the camera 31a of the smart TV 31.
  • When the user moves to the study, the camera for collecting video is switched from the camera of the smart TV 31 to the camera 30a of the desktop computer 30 in the study.
  • When it is detected that the user moves from the first area (for example, the living room) to the second area (for example, the bedroom), in addition to switching the camera used for shooting, other devices can also be switched; for example, the display is switched from the display unit in the first area (for example, the display screen of the smart TV 31) to the display unit in the second area (for example, the display of the desktop computer 30 in the study room).
  • the microphone is also switched from the microphone in the first area to the microphone in the second area, so that the microphone in the second area continues to collect the user's voice. It is also possible to switch other components, which are not listed in detail in the embodiment of the present invention, and are not limited.
  • The video captured by the multiple cameras is sent to the electronic device 100, synthesized according to timestamps, and then sent to the peer electronic device or stored locally on the electronic device 100; the electronic device 100 can also optimize the video captured by the at least two cameras to achieve seamless switching.
  • the electronic device 100 may also control multiple cameras to perform video shooting (the multiple cameras may be determined based on scores or user selections), so that videos of multiple angles of the captured content can be obtained at the same time.
  • In this embodiment, all of the electronic devices can serve as master devices, or some of them can be master devices and some can be controlled devices.
  • the foregoing steps can be executed on the electronic device 100 or on the server.
  • the present invention provides an image capturing method.
  • the method can be applied to a server or an electronic device 100.
  • the electronic device 100 is an electronic device included in a smart home scene.
  • the smart home scene is, for example, the smart home scene shown in FIG. 3; please refer to FIG. 8.
  • the image capturing method includes the following steps:
  • S800 The electronic device 100 receives a shooting instruction; as to what kind of instruction the shooting instruction is, it has been introduced above, so it will not be repeated here.
  • the electronic device 100 determines other electronic devices associated with the electronic device 100.
  • the electronic device 100 can determine the electronic device bound to it in a variety of ways. Three of them are listed below for introduction. Of course, in the specific implementation process, it is not limited to the following three situations:
  • the electronic device 100 queries the router to which it is connected to other electronic devices connected to the router, and these electronic devices are the electronic devices associated with the electronic device 100.
  • the electronic device 100 queries the server for electronic devices bound to the same account, and these electronic devices are the electronic devices associated with the electronic device 100.
  • the electronic device 100 sends broadcast information through short-range communication (such as Bluetooth or WIFI direct connection), other electronic devices generate response information based on the broadcast information, and the electronic device 100 regards the electronic devices that generated the response information as its associated electronic devices.
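  • The three association paths above (same router, same account, broadcast response) can be combined as in the sketch below; the Router/Cloud stubs and method names are illustrative assumptions.

```python
class Router:
    def connected_devices(self):
        return {"PAD 32", "smart TV 31"}

class Cloud:
    def devices_bound_to(self, account):
        return {"car 35"} if account == "family-account" else set()

def find_associated_devices(router=None, cloud=None, account=None, responders=()):
    """Collect the electronic devices associated with the electronic device 100."""
    associated = set()
    if router is not None:
        associated |= router.connected_devices()       # devices on the same router
    if cloud is not None and account is not None:
        associated |= cloud.devices_bound_to(account)  # devices bound to the same account
    associated |= set(responders)                      # devices that answered the broadcast
    return associated

print(find_associated_devices(Router(), Cloud(), "family-account", responders={"smart watch 33"}))
```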
  • short-range communication such as Bluetooth, WIFI direct connection
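  • Purely as an illustration, the following sketch shows how the three association strategies above could be combined; the function names (query_router, query_account_server, broadcast_discovery) and the Device structure are hypothetical and do not correspond to any concrete API in the disclosure.
```python
from dataclasses import dataclass
from typing import List, Callable

@dataclass
class Device:
    device_id: str
    capabilities: List[str]   # e.g. ["camera", "microphone", "display"]
    location: str             # e.g. "living room", "study", "car"

def discover_associated_devices(
    query_router: Callable[[], List[Device]],
    query_account_server: Callable[[], List[Device]],
    broadcast_discovery: Callable[[], List[Device]],
) -> List[Device]:
    """Collect candidate devices via the three strategies described above:
    same router, same account, and short-range broadcast, de-duplicated by id."""
    found = {}
    for strategy in (query_router, query_account_server, broadcast_discovery):
        for dev in strategy():
            found.setdefault(dev.device_id, dev)
    return list(found.values())
```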
  • S820: The electronic device 100 sends the shooting instruction to the other electronic devices associated with it; after receiving the shooting instruction, these electronic devices collect photos of the content to be shot and then send them to the electronic device 100.
  • The electronic device 100 may send the photographing instruction to the other electronic devices associated with it remotely, or via a local area network.
  • The electronic device 100 can send the shooting instruction to all of the electronic devices associated with it, or only to some of them. The subset of electronic devices can be determined in a variety of ways; several of them are introduced below, and of course the specific implementation is not limited to the following situations.
  • First, after obtaining the shooting instruction, the electronic device 100 may determine the location information of the content to be shot corresponding to the shooting instruction, and then determine the electronic devices for shooting based on that location information. For example, if the shooting instruction is "take me a picture of the living room", the electronic device 100 first determines through semantic analysis that the photographed object is located in the living room; in this case, the electronic device 100 identifies, from the electronic devices bound to it, those located in the living room, and then sends shooting instructions to these electronic devices to collect photos of the living room.
  • Alternatively, the electronic device 100 may first obtain the location information of the other electronic devices through a positioning apparatus, and then send the shooting instruction to the electronic devices located within a preset distance of the electronic device 100. The preset distance is, for example, 10 meters, 20 meters, and so on, which is not limited in the embodiments of the present invention. A selection sketch is given below.
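  • The following is a minimal, purely illustrative sketch of this location- and distance-based selection; the room names, the Device structure from the earlier sketch, and the distance helper are assumptions made for illustration.
```python
import math
from typing import List, Tuple

def select_by_room(devices: List["Device"], room: str) -> List["Device"]:
    """Keep only camera-equipped devices located in the room named by the
    semantic analysis of the shooting instruction (e.g. "living room")."""
    return [d for d in devices if d.location == room and "camera" in d.capabilities]

def select_by_distance(
    positions: List[Tuple[str, float, float]],   # (device_id, x, y) from a positioning apparatus
    origin: Tuple[float, float],                 # position of the electronic device 100
    preset_distance_m: float = 10.0,
) -> List[str]:
    """Keep devices whose distance to the electronic device 100 is within the preset range."""
    ox, oy = origin
    return [
        dev_id for dev_id, x, y in positions
        if math.hypot(x - ox, y - oy) <= preset_distance_m
    ]
```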
  • Second, the electronic device 100 can determine the content to be shot from the shooting instruction, determine a shooting mode based on the content to be shot, and determine some of the electronic devices based on that shooting mode.
  • For example, if the shooting instruction is a voice instruction, the electronic device 100 may first recognize the voice instruction and then perform semantic analysis on the recognized content, so as to determine the content to be shot, such as a person, a landscape, a still life, and so on. For example, if the shooting instruction is "take a photo of me", the content to be shot includes a person; if the shooting instruction is "take a picture of the bedroom", the content to be shot includes still life; if the shooting instruction is "take a picture of the scenery in front of you", the content to be shot is scenery, and so on.
  • If the content to be shot contains a "person", the determined shooting mode is, for example, portrait mode or large aperture mode; if the content to be shot is "landscape", the determined shooting mode is, for example, landscape mode.
  • If the determined shooting mode is the portrait mode, the electronic device 100 may query the other electronic devices for those that support a portrait mode, so as to determine these electronic devices as the electronic devices for shooting; if the determined shooting mode is the landscape mode, the electronic device 100 may query the other electronic devices for those that support a landscape mode, and so on. Alternatively, the electronic device 100 may pre-store the shooting modes of each electronic device and perform the query directly on the pre-stored shooting modes, so as to determine the electronic devices used for shooting. A sketch of this mode-based selection follows.
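  • Below is a hedged sketch of the mode-based selection; the mapping from content type to shooting mode and the per-device mode table are illustrative assumptions, not values stated in the disclosure.
```python
from typing import Dict, List, Set

# Hypothetical mapping from detected content to preferred shooting modes.
CONTENT_TO_MODES: Dict[str, List[str]] = {
    "person": ["portrait", "large_aperture"],
    "landscape": ["landscape"],
    "still_life": ["auto"],
}

def select_devices_by_mode(
    content: str,
    device_modes: Dict[str, Set[str]],   # device_id -> shooting modes it supports (pre-stored or queried)
) -> List[str]:
    """Return the devices whose supported shooting modes match the content to be shot."""
    wanted = CONTENT_TO_MODES.get(content, ["auto"])
    return [
        dev_id for dev_id, modes in device_modes.items()
        if any(mode in modes for mode in wanted)
    ]

# Usage: "take a photo of me" -> content "person".
print(select_devices_by_mode("person", {"31a": {"portrait"}, "30a": {"landscape"}}))  # ['31a']
```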
  • Optionally, when the electronic device 100 sends the shooting instruction to the selected electronic devices, the shooting instruction may carry a shooting mode. For example, if the content to be shot corresponding to the shooting instruction contains a "person", the shooting mode is the portrait mode, and the electronic device that receives the shooting instruction captures the photo in portrait mode and sends it to the electronic device 100; if the content to be shot corresponding to the shooting instruction is landscape, the shooting mode can be the landscape mode, and the electronic device that receives the shooting instruction captures the photo in landscape mode and sends it to the electronic device 100.
  • When the electronic device that has received the shooting instruction has multiple cameras, it may use some of its cameras or all of its cameras to perform image capturing, which is not limited in the embodiments of the present invention.
  • In addition, if the electronic device that receives the shooting instruction does not have the requested shooting mode, it takes the photo using the shooting mode closest to the requested one (for example, if the shooting instruction specifies the portrait mode but the receiving electronic device does not have a portrait mode, it can choose the large aperture mode), or it takes the photo using the shooting mode preferred by the user of the electronic device 100 (for example, the default camera mode, the historically most used camera mode, and so on).
  • S830: After receiving the photos sent by these devices, the electronic device 100 scores the photos, selects the photo with the highest score, and uses that photo as the shooting result of the shooting instruction. If the electronic device 100 itself includes a camera, the electronic device 100 also collects a photo through its own camera and scores it together with the photos collected by the other electronic devices, so as to obtain the photo with the highest score. How the score of each photo is determined is described in detail later.
  • After the electronic device 100 determines the photo with the highest score from the photos sent by the multiple devices, it can directly output it as the shooting result of the camera application, for example by storing it in the photo album of the electronic device 100 and displaying it in the photo preview interface of the camera application.
  • The electronic device 100 can also process the finally determined photo before outputting it, for example: crop it to meet the size requirements of the electronic device 100, perform image beautification on it (adjust the hue, saturation and brightness, add beautification filters, and so on), add various special effects, and so on. A sketch of the overall select-best-photo step is given below.
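  • As a non-authoritative sketch of step S830, assuming a score_photo function that implements the scoring rule described later:
```python
from typing import Callable, Dict, Any

def select_best_photo(
    photos: Dict[str, Any],                 # device_id -> photo data (bytes, array, ...)
    score_photo: Callable[[Any], float],    # scoring rule, e.g. formula (1) defined later
) -> str:
    """Score every received photo and return the id of the device whose photo scored highest."""
    scores = {dev_id: score_photo(photo) for dev_id, photo in photos.items()}
    best_device = max(scores, key=scores.get)
    return best_device
```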
  • If the shooting instruction is a shooting instruction sent by another electronic device (for example, a car navigator), then after the electronic device 100 obtains the photo, the obtained photo may be sent back to the car navigator. Before that, the electronic device 100 can also obtain the screen size or screen ratio of the car navigator, so as to adapt the photo to that screen size or screen ratio.
  • Alternatively, after another electronic device collects the photo, it may perform image beautification, add various special effects, and so on, and then send the photo to the electronic device 100, which is not limited in the embodiments of the present invention.
  • Alternatively, the electronic device 100 may send the photo to another electronic device for beautification, and that electronic device may send the photo back to the electronic device 100 after the beautification is completed.
  • For example, after determining the photo with the highest score, the electronic device 100 can also check whether an image beautification application is installed on each of the other electronic devices. If an electronic device (for example, PAD32) has such an application, the electronic device 100 can send the photo to PAD32 for processing and then receive the processed photo from PAD32.
  • The electronic device 100 can ask each bound device whether an image beautification application is installed after obtaining the photo with the highest score, or it can pre-store the functions of each electronic device and, after obtaining the photo with the highest score, directly determine from those pre-stored functions which electronic device has an image beautification application.
  • After the best photo is determined, the electronic device that took it can be controlled to continue shooting while the other electronic devices are turned off; of course, the other electronic devices can also be kept in an on state, which is not limited in the embodiments of the present invention.
  • Another embodiment of the present invention provides a control method; please refer to FIG. 9, which includes:
  • S900: The electronic device 100 receives a shooting instruction. The shooting instruction is, for example, an instruction to shoot a video, and the manner of generating the shooting instruction is similar to that in S800, so it is not repeated here.
  • S910: The electronic device 100 determines the other electronic devices associated with it. This step is similar to S810 and is not repeated here.
  • S920: The electronic device 100 sends the shooting instruction to the other electronic devices associated with it. This step is similar to S820 and is not repeated here.
  • S930: The electronic device 100 receives the photos sent by these devices, scores the photos, and selects the shooting device whose photo has the highest score. This step is similar to S830 and is not repeated here.
  • After determining the electronic device corresponding to the photo with the highest score, the electronic device 100 acquires a video of the current user through that electronic device and sends the video to the peer electronic device of the video communication.
  • For example, the electronic device 100 detects that the user clicks the video call button, and determines from this that the user of the electronic device 100 wishes to capture a video of himself or herself and provide it to the peer electronic device. The electronic device 100 then searches for electronic devices within a preset distance, collects photos through these electronic devices and through its own camera, and determines the electronic device corresponding to the photo with the highest score as the electronic device used for the video communication; the determined electronic device may be the electronic device 100 itself or another electronic device.
  • For example, if the electronic device 100 determines that the photo collected by the smart TV 31 has the highest score, the electronic device 100 determines that the smart TV 31 is the electronic device used for the video communication, so that when performing the video communication with the peer electronic device, the video collected by the smart TV 31 is sent to the peer electronic device.
  • After the electronic device 100 determines the capture device of the photo with the highest score, it can control the capture device to turn on, capture video data, and send the video data to the electronic device 100, which then forwards it to the peer electronic device to realize the video communication. At the same time, the other electronic devices can be controlled to remain on or to be turned off, which is not limited in the embodiments of the present invention.
  • In one embodiment, once determined, the capture device is used to capture the video for the video communication throughout; in another alternative embodiment, after the electronic device determines the capture device of the photo with the highest score, if the user or the capture device is displaced, another electronic device can be re-determined as the capture device for the video communication. The new device can be determined in several manners; two of them are introduced below, and of course the specific implementation is not limited to the following two situations.
  • The first manner is to keep the other electronic devices on, collect photos through these electronic devices at preset time intervals (for example, every 10 seconds, 20 seconds, 1 minute, and so on), and score the photos collected by these devices together with the photo collected by the current capture device. If the score of the photo collected by the capture device is still the highest, that device remains the capture device; if the score of a photo collected by another device is higher than the score of the photo collected by the capture device, the corresponding electronic device is set as the new capture device.
  • The second manner is to control the other electronic devices to stop shooting, and to detect the movement amount of the capture device and the movement amount of the photographed object at preset time intervals (for example, 20 seconds, 1 minute, and so on). When the movement amount of the capture device is greater than a preset movement amount (for example, 5 meters, 7 meters, and so on), or the movement amount of the photographed object is greater than the preset movement amount, or the movement amount of the photographed object relative to the capture device is greater than the preset movement amount, the other electronic devices are switched into the shooting state, photos are collected, and the scores of the photos collected by the electronic devices are compared. If the score of the photo collected by the current capture device is still the highest, that device remains the capture device; if the score of a photo collected by another device is higher, the corresponding electronic device is used as the new capture device. For example, when the user walks from the living room to the study, the capture device automatically switches from the smart TV 31 in the living room to the desktop computer 30 in the study. A switching sketch is given below.
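  • For illustration only, here is a minimal sketch of the second manner, assuming hypothetical helpers movement_of and capture_and_score; the thresholds mirror the examples in the text and are not prescribed by the disclosure.
```python
from typing import Callable, Dict, List

def maybe_switch_capture_device(
    current_device: str,
    all_devices: List[str],
    movement_of: Callable[[str], float],        # metres a device moved since the last check
    subject_movement: float,                    # metres the photographed object moved
    capture_and_score: Callable[[str], float],  # wake a device, take a photo, score it
    preset_movement_m: float = 5.0,
) -> str:
    """Re-evaluate the capture device only when the camera or the subject moved enough."""
    moved = (
        movement_of(current_device) > preset_movement_m
        or subject_movement > preset_movement_m
    )
    if not moved:
        return current_device  # nothing changed; keep shooting with the same device

    scores: Dict[str, float] = {dev: capture_and_score(dev) for dev in all_devices}
    best = max(scores, key=scores.get)
    return best if scores[best] > scores[current_device] else current_device
```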
  • The video collected by the smart TV 31 and by the desktop computer 30 (or by other capture devices) is sent to the electronic device 100, synthesized according to the timestamps, and then sent to the peer electronic device. The electronic device 100 can also optimize the video captured by the two capture devices to achieve seamless switching.
  • The foregoing video capture control process can be applied to video calls, video shooting, and other scenes that require video capture, which is not limited in the embodiments of the present invention.
  • In the case where the photo contains a person, the score value of the photo can be calculated by the following formula:
  • E = αx + βy + γz    (1)
  • where E represents the score value of the photo;
  • the distance parameter x takes its maximum value at the optimal physical shooting distance of 50 cm and decreases as a gradient towards farther or nearer distances; α represents the weight value of the distance parameter x, and its value range is [0, 1];
  • the angle parameter y takes its maximum value when the face directly faces the camera and decreases as a gradient with the deflection of the three-axis angles; β represents the weight value of the angle parameter y, and its value range is [0, 1];
  • the aesthetic composition parameter z takes its maximum value at the maximum score of the aesthetic composition scoring model and decreases as a gradient; γ represents the weight value of the aesthetic composition parameter z, and its value range is [0, 1].
  • Appropriate weighting coefficients can be assigned to the three factors, and the weight values can also adopt other values, which is not limited in the embodiments of the present invention. A scoring sketch based on formula (1) follows.
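  • The following is a minimal sketch of formula (1), under the assumption that the gradient decreases are modelled with simple exponential/linear falloffs; the falloff shapes and the default weights are illustrative choices, not values stated in the disclosure.
```python
import math

def distance_score(distance_cm: float, best_cm: float = 50.0) -> float:
    """x: maximal at the optimal shooting distance (50 cm), decaying farther or nearer."""
    return math.exp(-abs(distance_cm - best_cm) / best_cm)

def angle_score(pitch: float, yaw: float, roll: float) -> float:
    """y: maximal when the face directly faces the camera, decaying with three-axis deflection."""
    deflection = (abs(pitch) + abs(yaw) + abs(roll)) / 3.0   # degrees
    return max(0.0, 1.0 - deflection / 90.0)

def photo_score(distance_cm, pitch, yaw, roll, aesthetic,     # aesthetic z assumed in [0, 1]
                alpha=0.4, beta=0.3, gamma=0.3) -> float:
    """Formula (1): E = alpha*x + beta*y + gamma*z, each weight in [0, 1]."""
    x = distance_score(distance_cm)
    y = angle_score(pitch, yaw, roll)
    return alpha * x + beta * y + gamma * aesthetic

print(round(photo_score(50, 0, 5, 0, 0.8), 3))   # a close, frontal, well-composed photo scores high
```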
  • Computer vision technology can be used to calculate the distance between each electronic device and the object being photographed. For example, when an electronic device contains a binocular camera, the distance between the electronic device and the photographed object can be determined from the visual disparity between the two cameras; or, when the photographed object is the current user, the current user can be located through the electronic device held by the user, the other electronic devices can be located through their own positioning apparatuses, and the distance between each electronic device and the photographed object is determined based on that positioning, and so on.
  • The distance of the other electronic devices relative to the electronic device 100 can also be obtained based on Bluetooth indoor positioning, wireless WIFI positioning, or infrared optical positioning technology.
  • The angle parameter y can be obtained by using face key point detection technology (for example, the Harris corner detection algorithm) to detect the key points of the face, and then using a pose estimation algorithm to estimate the three-axis angles of the face based on those key points. The face is regarded as a frontal face when, for example, the pitch angle, yaw angle and roll angle are all within -30° to 30°, and the angle parameter takes its largest values within this frontal-face range.
  • The aesthetic composition parameter z can be calculated by an aesthetic quality evaluation algorithm, which usually includes two stages. (1) The feature extraction stage: features can be designed manually, for example the clarity contrast, brightness contrast, color simplicity, harmony, and degree of compliance with the rule of thirds can be used to characterize the photo; alternatively, a deep convolutional neural network can be used to extract image aesthetic features automatically. (2) The decision stage: the extracted image aesthetic features are used to train a classifier or regression model that classifies or regresses the image. The trained model can distinguish high aesthetic quality images from low aesthetic quality images, and can also give the image an aesthetic quality score.
  • An aesthetics scoring system can be set up locally on the electronic device 100 with the aesthetics evaluation algorithm built in, or the aesthetics evaluation algorithm can be set up on the server, which is not limited in the embodiments of the invention. A hand-crafted-feature sketch is given below.
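  • Below is a deliberately simple sketch of the hand-crafted-feature route (stage 1 plus a fixed linear decision standing in for a trained model); it assumes a grayscale image as a NumPy array and uses the rule of thirds only as a rough illustration, not as the model actually used by the disclosure.
```python
import numpy as np

def aesthetic_score(gray: np.ndarray) -> float:
    """Toy aesthetic composition score in [0, 1] from two hand-crafted features:
    global brightness contrast and a rule-of-thirds saliency proxy."""
    h, w = gray.shape
    contrast = gray.std() / 128.0                      # brightness contrast, roughly in [0, 2]

    # Rule-of-thirds proxy: how much of the image "energy" sits near the third lines.
    band = max(1, min(h, w) // 12)
    mask = np.zeros_like(gray, dtype=bool)
    for r in (h // 3, 2 * h // 3):
        mask[max(0, r - band):r + band, :] = True
    for c in (w // 3, 2 * w // 3):
        mask[:, max(0, c - band):c + band] = True
    energy = np.abs(gray - gray.mean())
    thirds_ratio = energy[mask].sum() / (energy.sum() + 1e-6)

    # Fixed linear "decision stage" standing in for a trained classifier/regressor.
    return float(np.clip(0.5 * min(contrast, 1.0) + 0.5 * thirds_ratio, 0.0, 1.0))

print(aesthetic_score(np.random.default_rng(0).integers(0, 256, (300, 400)).astype(float)))
```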
  • FIG. 10 shows the scores of photos collected with different cameras. It can be seen from FIG. 10 that the score of a photo containing a frontal face is higher than the score of a photo that does not contain a frontal face, and, among the photos containing a frontal face, the photo taken closer to the camera scores higher.
  • In a specific implementation, the electronic device 100 can determine through posture recognition whether a "person" is included in the photo collected by each camera; if so, the photo is scored based on the above formula (1), and if not, the photo can be directly excluded without scoring. Alternatively, the electronic device 100 can use face recognition to determine whether the photo collected by each camera contains a human face; if so, the photo is scored based on the above formula (1), and if not, the photo can be directly excluded without scoring.
  • In the case where the photo does not contain the user, the score value of the photo can be calculated by the following formula:
  • E = αx + γz    (2)
  • where E, the distance parameter x with its weight α, and the aesthetic composition parameter z with its weight γ are defined as in formula (1).
  • In a specific implementation, the electronic device 100 can by default select either of the above methods to calculate the score of a photo. In another embodiment, the electronic device 100 can also select different calculation methods for different photographed objects, for example: if the photographed object contains a person, formula (1) is used to calculate the score value of the photo, and if the photographed object does not contain a person, formula (2) is used, as sketched below.
  • Photos can also be scored separately on individual parameters, for example: a separate score based on distance, a separate score based on angle, a separate score based on aesthetic composition, and so on.
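  • For illustration, a small dispatcher choosing between formula (1) and formula (2) based on a person detector; detect_person and the two scoring helpers are assumed to exist (see the earlier sketch of formula (1)).
```python
def score_with_dispatch(photo, detect_person, score_formula1, score_formula2) -> float:
    """Use formula (1) when the photo contains a person, otherwise formula (2).

    detect_person(photo)   -> bool, e.g. backed by face recognition or posture recognition
    score_formula1(photo)  -> float, E = alpha*x + beta*y + gamma*z
    score_formula2(photo)  -> float, E = alpha*x + gamma*z
    """
    return score_formula1(photo) if detect_person(photo) else score_formula2(photo)
```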
  • Another embodiment of the present invention provides a control method. Please refer to FIG. 11, which includes the following steps:
  • The electronic device 100 receives a shooting instruction, which is similar to the shooting instructions described above and is not repeated here.
  • The electronic device 100 determines a first camera from at least two cameras, where the at least two cameras include one or more physical cameras of the electronic device 100 and one or more cameras of other electronic devices.
  • The electronic device 100 may determine the first camera in a variety of ways, for example: (1) determine the location information of the content to be shot corresponding to the shooting instruction, and then determine the camera for shooting based on that location information; (2) determine a shooting mode based on the content to be shot, and determine a camera that supports that shooting mode as the first camera. Since the specific details have been introduced above, they are not repeated here.
  • S1120: Data corresponding to the shooting instruction is obtained through the first camera. The data may be video data or image data.
  • In one embodiment, the cameras of other electronic devices can be registered as virtual cameras of the electronic device 100, so that in step S1120 the virtual camera corresponding to the first camera can be called to obtain the data corresponding to the shooting instruction; in another embodiment, the electronic device 100 may send the shooting instruction to the electronic device where the first camera is located, and that electronic device responds to the shooting instruction, collects the data, and then returns it to the electronic device 100. Both paths are sketched below.
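  • A minimal, assumption-laden sketch of the two acquisition paths (virtual-camera call versus a remote shooting request); VirtualCameraRegistry and send_shoot_request are invented names for illustration, not APIs from the disclosure.
```python
from typing import Optional, Dict, Callable

class VirtualCameraRegistry:
    """Cameras of other electronic devices registered as virtual cameras of device 100."""
    def __init__(self):
        self._cams: Dict[str, Callable[[], bytes]] = {}

    def register(self, camera_id: str, capture_fn: Callable[[], bytes]) -> None:
        self._cams[camera_id] = capture_fn

    def capture(self, camera_id: str) -> Optional[bytes]:
        fn = self._cams.get(camera_id)
        return fn() if fn else None

def obtain_data(camera_id: str,
                registry: VirtualCameraRegistry,
                send_shoot_request: Callable[[str], bytes]) -> bytes:
    """Step S1120: prefer the registered virtual camera; otherwise fall back to
    sending a shooting request to the device that owns the first camera."""
    data = registry.capture(camera_id)
    if data is not None:
        return data
    return send_shoot_request(camera_id)
```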
  • Similarly, the first electronic device can also use other functions of other electronic devices, for example the software of a second electronic device (for example, reading software, video playback software, video processing software, and so on) or its hardware (for example, a display, a microphone, and so on). When other functions of other electronic devices are used, the other electronic devices (or the corresponding hardware) are determined in a manner similar to the way the camera is determined.
  • For example, the user of the first electronic device wants to play a video and a corresponding instruction is received; the first electronic device responds to the instruction and determines that its current location is the living room. If it detects that the living room contains a smart TV (the second electronic device), it projects the video content to the smart TV for playback.
  • When the first electronic device determines the second electronic device, it can consider the distance and angle between the first electronic device and the user, between the second electronic device and the user, and the sizes of the respective displays, and comprehensively determine whether to use the display of the first electronic device or the display of the second electronic device to respond to the instruction.
  • A control method, including:
  • the first electronic device obtains a shooting instruction;
  • according to the shooting instruction, the location information of the content to be shot corresponding to the shooting instruction is determined, and based on the location information of the content to be shot, the camera most suitably located for shooting the content to be shot is determined, from the at least two cameras that the first electronic device can control, as the target camera; and/or,
  • according to the shooting instruction, a shooting mode is determined based on the content to be shot, and a camera that supports the shooting mode is determined, from the at least two cameras that the first electronic device can control, as the target camera; the at least two cameras include a camera on the first electronic device and a camera of a second electronic device, the first electronic device being different from the second electronic device;
  • the first electronic device controls the target camera to execute the shooting instruction to obtain image data collected by the target camera.
  • Optionally, determining, from the at least two cameras that can be controlled, the camera most suitably located for photographing the content to be shot as the target camera includes:
  • based on the location information of the content to be shot, determining, from the at least two cameras that can be controlled, a camera whose shooting range covers the location information, and if only one camera is determined, determining that camera to be the target camera;
  • or, determining a shooting mode based on the content to be shot, determining, from the at least two cameras that can be controlled, a camera that supports the shooting mode, and if only one camera is determined, determining that camera to be the target camera.
  • Optionally, determining, from the at least two cameras that can be controlled, the camera most suitably located for photographing the content to be shot as the target camera includes:
  • based on the location information of the content to be shot, determining, from the at least two cameras that can be controlled, the cameras whose shooting ranges cover the location information, and if multiple cameras are determined, controlling the multiple cameras to collect photos so as to obtain at least two photos, scoring the at least two photos according to a first preset rule, and determining the camera corresponding to the photo with the highest score to be the target camera; or, determining a shooting mode based on the content to be shot, determining, from the at least two cameras that can be controlled, the cameras that support the shooting mode, and if multiple cameras are determined, controlling the multiple cameras to collect photos so as to obtain at least two photos, scoring the at least two photos according to the first preset rule, and determining the camera corresponding to the photo with the highest score to be the target camera.
  • Optionally, the first electronic device controlling the target camera to execute the shooting instruction to obtain image data collected by the target camera includes:
  • the first electronic device sending a shooting request to the electronic device where the target camera is located, and receiving image data sent by the electronic device where the target camera is located; or,
  • the first electronic device calling the target camera as a virtual camera of the first electronic device, and acquiring image data collected by the virtual camera.
  • Optionally, the first preset rule includes at least one of: a performance parameter of the camera, the distance between the camera and the content to be shot, and the angle between the camera and the content to be shot.
  • Optionally, the software architecture of the first electronic device includes: an application framework layer, including a camera framework used to provide camera functions externally and a camera interface used to obtain data collected by cameras, where the cameras include physical cameras and virtual cameras; an MSDP module used to virtualize the cameras of other electronic devices as virtual cameras of a hardware abstraction layer; and the hardware abstraction layer, which includes the cameras, the cameras including physical cameras and virtual cameras, where the physical cameras and the virtual cameras carry different tags.
  • Optionally, the first electronic device calling the target camera as the virtual camera of the first electronic device includes:
  • when the first electronic device needs to call the camera function of CaaS, it first registers the CaaS service with the system;
  • the CaaS service queries the MSDP for the existence of a virtual camera, and when a virtual camera exists, obtains the virtual camera's video data through the camera interface. A sketch of this query flow follows.
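  • A schematic, non-authoritative sketch of that call flow in plain Python (the real implementation lives in the Android application framework and hardware abstraction layer); every class and method name here is an assumption made for illustration.
```python
from typing import Dict, Optional

class Msdp:
    """Stand-in for the distributed-device virtualization module: it keeps the
    cameras of other electronic devices registered as virtual cameras of the HAL."""
    def __init__(self):
        self.virtual_cameras: Dict[str, dict] = {}   # camera_id -> {"tag": "virtual", ...}

    def register_virtual_camera(self, camera_id: str) -> None:
        self.virtual_cameras[camera_id] = {"tag": "virtual"}

    def find_virtual_camera(self) -> Optional[str]:
        return next(iter(self.virtual_cameras), None)

class CaaSService:
    """Stand-in for the CaaS service: registered with the system, then used to
    fetch video data from a virtual camera through the camera interface."""
    def __init__(self, msdp: Msdp, camera_interface):
        self.msdp = msdp
        self.camera_interface = camera_interface

    def get_video_data(self) -> Optional[bytes]:
        cam_id = self.msdp.find_virtual_camera()
        if cam_id is None:
            return None                              # no virtual camera registered
        return self.camera_interface(cam_id)         # read by tag/id via the camera interface

# Usage: register the smart TV camera 31a, then let CaaS fetch its video data.
msdp = Msdp()
msdp.register_virtual_camera("31a")
caas = CaaSService(msdp, camera_interface=lambda cam_id: b"frame-bytes-from-" + cam_id.encode())
print(caas.get_video_data())
```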
  • An electronic device, including:
  • one or more processors; a memory; multiple application programs;
  • and one or more computer programs, where the one or more computer programs are stored in the memory, the one or more computer programs include instructions, and when the instructions are executed by the first electronic device, the first electronic device executes the method according to any one of claims 1-7.
  • An electronic device, including:
  • a first obtaining module, used to obtain a shooting instruction;
  • a first determining module, configured to determine, according to the shooting instruction, the location information of the content to be shot corresponding to the shooting instruction, and to determine, based on the location information of the content to be shot and from at least two cameras that can be controlled, the camera most suitably located for shooting the content to be shot as the target camera; and/or,
  • configured to determine a shooting mode based on the content to be shot and to determine, from the at least two cameras that can be controlled, a camera that supports the shooting mode as the target camera; the at least two cameras include a camera on the first electronic device and a camera of a second electronic device, the first electronic device being different from the second electronic device;
  • a control module, configured to control the target camera to execute the shooting instruction to obtain image data collected by the target camera.
  • Optionally, the first determining module includes:
  • a first determining unit, configured to determine, based on the location information of the content to be shot and from the at least two cameras that can be controlled, a camera whose shooting range covers the location information, and if only one camera is determined, to determine that camera to be the target camera;
  • a second determining unit, used to determine the shooting mode based on the content to be shot; and a third determining unit, used to determine, from the at least two cameras that can be controlled, a camera that supports the shooting mode, and if only one camera is determined, to determine that camera to be the target camera.
  • Optionally, the first determining module includes:
  • a fourth determining unit, configured to determine, based on the location information of the content to be shot and from the at least two cameras that can be controlled, the cameras whose shooting ranges cover the location information, and if multiple cameras are determined, to control the multiple cameras to collect photos so as to obtain at least two photos;
  • a fifth determining unit, configured to score the at least two photos according to a first preset rule and to determine the camera corresponding to the photo with the highest score to be the target camera; or
  • a sixth determining unit, configured to determine the shooting mode based on the content to be shot; a seventh determining unit, configured to determine, from the at least two cameras that can be controlled, the cameras that support the shooting mode, and if multiple cameras are determined, to control the multiple cameras to collect photos so as to obtain at least two photos; and an eighth determining unit, configured to score the at least two photos according to the first preset rule and to determine the camera corresponding to the photo with the highest score to be the target camera.
  • Optionally, the control module is used for:
  • the first electronic device sending a shooting request to the electronic device where the target camera is located, and receiving image data sent by the electronic device where the target camera is located; or,
  • the first electronic device calling the target camera as a virtual camera of the first electronic device, and acquiring image data collected by the virtual camera.
  • Optionally, the first preset rule includes at least one of: a performance parameter of the camera, the distance between the camera and the content to be shot, and the angle between the camera and the content to be shot.
  • Optionally, the software architecture of the first electronic device includes: an application framework layer, including a camera framework for providing camera functions externally and a camera interface for acquiring data collected by cameras, where the cameras include physical cameras and virtual cameras; an MSDP module used to virtualize the cameras of other electronic devices as virtual cameras of a hardware abstraction layer; and the hardware abstraction layer, which includes the cameras, the cameras including physical cameras and virtual cameras, where the physical cameras and the virtual cameras have different tags.
  • Optionally, the control module includes:
  • a virtualization unit, used to virtualize the target camera in advance as a virtual camera of the first electronic device;
  • a calling unit, configured to call a CaaS function, where the CaaS function is provided to the first electronic device through the CaaS service;
  • a trigger unit, used to notify the MSDP distributed device virtualization to register the cameras of other electronic devices as virtual cameras of the hardware abstraction layer when a trigger condition is met;
  • an acquiring unit, used to register the CaaS service with the system when the camera function of CaaS needs to be called, where the CaaS service queries the MSDP for the existence of a virtual camera and, when a virtual camera exists, acquires the virtual camera's video data through the camera interface.
  • A control method, including:
  • the first electronic device obtains a shooting instruction;
  • in response to the shooting instruction, the first electronic device controls at least two cameras that it can control to execute the shooting instruction and obtains the collected image data, thereby obtaining at least two photos; the at least two cameras include a camera on the first electronic device and a camera of a second electronic device, the first electronic device being different from the second electronic device;
  • the first electronic device determines the shooting result of the shooting instruction according to a preset second rule and the at least two photos.
  • Optionally, the first electronic device determining the shooting result according to the preset second rule and the at least two photos includes:
  • scoring based on at least one of the performance parameters of the camera, the distance between the camera and the photographed object, and the angle between the camera and the photographed object;
  • taking a photo that meets a preset score value as the shooting result of the shooting instruction.
  • Optionally, the first electronic device determining the shooting result according to the preset second rule and the at least two photos includes:
  • the first electronic device stitching the at least two photos together as the shooting result of the shooting instruction; or,
  • outputting the at least two photos and, in response to the user's selection operation, taking the photo selected by the user as the shooting result of the shooting instruction.
  • An electronic device, including:
  • one or more processors; a memory; multiple application programs;
  • and one or more computer programs, where the one or more computer programs are stored in the memory, the one or more computer programs include instructions, and when the instructions are executed by the first electronic device, the first electronic device executes the method described in any embodiment of the present invention.
  • An electronic device, including:
  • a second obtaining module, used to obtain a shooting instruction;
  • a response module, configured to respond to the shooting instruction by controlling at least two cameras that can be controlled to execute the shooting instruction and obtaining the collected image data, thereby obtaining at least two photos;
  • a second determining module, configured to determine the shooting result of the shooting instruction according to a preset second rule and the at least two photos.
  • Optionally, the second determining module includes:
  • a scoring unit, which performs scoring according to at least one of the performance parameters of the camera, the distance between the camera and the photographed object, and the angle between the camera and the photographed object; and
  • a ninth determining unit, configured to take a photo that meets a preset score value as the shooting result of the shooting instruction.
  • Optionally, the second determining module is configured to:
  • stitch the at least two photos together as the shooting result of the shooting instruction; or,
  • output the at least two photos and, in response to the user's selection operation, take the photo selected by the user as the shooting result of the shooting instruction.
  • Another embodiment of the present invention provides a computer-readable storage medium, including instructions, which, when run on an electronic device, cause the electronic device to execute the method described in any embodiment of the present invention.
  • Another embodiment of the present invention provides a computer program product; the computer program product includes software code, and the software code is used to execute the method described in any embodiment of the present invention.
  • Another embodiment of the present invention provides a chip containing instructions; when the chip runs on an electronic device, the electronic device is caused to execute the method described in any embodiment of the present invention.
  • The above-mentioned electronic devices and the like include hardware structures and/or software modules corresponding to the respective functions. Those skilled in the art should readily appreciate that the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods for each specific application to implement the described functions, but such implementations should not be considered as going beyond the scope of the embodiments of the present invention.
  • In the embodiments of the present application, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The above-mentioned integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present invention is illustrative and is only a logical division of functions; there may be other division methods in actual implementation. The following takes the division of each functional module corresponding to each function as an example.
  • the methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented by software, it can be implemented in the form of a computer program product, in whole or in part.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, an electronic device, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or in a wireless manner (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, SSD).
  • the disclosed system, device, and method can be implemented in other ways.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a logical division of functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of computer vision, and discloses a control method, an electronic device, a computer-readable storage medium, a computer program product and a chip, to solve the technical problem in the prior art that the quality of images collected through the camera of an electronic device is poor. The method includes: a first electronic device obtains a shooting instruction; according to the shooting instruction, the location information or the shooting mode of the content to be shot corresponding to the shooting instruction is determined, and based on the location information or the shooting mode of the content to be shot, the camera most suitable for shooting the content to be shot is determined, from at least two cameras that the first electronic device can control, as a target camera; the at least two cameras include a camera on the first electronic device and a camera of a second electronic device, the first electronic device being different from the second electronic device; the first electronic device controls the target camera to execute the shooting instruction to obtain image data collected by the target camera. The method can be used in artificial intelligence devices.

Description

一种控制方法、电子设备、计算机可读存储介质、芯片
本申请要求在2019年12月18日提交中国国家知识产权局、申请号为201911310883.5的中国专利申请的优先权,发明名称为“一种控制方法、电子设备、计算机可读存储介质、芯片”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及物联网领域,尤其涉及一种控制方法、电子设备、计算机可读存储介质、芯片。
背景技术
目前我们使用的多种终端设备,例如:手机、平板、监控摄像头、TV、车机、眼镜等,只能基于用户手动选择,单独控制,或者利用手机连接摄像头或者智能家居产品进行远程控制。
现有技术中,很多电子设备中都包含摄像头,用户可以手动选择手机、平板、监控摄像头来进行拍摄,采集照片或者视频,但是由于用户所处的角度、距离的限制,导致某些场景下存在着拍照质量较低的技术问题。
发明内容
本申请提供的一种控制方法、电子设备、计算机可读存储介质、芯片,以通过第二电子设备的摄像头进行协同拍照,以提高拍照质量。
第一方面,本发明实施例提供一种控制方法,包括:
第一电子设备获得拍摄指令;
根据所述拍摄指令,确定拍摄指令所对应的待拍摄内容的位置信息,基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头;和/或,
根据所述拍摄指令,基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头作为目标摄像头;所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
所述第一电子设备控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像采集的图像数据。
可选的,所述基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,包括:
基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头只有一个,则确定该摄 像头为目标摄像头;
或者,
基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头。
可选的,所述基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,包括:
基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头;或者基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头。
可选的,所述所述第一电子设备控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像采集的图像数据,包括:
所述第一电子设备向目标摄像头所在的电子设备发送拍摄请求,并接收所述目标摄像头所在的电子设备发送的图像数据;或者,
所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,获取该虚拟相机采集的图像数据。
可选的,第一预设规则包括:
摄像头的性能参数、摄像头与待拍摄内容的距离、摄像头与待拍摄内容的角度中的至少一种。
可选的,在所述所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用之前,还包括:所述第一电子设备通过其分布式器件虚拟化模块MSDP将所述第二电子设备的摄像头虚拟为所述虚拟相机;所述所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,,获取该虚拟相机采集的图像数据,具体包括:
所述第一电子设备调用CaaS功能,所述CaaS功能通过CaaS服务提供给所述第一电子设备调用;
所述CaaS服务向MSDP查询是否存在虚拟相机,在存在虚拟相机时,通过相机接口获取虚拟相机采集的图像数据。
第二方面,本发明实施例提供一种电子设备,包括:
一个或多个处理器;
存储器;
多个应用程序;
以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,所述第一电子设备执行本发明任一实施例所述的方法。
第三方面,本发明实施例提供一种电子设备,包括:
第一获得模块,用于获得拍摄指令;
第一确定模块,用于根据所述拍摄指令,确定拍摄指令所对应的待拍摄内容的位置信息,基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头;和/或,
根据所述拍摄指令,基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头作为目标摄像头,;所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
控制模块,用于控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像采集的图像数据。
第四方面,本发明实施例提供一种控制方法,包括:
第一电子设备获得拍摄指令;
响应所述拍摄指令,所述第一电子设备控制其可以控制的至少两个摄像头执行所述拍摄指令,获得所述目标摄像采集的图像数据,从而获得至少两张照片,所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
根据采集各个照片的摄像头与被拍摄物体的距离、人脸的角度、美学构图参数中的至少一种参数计算出各张照片的评分值;
将满足预设评分值的照片作为所述拍摄指令的拍摄结果;和/或,在所述拍摄指令为视频采集指令时,通过满足预设评分值的照片所对应的摄像头进行视频采集。
可选的,在照片中包含人物的情况下,可以通过以下公式计算照片的分数值:
E=αx+βy+γz
其中,E表示照片的评分值;
距离参数x是以物理距离中最佳拍摄距离50cm为最大值,向远处或者向近处梯度递减,α表示距离参数x的权重值,其取值范围为[0,1];
角度参数y以正对摄像头为最大值,向三轴角度的偏转梯度递减,β表示角度参数y的权重值,其取值范围为[0,1];
美学构图参数z以通过美学构图评分模型的评分最大为最大值,梯度递减,γ表示美学构图参数z的权重值,其取值范围为[0,1]。
可选的,在照片中不包含用户的情况下,可以通过以下公式计算照片的分数值:
E=αx+γz
其中,E表示照片的评分值;
距离参数x是以物理距离中最佳拍摄距离50cm为最大值,向远处或者向近处梯度递减,α表示距离参数x的权重值,其取值范围为[0,1];
美学构图参数z以通过美学构图评分模型的评分最大为最大值,梯度递减,γ表示美学构图参数z的权重值,其取值范围为[0,1]。
第五方面,本发明实施例通过一种电子设备,包括:一个或多个处理器;
存储器;
多个应用程序;
以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,所述第一电子设备执行本发明任一实施例所述的方法。
第六方面,本发明实施例提供一种电子设备,包括:
获得模块,用于获得拍摄指令;
响应模块,用于响应所述拍摄指令,所述第一电子设备控制其可以控制的至少两个摄像头执行所述拍摄指令,获得所述目标摄像采集的图像数据,从而获得至少两张照片,所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
评分模块,用于根据采集各个照片的摄像头与被拍摄物体的距离、人脸的角度、美学构图参数中的至少一种参数计算出各张照片的评分值;
确定模块,用于将满足预设评分值的照片作为所述拍摄指令的拍摄结果;和/或,在所述拍摄指令为视频采集指令时,通过满足预设评分值的照片所对应的摄像头进行视频采集。
第七方面,本发明实施例提供一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行本发明任一实施例所述的方法。
第八方面,本发明实施例提供一种计算机程序产品,所述计算机程序产品包括软件代码,所述软件代码用于执行本发明任一实施例所述的方法。
第九方面,本发明实施例提供一种包含指令的芯片,当所述芯片在电子设备上运行时,使得所述电子设备执行如本发明任一实施例所述的方法。
由于在本发明实施例中,第一电子设备在获得拍摄指令之后,可以基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,或者,基于待拍摄内容确定出拍摄模式,从其可控制的至少两个摄像头中确定出包含该拍摄模式的摄像头作为目标摄像头,从而控制目标摄像头执行该拍摄指令,其中目标摄像头可以至少两个电子设备上的摄像头,从而可以基于待拍摄内容的位置信息、拍摄模式选择不同的摄像头,而不局限于第一电子设备自身的摄像头,以达到提高所采集的图像数据的质量的技术效果。
附图说明
图1为本发明实施例中电子设备的结构图;
图2为本发明实施例提供的电子设备的软件框架图;
图3为本发明实施例提供的智慧家居系统的框架图;
图4为本发明一实施例提供的控制方法的流程图;
图5为本发明实施例中将其他电子设备的摄像头虚拟为电子设备100的摄像头的软件框架图;
图6为本发明实施例中对照片进行处理的界面交互图;
图7为本发明一实施例提供的控制方法的流程图;
图8为本发明另一实施例提供的控制方法的流程图;
图9为本发明另一实施例提供的控制方法的流程图;
图10为本发明实施例中不同摄像头采集的照片的示意图;
图11为本发明另一个实施例提供的控制方法的流程图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
下面介绍本申请实施例涉及的应用场景。电子设备中配置了摄像头、麦克风、全球定位系统(global positioning system,GPS)芯片、各类传感器(例如磁场传感器、重力传感器、陀螺仪传感器等)等器件,用于感知外部的环境、用户的动作等。根据感知到的外部的环境和用户的动作,电子设备向用户提供个性化的、情景化的业务体验。其中,摄像头能够获取丰富、准确的信息使得电子设备感知外部的环境、用户的动作。本申请实施例提供一种电子设备,电子设备可以实现为以下任意一种包含摄像头的设备:手机、平板电脑(pad)、便携式游戏机、掌上电脑(personal digital assistant,PDA)、笔记本电脑、超级移动个人计算机(ultra mobile personal computer,UMPC)、手持计算机、上网本、车载媒体播放设备、可穿戴电子设备、虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备等数显产品。
首先,介绍本申请以下实施例中提供的示例性的电子设备100。
图1示出了电子设备100的结构示意图。
下面以电子设备100为例对实施例进行具体说明。应该理解的是,图1所示电子设备100仅是一个范例,并且电子设备100可以具有比图1中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。电子设备100的详细结构介绍,请参考在先专利申请:CN201910430270.9。
图2是本申请实施例的电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。应用程序层可以包括一系列应用程序包。软件功能的详细介绍,请参考在先专利申请:CN201910430270.9。
本发明实施例提供一种控制方法,可以用于图像的拍摄,也可以用于视频的拍摄,该方法应用于电子设备100,该电子设备100用于调用与之关联的其他电子设备的能力,尤其是摄像头功能,从而利用其他电子设备的摄像头功能实现图像采集。其他电子设备例如为:与电子设备100绑定同一账号的电子设备、与电子设备100在同一局域网内的电子设备、与电子设备100属于同一用户的电子设备、与电子设备100的属于同一家庭的电子设备(例如:电子设备100的用户为家庭中的母亲,则其他电子设备包含母亲、父亲、小孩的电子设备)、其他电子设备100能够控制的电子设备,本发明实施例不再详细列举,并且不做限制。例如:各个电子设备如果愿意共享功能(例如:摄像头),则可以在云端注册,云端记录具备共享功能(例如:共享摄像头、麦克风、显示屏等等)的电子设备, 则电子设备100在需要拍摄时,可以基于将当前位置、拍摄需求发送给云端,由云端选择合适的拍摄用电子设备提供给电子设备100。
随着智能家居、智慧生活的升级,通过万物互联的网络,未来将是一个以某一个终端如手机或者云为中心的全场景多种不同终端(如平板,监控摄像头TV,车机,眼镜)智慧协同的一体化生活圈。电子设备100可以为智慧家居场景中包含的电子设备,请参考图3,该智慧家居场景包括以下设备:台式机30(包含摄像头,摄像头ID为30a)、智能电视31(包含摄像头,摄像头ID为31a)、PAD32(包含摄像头,摄像头ID为32a)、智能手表33(包含摄像头,摄像头ID为33a)、手机34(包含前置摄像头和后置摄像头,前置摄像头ID为34a,后置摄像头ID为34b),汽车35(包含五个摄像头,摄像头ID分别为35a、35b、35c、35d、35e)。
该电子设备100例如为手机34,该智慧家居场景还可以包含其他设备,该电子设备100也可以为智慧家居场景中的其他设备,通常为智慧家居场景中计算能力较为强大的电子设备。电子设备100可以认定为主控设备,能力(例如:摄像头、麦克风、显示器等等)被注册到主控设备(电子设备100)的电子设备可以被认定为被控设备。本发明实施例提供的控制方法,也可以应用于云端,电子设备在需要拍摄照片、或者视频时,例如:用户产生拍摄指令之后,发送给云端,由云端基于各个电子设备所具备的能力,选择最合适的电子设备进行拍摄。
下面将以该方法为电子设备100为例,介绍一种控制方法,请参考图4,该方法包括以下步骤:
S400:电子设备100接收到拍摄指令。
该拍摄指令例如为拍摄照片的指令,该拍摄指令可以为语音指令、触发拍照应用的拍照按键的指令、预设手势等等。例如:电子设备100的用户可以向电子设备100发出语音指令,该语音指令例如为“给我拍一张照”、“拍一张车里的照片”等等,其中用户可以在电子设备100处于锁屏状态时发出该语音指令,也可以在电子设备100解锁之后发出语音指令,电子设备100都可以响应该语音指令;又或者,用户可以打开电子设备100的相机应用,通过该相机应用发出拍摄指令;又或者,用户可以打开即时通信应用,通过点击即时通信应用的拍照按钮触发拍摄指令,或者通过点击即时通信应用的视频通信按钮触发该拍摄指令等等,当然,还可以通过其他方式产生拍摄指令,本发明实施例不再详细列举,并且不做限制。该电子设备100例如为图3所示的手机34。在本实施例中,将其用户通过相机应用的拍摄按钮进行拍照为例进行介绍。在一种可选的实施例中,该拍摄指令由电子设备100的用户发出,例如:用户A拿起手机34,然后说“拍一张车内的照片”,则该电子设备100直接采集到该拍摄指令;又例如:用户A拿起手机34,打开车辆监控应用,并触发远程拍照功能,则电子设备100的车辆监控应用基于用户操作产生拍摄指令等等;另一种可选的实施例中,电子设备100接收其他电子设备发送的拍摄指令,例如:用户向车载导航仪发送语音指令“给我拍摄一张 厨房的照片”,车载导航仪接收到该拍摄指令之后,将其发送给电子设备100,从而基于电子设备100强大的运算功能,运行该拍摄指令。
又或者,车辆导航仪检测到该拍摄指令之后,先判断自己是否具备处理该拍摄指令的能力,如果具备处理该拍摄指令的能力,则自身处理该拍摄指令,否则将拍摄指令发送给电子设备100。
示例来说,车辆导航仪在检测到拍摄指令可能判断自身能否响应拍摄指令,获得该拍摄指令对应的照片或视频,如果自身能够响应,则可以认为自身具备处理该拍摄指令的能力,否则,认为自身不具备处理该拍摄指令能力。
S410:电子设备100响应该拍摄指令,通过电子设备100的至少两个摄像头分别采集获得照片,以获得至少两张照片,电子设备100的至少两个摄像头包含物理摄像头和虚拟摄像头中的至少一种,物理摄像头可以为一个或多个,虚拟摄像头也可以为一个或多个。
在一种可选的实施方式中,电子设备100的至少两个摄像头既包含物理摄像头,又包含虚拟摄像头,从而可以通过自身的摄像头与其他电子设备的摄像头同时响应拍摄指令。
在具体实施过程中,电子设备100的物理摄像头为电子设备100自带的摄像头,其例如为电子设备100的前置摄像头、后置摄像头等等;电子设备100的虚拟摄像头为电子设备100将别的电子设备的摄像头虚拟为自身的摄像头,电子设备100可以在多种时机触发注册虚拟摄像头的操作,下面列举其中的两种进行介绍,当然,在具体实施过程中,不限于以下两种情况。以图3所示的智慧家居系统为例,则手机34具备两个物理摄像头(前置摄像头和后置摄像头,前置摄像头的ID为:34a,后置摄像头的ID为:34b)以及9个虚拟摄像头(摄像头ID分别为:30a、31a、32a、33a、35a、35b、35c、35d、35e)。
电子设备100在调用虚拟相机的功能之前,先需要将其他电子设备的摄像头采集的数据(图像或视频)注册为虚拟相机,其可以在多种时机将其他电子设备的相机注册为虚拟相机,下面列举其中的两种进行介绍,当然,在具体实施过程中,不限于以下两种时机。
第一种,电子设备100(作为主控设备)在上电(或者连接到路由器,或者打开蓝牙功能)之后,通过近距离通信方式(例如:蓝牙、WIFI等等)发送广播信息,查找通信范围内的其他电子设备,向电子设备100发送设备信息,该设备信息例如包括:设备能力(摄像头、麦克风、显示器等等)、设备位置、设备ID(Identity document:身份标识码)等等,电子设备100如果希望将对应电子设备的摄像头注册为虚拟摄像头,则向对应电子设备发送请求信息,对应电子设备如果同意将摄像头作为电子设备100的虚拟摄像头,则该电子设备产生一确认信息(例如:点击预设按钮、产生预设语音指令、产生语音手势等等),电子设备100接收到该确认信息之后,执行将对应电子设备的摄像头虚拟为本机摄像头的虚拟化操作。同理,电子设备100也可以将其他电子设备的其他能力(比如:麦克风、显示器等等)虚拟为本机的设备。
而在其他电子设备上电(或者连接到路由器)之后,也可以产生广播信息,查找主控设备(电子设备100),在查找到电子设备100之后,电子设备100将其摄像头虚拟为电子设备的摄像头。同样,电子设备100还可以将其他电子设备的其他能力虚拟为电子设备100的能力。
第二种,电子设备100在通过某一账号注册到服务器之后,可以查找与电子设备100注册同一账号的其他电子设备,并将其他电子设备的摄像头注册为电子设备100的摄像头,还可以将其他电子设备的其他能力注册为电子设备100的能力;其他电子设备(被控设备)在通过某一账号注册到服务器之后,也可以查找与其注册同一账号的主控设备,并将其功能注册到该主控设备。
在具体实施过程中,通过将摄像头注册为系统的虚拟相机,可以实现电子设备100对其他电子设备的摄像头采集的数据的调用,对于其他电子设备的其他能力,电子设备100可以采用类似的方式调用。
请参考图5,实现该方案的软件架构包括:
电子设备100可以调用CaaS功能(例如:CaaS Kit),CaaS(Communications-as-a-Service:通讯即服务)指的是将基于互联网的通信能力如消息、语音、视频、会议、通信协同等封装成API(Application Programming Interface,应用软件编程接口)或者SDK(Software Development Kit,软件开发工具包)对外开放,提供给第三方调用。该CaaS功能包含很多内容,例如:通话的信令、媒体的传输、CaaS服务等等,该CaaS功能通过CaaS服务提供给电子设备100调用,电子设备100在满足触发条件时,先告知MSDP(MSDP Mobile Sensing development platform:移动感知平台)将其他电子设备的摄像头注册为硬件抽象层的虚拟相机。
应用程序框架层,包含:相机框架,用于对外界提供相机功能;相机接口,用于获取相机采集的数据;MSDP用于注册虚拟相机,即:将其他电子设备的摄像头虚拟为硬件抽象层的虚拟相机。
硬件抽象层,包含相机(物理相机和虚拟相机),通过相机接口既可以访问电子设备100的物理相机(比如:前置摄像头的数据、后置摄像头的数据等),也可以访问虚拟相机。硬件抽象层位于图2所示的软件系统框架中的系统库和内核层之间。
电子设备100在需要调用CaaS的摄像头功能时,先向系统注册CaaS服务,CaaS服务向MSDP查询是否存在虚拟相机,在存在虚拟相机时,通过相机接口获取虚拟相机视频数据。虚拟相机与物理相机会存在不同的标签,从而CaaS服务可以基于虚拟相机的标签准确获取虚拟相机的视频数据,当然可以采用类似的方式为电子设备提供其他能力,本发明实施例不做限制。
其中,在将其他电子设备的摄像头注册为电子设备100的虚拟摄像头时,可以记录虚拟摄像头的一些信息,比如:位置信息、所具备的功能(例如:具备的拍摄模式、是否变焦、分辨率等等)、位于哪个电子设备等等。另外,每个摄像头都有一个摄像头ID,用来供电子设备100识别该摄像头的身份。
在具体实施过程中,电子设备100可以控制其全部摄像头进行拍摄,也可以控制其部分摄像头进行拍摄。电子设备可以通过多种方式从其所包含的所有摄像头中选择出目标摄像头,下面列举其中的两种进行介绍,当然,在具体实施过程中,不限于以下两种情况。
第一种,电子设备100在获得拍摄指令之后,可以先确定拍摄指令所对应的被拍摄内容的位置信息,然后基于该位置信息确定出拍摄用的摄像头,例如,用户担心车窗未关或者车内遭遇小偷,因此希望查看以下车内情况,则产生如下拍摄指令“看一下车内的情况”,电子设备100在获得该拍摄指令之后,先通过语义分析,确定出被拍摄内容位于车内,在这种情况下,电子设备100调用各个相机(物理相机和虚拟相机)中位于车内的摄像头,也即:摄像头35a、35b、35c、35d、35e,然后控制这些摄像头进行图像采集。
又例如:拍摄指令为“嗨,小E,给我拍摄一张照片”,且该拍摄指令由电子设备100接收,则电子设备100可以确定预设距离范围内的电子设备的摄像头,作为目前采集所用的摄像头。例如:电子设备100可以先通过定位装置(或者通过采集环境图像分析)确定出自身所在区域(例如:位于客厅),然后确定出该区域内的电子设备的摄像头作为采集用的摄像头。例如:目前位于客厅的电子设备包括:PAD32、智能电视31(还包括电子设备100自身),则电子设备100可以确定摄像头32a、31a、34a和34b为采集用的摄像头。
其中,电子设备在接收到用户产生的语音信息之后,可以将该语音信息信息与预设的用户声纹信息进行匹配,以核实发送语音信息的用户身份。电子设备100的拾音器采集到用户的语音信息之后,通过主版传输到CPU或者NPU(Neural-network Processing Unit:嵌入式神经网络处理器)中,进行语音识别,转换为电子设备100可识别的语音指令。
又或者,电子设备100可以先通过定位装置获取其他电子设备的位置信息,并获得其他电子设备的位置信息,然后控制与其距离位于预设距离范围内的电子设备的摄像头作为采集用摄像头,该预设距离例如为:10米、20米等等,本发明实施例不做限制。
第二种,电子设备100可以确定所述拍摄指令中的待拍摄内容;基于待拍摄内容确定出拍摄模式;基于拍摄模式确定包含该拍摄模式的摄像头作为采集用摄像头。
示例来说,如果该拍摄指令为语音指令,则电子设备100可以先识别语音指令,然后基于识别内容进行语义分析,从而确定出待拍摄内容,为人、风景、静物等等。例如:如果拍摄指令为“给我拍一张照片”,则待拍摄内容包含人,如果拍摄指令为“拍一下卧室的情况”,则待拍摄内容包含静物,如果拍摄指令为“拍一张眼前的景色”,则待拍摄内容为风景等等。
如果待拍摄内容中包含“人”,则摄像头的拍摄模式例如为:人像模式、大光圈模式;如果待拍摄内容为“风景”,则摄像头的拍摄模式例如为:风景模式。如果确定出的拍摄模式为人像模式,则电子设备100可以先确定出具备人像模式的 摄像头,然后控制这些摄像头进行图像采集;如果确定出的拍摄模式为风景模式,则电子设备100可以先确定出具备风景模式的摄像头,然后控制这些摄像头进行图像采集等等等等。可选的,在电子设备100控制摄像头进行拍照时,该拍摄指令中可以携带拍摄模式,例如:如果拍摄指令对应的待拍摄内容包含“人”,则拍摄模式为人像模式,接收到该拍摄指令的电子设备采用人像模式采集获得照片,并将其发送至电子设备100;如果拍摄指令对应的待拍摄内容为风景,则拍摄模式可以为风景模式,接收到该拍摄指令的电子设备采用风景模式采集获得照片,并将其发送至电子设备100。
另外,如果接收到拍摄指令的电子设备没有对应的拍摄模式,则采用与该拍摄模式最接近的拍摄模式进行拍照,或者采用电子设备100的用户最喜欢的拍摄模式进行拍照(例如:默认拍照模式、历史使用最多次数的拍照模式等等)。
电子设备100在控制摄像头进行拍摄时,还可以告知摄像头拍摄的参数,例如:照片尺寸、曝光度、拍摄模式等等。
S420:电子设备100获得前述至少两张照片中每张照片的分值。
电子设备100可以将这至少两张照片发送至服务器,由服务器对至少两张照片打分之后,将分值返回给电子设备100;电子设备100也可以本地对这两张照片进行打分,具体如何获得每张照片的分值,将在后续介绍。
假设步骤S320中采集用的摄像头为35a、35b、35c、35d、35e,所采集的照片的分值如表1所示:
摄像头ID 35a 35b 35c 35d 35e
分值 7.1 7.5 8.3 5.7 6.9
表1
当然,基于不同的情况,各摄像头所采集的照片的分值也不同,在此不再赘述。
S430:电子设备100基于至少两张照片的分值,确定出最终提供给用户的照片。
在具体实施过程中,电子设备100可以直接把分值最高的(或者分值排序位于前预设位、或者分值大于预设值)的照片提供给用户,例如直接把表1中分值为8.3的照片提供给用户;电子设备100也可以确定出分值最高(或者分值排序位于前预设位、或者分值大于预设值)的照片的摄像头作为采集摄像头,重新采集获得照片作为提供给用户的照片。
在将照片提供给用户时,可以直接将各摄像头采集获得的照片提供给用户,也可以先对照片进行处理,例如:裁剪、美图(例如:磨皮、美颜、瘦腿、去红眼等)、拼接、特效处理(例如:人像模式、夜景模式、大光圈模式)等等。其中,电子设备100可以将照片发送至服务器进行处理、可以本地处理、也可以发送给其他电子设备进行处理,例如:电子设备100希望对照片通过美图软件进行美图处理,但是电子设备100上并未安装该美图软件,且电子设备100查询到 PAD32上有美图软件,则电子设备100可以通过PAD32对照片进行处理之后提供给用户。
又或者,在将照片提供给用户之后,在用户希望对照片进行处理时,电子设备100可以提示用户可以借用其他电子设备的美图软件进行处理,例如如图6所示,电子设备100的用户点击编辑按钮60(当然也可以通过其他方式触发编辑操作),电子设备100响应该操作之后,显示选择菜单61,选择菜单61上显示多种编辑方式,用于让用户选择编辑方式,编辑方式可以为电子设备100所具备的编辑方式,例如:图6所示的“本地编辑”61a;也可以为其他电子设备所具备的编辑方式,例如:图6所示的“美图软件1位于PAD”61b,其表示可以通过安装于PAD的美图软件1对照片进行处理,图6所示的“美图软件2位于台式机”61c,其表示可以通过安装于台式机的美图软件2对照片进行处理。
其中,电子设备100在检测到用户选择其他电子设备的编辑方式进行照片处理时,可以控制对应电子设备开启对应应用,例如:用户选择“美图软件1位于PAD”61b,则电子设备100控制PAD控制美图软件1处于开启状态,并在电子设备1上同步显示该美图软件1的处理界面,通过将电子设备100接收对照片的处理指令,并将其处理指令发送给PAD32,则可以在电子设备100上就可以实现通过PAD的图像处理应用对电子设备100的照片进行处理;又或者,电子设备100检测到用户选择其他电子设备的编辑方式进行照片处理时,电子设备100将照片发送给对应电子设备,并控制对应电子设备开启该应用,并在该应用中打开照片,例如:电子设备100控制PAD32的美图软件1处于开启状态,并在美图软件1中打开照片,然后由用户在PAD32上完成对照片的处理,然后发送给电子设备100。
在具体实施过程中,电子设备100在基于S410通过电子设备100的至少两个摄像头分别采集获得照片之后,也可以将照片全部显示于电子设备100的显示界面,由用户选择最喜欢的照片。
在具体实施过程中,电子设备100在基于S310获得至少两张照片之后,还可以对至少两张照片拼接之后提供用户,从而用户在拍照时,可以同时实现多角度拍照。
在一种实施例中,智慧家居系统中的每个电子设备都可以作为主控设备,从而每个电子设备在接收到拍摄指令之后,都可以响应该拍摄指令,执行前述步骤。在另一种实施例中,智慧家居系统中部分电子设备为主控设备,部分电子设备为被控设备,在主控设备接收到拍摄指令之后,直接响应拍摄指令,执行前述步骤;在被控设备接收到拍摄指令之后,将拍摄指令发送至主控设备,从而由主控设备执行前述步骤,以图3所示的智慧家居系统为例,手机34为主控设备,智能手表为被控设备,则手机34接收到拍摄指令之后,直接响应该拍摄指令,而智能手表接收到拍摄指令之后,将拍摄指令发送至手机34,由手机接收该拍摄指令。主控设备采集获得照片之后,可以将其存储于主控设备本地,还可以将 其发送给被控设备,在主控设备将照片发送给被控设备前,还可以调整照片的显示尺寸,以使其适应被控设备的显示单元。
在一种实施例中,由电子设备100执行前述步骤;在另一种实施例中,电子设备100接收到拍摄指令之后,由电子设备100将拍摄指令服务器,由服务器执行前述S400-S430中电子设备100执行的步骤。
基于上述方案,能够使拍照不受限于当前电子设备,解决了仅仅通过当前电子设备拍照所导致的角度、距离选择不好从而使拍照效果差的技术问题;能够基于各个摄像头所拍摄的照片的分值来选择最合适的摄像头进行图像采集,达到了能够提高采集的照片的质量的技术效果。另外,该方案中,直接基于电子设备(或者云服务器)进行选择,而不需要用户手动选择,故而提高了选择的效率;另外,该方案中,在某一瘦终端(处理能力弱的电子设备)接收到拍摄指令时,可以将其发送处理能力强的电子设备进行处理,利用处理能力强的电子设备的强大的图像算法能力和拍照模式优势,由此能够辅助瘦终端提高拍摄效果,从而达到了瘦终端也可以拍出高质量照片的技术效果。
另外,在上述方案中,电子设备100还可以利用其它电子设备所安装进行应用在当前电子设备上对数据进行处理(例如:对照片进行美化),从而达到了可以协同各个电子设备所具备的功能,在电子设备没有安装某应用时,也可以使用该应用的技术效果。
本发明另一实施例提供一种控制方法,请参考图7,包括以下步骤:
S700:电子设备100接收到拍摄指令。
该拍摄指令例如为视频采集指令,该拍摄指令的产生方式与S400中类似,在此不再赘述。该拍摄指令可以用于采集视频,也可以用于与另一电子设备进行视频通信。例如:电子设备100的用户产生语音指令“给我拍一个活动视频”,在这种情况下电子设备100通过拍摄指令拍摄视频;又例如电子设备100的用户打开即时通信应用,开启视频通话功能,电子设备100检测到该操作之后,则通过拍摄指令启动摄像头与另一电子设备进行视频通话。
S710:电子设备100响应该拍摄指令,通过电子设备100的至少两个摄像头分别采集获得照片,以获得至少两张照片,电子设备100的至少两个摄像头包含物理摄像头和虚拟摄像头中的至少一种,物理摄像头可以为一个或多个,虚拟摄像头也可以为一个或多个。该步骤与S410类似,在此不再赘述。
S720:电子设备100基于前述至少两张照片确定出第一摄像头。
示例来说,电子设备100可以获得前述至少两张照片的分值,然后通过至少两张照片的分值确定出第一摄像头,其具体确定方式S420中已做介绍,故而在此不再赘述。电子设备100也可以将各个摄像头采集获得的照片显示于电子设备100的显示单元,并提示用户选择她认为最好的照片,然后将用户选择的照片所对应的摄像头作为第一摄像头。其中,第一摄像头可以一个摄像头,也可以多个摄像头,例如:可以选择一个拍摄效果最佳(分值最高)的摄像头作为第一摄像头,又或者,可以选择几个角度不同、拍摄效果较佳(分值大于预设值)的摄像 头作为第一摄像头,从而拍摄获得多个视频,以给用户提供不同角度的视频,也可以给用户更多的选择机会。
可选的,上述S710中电子设备100也可以控制各个摄像头采集获得视频,S720中可以通过视频的分值(通过视频每一帧的分值取平均值)或者用户所选择的视频来确定出第一摄像头。
S730:电子设备100控制第一摄像头采集获得视频。
同样,电子设备100控制第一摄像头采集获得视频之后,可以直接将其作为拍摄指令的拍摄结果,也可以对其进行处理。另外,电子设备100还可以借助其他电子设备上所包含的应用对视频进行处理,在此不再赘述。
在电子设备100控制第一摄像头进行视频采集时,可以控制其他摄像头处于开启状态或者关闭状态,本发明实施例不做限制。
在一种实施例中,在电子设备100确定出第一摄像头之后,在本次拍摄过程中,一直采用第一摄像头进行视频采集;在另一种可选的实施例中,在电子设备确定出第一摄像头之后,如果被拍摄内容、第一摄像头中的至少一个位置发生变化,如果被拍摄内容相对于第一摄像头的位置发生变化,则可以重新确定出用于视频采集的摄像头,可以采用多种方式确定,下面列举其中的两种进行介绍,当然,在具体实施过程中,不限于以下两种情况。
第一种,控制其他摄像头一直处于开启状态,每隔预设时间间隔(例如:10秒、20秒、1分钟等等),通过这些摄像头采集获得照片(或者视频),并将这些电子设备采集获得的照片(或者视频)与第一摄像头采集获得的照片(或者视频)分别进行评分,如果第一摄像头采集获得的照片分值依然最高(或者依然符合S420、S720中的条件),则依然将第一摄像头作为视频采集的摄像头;如果有其他摄像头采集获得的照片的分值高于第一摄像头采集获得的照片的分值(或者比第一摄像头更符合S420、S720的条件),则将对应的电子设备设置为新的用于视频采集的摄像头。
还是以电子设备100控制摄像头35a、35b、35c、35d、35e进行图像采集为例进行介绍,电子设备100在初始阶段所选择的摄像头为摄像头35c,假设所采集的图像的分值如表3所示:
表3(各摄像头在不同时刻所采集照片的分值)
由于1分钟之后依然是摄像头35c所采集的照片的分值最高,因此依然将摄像头35c作为用于视频采集的摄像头。而2分钟之后,变成摄像头35b所采集的照片的分值最高,则将摄像头35b作为视频采集的摄像头,后续将通过摄像头35b采集获得视频数据。
如果上述方案用于视频采集,则电子设备100最终采集获得的视频数据为通过至少两个摄像头采集获得的视频,如果该方案用于视频通话,则在不同的时刻,对端电子设备接收到的视频为通过不同的摄像头采集到的视频。
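作为对上述第一种方式(每隔预设时间间隔重新评分并在必要时切换摄像头)的示意,下面给出一段Python代码草图(各轮分值用预置数据模拟"采集照片并评分",时间间隔与分值均为说明性假设):

```python
import time

def periodic_reselect(current, rounds_scores, interval_s=0):
    """每隔interval_s秒比较各摄像头新采集照片的分值;
    rounds_scores为每一轮"摄像头ID->分值"的字典列表。"""
    for scores in rounds_scores:
        time.sleep(interval_s)                 # 预设时间间隔,例如10秒、1分钟
        best = max(scores, key=scores.get)
        if scores[best] > scores.get(current, float("-inf")):
            current = best                     # 其他摄像头分值更高,切换
    return current

# 模拟表3的情形:1分钟后摄像头35c仍最高,2分钟后摄像头35b最高,故切换为35b
print(periodic_reselect("35c", [{"35b": 7.0, "35c": 8.1},
                                {"35b": 8.6, "35c": 7.9}]))   # 35b
```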
第二种,控制除第一摄像头之外的其他摄像头处于停止采集状态,每隔预设时间间隔(比如:20秒、1分钟等等)检测第一摄像头的运动量、被拍摄内容的运动量,在第一摄像头的运动量大于预设运动量(例如:5米、7米等等)、或者被拍摄内容的运动量大于预设运动量、或者被拍摄内容相对于第一摄像头的相对运动量大于预设运动量的情况下,控制其他摄像头处于采集状态,采集获得照片,并比较各摄像头采集获得的照片的分值,从而确定出是否需要更新摄像头,其确定方式前面已做介绍,故而在此不再赘述。例如:用户刚开始位于客厅,第一摄像头为智能电视31的摄像头31a,在用户由客厅走进书房时,采集视频的摄像头由智能电视31的摄像头切换为书房的台式机30的摄像头30a。
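作为对上述第二种方式(运动量超过预设值时才唤醒其他摄像头重新评分)的示意,下面给出一段Python代码草图(运动量阈值、摄像头ID与分值均为说明性假设,score_fn代表"控制摄像头采集照片并评分"的假设函数):

```python
def need_reselect(cam_move_m, subject_move_m, relative_move_m, preset_m=5.0):
    """第一摄像头运动量、被拍摄内容运动量或二者相对运动量
    任一大于预设运动量(例如5米)时,才需要重新确定摄像头。"""
    return (cam_move_m > preset_m or subject_move_m > preset_m
            or relative_move_m > preset_m)

def maybe_switch(current, others, score_fn, cam_move_m, subject_move_m, relative_move_m):
    """运动量未超过阈值时,其他摄像头保持停止采集状态;超过时重新比较分值。"""
    if not need_reselect(cam_move_m, subject_move_m, relative_move_m):
        return current
    scores = {cam: score_fn(cam) for cam in [current] + list(others)}
    return max(scores, key=scores.get)

# 示例:用户由客厅走进书房(运动量约6米),由摄像头31a切换为摄像头30a
fake_scores = {"31a": 6.2, "30a": 8.0}
print(maybe_switch("31a", ["30a"], fake_scores.get, 0.0, 6.0, 6.0))   # 30a
```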
另外,在检测到用户由第一区域(例如:客厅)移动到第二区域(例如:书房)时,除了切换拍摄用的摄像头之外,还可以切换其他设备,例如:将显示单元从第一区域的显示单元(例如:智能电视31的显示屏)切换为第二区域的显示单元(例如:书房的台式机30的显示屏),从而通过切换后的显示单元延续播放切换前的显示单元所显示的内容;将麦克风也由第一区域的麦克风切换为第二区域的麦克风,从而通过第二区域的麦克风继续采集用户的声音。还可以进行其他部件的切换,本发明实施例不再详细列举,并且不做限制。
通过多个摄像头采集获得的视频被发送给电子设备100,按照时间戳进行合成,然后发送给对端电子设备,或者保存到电子设备100本地,其中还可以通过电子设备100对至少两个摄像头采集的视频进行优化,以实现无缝切换。
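作为对"按照时间戳对多个摄像头采集的视频进行合成"的示意,下面给出一段Python代码草图(片段的数据结构为说明性假设,未包含实际的编解码与无缝切换优化):

```python
def merge_segments(segments):
    """segments: [{"cam": 摄像头ID, "start": 开始时间戳, "frames": [帧, ...]}]。
    按开始时间戳排序后依次拼接,得到一路连续的帧序列。"""
    merged = []
    for seg in sorted(segments, key=lambda s: s["start"]):
        merged.extend(seg["frames"])
    return merged

segments = [{"cam": "30a", "start": 120, "frames": ["f3", "f4"]},
            {"cam": "31a", "start": 0,   "frames": ["f1", "f2"]}]
print(merge_segments(segments))   # ['f1', 'f2', 'f3', 'f4']
```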
上述视频采集过程,可以应用于视频通话、也可以应用于视频拍摄、以及其他需要采集视频的场景,本发明实施例不做限制。在具体实施过程中,电子设备100也可以控制多个摄像头进行视频拍摄(该多个摄像头可以基于分值确定、也可以基于用户选择确定),从而可以同时获得被拍摄内容多个角度的视频。
同理,智慧家居系统中既可以所有电子设备都为主控设备,也可以部分为主控设备、部分为被控设备,前述步骤既可以在电子设备100执行,又可以在服务器执行。
现有技术中,如果需要跟拍,往往需要用户手持拍摄设备跟踪被拍摄物体移动,由此容易导致手持拍摄设备晃动,从而存在照片抖动、模糊的技术问题;而基于上述方案,则不需要通过手持拍摄设备跟拍,而是在被拍摄物体移动至不同位置时,切换至不同的拍摄设备,由此解决了跟拍必须由用户手持拍摄设备、导致拍摄质量较低的技术问题。
本发明又一实施例提供一种图像的拍摄方法,该方法可以应用于服务器,也可以应用于电子设备100,电子设备100为一智慧家居场景中包含的电子设备,该智慧家居场景例如为图3所示的智慧家居场景,请参考图8,该图像的拍摄方法包括以下步骤:
S800:电子设备100接收到拍摄指令;对于该拍摄指令为何种指令由于前面已做介绍,故而在此不再赘述。
S810:电子设备100确定与该电子设备100关联的其他电子设备。在具体实施过程中,电子设备100可以通过多种方式确定与其绑定的电子设备,下面列举其中的三种进行介绍,当然,在具体实施过程中,不限于以下三种情况:
第一种,电子设备100向其连接的路由器查询连接到该路由器的其他电子设备,这些电子设备即为电子设备100关联的电子设备。
第二种,电子设备100向服务器查询与其绑定同一账号的电子设备,这些电子设备即为与电子设备100关联的电子设备。
第三种,电子设备100通过短距离通信(比如:蓝牙、WIFI直连)发送广播信息,其他电子设备基于该广播信息产生应答信息,电子设备100将产生应答信息的电子设备作为与其关联的电子设备。
S820:电子设备100向与其关联的其他电子设备发送拍摄指令;这些电子设备在接收到拍摄指令之后,采集获得待拍摄内容的照片,然后将其发送至电子设备100。
电子设备100既可以远程向与其关联的其他电子设备发送拍照指令,也可以通过局域网的方式向与其存在关联的其他电子设备发送拍照指令。
其中,电子设备100可以向与其关联的所有电子设备发送拍摄指令,也可以向与其关联的电子设备中的部分电子设备发送拍摄指令,可以通过多种方式确定部分电子设备,下面列举其中的几种进行介绍,当然,在具体实施过程中,不限于以下几种情况。
第一种,电子设备100在获得拍摄指令之后,可以先确定拍摄指令所对应的被拍摄内容的位置信息,然后基于该位置信息确定出拍摄用的电子设备,例如:拍摄指令为"给我拍一张客厅的照片",则电子设备100在获得该拍摄指令之后,先通过语义分析,确定出被拍摄物体位于客厅,在这种情况下,电子设备100先从与其绑定的电子设备中确定出位于客厅的电子设备,然后向这些电子设备发送拍摄指令,从而采集获得客厅的照片。又例如:拍摄指令为"给我拍摄一张照片",则电子设备100可以先通过定位装置获取自身的位置信息,并获得其他电子设备的位置信息,然后向距离电子设备100预设距离范围内的电子设备发送拍摄指令,该预设距离例如为:10米、20米等等,本发明实施例不做限制。
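作为对上述第一种方式(基于被拍摄内容的位置信息确定拍摄用电子设备)的示意,下面给出一段Python代码草图(设备列表、坐标与关键词匹配方式均为说明性假设,实际的语义分析可采用更复杂的模型):

```python
import math
import re

DEVICES = [
    {"name": "智能电视31", "room": "客厅", "pos": (0.0, 0.0)},
    {"name": "台式机30",   "room": "书房", "pos": (6.0, 2.0)},
    {"name": "PAD32",      "room": "卧室", "pos": (9.0, 5.0)},
]

def devices_for_instruction(text, my_pos=(1.0, 1.0), preset_dist_m=10.0):
    """若能从指令中解析出房间位置,则选取位于该房间的设备;
    否则选取距离电子设备100预设距离(如10米)范围内的设备。"""
    match = re.search("(客厅|卧室|书房)", text)
    if match:
        return [d["name"] for d in DEVICES if d["room"] == match.group(1)]
    return [d["name"] for d in DEVICES
            if math.dist(my_pos, d["pos"]) <= preset_dist_m]

print(devices_for_instruction("给我拍一张客厅的照片"))   # ['智能电视31']
print(devices_for_instruction("给我拍摄一张照片"))       # 预设距离内的全部设备
```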
第二种,电子设备100可以确定所述拍摄指令中的待拍摄内容;基于待拍摄内容确定出拍摄模式;基于拍摄模式确定出部分电子设备。
示例来说,如果该拍摄指令为语音指令,则电子设备100可以先识别语音指令,然后基于识别内容进行语义分析,从而确定出待拍摄内容为人、风景、静物等等。例如:如果拍摄指令为"给我拍一张照片",则待拍摄内容包含人,如果拍摄指令为"拍一下卧室的情况",则待拍摄内容包含静物,如果拍摄指令为"拍一张眼前的景色",则待拍摄内容为风景等等。
如果待拍摄内容中包含“人”,则确定出的拍摄模式例如为:人像模式、大光圈模式;如果待拍摄内容为“风景”,则确定出的拍摄模式例如为:风景模式。如果确定出的拍摄模式为人像模式,则电子设备100可以向其他电子设备查询存在人像模式的电子设备,从而将这些电子设备确定为拍摄用的电子设备;如果确定出的拍摄模式为风景模式,则电子设备100可以向其他电子设备查询存在风景模式的电子设备,从而将这些电子设备作为拍摄用的电子设备等等。又或者,电子设备100预先存储有各电子设备的拍摄模式,则直接基于该预存的电子设备的拍摄模式进行查询,从而确定出拍摄用的电子设备。
可选的,在电子设备100向选定的电子设备发送拍摄指令时,该拍摄指令中可以携带拍摄模式,例如:如果拍摄指令对应的待拍摄内容包含“人”,则拍摄模式为人像模式,接收到该拍摄指令的电子设备采用人像模式采集获得照片,并将其发送至电子设备100;如果拍摄指令对应的待拍摄内容为风景,则拍摄模式可以为风景模式,接收到该拍摄指令的电子设备采用风景模式采集获得照片,并将其发送至电子设备100。接收到拍摄指令的电子设备在存在多个摄像头时,可以通过其部分摄像头进行图像拍摄,也可以通过其全部摄像头进行图像拍摄,本发明实施例不做限制。
另外,如果接收到拍摄指令的电子设备没有对应的拍摄模式,则采用与该拍摄模式最接近的拍摄模式进行拍照(例如:拍摄指令中规定拍摄模式为人像模式,但是接收到该拍摄指令的电子设备不具备人像模式,则其可以选择大光圈模式进行拍照),或者采用电子设备100的用户最喜欢的拍摄模式进行拍照(例如:默认拍照模式、历史使用最多次数的拍照模式等等)。
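作为对上述第二种方式(由待拍摄内容确定拍摄模式、再确定拍摄设备,并在设备不具备该模式时采用最接近的模式)的示意,下面给出一段Python代码草图(内容到模式的映射、"最接近模式"的映射与设备能力均为说明性假设):

```python
MODE_BY_CONTENT = {"人": "人像模式", "风景": "风景模式"}   # 由待拍摄内容确定拍摄模式
CLOSEST_MODE = {"人像模式": "大光圈模式"}                  # 不具备时采用最接近的模式

def pick_devices_and_modes(content, device_modes, default_mode="默认拍照模式"):
    """device_modes: {设备名: 该设备支持的拍摄模式列表}。
    返回(拍摄指令中携带的模式, {设备名: 该设备实际使用的模式})。"""
    wanted = MODE_BY_CONTENT.get(content, default_mode)
    plan = {}
    for name, modes in device_modes.items():
        if wanted in modes:
            plan[name] = wanted
        elif CLOSEST_MODE.get(wanted) in modes:
            plan[name] = CLOSEST_MODE[wanted]
        else:
            plan[name] = default_mode          # 退回默认拍照模式
    return wanted, plan

devices = {"手机34": ["人像模式", "夜景模式"], "PAD32": ["大光圈模式"]}
print(pick_devices_and_modes("人", devices))
# ('人像模式', {'手机34': '人像模式', 'PAD32': '大光圈模式'})
```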
S830:电子设备100接收到这些设备发送的照片之后,对这些照片进行评分,并从中选择出分值最高的照片,将该照片作为拍摄指令的拍摄结果。其中,如果电子设备100自身包含摄像头的话,电子设备100也通过自身的摄像头采集获得照片,然后将该照片与其他电子设备采集获得的照片一起打分,获得分值最高的照片。对于具体如何确定出各照片的分值,将在后续详细介绍。
其中,电子设备100在从多个设备发送的照片中确定出分值最高的照片之后,可以直接将其输出作为相机应用的拍摄结果,例如:将其存储于电子设备100的相册,将其显示于拍照应用的照片预览界面等等。电子设备100也可以对最终确定的照片进行处理之后再输出,例如:裁剪使其尺寸符合电子设备100的尺寸要求、对其进行美图处理(调整色相、饱和度、亮度、添加美图滤镜等等)、添加各种特效等等。
又或者,如果拍摄指令为其他电子设备(例如:车载导航仪)发送的拍摄指令,在电子设备100获得照片之后,可以将获得的照片发送给车载导航仪。电子设备100在将照片发送给车载导航仪之前,还可以获得车载导航仪的屏幕尺寸或者屏幕比例,从而基于该屏幕尺寸或者屏幕比例对照片进行适应性调整。
又或者,其他电子设备在采集获得照片之后,就将其进行美图处理、添加各种特效然后才将其发送至电子设备100,本发明实施例不做限制。
又或者,电子设备100在确定出分值最高的照片之后,可以将其发送至另一电子设备进行美化处理,由该电子设备将照片进行美化处理之后发送给电子设备100。例如:手机38虽然运算功能强大,但是其并未安装美图应用,则电子设备100在确定分值最高的照片之后,还可以确认各个其他电子设备是否安装有美图应用,如果某电子设备(例如:PAD32)具备美图应用,则电子设备100可以将照片发送给PAD32进行美图处理,然后再接收PAD32美图处理后的照片。其中,电子设备100可以在获得分值最高的照片之后,询问各绑定设备是否安装有美图应用,也可以预先存储各电子设备所具备的功能,在获得分值最高的照片之后,直接通过各电子设备所具备的功能,确定出具备美图应用的电子设备。
在一种可选的实施例中,在基于S830对照片进行打分,并从中确定出分值最高的照片之后,可以控制拍摄该照片的电子设备进行持续拍摄,并且控制其他电子设备处于关闭状态;又或者,也可以保持其他电子设备处于开启状态,本发明实施例不做限制。
本发明又一实施例提供一种控制方法,请参考图9,包括:
S900:电子设备100接收到拍摄指令;
该拍摄指令例如为拍摄视频的指令,该拍摄指令的产生方式与S800中类似,在此不再赘述。
S910:电子设备100确定与其关联的其他电子设备,该步骤与S810类似,在此不再赘述。
S920:电子设备100向与其关联的其他电子设备发送拍摄指令,该步骤与S820类似,在此不再赘述。
S930:电子设备100接收到这些设备发送的照片,然后对这些照片进行打分,并从中选择出分值最高的照片所对应的拍摄设备。该步骤与S830类似,在此不再赘述。
S940:电子设备100在确定出分值最高的照片所对应的电子设备之后,通过该电子设备采集获得当前用户的视频,并将该视频发送给视频通信的对端电子设备。
示例来说,电子设备100的用户点开即时通信应用与对端用户进行通信,同时点击视频通话按钮,电子设备100检测到用户点击该视频通话按钮的操作之后,分析出电子设备100的用户希望拍摄自己的视频提供给对端电子设备,电子设备100查找获得预设距离范围内的电子设备,通过这些电子设备以及自身的摄像头采集获得照片,然后确定出分值最高的照片对应的电子设备作为视频通信所采用的电子设备,所确定出的视频通信所采用的电子设备可以为电子设备100自身,也可以为其他电子设备。
例如:电子设备100确定出智能电视31所采集的照片的分值最高,在这种情况下,电子设备100确定智能电视31为视频通信所采用的电子设备,从而在与对端电子设备进行视频通信时,将智能电视31采集的视频发送给对端电子设备。
在电子设备100确定出分数最高的照片的采集设备之后,可以控制该采集设备处于开启状态,采集获得视频数据,并将该视频数据发送至电子设备100,然后通过电子设备100发送给对端电子设备,以实现视频通信。同时,可以控制其他电子设备处于开启状态或关闭状态,本发明实施例不做限制。
在一种可选的实施例中,在电子设备确定出分值最高的照片的采集设备之后,在本次视频通信过程中,一直采用该采集设备采集获得用于视频通信的视频;在另一种可选的实施例中,在电子设备确定出分值最高的照片的采集设备之后,如果用户、或者该采集设备发生位移,则可以重新确定其他电子设备作为视频通信的采集设备,可以通过多种方式确定出其他电子设备,下面列举其中的两种进行介绍,当然,在具体实施过程中,不限于以下两种情况:
第一种,控制其他电子设备一直处于开启状态,每隔预设时间间隔(例如:10秒、20秒、1分钟等等),通过这些电子设备采集获得照片,并将这些电子设备采集获得的照片与采集设备采集获得的照片进行评分,如果采集设备采集的照片分值依然最高,则依然将该设备作为采集设备;如果有其他设备采集获得的照片的分值高于采集设备采集获得的照片的分值,则将对应的电子设备设置为新的采集设备。
第二种,控制除采集设备之外的其他电子设备处于停止拍摄状态,每隔预设时间间隔(比如:20秒、1分钟等等)检测采集设备的运动量、被拍摄物体的运动量,在采集设备的运动量大于预设运动量(例如:5米、7米等等)、或者被拍摄物体的运动量大于预设运动量、或者被拍摄物体相对于采集设备的相对运动量大于预设运动量的情况下,控制其他电子设备处于拍摄状态,采集获得照片,并比较各电子设备采集获得的图片的分值,在采集设备采集获得的照片的分值依然最高时,依然采用该设备作为采集设备;在存在其他设备采集获得的照片的分值高于采集设备采集的照片的分值的情况下,将对应的电子设备作为新的采集设备。例如:在用户由客厅走到书房时,采集设备自动从客厅的智能电视31,切换为书房的台式机30。
通过智能电视31、台式机30(或者其他采集设备)采集获得的视频被发送给电子设备100,按照时间戳进行合成,然后发送给对端电子设备,其中还可以通过电子设备100对两个视频采集设备采集的视频进行优化,以实现无缝切换。
上述视频采集的控制过程,可以应用于视频通话、也可以应用于视频拍摄、以及其他需要采集视频的场景,本发明实施例不做限制。
下面来介绍如何对各个电子设备拍摄的照片进行打分。
第一种,在照片中包含人物的情况下,可以通过以下公式计算照片的分数值:
E=αx+βy+γz    (1)
其中,E表示照片的分数;
距离参数(x)是以物理距离中最佳拍摄距离50cm为最大值,向远处或者向近处梯度递减,α表示距离参数(x)的权重值,其取值范围为[0,1];
角度参数(y)以正对摄像头为最大值,向三轴角度的偏转梯度递减,β表示角度参数(y)的权重值,其取值范围为[0,1];
美学构图参数(z)以通过美学构图评分模型的评分最大为最大值,梯度递减,γ表示美学构图参数(z)的权重值,其取值范围为[0,1]。
根据拍摄的对象的差异,还可以对于三因素给予不同的权重系数,如在拍摄人物的场景模式中,给予角度参数稍高的权重如β=0.5,γ=0.3,α=0.2;如在拍摄物品的场景中,更加注重拍摄的清晰度,给予距离参数更高的权重如α=0.4,β=0.3,γ=0.3,当然权重值还可以采用其他值,本发明实施例不做限制。
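作为对公式(1)、公式(2)及上述权重取值的示意,下面给出一段Python代码草图(公式(2)的权重值正文未给出,此处取值仅为说明性假设):

```python
def photo_score(x, z, y=None, scene="人物"):
    """按公式(1) E=αx+βy+γz 或公式(2) E=αx+γz 计算照片分值。
    x为距离参数、y为角度参数(照片中不含人物时取None)、z为美学构图参数,
    三者均已归一化到[0,1]。"""
    if y is None:                           # 公式(2):照片中不包含人物
        alpha, gamma = 0.5, 0.5             # 权重为说明性假设
        return alpha * x + gamma * z
    if scene == "人物":                     # 拍摄人物,给予角度参数较高权重
        alpha, beta, gamma = 0.2, 0.5, 0.3
    else:                                   # 拍摄物品,更注重清晰度,给予距离参数较高权重
        alpha, beta, gamma = 0.4, 0.3, 0.3
    return alpha * x + beta * y + gamma * z

print(photo_score(x=0.9, y=0.8, z=0.6, scene="人物"))   # ≈0.76
print(photo_score(x=0.9, z=0.6, y=None))                # ≈0.75
```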
其中,可以通过计算机视觉技术计算出各电子设备与被拍摄物体之间的距离,例如:在电子设备包含双目摄像头的情况下,可以通过两个摄像头拍摄到被拍摄物体的视觉差来确定电子设备与被拍摄物体之间的距离;又或者,在被拍摄物体为当前用户的情况下,可以通过当前用户的手持电子设备对当前用户进行定位,通过其他电子设备的定位装置对其他电子设备进行定位,基于定位来确定出电子设备与被拍摄物体之间的距离等等。另外还可以基于蓝牙室内定位、无线WIFI定位或者红外光学定位技术,来获得其他电子设备相对于电子设备100的距离。
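作为对"通过双目摄像头的视觉差确定与被拍摄物体之间的距离"的示意,下面给出一段Python代码草图(焦距、基线与视差的数值均为说明性假设):

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """双目测距:深度 Z = f * B / d,其中 f 为焦距(像素)、
    B 为两摄像头之间的基线距离(米)、d 为同一物点在两幅图像中的视差(像素)。"""
    if disparity_px <= 0:
        raise ValueError("视差必须为正")
    return focal_px * baseline_m / disparity_px

# 约0.5米,接近正文中的最佳拍摄距离50cm
print(distance_from_disparity(focal_px=1400, baseline_m=0.012, disparity_px=33.6))
```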
可以利用人脸关键点检测技术(例如:角点检测算法Harris)检测人脸的关键点,在检测到人脸的关键点后,通过姿态估计算法基于人脸的关键点估计出人脸三轴的角度。在用户正脸范围内(如在俯仰角,偏航角和翻滚角均在-30°~30°内)的角度参数为最大值。
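作为对"由人脸三轴角度得到角度参数"的示意,下面给出一段Python代码草图(正文只说明角度参数在正脸范围内取最大值并随偏转梯度递减,此处采用的线性递减方式与90°归零点均为说明性假设):

```python
def angle_param(pitch_deg, yaw_deg, roll_deg, frontal_deg=30.0, zero_deg=90.0):
    """角度参数y:俯仰角、偏航角和翻滚角均在±frontal_deg内(正脸范围)时取最大值1,
    超出后随最大偏转角线性递减,到zero_deg时降为0。"""
    worst = max(abs(pitch_deg), abs(yaw_deg), abs(roll_deg))
    if worst <= frontal_deg:
        return 1.0
    return max(0.0, 1.0 - (worst - frontal_deg) / (zero_deg - frontal_deg))

print(angle_param(5, -10, 2))   # 1.0,正脸范围内
print(angle_param(5, 60, 2))    # 0.5,偏转较大,参数递减
```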
美学构图(z)可以通过美学质量评估算法来计算,其通常包括两个阶段:①特征提取阶段,可以通过人工设计特征,例如可以手动对图像的清晰对比度、亮度对比度、颜色简洁性、和谐度、三分法则的符合程度来标记照片的特征;又或者,还可以通过深度卷积神经网络来自动提取图像美学特征;②决策阶段,决策阶段指的是将提取到的图像美学特征训练成一个分类器或者回归模型,从而对图像进行分类和回归。训练到的模型可以将图像区分为高美学质量图像和低美学质量图像,也可以给图像一个美学质量得分。常用的方法有朴素贝叶斯分类器、支持向量机和深度分类器等等。其中,可以在电子设备100本地设置美学评分系统,通过该美学评分系统内置美学评估算法,也可以在服务器设置美学评估算法,本发明实施例不做限制。
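作为对决策阶段"将图像美学特征训练成一个分类器"的示意,下面给出一段使用朴素贝叶斯分类器的Python代码草图(特征向量与标签均为虚构的示例数据,实际特征可由人工设计或深度卷积神经网络提取):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# 每行特征:[清晰对比度, 亮度对比度, 颜色简洁性, 三分法则符合程度]
X = np.array([[0.9, 0.8, 0.7, 0.9],
              [0.8, 0.7, 0.8, 0.8],
              [0.3, 0.4, 0.2, 0.1],
              [0.2, 0.3, 0.3, 0.2]])
y = np.array([1, 1, 0, 0])                   # 1=高美学质量, 0=低美学质量

clf = GaussianNB().fit(X, y)
new_photo = np.array([[0.7, 0.6, 0.8, 0.7]])
print(clf.predict(new_photo)[0])             # 分类:高/低美学质量
print(clf.predict_proba(new_photo)[0, 1])    # 属于高美学质量的概率,可作为美学构图参数z
```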
如图10所示,为基于上述公式(1)采用不同摄像头采集获得照片之后,各照片的分值,从图10可以看出包含正脸的照片的分值高于没包含正脸的照片,且在都包含正脸的情况下,距离摄像头较近的照片的分值较高。
在电子设备100的拍摄指令中包含"人"时,电子设备100可以通过姿态识别确定出各个摄像头所采集的照片中是否包含该"人",如果包含则可以基于上述公式(1)对照片进行评分,如果不包含则可以直接剔除该照片,不评分;又或者,电子设备100可以通过人脸识别确定出各摄像头所采集的照片中是否包含"人"的正脸,如果包含,则基于上述公式(1)对照片进行评分,如果不包含,则可以直接剔除该照片,不评分。
第二种,在照片中不包含用户的情况下,可以通过以下公式计算照片的分数值:
E=αx+γz    (2)
其中,E、α、x、γ、z在前述公式(1)里面已做介绍,故而在此不再赘述。
其中,电子设备100可以默认选择以上任一方式计算照片的分值,在另一种实施例中,电子设备100也可以基于被拍摄物体不同,选择不同的计算方式,例如:如果被拍摄物体包含人物,则采用公式(1)计算照片的评分值,如果被拍摄物体不包含人物,则采用公式(2)计算照片的评分值。
其他内容参考上文相关内容的描述,不再赘述。
在具体实施过程中,也可以基于上述参数单独为照片进行评分,例如:基于距离单独评分、基于角度单独评分、基于美学构图单独评分等等。
本发明另一实施例还提供了一种控制方法,请参考图11,包括以下步骤:
S1100:电子设备100接收到拍摄指令,该拍摄指令与前面介绍的拍摄指令类似,在此不再赘述;
S1110:电子设备100响应该拍摄指令,从至少两个摄像头中确定出第一摄像头,至少两个摄像头包含电子设备100自身的物理摄像头,也包含其他电子设备的摄像头,电子设备100自身的物理摄像头可以为一个或多个,其他电子设备的摄像头可以为一个或多个。
在具体实施过程中,电子设备100可以通过多种方式确定第一摄像头,例如:①确定拍摄指令所对应的被拍摄内容的位置信息,然后基于该位置信息确定出拍摄用的摄像头。②基于待拍摄内容确定出拍摄模式;基于拍摄模式确定包含该拍摄模式的摄像头作为第一摄像头。对于具体如何确定,由于前面已做介绍,在此不再赘述。
S1120:通过第一摄像头采集获得该拍摄指令对应的数据,该数据可以为视频数据或图像数据。
在一种实施方式中,在初始阶段可以将其他电子设备的摄像头(包括第一摄像头在内)注册为电子设备100的虚拟相机,从而在步骤S1120中,可以通过调用第一摄像头对应的虚拟相机来获得拍摄指令对应的数据;在另一种实施方式中,电子设备100可以向第一摄像头所对应的电子设备发送拍摄指令,由第一摄像头所在的电子设备响应拍摄指令采集获得数据之后,将其返回给电子设备100。
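作为对上述两种实施方式(调用虚拟相机、或向摄像头所在的电子设备发送拍摄指令)的示意,下面给出一段Python代码草图(类名与接口均为说明性假设,并非真实框架接口):

```python
class RemoteCamera:
    """方式二示意:向摄像头所在的电子设备发送拍摄指令并接收其返回的数据。"""
    def __init__(self, device_name):
        self.device_name = device_name

    def capture(self, instruction):
        return f"{self.device_name}针对'{instruction}'返回的数据"

class VirtualCameraRegistry:
    """方式一示意:初始阶段把其他电子设备的摄像头注册为电子设备100的虚拟相机,
    之后像调用本地相机一样调用虚拟相机。"""
    def __init__(self):
        self._cams = {}

    def register(self, cam_id, remote_cam):
        self._cams[cam_id] = remote_cam

    def capture(self, cam_id, instruction):
        return self._cams[cam_id].capture(instruction)

registry = VirtualCameraRegistry()
registry.register("31a", RemoteCamera("智能电视31"))
print(registry.capture("31a", "采集一段视频"))
```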
在具体实施过程中,多个电子设备之间除了协同使用摄像头之外,还可以协同使用其他功能,例如:麦克风、显示屏、输入装置、应用软件等等。例如:某个电子设备在接收到音频数据之后,如果该电子设备的麦克风损坏或者没有麦克风,则无法播放,在这种情况下,可以选择预设距离范围内的麦克风进行播放。又例如,在检测到用户选择文件的编辑方式或者浏览方式时,不仅可以提示本机所存在的编辑方式或者浏览方式,还可以提供与其关联的其他电子设备的编辑方式或浏览方式等等。
本发明实施例所介绍的控制方法中,第一电子设备还可以利用其他电子设备的其他功能,比如:利用第二电子设备的软件(例如:阅读软件、视频播放软件、视频处理软件等等)、硬件(例如:显示器、麦克风等等)。其中,利用其他电子设备的其他功能时,确定其他电子设备(或者对应的硬件)的方式与确定摄像头的方式类似。
例如:第一电子设备的用户希望播放视频,且接收到一个拍摄指令;第一电子设备响应该拍摄指令,确定出当前位置为客厅,检测到客厅包含智能电视(第二电子设备),则将视频内容投影到智能电视播放。第一电子设备在确定第二电子设备时,可以考虑第一电子设备、第二电子设备与用户的距离、角度、各自的显示器的大小,综合确定出具体用第一电子设备、还是第二电子设备的显示器。
基于同一发明构思,本发明另一实施例提供一种控制方法,包括:
第一电子设备获得拍摄指令;
根据所述拍摄指令,确定拍摄指令所对应的待拍摄内容的位置信息,基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头;和/或,
根据所述拍摄指令,基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头作为目标摄像头;所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
所述第一电子设备控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像头采集的图像数据。
可选的,所述基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,包括:
基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头;
或者,
基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头。
可选的,所述基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,包括:
基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头;或者
基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头。
可选的,所述第一电子设备控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像头采集的图像数据,包括:
所述第一电子设备向目标摄像头所在的电子设备发送拍摄请求,并接收所述目标摄像头所在的电子设备发送的图像数据;或者,
所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,获取该虚拟相机采集的图像数据。
可选的,第一预设规则包括:
摄像头的性能参数、摄像头与待拍摄内容的距离、摄像头与待拍摄内容的角度中的至少一种。
可选的,所述第一电子设备的软件架构包括:应用程序框架层,包含:相机框架,用于对外界提供相机功能;相机接口,用于获取相机采集的数据,所述相机包括物理相机和虚拟相机;MSDP用于将其他电子设备的摄像头虚拟为硬件抽象层的虚拟相机;硬件抽象层,包含相机,所述相机包括物理相机和虚拟相机,所述物理相机与所述虚拟相机存在不同的标签;所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,包括:
预先将所述目标摄像头虚拟为所述第一电子设备的虚拟相机;
所述第一电子设备调用CaaS功能,所述CaaS功能通过CaaS服务提供给所述第一电子设备调用;
所述第一电子设备在满足触发条件时,告知MSDP分布式器件虚拟化将其他电子设备的摄像头注册为硬件抽象层的虚拟相机;
所述第一电子设备在需要调用CaaS的摄像头功能时,先向系统注册CaaS服务,CaaS服务向MSDP查询是否存在虚拟相机,在存在虚拟相机时,通过相机接口获取虚拟相机视频数据。
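作为对上述"注册CaaS服务、向MSDP查询虚拟相机、通过相机接口获取视频数据"流程的示意,下面给出一段Python代码草图(MSDP与CaaS的接口均为说明性假设,仅用于表达调用关系,并非真实系统接口):

```python
class MSDP:
    """示意性的分布式器件虚拟化模块:维护已注册为硬件抽象层虚拟相机的远端摄像头。"""
    def __init__(self):
        self._virtual_cams = {}

    def register_virtual_camera(self, cam_id, source_device):
        self._virtual_cams[cam_id] = source_device   # 满足触发条件时注册

    def query_virtual_cameras(self):
        return list(self._virtual_cams)

class CaaSService:
    """示意性的CaaS服务:向MSDP查询是否存在虚拟相机,存在时经相机接口取视频数据。"""
    def __init__(self, msdp, camera_interface):
        self.msdp = msdp
        self.camera_interface = camera_interface

    def get_video_data(self):
        cams = self.msdp.query_virtual_cameras()
        if not cams:
            return None                               # 不存在虚拟相机
        return self.camera_interface(cams[0])         # 通过相机接口获取虚拟相机视频数据

msdp = MSDP()
msdp.register_virtual_camera("31a", "智能电视31")
caas = CaaSService(msdp, camera_interface=lambda cam_id: f"虚拟相机{cam_id}的视频数据")
print(caas.get_video_data())
```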
基于同一发明构思,本发明另一实施例提供一种电子设备,包括:
一个或多个处理器;
存储器;
多个应用程序;
以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,所述第一电子设备执行如权利要求1-7任一所述的方法。
基于同一发明构思,本发明另一实施例提供一种电子设备,包括:
第一获得模块,用于获得拍摄指令;
第一确定模块,用于根据所述拍摄指令,确定拍摄指令所对应的待拍摄内容的位置信息,基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头;和/或,
根据所述拍摄指令,基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头作为目标摄像头;所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
控制模块,用于控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像头采集的图像数据。
可选的,所述第一确定模块,包括:
第一确定单元,用于基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头;
或者,
第二确定单元,用于基于待拍摄内容确定出拍摄模式;第三确定单元,用于从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头。
可选的,所述第一确定模块,包括:
第四确定单元,用于基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;第五确定单元,用于根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头;或者
第六确定单元,用于基于待拍摄内容确定出拍摄模式;第七确定单元,用于从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;第八确定单元,用于根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头。
可选的,所述控制模块,用于:
所述第一电子设备向目标摄像头所在的电子设备发送拍摄请求,并接收所述目标摄像头所在的电子设备发送的图像数据;或者,
所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,获取该虚拟相机采集的图像数据。
可选的,第一预设规则包括:摄像头的性能参数、摄像头与待拍摄内容的距离、摄像头与待拍摄内容的角度中的至少一种。
可选的,所述控制模块,包括:
所述第一电子设备的软件架构包括:应用程序框架层,包含:相机框架,用于对外界提供相机功能;相机接口,用于获取相机采集的数据,所述相机包括物理相机和虚拟相机;MSDP用于将其他电子设备的摄像头虚拟为硬件抽象层的虚拟相机;硬件抽象层,包含相机,所述相机包括物理相机和虚拟相机,所述物理相机与所述虚拟相机存在不同的标签;所述控制模块包括:
虚拟单元,用于预先将所述目标摄像头虚拟为所述第一电子设备的虚拟相机;
调用单元,用于调用CaaS功能,所述CaaS功能通过CaaS服务提供给所述第一电子设备调用;
触发单元,用于在满足触发条件时,告知MSDP分布式器件虚拟化将其他电子设备的摄像头注册为硬件抽象层的虚拟相机;
获取单元,用于在需要调用CaaS的摄像头功能时,先向系统注册CaaS服务,CaaS服务向MSDP查询是否存在虚拟相机,在存在虚拟相机时,通过相机接口获取虚拟相机视频数据。
基于同一发明构思,本发明另一实施例提供一种控制方法,包括:
第一电子设备获得拍摄指令;
响应所述拍摄指令,所述第一电子设备控制其可以控制的至少两个摄像头执行所述拍摄指令,获得所述至少两个摄像头采集的图像数据,从而获得至少两张照片;
所述第一电子设备根据预设第二规则和所述至少两张照片,确定所述拍摄指令的拍摄结果。
可选的,所述第一电子设备根据预设第二规则和所述至少两张照片,确定拍摄结果,包括:
根据摄像头的性能参数、摄像头与被拍摄物体的距离、摄像头与被拍摄物体的角度中的至少一种参数进行评分;
将满足预设评分值的照片作为所述拍摄指令的拍摄结果。
可选的,所述第一电子设备根据预设第二规则和所述至少两张照片,确定拍摄结果,包括:
所述第一电子设备将所述至少两张照片拼接作为所述拍摄指令的拍摄结果;或,
输出所述至少两张照片,响应于用户的选择操作,将用户选择的那一张照片作为所述拍摄指令的拍摄结果。
基于同一发明构思,本发明另一实施例提供一种电子设备,包括:
一个或多个处理器;
存储器;
多个应用程序;
以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,所述第一电子设备执行本发明任一实施例所介绍的方法。
基于同一发明构思,本发明另一实施例提供一种电子设备,包括:
第二获得模块,用于获得拍摄指令;
响应模块,用于响应所述拍摄指令,控制其可以控制的至少两个摄像头执行所述拍摄指令,获得所述至少两个摄像头采集的图像数据,从而获得至少两张照片;
第二确定模块,用于根据预设第二规则和所述至少两张照片,确定所述拍摄指令的拍摄结果。
可选的,所述第二确定模块,包括:
评分单元,用于根据摄像头的性能参数、摄像头与被拍摄物体的距离、摄像头与被拍摄物体的角度中的至少一种参数进行评分;
第九确定单元,用于将满足预设评分值的照片作为所述拍摄指令的拍摄结果。
可选的,所述第二确定模块,用于:
所述第一电子设备将所述至少两张照片拼接作为所述拍摄指令的拍摄结果;或,
输出所述至少两张照片,响应于用户的选择操作,将用户选择的那一张照片作为所述拍摄指令的拍摄结果。
基于同一发明构思,本发明另一实施例提供一种计算机可读存储介质,包括指令,当所述指令在电子设备上运行时,使得所述电子设备执行本发明任一实施例所述的方法。
基于同一发明构思,本发明另一实施例提供一种计算机程序产品,所述计算机程序产品包括软件代码,所述软件代码用于执行本发明任一实施例所述的方法。
基于同一发明构思,本发明另一实施例提供一种包含指令的芯片,当所述芯片在电子设备上运行时,使得所述电子设备执行本发明任一实施例所述的方法。
可以理解的是,上述电子设备等为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明实施例的范围。
本申请实施例可以根据上述方法示例对上述电子设备等进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本发明实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。上文即以采用对应各个功能划分各个功能模块为例进行了说明。
本申请实施例提供的方法中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例描述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、电子设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,SSD)等。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
以上,仅为本申请的具体实施方式,但本申请实施例的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请实施例揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以权利要求的保护范围为准。

Claims (16)

  1. 一种控制方法,其特征在于,包括:
    第一电子设备获得拍摄指令;
    根据所述拍摄指令,确定拍摄指令所对应的待拍摄内容的位置信息,基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头;和/或,
    根据所述拍摄指令,基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头作为目标摄像头;所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
    所述第一电子设备控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像头采集的图像数据。
  2. 如权利要求1所述的方法,其特征在于,所述基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,包括:
    基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头;
    或者,
    基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头只有一个,则确定该摄像头为目标摄像头。
  3. 如权利要求1所述的方法,其特征在于,所述基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头,包括:
    基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄范围覆盖所述位置信息的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头;或者
    基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头,如果确定出的摄像头有多个,则控制所述多个摄像头采集获得照片,从而获得至少两张照片;根据第一预设规则,对所述至少两张照片进行评分,确定得分最高照片所对应的摄像头为目标摄像头。
  4. 如权利要求1所述的方法,其特征在于,所述第一电子设备控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像头采集的图像数据,包括:
    所述第一电子设备向目标摄像头所在的电子设备发送拍摄请求,并接收所述目标摄像头所在的电子设备发送的图像数据;或者,
    所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,获取该虚拟相机采集的图像数据。
  5. 如权利要求1-4任一所述的方法,其特征在于,第一预设规则包括:
    摄像头的性能参数、摄像头与待拍摄内容的距离、摄像头与待拍摄内容的角度中的至少一种。
  6. 如权利要求4所述的方法,其特征在于,在所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用之前,还包括:所述第一电子设备通过其分布式器件虚拟化模块MSDP将所述第二电子设备的摄像头虚拟为所述虚拟相机;所述第一电子设备将所述目标摄像头作为所述第一电子设备的虚拟相机进行调用,获取该虚拟相机采集的图像数据,具体包括:
    所述第一电子设备调用CaaS功能,所述CaaS功能通过CaaS服务提供给所述第一电子设备调用;
    所述CaaS服务向MSDP查询是否存在虚拟相机,在存在虚拟相机时,通过相机接口获取虚拟相机采集的图像数据。
  7. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    多个应用程序;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,所述第一电子设备执行如权利要求1-6任一所述的方法。
  8. 一种电子设备,其特征在于,包括:
    第一获得模块,用于获得拍摄指令;
    第一确定模块,用于根据所述拍摄指令,确定拍摄指令所对应的待拍摄内容的位置信息,基于待拍摄内容的位置信息,从其可以控制的至少两个摄像头中,确定出拍摄待拍摄内容位置最合适的摄像头作为目标摄像头;和/或,
    根据所述拍摄指令,基于待拍摄内容确定出拍摄模式;从其可以控制的至少两个摄像头中,确定出包含该拍摄模式的摄像头作为目标摄像头;所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
    控制模块,用于控制所述目标摄像头执行所述拍摄指令,获得所述目标摄像头采集的图像数据。
  9. 一种控制方法,其特征在于,包括:
    第一电子设备获得拍摄指令;
    响应所述拍摄指令,所述第一电子设备控制其可以控制的至少两个摄像头执行所述拍摄指令,获得所述至少两个摄像头采集的图像数据,从而获得至少两张照片,所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
    根据采集各个照片的摄像头与被拍摄物体的距离、人脸的角度、美学构图参数中的至少一种参数计算出各张照片的评分值;
    将满足预设评分值的照片作为所述拍摄指令的拍摄结果;和/或,在所述拍摄指令为视频采集指令时,通过满足预设评分值的照片所对应的摄像头进行视频采集。
  10. 如权利要求9所述的方法,其特征在于,在照片中包含人物的情况下,可以通过以下公式计算照片的分数值:
    E=αx+βy+γz
    其中,E表示照片的评分值;
    距离参数x是以物理距离中最佳拍摄距离50cm为最大值,向远处或者向近处梯度递减,α表示距离参数x的权重值,其取值范围为[0,1];
    角度参数y以正对摄像头为最大值,向三轴角度的偏转梯度递减,β表示角度参数y的权重值,其取值范围为[0,1];
    美学构图参数z以通过美学构图评分模型的评分最大为最大值,梯度递减,γ表示美学构图参数z的权重值,其取值范围为[0,1]。
  11. 如权利要求9所述的方法,其特征在于,在照片中不包含用户的情况下,可以通过以下公式计算照片的分数值:
    E=αx+γz
    其中,E表示照片的评分值;
    距离参数x是以物理距离中最佳拍摄距离50cm为最大值,向远处或者向近处梯度递减,α表示距离参数x的权重值,其取值范围为[0,1];
    美学构图参数z以通过美学构图评分模型的评分最大为最大值,梯度递减,γ表示美学构图参数z的权重值,其取值范围为[0,1]。
  12. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    多个应用程序;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述第一电子设备执行时,所述第一电子设备执行如权利要求9-11任一所述的方法。
  13. 一种电子设备,其特征在于,包括:
    获得模块,用于获得拍摄指令;
    响应模块,用于响应所述拍摄指令,所述第一电子设备控制其可以控制的至少两个摄像头执行所述拍摄指令,获得所述至少两个摄像头采集的图像数据,从而获得至少两张照片,所述至少两个摄像头包含:所述第一电子设备上的摄像头和第二电子设备的摄像头,所述第一电子设备与所述第二电子设备不同;
    评分模块,用于根据采集各个照片的摄像头与被拍摄物体的距离、人脸的角度、美学构图参数中的至少一种参数计算出各张照片的评分值;
    确定模块,用于将满足预设评分值的照片作为所述拍摄指令的拍摄结果;和 /或,在所述拍摄指令为视频采集指令时,通过满足预设评分值的照片所对应的摄像头进行视频采集。
  14. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-6、9-11中任一项所述的方法。
  15. 一种计算机程序产品,其特征在于,所述计算机程序产品包括软件代码,所述软件代码用于执行如权利要求1-6、9-11中任一项所述的方法。
  16. 一种包含指令的芯片,其特征在于,当所述芯片在电子设备上运行时,使得所述电子设备执行如权利要求1-6、9-11中任一项所述的方法。