CN115701113A - Shooting method, shooting parameter training method, electronic device and storage medium

Info

Publication number
CN115701113A
CN115701113A (application CN202110861888.8A)
Authority
CN
China
Prior art keywords
shooting
shooting scene
preview picture
parameters
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110861888.8A
Other languages
Chinese (zh)
Inventor
杨剑
倪茂森
东巍
李扬
苏诚
朱洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110861888.8A
Priority to PCT/CN2022/107648 (published as WO2023005882A1)
Publication of CN115701113A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 - Constructional details
    • H04N 23/53 - Constructional details of electronic viewfinders, e.g. rotatable or detachable

Abstract

The embodiments of this application provide a shooting method, a shooting parameter training method, an electronic device, and a storage medium, and relate to the field of computer technology. The method includes: acquiring a preview picture and environment information; obtaining shooting parameters based on the preview picture and the environment information; and shooting using the shooting parameters. The method provided by the embodiments of this application can improve shooting quality.

Description

Shooting method, shooting parameter training method, electronic device and storage medium
Technical Field
The embodiments of this application relate to the field of computer technology, and in particular to a shooting method, a shooting parameter training method, an electronic device, and a storage medium.
Background
With the continuous improvement of terminal software and hardware performance, the photographing capability of terminals is becoming increasingly powerful. Since the mobile phone is the type of terminal most commonly used in daily life, users' demands on mobile phone photography keep growing. The parameters that determine a high-quality photographing result include various camera setting parameters and picture parameters, such as aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation. In daily shooting, whether in an automatic mode or a professional mode, only when the shooting parameters of the camera are set quickly and accurately can a photo that satisfies the user be taken.
The shooting modes in common use today include an automatic mode and a professional mode. The automatic mode mostly relies on light metering and applies only a small number of style presets to adjust the shooting parameters. Because users shoot in different environments and scenes, using light intensity as the main basis for parameter setting leads to large differences in color fidelity across scenes, so the quality of the resulting photos cannot meet users' requirements.
In addition, to achieve particular shooting effects and meet user requirements, some devices support a professional shooting mode. In this mode, only the camera ISO and the shutter speed are adjusted automatically according to the light intensity; the recommended values of the other setting parameters, such as white balance, exposure compensation, saturation, and contrast, are not initialized and must be adjusted and combined manually by the user, often over a very wide adjustable range. The whole process is cumbersome, time-consuming, and inaccurate, which degrades the user experience. The barrier to using the professional mode is therefore too high: most users have limited shooting skill and professional knowledge and find it difficult to take satisfactory photos.
Disclosure of Invention
The embodiments of this application provide a shooting method, a shooting parameter training method, an electronic device, and a storage medium, with the aim of providing a shooting approach that can improve shooting quality.
In a first aspect, an embodiment of the present application provides a shooting method, which is applied to a first device, and includes:
acquiring a preview picture and environmental information; the preview picture may be a picture acquired by the first device through a camera and displayed in the preview interface.
Acquiring shooting parameters based on the preview picture and the environmental information; photographing is performed using the photographing parameters.
In this embodiment of the application, the shooting parameters are determined from real-time information such as the preview picture and the environment information, and shooting is performed using these parameters, which can improve shooting quality.
In one possible implementation manner, obtaining the shooting parameters based on the preview picture and the environmental information includes:
determining the shooting scene type based on the preview picture and the environmental information;
and inputting the preview picture into a preset parameter decision model corresponding to the shooting scene category to obtain shooting parameters.
In this embodiment of the application, the shooting parameters are computed by the first device itself, which improves the efficiency of obtaining the shooting parameters.
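Purely as an illustration of this on-device flow, the following sketch shows how a scene classifier and per-scene parameter decision models might be combined. The class names, the EnvironmentInfo fields, and the classify/infer interfaces are assumptions made for illustration and are not defined by this application.

```python
# Illustrative sketch only: all names and interfaces below are hypothetical.
from dataclasses import dataclass

@dataclass
class EnvironmentInfo:
    location: tuple         # (latitude, longitude), e.g. from GPS
    timestamp: float        # system time
    weather: str            # e.g. "sunny", "cloudy", "rainy"
    light_intensity: float  # in lux, e.g. from an ambient light sensor
    light_direction: str    # e.g. "front light", "side light", "back light"

def obtain_shooting_parameters(preview_picture, env: EnvironmentInfo,
                               scene_classifier, decision_models):
    """Determine the shooting scene type, then run the per-scene parameter decision model."""
    # Step 1: determine the shooting scene type from the preview picture and environment info.
    scene_type = scene_classifier.classify(preview_picture, env)
    # Step 2: feed the preview picture into the preset parameter decision model for that scene.
    model = decision_models[scene_type]
    # The model outputs predicted shooting parameters, e.g. ISO, shutter speed, white balance.
    return model.infer(preview_picture)
```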
In one possible implementation manner, obtaining the shooting parameters based on the preview picture and the environmental information includes:
sending the preview picture and the environment information to the second device, where the preview picture and the environment information are used by the second device to determine the shooting parameters, and the second device may be a server; and
receiving the shooting parameters sent by the second device.
In this embodiment of the application, the shooting parameters are computed by the second device, which reduces the computational burden on the first device; and because the second device has strong computing power, the accuracy of the shooting parameters can be improved.
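A minimal sketch of this device-to-server exchange is given below. The endpoint URL, the JSON field names, and the use of HTTP are assumptions made for illustration; the application does not specify a transport or message format.

```python
# Illustrative sketch only: endpoint, payload fields, and transport are assumptions.
import base64
import json
import urllib.request

def request_shooting_parameters(preview_jpeg: bytes, env_info: dict,
                                url: str = "https://example.com/shooting-params"):
    """Send the preview picture and environment information to the second device (a server)
    and receive the shooting parameters it determines."""
    payload = {
        "preview_picture": base64.b64encode(preview_jpeg).decode("ascii"),
        "environment": env_info,  # e.g. location, time, weather, light
    }
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # e.g. {"iso": 100, "shutter_speed": 0.008, "white_balance": "daylight", ...}
        return json.loads(response.read())
```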
In one possible implementation manner, the environment information includes one or more of location information, time information, weather information, and light information.
In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focusing mode, focal length, white balance, and exposure compensation.
In one possible implementation manner, the first device includes a mobile phone or a tablet.
The embodiment of the application further provides a shooting parameter training method, which comprises the following steps:
acquiring a training data set; the training data set comprises a plurality of training data subsets of shooting scene types, each training data subset comprises a plurality of training data, and each training data comprises a preview picture corresponding to the shooting scene type and a preset shooting parameter corresponding to the shooting scene type;
and training a preset parameter decision model by using a training data set, wherein the preset parameter decision model is used for inputting a preview picture and outputting a predicted shooting parameter.
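As a rough illustration of this training setup, the sketch below fits one parameter decision model per shooting scene category from that category's training data subset. The feature extraction step and the model family are assumptions; the application does not fix a particular model type.

```python
# Illustrative sketch only: the feature extraction and the model family are assumptions.
def train_parameter_decision_models(training_data_set, model_factory, extract_features):
    """training_data_set maps each shooting scene type to a list of
    (preview_picture, preset_shooting_parameters) pairs.
    Returns one trained preset parameter decision model per shooting scene category."""
    models = {}
    for scene_type, subset in training_data_set.items():
        # Inputs: preview pictures; targets: preset shooting parameters for this scene type.
        inputs = [extract_features(preview) for preview, _ in subset]
        targets = [parameters for _, parameters in subset]
        model = model_factory()     # e.g. a small regression or neural-network model
        model.fit(inputs, targets)  # learn to predict shooting parameters from a preview picture
        models[scene_type] = model
    return models
```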
In one possible implementation manner, the shooting scene type is determined by a shot photo in a sample data set, the sample data set includes a plurality of sample data, and each sample data includes the shot photo, a preview photo, and preset shooting parameters.
In one possible implementation manner, the sample data set further includes environment information corresponding to the shot photos, and the determining of the shooting scene type from the shot photos in the sample data set includes:
identifying the shot picture to obtain content characteristics;
determining a shooting scene based on the content features;
if the shooting scene is indoor, determining the shooting scene type corresponding to each shot picture based on the content characteristics; or
If the shooting scene is outdoor, determining the shooting scene type corresponding to each shot photo based on the environmental characteristics and the content characteristics; wherein the environmental characteristics are obtained from the environmental information.
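The indoor/outdoor branching described above could look roughly like the sketch below. The recognizer object, the feature names, and the concrete scene categories are hypothetical placeholders used only to make the control flow concrete.

```python
# Illustrative sketch only: the recognizer and the category names are hypothetical.
def determine_scene_type(shot_photo, environment_info, recognizer):
    """Assign a shooting scene category to a single shot photo from the sample data set."""
    # Identify the shot photo to obtain content features (e.g. people, sky, food, buildings).
    content_features = recognizer.identify(shot_photo)
    # Decide whether the shooting scene is indoor or outdoor.
    if recognizer.is_indoor(content_features):
        # Indoor: the scene type is determined from the content features alone.
        return recognizer.classify_indoor(content_features)
    # Outdoor: environmental features derived from the environment information
    # (e.g. weather, time of day, light) are used together with the content features.
    environmental_features = {
        "weather": environment_info.get("weather"),
        "time": environment_info.get("time"),
        "light": environment_info.get("light"),
    }
    return recognizer.classify_outdoor(content_features, environmental_features)
```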
In one possible implementation manner, the preset parameter decision model includes a plurality of models, and each model corresponds to a shooting scene category.
In a second aspect, an embodiment of the present application provides a shooting device, which is applied to a first device, and includes:
the acquisition module is used for acquiring the preview picture and the environmental information;
the calculation module is used for obtaining shooting parameters based on the preview picture and the environmental information;
and the shooting module is used for shooting by using the shooting parameters.
In one possible implementation manner, the computing module is further configured to determine a shooting scene type based on the preview picture and the environmental information; and inputting the preview picture into a preset parameter decision model corresponding to the shooting scene category to obtain shooting parameters.
In one possible implementation manner, the computing module is further configured to send the preview picture and the environment information to the second device, where the preview picture and the environment information are used by the second device to determine the shooting parameters;
and to receive the shooting parameters sent by the second device.
In one possible implementation manner, the environment information includes one or more of location information, time information, weather information, and light information.
In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focusing mode, focal length, white balance, and exposure compensation.
In one possible implementation manner, the first device includes a mobile phone or a tablet.
The embodiment of the present application further provides a shooting parameter training device, including:
the acquisition module is used for acquiring a training data set; the training data set comprises a plurality of training data subsets of shooting scene types, each training data subset comprises a plurality of training data, and each training data comprises a preview picture corresponding to the shooting scene type and a preset shooting parameter corresponding to the shooting scene type;
and the training module is used for training the preset parameter decision model by using a training data set, wherein the preset parameter decision model is used for inputting the preview picture and outputting the predicted shooting parameters.
In one possible implementation manner, the type of the shooting scene is determined by a shot picture in a sample data set, the sample data set comprises a plurality of sample data, and each sample data comprises the shot picture, a preview picture and preset shooting parameters.
In one possible implementation manner, the sample data set further includes environment information corresponding to the shot picture, and the obtaining module is further configured to identify the shot picture to obtain the content feature; determining a shooting scene based on the content features; if the shooting scene is indoor, determining the shooting scene type corresponding to each shot photo based on the content characteristics; or
If the shooting scene is outdoor, determining the shooting scene type corresponding to each shot photo based on the environmental characteristics and the content characteristics; wherein the environmental characteristics are obtained from the environmental information.
In one possible implementation manner, the preset parameter decision model includes a plurality of models, and each model corresponds to a shooting scene category.
In a third aspect, an embodiment of the present application provides a first device, including:
a memory for storing computer program code, the computer program code including instructions that, when read from the memory, cause the first device to perform the steps of:
acquiring a preview picture and environmental information;
acquiring shooting parameters based on the preview picture and the environmental information;
photographing is performed using the photographing parameters.
In one possible implementation manner, when the instructions are executed by the first device, the step in which the first device obtains the shooting parameters based on the preview picture and the environment information includes:
determining a shooting scene type based on the preview picture and the environment information;
and inputting the preview picture into a preset parameter decision model corresponding to the shooting scene category to obtain the shooting parameters.
In one possible implementation manner, when the instructions are executed by the first device, the step in which the first device obtains the shooting parameters based on the preview picture and the environment information includes:
sending the preview picture and the environment information to the second device, where the preview picture and the environment information are used by the second device to determine the shooting parameters; and
receiving the shooting parameters sent by the second device.
In one possible implementation, the environment information includes one or more of location information, time information, weather information, and light information.
In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focusing mode, focal length, white balance, and exposure compensation.
In one possible implementation manner, the first device includes a mobile phone or a tablet.
An embodiment of the present application further provides a third device, including:
a memory for storing computer program code, the computer program code comprising instructions that, when read from the memory by the third device, cause the third device to perform the following steps:
acquiring a training data set; the training data set comprises a plurality of training data subsets of shooting scene types, each training data subset comprises a plurality of training data, and each training data comprises a preview picture corresponding to the shooting scene type and a preset shooting parameter corresponding to the shooting scene type;
and training a preset parameter decision model by using a training data set, wherein the preset parameter decision model is used for inputting a preview picture and outputting a predicted shooting parameter.
In one possible implementation manner, the type of the shooting scene is determined by a shot picture in a sample data set, the sample data set comprises a plurality of sample data, and each sample data comprises the shot picture, a preview picture and preset shooting parameters.
In one possible implementation manner, the sample data set further includes environment information corresponding to the shot photos, and when the instructions are executed by the third device, the step in which the third device determines the shooting scene type from the shot photos in the sample data set includes:
identifying the shot picture to obtain content characteristics;
determining a shooting scene based on the content features;
if the shooting scene is indoor, determining the shooting scene type corresponding to each shot photo based on the content characteristics; or
If the shooting scene is outdoor, determining the shooting scene type corresponding to each shot photo based on the environmental characteristics and the content characteristics; wherein the environmental characteristics are obtained from the environmental information.
In one possible implementation manner, the preset parameter decision model includes a plurality of models, and each model corresponds to a shooting scene category.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program, which, when run on a computer, causes the computer to perform the method according to the first aspect.
In a fifth aspect, the present application provides a computer program, which is configured to perform the method of the first aspect when the computer program is executed by a computer.
In a possible design, the program of the fifth aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
Drawings
Fig. 1 is a schematic hardware structure diagram of an embodiment of an electronic device provided in the present application;
FIG. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of an embodiment of a shooting method provided in the present application;
FIG. 4 is a schematic view of a light ray provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for classifying shooting scenes according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another embodiment of a shooting method provided in the present application;
FIG. 7 is a schematic flowchart of an embodiment of a shooting parameter training method provided in the present application;
fig. 8 is a schematic diagram of a classification of shooting scenes provided in the embodiment of the present application;
fig. 9 is a schematic diagram of a shooting parameter training architecture according to an embodiment of the present application;
fig. 10 is a schematic hardware structure diagram of another embodiment of an electronic device provided in the present application;
fig. 11 is a schematic structural diagram of a shooting device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a shooting parameter training apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, the meaning of "a plurality" is two or more unless otherwise specified.
With the continuous improvement of terminal software and hardware performance, the photographing capability of terminals is becoming increasingly powerful. Since the mobile phone is the type of terminal most commonly used in daily life, users' demands on mobile phone photography keep growing. The parameters that determine a high-quality photographing result include various camera setting parameters and picture parameters, such as aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation. In daily shooting, whether in an automatic mode or a professional mode, only when the shooting parameters of the camera are set quickly and accurately can a photo that satisfies the user be taken.
The shooting modes in common use today include an automatic mode and a professional mode. The automatic mode mostly relies on light metering and applies only a small number of style presets to adjust the shooting parameters. Because users shoot in different environments and scenes, using light intensity as the main basis for parameter setting leads to large differences in color fidelity across scenes, so the quality of the resulting photos cannot meet users' requirements.
In addition, to achieve particular shooting effects and meet user requirements, some devices support a professional shooting mode. In this mode, only the camera ISO and the shutter speed are adjusted automatically according to the light intensity; the recommended values of the other setting parameters, such as white balance, exposure compensation, saturation, and contrast, are not initialized and must be adjusted and combined manually by the user, often over a very wide adjustable range. The whole process is cumbersome, time-consuming, and inaccurate, which degrades the user experience. The barrier to using the professional mode is therefore too high: most users have limited shooting skill and professional knowledge and find it difficult to take satisfactory photos.
Based on the above problem, the embodiment of the application provides a shooting method, which can improve shooting quality.
Referring to fig. 1 to fig. 6, a shooting method provided in an embodiment of the present application is described. The shooting method is applied to a first device 10, where the first device 10 may be an intelligent device with a camera. The first device 10 may also be referred to as a mobile terminal, a terminal device, user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user apparatus. The first device 10 may be a cellular telephone, a cordless telephone, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a computer, a laptop, a handheld communication device, a handheld computing device, a satellite radio, customer premises equipment (CPE), and/or another device used for communication over a wireless system or a next-generation communication system, for example, a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN). The embodiment of the present application does not specifically limit the form of the first device 10.
An exemplary electronic device provided in the following embodiments of the present application is first described below with reference to fig. 1. Fig. 1 shows a schematic structural diagram of an electronic device 100, and the electronic device 100 may be the first device 10 described above.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of receiving a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit the audio signal to the wireless communication module 160 through the PCM interface, so as to implement the function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display screen 194, the camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. Processor 110 and display screen 194 communicate via a DSI interface to implement display functions of electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi)) networks, Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate them.
In some embodiments, antenna 1 of electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into a sound signal. When the electronic apparatus 100 receives a call or voice information, it is possible to receive voice by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by uttering a voice signal close to the microphone 170C through the mouth of the user. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the SMS application icon, an instruction for viewing the SMS message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the SMS application icon, an instruction for creating a new SMS message is executed.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. And then according to the detected opening and closing state of the leather sheath or the opening and closing state of the flip, the characteristics of automatic unlocking of the flip and the like are set.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for identifying the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and the like.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, the electronic device 100 may utilize the distance sensor 180F to range to achieve fast focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared reflected light from a nearby object using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there are no objects near the electronic device 100. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint characteristics to unlock a fingerprint, access an application lock, photograph a fingerprint, answer an incoming call with a fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs a boost on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M can acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects in response to touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
Fig. 2 is a schematic view of an application scenario according to an embodiment of the present application. As shown in fig. 2, the application scenario includes a first device 10 and a second device 20, where the second device 20 may be a server in a cloud. The second device 20 may be used to provide the first device 10 with shooting parameters for the current shot.
Fig. 3 is a schematic flowchart of an embodiment of a shooting method provided in the present application, including:
in step 301, the first device 10 obtains the preview picture and the environment information.
Specifically, the user may turn on the camera of the first device 10, causing the first device 10 to enter a shooting mode. For example, the user may click on the camera application on the desktop of the first device 10 to open the camera, or may invoke the camera in a third-party application (e.g., social software). The embodiment of the present application does not specifically limit the manner of opening the camera.
In response to the user's operation of opening the camera, the first device 10 acquires a preview screen, where the preview screen may be a view of the current environment captured by the camera. The first device 10 may then further obtain the current preview photo. It is understood that the preview photo is a photo corresponding to the current preview screen.
Further, the first device 10 may also obtain current environment information, where the environment information may include: location, time, weather, and light. It is understood that the above environment information is only an exemplary illustration and is not a limitation on the embodiments of the present application; in some embodiments, more environment information may be included. In a specific implementation, the location information may be obtained through a Global Positioning System (GPS) in the first device 10, and the time information may be obtained from the system time of the first device 10. After the location information and the time information are obtained, weather information (e.g., sunny, cloudy, rainy, etc.) can be obtained through the weather application in the first device 10. Then, orientation information may be further acquired, where the orientation information may be obtained by the magnetic sensor 180D and the gyroscope sensor 180B in the first device 10 and may be used to characterize the orientation of the first device 10. Further, specific light data can be derived from the weather information, where the light data can include the illumination intensity and the direction of the natural light relative to the camera (e.g., forward light, side light, backward light, etc., where the side light can be further divided into front side light, back side light, left side light, right side light, etc.).
Illustratively, the light intensity (unit: lux) of the shooting environment can be acquired by the ambient light sensor 180L of the first device 10. If the weather information indicates a sunny day, the direction of the natural light relative to the camera can be further calculated as follows: first, the sun azimuth is obtained from the geographical position and the time information; then, the direction of the camera 193 is obtained according to the installation position (for example, the front side or the back side) of the camera 193 in the first device 10 and the obtained orientation of the first device 10; finally, the relative position of the sun azimuth and the camera direction is obtained, as shown in fig. 4, so that the direction type of the natural sunlight relative to the camera 193 can be determined, where the direction type can be forward light, side light, backward light, and the like.
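As a purely illustrative sketch (not part of the claimed method), the following Python snippet shows one way the direction type could be derived once the sun azimuth and the camera direction are known as compass bearings; the function name and the 45-degree sector boundaries are assumptions introduced for this example.

```python
# Hypothetical helper: classify sunlight direction relative to the camera.
# sun_azimuth and camera_azimuth are assumed to be compass bearings in degrees
# (0 = north), e.g. from ephemeris data and from the magnetic/gyroscope sensors.
def light_direction(sun_azimuth: float, camera_azimuth: float) -> str:
    # Smallest angle between the camera's viewing direction and the sun.
    diff = abs((sun_azimuth - camera_azimuth + 180) % 360 - 180)
    if diff <= 45:
        return "backward light"   # camera points roughly toward the sun (backlight)
    if diff >= 135:
        return "forward light"    # sun is roughly behind the photographer
    return "side light"

# Example: camera facing north-east (30 degrees), sun in the south-west (200 degrees).
print(light_direction(200.0, 30.0))  # -> "forward light"
```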
In step 302, the first device 10 sends the preview picture and the environment information to the second device 20.
Specifically, after the first device 10 obtains the preview photo and the environment information, it may send them to the second device 20. The first device 10 may be connected to the second device 20 through a mobile communication network (e.g., a 4G or 5G network) or a local wireless network (e.g., Wi-Fi), so that the first device 10 can transmit the preview photo and the environment information to the second device 20 over the mobile communication network or the local wireless network. It should be understood that the embodiment of the present application does not specifically limit the manner in which the first device 10 sends the preview photo and the environment information to the second device 20.
In step 303, the second device 20 generates shooting parameters based on the preview picture and the environment information.
Specifically, after the second device 20 receives the preview photo and the environment information sent by the first device 10, it may generate shooting parameters based on the preview photo and the environment information, where the shooting parameters may be the parameters used by the camera for shooting, for example, aperture size, shutter speed, ISO, focusing mode, focal length, white balance, exposure compensation, and the like.
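For illustration only, such a set of shooting parameters can be pictured as a single record; the following dataclass is a sketch of one possible layout, and the field names, types, and units are assumptions rather than definitions from this application.

```python
from dataclasses import dataclass

@dataclass
class ShootingParameters:
    # Illustrative fields only; names, units and value ranges are assumed.
    aperture: float                # f-number, e.g. 1.8
    shutter_speed: float           # exposure time in seconds, e.g. 1 / 125
    iso: int                       # sensitivity
    focus_mode: str                # e.g. "auto" or "manual"
    focal_length: float            # in millimetres
    white_balance: str             # e.g. "daylight", "cloudy"
    exposure_compensation: float   # in EV steps
```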
The specific process of generating the shooting parameters is shown in fig. 5, and may include the following sub-steps:
step 3031, the second device 20 extracts the features of the actual shooting scene based on the preview picture and the environmental information.
Specifically, the second device 20 may recognize the preview picture by using a preset image recognition model, so as to obtain a feature of an actual shooting scene corresponding to the preview picture, where the feature of the actual shooting scene may include a content feature and an environmental feature.
In a specific implementation, the preview picture may be input into a preset image recognition model. The preset image recognition model may be a model using a deep image-segmentation neural network; optionally, the image recognition model may also use a convolutional neural network with other image recognition functions.
Through the computation of the image recognition model on the preview photo, the content features in the preview photo can be recognized, where the content features can include subject features such as portraits, buildings, snow scenes, animals, and plants. In addition, the content features may further include the distance between the subject and the camera. The image recognition model can also determine whether the shooting scene corresponding to the preview photo is indoors or outdoors.
If the shooting scene corresponding to the preview picture is an outdoor environment, the second device 20 may extract environmental features such as weather and light from the environmental information.
In step 3032, the second device 20 determines the shooting scene category based on the obtained characteristics of the actual shooting scene.
In a specific implementation, the shooting scene categories may be preset, and the preset shooting scenes may include a plurality of categories; for example, the shooting scene categories may include category 1 (building-distant view-outdoor-sunny-bright light), category 2 (portrait-close view-outdoor-sunny-backlight), category 3 (marine museum-animal-indoor-bright light), and the like. When the shooting scene category is determined, a preset scene classification model, such as a Bayesian network model, may be used. Taking the Bayesian network model as an example, the obtained features of the actual shooting scene may be treated as observed events, so as to obtain the joint probability that the actual shooting scene belongs to each preset shooting scene category. This follows Bayesian reasoning: the more observed events that support an attribute, the greater the likelihood that the attribute is true. Finally, the shooting scene category with the highest probability is selected as the category of the current shooting scene. It should be noted that, besides the foregoing Bayesian network model, other types of probabilistic graphical models may also be used as the scene classification model, and the specific form of the scene classification model is not particularly limited in the present application.
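To make the selection step concrete, the sketch below picks the preset category with the highest joint probability given the extracted features. It substitutes a naive independence assumption for a full Bayesian network, so it is only an approximation of the approach described above; the dictionary layout and the smoothing constant 1e-6 are assumptions for this example.

```python
import math
from typing import Dict, List

def classify_scene(features: List[str],
                   likelihoods: Dict[str, Dict[str, float]],
                   priors: Dict[str, float]) -> str:
    """Return the preset scene category with the highest joint probability.

    `likelihoods` maps a category name to per-feature probabilities, e.g.
    {"category 1": {"building": 0.8, "outdoor": 0.9, ...}, ...} (assumed layout).
    """
    best_category, best_score = None, float("-inf")
    for category, feature_probs in likelihoods.items():
        # Joint probability under a naive independence assumption,
        # accumulated in log space for numerical stability.
        score = math.log(priors.get(category, 1e-6))
        for f in features:
            score += math.log(feature_probs.get(f, 1e-6))
        if score > best_score:
            best_category, best_score = category, score
    return best_category
```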
If the shooting scene corresponding to the preview photo is an indoor environment, the second device 20 may determine the shooting scene type directly according to the characteristics of the shooting scene (for example, the characteristics of the shooting scene may be the content characteristics and the environmental characteristics in the preview photo). In specific implementation, the content features and the environmental features in the preview picture may be input into a preset scene classification model, such as a bayesian network model, so as to obtain a corresponding shooting scene category.
Step 3033, the second device 20 loads a parameter decision model corresponding to the shooting scene type based on the shooting scene type, and calculates and obtains the shooting parameters by taking the preview picture as an input.
Specifically, after the second device 20 determines the shooting scene category, a parameter decision model corresponding to the shooting scene category may be loaded. Then, the preview picture may be input into the parameter decision model, and the shooting parameters corresponding to the preview picture are calculated and obtained by running the model. The parameter decision model can be obtained through deep learning and pre-training. The specific training mode may be described in the shooting parameter training method below, and is not described herein again.
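The following PyTorch-style sketch illustrates, under stated assumptions, how a per-category parameter decision model might be loaded and applied to a preview photo. The file paths, the use of TorchScript, and the 224x224 preprocessing are placeholders introduced for this example and are not specified by this application.

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical mapping from shooting scene category to a trained model file.
MODEL_PATHS = {
    "category 1": "decision_model_category1.pt",
    "category 2": "decision_model_category2.pt",
}

_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size
    transforms.ToTensor(),
])

def predict_shooting_parameters(scene_category: str, preview_path: str) -> torch.Tensor:
    # Load the parameter decision model matching the scene category
    # (assumes the model was exported as TorchScript).
    model = torch.jit.load(MODEL_PATHS[scene_category])
    model.eval()
    image = _preprocess(Image.open(preview_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        # The output vector stands for the predicted shooting parameters
        # (aperture, shutter speed, ISO, ...), in an assumed fixed order.
        return model(image).squeeze(0)
```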
In step 304, the second device 20 sends the shooting parameters to the first device 10.
In step 305, the first device 10 performs shooting using the shooting parameters.
Specifically, after receiving the shooting parameters sent by the second device 20, the first device 10 initializes the shooting configuration parameters of the camera to the shooting parameters, and can use the initialized shooting parameters to shoot. The user can also manually adjust the initialized shooting parameters. The actual shot can thus be obtained.
It is understood that, in the above embodiment, steps 301 to 305 are optional steps; this application only provides one possible embodiment, which may include more or fewer steps than steps 301 to 305, and this is not limited in this application.
It should be noted that, in an alternative embodiment, the application scenario shown in fig. 2 may not include the second device 20; that is, all of steps 301 to 305 may be executed on the first device 10. In this scenario with only the first device 10, the first device 10 may include a preset image recognition model, a preset scene classification model, and a preset parameter decision model.
Fig. 6 is a schematic flowchart of another embodiment of a shooting method provided in the present application, including:
in step 601, the first device 10 obtains the preview picture and the environment information.
Specifically, the user may turn on the camera of the first device 10, causing the first device 10 to enter a shooting mode. For example, the user may click on the camera application on the desktop of the first device 10 to open the camera, or may invoke the camera in a third-party application (e.g., social software). The embodiment of the present application does not specifically limit the manner of opening the camera.
In response to the user's operation of opening the camera, the first device 10 acquires a preview screen, where the preview screen may be a view of the current environment captured by the camera. The first device 10 may then further obtain the current preview photo. It is understood that the preview photo is a photo corresponding to the current preview screen.
Further, the first device 10 may also obtain current environment information, where the environment information may include: location, time, weather, and light. It is understood that the above environment information is only an exemplary illustration and is not a limitation on the embodiments of the present application; in some embodiments, more environment information may be included. In a specific implementation, the location information may be obtained through a Global Positioning System (GPS) in the first device 10, and the time information may be obtained from the system time of the first device 10. After the location information and the time information are obtained, weather information (e.g., sunny, cloudy, rainy, etc.) may be obtained through a weather application in the first device 10. Then, orientation information may be further acquired, where the orientation information may also be obtained by the magnetic sensor 180D and the gyroscope sensor 180B in the first device 10 and may be used to characterize the orientation of the first device 10. Further, specific light data can be derived from the weather information, where the light data can include the illumination intensity and the direction of the natural light relative to the camera (e.g., forward light, side light, backward light, etc., where the side light can be further divided into front side light, back side light, left side light, right side light, etc.).
In step 602, the first device 10 generates shooting parameters based on the preview picture and the environment information.
Specifically, after the first device 10 obtains the preview photo and the environment information, it may generate shooting parameters based on the preview photo and the environment information, where the shooting parameters may be the parameters used by the camera for shooting, such as aperture size, shutter speed, ISO, focusing mode, focal length, white balance, exposure compensation, and the like.
The specific process of generating the shooting parameters may include the following sub-steps:
in step 6021, the first device 10 extracts features of the actual shooting scene based on the preview picture and the environment information.
Specifically, the first device 10 may recognize the preview picture by using a preset image recognition model, so as to obtain a feature of an actual shooting scene corresponding to the preview picture, where the feature of the actual shooting scene may include a content feature and an environmental feature.
In a specific implementation, the preview photo may be input into a preset image recognition model. The preset image recognition model may be a model using a deep image-segmentation neural network; optionally, the image recognition model may also use a convolutional neural network with other image recognition functions. The specific type of the image recognition model is not particularly limited in the embodiments of the present application.
Through the computation of the image recognition model on the preview photo, the content features in the preview photo can be recognized, where the content features can include subject features such as portraits, buildings, snow scenes, animals, and plants. In addition, the content features may further include the distance between the subject and the camera. Next, the image recognition model can be used to determine whether the shooting scene corresponding to the preview photo is indoors or outdoors.
If the shooting scene corresponding to the preview picture is an outdoor environment, the first device 10 may extract environmental features such as weather and light from the environmental information.
In step 6022, the first device 10 determines a shooting scene category based on the acquired characteristics of the actual shooting scene.
In a specific implementation, the shooting scene categories may be preset, and the preset shooting scenes may include a plurality of categories; for example, the shooting scene categories may include category 1 (building-distant view-outdoor-sunny-bright light), category 2 (portrait-close view-outdoor-sunny-backlight), category 3 (ocean-animal-indoor-bright light), and the like. When the shooting scene category is determined, a preset scene classification model, such as a Bayesian network model, may be used. Taking the Bayesian network model as an example, the obtained features of the actual shooting scene may be treated as observed events, so as to obtain the joint probability that the actual shooting scene belongs to each preset shooting scene category. This follows Bayesian reasoning: the more observed events that support an attribute, the greater the likelihood that the attribute is true. Finally, the shooting scene category with the highest probability is selected as the category of the current shooting scene. It should be noted that, in addition to the foregoing Bayesian network model, other types of probabilistic graphical models may be used as the scene classification model; the specific form of the scene classification model is not particularly limited in the present application.
If the shooting scene corresponding to the preview picture is an indoor environment, the first device 10 may determine the shooting scene type directly according to the characteristics of the shooting scene (for example, the characteristics of the shooting scene may be the content characteristics and the environmental characteristics in the preview picture). In specific implementation, the content features and the environmental features in the preview picture may be input into a preset scene classification model, such as a bayesian network model, so as to obtain a corresponding shooting scene category.
In step 6023, the first device 10 loads a parameter decision model corresponding to the shooting scene type based on the shooting scene type, and calculates and obtains the shooting parameters using the preview picture as input.
Specifically, after the first device 10 determines the shooting scene type, a parameter decision model corresponding to the shooting scene type may be loaded. Then, the preview picture may be input into the parameter decision model, and the shooting parameters corresponding to the preview picture are calculated and obtained by running the model. Wherein, the parameter decision model can be obtained by deep learning and pre-training. The specific training mode may be described in the shooting parameter training method below, and is not described herein again.
In step 603, the first device 10 performs shooting using the shooting parameters.
Specifically, after determining the shooting parameters, the first device 10 initializes the shooting configuration parameters of the camera to the shooting parameters and can perform shooting using the initialized shooting parameters. The user may also manually adjust these recommended initial shooting parameters. The actual shot can thus be obtained.
It is understood that, in the above embodiment, steps 601 to 603 are optional steps; this application only provides one possible embodiment, which may include more or fewer steps than steps 601 to 603, and this is not limited in this application.
Next, the following describes the training process of the above-described parameter decision model in detail.
The embodiment of the present application further provides a shooting parameter training method, which is applied to a third device 30, where the third device 30 may be embodied in the form of a computer. For example, the third device 30 may be a cloud server (for example, the second device 20 described above), but is not limited to the second device 20; in some embodiments, the third device 30 may also be a local desktop computer. Alternatively, the third device 30 may be a terminal device (e.g., the first device 10 described above). Taking the third device 30 as a computer as an example, the shooting parameter training method is described below with reference to figs. 7-9. Fig. 7 is a schematic flowchart of an embodiment of a shooting parameter training method provided in the present application, including:
step 701, obtaining a sample data set.
Specifically, the sample data set may include a plurality of sample data, where each sample data may include a preview photo, a set of professional mode parameters, a shot photo, and environment information corresponding to the shot photo. The preview photo can be a photo captured from the preview screen acquired by the camera, the professional mode parameters can be the parameters set by the user in the professional mode, the shot photo can be a photo taken by the camera using the professional mode parameters, and the environment information can include information such as location, time, weather, and light. For a detailed description of the environment information, refer to step 301; details are not repeated here.
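As an illustration only, one way to picture a single sample data record is the dataclass below; the field names and types are assumptions introduced for readability, not a data format defined by this application.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class SampleData:
    # Fields mirror the description above; concrete types are assumptions.
    preview_photo: str                            # path to the preview photo
    professional_mode_params: Dict[str, float]    # parameters set by the user in professional mode
    shot_photo: str                               # path to the photo taken with those parameters
    environment_info: Dict[str, str]              # location, time, weather, light, ...
```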
Optionally, the shot photos can be screened manually and/or automatically; for example, an image aesthetics tool and an image quality evaluation tool can be used to select high-quality shot photos.
Table 1 exemplarily shows the sample data set described above.
TABLE 1

Sample data 1: preview photo 1, professional mode parameters 1, shot photo 1, environment information 1
Sample data 2: preview photo 2, professional mode parameters 2, shot photo 2, environment information 2
...
Sample data N: preview photo N, professional mode parameters N, shot photo N, environment information N
As shown in table 1, the sample data set includes N sample data, and each sample data includes a preview picture, a professional mode parameter, a shot picture, environment information, and the like.
Step 702, inputting each shot picture in the sample data set into a preset image recognition model for recognition, and obtaining content characteristics.
Specifically, the preset image recognition model may be a model using a deep image-segmentation neural network; optionally, the image recognition model may also use a convolutional neural network with other image recognition functions. The embodiment of the present application does not specifically limit the specific type of the image recognition model.
After a shot photo is recognized by the preset image recognition model, content features corresponding to the shot photo can be obtained, where the content features can include subject features such as portraits, buildings, snow scenes, animals, and plants. In addition, the content features may further include the distance between the subject and the camera. The image recognition model can also determine whether the shooting scene corresponding to the shot photo is indoors or outdoors.
Step 703, classifying the shooting scenes based on the content features to obtain the shooting scene categories.
Specifically, after the shooting environment (e.g., indoors or outdoors) is determined, shooting scene classification may be performed on each shot photo in the sample data set based on the content features, so that the shooting scene category of each shot photo may be obtained.
Fig. 8 is a flowchart illustrating the classification of the shooting scenes. As shown in fig. 8, if the shooting scene corresponding to a shot photo is an outdoor environment, the shooting scene of the shot photo can be classified based on the environmental features and the content features, so that the shooting scene category can be obtained, where the environmental features can be obtained from the environment information. In a specific implementation, the shooting scene categories may include a plurality of categories, such as category 1 (building-distant view-outdoor-sunny-bright light), category 2 (portrait-close view-outdoor-sunny-backlight), category 3 (ocean-animal-indoor-bright light), and so on.
If the shooting scene corresponding to the shot photo is an indoor environment, the shooting scene category can be determined directly according to the content features.
Step 704, constructing a training data set.
Specifically, after the shooting scene category of each shot photo is obtained, all shot photos in the sample data set may be grouped according to shooting scene category; for example, shot photos belonging to the same shooting scene category may be put into one group. After the shot photos are grouped, the corresponding preview photo and professional mode parameters can be found for each shot photo. Taking Table 1 as an example, the corresponding preview photo 1 and professional mode parameters 1 can be found through shot photo 1. In this way, multiple groups of training data can be obtained, and these groups form a training data set, where each group contains multiple training data under the same shooting scene category, and each training data includes a preview photo and professional mode parameters under that shooting scene category.
Table 2 exemplarily shows the training data set.
TABLE 2

Shooting scene category 1: (preview photo, professional mode parameters) pairs belonging to category 1
Shooting scene category 2: (preview photo, professional mode parameters) pairs belonging to category 2
...
Shooting scene category M: (preview photo, professional mode parameters) pairs belonging to category M
As shown in table 2, the training data set includes M shooting scene categories, each shooting scene category may include a plurality of training data, and each training data may include a preview photograph belonging to the shooting scene category and a professional mode parameter.
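A minimal sketch of this grouping step, assuming the SampleData layout sketched above and a mapping from shot photo to scene category produced by steps 702-703, might look as follows; the names are illustrative only.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def build_training_set(samples: Iterable,                 # SampleData records (see sketch above)
                       scene_category_of: Dict[str, str]  # shot photo -> shooting scene category
                       ) -> Dict[str, List[Tuple]]:
    """Group (preview photo, professional mode parameters) pairs by scene category."""
    training_set = defaultdict(list)
    for sample in samples:
        category = scene_category_of[sample.shot_photo]
        # Each training example keeps only the preview photo and the
        # professional mode parameters for that shooting scene category.
        training_set[category].append(
            (sample.preview_photo, sample.professional_mode_params))
    return dict(training_set)
```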
Step 705, training a preset parameter decision model based on the training data set.
Specifically, the training data set may be divided into a training set and a verification set. The distribution ratio of the training set and the verification set may be preset, and this is not particularly limited in the embodiment of the present application. Then, the training set can be input into a preset parameter decision model for training.
It should be noted that, because there are a plurality of shooting scene categories, and each shooting scene category may correspond to one parameter decision model, a plurality of parameter decision models may be trained respectively.
In a specific implementation, the preview photos in the training set may be input into the preset parameter decision model for calculation, so as to obtain predicted shooting parameters. The predicted shooting parameters may include parameters such as aperture size, shutter speed, ISO, focusing method, focal length, white balance, exposure compensation, and the like.
Fig. 9 is a diagram of the training architecture of a parameter decision model. As shown in fig. 9, when the parameter decision model for any particular shooting scene category is trained, the preview photo is the input data, and the output data is the predicted shooting parameters.
It is understood that the professional mode parameters in the training set can be used as the label data. That is, the training data in the training set may include feature data and label data. The feature data are used as inputs for calculation; for example, the feature data may include a preview photo. The label data, i.e., the professional mode parameters collected in advance, are compared with the output during the training process so that the model loss converges through training. In addition, in the training process of any parameter decision model, the objective function may be the mean square error between the predicted shooting parameters and the professional mode parameters, that is, the mean square error between the predicted data and the label data. Training then iterates over the training data repeatedly until the parameter decision model converges.
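The following sketch shows, under assumptions, what such a training loop might look like for the parameter decision model of a single shooting scene category, using PyTorch's MSE loss as the objective; the optimizer, learning rate, and epoch count are placeholders not taken from this application.

```python
import torch
from torch import nn, optim

def train_decision_model(model: nn.Module, train_loader, epochs: int = 50) -> nn.Module:
    """Train one parameter decision model for a single shooting scene category.

    `train_loader` is assumed to yield (preview_tensor, professional_params) pairs.
    """
    criterion = nn.MSELoss()                          # mean square error objective
    optimizer = optim.Adam(model.parameters(), lr=1e-4)
    model.train()
    for _ in range(epochs):
        for preview, label_params in train_loader:
            optimizer.zero_grad()
            predicted_params = model(preview)         # predicted shooting parameters
            loss = criterion(predicted_params, label_params)
            loss.backward()
            optimizer.step()
    return model
```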
Similarly, by training all the other parameter decision models in the same way, a parameter decision model corresponding to each shooting scene category can be obtained.
Further, after the parameter decision models for the different shooting scene categories are trained, verification can be performed using the verification set. If a parameter decision model meets the preset requirement after verification, the training is complete; if it does not meet the preset requirement, further training can be performed, for example, by obtaining the sample data set again and repeating steps 701 to 705.
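As a sketch only, the verification step could be expressed as a mean verification loss compared against a preset requirement; the threshold value below is an arbitrary placeholder, not a value specified by this application.

```python
import torch
from torch import nn

def meets_requirement(model: nn.Module, val_loader, threshold: float = 0.05) -> bool:
    """Return True if the mean verification loss is within the preset requirement."""
    criterion = nn.MSELoss()
    model.eval()
    total_loss, batches = 0.0, 0
    with torch.no_grad():
        for preview, label_params in val_loader:
            total_loss += criterion(model(preview), label_params).item()
            batches += 1
    return (total_loss / max(batches, 1)) <= threshold
```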
By distinguishing different shooting scene categories, the neural network's extraction of environmental features in a specific scene can be improved, the convergence of the model can be accelerated, abnormal situations such as overfitting or failure to converge can be avoided, and the adaptability of the model to the scene can be improved.
With the above shooting parameter training method, the parameter decision models can be obtained, so that the second device 20 can compute the corresponding shooting parameters from the preview photo and the environment information sent by the first device 10, thereby reducing the computation load on the first device 10 and improving the shooting quality.
Exemplary electronic devices provided in the following embodiments of the present application are further described below in conjunction with fig. 10. Fig. 10 shows a schematic structural diagram of an electronic device 1000, and the electronic device 1000 may be the third device 30 described above.
The electronic device 1000 may include: at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the method provided by the embodiments of fig. 7-9 of the present application.
Fig. 10 shows a block diagram of an exemplary electronic device 1000 suitable for implementing embodiments of the present application. The electronic device 1000 shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: one or more processors 1010, a memory 1020, a communication bus 1040 that couples the various system components (including the memory 1020 and the processors 1010), and a communication interface 1030.
Communication bus 1040 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 1000 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by an electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 1020 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. The electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media. Although not shown in fig. 10, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to the communication bus 1040 by one or more data media interfaces. Memory 1020 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility having a set (at least one) of program modules, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored in memory 1020; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules generally perform the functions and/or methods of the embodiments described in this application.
The electronic device 1000 may also communicate with one or more external devices (e.g., a keyboard, a pointing device, a display, etc.), with one or more devices that enable a user to interact with the electronic device, and/or with any devices (e.g., a network card, a modem, etc.) that enable the electronic device to communicate with one or more other computing devices. Such communication may occur over communication interface 1030. Furthermore, the electronic device 1000 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via a network adapter (not shown in fig. 10) that may communicate with other modules of the electronic device via the communication bus 1040. It should be appreciated that, although not shown in fig. 10, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Array of Independent Disks (RAID) systems, tape drives, and data backup storage systems, among others.
The processor 1010 executes programs stored in the memory 1020 to execute various functional applications and data processing, for example, to implement the shooting parameter training method provided by the embodiment of the present application.
Fig. 11 is a schematic structural diagram of an embodiment of a shooting apparatus of the present application. As shown in fig. 11, the shooting apparatus 1100 is applied to the first device 10 and may include: an acquisition module 1110, a calculation module 1120, and a shooting module 1130, where:
an obtaining module 1110, configured to obtain a preview photo and environment information;
a calculating module 1120, configured to obtain shooting parameters based on the preview picture and the environmental information;
a shooting module 1130, configured to perform shooting using the shooting parameters.
In one possible implementation manner, the calculating module 1120 is further configured to determine a shooting scene type based on the preview picture and the environmental information; and inputting the preview picture into a preset parameter decision model corresponding to the shooting scene category to obtain shooting parameters.
In one possible implementation manner, the computing module 1120 is further configured to send the preview photo and the environmental information to the second device; the preview picture and the environment information are used for the second equipment to determine shooting parameters;
and receiving the shooting parameters sent by the second equipment.
In one possible implementation, the environment information includes one or more of location information, time information, weather information, and light information.
In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focusing mode, focal length, white balance, and exposure compensation.
In one possible implementation manner, the first device includes a mobile phone or a tablet.
Fig. 12 is a schematic structural diagram of an embodiment of the shooting parameter training apparatus of the present application. As shown in fig. 12, the shooting parameter training apparatus 1200 may include: an acquisition module 1210 and a training module 1220, where:
an obtaining module 1210 configured to obtain a training data set; the training data set comprises a plurality of training data subsets of shooting scene categories, each training data subset comprises a plurality of training data, and each training data comprises a preview picture corresponding to the shooting scene category and a preset shooting parameter corresponding to the shooting scene category;
the training module 1220 is configured to train a preset parameter decision model using the training data set, where the preset parameter decision model is configured to input a preview picture and output a predicted shooting parameter.
In one possible implementation manner, the type of the shooting scene is determined by a shot picture in a sample data set, the sample data set comprises a plurality of sample data, and each sample data comprises the shot picture, a preview picture and preset shooting parameters.
In one possible implementation manner, the sample data set further includes environment information corresponding to the shot photo, and the obtaining module 1210 is further configured to identify the shot photo to obtain the content feature; determining a shooting scene based on the content features; if the shooting scene is indoor, determining the shooting scene type corresponding to each shot picture based on the content characteristics; or
If the shooting scene is outdoor, determining the shooting scene type corresponding to each shot photo based on the environmental characteristics and the content characteristics; wherein the environmental characteristics are obtained from the environmental information.
In one possible implementation manner, the preset parameter decision model includes a plurality of models, and each model corresponds to a shooting scene category.
The shooting apparatus 1100 provided in the embodiment shown in fig. 11 and the shooting parameter training apparatus 1200 provided in the embodiment shown in fig. 12 can be used to execute the technical solutions of the method embodiments shown in figs. 1 to 6 and figs. 7 to 9 of the present application, respectively; for their implementation principles and technical effects, reference may be made to the related descriptions in the method embodiments.
It should be understood that the division of the modules of the shooting apparatus shown in fig. 11 and the shooting parameter training apparatus shown in fig. 12 is merely a division of logical functions; in an actual implementation, the modules may be wholly or partially integrated into one physical entity or may be physically separate. All of these modules may be implemented in the form of software invoked by a processing element, or all of them may be implemented in the form of hardware, or some modules may be implemented in the form of software invoked by a processing element and others in the form of hardware. For example, the detection module may be a separately established processing element, or may be integrated into a chip of the electronic device. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. In implementation, each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), and the like. For another example, these modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the electronic device. In other embodiments of the present application, the electronic device may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
It is to be understood that the electronic device and the like described above include a hardware structure and/or a software module for performing each function in order to realize the functions described above. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
In the embodiment of the present application, the electronic device and the like may be divided into functional modules according to the method example, for example, each functional module may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
Through the description of the foregoing embodiments, it will be clear to those skilled in the art that, for convenience and simplicity of description, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program code, such as flash memory, removable hard drive, read-only memory, random-access memory, magnetic or optical disk, etc.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A shooting method applied to a first device is characterized by comprising the following steps:
acquiring a preview picture and environmental information;
acquiring shooting parameters based on the preview picture and the environment information;
and shooting by using the shooting parameters.
2. The method of claim 1, wherein the obtaining shooting parameters based on the preview picture and the environmental information comprises:
determining a shooting scene type based on the preview picture and the environment information;
and inputting the preview picture into a preset parameter decision model corresponding to the shooting scene category to obtain shooting parameters.
3. The method of claim 1, wherein the obtaining shooting parameters based on the preview picture and the environmental information comprises:
sending the preview picture and the environment information to second equipment; the preview picture and the environment information are used for the second equipment to determine shooting parameters;
and receiving the shooting parameters sent by the second equipment.
4. The method of any one of claims 1-3, wherein the environmental information includes one or more of location information, time information, weather information, and light information.
5. The method according to any one of claims 1 to 4, wherein the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focusing mode, focal length, white balance, and exposure compensation.
6. The method of any of claims 1-5, wherein the first device comprises a cell phone or tablet.
7. A shooting parameter training method, the method comprising:
acquiring a training data set; the training data set comprises a plurality of training data subsets of shooting scene types, each training data subset comprises a plurality of training data, and each training data comprises a preview picture corresponding to the shooting scene type and a preset shooting parameter corresponding to the shooting scene type;
and training a preset parameter decision model by using the training data set, wherein the preset parameter decision model is used for inputting a preview picture and outputting a predicted shooting parameter.
8. The method of claim 7, wherein the shooting scene category is determined from a shot photo in a sample data set, the sample data set comprising a plurality of sample data, each sample data comprising the shot photo, the preview picture, and the preset shooting parameters.
9. The method of claim 8, wherein the sample data set further includes environment information corresponding to the shot photo, and wherein determining the shooting scene category from the shot photo in the sample data set comprises:
identifying the shot picture to obtain content characteristics;
determining a shooting scene based on the content features;
if the shooting scene is indoor, determining the shooting scene type corresponding to each shot photo based on the content features; or
If the shooting scene is outdoor, determining the shooting scene type corresponding to each shot photo based on the environmental characteristics and the content characteristics; wherein the environmental characteristic is obtained from the environmental information.
10. The method according to any one of claims 7 to 9, wherein the predetermined parameter decision model comprises a plurality of models, each model corresponding to a category of the shooting scene.
11. A first device, comprising: a memory for storing computer program code, the computer program code comprising instructions that, when read from the memory by the first device, cause the first device to perform the method of any of claims 1-6.
12. A third apparatus, comprising: a memory for storing computer program code, the computer program code comprising instructions which, when read from the memory by the third device, cause the third device to perform the method of any of claims 7-10.
13. A computer readable storage medium comprising computer instructions which, when run on the first device or a third device, cause the first device to perform the method of any of claims 1-6, or cause the third device to perform the method of any of claims 7-10.