WO2023005882A1 - Photographing method, photographing parameter training method, electronic device, and storage medium - Google Patents

Photographing method, photographing parameter training method, electronic device, and storage medium

Info

Publication number: WO2023005882A1
Authority: WO (WIPO PCT)
Prior art keywords: shooting, category, shooting scene, preview, photo
Application number: PCT/CN2022/107648
Other languages: French (fr), Chinese (zh)
Inventors: 杨剑, 倪茂森, 东巍, 李扬, 苏诚, 朱洲
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Publication of WO2023005882A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/53 Constructional details of electronic viewfinders, e.g. rotatable or detachable

Definitions

  • the embodiments of the present application relate to the field of computer technology, and in particular, to a shooting method, a shooting parameter training method, electronic equipment, and a storage medium.
  • the parameters that determine a high-quality shooting result include various camera setting parameters and photo parameters, such as aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, exposure compensation, and so on.
  • the automatic shooting mode mostly relies on light metering and applies a small number of preset styles to adjust shooting parameters. Because users' environments and scenes vary greatly, setting parameters based only on light intensity causes the degree of color reproduction to differ widely across scenes, and the quality of the photos taken often cannot meet the user's requirements.
  • some devices support a professional photo mode.
  • in this mode, only the camera ISO and shutter speed are automatically adjusted according to the light intensity.
  • many other setting parameters, such as brightness and contrast, have no recommended initial values, so users need to adjust and combine them manually and repeatedly, and some parameters even have a very wide adjustment range.
  • the whole process is cumbersome, time-consuming, and inaccurate, which degrades the user experience.
  • the threshold of the professional mode is too high, and most users have limited shooting skills and professional knowledge, making it difficult to take satisfactory photos.
  • Embodiments of the present application provide a shooting method, a shooting parameter training method, an electronic device, and a storage medium, so as to improve the shooting quality.
  • the embodiment of the present application provides a shooting method applied to the first device, including:
  • the preview photos may be photos captured by the first device through a camera and displayed on a preview interface.
  • shooting parameters are determined from the preview photo and real-time information such as the environment information, and the shooting parameters are then used for shooting, which can improve the shooting quality.
  • obtaining shooting parameters based on preview photos and environmental information includes:
  • the category of the shooting scene is determined based on the preview photo and the environment information, and the preview photo is input into the preset parameter decision model corresponding to the category of the shooting scene to obtain the shooting parameters.
  • the first device calculates and obtains the shooting parameters by itself, which can improve the efficiency of obtaining the shooting parameters.
  • obtaining shooting parameters based on preview photos and environmental information includes:
  • sending the preview photo and the environment information to the second device, where the preview photo and the environment information are used by the second device to determine shooting parameters; the second device may be a server.
  • the shooting parameters sent by the second device are received.
  • the second device calculates and obtains the shooting parameters, which can reduce the calculation burden of the first device, and the second device has powerful computing capabilities, thereby improving the accuracy of the shooting parameters.
  • the environment information includes one or more of location information, time information, weather information, and light information.
  • the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focus mode, focal length, white balance, and exposure compensation.
  • the first device includes a mobile phone or a tablet.
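  • As an illustrative sketch of the on-device flow summarized above (the class names, model interface, and parameter fields are assumptions for the example, not the patent's actual implementation), the first device can be thought of as classifying the shooting scene from the preview photo and environment information, looking up the parameter decision model for that category, and letting it predict the shooting parameters:

```python
# Hedged sketch of the on-device flow; all names here are illustrative.
from dataclasses import dataclass

@dataclass
class ShootingParams:
    aperture: float               # f-number
    shutter_speed: float          # seconds
    iso: int
    focus_mode: str
    focal_length_mm: float
    white_balance_k: int
    exposure_compensation: float  # EV

def classify_scene(preview_photo, environment_info) -> str:
    """Map the preview photo plus environment info to a preset scene category."""
    # Placeholder: a real device would extract content/environment features and
    # run a scene classification model (see the classification sketch later on).
    raise NotImplementedError

def decide_parameters(preview_photo, environment_info, models) -> ShootingParams:
    category = classify_scene(preview_photo, environment_info)
    model = models[category]             # one parameter decision model per category
    return model.predict(preview_photo)  # predicted shooting parameters for capture
```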
  • the embodiment of the present application also provides a shooting parameter training method, including:
  • the training data set includes training data subsets for multiple shooting scene categories; each training data subset includes multiple pieces of training data, and each piece of training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category
  • the training data set is used to train the preset parameter decision model, where the preset parameter decision model takes preview photos as input and outputs predicted shooting parameters.
  • the category of the shooting scene is determined from the captured photos in a sample data set; the sample data set includes a plurality of sample data, and each piece of sample data includes a captured photo, a preview photo, and preset shooting parameters.
  • the sample data set also includes environmental information corresponding to the photos taken, and the category of the shooting scene is determined by the photos taken in the sample data set, including:
  • recognizing the captured photos to obtain content features and determining the shooting scene based on the content features; if the shooting scene is indoors, determining the shooting scene category corresponding to each captured photo based on the content features; or
  • determining the shooting scene category corresponding to each captured photo based on the environment features and the content features, where the environment features are obtained from the environment information.
  • the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
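  • A minimal training sketch consistent with the training method summarized above is given below; the network architecture, loss, and optimizer are assumptions for the example (the patent only states that each shooting scene category gets its own parameter decision model trained on pairs of preview photos and preset shooting parameters):

```python
# Hedged sketch: train one parameter decision model per shooting scene category.
import torch
import torch.nn as nn

class ParamDecisionNet(nn.Module):
    """Maps a preview photo to a vector of predicted shooting parameters."""
    def __init__(self, num_params: int = 7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_params)  # aperture, shutter, ISO, ...

    def forward(self, preview):
        return self.head(self.backbone(preview))

def train_category_model(loader, epochs: int = 10) -> ParamDecisionNet:
    model, loss_fn = ParamDecisionNet(), nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for preview, preset_params in loader:   # one data subset per scene category
            optimizer.zero_grad()
            loss = loss_fn(model(preview), preset_params)
            loss.backward()
            optimizer.step()
    return model

# models = {category: train_category_model(loader) for category, loader in subset_loaders.items()}
```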
  • the embodiment of the present application provides a photographing device applied to the first device, including:
  • the acquisition module is used to obtain preview photos and environmental information
  • a computing module configured to obtain shooting parameters based on preview photos and environmental information
  • the shooting module is used for shooting with shooting parameters.
  • the calculation module is further used to determine the category of the shooting scene based on the preview photo and environmental information; input the preview photo into a preset parameter decision model corresponding to the category of the shooting scene to obtain shooting parameters.
  • the calculation module is also used to send the preview photos and environmental information to the second device; wherein, the preview photos and environmental information are used by the second device to determine shooting parameters;
  • the shooting parameters sent by the second device are received.
  • the environment information includes one or more of location information, time information, weather information, and light information.
  • the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focus mode, focal length, white balance, and exposure compensation.
  • the first device includes a mobile phone or a tablet.
  • the embodiment of the present application also provides a shooting parameter training device, including:
  • the obtaining module is used to obtain a training data set, where the training data set includes training data subsets for a plurality of shooting scene categories, each training data subset includes a plurality of training data, and each piece of training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;
  • the training module is used to train the preset parameter decision model with the training data set, where the preset parameter decision model takes preview photos as input and outputs predicted shooting parameters.
  • the category of the shooting scene is determined from the captured photos in a sample data set; the sample data set includes a plurality of sample data, and each piece of sample data includes a captured photo, a preview photo, and preset shooting parameters.
  • the sample data set also includes environmental information corresponding to the photos taken
  • the acquisition module is also used to recognize the captured photos to obtain content features; determine the shooting scene based on the content features; if the shooting scene is indoors, determine the shooting scene category corresponding to each captured photo based on the content features; or
  • the shooting scene category corresponding to each shot photo is determined based on the environment feature and the content feature; wherein, the environment feature is obtained from the environment information.
  • the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
  • the embodiment of the present application provides a first device, including:
  • the above-mentioned memory is used to store computer program code, and the computer program code includes instructions; when the above-mentioned first device reads the instructions from the memory, the first device performs the following steps:
  • making the above-mentioned first device execute the step of obtaining shooting parameters based on preview photos and environmental information includes:
  • the preview photo is input into the preset parameter decision model corresponding to the category of the shooting scene to obtain the shooting parameters.
  • making the above-mentioned first device execute the step of obtaining shooting parameters based on preview photos and environmental information includes:
  • the shooting parameters sent by the second device are received.
  • the environment information includes one or more of location information, time information, weather information, and light information.
  • the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focus mode, focal length, white balance, and exposure compensation.
  • the first device includes a mobile phone or a tablet.
  • the embodiment of the present application also provides a third device, including:
  • the above-mentioned memory is used to store computer program code
  • the above-mentioned computer program code includes instructions; when the above-mentioned third device reads the instructions from the memory, the third device performs the following steps:
  • the training data set includes training data subsets for multiple shooting scene categories; each training data subset includes multiple pieces of training data, and each piece of training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category
  • the training data set is used to train the preset parameter decision-making model, wherein the preset parameter decision-making model is used to input preview photos and output predicted shooting parameters.
  • the category of the shooting scene is determined by the photos taken in the sample data set, and the sample data set includes a plurality of sample data, and each sample data includes a photo taken, a preview photo and preset shooting parameters.
  • the sample data set further includes environment information corresponding to the captured photos, and when the above-mentioned instructions are executed by the third device, the step of determining the category of the shooting scene from the captured photos includes:
  • recognizing the captured photos to obtain content features and determining the shooting scene based on the content features; if the shooting scene is indoors, determining the shooting scene category corresponding to each captured photo based on the content features; or
  • determining the shooting scene category corresponding to each captured photo based on the environment features and the content features, where the environment features are obtained from the environment information.
  • the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it is run on a computer, the computer executes the method described in the first aspect.
  • an embodiment of the present application provides a computer program, which is used to execute the method described in the first aspect when the above computer program is executed by a computer.
  • all or part of the program in the fifth aspect may be stored on a storage medium packaged with the processor, or part or all may be stored on a memory not packaged with the processor.
  • FIG. 1 is a schematic diagram of a hardware structure of an embodiment of an electronic device provided by the present application.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 3 is a schematic flow diagram of an embodiment of the shooting method provided by the present application.
  • FIG. 4 is a schematic diagram of light rays provided by the embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a shooting scene classification method provided in an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another embodiment of the shooting method provided by the present application.
  • FIG. 7 is a schematic flow diagram of an embodiment of the shooting parameter training method provided by the present application.
  • FIG. 8 is a schematic diagram of shooting scene classification provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of a shooting parameter training framework provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a hardware structure of another embodiment of an electronic device provided by the present application.
  • FIG. 11 is a schematic structural diagram of a photographing device provided in an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a shooting parameter training device provided by an embodiment of the present application.
  • first and second are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, a feature defined as “first” or “second” may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present application, unless otherwise specified, “plurality” means two or more.
  • the embodiment of the present application proposes a shooting method, which can improve the shooting quality.
  • the first device 10 may be a cellular telephone, a cordless telephone, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, handheld communication equipment, handheld computing equipment, satellite wireless equipment, Customer Premise Equipment (CPE), and/or other equipment used to communicate over wireless systems and next-generation communication systems, for example, a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (Public Land Mobile Network, PLMN).
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 , which may be the above-mentioned first device 10 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (subscriber identification module, SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • processor 110 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193, and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (wireless local area networks, WLAN) (such as a wireless fidelity (Wireless Fidelity, Wi-Fi) network), Bluetooth (bluetooth, BT), a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (code division multiple access, CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC , FM, and/or IR techniques, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a Beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted through the lens to the photosensitive element of the camera, where the light signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A also referred to as a "horn" is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • the microphone 170C, also called a “mic”, is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • a capacitive pressure sensor may be comprised of at least two parallel plates of conductive material.
  • the electronic device 100 determines the intensity of pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
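  • For illustration only, a common approximation for converting a pressure reading into an altitude estimate is the international barometric formula; the patent does not specify which conversion is used, so the constants below are the standard sea-level values rather than values from the application:

```python
# Hedged sketch: barometric altitude estimate; constants are the usual sea-level
# reference values used with consumer pressure sensors, not values from the patent.
def altitude_from_pressure(pressure_pa: float, sea_level_pa: float = 101_325.0) -> float:
    """Approximate altitude in meters from the measured air pressure."""
    return 44_330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

print(round(altitude_from_pressure(95_000.0)))  # roughly 540 m
```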
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • when the electronic device 100 is a clamshell device, the electronic device 100 can detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • based on the detected opening and closing state, features such as automatic unlocking of the flip cover can then be set.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from being shut down abnormally due to the low temperature.
  • the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called “touch device”.
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
  • touch operations applied to different application scenarios (for example, time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected to or separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out of the SIM card interface 195.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • FIG. 2 is a schematic diagram of an application scenario of an embodiment of the present application.
  • the above application scenario includes a first device 10 and a second device 20, wherein the second device 20 may be a cloud server.
  • the second device 20 can be used to provide the first device 10 with the parameters of the current shooting.
  • FIG. 3 is a schematic flow diagram of an embodiment of the shooting method provided by the present application; the method includes:
  • step 301 the first device 10 acquires preview photos and environment information.
  • the user may turn on the camera of the first device 10, so that the first device 10 enters a shooting mode.
  • the user can click the camera application program on the desktop of the first device 10 to open the camera, or call the camera in third-party application software (e.g., social software).
  • the first device 10 acquires a preview image, where the preview image may be an image of the current environment captured by the current camera.
  • the first device 10 may further acquire the current preview photo. It can be understood that the above preview photo is a photo corresponding to the current preview image.
  • the first device 10 may also acquire current environment information, where the environment information may include information such as location, time, weather, and light.
  • the above environment information is only an illustration, and does not constitute a limitation to the embodiment of the present application, and in some embodiments, more environment information may be included.
  • the above location information may be obtained through a Global Positioning System (Global Positioning System, GPS) in the first device 10.
  • the above time information can be obtained through the system time of the first device 10 .
  • the above weather information (for example, sunny, cloudy, or rainy) may be obtained through a weather application in the first device 10.
  • orientation information may be further acquired, wherein the orientation information may be obtained through the magnetic sensor 180D and the gyro sensor 180B in the first device 10 , and the orientation information may be used to characterize the orientation of the first device 10 .
  • specific light data can be obtained from the above weather information, where the light data can include the light intensity and the direction of natural light relative to the camera (for example, front light, side light, back light, etc., where side light can be further divided into front side light, rear side light, left side light, right side light, etc.).
  • the above-mentioned light intensity (unit: lux) of the shooting environment may be acquired by the ambient light sensor 180L of the first device 10. If the weather information indicates a sunny day, the direction of natural light relative to the camera can be further calculated. The calculation is as follows: first, the sun azimuth is obtained from the geographic location and time information; then the camera in use (front or back) is combined with the orientation of the first device 10 obtained above to obtain the direction of the camera 193; finally, the relative position of the sun azimuth and the camera direction is obtained, as shown in FIG. 4, from which the category of the direction of sunlight relative to the camera 193 can be obtained, where the direction category may be front light, side light, back light, etc.
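  • The relative light-direction step above can be illustrated with a small sketch; the azimuth convention (degrees clockwise from north) and the 45°/135° thresholds are assumptions for the example, since the patent only states that the sun azimuth and the camera direction are combined to obtain a direction category:

```python
# Hedged sketch: bucket the angle between the sun azimuth and the camera
# direction into the light-direction categories mentioned above.
def light_direction_category(sun_azimuth_deg: float, camera_azimuth_deg: float) -> str:
    diff = abs(sun_azimuth_deg - camera_azimuth_deg) % 360.0
    if diff > 180.0:
        diff = 360.0 - diff          # smallest angle between the two directions
    if diff <= 45.0:
        return "back light"          # sun roughly in front of the lens
    if diff >= 135.0:
        return "front light"         # sun roughly behind the photographer
    return "side light"

print(light_direction_category(sun_azimuth_deg=120.0, camera_azimuth_deg=300.0))  # front light
```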
  • Step 302 the first device 10 sends the aforementioned preview photo and environment information to the second device 20 .
  • the first device 10 may send the preview photo and environment information to the second device 20 .
  • the above-mentioned first device 10 can be connected with the second device 20 through a mobile communication network (for example, 4G, 5G, etc.) or a local wireless network (for example, Wi-Fi), so that the first device 10 can send the preview photo and the environment information to the second device 20 over the above-mentioned mobile communication network or local wireless network.
  • step 303 the second device 20 generates shooting parameters based on the preview photo and environment information.
  • after the second device 20 receives the preview photo and the environment information sent by the first device 10, it can generate shooting parameters based on the preview photo and the environment information, where the shooting parameters may be the parameters used by the camera for shooting, such as aperture size, shutter speed, ISO, focus mode, focal length, white balance, and exposure compensation. It can be understood that the above parameter examples are only illustrative and do not constitute a limitation to the embodiment of the present application; in some embodiments, more or fewer parameters may be included.
  • step 3031 the second device 20 extracts features of the actual shooting scene based on the aforementioned preview photos and environmental information.
  • the second device 20 can use a preset image recognition model to identify the above-mentioned preview photo, thereby obtaining the characteristics of the actual shooting scene corresponding to the above-mentioned preview photo, wherein the characteristics of the actual shooting scene can include content characteristics and environmental characteristics.
  • the above-mentioned preview photo can be input into a preset image recognition model.
  • the preset image recognition model may be a model using a deep image segmentation neural network.
  • the above-mentioned image recognition model may also use a convolutional neural network with other image recognition functions.
  • the specific type of the image recognition model is not specifically limited in this application.
  • the content features in the above-mentioned preview photo can be identified.
  • the content features may include main features such as portraits, buildings, snow scenes, animals, and plants.
  • the above-mentioned content feature may also include the distance between the above-mentioned subject and the camera.
  • through the image recognition model, it can also be determined whether the shooting scene corresponding to the preview photo is indoors or outdoors.
  • the second device 20 may extract environmental features such as weather and light from the environmental information.
  • step 3032 the second device 20 determines the category of the shooting scene based on the acquired features of the actual shooting scene.
  • the above-mentioned shooting scene category can be preset, and the preset shooting scenes can include multiple categories; for example, the above-mentioned shooting scene categories can include category 1 (building-distant view-outdoor-sunny-strong light), category 2 (portrait-close view-outdoor-sunny-backlight), category 3 (aquarium-animal-indoor-dark light), etc.
  • through a preset scene classification model, such as a Bayesian network model, the acquired features of the actual shooting scene may be used as events that have occurred, so as to obtain the joint probability that the actual shooting scene belongs to each preset shooting scene category.
  • According to Bayesian theory, the more events that support a certain property, the greater the possibility that the property holds. Finally, the shooting scene category with the highest probability is selected as the category of the current shooting scene. It should be noted that, in addition to the above-mentioned Bayesian network model, other types of probabilistic graphical network models can also be used as the scene classification model, and this application does not specifically limit the specific form of the above-mentioned scene classification model.
  • the second device 20 may directly determine the shooting scene category according to the characteristics of the shooting scene (for example, the characteristics of the shooting scene may be the content characteristics and environmental characteristics in the above-mentioned preview photo).
  • the content features and environmental features in the preview photo can be input into a preset scene classification model, such as a Bayesian network model, so that the corresponding shooting scene category can be obtained.
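The joint-probability selection described above can be pictured with a minimal naive-Bayes-style sketch. The feature names, the prior and conditional probability tables, and the category labels below are invented for illustration only; the embodiment itself only requires a probabilistic graphical model such as a Bayesian network.

```python
import math

# Hypothetical per-category statistics learned beforehand:
# prior P(category) and conditional P(feature | category).
PRIORS = {"cat1_building_outdoor": 0.40, "cat2_portrait_backlight": 0.35, "cat3_aquarium_indoor": 0.25}
CONDITIONALS = {
    "cat1_building_outdoor":  {"subject=building": 0.7, "place=outdoor": 0.9, "weather=sunny": 0.6},
    "cat2_portrait_backlight": {"subject=portrait": 0.8, "place=outdoor": 0.7, "light=backlight": 0.6},
    "cat3_aquarium_indoor":   {"subject=animal": 0.6, "place=indoor": 0.9, "light=dark": 0.7},
}

def classify_scene(observed_features):
    """Pick the shooting scene category with the highest joint probability,
    treating each extracted feature as an event that has occurred."""
    best_category, best_log_prob = None, -math.inf
    for category, prior in PRIORS.items():
        log_prob = math.log(prior)
        for feature in observed_features:
            # Unseen feature/category pairs get a small smoothing probability.
            log_prob += math.log(CONDITIONALS[category].get(feature, 1e-3))
        if log_prob > best_log_prob:
            best_category, best_log_prob = category, log_prob
    return best_category

# Example: features extracted from the preview photo and the environment information.
print(classify_scene(["subject=portrait", "place=outdoor", "light=backlight"]))
```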
  • step 3033 the second device 20 loads a parameter decision model corresponding to the category of the shooting scene based on the category of the shooting scene, and uses the preview photo as an input to calculate and obtain shooting parameters.
  • the second device 20 may load a parameter decision model corresponding to the category of the shooting scene.
  • the preview photo may be input into the parameter decision-making model, the model is run, and shooting parameters corresponding to the preview photo are obtained through calculation.
  • the parameter decision model can be obtained in advance through deep learning pre-training. The specific training method will be described in the shooting parameter training method below, and will not be repeated here.
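The per-category lookup in step 3033 can be summarised with a tiny sketch; the registry and the `predict()` call below are placeholders for whatever model format is actually deployed, not an API defined by this embodiment.

```python
# Hypothetical registry: one pre-trained parameter decision model per
# shooting scene category (cf. step 3032).
decision_models = {}   # e.g. {"cat2_portrait_backlight": some_loaded_model}

def compute_shooting_parameters(scene_category, preview_photo):
    # Load the parameter decision model corresponding to the shooting scene
    # category and run it on the preview photo to obtain the shooting parameters.
    model = decision_models[scene_category]
    return model.predict(preview_photo)
```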
  • Step 304 the second device 20 sends the aforementioned shooting parameters to the first device 10 .
  • step 305 the first device 10 uses the above shooting parameters to shoot.
  • after receiving the shooting parameters sent by the second device 20, the first device 10 initializes the shooting configuration parameters of the camera to the above shooting parameters, and can use the above initialized shooting parameters for shooting.
  • the user can also manually adjust the above-mentioned shooting parameters after initialization. Thereby, an actual photograph can be obtained.
  • step 301 to step 305 are all optional steps; this application only provides a feasible embodiment, which may also include more or fewer steps than step 301 to step 305, and this application is not limited thereto.
  • the first device 10 may include a preset image recognition model, a scene classification model, and a parameter decision model.
  • Fig. 6 is a schematic flowchart of another embodiment of the shooting method provided by the present application, including:
  • step 601 the first device 10 acquires preview photos and environment information.
  • the user may turn on the camera of the first device 10, so that the first device 10 enters a shooting mode.
  • the user can click the camera application program on the desktop of the first device 10 to open the camera, or call the camera in third-party application software (eg, social software).
  • the first device 10 acquires a preview image, where the preview image may be an image of the current environment captured by the current camera.
  • the first device 10 may further acquire the current preview photo. It can be understood that the above preview photo is a photo corresponding to the current preview image.
  • the first device 10 may also acquire current environment information, where the environment information may include information such as location, time, weather, and light.
  • the above environment information is only an illustration, and does not constitute a limitation to the embodiment of the present application, and in some embodiments, more environment information may be included.
  • the above location information may be obtained through a Global Positioning System (GPS) in the first device 10.
  • the above time information can be obtained through the system time of the first device 10 .
  • the above weather information (for example, sunny, cloudy, or rainy, etc.) may be obtained through a weather application in the first device 10.
  • orientation information may be further obtained, wherein the above orientation information may also be obtained through the magnetic sensor 180D and the gyro sensor 180B in the first device 10 , and the above orientation information may be used to characterize the orientation of the first device 10 .
  • specific light data can be obtained through the above meteorological information, wherein the light data can include light intensity and the direction of natural light relative to the camera (for example, front light, side light, back light, etc., wherein side light can be divided into front side light, rear side light, left light, right light, etc.).
  • step 602 the first device 10 generates shooting parameters based on the preview photo and environment information.
  • the first device 10 may generate shooting parameters based on the above-mentioned preview photo and environment information, wherein the shooting parameters may be the corresponding parameters used by the camera for shooting, for example, aperture size, shutter speed, ISO, focus mode, focal length, white balance, exposure compensation and other parameters. It can be understood that the above parameter examples are only illustrative and do not constitute a limitation to the embodiments of the present application; in some embodiments, more or fewer parameters may be included.
  • the above specific process of generating shooting parameters may include the following sub-steps:
  • step 6021 the first device 10 extracts the features of the actual shooting scene based on the above-mentioned preview photos and environmental information.
  • the first device 10 may use a preset image recognition model to identify the above-mentioned preview photo, thereby obtaining the features of the actual shooting scene corresponding to the above-mentioned preview photo, wherein the features of the actual shooting scene may include content characteristics and environmental characteristics.
  • the above-mentioned preview photo can be input into a preset image recognition model.
  • the preset image recognition model may be a model using a deep image segmentation neural network.
  • the above-mentioned image recognition model may also use a convolutional neural network with other image recognition functions.
  • the specific type of the image recognition model is not specifically limited in this application.
  • the content features in the above-mentioned preview photo can be identified.
  • the content features may include main features such as portraits, buildings, snow scenes, animals, and plants.
  • the above-mentioned content feature may also include the distance between the above-mentioned subject and the camera.
  • through the image recognition model, it can also be determined whether the shooting scene corresponding to the preview photo is indoors or outdoors.
  • the first device 10 may extract environmental features such as weather and light from the environmental information.
  • Step 6022 the first device 10 determines the category of the shooting scene based on the acquired features of the actual shooting scene.
  • the above-mentioned shooting scene category can be preset, and the preset shooting scenes can include multiple categories; for example, the above-mentioned shooting scene categories can include category 1 (building-distant view-outdoor-sunny-strong light), category 2 (portrait-close view-outdoor-sunny-backlight), category 3 (aquarium-animal-indoor-dark light), etc.
  • through a preset scene classification model, such as a Bayesian network model, the acquired features of the actual shooting scene may be used as events that have occurred, so as to obtain the joint probability that the actual shooting scene belongs to each preset shooting scene category.
  • the first device 10 may directly determine the shooting scene category according to the characteristics of the shooting scene (for example, the characteristics of the shooting scene may be the content characteristics and environmental characteristics in the above-mentioned preview photo).
  • the content features and environmental features in the preview photo can be input into a preset scene classification model, such as a Bayesian network model, so that the corresponding shooting scene category can be obtained.
  • Step 6023 based on the shooting scene category, the first device 10 loads a parameter decision model corresponding to the shooting scene category, and uses the preview photo as input to calculate and obtain shooting parameters.
  • the first device 10 may load a parameter decision model corresponding to the category of the shooting scene.
  • the preview photo may be input into the parameter decision-making model, the model is run, and shooting parameters corresponding to the preview photo are obtained through calculation.
  • the parameter decision model can be obtained in advance through deep learning pre-training. The specific training method will be described in the shooting parameter training method below, and will not be repeated here.
  • step 603 the first device 10 uses the above shooting parameters to shoot.
  • the first device 10 initializes the shooting configuration parameters of the camera to the above-mentioned shooting parameters, and can use the above-mentioned initialized shooting parameters to perform shooting. The user can also manually adjust the above recommended initialized shooting parameters. Thereby, an actual photograph can be obtained.
  • step 601 to step 603 are all optional steps; this application only provides a feasible embodiment, which may also include more or fewer steps than step 601 to step 603, and this application is not limited thereto.
  • the embodiment of the present application also provides a shooting parameter training method, which is applied to a third device 30.
  • the third device 30 may be embodied in the form of a computer.
  • the third device 30 may be a cloud server (for example, the aforementioned second device 20), but is not limited to the second device 20; in some embodiments, the third device 30 may also be a local desktop computer.
  • the third device 30 may be a terminal device (for example, the above-mentioned first device 10).
  • FIG. 7 is a schematic flow diagram of an embodiment of the shooting parameter training method provided by the present application, including:
  • Step 701 acquire a sample data set.
  • the above sample data set may include multiple pieces of sample data, wherein each piece of sample data may include a preview photo, a set of professional mode parameters, a taken photo and environmental information corresponding to the taken photo.
  • the preview photo may be a photo in the preview screen collected by the camera
  • the professional mode parameter may be a parameter set by the user in the professional mode
  • the captured photo may be a photo obtained by the camera using the above professional mode parameter
  • the environment information can include information such as location, time, weather and light.
  • the above-mentioned photographs can be screened manually and/or by machine.
  • image aesthetic tools and image quality evaluation tools can be used to screen the above-mentioned photographs, so that high-quality photographs can be selected.
  • Table 1 exemplarily shows the above sample data set.
  • the above sample data set includes N sample data, and each sample data includes preview photos, professional mode parameters, taken photos, and environmental information.
  • step 702 input each photograph taken in the above sample data set into a preset image recognition model for recognition to obtain content features.
  • the preset image recognition model may be a model using a deep image segmentation neural network.
  • the above-mentioned image recognition model may also use a convolutional neural network with other image recognition functions.
  • the specific type of the model is not particularly limited.
  • content features corresponding to the aforementioned photographs can be obtained, wherein the content features can include subject features such as portraits, buildings, snow scenes, animals, plants, etc.
  • the above-mentioned content feature may also include the distance between the above-mentioned subject and the camera.
  • through the above image recognition model, it can also be determined whether the shooting scene corresponding to the above photo is indoors or outdoors.
  • Step 703 classify the shooting scene based on the content feature, and obtain the shooting scene category.
  • classification of the shooting scene of each photo taken in the above-mentioned sample data set can be performed based on the above-mentioned content features, so that the shooting scene category of each taken photo can be obtained.
  • FIG. 8 is a schematic flow chart of the above shooting scene classification, as shown in FIG. 8 .
  • if the shooting scene is outdoor, the shooting scene of the above-mentioned photo can be classified based on the environment features and content features, thereby obtaining the shooting scene category, wherein the above-mentioned environment features can be obtained from the above-mentioned environment information.
  • the above shooting scene categories may include multiple categories, for example, category 1 (building-distant view-outdoor-sunny-strong light), category 2 (portrait-close view-outdoor-sunny-backlight), category 3 (aquarium-animal-indoor-dark light), etc.
  • if the shooting scene is indoor, the shooting scene category may be determined directly according to the content features.
  • Step 704 construct a training data set.
  • the taken photos can be grouped according to their shooting scene categories; for example, photos of the same shooting scene category can be grouped together.
  • the corresponding preview photo and professional mode parameters can be found according to each taken photo. For example, taking Table 1 as an example, the corresponding preview photo 1 and professional mode parameters 1 can be found for taken photo 1.
  • in this way, multiple sets of training data can be obtained, and the multiple sets of training data constitute a training data set, wherein each set of training data includes a plurality of training data under the same shooting scene category, and each training data includes a preview photo and professional mode parameters under that shooting scene category.
  • Table 2 exemplarily shows the above training data set.
  • the above-mentioned training data set includes M shooting scene categories, and each shooting scene category can include multiple training data, and each training data can include preview photos and professional mode parameters belonging to the shooting scene category .
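Steps 701-704 can be summarised by the following sketch, which groups the sample data of Table 1 by shooting scene category to form the training data set of Table 2. The record field names and the `classify_scene_category()` helper are assumptions used only to make the grouping explicit; the actual feature extraction and classification are those of steps 702-703.

```python
from collections import defaultdict

def classify_scene_category(taken_photo, environment):
    # Placeholder for steps 702-703: image recognition on the taken photo
    # plus environment features -> shooting scene category.
    raise NotImplementedError

def build_training_set(sample_data):
    """sample_data: list of records, each holding a preview photo, professional
    mode parameters, the taken photo and the corresponding environment information."""
    training_set = defaultdict(list)   # shooting scene category -> training data
    for record in sample_data:
        category = classify_scene_category(record["taken_photo"], record["environment"])
        training_set[category].append({
            "preview_photo": record["preview_photo"],
            "professional_mode_params": record["professional_mode_params"],  # label data
        })
    return training_set
```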
  • Step 705 based on the above training data set, train the preset parameter decision model.
  • the above training data set may be divided into a training set and a verification set.
  • the distribution ratio of the training set and the verification set may be preset, which is not specifically limited in this embodiment of the present application.
  • the above training set can be input into a preset parameter decision model for training.
  • each shooting scene category can correspond to a parameter decision model
  • multiple parameter decision models can be trained respectively.
  • the preview photos in the above training set can be input into the above-mentioned preset parameter decision-making model for calculation, so that the predicted shooting parameters can be obtained.
  • the preview photos input above can be data in YUV format, or in RGB format, which is not specifically limited in this embodiment of the present application.
  • the above-mentioned predicted shooting parameters may include parameters such as aperture size, shutter speed, ISO, focusing mode, focal length, white balance, exposure compensation and the like.
  • Fig. 9 is a schematic diagram of the training architecture of the parameter decision model. As shown in FIG. 9 , when training the parameter decision model of any specific shooting scene category, the preview photo is the input data, and the output data is the predicted shooting parameters.
  • the professional mode parameters in the above training set can be used as label data.
  • the training data in the above training set may include feature data and label data.
  • the feature data can be used for input and calculation, for example, the feature data can include a preview photo and the like.
  • the label data can be used for comparison with the output during the training process, so that the loss of the model can converge through training; here, the label data are the pre-identified professional mode parameters.
  • the objective function may be the mean square error of the predicted shooting parameters and the professional mode parameters, that is, the mean square error of the predicted data and the label data.
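Written out explicitly, the objective described above is the mean square error between the predicted shooting parameters and the professional mode parameters (the label data). Over a batch of n training samples it can be expressed, for example, as:

```latex
\mathcal{L} = \frac{1}{n}\sum_{i=1}^{n}\left\lVert \hat{y}_i - y_i \right\rVert_2^2
```

where \hat{y}_i denotes the shooting parameters predicted from the i-th preview photo and y_i the corresponding professional mode parameters; how the individual parameters (aperture, shutter speed, ISO, etc.) are scaled or weighted inside this error is an implementation choice not specified by this embodiment.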
  • a parameter decision-making model corresponding to the shooting scene category can be obtained.
  • the training can also be verified through the above-mentioned verification set. If the preset requirements are met after verification, the training is completed. If the preset requirements are not met after verification, further training can be performed, for example, the sample data set can be reacquired and steps 701-705 repeated for retraining.
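A minimal PyTorch-style sketch of step 705 is shown below, training one parameter decision model for a single shooting scene category with a mean-square-error objective. The network architecture, optimiser settings, batch size and tensor shapes are illustrative assumptions rather than the architecture used by this embodiment.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_parameter_decision_model(previews, labels, epochs=20, lr=1e-3):
    """previews: float tensor of preview photos (N, C, H, W) for one scene
    category; labels: float tensor of professional mode parameters (N, P)."""
    model = nn.Sequential(               # placeholder backbone
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, labels.shape[1]),  # one output per shooting parameter
    )
    loader = DataLoader(TensorDataset(previews, labels), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()               # mean square error vs. the label data
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)  # predicted vs. professional mode parameters
            loss.backward()
            optimizer.step()
    return model

# One model is trained per shooting scene category (cf. Table 2), e.g.:
# models[category] = train_parameter_decision_model(previews_c, labels_c)
```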
  • in this way, the neural network can improve the extraction of environmental features in specific scenes, accelerate the convergence process of the model, and avoid abnormal situations such as over-fitting or failure to converge, thereby improving the adaptability of the model to the scene.
  • through the above training, the above-mentioned parameter decision model can be obtained, so that the second device 20 can perform calculation, based on the above-mentioned parameter decision model, on the preview photo and environment information sent by the first device 10 to obtain the corresponding shooting parameters, thereby reducing the calculation load of the first device 10 and improving the shooting quality.
  • FIG. 10 shows a schematic structural diagram of an electronic device 1000 , which may be the above-mentioned third device 30 .
  • the above-mentioned electronic device 1000 may include: at least one processor; and at least one memory communicatively connected to the above-mentioned processor, wherein the above-mentioned memory stores program instructions executable by the above-mentioned processor, and the processor calls the above-mentioned program instructions to execute the methods provided in the embodiments of the present application.
  • FIG. 10 shows a block diagram of an exemplary electronic device 1000 suitable for implementing embodiments of the present application.
  • the electronic device 1000 shown in FIG. 10 is only an example, and should not limit the functions and scope of use of the embodiments of the present application.
  • electronic device 1000 takes the form of a general-purpose computing device.
  • Components of electronic device 1000 may include, but are not limited to: one or more processors 1010 , memory 1020 , communication bus 1040 connecting different system components (including memory 1020 and processor 1010 ), and communication interface 1030 .
  • Communication bus 1040 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
  • Electronic device 1000 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the electronic device and include both volatile and nonvolatile media, removable and non-removable media.
  • the memory 1020 may include a computer system-readable medium in the form of a volatile memory, such as a random access memory (Random Access Memory; hereinafter referred to as RAM) and/or a cache memory.
  • the electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • a disk drive for reading and writing to a removable nonvolatile disk (such as a "floppy disk") may be provided, as well as an optical disc drive for reading and writing to a removable nonvolatile optical disc (such as a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM) or other optical media).
  • each drive may be connected to communication bus 1040 through one or more data media interfaces.
  • the memory 1020 may include at least one program product, which has a set of (for example, at least one) program modules configured to execute the functions of the various embodiments of the present application.
  • a program/utility having a set (at least one) of program modules may be stored in the memory 1020; such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment.
  • the program modules generally perform the functions and/or methods in the embodiments described herein.
  • the electronic device 1000 may also communicate with one or more external devices (such as a keyboard, a pointing device, a display, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any device (such as a network card, a modem, etc.) that enables the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through the communication interface 1030.
  • the electronic device 1000 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) through a network adapter, and the network adapter can communicate with the other modules of the electronic device 1000 through the communication bus 1040.
  • the processor 1010 executes various functional applications and data processing by running the programs stored in the memory 1020, for example, implementing the shooting parameter training method provided in the embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an embodiment of the photographing device of the present application. As shown in FIG. 11, the above-mentioned photographing device 1100 is applied to the first device 10, and may include: an acquisition module 1110, a calculation module 1120, and a photographing module 1130; wherein,
  • an obtaining module 1110, configured to obtain preview photos and environment information;
  • a calculation module 1120, configured to obtain shooting parameters based on the preview photos and the environment information;
  • a shooting module 1130, configured to shoot using the shooting parameters.
  • the calculation module 1120 is further configured to determine the category of the shooting scene based on the preview photo and environmental information; input the preview photo into a preset parameter decision model corresponding to the category of the shooting scene to obtain shooting parameters.
  • the calculation module 1120 is further configured to send the preview photo and environment information to the second device; where the preview photo and environment information are used by the second device to determine shooting parameters;
  • the shooting parameters sent by the second device are received.
  • the environment information includes one or more of location information, time information, weather information, and light information.
  • the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focus mode, focal length, white balance, and exposure compensation.
  • the first device includes a mobile phone or a tablet.
  • Fig. 12 is a schematic structural diagram of an embodiment of the shooting parameter training device of the present application.
  • the shooting parameter training device 1200 may include: an acquisition module 1210 and a training module 1220; wherein,
  • the obtaining module 1210 is used to obtain a training data set; wherein the training data set includes training data subsets of multiple shooting scene categories, each training data subset includes a plurality of training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;
  • the training module 1220 is configured to use the training data set to train the preset parameter decision model, wherein the preset parameter decision model is used to input preview photos and output predicted shooting parameters.
  • the category of the shooting scene is determined by the photos taken in the sample data set, and the sample data set includes a plurality of sample data, and each sample data includes a photo taken, a preview photo and preset shooting parameters.
  • the sample data set also includes environmental information corresponding to the photos taken
  • the acquisition module 1210 is also used to identify the taken photos to obtain content features; determine the shooting scene based on the content features; and, if the shooting scene is indoor, determine the shooting scene category corresponding to each taken photo based on the content features; or
  • if the shooting scene is outdoor, the shooting scene category corresponding to each taken photo is determined based on the environment features and the content features; wherein the environment features are obtained from the environment information.
  • the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
  • the shooting device 1100 provided by the embodiment shown in FIG. 11 and the shooting parameter training device 1200 provided by the embodiment shown in FIG. 12 can be used to implement the technical solutions of the method embodiments shown in FIGS. 1-6 and FIGS. 7-9 of this application, respectively; for their implementation principles and technical effects, further reference may be made to the relevant descriptions in the method embodiments.
  • each module of the shooting device shown in FIG. 11 and of the shooting parameter training device shown in FIG. 12 may be wholly or partially integrated into one physical entity, or may be physically separated.
  • these modules can all be implemented in the form of software called by the processing element; they can also be implemented in the form of hardware; some modules can also be implemented in the form of software called by the processing element, and some modules can be implemented in the form of hardware.
  • the detection module may be a separately established processing element, or may be integrated into a certain chip of the electronic device for implementation.
  • the implementation of other modules is similar.
  • all or part of these modules can be integrated together, and can also be implemented independently.
  • each step of the above method or each module above can be completed by an integrated logic circuit of hardware in the processor element or an instruction in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, for example, one or more application-specific integrated circuits (ASIC), or one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA), etc.
  • these modules can be integrated together and implemented in the form of a System-On-a-Chip (hereinafter referred to as SOC).
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device.
  • the electronic device may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the above-mentioned electronic devices include corresponding hardware structures and/or software modules for performing each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software in combination with the example units and algorithm steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the embodiments of the present application.
  • the embodiment of the present application may divide the above-mentioned electronic equipment into functional modules according to the above-mentioned method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • Each functional unit in each embodiment of the embodiment of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk, and other various media capable of storing program codes.

Abstract

Embodiments of the present application provide a photographing method, a photographing parameter training method, an electronic device, and a storage medium, which relate to the technical field of computers. The method comprises: acquiring a preview photo and environment information; obtaining a photographing parameter on the basis of the preview photo and the environment information; and photographing using the photographing parameter. The methods provided in the embodiments of the present application can improve photographing quality.

Description

Shooting method, shooting parameter training method, electronic device and storage medium

This application claims priority to the Chinese patent application No. 202110861888.8, entitled "Shooting method, shooting parameter training method, electronic device and storage medium", filed with the China National Intellectual Property Administration on July 29, 2021, the entire contents of which are incorporated herein by reference.

Technical Field

The embodiments of the present application relate to the field of computer technology, and in particular, to a shooting method, a shooting parameter training method, an electronic device, and a storage medium.

Background

With the continuous improvement of the performance of terminal software and hardware, the camera function of terminals is becoming more and more powerful. As mobile phones are a type of terminal commonly used in daily life, users have increasingly high demands for taking photos with them. The parameters that determine a high-quality shooting effect include various camera setting parameters and photo parameters, such as aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, exposure compensation and so on. In daily shooting, whether in automatic photo mode or professional photo mode, only by setting the camera shooting parameters quickly and accurately can photos that satisfy the user be taken.

Currently, commonly used shooting modes include an automatic mode and a professional mode. The automatic photo mode mostly uses light metering and applies a small number of style types to adjust the shooting parameters. Since the environments and scenes of users vary greatly, this way of setting parameters mainly based on light intensity leads to large differences in color reproduction in different scenes, and the quality of the photos taken often cannot meet users' requirements.

In addition, in order to achieve a certain shooting effect and meet users' needs, some devices support a professional photo mode. In this mode, only the camera ISO and shutter speed are adjusted automatically according to the light intensity; many other setting parameters such as white balance, exposure compensation, saturation and contrast have no recommended initial values, so the user needs to adjust and combine them manually and repeatedly, and some parameters even have a very large adjustable range. The whole process is cumbersome, time-consuming and inaccurate, which degrades the user experience. The threshold of the professional mode is too high; most users have limited shooting skills and professional knowledge and find it difficult to take satisfactory photos.

Summary of the Invention
The embodiments of the present application provide a shooting method, a shooting parameter training method, an electronic device and a storage medium, so as to provide a way of shooting that can improve shooting quality.

In a first aspect, an embodiment of the present application provides a shooting method, applied to a first device, including:

acquiring a preview photo and environment information; wherein the preview photo may be a photo collected by the first device through a camera and displayed in a preview interface;

obtaining shooting parameters based on the preview photo and the environment information; and shooting using the shooting parameters.

In the embodiments of the present application, the shooting parameters are determined from real-time information such as the preview photo and the environment information, and shooting is performed using these shooting parameters, which can improve the shooting quality.

In one possible implementation, obtaining the shooting parameters based on the preview photo and the environment information includes:

determining a shooting scene category based on the preview photo and the environment information;

inputting the preview photo into a preset parameter decision model corresponding to the shooting scene category to obtain the shooting parameters.

In the embodiments of the present application, the first device calculates the shooting parameters by itself, which can improve the efficiency of obtaining the shooting parameters.

In one possible implementation, obtaining the shooting parameters based on the preview photo and the environment information includes:

sending the preview photo and the environment information to a second device, where the preview photo and the environment information are used by the second device to determine the shooting parameters, and the second device may be a server;

receiving the shooting parameters sent by the second device.

In the embodiments of the present application, the second device calculates the shooting parameters, which can reduce the calculation burden of the first device; and since the second device has powerful computing capabilities, the accuracy of the shooting parameters can also be improved.

In one possible implementation, the environment information includes one or more of location information, time information, weather information and light information.

In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance and exposure compensation.

In one possible implementation, the first device includes a mobile phone or a tablet.
An embodiment of the present application further provides a shooting parameter training method, including:

acquiring a training data set; wherein the training data set includes training data subsets of multiple shooting scene categories, each training data subset includes a plurality of training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;

training a preset parameter decision model using the training data set, wherein the preset parameter decision model takes a preview photo as input and outputs predicted shooting parameters.

In one possible implementation, the shooting scene categories are determined from the taken photos in a sample data set, the sample data set includes a plurality of sample data, and each sample data includes a taken photo, a preview photo and preset shooting parameters.

In one possible implementation, the sample data set further includes environment information corresponding to the taken photos, and determining the shooting scene categories from the taken photos in the sample data set includes:

identifying the taken photos to obtain content features;

determining the shooting scene based on the content features;

if the shooting scene is indoor, determining the shooting scene category corresponding to each taken photo based on the content features; or

if the shooting scene is outdoor, determining the shooting scene category corresponding to each taken photo based on the environment features and the content features; wherein the environment features are obtained from the environment information.

In one possible implementation, the preset parameter decision model includes multiple models, and each model corresponds to one shooting scene category.
In a second aspect, an embodiment of the present application provides a shooting apparatus, applied to a first device, including:

an acquisition module, configured to acquire a preview photo and environment information;

a calculation module, configured to obtain shooting parameters based on the preview photo and the environment information;

a shooting module, configured to shoot using the shooting parameters.

In one possible implementation, the above calculation module is further configured to determine a shooting scene category based on the preview photo and the environment information, and to input the preview photo into a preset parameter decision model corresponding to the shooting scene category to obtain the shooting parameters.

In one possible implementation, the above calculation module is further configured to send the preview photo and the environment information to a second device, where the preview photo and the environment information are used by the second device to determine the shooting parameters;

and to receive the shooting parameters sent by the second device.

In one possible implementation, the environment information includes one or more of location information, time information, weather information and light information.

In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance and exposure compensation.

In one possible implementation, the first device includes a mobile phone or a tablet.

An embodiment of the present application further provides a shooting parameter training apparatus, including:

an acquisition module, configured to acquire a training data set; wherein the training data set includes training data subsets of multiple shooting scene categories, each training data subset includes a plurality of training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;

a training module, configured to train a preset parameter decision model using the training data set, wherein the preset parameter decision model takes a preview photo as input and outputs predicted shooting parameters.

In one possible implementation, the shooting scene categories are determined from the taken photos in a sample data set, the sample data set includes a plurality of sample data, and each sample data includes a taken photo, a preview photo and preset shooting parameters.

In one possible implementation, the sample data set further includes environment information corresponding to the taken photos, and the above acquisition module is further configured to identify the taken photos to obtain content features; determine the shooting scene based on the content features; and, if the shooting scene is indoor, determine the shooting scene category corresponding to each taken photo based on the content features; or

if the shooting scene is outdoor, determine the shooting scene category corresponding to each taken photo based on the environment features and the content features; wherein the environment features are obtained from the environment information.

In one possible implementation, the preset parameter decision model includes multiple models, and each model corresponds to one shooting scene category.
In a third aspect, an embodiment of the present application provides a first device, including:

a memory, where the memory is configured to store computer program code, and the computer program code includes instructions; when the first device reads the instructions from the memory, the first device is caused to perform the following steps:

acquiring a preview photo and environment information;

obtaining shooting parameters based on the preview photo and the environment information;

shooting using the shooting parameters.

In one possible implementation, when the instructions are executed by the first device, the step of obtaining the shooting parameters based on the preview photo and the environment information includes:

determining a shooting scene category based on the preview photo and the environment information;

inputting the preview photo into a preset parameter decision model corresponding to the shooting scene category to obtain the shooting parameters.

In one possible implementation, when the instructions are executed by the first device, the step of obtaining the shooting parameters based on the preview photo and the environment information includes:

sending the preview photo and the environment information to a second device, where the preview photo and the environment information are used by the second device to determine the shooting parameters;

receiving the shooting parameters sent by the second device.

In one possible implementation, the environment information includes one or more of location information, time information, weather information and light information.

In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance and exposure compensation.

In one possible implementation, the first device includes a mobile phone or a tablet.

An embodiment of the present application further provides a third device, including:

a memory, where the memory is configured to store computer program code, and the computer program code includes instructions; when the third device reads the instructions from the memory, the third device is caused to perform the following steps:

acquiring a training data set; wherein the training data set includes training data subsets of multiple shooting scene categories, each training data subset includes a plurality of training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;

training a preset parameter decision model using the training data set, wherein the preset parameter decision model takes a preview photo as input and outputs predicted shooting parameters.

In one possible implementation, the shooting scene categories are determined from the taken photos in a sample data set, the sample data set includes a plurality of sample data, and each sample data includes a taken photo, a preview photo and preset shooting parameters.

In one possible implementation, the sample data set further includes environment information corresponding to the taken photos, and when the instructions are executed by the third device, the step of determining the shooting scene categories from the taken photos in the sample data set includes:

identifying the taken photos to obtain content features;

determining the shooting scene based on the content features;

if the shooting scene is indoor, determining the shooting scene category corresponding to each taken photo based on the content features; or

if the shooting scene is outdoor, determining the shooting scene category corresponding to each taken photo based on the environment features and the content features; wherein the environment features are obtained from the environment information.

In one possible implementation, the preset parameter decision model includes multiple models, and each model corresponds to one shooting scene category.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is run on a computer, the computer is caused to execute the method described in the first aspect.

In a fifth aspect, an embodiment of the present application provides a computer program, which, when executed by a computer, is used to execute the method described in the first aspect.

In one possible design, all or part of the program in the fifth aspect may be stored on a storage medium packaged together with the processor, or may be partially or entirely stored on a memory not packaged together with the processor.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a hardware structure of an embodiment of an electronic device provided by the present application;

FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of an embodiment of the shooting method provided by the present application;

FIG. 4 is a schematic diagram of light rays provided by an embodiment of the present application;

FIG. 5 is a schematic flowchart of a shooting scene classification method provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of another embodiment of the shooting method provided by the present application;

FIG. 7 is a schematic flowchart of an embodiment of the shooting parameter training method provided by the present application;

FIG. 8 is a schematic diagram of shooting scene classification provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of a shooting parameter training architecture provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of a hardware structure of another embodiment of an electronic device provided by the present application;

FIG. 11 is a schematic structural diagram of a shooting apparatus provided by an embodiment of the present application;

FIG. 12 is a schematic structural diagram of a shooting parameter training apparatus provided by an embodiment of the present application.
具体实施方式Detailed ways
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance, or as implicitly specifying the quantity of the indicated technical features. Accordingly, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
With the continuous improvement of terminal software and hardware performance, the photographing capability of terminals has become increasingly powerful. Since mobile phones are the type of terminal most commonly used in daily life, users place ever higher demands on mobile-phone photography. The parameters that determine a high-quality shooting result include a variety of camera settings and photo parameters, such as aperture size, shutter speed, sensitivity (ISO), focusing mode, focal length, white balance, exposure compensation, and so on. In daily shooting, whether in the automatic photographing mode or the professional photographing mode, only by setting the camera shooting parameters quickly and accurately can photos that satisfy the user be taken.
Currently, the commonly used shooting modes include an automatic mode and a professional mode. The automatic photographing mode mostly relies on light metering and applies a small number of style presets to adjust the shooting parameters. Because the environments and scenes in which users shoot vary greatly, this way of setting parameters mainly according to light intensity leads to large differences in color reproduction across different scenes, and the quality of the resulting photos often cannot meet the user's requirements.
In addition, to achieve a certain level of shooting quality and meet user needs, some devices support a professional photographing mode. In this mode, only the camera ISO and shutter speed are adjusted automatically according to the light intensity; many other settings, such as white balance, exposure compensation, saturation, and contrast, have no recommended initial values, so the user has to adjust and combine them manually and repeatedly, and some parameters have very wide adjustable ranges. The whole process is cumbersome, time-consuming, and inaccurate, which degrades the user experience. The threshold of the professional mode is too high: most users have limited shooting skill and professional knowledge, and it is difficult for them to take satisfactory photos.
In view of the above problems, the embodiments of the present application provide a photographing method that can improve shooting quality.
The photographing method provided by the embodiments of the present application will now be described with reference to FIG. 1 to FIG. 6. The photographing method is applied to a first device 10, which may be a smart device having a camera. The first device 10 may also be referred to as a mobile terminal, a terminal device, user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The first device 10 may be a cellular telephone, a cordless telephone, a personal digital assistant (PDA) device, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio device, customer premise equipment (CPE), and/or another device for communicating over a wireless system, as well as a device in a next-generation communication system, for example, a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN). The embodiments of the present application do not specifically limit the form of the first device 10.
An exemplary electronic device provided in the following embodiments of the present application is first introduced with reference to FIG. 1. FIG. 1 shows a schematic structural diagram of an electronic device 100, and the electronic device 100 may be the above-mentioned first device 10.
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (subscriber identification module, SIM) card interface 195 and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 . In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 . In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160 . For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。The USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 can receive charging input from the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 . The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110 . In some other embodiments, the power management module 141 and the charging management module 140 may also be set in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 . The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。A modem processor may include a modulator and a demodulator. Wherein, the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is passed to the application processor after being processed by the baseband processor. The application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 . In some embodiments, the modem processor may be a stand-alone device. In some other embodiments, the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves via the antenna 2 for radiation.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。The electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。The ISP is used for processing the data fed back by the camera 193 . For example, when taking a picture, open the shutter, the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 193 .
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。 Camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects it to the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. DSP converts digital image signals into standard RGB, YUV and other image signals. In some embodiments, the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动 态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。The NPU is a neural-network (NN) computing processor. By referring to the structure of biological neural networks, such as the transfer mode between neurons in the human brain, it can quickly process input information and continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。The internal memory 121 may be used to store computer-executable program codes including instructions. The internal memory 121 may include an area for storing programs and an area for storing data. Wherein, the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like. The storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。The audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。Receiver 170B, also called "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 receives a call or a voice message, the receiver 170B can be placed close to the human ear to receive the voice.
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The microphone 170C, also called "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。The earphone interface 170D is used for connecting wired earphones. The earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the icon of the short message application, an instruction to view short messages is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, an instruction to create a new short message is executed.
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。The gyro sensor 180B can be used to determine the motion posture of the electronic device 100 . In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x, y and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake. The gyro sensor 180B can also be used for navigation and somatosensory game scenes.
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case. In some embodiments, when the electronic device 100 is a clamshell machine, the electronic device 100 can detect opening and closing of the clamshell according to the magnetic sensor 180D. Furthermore, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, features such as automatic unlocking of the flip cover are set.
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。The distance sensor 180F is used to measure the distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes. The light emitting diodes may be infrared light emitting diodes. The electronic device 100 emits infrared light through the light emitting diode. Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 . The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。The ambient light sensor 180L is used for sensing ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from being shut down abnormally due to the low temperature. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。The touch sensor 180K is also called "touch device". The touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”. The touch sensor 180K is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 194 . In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone. The audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function. The application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。The keys 190 include a power key, a volume key and the like. The key 190 may be a mechanical key. It can also be a touch button. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。The motor 191 can generate a vibrating reminder. The motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 . Different application scenarios (for example: time reminder, receiving information, alarm clock, games, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect can also support customization.
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。The indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。The SIM card interface 195 is used for connecting a SIM card. The SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 . The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards. The SIM card interface 195 is also compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 adopts an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
FIG. 2 is a schematic diagram of an application scenario of an embodiment of the present application. As shown in FIG. 2, the application scenario includes a first device 10 and a second device 20, where the second device 20 may be a cloud server. The second device 20 may be used to provide the first device 10 with the shooting parameters for the current shot.
FIG. 3 is a schematic flowchart of an embodiment of the photographing method provided by the present application, which includes the following steps:
Step 301: The first device 10 acquires a preview photo and environment information.
Specifically, the user may turn on the camera of the first device 10 so that the first device 10 enters a shooting mode. For example, the user may tap the camera application on the desktop of the first device 10 to open the camera, or may invoke the camera from third-party application software (for example, social software). The embodiments of the present application do not specifically limit the manner of opening the camera.
In response to the user's operation of opening the camera, the first device 10 acquires a preview image, where the preview image may be an image of the current environment captured by the camera. Next, the first device 10 may further acquire the current preview photo. It can be understood that the preview photo is a photo corresponding to the current preview image.
Further, the first device 10 may also acquire current environment information, where the environment information may include information such as location, time, weather, and light. It can be understood that the above environment information is only an example and does not limit the embodiments of the present application; in some embodiments, more environment information may be included. In a specific implementation, the location information may be obtained through the global positioning system (GPS) of the first device 10, and the time information may be obtained from the system time of the first device 10. After the location information and the time information are obtained, weather information (for example, sunny, cloudy, or rainy) may be obtained through a weather application in the first device 10. Orientation information may then be further acquired, where the orientation information may be obtained through the magnetic sensor 180D and the gyroscope sensor 180B of the first device 10 and may be used to characterize the orientation of the first device 10. Further, specific light data may be derived from the above weather information, where the light data may include the illumination intensity and the direction of natural light relative to the camera (for example, front light, side light, or backlight, where side light may be further divided into front-side light, rear-side light, left-side light, right-side light, and so on).
For example, the illumination intensity of the shooting environment (unit: lux) may be obtained through the ambient light sensor 180L of the first device 10. If the weather information indicates a sunny day, the direction of natural light relative to the camera may be further calculated as follows: first, the sun azimuth is obtained from the geographic location and time information; then, the direction of the camera 193 is obtained from the installation position of the camera 193 in the first device 10 (for example, on the front or on the back) and the orientation of the first device 10 acquired above; finally, the relative position of the sun azimuth and the camera direction is obtained, as shown in FIG. 4, from which the direction category of the natural sunlight relative to the camera 193 can be determined, where the direction category may be front light, side light, backlight, and so on.
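Purely by way of illustration of the direction calculation described above, the following is a minimal sketch in Python. The sun-azimuth computation from location and time is assumed to have been done already, and the angle thresholds separating the front/side/back light categories are assumptions for the sketch, not values specified by the application.

```python
def light_direction_category(sun_azimuth_deg: float, camera_azimuth_deg: float) -> str:
    """Classify natural light relative to the camera for an outdoor, sunny scene.

    sun_azimuth_deg:    sun azimuth derived from location and time (0-360 degrees).
    camera_azimuth_deg: direction the camera lens points, from the magnetometer
                        and gyroscope readings (0-360 degrees).
    """
    # Minimal angular difference between the sun azimuth and the camera direction,
    # folded into the range [0, 180].
    diff = abs((sun_azimuth_deg - camera_azimuth_deg + 180.0) % 360.0 - 180.0)

    if diff <= 45.0:
        return "backlight"    # camera points roughly towards the sun
    if diff >= 135.0:
        return "front light"  # sun is roughly behind the photographer
    return "side light"
```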
Step 302: The first device 10 sends the preview photo and the environment information to the second device 20.
Specifically, after obtaining the preview photo and the environment information, the first device 10 may send them to the second device 20. The first device 10 may be connected to the second device 20 through a mobile communication network (for example, a 4G or 5G network) or a local wireless network (for example, Wi-Fi), so that the first device 10 can send the preview photo and the environment information to the second device 20 over the mobile communication network or the local wireless network. It can be understood that the embodiments of the present application do not specifically limit the manner in which the first device 10 sends the preview photo and the environment information to the second device 20.
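As one possible illustration of this upload step, the sketch below sends the preview photo and an environment-information dictionary to a server over HTTP. The endpoint URL, field names, and the environment keys are assumptions made for the example; the application does not prescribe a transport format.

```python
import json
import requests  # third-party HTTP client, used here only for illustration


def send_preview(preview_path: str, environment: dict,
                 server: str = "https://example-cloud/api/shooting-params") -> dict:
    """Upload the preview photo and environment info; return the server's reply."""
    with open(preview_path, "rb") as f:
        resp = requests.post(
            server,
            files={"preview": f},
            data={"environment": json.dumps(environment)},
            timeout=5,
        )
    resp.raise_for_status()
    return resp.json()  # expected to contain the recommended shooting parameters


# Example environment payload (illustrative values only):
# environment = {"location": (116.4, 39.9), "time": "2022-07-25T10:30:00",
#                "weather": "sunny", "illuminance_lux": 12000,
#                "light_direction": "side light"}
```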
Step 303: The second device 20 generates shooting parameters based on the preview photo and the environment information.
Specifically, after receiving the preview photo and the environment information sent by the first device 10, the second device 20 may generate shooting parameters based on them, where the shooting parameters may be the parameters used by the camera to perform shooting, for example, aperture size, shutter speed, ISO, focusing mode, focal length, white balance, exposure compensation, and so on. It can be understood that the above parameters are only examples and do not limit the embodiments of the present application; in some embodiments, more or fewer parameters may be included.
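Purely to make this parameter set concrete, the following sketch defines one possible container for the generated values; the field names, types, and example values are assumptions for illustration, not a data format defined by the application.

```python
from dataclasses import dataclass


@dataclass
class ShootingParameters:
    aperture: float               # f-number, e.g. 1.8
    shutter_speed: float          # seconds, e.g. 1/125 s -> 0.008
    iso: int                      # sensitivity, e.g. 100
    focus_mode: str               # e.g. "auto", "manual", "continuous"
    focal_length_mm: float        # e.g. 27.0
    white_balance_k: int          # color temperature in kelvin, e.g. 5500
    exposure_compensation: float  # EV steps, e.g. -0.3
```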
The specific process of generating the shooting parameters is shown in FIG. 5 and may include the following sub-steps:
Step 3031: The second device 20 extracts features of the actual shooting scene based on the preview photo and the environment information.
Specifically, the second device 20 may use a preset image recognition model to recognize the preview photo, so as to obtain the features of the actual shooting scene corresponding to the preview photo, where the features of the actual shooting scene may include content features and environment features.
In a specific implementation, the preview photo may be input into the preset image recognition model. The preset image recognition model may be a model using a deep image segmentation neural network; optionally, the image recognition model may also use another convolutional neural network with image recognition capability. The embodiments of the present application do not specifically limit the type of the image recognition model.
By running the image recognition model on the preview photo, the content features in the preview photo can be recognized. For example, the content features may include subject features such as portraits, buildings, snow scenes, animals, and plants. The content features may also include the distance between the subject and the camera. The image recognition model can further determine whether the shooting scene corresponding to the preview photo is indoors or outdoors.
If the shooting scene corresponding to the preview photo is an outdoor environment, the second device 20 may extract environment features, such as weather and light, from the environment information. A sketch of this feature-extraction step is given below.
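The following sketch illustrates how such a feature-extraction step might be wired together. The model object and the attributes of its prediction are placeholders; the application does not prescribe a particular network or output schema.

```python
def extract_scene_features(preview_image, model, environment: dict) -> dict:
    """Derive content and environment features for scene classification.

    `model` is a placeholder for a pre-trained segmentation/recognition network;
    `prediction` and its attributes are hypothetical names used for illustration.
    """
    prediction = model(preview_image)  # e.g. subject class, distance, indoor/outdoor

    features = {
        "subject": prediction.main_subject,            # e.g. "portrait", "building", "snow"
        "subject_distance_m": prediction.subject_distance_m,
        "indoor": prediction.is_indoor,
    }
    if not prediction.is_indoor:
        # Only outdoor scenes use the weather/light part of the environment info.
        features["weather"] = environment.get("weather")
        features["light_direction"] = environment.get("light_direction")
        features["illuminance_lux"] = environment.get("illuminance_lux")
    return features
```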
Step 3032: the second device 20 determines the shooting scene category based on the acquired features of the actual shooting scene.
In a specific implementation, the shooting scene categories may be preset, and the preset shooting scenes may include multiple categories. For example, the shooting scene categories may include category 1 (building-distant view-outdoor-sunny-strong light), category 2 (portrait-close-up-outdoor-sunny-backlight), category 3 (aquarium-animal-indoor-dim light), and so on. When determining the shooting scene category, a preset scene classification model, such as a Bayesian network model, may be used. Exemplarily, taking the Bayesian network model as an example, the acquired features of the actual shooting scene may be treated as observed events, and the joint probability that the actual shooting scene belongs to each preset shooting scene category is obtained. This follows Bayesian reasoning: the more events that support a certain property, the more likely that property holds. Finally, the shooting scene category with the highest probability is selected as the category of the current shooting scene. It should be noted that, in addition to the Bayesian network model, other types of probabilistic graphical models may also be used as the scene classification model; the present application places no particular limitation on the specific form of the scene classification model.
If the shooting scene corresponding to the preview photo is an indoor environment, the second device 20 may determine the shooting scene category directly from the features of the shooting scene (for example, the content features and environment features in the preview photo). In a specific implementation, the content features and environment features in the preview photo may be input into a preset scene classification model, such as a Bayesian network model, to obtain the corresponding shooting scene category.
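A minimal sketch of step 3032 is given below: each preset category is scored against the observed features and the highest-scoring category wins. Representing the Bayesian network inference as per-category scoring callables is an assumption made for brevity.

```python
def classify_scene(features: dict, category_models: dict) -> str:
    """Sketch of step 3032: pick the preset category with the highest joint probability.

    `category_models` maps each preset category name (e.g. "building-distant view-outdoor-sunny")
    to an object exposing a joint_probability() score for the observed features; this interface
    is an assumption standing in for the Bayesian network model described above.
    """
    scores = {
        category: model.joint_probability(features)   # probability that the scene belongs here
        for category, model in category_models.items()
    }
    # The category supported by the most evidence (highest probability) is selected.
    return max(scores, key=scores.get)
```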
Step 3033: based on the shooting scene category, the second device 20 loads the parameter decision model corresponding to that category, takes the preview photo as input, and computes the shooting parameters.
Specifically, after the second device 20 determines the shooting scene category, it may load the parameter decision model corresponding to that category. The preview photo may then be input into the parameter decision model, the model is run, and the shooting parameters corresponding to the preview photo are obtained by computation. The parameter decision model may be obtained in advance through deep learning training. The specific training method is described in the shooting parameter training method below and is not repeated here.
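Step 3033 could be illustrated as follows. The model registry, the predict() call, and the ordering of the output fields are assumptions; the field names simply mirror the parameters listed earlier.

```python
def generate_shooting_parameters(preview, scene_category: str, model_registry: dict) -> dict:
    """Sketch of step 3033: run the per-category parameter decision model on the preview photo.

    `model_registry` maps each shooting scene category to its trained decision model;
    the output layout below is an assumption matching the parameters named in the text.
    """
    decision_model = model_registry[scene_category]   # load the model for this category
    predicted = decision_model.predict(preview)       # assumed to return a parameter vector
    return {
        "aperture": predicted[0],
        "shutter_speed": predicted[1],
        "iso": predicted[2],
        "focus_mode": predicted[3],
        "focal_length": predicted[4],
        "white_balance": predicted[5],
        "exposure_compensation": predicted[6],
    }
```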
Step 304: the second device 20 sends the shooting parameters to the first device 10.
Step 305: the first device 10 uses the shooting parameters to shoot.
Specifically, after receiving the shooting parameters sent by the second device 20, the first device 10 initializes the shooting configuration parameters of the camera to these shooting parameters and may shoot with the initialized parameters. The user may also manually adjust the initialized shooting parameters. An actual taken photograph is thereby obtained.
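A rough sketch of step 305 is shown below. The camera handle and its configure()/capture() methods are hypothetical, since the embodiment does not tie this step to any particular camera API; only the order of operations (initialize with the received parameters, allow manual overrides, then capture) follows the description above.

```python
def apply_and_shoot(camera, shooting_params: dict, user_overrides: dict = None):
    """Sketch of step 305: initialize the camera configuration with the received
    parameters, apply any manual adjustments from the user, then capture.

    `camera` is a hypothetical camera handle; its methods are assumptions.
    """
    config = dict(shooting_params)
    if user_overrides:
        config.update(user_overrides)   # the user's manual adjustment takes precedence
    camera.configure(**config)
    return camera.capture()             # the actual taken photograph
```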
It can be understood that, in the above embodiment, steps 301 to 305 are all optional steps. The present application only provides one feasible embodiment, and more or fewer steps than steps 301 to 305 may be included; the present application does not limit this.
It should be noted that, in an optional embodiment, the application scenario shown in FIG. 3 may not include the second device 20; that is, steps 301 to 305 may all be performed on the first device 10. In the scenario with only the first device 10, the first device 10 may contain the preset image recognition model, the scene classification model, and the parameter decision model.
FIG. 6 is a schematic flowchart of another embodiment of the shooting method provided by the present application, including:
Step 601: the first device 10 acquires a preview photo and environment information.
Specifically, the user may turn on the camera of the first device 10 so that the first device 10 enters a shooting mode. Exemplarily, the user may tap the camera application on the home screen of the first device 10 to open the camera, or may invoke the camera within third-party application software (for example, social software). The embodiments of the present application place no particular limitation on the manner of opening the camera.
In response to the user's operation of turning on the camera, the first device 10 acquires a preview image, where the preview image may be an image of the current environment captured by the camera. The first device 10 may then further acquire the current preview photo. It can be understood that the preview photo is a photo corresponding to the current preview image.
Further, the first device 10 may also acquire the current environment information, where the environment information may include information such as location, time, weather, and light. It can be understood that this environment information is merely an example and does not constitute a limitation on the embodiments of the present application; in some embodiments, more environment information may be included. In a specific implementation, the location information may be obtained through the Global Positioning System (GPS) in the first device 10, and the time information may be obtained from the system time of the first device 10. After the location information and time information are acquired, weather information (for example, sunny, cloudy, or rainy) can be obtained through a weather application in the first device 10. Orientation information may then be further acquired; the orientation information may also be obtained through the magnetic sensor 180D and the gyroscope sensor 180B in the first device 10 and may be used to characterize the orientation of the first device 10. Further, specific light data may be derived from the weather information, where the light data may include the light intensity and the direction of natural light relative to the camera (for example, front light, side light, or backlight, where side light may further be divided into front-side light, rear-side light, left-side light, right-side light, and so on).
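Exemplarily, the collection of environment information in step 601 could be sketched as follows. The device handle and the attribute names (gps, weather_app, orientation) are hypothetical, introduced only to show how the pieces described above fit together.

```python
from datetime import datetime

def collect_environment_info(device) -> dict:
    """Sketch of step 601: assemble the environment information described above.

    `device` is a hypothetical handle exposing GPS, weather, and orientation readings;
    the attribute names are assumptions, not an actual device API.
    """
    location = device.gps.current_position()        # latitude / longitude from GPS
    timestamp = datetime.now().isoformat()          # system time of the first device
    weather = device.weather_app.lookup(location)   # e.g. "sunny", "cloudy", "rainy"
    orientation = device.orientation()              # fused magnetometer + gyroscope reading
    # The direction of natural light relative to the camera (front light, side light,
    # backlight, ...) is assumed here to come from the weather lookup for brevity.
    return {
        "location": location,
        "time": timestamp,
        "weather": weather["condition"],
        "light_intensity": weather["light_intensity"],
        "light_direction": weather["light_direction"],
        "orientation": orientation,
    }
```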
Step 602: the first device 10 generates shooting parameters based on the preview photo and the environment information.
Specifically, after the first device 10 obtains the preview photo and the environment information, it may generate shooting parameters based on them, where the shooting parameters may be the parameters used by the camera to perform shooting, for example, aperture size, shutter speed, ISO, focus mode, focal length, white balance, and exposure compensation. It can be understood that these parameter examples are merely illustrative and do not constitute a limitation on the embodiments of the present application; in some embodiments, more or fewer parameters may be included.
The specific process of generating the shooting parameters may include the following sub-steps:
Step 6021: the first device 10 extracts features of the actual shooting scene based on the preview photo and the environment information.
Specifically, the first device 10 may use a preset image recognition model to recognize the preview photo, thereby obtaining the features of the actual shooting scene corresponding to the preview photo, where the features of the actual shooting scene may include content features and environment features.
In a specific implementation, the preview photo may be input into a preset image recognition model. The preset image recognition model may be a model using a deep image segmentation neural network; optionally, another convolutional neural network with image recognition capability may also be used. The embodiments of the present application place no particular limitation on the specific type of the image recognition model.
By running the image recognition model on the preview photo, the content features in the preview photo can be identified. Exemplarily, the content features may include subject features such as portraits, buildings, snow scenes, animals, and plants. In addition, the content features may also include the distance between the subject and the camera. Furthermore, the image recognition model can also determine whether the shooting scene corresponding to the preview photo is indoors or outdoors.
If the shooting scene corresponding to the preview photo is an outdoor environment, the first device 10 may extract environment features such as weather and light from the environment information.
Step 6022: the first device 10 determines the shooting scene category based on the acquired features of the actual shooting scene.
In a specific implementation, the shooting scene categories may be preset, and the preset shooting scenes may include multiple categories. For example, the shooting scene categories may include category 1 (building-distant view-outdoor-sunny-strong light), category 2 (portrait-close-up-outdoor-sunny-backlight), category 3 (aquarium-animal-indoor-dim light), and so on. When determining the shooting scene category, a preset scene classification model, such as a Bayesian network model, may be used. Exemplarily, taking the Bayesian network model as an example, the acquired features of the actual shooting scene may be treated as observed events, and the joint probability that the actual shooting scene belongs to each preset shooting scene category is obtained. This follows Bayesian reasoning: the more events that support a certain property, the more likely that property holds. Finally, the shooting scene category with the highest probability is selected as the category of the current shooting scene. It should be noted that, in addition to the Bayesian network model, other types of probabilistic graphical models may also be used as the scene classification model. The present application places no particular limitation on the specific form of the scene classification model.
If the shooting scene corresponding to the preview photo is an indoor environment, the first device 10 may determine the shooting scene category directly from the features of the shooting scene (for example, the content features and environment features in the preview photo). In a specific implementation, the content features and environment features in the preview photo may be input into a preset scene classification model, such as a Bayesian network model, to obtain the corresponding shooting scene category.
Step 6023: based on the shooting scene category, the first device 10 loads the parameter decision model corresponding to that category, takes the preview photo as input, and computes the shooting parameters.
Specifically, after the first device 10 determines the shooting scene category, it may load the parameter decision model corresponding to that category. The preview photo may then be input into the parameter decision model, the model is run, and the shooting parameters corresponding to the preview photo are obtained by computation. The parameter decision model may be obtained in advance through deep learning training. The specific training method is described in the shooting parameter training method below and is not repeated here.
Step 603: the first device 10 uses the shooting parameters to shoot.
Specifically, after determining the shooting parameters, the first device 10 initializes the shooting configuration parameters of the camera to these shooting parameters and may use the initialized parameters to shoot. The user may also manually adjust these recommendation-based initial shooting parameters. An actual taken photograph is thereby obtained.
It can be understood that, in the above embodiment, steps 601 to 603 are all optional steps. The present application only provides one feasible embodiment, and more or fewer steps than steps 601 to 603 may be included; the present application does not limit this.
Next, the training process of the above parameter decision model is described in detail below.
The embodiments of the present application further provide a shooting parameter training method, applied to a third device 30. The third device 30 may be embodied in the form of a computer. Exemplarily, the third device 30 may be a cloud server (for example, the above-mentioned second device 20), but it is not limited to the second device 20; in some embodiments, the third device 30 may also be a local desktop computer. Optionally, the third device 30 may also be a terminal device (for example, the above-mentioned first device 10). In the following, the shooting parameter training method is described by taking the third device 30 being a computer as an example and with reference to FIG. 7 to FIG. 9. FIG. 7 is a schematic flowchart of an embodiment of the shooting parameter training method provided by the present application, including:
Step 701: acquire a sample data set.
Specifically, the sample data set may include multiple pieces of sample data, where each piece of sample data may include a preview photo, a set of professional mode parameters, a taken photo, and environment information corresponding to the taken photo. The preview photo may be a photo from the preview image captured by the camera, the professional mode parameters may be parameters set by the user in professional mode, the taken photo may be a photo obtained by the camera shooting with those professional mode parameters, and the environment information may include information such as location, time, weather, and light. For a specific description of the environment information, reference may be made to step 301, which is not repeated here.
Optionally, the taken photos may also be screened manually and/or by machine. Exemplarily, image aesthetics tools and image quality evaluation tools may be used to screen the taken photos, so that high-quality taken photos can be selected.
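The optional screening step could be sketched as follows. The two scoring models stand in for the image aesthetics and image quality evaluation tools mentioned above; the 0-to-1 scores and thresholds are assumptions for illustration.

```python
def screen_samples(samples: list, aesthetic_model, quality_model,
                   aesthetic_threshold: float = 0.6, quality_threshold: float = 0.6) -> list:
    """Sketch of the optional screening step: keep only high-quality taken photos.

    `aesthetic_model` and `quality_model` are assumed to expose a score() method
    returning a value in [0, 1]; the thresholds are illustrative.
    """
    kept = []
    for sample in samples:
        photo = sample["taken_photo"]
        if (aesthetic_model.score(photo) >= aesthetic_threshold
                and quality_model.score(photo) >= quality_threshold):
            kept.append(sample)
    return kept
```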
Table 1 exemplarily shows the above sample data set.
Table 1
Sample data 1: preview photo 1; professional mode parameters 1; taken photo 1; environment information 1
Sample data 2: preview photo 2; professional mode parameters 2; taken photo 2; environment information 2
...
Sample data N: preview photo N; professional mode parameters N; taken photo N; environment information N
As shown in Table 1, the sample data set includes N pieces of sample data, and each piece of sample data includes a preview photo, professional mode parameters, a taken photo, and environment information.
Step 702: input each taken photo in the sample data set into a preset image recognition model for recognition, to obtain content features.
Specifically, the preset image recognition model may be a model using a deep image segmentation neural network; optionally, another convolutional neural network with image recognition capability may also be used. The embodiments of the present application place no particular limitation on the specific type of the image recognition model.
After a taken photo is recognized by the preset image recognition model, the content features corresponding to the taken photo can be obtained, where the content features may include, for example, subject features such as portraits, buildings, snow scenes, animals, and plants.
In addition, the content features may also include the distance between the subject and the camera. Furthermore, the image recognition model can also determine whether the shooting scene corresponding to the taken photo is indoors or outdoors.
Step 703: classify the shooting scene based on the content features to obtain the shooting scene category.
Specifically, after the shooting environment (for example, indoor or outdoor) is determined, the shooting scene of each taken photo in the sample data set may be classified based on the content features, so that the shooting scene category of each taken photo can be obtained.
FIG. 8 is a schematic flowchart of the above shooting scene classification. As shown in FIG. 8:
If the shooting scene corresponding to the taken photo is an outdoor environment, the shooting scene of the taken photo may be classified based on the environment features and the content features, thereby obtaining the shooting scene category, where the environment features may be obtained from the environment information. In a specific implementation, the shooting scene categories may include multiple categories, for example, category 1 (building-distant view-outdoor-sunny-strong light), category 2 (portrait-close-up-outdoor-sunny-backlight), category 3 (aquarium-animal-indoor-dim light), and so on.
If the shooting scene corresponding to the taken photo is an indoor environment, the shooting scene category may be determined directly from the content features.
Step 704: construct a training data set.
Specifically, after the shooting scene category of each taken photo is obtained, all the taken photos in the sample data set can be grouped; the grouping may be performed according to the shooting scene category, for example, taken photos belonging to the same shooting scene category may be placed in one group. After the taken photos are grouped, the corresponding preview photo and professional mode parameters can be found from each taken photo. Exemplarily, taking Table 1 as an example, the corresponding preview photo 1 and professional mode parameters 1 can be found through taken photo 1. In this way, multiple groups of training data can be obtained, and these groups of training data constitute the training data set, where each group of training data includes multiple pieces of training data under the same shooting scene category, and each piece of training data includes a preview photo and professional mode parameters belonging to that shooting scene category, as illustrated in the sketch below.
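The grouping of step 704 could be expressed as follows. The dictionary-based sample layout and the taken_photo_id key are assumed conventions used only to make the example self-contained.

```python
from collections import defaultdict

def build_training_dataset(samples: list, scene_category_of: dict) -> dict:
    """Sketch of step 704: group samples by shooting scene category.

    `samples` correspond to the entries of Table 1; `scene_category_of` maps each
    taken photo (identified here by an assumed id key) to the category from step 703.
    Each training record keeps only the preview photo and the professional mode
    parameters, which later serve as model input and label respectively.
    """
    training_set = defaultdict(list)
    for sample in samples:
        category = scene_category_of[sample["taken_photo_id"]]
        training_set[category].append({
            "preview_photo": sample["preview_photo"],
            "professional_mode_parameters": sample["professional_mode_parameters"],
        })
    return dict(training_set)
```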
Table 2 exemplarily shows the above training data set.
Table 2
Shooting scene category 1: training data (preview photo, professional mode parameters), training data (preview photo, professional mode parameters), ...
Shooting scene category 2: training data (preview photo, professional mode parameters), ...
...
Shooting scene category M: training data (preview photo, professional mode parameters), ...
As shown in Table 2, the training data set includes M shooting scene categories, each shooting scene category may include multiple pieces of training data, and each piece of training data may include a preview photo and professional mode parameters belonging to that shooting scene category.
Step 705: train the preset parameter decision model based on the training data set.
Specifically, the training data set may be divided into a training set and a verification set. The split ratio between the training set and the verification set may be preset, which is not specifically limited in the embodiments of the present application. The training set may then be input into the preset parameter decision model for training.
It should be noted that, since there are multiple shooting scene categories and each shooting scene category may correspond to one parameter decision model, the multiple parameter decision models may be trained separately.
In a specific implementation, the preview photos in the training set may be input into the preset parameter decision model for computation, thereby obtaining predicted shooting parameters. It can be understood that the input preview photos may be data in YUV format or in RGB format, which is not specifically limited in the embodiments of the present application. The predicted shooting parameters may include, for example, aperture size, shutter speed, ISO, focus mode, focal length, white balance, and exposure compensation.
FIG. 9 is a schematic diagram of the training architecture of the parameter decision model. As shown in FIG. 9, when the parameter decision model of any specific shooting scene category is trained, the preview photo is the input data and the output data is the predicted shooting parameters.
It can be understood that the professional mode parameters in the training set may serve as label data. That is, the training data in the training set may include feature data and label data. The feature data may be used as input for computation; for example, the feature data may include the preview photo. The label data may be used for comparison with the output during training so that the loss of the model converges; the label data may be the pre-identified professional mode parameters. In addition, during the training of any parameter decision model, the objective function may be the mean squared error between the predicted shooting parameters and the professional mode parameters, that is, the mean squared error between the predicted data and the label data. Training on the above training data is then iterated until the parameter decision model converges.
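A minimal PyTorch-style sketch of this training loop for one shooting scene category is given below. The data loader is assumed to yield (preview photo tensor, professional mode parameter vector) pairs for a single category; the optimizer choice, learning rate, and epoch count are illustrative and not prescribed by the embodiment.

```python
import torch
from torch import nn

def train_decision_model(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-4):
    """Sketch of step 705 for one shooting scene category.

    `loader` is assumed to yield (preview_photo_tensor, professional_mode_parameters)
    pairs; the mean squared error between prediction and label is the objective.
    """
    criterion = nn.MSELoss()                        # MSE between predicted and label parameters
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for preview, label_params in loader:
            optimizer.zero_grad()
            predicted_params = model(preview)       # predicted shooting parameters
            loss = criterion(predicted_params, label_params)
            loss.backward()
            optimizer.step()
    return model
```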
Similarly, by training all the other parameter decision models, a parameter decision model corresponding to each shooting scene category can be obtained.
Further, after the training of the parameter decision models for the different shooting scene categories is completed, verification may also be performed using the verification set. If the preset requirements are met after verification, the training is complete; if the preset requirements are not met, further training may be performed, for example, by reacquiring a sample data set and repeating steps 701 to 705 for retraining.
By distinguishing different shooting scene categories, the neural network's extraction of environment features in specific scenes can be improved, the convergence process of the model can be accelerated, abnormal situations such as overfitting or failure to converge can be avoided, and the adaptability of the model to different scenes can thus be improved.
Through the above shooting parameter training method, the parameter decision model can be obtained, so that the second device 20 can compute, based on the parameter decision model, the preview photo and environment information sent by the first device 10 to obtain the corresponding shooting parameters, which can reduce the computation load of the first device 10 and improve the shooting quality.
The exemplary electronic device provided in the following embodiments of the present application is further described below with reference to FIG. 10. FIG. 10 shows a schematic structural diagram of an electronic device 1000, which may be the above-mentioned third device 30.
The electronic device 1000 may include: at least one processor; and at least one memory communicatively connected to the processor, where the memory stores program instructions executable by the processor, and the processor, by invoking the program instructions, can perform the methods provided by the embodiments shown in FIG. 7 to FIG. 9 of the present application.
FIG. 10 shows a block diagram of an exemplary electronic device 1000 suitable for implementing the embodiments of the present application. The electronic device 1000 shown in FIG. 10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 10, the electronic device 1000 takes the form of a general-purpose computing device. Components of the electronic device 1000 may include, but are not limited to: one or more processors 1010, a memory 1020, a communication bus 1040 connecting different system components (including the memory 1020 and the processor 1010), and a communication interface 1030.
The communication bus 1040 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. For example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 1000 typically includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the electronic device, including volatile and non-volatile media, and removable and non-removable media.
The memory 1020 may include a computer-system-readable medium in the form of volatile memory, such as random access memory (RAM) and/or cache memory. The electronic device may further include other removable/non-removable, volatile/non-volatile computer system storage media. Although not shown in FIG. 10, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (for example, a "floppy disk") may be provided, as well as an optical disc drive for reading from and writing to a removable non-volatile optical disc (for example, a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media). In these cases, each drive may be connected to the communication bus 1040 through one or more data media interfaces. The memory 1020 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present application.
A program/utility having a set of (at least one) program modules may be stored in the memory 1020. Such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment. The program modules generally perform the functions and/or methods in the embodiments described in the present application.
The electronic device 1000 may also communicate with one or more external devices (for example, a keyboard, a pointing device, a display, and the like), may also communicate with one or more devices that enable a user to interact with the electronic device, and/or may communicate with any device (for example, a network card, a modem, and the like) that enables the electronic device to communicate with one or more other computing devices. Such communication may be performed through the communication interface 1030. In addition, the electronic device 1000 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter (not shown in FIG. 10), and the network adapter may communicate with other modules of the electronic device through the communication bus 1040. It should be understood that, although not shown in FIG. 10, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Drives (RAID) systems, tape drives, data backup storage systems, and the like.
The processor 1010 executes various functional applications and data processing by running the programs stored in the memory 1020, for example, implementing the shooting parameter training method provided by the embodiments of the present application.
FIG. 11 is a schematic structural diagram of an embodiment of the photographing apparatus of the present application. As shown in FIG. 11, the photographing apparatus 1100 is applied to the first device 10 and may include: an acquisition module 1110, a computation module 1120, and a photographing module 1130, where:
the acquisition module 1110 is configured to acquire a preview photo and environment information;
the computation module 1120 is configured to obtain shooting parameters based on the preview photo and the environment information; and
the photographing module 1130 is configured to shoot using the shooting parameters.
In one possible implementation, the computation module 1120 is further configured to determine a shooting scene category based on the preview photo and the environment information, and to input the preview photo into a preset parameter decision model corresponding to the shooting scene category to obtain the shooting parameters.
In one possible implementation, the computation module 1120 is further configured to send the preview photo and the environment information to the second device, where the preview photo and the environment information are used by the second device to determine the shooting parameters;
and to receive the shooting parameters sent by the second device.
In one possible implementation, the environment information includes one or more of location information, time information, weather information, and light information.
In one possible implementation, the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation.
In one possible implementation, the first device includes a mobile phone or a tablet.
FIG. 12 is a schematic structural diagram of an embodiment of the shooting parameter training apparatus of the present application. As shown in FIG. 12, the shooting parameter training apparatus 1200 may include: an acquisition module 1210 and a training module 1220, where:
the acquisition module 1210 is configured to acquire a training data set, where the training data set includes training data subsets of multiple shooting scene categories, each training data subset includes multiple pieces of training data, and each piece of training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category; and
the training module 1220 is configured to train a preset parameter decision model using the training data set, where the preset parameter decision model takes a preview photo as input and outputs predicted shooting parameters.
In one possible implementation, the shooting scene category is determined from the taken photos in a sample data set, the sample data set includes multiple pieces of sample data, and each piece of sample data includes a taken photo, a preview photo, and preset shooting parameters.
In one possible implementation, the sample data set further includes environment information corresponding to the taken photos, and the acquisition module 1210 is further configured to: recognize the taken photos to obtain content features; determine the shooting scene based on the content features; and, if the shooting scene is indoor, determine the shooting scene category corresponding to each taken photo based on the content features; or
if the shooting scene is outdoor, determine the shooting scene category corresponding to each taken photo based on environment features and the content features, where the environment features are obtained from the environment information.
In one possible implementation, the preset parameter decision model includes multiple models, each corresponding to one shooting scene category.
The photographing apparatus 1100 provided by the embodiment shown in FIG. 11 and the shooting parameter training apparatus 1200 provided by the embodiment shown in FIG. 12 may be used to perform the technical solutions of the method embodiments shown in FIG. 1 to FIG. 6 and FIG. 7 to FIG. 9 of the present application, respectively; for their implementation principles and technical effects, further reference may be made to the related descriptions in the method embodiments.
It should be understood that the division of the modules of the photographing apparatus shown in FIG. 11 and the shooting parameter training apparatus shown in FIG. 12 is merely a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separated. Moreover, all of these modules may be implemented in the form of software invoked by a processing element, or all may be implemented in the form of hardware, or some modules may be implemented in the form of software invoked by a processing element and some in the form of hardware. For example, the detection module may be a separately established processing element, or may be integrated into a chip of the electronic device. The implementation of the other modules is similar. In addition, all or some of these modules may be integrated together, or may be implemented independently. In the implementation process, the steps of the above method or the above modules may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above method, for example, one or more Application Specific Integrated Circuits (ASICs), or one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), and the like. For another example, these modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of the present application are merely schematic illustrations and do not constitute a structural limitation on the electronic device. In other embodiments of the present application, the electronic device may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
It can be understood that, in order to implement the above functions, the above electronic device and the like include corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the embodiments of the present application.
The embodiments of the present application may divide the above electronic device and the like into functional modules according to the above method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; there may be other division manners in actual implementation.
Through the description of the above embodiments, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example for illustration. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto; any change or replacement within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

  1. A photographing method, applied to a first device, wherein the method comprises:
    obtaining a preview photo and environment information;
    obtaining shooting parameters based on the preview photo and the environment information; and
    shooting using the shooting parameters.
  2. The method according to claim 1, wherein the obtaining shooting parameters based on the preview photo and the environment information comprises:
    determining a shooting scene category based on the preview photo and the environment information; and
    inputting the preview photo into a preset parameter decision model corresponding to the shooting scene category to obtain the shooting parameters.
  3. The method according to claim 1, wherein the obtaining shooting parameters based on the preview photo and the environment information comprises:
    sending the preview photo and the environment information to a second device, wherein the preview photo and the environment information are used by the second device to determine the shooting parameters; and
    receiving the shooting parameters sent by the second device.
  4. The method according to any one of claims 1 to 3, wherein the environment information comprises one or more of location information, time information, weather information, and light information.
  5. The method according to any one of claims 1 to 4, wherein the shooting parameters comprise one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation.
  6. The method according to any one of claims 1 to 5, wherein the first device comprises a mobile phone or a tablet.
  7. A shooting parameter training method, wherein the method comprises:
    acquiring a training data set, wherein the training data set comprises training data subsets of a plurality of shooting scene categories, each training data subset comprises a plurality of pieces of training data, and each piece of training data comprises a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category; and
    training a preset parameter decision model using the training data set, wherein the preset parameter decision model takes a preview photo as input and outputs predicted shooting parameters.
  8. The method according to claim 7, wherein the shooting scene category is determined from taken photos in a sample data set, the sample data set comprises a plurality of pieces of sample data, and each piece of sample data comprises a taken photo, the preview photo, and the preset shooting parameters.
  9. The method according to claim 8, wherein the sample data set further comprises environment information corresponding to the taken photos, and the determining of the shooting scene category from the taken photos in the sample data set comprises:
    recognizing the taken photos to obtain content features;
    determining a shooting scene based on the content features; and
    if the shooting scene is indoor, determining a shooting scene category corresponding to each taken photo based on the content features; or
    if the shooting scene is outdoor, determining a shooting scene category corresponding to each taken photo based on environment features and the content features, wherein the environment features are obtained from the environment information.
  10. The method according to any one of claims 7 to 9, wherein the preset parameter decision model comprises a plurality of models, each corresponding to one shooting scene category.
  11. A first device, comprising: a memory, wherein the memory is configured to store computer program code, the computer program code comprises instructions, and when the first device reads the instructions from the memory, the first device is caused to perform the method according to any one of claims 1 to 6.
  12. A third device, comprising: a memory, wherein the memory is configured to store computer program code, the computer program code comprises instructions, and when the third device reads the instructions from the memory, the third device is caused to perform the method according to any one of claims 7 to 10.
  13. A computer-readable storage medium, comprising computer instructions, wherein, when the computer instructions are run on the first device or the third device, the first device is caused to perform the method according to any one of claims 1 to 6, or the third device is caused to perform the method according to any one of claims 7 to 10.
PCT/CN2022/107648 2021-07-29 2022-07-25 Photographing method, photographing parameter training method, electronic device, and storage medium WO2023005882A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110861888.8A CN115701113A (en) 2021-07-29 2021-07-29 Shooting method, shooting parameter training method, electronic device and storage medium
CN202110861888.8 2021-07-29

Publications (1)

Publication Number Publication Date
WO2023005882A1 true WO2023005882A1 (en) 2023-02-02

Family

ID=85086291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107648 WO2023005882A1 (en) 2021-07-29 2022-07-25 Photographing method, photographing parameter training method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN115701113A (en)
WO (1) WO2023005882A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180198988A1 (en) * 2015-09-18 2018-07-12 Panasonic Intellectual Property Management Co., Ltd. Imaging device and system including imaging device and server
CN107622281A (en) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method, device, storage medium and mobile terminal
CN110012210A (en) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Photographic method, device, storage medium and electronic equipment
CN108848308A (en) * 2018-06-27 2018-11-20 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN111405180A (en) * 2020-03-18 2020-07-10 惠州Tcl移动通信有限公司 Photographing method, photographing device, storage medium and mobile terminal

Also Published As

Publication number Publication date
CN115701113A (en) 2023-02-07

Similar Documents

Publication Publication Date Title
WO2021052232A1 (en) Time-lapse photography method and device
CN110458902B (en) 3D illumination estimation method and electronic equipment
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
WO2022017261A1 (en) Image synthesis method and electronic device
WO2021036318A1 (en) Video image processing method, and device
WO2020173379A1 (en) Picture grouping method and device
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN113810603B (en) Point light source image detection method and electronic equipment
WO2022100685A1 (en) Drawing command processing method and related device therefor
US20220245778A1 (en) Image bloom processing method and apparatus, and storage medium
CN112532892A (en) Image processing method and electronic device
CN112150499A (en) Image processing method and related device
CN114610193A (en) Content sharing method, electronic device, and storage medium
WO2022022319A1 (en) Image processing method, electronic device, image processing system and chip system
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN113467735A (en) Image adjusting method, electronic device and storage medium
WO2023005706A1 (en) Device control method, electronic device, and storage medium
WO2020078267A1 (en) Method and device for voice data processing in online translation process
WO2022135144A1 (en) Self-adaptive display method, electronic device, and storage medium
WO2022033344A1 (en) Video stabilization method, and terminal device and computer-readable storage medium
WO2023005882A1 (en) Photographing method, photographing parameter training method, electronic device, and storage medium
CN111885768B (en) Method, electronic device and system for adjusting light source
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN115705663B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE