WO2023005882A1 - Shooting method, shooting parameter training method, electronic device and storage medium - Google Patents
Shooting method, shooting parameter training method, electronic device and storage medium
- Publication number: WO2023005882A1
- PCT application: PCT/CN2022/107648
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- shooting
- category
- shooting scene
- preview
- photo
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/53—Constructional details of electronic viewfinders, e.g. rotatable or detachable
Definitions
- the embodiments of the present application relate to the field of computer technology, and in particular, to a shooting method, a shooting parameter training method, an electronic device, and a storage medium.
- the parameters that determine a high-quality shooting effect include various camera setting parameters and photo parameters, such as aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, exposure compensation, and so on.
- the automatic shooting mode mostly relies on light metering and a small number of style presets to adjust shooting parameters. Since users' environments and scenes vary greatly, setting parameters based only on light intensity causes color reproduction to vary widely across scenes, and the quality of the photos taken often fails to meet users' requirements.
- some devices support a professional photo mode.
- in this mode, only the camera ISO and shutter speed are automatically adjusted according to the light intensity.
- many other setting parameters, such as brightness and contrast, have no recommended initial values, so users must repeatedly adjust and combine them manually, and some parameters have a large adjustment range.
- the whole process is cumbersome, time-consuming, and inaccurate, which degrades the user experience.
- moreover, the threshold of the professional mode is too high: most users have limited shooting skills and professional knowledge, making it difficult for them to take satisfactory photos.
- Embodiments of the present application provide a shooting method, a shooting parameter training method, an electronic device, and a storage medium, so as to improve the shooting quality.
- the embodiment of the present application provides a shooting method applied to a first device, including: obtaining a preview photo and environment information; obtaining shooting parameters based on the preview photo and the environment information; and shooting with the shooting parameters.
- the preview photos may be photos captured by the first device through a camera and displayed on a preview interface.
- in this way, the shooting parameters are determined from the preview photo and real-time information such as the environment information, and shooting is performed with these parameters, which can improve the shooting quality.
- obtaining shooting parameters based on preview photos and environmental information includes:
- the category of the shooting scene is determined based on the preview photo and the environment information, and the preview photo is input into the preset parameter decision model corresponding to the category of the shooting scene to obtain the shooting parameters.
- in this way, the first device calculates the shooting parameters by itself, which can improve the efficiency of obtaining the shooting parameters; a minimal sketch of this on-device flow follows.
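To make the flow above concrete, the following is a minimal Python sketch of the on-device variant. The data types, the `classify_scene` helper, and the per-category model registry are illustrative assumptions; the patent does not specify an implementation.

```python
# Illustrative sketch only: classify_scene and the per-category models are
# hypothetical stand-ins for the preset models described in the text.
from dataclasses import dataclass

@dataclass
class ShootingParams:
    aperture: float                  # f-number, e.g. 1.8
    shutter_speed: float             # seconds, e.g. 1/125
    iso: int
    focal_length_mm: float
    white_balance_k: int             # color temperature in Kelvin
    exposure_compensation_ev: float

def classify_scene(preview_photo, environment_info) -> str:
    """Hypothetical scene classifier: returns a category such as
    'outdoor_snow' or 'indoor_portrait' from the preview photo and the
    environment information (location, time, weather, light)."""
    raise NotImplementedError

PARAMETER_MODELS: dict = {}          # one preset parameter decision model per category

def get_shooting_params(preview_photo, environment_info) -> ShootingParams:
    category = classify_scene(preview_photo, environment_info)
    model = PARAMETER_MODELS[category]    # model trained for this scene category
    return model.predict(preview_photo)   # predicted shooting parameters
```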
- obtaining shooting parameters based on preview photos and environmental information includes:
- the preview photo and the environment information are sent to the second device, where the preview photo and the environment information are used by the second device to determine the shooting parameters; the second device may be a server.
- the shooting parameters sent by the second device are received.
- in this way, the second device calculates the shooting parameters, which reduces the calculation burden on the first device, and the powerful computing capability of the second device can improve the accuracy of the shooting parameters; a sketch of this client-server exchange follows.
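A hedged sketch of the variant in which the second device (a server) computes the parameters. The endpoint URL and the payload field names are assumptions for illustration, not part of the patent.

```python
# Illustrative only: the endpoint and payload format are assumed, not specified.
import json
import requests

def fetch_shooting_params(preview_jpeg: bytes, environment_info: dict) -> dict:
    """Upload the preview photo and environment information to the second
    device and receive the computed shooting parameters in response."""
    resp = requests.post(
        "https://example.com/shooting-params",          # hypothetical endpoint
        files={"preview": ("preview.jpg", preview_jpeg, "image/jpeg")},
        data={"environment": json.dumps(environment_info)},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"iso": 100, "shutter_speed": 0.008, ...}
```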
- the environment information includes one or more of location information, time information, weather information, and light information.
- the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation.
- the first device includes a mobile phone or a tablet.
- the embodiment of the present application also provides a shooting parameter training method, including:
- a training data set is obtained, where the training data set includes training data subsets of multiple shooting scene categories; each training data subset includes multiple training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;
- the training data set is used to train a preset parameter decision model, where the preset parameter decision model takes preview photos as input and outputs predicted shooting parameters; a minimal training sketch follows.
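The following is a minimal training sketch under stated assumptions: preview photos are reduced to feature vectors by a hypothetical extractor, and a multi-output regressor stands in for each per-category parameter decision model. The patent does not prescribe a particular model family.

```python
# Assumptions: extract_features is a hypothetical stand-in (e.g. CNN
# embeddings), and RandomForestRegressor is one plausible multi-output
# regressor, not the patent's prescribed model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_features(preview_photo: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor for a preview photo."""
    return preview_photo.astype(np.float32).reshape(-1)[:1024]

def train_models(training_set: dict) -> dict:
    """training_set maps scene category -> list of (preview_photo, preset_params),
    where preset_params is a vector such as [iso, shutter_speed, ...]."""
    models = {}
    for category, subset in training_set.items():
        X = np.stack([extract_features(photo) for photo, _ in subset])
        y = np.stack([params for _, params in subset])
        model = RandomForestRegressor(n_estimators=100)
        model.fit(X, y)            # one decision model per scene category
        models[category] = model
    return models
```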
- the category of the shooting scene is determined from the photos taken in a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a taken photo, a preview photo, and preset shooting parameters.
- the sample data set also includes environment information corresponding to the taken photos, and determining the category of the shooting scene from the taken photos in the sample data set includes:
- the taken photos are identified to obtain content features, and the shooting scene is determined based on the content features; if the shooting scene is indoors, the shooting scene category corresponding to each taken photo is determined based on the content features; or
- the shooting scene category corresponding to each taken photo is determined based on the environment features and the content features, where the environment features are obtained from the environment information; a short sketch of this branching follows.
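A short sketch of this branching. All helper functions below are hypothetical stand-ins (with trivial stub bodies so the sketch runs); only the indoor/outdoor branching is the point.

```python
# All helpers are hypothetical stand-ins for the models described in the
# text; the stubs simply make the branching executable.
def extract_content_features(taken_photo) -> dict:
    return {"indoor": False, "subjects": ["building"]}   # stub

def extract_environment_features(environment_info) -> dict:
    return {k: environment_info.get(k) for k in ("location", "time", "weather", "light")}

def determine_scene_category(taken_photo, environment_info) -> str:
    content = extract_content_features(taken_photo)
    if content["indoor"]:
        # indoor: categorize from content features alone
        return "indoor_" + content["subjects"][0]
    # otherwise: combine environment features with content features
    env = extract_environment_features(environment_info)
    return f"{env.get('weather', 'unknown')}_{content['subjects'][0]}"
```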
- the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
- the embodiment of the present application provides a photographing device applied to the first device, including:
- an acquisition module, used to obtain a preview photo and environment information;
- a computing module, configured to obtain shooting parameters based on the preview photo and the environment information; and
- a shooting module, used for shooting with the shooting parameters.
- the calculation module is further used to determine the category of the shooting scene based on the preview photo and environmental information; input the preview photo into a preset parameter decision model corresponding to the category of the shooting scene to obtain shooting parameters.
- the calculation module is also used to send the preview photos and environmental information to the second device; wherein, the preview photos and environmental information are used by the second device to determine shooting parameters;
- the shooting parameters sent by the second device are received.
- the environment information includes one or more of location information, time information, weather information, and light information.
- the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation.
- the first device includes a mobile phone or a tablet.
- the embodiment of the present application also provides a shooting parameter training device, including:
- the obtaining module is used to obtain a training data set, where the training data set includes training data subsets of a plurality of shooting scene categories; each training data subset includes a plurality of training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;
- the training module is used to train the preset parameter decision model using the training data set, where the preset parameter decision model takes preview photos as input and outputs predicted shooting parameters.
- the category of the shooting scene is determined from the photos taken in a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a taken photo, a preview photo, and preset shooting parameters.
- the sample data set also includes environmental information corresponding to the photos taken
- the acquisition module is also used to identify the taken photos to obtain content features, and determine the shooting scene based on the content features; if the shooting scene is indoors, the shooting scene category corresponding to each taken photo is determined based on the content features; or
- the shooting scene category corresponding to each taken photo is determined based on the environment features and the content features, where the environment features are obtained from the environment information.
- the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
- the embodiment of the present application provides a first device, including:
- the memory is used to store computer program code, and the computer program code includes instructions; when the first device reads the instructions from the memory, the first device is caused to perform the following steps:
- when the instructions are executed by the first device, the step of obtaining shooting parameters based on the preview photo and the environment information includes:
- the category of the shooting scene is determined based on the preview photo and the environment information, and the preview photo is input into the preset parameter decision model corresponding to the category of the shooting scene to obtain the shooting parameters.
- when the instructions are executed by the first device, the step of obtaining shooting parameters based on the preview photo and the environment information includes:
- the preview photo and the environment information are sent to the second device, where they are used by the second device to determine the shooting parameters, and the shooting parameters sent by the second device are received.
- the environment information includes one or more of location information, time information, weather information, and light information.
- the shooting parameters include one or more of aperture size, shutter speed, sensitivity (ISO), focus mode, focal length, white balance, and exposure compensation.
- the first device includes a mobile phone or a tablet.
- the embodiment of the present application also provides a third device, including:
- the memory is used to store computer program code, and the computer program code includes instructions; when the third device reads the instructions from the memory, the third device is caused to perform the following steps:
- a training data set is obtained, where the training data set includes training data subsets of multiple shooting scene categories; each training data subset includes multiple training data, and each training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;
- the training data set is used to train a preset parameter decision model, where the preset parameter decision model takes preview photos as input and outputs predicted shooting parameters.
- the category of the shooting scene is determined from the photos taken in a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a taken photo, a preview photo, and preset shooting parameters.
- the sample data set further includes environment information corresponding to the taken photos, and when the instructions are executed by the third device, the step of determining the category of the shooting scene includes:
- the taken photos are identified to obtain content features, and the shooting scene is determined based on the content features; if the shooting scene is indoors, the shooting scene category corresponding to each taken photo is determined based on the content features; or
- the shooting scene category corresponding to each taken photo is determined based on the environment features and the content features, where the environment features are obtained from the environment information.
- the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
- an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program runs on a computer, the computer is caused to execute the method described in the first aspect.
- an embodiment of the present application provides a computer program, which is used to execute the method described in the first aspect when the above computer program is executed by a computer.
- all or part of the program in the fifth aspect may be stored in a storage medium packaged with the processor, or may be partly or wholly stored in a memory not packaged with the processor.
- FIG. 1 is a schematic diagram of a hardware structure of an embodiment of an electronic device provided by the present application.
- FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
- FIG. 3 is a schematic flow diagram of an embodiment of the shooting method provided by the present application.
- FIG. 4 is a schematic diagram of light rays provided by the embodiment of the present application.
- FIG. 5 is a schematic flowchart of a shooting scene classification method provided in an embodiment of the present application.
- FIG. 6 is a schematic flowchart of another embodiment of the shooting method provided by the present application.
- FIG. 7 is a schematic flow diagram of an embodiment of the shooting parameter training method provided by the present application.
- FIG. 8 is a schematic diagram of shooting scene classification provided by the embodiment of the present application.
- FIG. 9 is a schematic diagram of a shooting parameter training framework provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of a hardware structure of another embodiment of an electronic device provided by the present application.
- FIG. 11 is a schematic structural diagram of a photographing device provided in an embodiment of the present application.
- FIG. 12 is a schematic structural diagram of a shooting parameter training device provided by an embodiment of the present application.
- the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
- the embodiment of the present application proposes a shooting method, which can improve the shooting quality.
- the first device 10 may be user equipment (UE), such as a cellular telephone, a cordless telephone, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, handheld communication equipment, handheld computing equipment, satellite wireless equipment, Customer Premise Equipment (CPE), and/or other equipment used to communicate over wireless systems and next-generation communication systems, for example, a mobile terminal in a 5G network or a mobile terminal in a future evolved Public Land Mobile Network (PLMN).
- FIG. 1 shows a schematic structural diagram of an electronic device 100 , which may be the above-mentioned first device 10 .
- the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
- the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
- the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the electronic device 100.
- the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
- the illustrated components can be realized in hardware, software or a combination of software and hardware.
- the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
- the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in processor 110 is a cache memory.
- the memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency.
- processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
- the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
- processor 110 may include multiple sets of I2C buses.
- the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces.
- the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
- the I2S interface can be used for audio communication.
- processor 110 may include multiple sets of I2S buses.
- the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
- the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
- the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
- the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
- the UART interface is a universal serial data bus used for asynchronous communication.
- the bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel forms.
- a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
- the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
- the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
- MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
- the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
- the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
- the GPIO interface can be configured by software.
- the GPIO interface can be configured as a control signal or as a data signal.
- the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
- the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
- the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
- the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
- the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
- the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
- the charging management module 140 is configured to receive a charging input from a charger.
- the charger may be a wireless charger or a wired charger.
- the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
- the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be disposed in the processor 110 .
- the power management module 141 and the charging management module 140 may also be set in the same device.
- the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
- Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
- the antenna may be used in conjunction with a tuning switch.
- the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
- at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
- a modem processor may include a modulator and a demodulator.
- the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
- the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
- the modem processor may be a stand-alone device.
- the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
- the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
- the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
- the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos and the like.
- the display screen 194 includes a display panel.
- the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
- the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
- the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
- the ISP is used for processing the data fed back by the camera 193 .
- light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
- ISP can also perform algorithm optimization on image noise, brightness, and skin color.
- ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be located in the camera 193 .
- Camera 193 is used to capture still images or video.
- the object generates an optical image through the lens and projects it to the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other image signals.
- the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
- Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs.
- the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
- the NPU is a neural-network (NN) computing processor.
- Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
- the internal memory 121 may be used to store computer-executable program codes including instructions.
- the internal memory 121 may include an area for storing programs and an area for storing data.
- the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
- the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
- the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
- the audio module 170 may also be used to encode and decode audio signals.
- the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
- the speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
- Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
- the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
- the receiver 170B can be placed close to the human ear to receive the voice.
- the microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put the mouth close to the microphone 170C to make a sound, so that the sound signal is input into the microphone 170C.
- the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
- the earphone interface 170D is used for connecting wired earphones.
- the earphone interface 170D may be a USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
- pressure sensor 180A may be disposed on display screen 194 .
- a capacitive pressure sensor may include at least two parallel plates made of conductive material.
- the electronic device 100 determines the intensity of pressure according to the change in capacitance.
- the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
- the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
- the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
- the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and lets the lens counteract the shaking of the electronic device 100 through reverse movement, thereby achieving image stabilization.
- the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
- the air pressure sensor 180C is used to measure air pressure.
- the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
- the magnetic sensor 180D includes a Hall sensor.
- the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
- when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and accordingly set features such as automatically unlocking when the flip cover is opened.
- the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
- the distance sensor 180F is used to measure the distance.
- the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
- Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
- the light emitting diodes may be infrared light emitting diodes.
- the electronic device 100 emits infrared light through the light emitting diode.
- Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
- the electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
- the proximity light sensor 180G can also be used in a leather case mode or a pocket mode to automatically unlock and lock the screen.
- the ambient light sensor 180L is used for sensing ambient light brightness.
- the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
- the fingerprint sensor 180H is used to collect fingerprints.
- the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
- the temperature sensor 180J is used to detect temperature.
- the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing policy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 may reduce the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
- the electronic device 100 when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from being shut down abnormally due to the low temperature.
- in some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by the low temperature; a small sketch of such a policy follows.
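A small illustrative sketch of such a temperature policy. The thresholds and the control hooks are assumptions, not the device's actual interfaces.

```python
# Hypothetical thresholds and control hooks for illustration only.
THROTTLE_ABOVE_C = 45.0
HEAT_BELOW_C = 0.0
BOOST_BELOW_C = -10.0

def throttle_cpu() -> None: ...           # stub: reduce nearby processor performance
def heat_battery() -> None: ...           # stub: warm the battery 142
def boost_battery_voltage() -> None: ...  # stub: raise battery output voltage

def apply_thermal_policy(temp_c: float) -> None:
    if temp_c > THROTTLE_ABOVE_C:
        throttle_cpu()             # reduce power consumption, thermal protection
    elif temp_c < BOOST_BELOW_C:
        boost_battery_voltage()    # avoid abnormal low-temperature shutdown
    elif temp_c < HEAT_BELOW_C:
        heat_battery()             # keep the battery warm enough to operate
```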
- the touch sensor 180K is also called “touch device”.
- the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
- the touch sensor 180K is used to detect a touch operation on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- Visual output related to the touch operation can be provided through the display screen 194 .
- the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
- the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone.
- the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
- the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
- the keys 190 include a power key, a volume key and the like.
- the key 190 may be a mechanical key. It can also be a touch button.
- the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
- the motor 191 can generate a vibrating reminder.
- the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
- touch operations applied to different applications may correspond to different vibration feedback effects.
- the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
- different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) may correspond to different vibration feedback effects.
- the touch vibration feedback effect can also support customization.
- the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
- the SIM card interface 195 is used for connecting a SIM card.
- the SIM card can be connected to or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
- the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
- the SIM card interface 195 can support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
- the SIM card interface 195 is also compatible with different types of SIM cards.
- the SIM card interface 195 is also compatible with external memory cards.
- the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
- the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
- the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
- FIG. 2 is a schematic diagram of an application scenario of an embodiment of the present application.
- the above application scenario includes a first device 10 and a second device 20, wherein the second device 20 may be a cloud server.
- the second device 20 can be used to provide the first device 10 with the current shooting parameters.
- as shown in FIG. 3, which is a schematic flow diagram of an embodiment of the shooting method provided by the present application, the method includes:
- step 301: the first device 10 acquires a preview photo and environment information.
- the user may turn on the camera of the first device 10, so that the first device 10 enters a shooting mode.
- the user can click the camera application program on the desktop of the first device 10 to open the camera, or call the camera in third-party application software (eg, social software).
- the first device 10 acquires a preview image, where the preview image may be an image of the current environment captured by the current camera.
- the first device 10 may further acquire the current preview photo. It can be understood that the above preview photo is a photo corresponding to the current preview image.
- the first device 10 may also acquire current environment information, where the environment information may include information such as location, time, weather, and light.
- the above environment information is only an illustration, and does not constitute a limitation to the embodiment of the present application, and in some embodiments, more environment information may be included.
- the above location information may be obtained through a Global Positioning System (Global Positioning System, GPS) in the first device 10.
- the above time information can be obtained through the system time of the first device 10 .
- the weather information (for example, sunny, cloudy, or rainy) may be obtained through a weather application in the first device 10.
- orientation information may be further acquired, wherein the orientation information may be obtained through the magnetic sensor 180D and the gyro sensor 180B in the first device 10 , and the orientation information may be used to characterize the orientation of the first device 10 .
- specific light data can be obtained from the above meteorological information, where the light data can include the light intensity and the direction of natural light relative to the camera (for example, front light, side light, or back light, where side light can be further divided into front side light, rear side light, left side light, right side light, etc.).
- the above-mentioned light intensity (unit: Lux) of the shooting environment may be acquired by the ambient light sensor 180L of the first device 10 . If the meteorological information is sunny, the direction of the natural light relative to the camera can be further calculated. The calculation method is first to obtain the sun orientation through the geographic location and time information; back) and the orientation of the first device 10 obtained above to obtain the direction of the camera 193; finally obtain the relative position of the sun azimuth and the direction of the camera, as shown in Figure 4, thus the direction of the natural light of the sun relative to the camera 193 can be obtained category, where the direction category can be front light, side light, back light, etc.
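- for illustration only, the final comparison step above can be sketched in Python as follows, assuming the sun azimuth and the camera azimuth (both in degrees, clockwise from north) have already been obtained as described; the 45°/135° thresholds and the sub-category naming are assumptions of this sketch, not values given in the present application:

```python
def angular_diff(a: float, b: float) -> float:
    """Smallest signed angle from a to b, in degrees, in [-180, 180)."""
    return (b - a + 180.0) % 360.0 - 180.0

def classify_light_direction(sun_azimuth: float, camera_azimuth: float) -> str:
    """Map the relative position of the sun and the camera direction
    to a light-direction category (front/side/back light)."""
    d = angular_diff(camera_azimuth, sun_azimuth)
    ad = abs(d)
    if ad <= 45.0:
        return "back light"    # lens points toward the sun, subject backlit
    if ad >= 135.0:
        return "front light"   # sun is behind the photographer
    side = "right" if d > 0 else "left"
    kind = "front side" if ad >= 90.0 else "rear side"
    return f"{kind} light ({side})"

# Example: sun in the south-west (225°), camera facing north-east (45°)
print(classify_light_direction(225.0, 45.0))  # -> "front light"
```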
- Step 302 the first device 10 sends the aforementioned preview photo and environment information to the second device 20 .
- the first device 10 may send the preview photo and environment information to the second device 20 .
- the above-mentioned first device 10 can be connected with the second device 20 through a mobile communication network (for example, 4G, 5G, etc.) or a local wireless network (for example, WIFI), so that the first device 10 can send the preview photo and the environment information to the second device 20 through the above-mentioned mobile communication network or local wireless network.
- step 303 the second device 20 generates shooting parameters based on the preview photo and environment information.
- after the second device 20 receives the preview photo and the environment information sent by the first device 10, it can generate shooting parameters based on the preview photo and the environment information, wherein the shooting parameters can be the corresponding parameters used by the camera for shooting, for example, aperture size, shutter speed, ISO, focus mode, focal length, white balance, exposure compensation and other parameters. It can be understood that the above parameter examples are only illustrative and do not constitute a limitation to the embodiment of the present application; in some embodiments, more or fewer parameters may be included.
- step 3031 the second device 20 extracts features of the actual shooting scene based on the aforementioned preview photos and environmental information.
- the second device 20 can use a preset image recognition model to identify the above-mentioned preview photo, thereby obtaining the features of the actual shooting scene corresponding to the above-mentioned preview photo, wherein the features of the actual shooting scene can include content features and environmental features.
- the above-mentioned preview photo can be input into a preset image recognition model.
- the preset image recognition model may be a model using a deep image segmentation neural network.
- the above-mentioned image recognition model may also use a convolutional neural network with other image recognition functions.
- the specific type of the model is not specifically limited in this application.
- the content features in the above-mentioned preview photo can be identified.
- the content features may include main features such as portraits, buildings, snow scenes, animals, and plants.
- the above-mentioned content feature may also include the distance between the above-mentioned subject and the camera.
- through the image recognition model, it can also be determined whether the shooting scene corresponding to the preview photo is indoors or outdoors.
- the second device 20 may extract environmental features such as weather and light from the environmental information.
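- as a minimal sketch of this sub-step, the recognized content features and the extracted environmental features can be assembled into a single feature record; the `recognizer` interface and the field names below are assumptions made for illustration, not the actual model of the present application:

```python
from dataclasses import dataclass

@dataclass
class SceneFeatures:
    subject: str            # content feature, e.g. "portrait", "building"
    subject_distance: str   # e.g. "close-up", "distant view"
    indoor: bool            # indoor/outdoor decision from the model
    weather: str            # environmental feature, e.g. "sunny"
    light_direction: str    # e.g. "back light", from the step above
    light_intensity_lux: float

def extract_scene_features(preview_photo, env_info, recognizer) -> SceneFeatures:
    """Combine image-recognition output with environmental features.

    `recognizer` stands in for the preset image recognition model; the
    keys of its result are assumptions for illustration.
    """
    result = recognizer(preview_photo)  # hypothetical model interface
    return SceneFeatures(
        subject=result["subject"],
        subject_distance=result["distance"],
        indoor=result["indoor"],
        weather=env_info["weather"],
        light_direction=env_info["light_direction"],
        light_intensity_lux=env_info["lux"],
    )
```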
- step 3032 the second device 20 determines the category of the shooting scene based on the acquired features of the actual shooting scene.
- the above-mentioned shooting scene categories can be preset, and the preset shooting scenes can include multiple categories; for example, the above-mentioned shooting scene categories can include category 1 (building - distant view - outdoor - sunny - strong light), category 2 (portrait - close-up - outdoor - sunny - backlight), category 3 (aquarium - animal - indoor - dark light), etc.
- the acquired features of the actual shooting scene may be input into a preset scene classification model, such as a Bayesian network model, as events that have occurred, so as to obtain the joint probability that the actual shooting scene belongs to each preset shooting scene category. According to Bayesian theory, the more events that support a certain property, the greater the possibility that the property holds. Finally, the shooting scene category with the highest probability is selected as the category of the current shooting scene. It should be noted that, in addition to the above-mentioned Bayesian network model, other types of probabilistic graphical models can also be used as the scene classification model, and this application does not specifically limit the specific form of the above-mentioned scene classification model.
- the second device 20 may directly determine the shooting scene category according to the characteristics of the shooting scene (for example, the characteristics of the shooting scene may be the content characteristics and environmental characteristics in the above-mentioned preview photo).
- the content features and environmental features in the preview photo can be input into a preset scene classification model, such as a Bayesian network model, so that the corresponding shooting scene category can be obtained.
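- a naive-Bayes simplification of the scene classification described above can be sketched as follows: each observed feature is treated as an occurred event, and the category with the highest joint probability wins. The categories, feature names, and probability values are invented for illustration, and a full Bayesian network would additionally model dependencies between features:

```python
import math

# Hypothetical likelihood tables P(feature = value | category); a real
# model would learn these from data.
LIKELIHOODS = {
    "category 1": {"subject": {"building": 0.8, "portrait": 0.1},
                   "indoor":  {False: 0.9, True: 0.1},
                   "weather": {"sunny": 0.7, "cloudy": 0.2}},
    "category 2": {"subject": {"portrait": 0.8, "building": 0.1},
                   "indoor":  {False: 0.8, True: 0.2},
                   "weather": {"sunny": 0.6, "cloudy": 0.3}},
}
PRIORS = {"category 1": 0.5, "category 2": 0.5}

def classify_scene(features: dict) -> str:
    """Pick the shooting scene category with the highest joint
    (log-)probability given the observed features."""
    best, best_lp = None, -math.inf
    for cat, tables in LIKELIHOODS.items():
        lp = math.log(PRIORS[cat])
        for name, value in features.items():
            p = tables.get(name, {}).get(value, 1e-6)  # smooth unseen values
            lp += math.log(p)
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

print(classify_scene({"subject": "portrait", "indoor": False, "weather": "sunny"}))
# -> "category 2"
```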
- step 3033 the second device 20 loads a parameter decision model corresponding to the category of the shooting scene based on the category of the shooting scene, and uses the preview photo as an input to calculate and obtain shooting parameters.
- the second device 20 may load a parameter decision model corresponding to the category of the shooting scene.
- the preview photo may be input into the parameter decision model, the model is run, and the shooting parameters corresponding to the preview photo are obtained through calculation.
- the parameter decision model can be obtained through deep-learning pre-training; the specific training method is described in the shooting parameter training method below, and will not be repeated here.
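- a minimal sketch of this sub-step, assuming one TorchScript model file per shooting scene category; the file names, the parameter list, and the plain regression output layout are assumptions for illustration (focus mode, for instance, is categorical and would need separate handling in practice):

```python
import torch

# Hypothetical mapping from scene category to a pretrained model file.
MODEL_PATHS = {"category 1": "decision_cat1.pt", "category 2": "decision_cat2.pt"}
PARAM_NAMES = ["aperture", "shutter_speed", "iso", "focal_length",
               "white_balance", "exposure_compensation"]

def predict_shooting_params(preview: torch.Tensor, scene_category: str) -> dict:
    """Load the parameter decision model matching the scene category and
    run the preview photo (a CHW float tensor) through it."""
    model = torch.jit.load(MODEL_PATHS[scene_category])  # one model per category
    model.eval()
    with torch.no_grad():
        out = model(preview.unsqueeze(0)).squeeze(0)     # add/strip batch dim
    return dict(zip(PARAM_NAMES, out.tolist()))
```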
- Step 304 the second device 20 sends the aforementioned shooting parameters to the first device 10 .
- step 305 the first device 10 uses the above shooting parameters to shoot.
- after receiving the shooting parameters sent by the second device 20, the first device 10 initializes the shooting configuration parameters of the camera to the above shooting parameters, and can use the initialized shooting parameters for shooting.
- the user can also manually adjust the above-mentioned shooting parameters after initialization. Thereby, an actual photograph can be obtained.
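- a sketch of this initialization-then-override flow, where `camera.configure` stands in for the device camera API and the parameter fields are a subset chosen for illustration:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class ShootingParams:
    aperture: float               # f-number
    shutter_speed: float          # seconds
    iso: int
    exposure_compensation: float  # EV steps

def initialize_camera(camera, params: ShootingParams,
                      user_overrides: Optional[dict] = None) -> ShootingParams:
    """Initialize the camera with the received shooting parameters, then
    apply any manual adjustments the user made on top of them."""
    if user_overrides:
        params = replace(params, **user_overrides)
    camera.configure(params)  # stand-in for the real camera API
    return params
```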
- step 301 to step 305 are all optional steps; this application only provides one feasible embodiment, and more or fewer steps than step 301 to step 305 may also be included. This application is not limited thereto.
- the first device 10 may include a preset image recognition model, a scene classification model, and a parameter decision model.
- Fig. 6 is a schematic flowchart of another embodiment of the shooting method provided by the present application, including:
- step 601 the first device 10 acquires preview photos and environment information.
- the user may turn on the camera of the first device 10, so that the first device 10 enters a shooting mode.
- the user can click the camera application program on the desktop of the first device 10 to open the camera, or call the camera in third-party application software (eg, social software).
- the first device 10 acquires a preview image, where the preview image may be an image of the current environment captured by the current camera.
- the first device 10 may further acquire the current preview photo. It can be understood that the above preview photo is a photo corresponding to the current preview image.
- the first device 10 may also acquire current environment information, where the environment information may include information such as location, time, weather, and light.
- the above environment information is only an illustration, and does not constitute a limitation to the embodiment of the present application, and in some embodiments, more environment information may be included.
- the above location information may be obtained through a Global Positioning System (Global Positioning System, GPS) in the first device 10.
- the above time information can be obtained through the system time of the first device 10 .
- the above weather information (for example, sunny, cloudy, or rainy, etc.) may be obtained through a weather application installed on the first device 10.
- orientation information may be further obtained, wherein the above orientation information may also be obtained through the magnetic sensor 180D and the gyro sensor 180B in the first device 10, and the above orientation information may be used to characterize the orientation of the first device 10.
- specific light data can be obtained through the above meteorological information, wherein the light data can include light intensity and the direction of natural light relative to the camera (for example, front light, side light, back light, etc., wherein side light can be divided into front side light, rear side light, left light, right light, etc.).
- step 602 the first device 10 generates shooting parameters based on the preview photo and environment information.
- the first device 10 may generate shooting parameters based on the above-mentioned preview photo and environment information, wherein the shooting parameters may be the corresponding parameters used by the camera for shooting, for example, aperture size, shutter speed, ISO, focus mode, focal length, white balance, exposure compensation and other parameters. It can be understood that the above parameter examples are only illustrative and do not constitute a limitation to the embodiments of the present application; in some embodiments, more or fewer parameters may be included.
- the above specific process of generating shooting parameters may include the following sub-steps:
- step 6021 the first device 10 extracts the features of the actual shooting scene based on the above-mentioned preview photos and environmental information.
- the first device 10 may use a preset image recognition model to identify the above-mentioned preview photo, thereby obtaining the features of the actual shooting scene corresponding to the above-mentioned preview photo, wherein the features of the actual shooting scene may include content features and environmental features.
- the above-mentioned preview photo can be input into a preset image recognition model.
- the preset image recognition model may be a model using a deep image segmentation neural network.
- the above-mentioned image recognition model may also use a convolutional neural network with other image recognition functions.
- the specific type of the model is not specifically limited in this application.
- the content features in the above-mentioned preview photo can be identified.
- the content features may include main features such as portraits, buildings, snow scenes, animals, and plants.
- the above-mentioned content feature may also include the distance between the above-mentioned subject and the camera.
- through the image recognition model, it can also be determined whether the shooting scene corresponding to the preview photo is indoors or outdoors.
- the first device 10 may extract environmental features such as weather and light from the environmental information.
- Step 6022 the first device 10 determines the category of the shooting scene based on the acquired features of the actual shooting scene.
- the above-mentioned shooting scene categories can be preset, and the preset shooting scenes can include multiple categories; for example, the above-mentioned shooting scene categories can include category 1 (building - distant view - outdoor - sunny - strong light), category 2 (portrait - close-up - outdoor - sunny - backlight), category 3 (aquarium - animal - indoor - dark light), etc.
- the acquired features of the actual shooting scene may be input into a preset scene classification model, such as a Bayesian network model, as events that have occurred, so as to obtain the joint probability that the actual shooting scene belongs to each preset shooting scene category.
- the first device 10 may directly determine the shooting scene category according to the characteristics of the shooting scene (for example, the characteristics of the shooting scene may be the content characteristics and environmental characteristics in the above-mentioned preview photo).
- the content features and environmental features in the preview photo can be input into a preset scene classification model, such as a Bayesian network model, so that the corresponding shooting scene category can be obtained.
- Step 6023 based on the shooting scene category, the first device 10 loads a parameter decision model corresponding to the shooting scene category, and uses the preview photo as input to calculate and obtain shooting parameters.
- the first device 10 may load a parameter decision model corresponding to the category of the shooting scene.
- the preview photo may be input into the parameter decision model, the model is run, and the shooting parameters corresponding to the preview photo are obtained through calculation.
- the parameter decision model can be obtained through deep-learning pre-training; the specific training method is described in the shooting parameter training method below, and will not be repeated here.
- step 603 the first device 10 uses the above shooting parameters to shoot.
- the first device 10 initializes the shooting configuration parameters of the camera to the above-mentioned shooting parameters, and can use the initialized shooting parameters to perform shooting. The user can also manually adjust the initialized shooting parameters. Thereby, an actual photograph can be obtained.
- step 601 to step 603 are all optional steps; this application only provides one feasible embodiment, and more or fewer steps than step 601 to step 603 may also be included. This application is not limited thereto.
- the embodiment of the present application also provides a shooting parameter training method, which is applied to a third device 30.
- the third device 30 may be embodied in the form of a computer.
- the third device 30 may be a cloud server (for example, the aforementioned second device 20), but is not limited to the second device 20; in some embodiments, the third device 30 may also be a local desktop computer.
- the third device 30 may be a terminal device (for example, the above-mentioned first device 10).
- FIG. 7 is a schematic flow diagram of an embodiment of the shooting parameter training method provided by the present application, including:
- Step 701 acquire a sample data set.
- the above sample data set may include multiple pieces of sample data, wherein each piece of sample data may include a preview photo, a set of professional mode parameters, a taken photo and environmental information corresponding to the taken photo.
- the preview photo may be a photo in the preview screen collected by the camera;
- the professional mode parameters may be the parameters set by the user in the professional mode;
- the taken photo may be a photo obtained by the camera using the above professional mode parameters;
- the environment information can include information such as location, time, weather, and light.
- the above-mentioned photographs can be screened manually and/or by machine.
- image aesthetic tools and image quality evaluation tools can be used to screen the above-mentioned photographs, so that high-quality photographs can be selected.
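- as a sketch, machine screening can be a simple thresholding pass, with two scoring callables standing in for the image aesthetic and image quality evaluation tools; the threshold values are invented for illustration:

```python
def screen_samples(samples, aesthetic_score, quality_score,
                   min_aesthetic=0.6, min_quality=0.7):
    """Keep only sample records whose taken photo both tools rate highly.

    `aesthetic_score` and `quality_score` are hypothetical callables
    returning a score in [0, 1] for a photo.
    """
    return [s for s in samples
            if aesthetic_score(s["taken_photo"]) >= min_aesthetic
            and quality_score(s["taken_photo"]) >= min_quality]
```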
- Table 1 exemplarily shows the above sample data set.
- the above sample data set includes N sample data, and each sample data includes preview photos, professional mode parameters, taken photos, and environmental information.
- step 702 input each photograph taken in the above sample data set into a preset image recognition model for recognition to obtain content features.
- the preset image recognition model may be a model using a deep image segmentation neural network.
- the above-mentioned image recognition model may also use a convolutional neural network with other image recognition functions.
- the specific type of the model is not particularly limited.
- content features corresponding to the aforementioned photographs can be obtained, wherein the content features can include subject features such as portraits, buildings, snow scenes, animals, plants, etc.
- the above-mentioned content feature may also include the distance between the above-mentioned subject and the camera.
- through the above image recognition model, it can also be determined whether the shooting scene corresponding to the above photo is indoors or outdoors.
- Step 703 classify the shooting scene based on the content feature, and obtain the shooting scene category.
- the shooting scene classification of each taken photo in the above-mentioned sample data set can be performed based on the above-mentioned content features, so that the shooting scene category of each taken photo can be obtained.
- FIG. 8 is a schematic flow chart of the above shooting scene classification.
- the shooting scene of the above-mentioned photo can be classified based on the environmental characteristics and content characteristics, thereby obtaining the shooting scene category, wherein the above-mentioned environmental characteristics can be obtained through the above-mentioned environmental information.
- the above shooting scene categories may include multiple categories, for example, category 1 (building - distant view - outdoor - sunny - strong light), category 2 (portrait - close-up - outdoor - sunny - backlight), category 3 (aquarium - animal - indoor - dark light), etc.
- the shooting scene category may be determined directly according to the content characteristics.
- Step 704: construct a training data set.
- the taken photos can be grouped, and the grouping can be carried out according to the shooting scene category; for example, photos of the same shooting scene category can be grouped together.
- the corresponding preview photo and professional mode parameters can be found according to each taken photo. For example, taking Table 1 as an example, the corresponding preview photo 1 and professional mode parameters 1 can be found through taken photo 1.
- multiple sets of training data can be obtained, and the multiple sets of training data constitute a training data set, wherein each set of training data includes a plurality of training data under the same shooting scene category, and each piece of training data includes a preview photo and professional mode parameters under that shooting scene category.
- Table 2 exemplarily shows the above training data set.
- the above-mentioned training data set includes M shooting scene categories, and each shooting scene category can include multiple pieces of training data, wherein each piece of training data can include a preview photo and professional mode parameters belonging to that shooting scene category.
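- step 704 can be sketched as one grouping pass over the sample data, where `classify` stands in for steps 702-703 (recognition plus scene classification) and the record keys are assumptions for illustration:

```python
from collections import defaultdict

def build_training_set(samples, classify):
    """Group (preview photo, professional mode parameters) pairs by the
    shooting scene category of the corresponding taken photo."""
    training_set = defaultdict(list)
    for s in samples:
        category = classify(s["taken_photo"], s["env_info"])
        training_set[category].append((s["preview_photo"], s["pro_params"]))
    return dict(training_set)  # {category: [(preview, params), ...]}
```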
- Step 705 based on the above training data set, train the preset parameter decision model.
- the above training data set may be divided into a training set and a validation set.
- the split ratio between the training set and the validation set may be preset, which is not specifically limited in this embodiment of the present application.
- the above training set can be input into a preset parameter decision model for training.
- each shooting scene category can correspond to a parameter decision model
- multiple parameter decision models can be trained respectively.
- the preview photos in the above training set can be input into the above-mentioned preset parameter decision-making model for calculation, so that the predicted shooting parameters can be obtained.
- the preview photos input above can be data in YUV format or in RGB format, which is not specifically limited in this embodiment of the present application.
- the above-mentioned predicted shooting parameters may include parameters such as aperture size, shutter speed, ISO, focusing mode, focal length, white balance, exposure compensation and the like.
- Fig. 9 is a schematic diagram of the training architecture of the parameter decision model. As shown in FIG. 9 , when training the parameter decision model of any specific shooting scene category, the preview photo is the input data, and the output data is the predicted shooting parameters.
- the professional mode parameters in the above training set can be used as label data.
- the training data in the above training set may include feature data and label data.
- the feature data can be used for input and calculation, for example, the feature data can include a preview photo and the like.
- the label data can be used for comparison with the output during the training process, so that the loss of the model converges through training; here, the label data can be the pre-identified professional mode parameters.
- the objective function may be the mean square error of the predicted shooting parameters and the professional mode parameters, that is, the mean square error of the predicted data and the label data.
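- a minimal PyTorch sketch of training one parameter decision model for one shooting scene category, with the professional mode parameters as label data and the mean square error as the objective; the backbone, the output size, and the treatment of every parameter as a plain regression target are assumptions of this sketch, not the architecture of the present application:

```python
import torch
from torch import nn

class ParamDecisionModel(nn.Module):
    """Tiny stand-in for the per-category parameter decision model."""
    def __init__(self, n_params: int = 7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, x):           # x: (B, 3, H, W) preview photos
        return self.head(self.backbone(x))

def train_category_model(loader, epochs: int = 10) -> nn.Module:
    """Minimize the MSE between predicted shooting parameters and the
    professional mode parameters (the label data)."""
    model = ParamDecisionModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for previews, pro_params in loader:  # one batch of training data
            opt.zero_grad()
            loss = loss_fn(model(previews), pro_params)
            loss.backward()
            opt.step()
    return model
```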
- a parameter decision-making model corresponding to the shooting scene category can be obtained.
- the training result can also be verified through the above-mentioned validation set. If the preset requirements are met after validation, the training is completed; if the preset requirements are not met, further training can be performed, for example, the sample data set can be reacquired, and steps 701-705 repeated for retraining.
- in this way, the neural network can improve the extraction of environmental features in specific scenes, accelerate the convergence of the model, avoid abnormal situations such as over-fitting or failure to converge, and thereby improve the adaptability of the model to the scene.
- through the above training, the parameter decision model can be obtained, so that the second device 20 can perform calculation on the preview photos and environment information sent by the first device 10 based on the above parameter decision model to obtain the corresponding shooting parameters; the calculation load of the first device 10 can thereby be reduced, and the shooting quality can be improved.
- FIG. 10 shows a schematic structural diagram of an electronic device 1000 , which may be the above-mentioned third device 30 .
- the above-mentioned electronic device 1000 may include: at least one processor; and at least one memory communicatively connected to the above-mentioned processor, wherein the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the shooting parameter training method provided in the embodiments of the present application.
- FIG. 10 shows a block diagram of an exemplary electronic device 1000 suitable for implementing embodiments of the present application.
- the electronic device 1000 shown in FIG. 10 is only an example, and should not limit the functions and scope of use of the embodiments of the present application.
- electronic device 1000 takes the form of a general-purpose computing device.
- Components of electronic device 1000 may include, but are not limited to: one or more processors 1010 , memory 1020 , communication bus 1040 connecting different system components (including memory 1020 and processor 1010 ), and communication interface 1030 .
- Communication bus 1040 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
- these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
- Electronic device 1000 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the electronic device and include both volatile and nonvolatile media, removable and non-removable media.
- the memory 1020 may include a computer system-readable medium in the form of a volatile memory, such as a random access memory (Random Access Memory; hereinafter referred to as RAM) and/or a cache memory.
- the electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
- a disk drive for reading and writing to a removable nonvolatile disk (such as a "floppy disk"), and an optical disc drive for reading and writing to a removable nonvolatile optical disc (such as a Compact Disc Read Only Memory (CD-ROM), a Digital Video Disc Read Only Memory (DVD-ROM), or other optical media) may also be provided.
- each drive may be connected to communication bus 1040 through one or more data media interfaces.
- the memory 1020 may include at least one program product, which has a set of (for example, at least one) program modules configured to execute the functions of the various embodiments of the present application.
- a program/utility having a set of (at least one) program modules may be stored in the memory 1020; such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment.
- the program modules generally perform the functions and/or methods in the embodiments described herein.
- the electronic device 1000 may also communicate with one or more external devices (such as keyboards, pointing devices, displays, etc.), with one or more devices that enable the user to interact with the electronic device 1000, and/or with any device (e.g., network card, modem, etc.) that enables the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through the communication interface 1030.
- the electronic device 1000 can also communicate with one or more networks (such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) through a network adapter, and the network adapter can communicate with the other modules of the electronic device 1000 through the communication bus 1040.
- the processor 1010 executes various functional applications and data processing by running the programs stored in the memory 1020, for example, implementing the shooting parameter training method provided in the embodiment of the present application.
- FIG. 11 is a schematic structural diagram of an embodiment of the photographing device of the present application. As shown in FIG. 11, the above-mentioned photographing device 1100 is applied to the first device 10, and may include: an acquisition module 1110, a calculation module 1120, and a photographing module 1130; wherein,
- the acquisition module 1110 is configured to acquire preview photos and environment information;
- the calculation module 1120 is configured to obtain shooting parameters based on the preview photos and environment information;
- the shooting module 1130 is configured to use the shooting parameters to shoot.
- the calculation module 1120 is further configured to determine the category of the shooting scene based on the preview photo and environmental information; input the preview photo into a preset parameter decision model corresponding to the category of the shooting scene to obtain shooting parameters.
- the calculation module 1120 is further configured to send the preview photo and environment information to the second device, where the preview photo and environment information are used by the second device to determine the shooting parameters, and to receive the shooting parameters sent by the second device.
- the environment information includes one or more of location information, time information, weather information, and light information.
- the shooting parameters include one or more of aperture size, shutter speed, sensitivity ISO, focus mode, focal length, white balance, and exposure compensation.
- the first device includes a mobile phone or a tablet.
- Fig. 12 is a schematic structural diagram of an embodiment of the shooting parameter training device of the present application.
- the shooting parameter training device 1200 may include: an acquisition module 1210 and a training module 1220; wherein,
- the acquisition module 1210 is used to acquire a training data set, wherein the training data set includes training data subsets of a plurality of shooting scene categories, each training data subset includes a plurality of training data, and each piece of training data includes a preview photo corresponding to the shooting scene category and preset shooting parameters corresponding to the shooting scene category;
- the training module 1220 is configured to use the training data set to train the preset parameter decision model, wherein the preset parameter decision model is used to input preview photos and output predicted shooting parameters.
- the category of the shooting scene is determined by the photos taken in the sample data set, and the sample data set includes a plurality of sample data, and each sample data includes a photo taken, a preview photo and preset shooting parameters.
- the sample data set also includes environmental information corresponding to the photos taken
- the acquisition module 1210 is also used to: identify the taken photos to obtain content features; determine the shooting scene based on the content features; if the shooting scene is indoor, determine the shooting scene category corresponding to each taken photo based on the content features; or
- determine the shooting scene category corresponding to each taken photo based on the environmental features and the content features, wherein the environmental features are obtained from the environment information.
- the preset parameter decision model includes multiple models, and each model corresponds to a shooting scene category.
- the shooting device 1100 provided by the embodiment shown in FIG. 11 and the shooting parameter training device 1200 provided by the embodiment shown in FIG. 12 can be used to implement the technical solutions of the method embodiments shown in FIG. 1 to FIG. 6 and FIG. 7 to FIG. 9 of this application, respectively; for their implementation principles and technical effects, further reference may be made to the relevant descriptions in the method embodiments.
- each module of the shooting device shown in FIG. 11 and the shooting parameter training device shown in FIG. 12 may be fully or partially integrated into one physical entity in actual implementation, or may be physically separated.
- these modules can all be implemented in the form of software called by the processing element; they can also be implemented in the form of hardware; some modules can also be implemented in the form of software called by the processing element, and some modules can be implemented in the form of hardware.
- the detection module may be a separately established processing element, or may be integrated into a certain chip of the electronic device for implementation.
- the implementation of other modules is similar.
- all or part of these modules can be integrated together, and can also be implemented independently.
- each step of the above method or each module above can be completed by an integrated logic circuit of hardware in the processor element or an instruction in the form of software.
- the above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASIC), or one or more Digital Signal Processors (DSP), or one or more Field Programmable Gate Arrays (FPGA), etc.
- these modules can be integrated together and implemented in the form of a System-On-a-Chip (hereinafter referred to as SOC).
- the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device.
- the electronic device may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
- the above-mentioned electronic devices include corresponding hardware structures and/or software modules for performing each function.
- the embodiments of the present application can be implemented in the form of hardware, or a combination of hardware and computer software, in combination with the example units and algorithm steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware, or by computer software driving hardware, depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the embodiments of the present application.
- the embodiment of the present application may divide the above-mentioned electronic equipment into functional modules according to the above-mentioned method examples.
- each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
- the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
- Each functional unit in each embodiment of the embodiment of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
- the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
- the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage medium includes: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk, and other various media capable of storing program codes.
Abstract
Embodiments of the present application provide a shooting method, a shooting parameter training method, an electronic device and a storage medium, which relate to the field of computer technology. The method includes: acquiring a preview photo and environment information; obtaining a shooting parameter based on the preview photo and the environment information; and shooting using the shooting parameter. The methods described in the embodiments of the present application can improve shooting quality.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110861888.8A (published as CN115701113A) | 2021-07-29 | 2021-07-29 | Shooting method, shooting parameter training method, electronic device and storage medium
CN202110861888.8 | 2021-07-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023005882A1 (fr)
Family
ID=85086291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/107648 WO2023005882A1 (fr) | 2021-07-29 | 2022-07-25 | Procédé de photographie, procédé d'apprentissage de paramètre de photographie, dispositif électronique et support de stockage |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115701113A (fr) |
WO (1) | WO2023005882A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118540449A (zh) * | 2023-02-23 | 2024-08-23 | Huawei Technologies Co., Ltd. | Image processing method and terminal device
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622281A (zh) * | 2017-09-20 | 2018-01-23 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image classification method and apparatus, storage medium, and mobile terminal
US20180198988A1 (en) * | 2015-09-18 | 2018-07-12 | Panasonic Intellectual Property Management Co., Ltd. | Imaging device and system including imaging device and server |
CN108848308A (zh) * | 2018-06-27 | 2018-11-20 | Vivo Mobile Communication Co., Ltd. | Shooting method and mobile terminal
CN110012210A (zh) * | 2018-01-05 | 2019-07-12 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Photographing method and apparatus, storage medium, and electronic device
CN111405180A (zh) * | 2020-03-18 | 2020-07-10 | Huizhou TCL Mobile Communication Co., Ltd. | Photographing method and apparatus, storage medium, and mobile terminal
- 2021-07-29: CN application CN202110861888.8A filed; published as CN115701113A (zh), status pending.
- 2022-07-25: PCT application PCT/CN2022/107648 filed; published as WO2023005882A1 (fr).
Also Published As
Publication number | Publication date |
---|---|
CN115701113A (zh) | 2023-02-07 |
Legal Events
- NENP: Non-entry into the national phase. Ref country code: DE.
- 122: Ep: pct application non-entry in european phase. Ref document number: 22848487; Country of ref document: EP; Kind code of ref document: A1.