WO2021136050A1 - Image photographing method and related apparatus

Image photographing method and related apparatus

Info

Publication number
WO2021136050A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
target
focus
electronic device
Prior art date
Application number
PCT/CN2020/138859
Other languages
English (en)
Chinese (zh)
Inventor
徐思
周承涛
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021136050A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/67 - Focus control based on electronic image sensor signals

Definitions

  • This application relates to the field of image processing technology, and in particular to an image shooting method and related devices.
  • The embodiments of the present application provide an image shooting method and related devices. After a conventional focusing method is used to focus on the target focus subject and the corresponding image is obtained, if the sharpness of the target focus subject in that image is less than a certain threshold, indicating that the conventional method has difficulty reaching the in-focus position in the current scene, a focusing method based on a neural network model, which adapts better to the scene, is used for focusing so as to obtain a clear image.
  • the first aspect of the embodiments of the present application provides an image capturing method, which can be applied to a terminal device with a touch screen and a camera, or to an electronic device in the terminal device.
  • The method may include: in response to a user's operation of opening the camera application, starting the camera and entering shooting mode; after entering shooting mode, determining the target focus subject in the current scene, that is, the subject in the current scene for which a clear image needs to be obtained; focusing on the target focus subject in the current scene through a first focusing method to obtain a first image; and, when the sharpness of the target focus subject in the first image is less than a preset threshold, focusing on the target focus subject in the current scene through a second focusing method to obtain a second image. The lens positions corresponding to the first and second focusing methods are different; that is, the lens position at which the first image is obtained by the first focusing method differs from the lens position at which the second image is obtained by the second focusing method.
  • the first focusing method may include a phase focusing method or a laser focusing method
  • the second focusing method is a focusing method based on a neural network model
  • The sharpness of the target focus subject in the second image, captured based on the second focusing method, is not less than the preset threshold.
  • Note that the value of the preset threshold is not the maximum value of the sharpness scale.
  • When this is the case, the focusing method based on the neural network model is used for focusing, as sketched below.
  • Because the neural network model is trained on image data from a large number of scenes, this focusing method adapts better to different scenes and can reach the in-focus position in most of them, so that a clear image of the target focus subject can be captured.
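  • To make this two-stage flow concrete, the following is a minimal sketch in Python; `first_focus`, `second_focus`, `sharpness_of`, and the default threshold value are hypothetical stand-ins, since the application does not specify an API.

```python
from typing import Callable, TypeVar

Image = TypeVar("Image")

def capture(first_focus: Callable[[], Image],
            second_focus: Callable[[], Image],
            sharpness_of: Callable[[Image], float],
            threshold: float = 0.9) -> Image:
    """Two-stage focusing: try the conventional (first) focusing method,
    and fall back to the neural-network-based (second) method when the
    target focus subject is not sharp enough."""
    first_image = first_focus()           # e.g. phase or laser focusing
    if sharpness_of(first_image) >= threshold:
        return first_image                # output the first image as the target image
    return second_focus()                 # NN-based focusing at a different lens position
```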
  • the method further includes: when the sharpness of the target focused subject in the first image is less than a preset threshold, outputting the second image as the target image.
  • The target image may be the preview image displayed in the preview area of the shooting interface; that is, when the sharpness of the target focus subject in the first image is less than the preset threshold, the second image is output as the preview image on the shooting interface.
  • The target image may also be an image stored in a storage medium (for example, a non-volatile memory) in response to a user's photographing instruction.
  • the method further includes: when the sharpness of the target focused subject in the first image is not less than a preset threshold, outputting the first image as the target image.
  • the target image may be a preview image on the photographing interface, or may be an image stored in a storage medium in response to a user's photographing instruction.
  • Focusing on the target focus subject in the current scene by the second focusing method includes: inputting the first image, marked with the target focus subject, into the neural network model to obtain a first output result of the neural network model, where the first output result is the sharpness of the target focus subject in the first image; and adjusting the lens position according to that sharpness to obtain the second image.
  • That is, the first image marked with the target focus subject can be input into the neural network model, the sharpness of the target focus subject in the first image can be obtained from the model, the lens position to move to can then be determined from that sharpness, and the lens can be moved there, thereby completing focusing and obtaining the second image.
  • The movement value of the lens is determined according to the sharpness of the target focus subject in the first image and the full range, where the full range is the maximum range over which the lens can move. The movement value is the difference between the full range and a first product, where the first product is the product of the sharpness and the full range; the lens is then moved to the target position according to the movement value.
  • For example, if the sharpness of the target focus subject in the first image is 80%, the movement value of the lens is the difference between the full range and (full range * 80%), that is, the product of the full range and 20% (1 - 80%); see the sketch below.
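  • As a worked sketch of this formula (units are hypothetical; the application does not fix them, so a 1000-step actuator range is assumed here):

```python
def lens_movement(sharpness: float, full_range: float) -> float:
    """Movement value = full range - (sharpness * full range),
    i.e. full_range * (1 - sharpness)."""
    return full_range - sharpness * full_range

print(lens_movement(0.8, 1000.0))  # 200.0: the 80%-sharpness example above
```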
  • The neural network model may be trained on image training data marked with the focus subject and the sharpness of the focus subject; that is, before training, a large number of images marked with the focus subject and its sharpness are obtained as training data for the neural network model.
  • The training data can be obtained by shooting a large number of scenes with a mobile phone, camera, or other imaging equipment. Specifically, in a given scene, the phone can move the lens back and forth to capture images at different lens positions and determine the focus subject of each image; after images at different lens positions have been obtained, the sharpness of the focus subject in each image can be labeled based on the lens position at which that image was captured.
  • In the process of training the neural network model, part of the training data is first selected and input into the model, and the model's sharpness prediction is obtained through the forward-propagation algorithm. Because this training data is pre-labeled with the correct sharpness, the gap between the predicted sharpness and the labeled sharpness can be calculated, and the parameter values of the neural network model are then updated through the back-propagation algorithm based on this gap, so that the model's predictions move closer to the true values. Since the neural network model is trained on images from a large number of varied scenes, it adapts well to different scenes; the sharpness of the current image can therefore be obtained accurately from the model, and the phone can control the lens position based on that sharpness so as to achieve focus and obtain a clear image. A minimal sketch of this training loop follows.
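  • The sketch below uses a toy one-layer sigmoid regressor in NumPy and random stand-in data in place of the application's (unspecified) network architecture and labeled image set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: one feature vector per image, labeled with the
# sharpness of its focus subject (a value in [0, 1]).
features = rng.normal(size=(256, 16))
labels = rng.uniform(0.0, 1.0, size=256)

w, b, lr = np.zeros(16), 0.0, 0.1            # toy model: sigmoid(x @ w + b)

for step in range(500):
    batch = rng.integers(0, 256, size=32)    # select part of the training data
    x, y = features[batch], labels[batch]
    pred = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # forward propagation
    gap = pred - y                             # gap between prediction and label
    grad = gap * pred * (1.0 - pred)           # backpropagation through the sigmoid
    w -= lr * (x.T @ grad) / len(batch)        # update parameters to shrink the gap
    b -= lr * grad.mean()
```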
  • The method further includes: when the first image is a multi-depth-of-field image and the target focus subject is located in the background area of that image, switching the target focus subject to a subject in the foreground area of the image to obtain a switched target focus subject. In that case, when the sharpness of the switched target focus subject in the first image is less than the preset threshold, the switched target focus subject in the current scene is focused by the second focusing method to obtain the second image. That is, after the target focus subject is switched to a subject in the foreground area of the multi-depth-of-field image, the second focusing method is used to focus on the switched target focus subject in the current scene, so as to capture a second image in which the switched target focus subject is clear (see the sketch below for the switching rule).
  • Focusing on the switched target focus subject in the current scene by the second focusing method includes: inputting the first image, marked with the switched target focus subject, into the neural network model to obtain a second output result of the neural network model, where the second output result is the sharpness of the switched target focus subject in the first image; and adjusting the lens position according to that sharpness to obtain the second image.
  • The method further includes: while the switched target focus subject is being focused by the second focusing method, displaying a focus frame on the shooting interface according to the switched target focus subject, where the focus frame marks the switched target focus subject to remind the user of the current target focus subject.
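  • One possible reading of the subject-switching rule as code; the per-subject depth estimate and the foreground cutoff are hypothetical details the application leaves open:

```python
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    depth: float  # estimated distance; larger means further away (assumption)

def switch_to_foreground(target: Subject, subjects: list[Subject],
                         foreground_max_depth: float) -> Subject:
    """If the target focus subject lies in the background area of a
    multi-depth-of-field image, switch to the nearest foreground subject."""
    if target.depth <= foreground_max_depth:
        return target  # already in the foreground area: no switch needed
    foreground = [s for s in subjects if s.depth <= foreground_max_depth]
    return min(foreground, key=lambda s: s.depth) if foreground else target
```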
  • The method further includes: displaying prompt information 1 on the shooting interface, where prompt information 1 prompts the user that the focusing method is being switched or that the mode of focusing by the second focusing method is being enabled. That is, while the target focus subject is being focused by the second focusing method, prompt information 1 can be displayed on the shooting interface as a reminder.
  • The method may further include: when the sharpness of the target focus subject in the second image is less than the preset threshold, displaying prompt information 2 on the shooting interface, where prompt information 2 prompts the user to adjust the shooting distance.
  • The method may further include: when the sharpness of the target focus subject in the second image is less than the preset threshold, displaying prompt information 3 on the shooting interface, where prompt information 3 prompts the user to switch the camera or switch the shooting mode.
  • A second aspect of the embodiments of the present application provides an image capturing device, including: a processing unit configured to determine a target focus subject in the current scene; the processing unit being further configured to focus on the target focus subject in the current scene through a first focusing method to obtain a first image; and the processing unit being further configured to, when the sharpness of the target focus subject in the first image is less than a preset threshold, focus on the target focus subject in the current scene through a second focusing method to obtain a second image in which the sharpness of the target focus subject is not less than the preset threshold. The lens positions corresponding to the first and second focusing methods are different, and the second focusing method is a focusing method based on a neural network model.
  • the image capturing device further includes an output unit configured to output the second image as the target image when the sharpness of the target focused subject in the first image is less than a preset threshold.
  • the image capturing device further includes an output unit configured to output the first image as the target image when the sharpness of the target focus subject in the first image is not less than a preset threshold.
  • The processing unit is further configured to input the first image marked with the target focus subject into the neural network model to obtain a first output result of the neural network model, where the first output result is the sharpness of the target focus subject in the first image, and to adjust the lens position according to that sharpness to obtain the second image.
  • The processing unit is further configured to determine the movement value of the lens according to the sharpness of the target focus subject in the first image and the full range, where the full range is the maximum range over which the lens can move; the movement value is the difference between the full range and a first product, the first product being the product of the sharpness and the full range; and to move the lens to the target position according to the movement value.
  • The processing unit is further configured to, when the first image is a multi-depth-of-field image and the target focus subject is located in the background area of that image, switch the target focus subject to a subject in the foreground area of the image to obtain a switched target focus subject; and further configured to, when the sharpness of the switched target focus subject in the first image is less than the preset threshold, focus on the switched target focus subject in the current scene through the second focusing method to obtain the second image.
  • the image capturing device further includes a display unit for displaying a focus frame on the shooting interface according to the switched target focus subject, and the focus frame is used to mark the switched target focus subject.
  • The image capturing device further includes a display unit for displaying prompt information on the shooting interface, where the prompt information prompts the user that the focusing method is being switched or that the mode of focusing by the second focusing method is being enabled.
  • the neural network model is obtained through training of image training data marked with the focused subject and the sharpness of the focused subject.
  • the first focusing method includes a phase focusing method or a laser focusing method.
  • the third aspect of the embodiments of the present application provides an electronic device, including: a touch screen, where the touch screen includes a touch-sensitive surface and a display; a camera; a processor; a memory; a plurality of application programs; and a computer program.
  • the computer program is stored in the memory, and the computer program includes instructions.
  • the instruction is executed by the electronic device, the electronic device is caused to execute the image capturing method in any one of the possible implementations of the first aspect.
  • a fourth aspect of the embodiments of the present application provides an electronic device, including a processor and a memory.
  • the memory is coupled with the processor, and the memory is used to store computer instructions.
  • When the processor executes the computer instructions, the terminal device is caused to execute the image shooting method in any one of the possible implementations of the first aspect.
  • a fifth aspect of the embodiments of the present application provides an electronic device, including a memory and multiple processors.
  • The memory is coupled with the multiple processors and is used to store computer instructions; when the processors execute the computer instructions, the terminal device is caused to execute the image capturing method in any one of the possible implementations of the first aspect.
  • The multiple processors may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Among these, the AP, modem processor, GPU, ISP, controller, video codec, DSP, baseband processor, and so on can be used for focusing by the first focusing method, and the NPU can be used for focusing by the second focusing method.
  • A sixth aspect of the embodiments of the present application provides a wireless communication device, including a processor and an interface circuit, where the processor is coupled to a memory through the interface circuit and is used to execute the program code in the memory to implement the image capturing method in any possible implementation of the first aspect.
  • a seventh aspect of the embodiments of the present application provides a computer storage medium, including computer instructions, which when the computer instructions run on an electronic device, cause the electronic device to execute the image capturing method in any one of the possible implementations of the first aspect.
  • the eighth aspect of the embodiments of the present application provides a computer program product, which when the computer program product runs on a terminal device, causes the electronic device to execute the image capturing method in any one of the possible implementations of the first aspect.
  • The embodiments of the present application provide an image shooting method and related devices. After a conventional focusing method is used to focus on the target focus subject and the corresponding image is obtained, if the sharpness of the target focus subject in that image is less than a certain threshold, indicating that the conventional method has difficulty reaching the in-focus position in the current scene, a focusing method based on a neural network model, which adapts better to the scene, is used for focusing so as to obtain a clear image.
  • FIG. 1a is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the application.
  • FIG. 1b is a schematic diagram of the software structure of an electronic device provided by an embodiment of this application.
  • Figure 1c is a schematic diagram of a set of display interfaces provided by an embodiment of the application.
  • FIG. 2 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • Figure 5a is a schematic diagram of a receptive field provided by an embodiment of the application.
  • FIG. 5b is a schematic structural diagram of a neural network model provided by an embodiment of this application.
  • FIG. 5c is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of lens movement provided by an embodiment of the application.
  • FIG. 9A is a schematic diagram of a set of display interfaces provided by an embodiment of the application.
  • FIG. 9B is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 10 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 13 is a schematic flowchart of an image shooting method provided by an embodiment of the application.
  • FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
  • FIG. 15 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 16 is a schematic structural diagram of a wireless communication device provided by an embodiment of this application.
  • The terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or the number of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present embodiment, unless otherwise specified, "plurality" means two or more.
  • In conventional focusing, auxiliary information is obtained through various devices on the terminal, and focusing is then performed based on that information; auxiliary focusing methods of this kind include the phase focusing method, the laser focusing method, the contrast focusing method, and the binocular (dual-camera) focusing method.
  • However, the auxiliary information obtained by these focusing methods is often limited and can be erroneous, which ultimately makes sharp focus unattainable.
  • In the phase focusing method, a separation lens and a pair of linear sensors are added in hardware for image processing. After the incoming light is split into two images by the separation lens, the linear sensors detect the distance between the two images, and the lens is pushed to the in-focus position accordingly to ensure image clarity.
  • However, the phase focusing method often has difficulty predicting the in-focus position, which makes good focusing results hard to achieve.
  • The laser focusing method predicts the distance between the target object and the lens by means of hardware (such as a laser emitting device and a rangefinder), converts that distance into a corresponding lens position, and pushes the lens to the predicted in-focus position.
  • Specifically, the laser emitting device emits infrared laser light, which strikes the surface of the target object, is reflected by it, and is received by the rangefinder.
  • The distance between the target object and the lens can then be calculated from the time difference between the emission and reception of the infrared laser, and focusing can be performed based on this distance; the time-of-flight calculation is sketched below.
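  • The distance calculation reduces to time-of-flight ranging, as in the following sketch (the nanosecond example values are illustrative only):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def laser_distance(t_emit: float, t_receive: float) -> float:
    """The infrared laser travels to the target object and back, so
    distance = c * (time difference) / 2."""
    return SPEED_OF_LIGHT * (t_receive - t_emit) / 2.0

print(laser_distance(0.0, 10e-9))  # a 10 ns round trip is roughly 1.5 m
```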
  • However, because the laser focusing method uses infrared laser light to perceive the focusing distance, it is easily disturbed by ambient light, for example in scenes with direct sunlight or strong direct lighting, where the rangefinder may receive other reflected light. This makes it difficult to calculate the distance between the target object and the lens accurately, resulting in poor focusing.
  • The contrast focusing method detects the contrast of the captured image, continuously adjusting the lens position until the maximum contrast is found, and finally settles on the lens position that maximizes image contrast, which is the in-focus position; a sketch of this search follows below.
  • However, the contrast focusing method has difficulty finding the maximum-contrast position in flat-area scenes, small-target-object scenes, and night scenes, and it is also susceptible to external factors such as hand shake and environmental changes (such as flashing lights), resulting in out-of-focus images.
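  • The maximum-contrast search can be sketched as a simple sweep; a real camera would search incrementally rather than sampling every position, and `contrast_at` is a hypothetical measurement callback:

```python
from typing import Callable, Sequence

def contrast_autofocus(contrast_at: Callable[[int], float],
                       positions: Sequence[int]) -> int:
    """Move the lens through candidate positions and keep the one that
    maximizes image contrast, i.e. the in-focus position."""
    best_pos = positions[0]
    best_contrast = contrast_at(best_pos)
    for pos in positions[1:]:
        c = contrast_at(pos)
        if c > best_contrast:
            best_pos, best_contrast = pos, c
    return best_pos
```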
  • An embodiment of the present application therefore provides an image shooting method that can be applied to electronic devices. After a conventional focusing method is used to focus on the target focus subject and the corresponding image is obtained, if the conventional method cannot reach the in-focus position in the current scene, a focusing method based on a neural network model, which adapts better to the scene, is used for focusing.
  • The neural network model, obtained by training on images from a large number of scenes, provides the sharpness of the target focus subject in the image as auxiliary information for determining the lens position, so that a clear image can be obtained.
  • The image shooting method provided in the embodiments of the present application can be applied to electronic equipment, which may be a terminal device or an electronic device that includes a processor and a memory and is deployed on the terminal device.
  • Terminal devices may include mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and other equipment.
  • the embodiments of the present application do not impose any restrictions on the specific types of terminal equipment and electronic devices.
  • FIG. 1a shows a schematic structural diagram of an electronic device 100.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and so on.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect earphones and play audio through earphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide applications on the electronic device 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), and global navigation satellites. System (global navigation satellite system, GNSS), frequency modulation (FM), near field communication (NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and so on.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • The object generates an optical image through the lens, and the image is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example image recognition, face recognition, voice recognition, and text understanding.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 answers a call or voice message, it can receive the voice by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • When making a call or sending a voice message, the user can speak close to the microphone 170C to input a sound signal into it.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • For example, a capacitive pressure sensor may include at least two parallel plates made of conductive material.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
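  • The short-message example amounts to a threshold dispatch on touch intensity, for instance as below (the threshold value and names are hypothetical):

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized touch intensity

def on_message_icon_touch(intensity: float) -> str:
    """A light press on the short-message icon views messages; a press at
    or above the first pressure threshold creates a new message."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"
```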
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • In some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • The electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of a flip holster or flip cover, and features such as automatic unlocking of the flip cover can be set accordingly.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and apply to applications such as horizontal and vertical screen switching, pedometers, etc.
  • The distance sensor 180F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature.
  • In some other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
  • The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form what is also called a "touch screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • For touch operations acting on different areas of the display screen 194, the motor 191 can likewise produce different vibration feedback effects.
  • Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • For example, the camera 193 collects a color image, the ISP processes the data fed back by the camera 193, and the NPU in the processor 110 can perform image segmentation on the ISP-processed image to determine the areas where different objects or different object types are located on the image.
  • the processor 110 can retain the color of the area where one or more specific objects are located and perform gray-scale processing on the other areas, so that the color of the entire area where the specific objects are located is preserved.
  • Gray-scale processing refers to converting the pixel values of pixels into gray-scale values, thereby converting a color image into a gray-scale image (also called a black-and-white image).
  • The pixel value is used to represent the color of a pixel; for example, the pixel value can be an R (red), G (green), B (blue) value.
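  • As an illustration of the color-retention processing described above, the following is a minimal sketch, assuming a NumPy RGB image and a boolean segmentation mask such as the one produced by the NPU's image segmentation; the luminance weights are the common Rec. 601 values, which the document does not specify:

```python
import numpy as np

def retain_color(image_rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep color inside `mask`; gray-scale everything else.

    image_rgb: H x W x 3 uint8 array.
    mask: H x W boolean array, True where the segmented object lies.
    """
    # Convert pixel values to gray-scale values (luminance formula).
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2]).astype(np.uint8)
    result = np.stack([gray] * 3, axis=-1)  # gray image with 3 channels
    result[mask] = image_rgb[mask]          # restore color inside the mask
    return result
```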
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 by way of example.
  • FIG. 1b is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • notifications can also appear in the status bar at the top of the system in the form of a chart or scroll-bar text (for example, a notification of an application running in the background), or appear on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, a prompt sound is played, the electronic device vibrates, and the indicator light flashes.
  • The Android runtime includes a core library and a virtual machine, and is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • The following takes a mobile phone as an example of the electronic device to describe in detail the image capturing method provided in the embodiments of the present application.
  • FIG. 1c shows a graphical user interface (GUI) of the mobile phone, and the GUI is the desktop 101 of the mobile phone.
  • When the mobile phone detects that the user has clicked the icon 102 of the camera application (APP) on the desktop 101, the mobile phone can start the camera application and display another GUI as shown in (b) in Figure 1c; this GUI can be called the shooting interface 103.
  • the shooting interface 103 may include a viewing frame 104. In the preview state, the preview image can be displayed in the viewing frame 104 in real time.
  • the image 1 may be displayed in the view frame 104.
  • the shooting interface may also include a control 105 for indicating the shooting mode, a control 106 for indicating the video mode, and a shooting control 107.
  • In the camera mode, when the mobile phone detects that the user clicks the shooting control 107, the mobile phone performs a photographing operation; in the video mode, when the mobile phone detects that the user clicks the shooting control 107, the mobile phone performs a video shooting operation.
  • After the mobile phone starts the camera, the mobile phone can collect the image of the current scene through the camera and display the collected image in the viewfinder frame. After acquiring the image of the current scene, the mobile phone can determine the target focus subject in the current scene, where the target focus subject may be a subject in the current scene that needs to obtain a clear image.
  • the mobile phone can determine the target focus subject in the auto focus mode. Generally, after the mobile phone starts the camera, the mobile phone can automatically enter the auto focus mode. In the auto focus mode, the mobile phone can automatically select a part of the area in the image of the current scene as the focus area, thereby determining that the target focus subject in the image of the current scene is the subject located in the focus area.
  • the focus area selected by the mobile phone can be preset, such as a square area or a circular area in the center of the image; the side length or perimeter of the focus area selected by the mobile phone can also be preset. For example, when the mobile phone selects a square area as the focus area, the side length of the square area may be one-fifth of the side length of the view frame.
  • the mobile phone uses the center of the image of the current scene as a reference point, selects a square area in the center of the image as the focus area, and determines the target focus subject as the subject in the focus area.
  • the focus area selected by the mobile phone in the auto focus mode may be hidden, that is, the focus area selected by the mobile phone is not displayed on the shooting interface.
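  • A minimal sketch of the center focus-area selection described above; the one-fifth side-length rule follows the document, while treating the ratio as a parameter and using the shorter frame side are assumptions:

```python
def center_focus_area(frame_w: int, frame_h: int, ratio: float = 1 / 5):
    """Select a square focus area centered in the view frame.

    Returns (left, top, side); `side` is `ratio` times the shorter
    frame side, matching the one-fifth rule described above.
    """
    side = int(min(frame_w, frame_h) * ratio)
    left = (frame_w - side) // 2
    top = (frame_h - side) // 2
    return left, top, side
```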
  • the mobile phone can determine the target focus subject in the manual focus mode. Specifically, after the mobile phone starts the camera, when the mobile phone detects that the user clicks on any position of the image in the viewfinder, the mobile phone can enter the manual focus mode. In the manual focus mode, the phone can select the position the user clicks as the focus point and select a square or circular area centered on the focus point as the focus area, so as to determine that the target focus subject in the image of the current scene is the subject in the focus area.
  • For example, the user clicks on the flower in the image of the current scene in the viewfinder; as shown in Figure 3(b), after the mobile phone detects the user's click, it selects a square area centered on the clicked position as the focus area and determines the flower located in the square area as the target focus subject.
  • the mobile phone can determine the target focus subject in an artificial intelligence (AI) focus mode.
  • In the AI focus mode, the mobile phone can detect objects in the image of the current scene, and when a specific object is detected, determine that object as the target focus subject. For example, the mobile phone may detect people, animals, or buildings in static scenes and determine the detected object as the target focus subject; for another example, it may detect moving people or animals in dynamic scenes and determine the detected person or animal as the target focus subject; for another example, it may identify the foreground and the background in the image, detect the object in the foreground of the image, and determine that object as the target focus subject.
  • Exemplarily, as shown in Figure 4(a), the mobile phone can detect the person in the image of the current scene, and when the person is detected, the person or the person's face is selected as the target focus subject; for another example, as shown in Figure 4(b), the mobile phone can detect the animal in the image of the current scene, and when the animal is detected, the animal is selected as the target focus subject; for another example, as shown in Figure 4(c), the mobile phone can also detect the building in the image of the current scene, and when the building is detected, the building is selected as the target focus subject.
  • the mobile phone can have a variety of ways to enter the AI focus mode.
  • In one embodiment, when the mobile phone detects that the user clicks the AI control on the shooting interface, the mobile phone enters or exits the AI focus mode.
  • Exemplarily, as shown in Figure 4(d), when the mobile phone has not entered the AI focus mode, if the mobile phone detects that the user clicks the AI control 401 on the shooting interface, the mobile phone enters the AI focus mode and changes the display color of the AI control 401 (for example, changes the AI control 401 to color); after the mobile phone has entered the AI focus mode, if the mobile phone detects that the user clicks the AI control 401 on the shooting interface, the mobile phone exits the AI focus mode and restores the original display color of the AI control 401 (for example, the AI control 401 is restored to white).
  • In another embodiment, when the mobile phone detects that the user clicks the shooting option control on the shooting interface, the mobile phone can enter the mode selection interface, and when the mobile phone detects that the user clicks the AI mode control in the mode selection interface, the mobile phone enters the AI focus mode. Exemplarily, as shown in Figure 4(e), when the mobile phone detects that the user clicks the shooting option control 402, the mobile phone can enter the mode selection interface; as shown in Figure 4(f), when the mobile phone detects that the user clicks the AI mode control 403 in the mode selection interface, the mobile phone enters the AI focus mode.
  • In another embodiment, when the mobile phone detects the user's preset gesture operation on the shooting interface, it can enter or exit the AI focus mode; for example, when the mobile phone detects that the user draws a circle or drags a certain track on the shooting interface, the mobile phone can enter or exit the AI focus mode.
  • After determining the target focus subject in the current scene, the mobile phone can focus on the target focus subject through the first focusing method to obtain the first image.
  • the mobile phone may focus on the image of the current scene using a phase focusing method, that is, the first focusing method may be a phase focusing method.
  • the mobile phone may focus the image of the current scene through a laser focusing method, that is, the first focusing method may be a laser focusing method.
  • After the mobile phone uses the first focusing method to focus the target focused subject in the current scene and obtains the first image, the mobile phone can determine the sharpness of the target focused subject in the first image.
  • the mobile phone can determine the sharpness of the focused subject in the first image through a preset neural network model.
  • the mobile phone may input the first image marked with the target focus subject into the neural network model, and the neural network model outputs the clarity of the target focus subject in the first image.
  • For example, the output value of the neural network model may be 30%, 50%, 100%, etc., where these values are the sharpness of the target focus subject in the first image.
  • the area where the target focus subject is located in the first image can be marked by a marking frame, so that the neural network model knows which area of the first image the sharpness should be output for.
  • In one embodiment, the marking frame may be a frame with a preset shape, such as a square frame or a round frame; the size of the marking frame matches the target focus subject, and the target focus subject can be enclosed in the marking frame. In another embodiment, the marking frame may also be a contour frame that matches the shape of the target focus subject; that is, the marking frame is formed based on the outline of the target focus subject and can just enclose the target focus subject inside the frame.
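  • The following sketch illustrates one possible way to feed the model the first image together with a marking frame; the 4-channel image-plus-mask layout and the `sharpness_model` callable are assumptions, since the document only says the marked image is input into the neural network model:

```python
import numpy as np

def mark_focus_subject(image: np.ndarray, box: tuple) -> np.ndarray:
    """Attach a binary mask channel marking the target focus subject.

    `box` = (left, top, width, height) of the marking frame; the mask
    tells the sharpness model which region to score.
    """
    left, top, w, h = box
    mask = np.zeros(image.shape[:2], dtype=np.float32)
    mask[top:top + h, left:left + w] = 1.0
    return np.concatenate([image.astype(np.float32) / 255.0,
                           mask[..., None]], axis=-1)

# Hypothetical usage with a trained model:
# sharpness = sharpness_model(mark_focus_subject(first_image, box))  # e.g. 0.6
```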
  • the neural network model may be obtained after training the machine learning model by using a large amount of training data.
  • the training data refers to image data marked with the focus subject and the sharpness of the focused subject. By acquiring a large number of original images, marking the focused subject in each original image, and marking the sharpness of the focused subject in these original images, the training data used to train the model can be obtained.
  • the training data may be obtained by shooting a large number of scenes in advance with an imaging device such as a mobile phone or camera. Specifically, in the same scene, the mobile phone can move the lens back and forth to capture images at different lens positions; after obtaining images at different lens positions, the sharpness of each image can be marked based on the lens position corresponding to the image. For example, suppose that the full range that the lens of a mobile phone can move is 500, and the lens can move back and forth between position 100 and position 600, where position 100 and position 600 are the two end positions of the lens travel.
  • the lens position difference refers to the difference between the in-focus lens position and the lens position when the image was taken.
  • since the sharpness of the focused subject in an image has a corresponding relationship with the lens position when the image was taken, after the first image marked with the target focused subject is input into the trained neural network model, the obtained sharpness of the target focus subject also has a corresponding relationship with the lens position; that is, the in-focus lens position can be determined based on the sharpness of the target focus subject.
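  • Consistent with this correspondence, and with the worked examples later in this document (a sharpness of 60% corresponds to a lens-position difference of 200 on a full range of 500), the training label could be derived from the lens position as in the following sketch; the linear relation is an inference from those examples, not an explicit formula in the document:

```python
def sharpness_label(lens_pos: float, in_focus_pos: float,
                    full_range: float = 500.0) -> float:
    """Label a training image with the sharpness of its focus subject.

    An image taken 200 positions away from the in-focus position on a
    full range of 500 is labeled 1 - 200/500 = 60%.
    """
    diff = abs(lens_pos - in_focus_pos)
    return max(0.0, 1.0 - diff / full_range)
```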
  • Since the neural network model is trained on image data from a large number of scenes, determining the sharpness of the focus subject based on the neural network model is more adaptable to different scenes; that is, the sharpness auxiliary information provided by the neural network model does not have the limitations brought by hardware, can achieve in-focus in most scenes, and has a good focusing effect.
  • the above-mentioned machine learning model may be a convolutional neural network (CNN) model, a superresolution convolutional neural network (SRCNN) model, or a residual network (ResNet) model, among other models.
  • a very deep super resolution (VDSR) method based on a single image may be used to train the CNN model to obtain a trained neural network model.
  • The VDSR method refers to generating a high-definition image from a given low-resolution image. The specific implementation process is: through a network with deeper levels (i.e., a deep network), use a larger receptive field to fully consider contextual information, and use residual learning and an extremely high learning rate to improve the training effect.
  • the receptive field is the size of the area on the input image that a pixel on the feature map output by each layer of the CNN model is mapped from; in other words, a point on the feature map corresponds to an area on the input image.
  • For example, the receptive field after the convolution operations of two layers of 3×3 convolution kernels is 5×5, and the receptive field after three layers of 3×3 convolution kernels is 7×7; that is, for an image passed through two layers of 3×3 convolution operations, the receptive field is 5×5. The larger the receptive field, the larger the area on the input image corresponding to each feature point.
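  • A small sketch verifying the receptive-field arithmetic above for stacks of stride-1 convolutions:

```python
def receptive_field(num_layers: int, kernel: int = 3) -> int:
    """Receptive field of `num_layers` stacked stride-1 convolutions.

    Each stride-1 layer adds (kernel - 1) pixels: two 3x3 layers -> 5x5,
    three 3x3 layers -> 7x7, as stated above.
    """
    return 1 + num_layers * (kernel - 1)

assert receptive_field(2) == 5 and receptive_field(3) == 7
assert receptive_field(20) == 41  # 20 layers reach the 41x41 field mentioned below
```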
  • the process of using the VDSR method to train the CNN model may include:
  • First, a larger receptive field (for example, a receptive field larger than 41×41) can be used to ensure that more features can be learned; and by considering contextual information such as the data in the field, the labeling of the target, and the spatial location of the target, the detection accuracy of the CNN model can be improved.
  • Second, residual learning can be used to learn the difference between the actual observation value and the estimated value; a high learning rate (for example, greater than 0.1) can be used to avoid an excessively long training time; and a gradient clipping method can be used to suppress the gradient explosion that a high learning rate may cause.
  • The gradient clipping method may specifically be: clipping according to the L2 norm (where L2 refers to the Euclidean distance) of a vector composed of the gradients of multiple parameters.
  • L2norm that is, L2 norm, where L2 refers to Euclidean distance
  • That is, a vector is formed by the rate of change (gradient) of each parameter, and the L2 norm of this vector is calculated as the square root of the sum of squares of the vector's elements. Through clipping, the L2 norm of the gradient vector can be made smaller than a preset clipnorm. It is worth noting that if the gradient clipping method is not adopted, an optimization algorithm with an excessively large gradient will overshoot the optimal point.
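  • A minimal sketch of the gradient clipping described above, assuming PyTorch tensors; in practice the built-in `torch.nn.utils.clip_grad_norm_` performs the same global-L2-norm clipping:

```python
import torch

def clip_gradients(parameters, clipnorm: float) -> None:
    """Clip gradients by the global L2 norm.

    The L2 norm is the square root of the sum of squares of every
    gradient element; if it exceeds `clipnorm`, all gradients are
    rescaled so that the norm equals `clipnorm`.
    """
    grads = [p.grad for p in parameters if p.grad is not None]
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    if total_norm > clipnorm:
        scale = clipnorm / (total_norm + 1e-12)
        for g in grads:
            g.mul_(scale)
```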
  • FIG. 5b is specifically a network structure diagram for implementing the VDSR method.
  • the blurred image is subjected to deep convolution and activation through the vector convolution operators and activation functions (Conv.1 and ReLU.1, Conv.2 and ReLU.2, ..., Conv.D-1 and ReLU.D-1), finally obtaining a high-precision image.
  • Each convolution layer Conv is a 3×3 matrix operator.
  • the activation function can effectively avoid gradient explosion.
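  • A minimal sketch of a VDSR-style Conv/ReLU stack with residual learning, matching the structure of FIG. 5b; the depth D=20, channel width 64, and single-channel input are assumptions (VDSR's usual configuration), not values given in the document:

```python
import torch
import torch.nn as nn

class VDSRStyle(nn.Module):
    """D-1 pairs of 3x3 convolution + ReLU, a final 3x3 convolution,
    and a residual connection: the network predicts the difference
    between the blurred input and the high-precision output."""

    def __init__(self, depth: int = 20, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual learning: output = input + residual
```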
  • In some cases, the mobile phone can directly determine the sharpness of the target focus subject in the first image by default; in some other cases described below, the mobile phone can determine the sharpness of the switched target focus subject in the first image after switching the target focus subject.
  • In one embodiment, the first image may be detected. When the mobile phone detects that the first image is a multi-depth image and the target focus subject is located in the background area of the multi-depth image, the target focus subject is switched to the subject in the foreground area of the multi-depth image, and the switched target focus subject is obtained.
  • Multi-depth image refers to an image with multiple depths of field.
  • the mobile phone can detect the sharpness or contrast of different regions in the first image. If the mobile phone detects that there are multiple areas with different sharpness or contrast in the first image, the mobile phone can determine that the first image is a multi-depth image.
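  • A heuristic sketch of this multi-depth check; the document only says the phone compares the sharpness or contrast of different regions, so the tile grid, the use of standard deviation as a contrast proxy, and the spread threshold are all assumptions:

```python
import numpy as np

def looks_multi_depth(gray: np.ndarray, tiles: int = 4,
                      spread: float = 2.0) -> bool:
    """Treat the image as multi-depth when different tiles show very
    different local contrast."""
    h, w = gray.shape
    th, tw = h // tiles, w // tiles
    contrasts = []
    for i in range(tiles):
        for j in range(tiles):
            tile = gray[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            contrasts.append(tile.std())  # standard deviation as contrast proxy
    contrasts = np.array(contrasts)
    return contrasts.max() > spread * max(contrasts.min(), 1e-6)
```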
  • Switching the target focus subject to the subject in the foreground area, and then focusing on the switched target focus subject through the neural network model, can correct the target focus subject and obtain a clear image of the subject in the foreground area, with a good focusing effect.
  • Exemplarily, Figure 5c(a) shows that after the mobile phone determines that the subject at the center of the image of the current scene is the target focus subject, it focuses through the first focusing method to obtain the first image and displays the first image on the shooting interface; Figure 5c(b) shows that the mobile phone detects that the first image is a multi-depth image and determines the foreground area in the first image; Figure 5c(c) shows that after determining the foreground area of the first image, the mobile phone recognizes the subject in the foreground area, identifies the flowers located in the foreground area, and thus switches the target focus subject to the flowers in the foreground area.
  • the mobile phone may automatically detect the first image every time it obtains the first image, or it may detect the first image when the mobile phone is in a multi-depth shooting mode.
  • a multi-depth-of-field mode control may be displayed on the shooting interface of the mobile phone.
  • When the mobile phone detects that the user clicks the multi-depth-of-field mode control, the mobile phone enters the multi-depth shooting mode.
  • Exemplarily, as shown in Figure 6(a), the multi-depth mode control may be a control 601; as shown in Figure 6(b), the multi-depth mode control may be a control 602. In another embodiment, the multi-depth mode control can be displayed on the mode selection interface of the mobile phone. When the mobile phone detects that the user clicks the shooting option control on the shooting interface, the mobile phone can enter the mode selection interface, and when the mobile phone detects that the user clicks the multi-depth mode control on the mode selection interface, the mobile phone enters the multi-depth shooting mode; for example, as shown in Figure 6(c), the multi-depth mode control may be a control 603.
  • the mobile phone can also detect the image of the current scene. When it detects that the image of the current scene is a multi-depth image, the mobile phone automatically enters the multi-depth shooting mode and displays the multi-depth shooting mode control on the shooting interface to remind the user that the mobile phone has entered the multi-depth shooting mode.
  • Exemplarily, as shown in Figure 6(d), the multi-depth mode control displayed on the shooting interface can be the control 604, or it can be displayed as shown in Figure 6(e).
  • In another embodiment, the first image may be detected, and when the mobile phone detects that the first image contains the target object, the target focus subject is switched to the target object in the first image to obtain the switched target focus subject.
  • the target object may be a person; the target object may also be an animal, such as a cat, a dog, or a rabbit; the target object may also be scenery, such as flowers, grass, or trees; the target object may also be a specific object, such as a car, a water cup, or a mouse; the target object may also be a building, such as a tall building, an iron tower, or a temple.
  • Exemplarily, Figure 7(a) shows that the mobile phone uses the first focusing method to focus to obtain a first image, and the first image is displayed on the shooting interface; Figure 7(b) shows that the mobile phone detects that the first image contains the target object (a temple) and determines the target area where the temple is located in the first image; Figure 7(c) shows that after determining the target area where the temple is located in the first image, the mobile phone extracts the target area to mark the target object in the first image.
  • After the mobile phone determines the sharpness of the target focused subject in the first image, the mobile phone can determine whether the sharpness of the target focused subject is less than a preset threshold.
  • When the mobile phone obtains the switched target focus subject, the mobile phone can determine whether the sharpness of the switched target focus subject in the first image is less than the preset threshold.
  • If the sharpness is less than the preset threshold, the mobile phone can use the second focusing method to focus on the target focus subject or the switched target focus subject in the current scene, to obtain a second image with a better focusing effect.
  • the sharpness of the target focused subject may be a degree value, score, or percentage obtained based on a pre-trained neural network model, which is used to indicate how clear the target focused subject is; the higher the score or degree value, the clearer the target focus subject in the image.
  • a value range for expressing clarity can be 0% to 100%, or 0 to 100, or 0 to 10, etc.
  • the value of sharpness also has an association relationship with the position of the lens from which the image is obtained.
  • when the lens is at the in-focus position, the sharpness corresponding to the target focus subject can be regarded as the highest.
  • Optionally, the sharpness can be measured or expressed by brightness: for the same subject, the greater the sharpness of the image, the greater the brightness of the image. Optionally, the sharpness can also be measured or expressed by chromaticity: for the same subject, the greater the sharpness of the image, the greater the chromaticity of the image.
  • the brightness/chromaticity of the image may be specific to the overall brightness/chromaticity level of the entire area of the subject in the image, or specific to the overall average value of the brightness/chromaticity of each pixel in this area.
  • the sharpness can also be measured or expressed by contrast. For the same subject, the greater the sharpness of the image, the greater the contrast of the image.
  • Contrast refers to the difference between the brightness levels of the brightest white and the darkest black in the light and dark areas of an image; in simple terms, it is the brightness ratio between the brightest pixel and the darkest pixel in the area where the target focus subject is located. Generally speaking, the greater the contrast, the clearer and more striking the image, and the more vivid the colors; the smaller the contrast, the more blurred the image and the grayer the colors.
  • the contrast of the image may be specific to the overall contrast level of the entire area of the photographed subject in the image, or specific to the overall average value of the contrast of each pixel in this area.
  • the blur degree value can also be used to determine whether the target focus subject in the image is out of focus: for the same subject, the larger the blur degree value, the less clear the subject; the smaller the blur degree value, the clearer the subject.
  • the blur degree value can be a degree value, score or percentage based on a pre-trained neural network model. When the blur degree value of the target focused subject in the image is greater than the preset threshold, it is determined that the target focused subject is out of focus, and the mobile phone uses the second focusing method to focus the target focused subject in the current scene or the switched target focused subject.
  • the value range of the blur degree value can also be 0 to 100%, and the blur degree value has a corresponding relationship with the lens position when the first image is obtained: the closer that lens position is to the in-focus position, the smaller the blur degree value; the farther that lens position is from the in-focus position, the larger the blur degree value.
  • the blur degree value can also be characterized by contrast: during the focusing process, when the contrast of the target focus subject is the largest, the blur degree value is 0, and when the contrast of the target focus subject is the smallest, the blur degree value is 100%.
  • In one embodiment, the sharpness of the image can also be determined from the blur degree value: the sum of the blur degree and the sharpness of the image is a constant. For example, if the blur degree of the first image is 20% and the constant is 1, the sharpness of the first image is 80%.
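  • The following sketch expresses the two measures just described: contrast as the brightness ratio within the focus region, and sharpness as a constant minus the blur degree (with constant 1, a blur of 20% gives a sharpness of 80%):

```python
import numpy as np

def region_contrast(gray: np.ndarray, box: tuple) -> float:
    """Brightness ratio between the brightest and darkest pixel in the
    area where the target focus subject lies."""
    left, top, w, h = box
    region = gray[top:top + h, left:left + w].astype(np.float64)
    return float(region.max() / max(region.min(), 1.0))

def sharpness_from_blur(blur: float, constant: float = 1.0) -> float:
    """Blur degree and sharpness sum to a constant: blur 0.2 -> sharpness 0.8."""
    return constant - blur
```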
  • The preset threshold may be a threshold preset by the terminal, and its specific value may be determined, for example, according to the accuracy or deviation of the neural network model, or according to an empirical value obtained through a large number of shooting operations. Optionally, the preset threshold may also be a threshold obtained by the terminal from the cloud, for example, a threshold set by the system when the terminal system is upgraded. Optionally, the preset threshold may also be a threshold set by the user through the terminal, for example, through the system interactive interface.
  • the preset threshold is a value used to measure whether the target focused subject in the image is out of focus.
  • When the sharpness of the target focused subject in the first image is less than the preset threshold, the first image can be considered out of focus; therefore, the accuracy of the neural network model in detecting sharpness and the empirical out-of-focus value can be combined to determine the specific value of the preset threshold.
  • The out-of-focus empirical value refers to an out-of-focus value determined based on experience: when the sharpness of the focused subject in an image is lower than this value, the image is considered out of focus. That is to say, in practical applications, when the deviation of the neural network model is small, the value of the preset threshold can be closer to the out-of-focus empirical value; the difference between the preset threshold and the out-of-focus empirical value can be determined according to the deviation of the neural network model, thereby determining the value of the preset threshold.
  • In one embodiment, the preset threshold may specifically be 80% or 85%. Generally speaking, the preset threshold is a value other than 100%; that is, the value of the preset threshold is not the maximum value of sharpness.
  • If the mobile phone determines that the sharpness of the target focused subject in the first image is less than the preset threshold, it can be considered that the target focused subject in the first image obtained after focusing by the first focusing method is relatively blurred; that is, the sharpness of the target focus subject in the first image does not meet the requirement.
  • the mobile phone can focus the image of the current scene again through the second focusing method to obtain a second image with higher sharpness of the target focus subject.
  • The focusing method based on the neural network model may specifically obtain the sharpness of the target focus subject in the image through the neural network model, then determine the position to which the lens should be moved according to the sharpness of the image and the current lens position, and drive the lens to the determined position through the focus motor, thereby achieving focus. Since the neural network model is trained on images from a large number of various scenes, it is highly adaptable to various scenes; therefore, the neural network model can accurately obtain the sharpness of the target focus subject in the current image, so that the mobile phone can control the lens position according to the sharpness of the image to achieve focus and obtain a clear image.
  • For example, assume the lens can move between position 100 and position 600. The above-mentioned neural network model determines that the sharpness of the target focus subject in the first image is 60%, and the lens position when the first image was taken is 350. Then, according to the sharpness of 60% and the full range of 500, the movement value is 500 × (1 − 60%) = 200, so the position to which the lens should be moved is 150 or 550. After determining the position to be moved, the lens can be moved to the determined position by pushing the focus motor.
  • In one embodiment, when it is determined that there are two positions to which the lens may be moved, the lens may be randomly moved to one of the positions, and the image collected by the lens at that position may be obtained. If the sharpness of the target focus subject in the image captured after the lens is moved is greater than that in the first image, the position to which the lens moved is determined to be the in-focus position, and focusing is completed; if the sharpness of the image captured after the move is less than the sharpness of the target focus subject in the first image, the lens is moved to the other candidate position, and that position is determined to be the in-focus position, completing the focusing.
  • For example, when it is determined that the lens should be moved to position 150 or 550, the lens can first be moved to position 150, and the image corresponding to that position is acquired. If the sharpness of the image at lens position 150 is greater than that at lens position 350, position 150 is determined as the in-focus position; if the sharpness at position 150 is less than that at position 350, the lens is moved to position 550, and position 550 is determined as the in-focus position.
  • In one embodiment, when it is determined that there are two positions to which the lens may be moved, it can be determined whether the first image is a multi-depth image. If the first image is a multi-depth image, the lens can be moved to the candidate position close to the first end position, where the first end position is the end position at which the lens can achieve in-focus imaging in a macro scene; if the first image is not a multi-depth image, the lens can be moved to the candidate position close to the second end position, where the second end position is the end position at which the lens can achieve in-focus imaging in a scene at infinity.
  • For example, suppose that at position 600 the lens can achieve in-focus imaging of a close-up (macro) object, so position 600 is the above-mentioned first end position, and at position 100 the lens can achieve in-focus imaging of an object at infinity, so position 100 is the above-mentioned second end position. In this way, when the first image is a multi-depth image, position 550 is closer to position 600 than position 150, so the lens can be moved to position 550; when the first image is not a multi-depth image, position 150 is closer to position 100 than position 550, so the lens can be moved to position 150.
  • This is because, when the first image is a multi-depth image, the first image includes both foreground objects and background objects; when the lens is moved toward the first end position, it is easier to image the foreground objects clearly and achieve in-focus. When the first image is not a multi-depth image, the first image usually includes distant objects; therefore, when the lens is moved toward the second end position, it is easier to image the distant objects clearly and achieve in-focus.
  • For another example, assume the lens can move between position 100 and position 600. The neural network model determines that the sharpness of the target focus subject in the first image is 60%, and the lens position when the first image was taken is 250. Then, according to the sharpness of 60% and the full range of 500, the position to which the lens should be moved is 50 or 450. Obviously, position 50 exceeds the range within which the lens can move, so the lens cannot be moved to position 50; therefore, the position to be moved can only be 450, and the focus motor can be pushed to move the lens to position 450.
  • For another example, the above-mentioned neural network model determines that the sharpness of the target focus subject in the first image is 60%, and the lens position when the first image was taken is 450. According to the sharpness of 60% and the full range of 500, the distance between the position to which the lens should be moved and the lens position when the first image was taken is 500 × (1 − 60%) = 200; combined with the lens position 450, the position to be moved is calculated as 250 or 650. Since position 650 exceeds the range within which the lens can move, the lens cannot be moved to position 650; therefore, the position to be moved can only be 250.
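  • The following sketch pulls the above rules together: compute the movement value from the model's sharpness output, discard candidate positions outside the lens travel, and, when two candidates remain, prefer the macro end for multi-depth images; mapping the macro end to the larger position value follows the example above and is otherwise an assumption:

```python
def next_lens_positions(sharpness: float, lens_pos: float,
                        full_range: float = 500.0,
                        lens_min: float = 100.0, lens_max: float = 600.0,
                        multi_depth: bool = False):
    """Candidate in-focus positions from the model's sharpness output."""
    movement = full_range * (1.0 - sharpness)
    candidates = [p for p in (lens_pos - movement, lens_pos + movement)
                  if lens_min <= p <= lens_max]
    if len(candidates) == 2:
        # Prefer the candidate closer to the macro end for multi-depth
        # scenes, and the one closer to the infinity end otherwise.
        end = lens_max if multi_depth else lens_min
        candidates.sort(key=lambda p: abs(p - end))
    return candidates  # move to candidates[0] first, verify sharpness, else try the next

# Example from the text: sharpness 60% at lens position 350 -> [150, 550].
assert next_lens_positions(0.6, 350.0) == [150.0, 550.0]
```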
  • In one embodiment, when the sharpness of the target focused subject in the first image is less than the preset threshold, the second image is output as the target image.
  • the target image may be the preview image displayed in the preview area on the shooting interface, that is, in the case where the sharpness of the target focus subject in the first image is less than the preset threshold, the second image is output as the preview image on the shooting interface.
  • Alternatively, the target image may also be an image stored in a storage medium (for example, a non-volatile memory) in response to the user's photographing instruction.
  • In one embodiment, when the sharpness of the target focused subject in the first image is not less than the preset threshold, the first image is output as the target image.
  • the target image may be a preview image on the photographing interface, or may be an image stored in a storage medium in response to a user's photographing instruction.
  • In one embodiment, after the mobile phone switches the target focus subject to the subject located in the foreground area of the multi-depth image, in the process of focusing the switched target focus subject through the second focusing method, in order to let the user know the switched target focus subject, the mobile phone can display a focus frame on the shooting interface, where the focus frame is used to mark the switched target focus subject in the current scene.
  • Exemplarily, as shown in Figure 9A(a), the mobile phone may display, on the shooting interface, a focus frame 901 for marking the switched target focus subject in the current scene.
  • As shown in Figure 9A(b), the focus frame used to mark the switched target focus subject can also be the focus frame 902; as shown in Figure 9A(c), it can also be the focus frame 903; as shown in Figure 9A(d), it can also be the focus frame 904.
  • In one embodiment, the mobile phone may display a focus frame on the shooting interface, where the focus frame is used to mark the switched target focus subject (that is, the target object in the current scene).
  • Exemplarily, as shown in Figure 9B(a), the mobile phone may display, on the shooting interface, a focus frame 905 for marking the switched target focus subject.
  • As shown in Figure 9B(b), the focus frame used to mark the switched target focus subject may also be the focus frame 906; as shown in Figure 9B(c), it may also be the focus frame 907; as shown in Figure 9B(d), it may also be the focus frame 908.
  • In one embodiment, when the mobile phone uses the second focusing method to focus, the mobile phone may display a prompt message 1 on the shooting interface to remind the user that the mobile phone is currently switching the focusing method.
  • Exemplarily, as shown in Figure 10(a), the prompt message 1 displayed on the shooting interface may be message 1001, specifically "The current image is blurred, switching the focus mode"; as shown in Figure 10(b), the prompt message 1 may be message 1002, specifically "The current image is blurred, the AI focus mode has been automatically switched on"; as shown in Figure 10(c), the prompt message 1 may be message 1003, specifically "The current image is blurred, AI focus has been turned on"; as shown in Figure 10(d), the prompt message 1 may be message 1004, specifically "Please hold the phone steady while the focus mode is switched"; as shown in Figure 10(e), the prompt message 1 may be message 1005, specifically "Focusing again, please hold the phone steady"; as shown in Figure 10(f), the prompt message 1 may be message 1006, specifically "Improving image quality, please hold your phone steady".
  • the prompt message 1 displayed on the shooting interface can automatically disappear; for example, after the prompt message 1 has been displayed on the shooting interface for a preset time (for example, 1 second or 2 seconds), the prompt message 1 can automatically disappear.
  • In one embodiment, after the second focusing method is used to focus the target focused subject in the current scene to obtain the second image, when the sharpness of the target focused subject in the second image is still less than the preset threshold, a prompt message 2 is displayed on the shooting interface, where the prompt message 2 is used to prompt the user to adjust the shooting distance. Since the camera in the mobile phone has a minimum focus distance limit, when the mobile phone is too close to the target object, it is often difficult for the mobile phone to achieve in-focus; therefore, when the sharpness of the target focus subject in the second image is less than the preset threshold, it can be considered that the mobile phone still cannot achieve in-focus after focusing twice.
  • the mobile phone may display the prompt message 2 on the shooting interface for prompting the user to adjust the shooting distance.
  • Exemplarily, as shown in Figure 11(a), the prompt message 2 may be information 1101 on the shooting interface, specifically "The current shooting distance is too close, please move your phone away"; as shown in Figure 11(b), the prompt message 2 may be information 1102, specifically "The current shooting distance is too close, please adjust the shooting distance"; as shown in Figure 11(c), the prompt message 2 may be information 1103, specifically "The current shooting distance is less than the minimum focus distance".
  • In one embodiment, when the mobile phone is equipped with multiple cameras and determines that the sharpness of the target focus subject in the second image is less than the preset threshold, the mobile phone may also display a prompt message 3 on the shooting interface, where the prompt message 3 is used to prompt the user to switch cameras.
  • Exemplarily, as shown in Figure 12(a), a prompt message 3 for prompting the user to switch cameras may be displayed on the shooting interface; the prompt message 3 may be information 1201, specifically "The current shooting distance is too close, please switch to the macro camera". As shown in Figure 12(b), a camera switching control 1202 is displayed on the shooting interface, and when the mobile phone detects that the user clicks the button of the camera switching control 1202 that represents the macro camera, the mobile phone can switch to the macro camera for focusing. As shown in Figure 12(c), in response to the user clicking the button of the camera switching control 1202 that represents the macro camera, the mobile phone switches to the macro camera and focuses, and the camera switching control 1203 shows that the currently working camera is the macro camera.
  • In one embodiment, the wide-angle lens in the mobile phone is configured with a wide-angle shooting mode and the macro lens is configured with a macro shooting mode, and the prompt message 3 can also be used to prompt the user to switch the shooting mode; after the user switches to the macro shooting mode, the mobile phone enters the macro shooting mode and switches to the macro lens for shooting.
  • Exemplarily, as shown in Figure 12(d), a prompt message 3 for prompting the user to switch the shooting mode may be displayed on the shooting interface; the prompt message 3 may be information 1204, specifically "The current shooting distance is too close, please switch to the macro shooting mode". As shown in Figure 12(e), a macro shooting mode switching control 1205 is displayed on the shooting interface, and when the mobile phone detects that the user clicks the macro shooting mode switching control 1205, the mobile phone can enter the macro shooting mode and switch the camera to the macro camera for focusing. As shown in Figure 12(f), in response to the user clicking the macro shooting mode switching control 1205, the mobile phone switches the camera to the macro camera and focuses, and displays the macro shooting mode control 1206 on the shooting interface; when the mobile phone detects that the user clicks the close button on the macro shooting mode control 1206, the mobile phone can exit the macro shooting mode.
  • The embodiments of the present application provide an image capturing method, which can be implemented by an electronic device (for example, a terminal device such as a mobile phone or a tablet computer, or an electronic apparatus that can be deployed in a terminal device). As shown in Figure 13, the method may include the steps described below. First, the electronic device focuses the target focus subject in the current scene through the first focusing method to obtain a first image.
  • the first focusing method may be a phase focusing method or a laser focusing method.
  • For example, the electronic device may perform focusing through the first focusing method in the auto focus mode as shown in FIG. 2; the electronic device may also perform focusing through the first focusing method in the manual focus mode as shown in FIG. 3; the electronic device may also perform focusing through the first focusing method in the AI focus mode as shown in FIG. 4.
  • Then, when the sharpness of the target focused subject in the first image is less than the preset threshold, the electronic device focuses the target focused subject in the current scene through the second focusing method to obtain a second image, and the sharpness of the target focused subject in the second image is not less than the preset threshold; wherein, the lens positions corresponding to the first focusing method and the second focusing method are different, and the second focusing method is a focusing method based on a neural network model.
  • the electronic device may determine the sharpness of the target focus subject in the first image through the aforementioned neural network model.
  • the first focusing method may include a phase focusing method or a laser focusing method
  • the second focusing method is a focusing method based on a neural network model, and the sharpness of the target focus subject in the second image captured based on the second focusing method is not less than the preset threshold.
  • When the sharpness of the target focused subject in the first image is less than the preset threshold, the second image is output as the target image.
  • When the sharpness of the target focused subject in the first image is not less than the preset threshold, the first image is output as the target image.
  • the target image may be a preview image displayed in the preview area on the shooting interface; or, the target image may also be an image stored in a storage medium in response to a user's photographing instruction.
  • In one embodiment, focusing on the target focused subject in the current scene through the second focusing method includes: inputting the first image marked with the target focus subject into the neural network model to obtain a first output result of the neural network model, where the first output result is the sharpness of the target focus subject in the first image; and adjusting the lens position according to the sharpness of the target focus subject in the first image to obtain the second image.
  • In other words, the first image marked with the target focused subject can be input into the neural network model, the sharpness of the target focus subject in the first image is obtained based on the neural network model, the position to which the lens should be moved is then determined according to that sharpness, and the lens is moved to the determined position, thereby completing focusing and obtaining the second image.
  • In one embodiment, adjusting the lens position according to the sharpness of the target focus subject in the first image includes: determining the movement value of the lens according to the sharpness of the target focus subject in the first image and the full range, where the full range is the maximum range value within which the lens can move, the movement value is the difference between the full range and a first product, and the first product is the product of the sharpness and the full range; and moving the lens to the target position according to the movement value.
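  • Written out as a formula, with the numbers used in the examples above:

```latex
\text{movement} = \text{full\_range} - \text{sharpness} \times \text{full\_range}
                = \text{full\_range}\,(1 - \text{sharpness});
\qquad 500 \times (1 - 60\%) = 200, \quad 350 \pm 200 \in \{150,\ 550\}.
```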
  • In one embodiment, when the first image is a multi-depth image and the target focus subject is located in the background area of the multi-depth image, the target focus subject is switched to the subject in the foreground area of the multi-depth image to obtain the switched target focus subject. In this case, focusing the target focus subject in the current scene through the second focusing method when the sharpness of the target focus subject in the first image is less than the preset threshold includes: when the sharpness of the switched target focus subject in the first image is less than the preset threshold, focusing the switched target focus subject in the current scene through the second focusing method to obtain the second image.
  • In one embodiment, focusing on the switched target focus subject in the current scene through the second focusing method includes: inputting the first image marked with the switched target focus subject into the neural network model to obtain a second output result of the neural network model, where the second output result is the sharpness of the switched target focus subject in the first image; and adjusting the lens position according to the sharpness of the switched target focus subject in the first image to obtain the second image.
  • the neural network model is obtained by training image training data marked with the focus subject and the sharpness of the focus subject.
  • In one embodiment, a focus frame may be displayed on the shooting interface according to the switched target focus subject, where the focus frame is used to mark the switched target focus subject.
  • a prompt message 1 may be displayed on the shooting interface, and the prompt message 1 is used to prompt the user to switch the focusing method or to turn on the mode of focusing by the second focusing method.
  • the prompt information 1 may be information 1001 to information 1006 as shown in FIG. 10.
  • In one embodiment, after the second focusing method is used to focus the target focused subject in the current scene to obtain the second image, when the sharpness of the target focused subject in the second image is less than the preset threshold, a prompt message 2 is displayed on the shooting interface, where the prompt message 2 is used to prompt the user to adjust the shooting distance.
  • the prompt information 2 may be information 1101 to information 1103 as shown in FIG. 11.
  • In one embodiment, after the second focusing method is used to focus the target focused subject in the current scene to obtain the second image, when the sharpness of the target focused subject in the second image is less than the preset threshold, a prompt message 3 is displayed on the shooting interface, where the prompt message 3 is used to prompt the user to switch the camera or switch the shooting mode.
  • the prompt information 3 may be information 1201 to information 1206 as shown in FIG. 12.
  • In order to implement the above-mentioned functions, the electronic device includes corresponding hardware and/or software modules for executing each function.
  • The present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be considered beyond the scope of the present application.
  • the electronic device can be divided into functional modules according to the foregoing method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 14 shows a schematic diagram of a possible composition of the electronic device 1400 involved in the foregoing embodiment.
  • the electronic device 1400 may include a processing unit 1401 and a display unit 1402.
  • the processing unit 1401 may be used to support the electronic device 1400 in performing the above steps 1301, 1302, and 1303, and/or other processes of the technology described herein.
  • the display unit 1402 may be used to support the electronic device 1400 in performing the steps of displaying the focus frame, prompt information 1, prompt information 2, and prompt information 3, and/or other processes of the technology described herein.
  • the electronic device provided in this embodiment is used to execute the above-mentioned image capturing method, and therefore can achieve the same effect as the above-mentioned implementation method.
  • the electronic device may include a processing module, a storage module, and a communication module.
  • the processing module can be used to control and manage the actions of the electronic device; for example, it can be used to support the electronic device in executing the steps performed by the processing unit 1401 described above.
  • the storage module can be used to support the storage of program codes and data in the electronic device.
  • the communication module can be used to support the communication between electronic devices and other devices.
  • the processing module may be a processor or a controller. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, and so on.
  • the storage module may be a memory.
  • the communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, and other devices that interact with other electronic devices.
  • the device of each embodiment of the present application may also be implemented based on an electronic device including a memory and a processor.
  • the memory stores instructions for executing the method of each embodiment of the present application, and the processor executes these instructions so that the terminal device performs the method of each embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an electronic device according to an embodiment of the application.
  • An electronic device 1500 provided by an embodiment of the present application includes a processor 1501 and a memory 1502.
  • the memory 1502 stores computer instructions.
  • the processor 1501 is used to implement the following steps when executing the computer instructions on the memory:
  • when the sharpness of the target focus subject in the first image is less than a preset threshold, the second focusing method is used to focus the target focus subject in the current scene to obtain a second image; the sharpness of the target focus subject in the second image is not less than the preset threshold. The first focusing method and the second focusing method correspond to different lens positions, and the second focusing method is a focusing method based on a neural network model.
  • the processor 1501 is also used to implement the following step when executing the computer instructions on the memory: when the sharpness of the target focus subject in the first image is less than the preset threshold, output the second image as the target image.
  • the processor 1501 is also used to implement the following step when executing the computer instructions on the memory: when the sharpness of the target focus subject in the first image is not less than the preset threshold, output the first image as the target image.
  • the processor 1501 is also used to implement the following steps when executing the computer instructions on the memory: input the first image marked with the target focus subject into the neural network model to obtain the first output result of the neural network model, where the first output result is the sharpness of the target focus subject in the first image; the lens position is adjusted according to the sharpness of the target focus subject in the first image to obtain a second image.
  • the processor 1501 is also used to implement the following steps when executing the computer instructions on the memory: determine the movement value of the lens according to the sharpness of the target focus subject in the first image and the full range, where the full range is the maximum range over which the lens can be moved, the movement value is the difference between the full range and the first product, and the first product is the product of the sharpness and the full range; the lens is then moved to the target position according to the movement value (a worked sketch follows this list).
  • the processor 1501 is also used to implement the following steps when executing the computer instructions on the memory: when the first image is a multi-depth image and the target focus subject is located in the background area of the multi-depth image, switch the target focus subject to a subject in the foreground area of the multi-depth image to obtain the switched target focus subject; when the sharpness of the switched target focus subject in the first image is less than the preset threshold, focus on the switched target focus subject in the current scene by the second focusing method to obtain a second image.
  • the processor 1501 is also used to implement the following steps when executing the computer instructions on the memory: input the first image marked with the switched target focus subject into the neural network model to obtain the second output result of the neural network model, where the second output result is the sharpness of the switched target focus subject in the first image; the lens position is adjusted according to the sharpness of the switched target focus subject in the first image to obtain the second image.
  • the neural network model is obtained by training on the image training data labeled with the focus subject and the sharpness of the focus subject.
  • the processor 1501 is also used to implement the following steps when executing the computer instructions on the memory: display a focus frame on the shooting interface according to the switched target focus subject, where the focus frame is used to mark the switched target focus subject.
  • the processor 1501 is further configured to implement the following steps when executing computer instructions on the memory: a first signal is sent to the display module of the terminal device, so that the terminal device displays a focus frame on the shooting interface; the focus frame is used to mark the switched target focus subject.
  • the processor 1501 is further configured to implement the following steps when executing computer instructions on the memory: prompt information 2 is displayed on the shooting interface, where prompt information 2 is used to prompt the user to switch the focusing method or to turn on the mode of focusing by the second focusing method.
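A worked sketch of the lens-movement rule stated in the list above (see the bullet on the movement value): the movement value is the full range minus the product of the sharpness and the full range, i.e. movement = full_range × (1 − sharpness), and whichever image meets the threshold is output as the target image. The camera interface (move_lens_to, capture) is hypothetical, and estimate_subject_sharpness is the sketch shown earlier.

```python
def movement_value(sharpness: float, full_range: float) -> float:
    """Movement value = full range - (sharpness x full range)."""
    first_product = sharpness * full_range
    return full_range - first_product  # equals full_range * (1 - sharpness)

def shoot_target_image(camera, model, first_image, subject_mask,
                       full_range: float, threshold: float):
    """Output the first image if its subject is sharp enough; otherwise
    move the lens by the movement value and output the second image."""
    sharpness = estimate_subject_sharpness(model, first_image, subject_mask)
    if sharpness >= threshold:
        return first_image                  # first image is the target image
    camera.move_lens_to(movement_value(sharpness, full_range))
    return camera.capture()                 # second image is the target image
```

Under this rule a completely blurred subject (sharpness 0) drives the lens across its full range, while a nearly in-focus subject (sharpness close to 1) produces only a small correction.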
  • the processor 1501 mentioned in the embodiments of the present application may include one or more processing units.
  • the processor 1501 may include an application processor, a modem processor, a graphics processor, an image signal processor, and a control unit.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the processor 1501 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous transmitter/receiver (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 1501 may include multiple sets of I2C buses.
  • the processor 1501 may be respectively coupled to the touch sensor, charger, flash, camera, etc. through different I2C bus interfaces.
  • the processor 1501 may be coupled to the touch sensor through an I2C interface, so that the processor 1501 communicates with the touch sensor through the I2C bus interface to realize the touch function of the terminal device (a sketch of such a coupling follows).
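As an illustration of the I2C coupling described above, the sketch below reads one register from a touch controller over an I2C bus using the Python smbus2 package. The bus number, device address, and register offset are placeholders chosen for the example, not values taken from the patent.

```python
from smbus2 import SMBus

TOUCH_ADDR = 0x38   # placeholder 7-bit I2C address of a touch controller
STATUS_REG = 0x02   # placeholder register holding the touch-point count

# Bus 1 is typical on small Linux boards; the processor drives the clock
# on SCL and exchanges data with the sensor over SDA, as described above.
with SMBus(1) as bus:
    touches = bus.read_byte_data(TOUCH_ADDR, STATUS_REG)
    print(f"active touch points: {touches}")
```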
  • the MIPI interface can be used to connect the processor 1501 with peripheral devices such as display screens and cameras of terminal devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 1501 and the camera communicate through a CSI interface to implement the shooting function of the terminal device.
  • the processor 1501 and the display screen communicate through the DSI interface to realize the display function of the terminal device.
  • the interface connection relationship between the modules illustrated in this embodiment is merely a schematic description, and does not constitute a structural limitation on the terminal device.
  • the terminal device may also adopt an interface connection mode different from those in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the memory 1502 may be a volatile memory or a non-volatile memory (non-volatile memory), or may include both volatile and non-volatile memory.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), serial link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the memory described in this embodiment is intended to include, but is not limited to, these and any other suitable types of memory.
  • FIG. 16 is a schematic structural diagram of a wireless communication device according to an embodiment of this application.
  • An embodiment of the present application further provides a wireless communication device 1600.
  • the wireless communication device 1600 includes a processor 1601 and an interface circuit 1602, where the processor 1601 is coupled to a memory 1603 through the interface circuit 1602, and the processor 1601 is configured to execute the program code in the memory 1603, so that the wireless communication device performs the above-mentioned related method steps to implement the image shooting method in the above embodiments.
  • This embodiment also provides a computer storage medium in which computer instructions are stored; when the computer instructions run on an electronic device, the electronic device executes the above-mentioned related method steps to implement the image shooting method in the above embodiments.
  • This embodiment also provides a computer program product which, when run on an electronic device, causes the electronic device to execute the above-mentioned related steps, so as to implement the image shooting method in the above embodiments.
  • the embodiments of the present application also provide a device.
  • the device may specifically be a chip, component, or module.
  • the device may include a processor and a memory connected to each other.
  • the memory is used to store computer execution instructions.
  • the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the image capturing method in the foregoing method embodiments.
  • the electronic equipment, computer storage medium, computer program product, or chip provided in this embodiment are all used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, refer to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • for the specific working process of the above-described system, device, and unit, refer to the corresponding process in the foregoing method embodiments, which will not be repeated here.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, and other media that can store program code.

Abstract

An image photographing method and a related apparatus are disclosed. When an image photographed by a conventional focusing method is blurred, the method can switch to focusing with a focusing method that is based on a neural network model and adapts better to the scene, so as to photograph a clear image. The solution specifically comprises: determining a target focus subject in the current scene; focusing on the target focus subject in the current scene by a first focusing method to obtain a first image; and, when the sharpness of the target focus subject in the first image is less than a preset threshold, focusing on the target focus subject in the current scene by a second focusing method to obtain a second image, the lens positions corresponding to the first focusing method and the second focusing method being different, the second focusing method being a focusing method based on a neural network model, and the sharpness of the target focus subject in the second image photographed on the basis of the second focusing method being not less than the preset threshold.
PCT/CN2020/138859 2019-12-31 2020-12-24 Image photographing method and related apparatus WO2021136050A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911426173.9 2019-12-31
CN201911426173.9A CN113132620B (zh) 2019-12-31 2019-12-31 Image shooting method and related apparatus

Publications (1)

Publication Number Publication Date
WO2021136050A1 (fr) 2021-07-08

Family

ID=76686476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138859 WO2021136050A1 (fr) 2019-12-31 2020-12-24 Image photographing method and related apparatus

Country Status (2)

Country Link
CN (1) CN113132620B (fr)
WO (1) WO2021136050A1 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286064A (zh) * 2020-09-17 2022-04-05 深圳光峰科技股份有限公司 Real-time focusing method, apparatus, system, and computer-readable storage medium
CN114697528A (zh) * 2020-12-30 2022-07-01 Oppo广东移动通信有限公司 Image processor, electronic device, and focusing control method
CN114092364B (zh) * 2021-08-12 2023-10-03 荣耀终端有限公司 Image processing method and related device
CN116935391A (zh) * 2022-04-08 2023-10-24 广州视源电子科技股份有限公司 Camera-based text recognition method, apparatus, device, and storage medium
CN117177062A (zh) * 2022-05-30 2023-12-05 荣耀终端有限公司 Camera switching method and electronic device
CN115209057B (zh) * 2022-08-19 2023-05-23 荣耀终端有限公司 Shooting focusing method and related electronic device
CN116132791A (zh) * 2023-03-10 2023-05-16 创视微电子(成都)有限公司 Method and apparatus for obtaining clear images of multiple moving objects at multiple depths of field
CN116991298B (zh) * 2023-09-27 2023-11-28 子亥科技(成都)有限公司 Virtual lens control method based on adversarial neural network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7280149B2 (en) * 2001-12-21 2007-10-09 Flextronics Sales & Marketing (A-P) Ltd. Method and apparatus for detecting optimum lens focus position
US8600186B2 (en) * 2010-04-26 2013-12-03 City University Of Hong Kong Well focused catadioptric image acquisition
US20120019703A1 (en) * 2010-07-22 2012-01-26 Thorn Karl Ola Camera system and method of displaying photos
US8659697B2 (en) * 2010-11-11 2014-02-25 DigitalOptics Corporation Europe Limited Rapid auto-focus using classifier chains, MEMS and/or multiple object focusing
US8648959B2 (en) * 2010-11-11 2014-02-11 DigitalOptics Corporation Europe Limited Rapid auto-focus using classifier chains, MEMS and/or multiple object focusing
CN104601879A (zh) * 2014-11-29 2015-05-06 深圳市金立通信设备有限公司 Focusing method
US9715721B2 (en) * 2015-12-18 2017-07-25 Sony Corporation Focus detection
CN105629631B (zh) * 2016-02-29 2020-01-10 Oppo广东移动通信有限公司 Control method, control apparatus, and electronic apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013135459A (ja) * 2011-12-27 2013-07-08 Canon Marketing Japan Inc Imaging apparatus, control method therefor, and program
US9615016B2 * 2013-02-07 2017-04-04 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, control method and recording medium, where each subject is focused in a reconstructed image
CN104079837A (zh) * 2014-07-17 2014-10-01 广东欧珀移动通信有限公司 Image-sensor-based focusing method and apparatus
CN106713750A (zh) * 2016-12-19 2017-05-24 广东欧珀移动通信有限公司 Focusing control method and apparatus, electronic apparatus, and terminal device
CN107483825A (zh) * 2017-09-08 2017-12-15 上海创功通讯技术有限公司 Method and apparatus for automatically adjusting focal length
CN109698901A (zh) * 2017-10-23 2019-04-30 广东顺德工业设计研究院(广东顺德创新设计研究院) Autofocus method and apparatus, storage medium, and computer device
CN109561257A (zh) * 2019-01-18 2019-04-02 深圳看到科技有限公司 Picture focusing method, apparatus, terminal, and corresponding storage medium

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674258B (zh) * 2021-08-26 2022-09-23 展讯通信(上海)有限公司 Image processing method and related device
CN113674258A (zh) * 2021-08-26 2021-11-19 展讯通信(上海)有限公司 Image processing method and related device
CN113810615A (zh) * 2021-09-26 2021-12-17 展讯通信(上海)有限公司 Focusing processing method and apparatus, electronic device, and storage medium
CN114164790A (zh) * 2021-12-27 2022-03-11 哈尔滨职业技术学院 Intelligent device for clearing compacted ice and snow from road surfaces and method of use
CN114164790B (zh) * 2021-12-27 2022-05-10 哈尔滨职业技术学院 Intelligent device for clearing compacted ice and snow from road surfaces and method of use
CN114666497B (zh) * 2022-02-28 2024-03-15 青岛海信移动通信技术有限公司 Imaging method, terminal device, and storage medium
CN114666497A (zh) * 2022-02-28 2022-06-24 青岛海信移动通信技术股份有限公司 Imaging method, terminal device, storage medium, and program product
CN114422708A (zh) * 2022-03-15 2022-04-29 深圳市海清视讯科技有限公司 Image acquisition method, apparatus, device, and storage medium
CN114422708B (zh) * 2022-03-15 2022-06-24 深圳市海清视讯科技有限公司 Image acquisition method, apparatus, device, and storage medium
CN116939363B (zh) * 2022-03-29 2024-04-26 荣耀终端有限公司 Image processing method and electronic device
CN116939363A (zh) * 2022-03-29 2023-10-24 荣耀终端有限公司 Image processing method and electronic device
CN114760415A (zh) * 2022-04-18 2022-07-15 上海千映智能科技有限公司 Lens focusing method, system, device, and medium
CN114760415B (zh) * 2022-04-18 2024-02-02 上海千映智能科技有限公司 Lens focusing method, system, device, and medium
CN116051368B (zh) * 2022-06-29 2023-10-20 荣耀终端有限公司 Image processing method and related device
CN116051368A (zh) * 2022-06-29 2023-05-02 荣耀终端有限公司 Image processing method and related device
CN116074624A (zh) * 2022-07-22 2023-05-05 荣耀终端有限公司 Focusing method and apparatus
CN116074624B (zh) * 2022-07-22 2023-11-10 荣耀终端有限公司 Focusing method and apparatus
CN115278089B (zh) * 2022-09-26 2022-12-02 合肥岭雁科技有限公司 Focus correction method for blurred face images, apparatus, device, and storage medium
CN115278089A (zh) * 2022-09-26 2022-11-01 合肥岭雁科技有限公司 Focus correction method for blurred face images, apparatus, device, and storage medium
CN115512166B (zh) * 2022-10-18 2023-05-16 湖北华鑫光电有限公司 Intelligent lens manufacturing method and system
CN115512166A (zh) * 2022-10-18 2022-12-23 湖北华鑫光电有限公司 Intelligent lens manufacturing method and system
CN117132646A (zh) * 2023-10-26 2023-11-28 湖南自兴智慧医疗科技有限公司 Deep-learning-based split-phase autofocus system
CN117132646B (zh) * 2023-10-26 2024-01-05 湖南自兴智慧医疗科技有限公司 Deep-learning-based split-phase autofocus system

Also Published As

Publication number Publication date
CN113132620A (zh) 2021-07-16
CN113132620B (zh) 2022-10-11

Similar Documents

Publication Publication Date Title
WO2021136050A1 (fr) Image photographing method and related apparatus
WO2020168956A1 (fr) Method for photographing the moon, and electronic device
WO2021093793A1 (fr) Capturing method and electronic device
WO2021052232A1 (fr) Time-lapse photography method and device
KR102535607B1 (ko) Method and electronic device for displaying an image during photographing
WO2020073959A1 (fr) Image capturing method and electronic device
WO2021129198A1 (fr) Photographing method in long-focus scenario, and terminal
EP4020967B1 (fr) Photographing method in long-focus scenario, and mobile terminal
WO2021052111A1 (fr) Image processing method and electronic device
CN114650363B (zh) Image display method and electronic device
JP2022522453A (ja) Recording frame rate control method and related apparatus
WO2021078001A1 (fr) Image enhancement method and apparatus
CN112887583A (zh) Photographing method and electronic device
CN113542580B (zh) Method and apparatus for removing glasses glare, and electronic device
CN113810603B (zh) Point light source image detection method and electronic device
CN113452898A (zh) Photographing method and apparatus
WO2023273323A1 (fr) Focusing method and electronic device
CN110138999B (zh) Document scanning method and apparatus for mobile terminal
WO2022156473A1 (fr) Video playback method and electronic device
WO2021238740A1 (fr) Screenshot method and electronic device
WO2022033344A1 (fr) Video stabilization method, terminal device, and computer-readable storage medium
CN115686182A (zh) Augmented reality video processing method and electronic device
CN115967851A (zh) Quick photographing method, electronic device, and computer-readable storage medium
CN115150542A (zh) Video anti-shake method and related device
CN115880198B (zh) Image processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20909973

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20909973

Country of ref document: EP

Kind code of ref document: A1