CN113194242B - Shooting method in long-focus scene and mobile terminal - Google Patents

Shooting method in long-focus scene and mobile terminal

Info

Publication number
CN113194242B
Authority
CN
China
Prior art keywords
image
mobile terminal
camera
image processing
shooting object
Prior art date
Legal status
Active
Application number
CN202010038444.XA
Other languages
Chinese (zh)
Other versions
CN113194242A (en)
Inventor
李光源
敖欢欢
刘苑文
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202010038444.XA priority Critical patent/CN113194242B/en
Priority to US17/779,876 priority patent/US20220417416A1/en
Priority to EP20914044.1A priority patent/EP4020967B1/en
Priority to PCT/CN2020/124545 priority patent/WO2021143269A1/en
Publication of CN113194242A publication Critical patent/CN113194242A/en
Application granted granted Critical
Publication of CN113194242B publication Critical patent/CN113194242B/en

Classifications

    • G06F1/1686: Constructional details or arrangements related to integrated I/O peripherals, the I/O peripheral being an integrated camera
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/635: Region indicators; field of view indicators
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/80: Camera processing pipelines; components thereof

Abstract

A shooting method in a long-focus scene and a mobile terminal relate to the field of communication technology. The method can improve the definition of the shooting object in an image, blur the background, highlight the shooting object, and improve the overall visual effect of the image. The method specifically includes the following steps: in a long-focus shooting scene, when the mobile terminal displays a preview or captures an image, a target shooting object and the background of the target shooting object are identified from the image collected by the camera; the image definition of the area where the target shooting object is located is improved, and a blurring effect is added to the image of the area where the background of the target shooting object is located.

Description

Shooting method in long-focus scene and mobile terminal
Technical Field
The application relates to the technical field of mobile terminals, in particular to a shooting method in a long-focus scene and a mobile terminal.
Background
At present, multiple lenses, such as a wide-angle camera, a middle-focus camera, and a long-focus (telephoto) camera, can be integrated in a mobile phone to cover the user's different shooting scenes. The telephoto camera may be used to photograph a shooting object that is far from the user (i.e., a long-focus shooting scene). However, in practice, because the shooting object is far from the user and the specifications of the telephoto camera integrated in the mobile phone are limited, the definition of the shooting object in an image obtained in a long-focus scene is not high, the shooting object is not prominent, and the visual effect is poor.
Disclosure of Invention
The embodiments of the present application provide a shooting method in a long-focus scene and a mobile terminal, which can improve the definition of the shooting object in an image, blur the background, highlight the shooting object, and improve the overall visual effect of the image.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
in a first aspect, a shooting method in a long-focus scene is provided, applied to a mobile terminal including a camera. The method includes: the mobile terminal starts the camera and displays a viewfinder frame, where the viewfinder frame is used to display a first preview picture, and the zoom magnification of the camera corresponding to the first preview picture is a first magnification; receiving a first operation, input by a user, of increasing the zoom magnification of the camera; in response to the first operation, the viewfinder frame displays a second preview picture, where the zoom magnification of the camera corresponding to the second preview picture is a second magnification, and the second magnification is greater than the first magnification; if the second magnification is greater than or equal to a preset magnification, before displaying the second preview picture, the mobile terminal performs first image processing on the image collected by the camera to generate the second preview picture. The first image processing includes: identifying a target shooting object and the background of the target shooting object according to the image collected by the camera; improving the image definition of the area where the target shooting object is located, and adding a blurring effect to the image of the area where the background of the target shooting object is located.
In a long-focus shooting scene, because the shooting object is far from the mobile phone, the definition of the original image collected by the camera of the mobile phone is not high. Therefore, the embodiments of the present application provide a shooting method for long-focus shooting scenes that can intelligently identify the target shooting object, enhance its details, and improve the definition of the part of the collected original image in which the shooting object appears. The foreground and the background of the image can be automatically distinguished based on the identified shooting object, and the background outside the shooting object is blurred, so that the target shooting object is highlighted, the artistic quality of images taken in long-focus shooting scenes is improved, and the user's visual experience is enhanced.
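For illustration only (this is not part of the claimed method), the following Python/OpenCV sketch shows one way the two operations of the first image processing could be combined once a mask of the target shooting object is available. The subject-recognition step that produces the mask is not shown, and simple unsharp masking is used here as a stand-in for the neural-network detail enhancement described later; both simplifications are assumptions made for brevity.

```python
import cv2
import numpy as np

def first_image_processing(frame, subject_mask):
    """Illustrative sketch: sharpen the subject region, blur the background.

    frame:        BGR image collected by the tele camera.
    subject_mask: uint8 mask (255 = target shooting object), assumed to come
                  from some recognition step (e.g. an AI model) not shown here.
    """
    # Detail enhancement of the subject region (unsharp masking stands in for
    # the per-category neural-network enhancement described in the patent).
    soft = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(frame, 1.5, soft, -0.5, 0)

    # Blurring effect for the background region.
    background = cv2.GaussianBlur(frame, (21, 21), 0)

    # Composite: subject pixels from the sharpened image, the rest from the
    # blurred background.
    mask3 = cv2.merge([subject_mask] * 3).astype(np.float32) / 255.0
    out = sharpened.astype(np.float32) * mask3 + background.astype(np.float32) * (1.0 - mask3)
    return out.astype(np.uint8)
```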
In a possible implementation, the method further includes: and the mobile terminal displays a mark frame on the second preview picture, and the mark frame is used for marking the area where the target shooting object is located.
In this way, the user can further determine, according to the mark frame, whether the target shooting object automatically identified by the mobile terminal is the object the user intends to shoot, whether the automatically identified target shooting object is complete, and so on.
In a possible implementation, the method further includes: and the mobile terminal displays prompt information in the second preview picture, and the prompt information is used for recommending the first image processing corresponding to the target shooting object.
In a possible implementation manner, a mobile terminal performs first image processing on an image acquired by a camera, including: the mobile terminal receives a second operation of selecting the first image processing input by the user; and responding to the second operation, and performing first image processing on the image acquired by the camera by the mobile terminal.
In a possible implementation manner, the method further includes adjusting the position of the target shooting object in the second preview picture, which specifically includes: adjusting the second preview picture so that the target shooting object is located in the central area of the second preview picture.
In a specific implementation, based on the recognized position of the shooting object in the image, the background of the shooting object in the image may be cropped or filled so that the shooting object is located at the center of the image. Alternatively, the shooting object may be located at another position in the image, for example a position a certain distance to the left or right of the center, which is not limited in the embodiments of the present application. A minimal sketch of such a crop-and-fill adjustment is given below.
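The sketch assumes the recognized subject is given as a bounding box and that zero padding is an acceptable fill; the patent does not prescribe a specific method, so this is only one possible realization.

```python
import numpy as np

def center_subject(image, bbox):
    """Shift the subject toward the image centre by cropping and zero-padding.

    bbox = (x, y, w, h) is the recognised subject region; only the requirement
    that the subject end up in the central area is taken from the text above.
    """
    h_img, w_img = image.shape[:2]
    x, y, w, h = bbox
    # Offset between the subject centre and the image centre.
    dx = w_img // 2 - (x + w // 2)
    dy = h_img // 2 - (y + h // 2)
    out = np.zeros_like(image)
    # Source and destination ranges after shifting the frame by (dx, dy).
    src_x0, src_y0 = max(0, -dx), max(0, -dy)
    dst_x0, dst_y0 = max(0, dx), max(0, dy)
    w_copy = w_img - abs(dx)
    h_copy = h_img - abs(dy)
    out[dst_y0:dst_y0 + h_copy, dst_x0:dst_x0 + w_copy] = \
        image[src_y0:src_y0 + h_copy, src_x0:src_x0 + w_copy]
    return out
```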
In one possible implementation manner, after the mobile terminal displays the second preview screen, the method further includes: receiving a third operation input by a user, wherein the third operation is used for indicating the mobile terminal to close a first image processing function corresponding to a target shooting object; and responding to the third operation, and the mobile terminal determines not to perform first image processing on the image acquired by the camera.
In this way, a method for turning off the first image processing is provided, meeting different usage requirements of users.
In a possible implementation manner, increasing the image sharpness of the region where the target photographic object is located in the first image processing specifically includes: identifying the category of a target shooting object, and segmenting a first image of the target shooting object from images acquired by a camera; and inputting the first image into a neural network model corresponding to the category of the target photographic object, and outputting a second image of the target photographic object, wherein the definition of the second image is greater than that of the first image.
In one possible implementation, the neural network model corresponding to the category of the target photographic subject is obtained by training images of a plurality of photographic subjects in accordance with the category of the target photographic subject.
It can be understood that the mobile phone may pre-train an AI model for enhancing the details of a shooting object based on the category of the shooting object. Of course, the mobile phone may also obtain a trained AI model directly from a server. When training an AI model that enhances the details of a shooting object based on its category, a large number of training samples covering each category need to be input. For example, the training samples for the magpie in the bird category include images of different kinds of magpies, images of magpies of the same kind with different sizes or colors, images of magpies of the same kind in different postures, images of magpies of the same kind in different environments, and so on. Detail enhancement of the shooting object may include using the AI model to perform intelligent pixel filling on blurred regions of the shooting object to improve image definition. It may further include using the AI model to perform intelligent pixel filling on missing parts of the shooting object to repair them. It may also include improving the overall sharpness of the image of the shooting object, and the like.
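The inference side of this idea might look like the following hedged PyTorch sketch. The per-category model registry, tensor layout, and network architecture are all assumptions; the patent only specifies that a model trained per category maps the segmented first image to a second image of higher definition.

```python
import torch

def enhance_subject(first_image, category, models):
    """Hedged sketch of the per-category detail-enhancement step.

    first_image: 1x3xHxW float tensor holding the segmented target shooting object.
    category:    label produced by the recognition step, e.g. "bird" or "cat".
    models:      dict mapping each category to a detail-enhancement network
                 trained on samples of that category (architecture and weights
                 are assumptions; they may also be downloaded from a server).
    Returns the "second image", whose definition is higher than the first.
    """
    model = models[category]
    model.eval()
    with torch.no_grad():
        second_image = model(first_image)
    return second_image
```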
In a possible implementation manner, identifying the background of the target shooting object in the first image processing specifically includes: using a background extraction algorithm to identify the background of the target shooting object from the image collected by the camera, where the background extraction algorithm includes any one or any combination of the inter-frame difference method, the background difference method, and the environment algorithm.
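For illustration, a minimal inter-frame difference sketch in OpenCV is shown below; a real background-extraction pipeline would add background modelling and morphological clean-up, and the threshold used here is an arbitrary assumption.

```python
import cv2

def foreground_background_by_frame_difference(prev_frame, cur_frame, thresh=25):
    """Minimal inter-frame-difference sketch: pixels that change little between
    consecutive frames are treated as background, changing pixels as the subject."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, moving = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    moving = cv2.dilate(moving, None, iterations=2)    # fill small holes in the moving region
    background_mask = cv2.bitwise_not(moving)          # everything else is background
    return moving, background_mask
```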
In one possible implementation, the first image processing further includes: when the camera collects an image containing a target shooting object, focusing is automatically carried out based on the target shooting object.
It should be noted that, in the prior art, focusing and exposure are generally performed based on a fixed position, or based on a position selected by the user (e.g., the touch position of a tap operation). In the embodiments of the present application, however, focusing and exposure are performed based on the position of the target shooting object automatically identified by the mobile phone, which improves the overall definition of the target shooting object.
It should also be noted that, in the prior art, the size of the focusing frame is fixed and is usually a default set by the mobile phone. In the auto-focusing process based on the target shooting object in the embodiments of the present application, however, the size of the focusing frame may be adjusted automatically according to the size of the target shooting object recognized by the mobile terminal. In some examples, the focusing frame may be the same as the mark frame.
Optionally, since the method provided by the embodiments of the present application is applied to long-focus shooting scenes, a closest search distance, for example 10 meters, may be set when the mobile terminal performs auto-focusing. That is, during auto-focusing, the searched distance range is from 10 meters away from the mobile terminal to infinity; the mobile terminal does not need to search the range from the macro distance up to 10 meters. This reduces the time spent on auto-focusing and improves image processing efficiency.
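The sketch below illustrates the idea of restricting the auto-focus search range and sizing the focusing frame to the recognized subject. The `camera` driver object, its `set_focus_distance()`/`capture()` methods, and the contrast-maximization loop are hypothetical stand-ins; actual focusing is done by the camera hardware and HAL, which are not modelled here.

```python
import cv2

def candidate_distances(near_limit_m=10.0, steps=8):
    # Coarse sweep of focus distances from the near limit outward toward
    # "infinity" (illustrative values only).
    return [near_limit_m * (2 ** i) for i in range(steps)]

def autofocus_on_subject(camera, subject_box, near_limit_m=10.0):
    """Contrast-based auto-focus restricted to the recognised subject region.

    `camera` is a hypothetical driver object; only distances from near_limit_m
    outward are searched, matching the long-focus assumption above, and the
    focusing frame equals the subject's bounding box (the mark frame).
    """
    x, y, w, h = subject_box
    best_score, best_dist = -1.0, None
    for dist in candidate_distances(near_limit_m):
        camera.set_focus_distance(dist)
        frame = camera.capture()
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(roi, cv2.CV_64F).var()  # higher variance = sharper
        if score > best_score:
            best_score, best_dist = score, dist
    camera.set_focus_distance(best_dist)
    return best_dist
```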
In one possible implementation, the first image processing further includes: controlling the camera to continuously collect N images containing the target shooting object, and synthesizing the N images into a third image by using a super-resolution (ultra-definition) technique; and, based on the third image, improving the image definition of the area where the target shooting object is located and adding a blurring effect to the image of the area where the background of the target shooting object is located. In this way, the target shooting object in the image is further highlighted, and the overall artistic quality of the image is improved.
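As a very reduced illustration of the multi-frame synthesis step, the sketch below simply averages N burst frames that are assumed to be already registered (for example by OIS/EIS); a real super-resolution pipeline would add sub-pixel alignment and detail reconstruction, which are not shown.

```python
import numpy as np

def merge_burst(frames):
    """Average N roughly aligned burst frames to suppress sensor noise so that
    detail in the subject region survives later sharpening. This is only a
    stand-in for the multi-frame super-resolution synthesis named above."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.mean(stack, axis=0).astype(np.uint8)
```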
In a possible implementation manner, adding a blurring effect to the image of the area where the background of the target shooting object is located in the first image processing includes: processing the image of the area where the background of the target shooting object is located with a blurring algorithm, where the blurring algorithm includes any one or any combination of Gaussian filtering, circular filtering, guided filtering, and domain filtering.
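For reference, the listed blurring algorithms roughly correspond to the following OpenCV calls. This is a sketch only: box blur stands in for circular filtering, and bilateral filtering stands in for guided/domain filtering (true guided filtering lives in the opencv-contrib `ximgproc` module).

```python
import cv2

def blur_by_method(image, method="gaussian"):
    """Illustrative mapping from the blur algorithms listed above to common
    OpenCV calls; kernel sizes are arbitrary assumptions."""
    if method == "gaussian":
        return cv2.GaussianBlur(image, (31, 31), 0)
    if method == "circular":
        return cv2.blur(image, (31, 31))             # box blur as a stand-in
    if method in ("guided", "domain"):
        return cv2.bilateralFilter(image, 15, 75, 75)  # edge-preserving substitute
    raise ValueError(method)
```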
In a second aspect, a mobile terminal is provided, which includes: a processor, a memory and a touch screen, the memory and the touch screen being coupled to the processor, the memory being configured to store computer program code comprising computer instructions which, when read by the processor from the memory, cause the mobile terminal to perform the method of any of the above aspects and any possible implementation thereof.
In a third aspect, an apparatus is provided, where the apparatus is included in a mobile terminal, and the apparatus has a function of implementing a behavior of the mobile terminal in any one of the methods in the foregoing aspects and possible implementations. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the above functions. Such as a receiving module or unit, a display module or unit, and a processing module or unit, etc.
A fourth aspect provides a chip system comprising a processor, which when executing instructions performs the method as described in the above aspects and any one of the possible implementations thereof.
A fifth aspect provides a computer readable storage medium comprising computer instructions which, when run on a mobile terminal, cause the mobile terminal to perform the method as described in the above aspect and any one of its possible implementations.
A sixth aspect provides a computer program product for causing a computer to perform the method as described in the above aspects and any one of the possible implementations when the computer program product runs on the computer.
Drawings
Fig. 1 is a first schematic structural diagram of a mobile terminal according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a user interface of some mobile terminals provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a user interface of another mobile terminal according to an embodiment of the present application;
FIG. 5 is a schematic diagram of user interfaces of further mobile terminals according to an embodiment of the present application;
FIG. 6 is a schematic diagram of user interfaces of further mobile terminals according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
The shooting method provided by the embodiments of the present application can be applied to a mobile terminal provided with a camera. The mobile terminal may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a smart speaker, a robot, or the like.
Fig. 1 shows a schematic configuration of a mobile terminal 100. The mobile terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation to the mobile terminal 100. In other embodiments of the present application, the mobile terminal 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the mobile terminal 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate over a CSI interface to implement the camera functions of mobile terminal 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the mobile terminal 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the mobile terminal 100, and may also be used to transmit data between the mobile terminal 100 and peripheral devices. It may also be used to connect earphones and play audio through the earphones. The interface may further be used to connect other mobile terminals, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the mobile terminal 100. In other embodiments of the present application, the mobile terminal 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the mobile terminal 100. The charging management module 140 may also supply power to the mobile terminal through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the mobile terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the mobile terminal 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied on the mobile terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the mobile terminal 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the mobile terminal 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the mobile terminal 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The mobile terminal 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The mobile terminal 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the mobile terminal 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In some embodiments of the present application, the 1 or N cameras 193 may include at least one telephoto camera, which may be used to capture a photographic subject located farther from the mobile terminal 100. The processor 110 (for example, may be specifically one or more of an ISP, a CPU, a DSP, and an NPU) may perform detail enhancement and background blurring on an image acquired by the tele camera, so as to improve image quality in a tele shooting scene of the mobile terminal and improve visual experience of a user.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the mobile terminal 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The mobile terminal 100 may support one or more video codecs. In this way, the mobile terminal 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU may implement applications such as intelligent recognition of the mobile terminal 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the mobile terminal 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The mobile terminal 100 may implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The mobile terminal 100 may listen to music through the speaker 170A or listen to a hands-free call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the mobile terminal 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal to the microphone 170C by speaking with the mouth close to the microphone 170C. The mobile terminal 100 may be provided with at least one microphone 170C. In other embodiments, the mobile terminal 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the mobile terminal 100 may further be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.
The headphone interface 170D is used to connect wired headphones. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The mobile terminal 100 may receive a key input, and generate a key signal input related to user setting and function control of the mobile terminal 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to and detached from the mobile terminal 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The mobile terminal 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards can be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The mobile terminal 100 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the mobile terminal 100 employs eSIM, namely: an embedded SIM card. The eSIM card may be embedded in the mobile terminal 100 and may not be separated from the mobile terminal 100.
The software system of the mobile terminal 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the mobile terminal 100.
Fig. 2 is a block diagram of a software configuration of the mobile terminal 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
As shown in fig. 2, the application layer may include a series of application packages, including applications preset before the mobile terminal leaves the factory and applications installed by the user after the mobile terminal leaves the factory, for example through an application market or by other means. These applications include, but are not limited to: camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, messaging, browser, WeChat, and the like (only some of which are shown).
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the mobile terminal 100, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message reminders, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the mobile terminal vibrates, or an indicator light blinks.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
In some embodiments of the present application, the system library may further include a first module, which may be configured to perform detail enhancement and background blurring on an image acquired by a tele camera in the mobile terminal 100, so as to improve image quality in a tele shooting scene of the mobile terminal and improve visual experience of a user.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the mobile terminal 100 software and hardware in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a tap operation and the corresponding control being the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera application, which then starts the camera driver by calling the kernel layer, and a still image or video is captured through the camera 193.
The technical solutions in the following embodiments can be implemented in the mobile terminal 100 having the above hardware architecture and software architecture.
The mobile terminal 100 is taken as an example, and technical solutions provided by embodiments of the present application are described in detail with reference to the drawings.
For example, the user may instruct the mobile phone to start the camera application by touching a specific control on the mobile phone screen, pressing a specific physical key or key combination, inputting a voice command, or making an air gesture (a gesture made without touching the screen). After receiving the user's instruction to open the camera, the mobile phone starts the camera and displays the shooting interface.
For example: as shown in (1) in fig. 3, the user may instruct the mobile phone to open the camera application by clicking a "camera" application icon 301 on the desktop of the mobile phone, and the mobile phone displays a shooting interface as shown in (2) in fig. 3.
For another example: when the mobile phone is in the screen locking state, the user may also instruct the mobile phone to start the camera application by a gesture of sliding rightward on the screen of the mobile phone, and the mobile phone may also display a shooting interface as shown in (2) in fig. 3.
Or, when the mobile phone is in the screen lock state, the user may instruct the mobile phone to open the camera application by clicking the shortcut icon of the "camera" application on the screen lock interface, and the mobile phone may also display the shooting interface shown in (2) in fig. 3.
Another example is: when the mobile phone runs other applications, the user can also enable the mobile phone to open the camera application to take a picture by clicking the corresponding control. Such as: when the user is using an instant messaging application (such as a WeChat application), the user can also instruct the mobile phone to start the camera application to take a picture and a video by selecting a control of the camera function.
As shown in (2) in fig. 3, the camera shooting interface generally includes a viewfinder 302, a shooting control, and other function controls ("Large aperture", "Portrait", "Photo", "Video", and the like). The viewfinder 302 can be used to preview the image (or picture) captured by the camera, and the user can decide, based on the image (or picture) in the viewfinder 302, when to instruct the mobile phone to perform the shooting operation. The user may instruct the mobile phone to perform the shooting operation, for example, by clicking the shooting control or by pressing a volume key. In some embodiments, a zoom magnification indication 303 may also be included in the shooting interface. In general, the default zoom magnification of the mobile phone is "1×", which is the basic magnification.
For example, a mobile phone that integrates three cameras, namely a short-focus (wide-angle) camera, a medium-focus camera, and a long-focus (telephoto) camera, is described as an example. In general, the medium-focus camera is used in the most shooting scenes and is therefore usually set as the main camera. The focal length of the main camera is set as the reference focal length, with a zoom magnification of "1×". In some embodiments, digital zoom may be performed on the image captured by the main camera; that is, each pixel area of the "1×" image captured by the main camera is enlarged by the ISP or another processor in the mobile phone, and the framing range of the image is correspondingly reduced, so that the processed image is equivalent to an image taken by the main camera at another zoom magnification (for example, "2×"). That is, an image captured using the main camera may correspond to a range of zoom magnifications, for example, "1×" to "5×".
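The digital zoom described above can be illustrated with a minimal sketch in which the "1×" frame is center-cropped in proportion to the requested magnification and upscaled back to the original resolution. This is only an illustration of the principle; the synthetic input frame, the function name, and the choice of interpolation are assumptions and do not describe the ISP pipeline of the embodiment.

```python
import cv2
import numpy as np

def digital_zoom(frame: np.ndarray, zoom: float) -> np.ndarray:
    """Center-crop a frame by `zoom` and upscale it back to the original size,
    approximating the digital zoom described above: each pixel area is enlarged
    while the framing range shrinks accordingly."""
    if zoom <= 1.0:
        return frame
    h, w = frame.shape[:2]
    crop_w, crop_h = int(w / zoom), int(h / zoom)
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
    cropped = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    # Upscale the cropped region back to the original resolution.
    return cv2.resize(cropped, (w, h), interpolation=cv2.INTER_CUBIC)

# Example: simulate a "2x" result from a synthetic "1x" main-camera frame.
frame_1x = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame_2x = digital_zoom(frame_1x, 2.0)
```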
Similarly, the multiple of the focal length of the telephoto camera relative to the focal length of the main camera can be used as the zoom magnification of the telephoto camera. For example, the focal length of the telephoto camera may be 5 times that of the main camera, i.e., the zoom magnification of the telephoto camera is "5×". Similarly, digital zoom may also be performed on images captured by the telephoto camera. That is, an image captured using the telephoto camera may correspond to another range of zoom magnifications, for example, "5×" to "50×".
Similarly, the multiple of the focal length of the short-focus (wide-angle) camera relative to the focal length of the main camera can be used as the zoom magnification of the short-focus (wide-angle) camera. For example, the focal length of the short-focus camera may be 0.5 times that of the main camera, i.e., the zoom magnification of the short-focus (wide-angle) camera is "0.5×". Similarly, digital zoom may also be performed on images captured by the short-focus (wide-angle) camera. That is, an image captured using the short-focus (wide-angle) camera may correspond to another range of zoom magnifications, for example, "0.5×" to "1×".
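Following the three-camera example above, the mapping from a requested total zoom magnification to a physical camera plus a residual digital zoom factor might be sketched as follows. The interval boundaries ("0.5×" to "1×", "1×" to "5×", "5×" to "50×") are taken from the example values in the text, while the function and camera names are illustrative assumptions.

```python
def select_camera(total_zoom: float):
    """Pick the physical camera whose zoom interval covers `total_zoom` and
    return the residual digital zoom factor to apply to its frames."""
    # (name, native magnification, upper bound of its zoom interval)
    cameras = [
        ("wide", 0.5, 1.0),   # short-focus (wide-angle) camera
        ("main", 1.0, 5.0),   # medium-focus (main) camera
        ("tele", 5.0, 50.0),  # long-focus (telephoto) camera
    ]
    for name, native, upper in cameras:
        if total_zoom <= upper:
            return name, max(total_zoom / native, 1.0)
    # Beyond the last interval, stay on the telephoto camera.
    return "tele", total_zoom / 5.0

print(select_camera(3.0))   # ('main', 3.0): main camera plus 3x digital zoom
print(select_camera(10.0))  # ('tele', 2.0): telephoto camera plus 2x digital zoom
```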
In actual shooting scenes, the short-focus (wide-angle) camera and the medium-focus camera are mostly used to photograph subjects close to the mobile phone, while the telephoto camera is generally used to photograph subjects far from the mobile phone. Of course, in some scenes, a subject far from the mobile phone may also be photographed using a high zoom magnification of the medium-focus camera. The telephoto shooting scene in the embodiments of the present application may be understood as a shooting scene in which the zoom magnification of the mobile phone is greater than a preset magnification (for example, "5×", "10×", or "20×"). In a telephoto shooting scene, the mobile phone may use an image captured by the medium-focus camera or an image captured by the telephoto camera, which is not specifically limited in the embodiments of the present application.
It should be noted that, in a telephoto shooting scene, because the photographic subject is far from the mobile phone, the sharpness of the original image acquired by the camera of the mobile phone is not high. Moreover, when the subject is far from the mobile phone and the mobile phone shoots at a high telephoto magnification, the depth of field of the image is large, so the mobile phone cannot achieve a background-blurring effect through means such as a large aperture. Therefore, the embodiments of the present application provide a shooting method in a telephoto shooting scene, which processes the original image acquired by the camera of the mobile phone. Specifically, the photographic subject is intelligently identified, the details of the subject are enhanced, and the sharpness of the part of the original image corresponding to the subject is improved. In addition, the foreground and background of the image can be automatically distinguished based on the identified subject, and the background other than the subject can be blurred, thereby improving the artistic quality of the image in the telephoto shooting scene and enhancing the user's visual experience.
Sharpness (definition) here refers to how clearly the texture details and the boundaries of each part of the image are presented. For example, suppose image 1 is an original image captured by the camera of the mobile phone, and image 2 is obtained by processing image 1 with the method provided by the embodiments of the present application. The sharpness of the photographic subject in image 2 is higher than that in image 1. If image 1 and image 2 are enlarged by the same magnification and then compared, it can be seen that the texture details of the subject in the enlarged image 2 are richer and the boundaries of each part of the subject are clearer.
Hereinafter, the entire process of switching the mobile phone to the telephoto shooting scene, processing the original image acquired by the camera, previewing the processed image, and shooting to generate a picture or video is described in detail.
1. Mobile phone switching to long-focus shooting scene
In some embodiments, the user can manually adjust the zoom magnification used when shooting with the mobile phone.
For example, as shown in (2) in fig. 3, the user can adjust the zoom magnification used by the mobile phone by operating the zoom magnification indication 303 in the shooting interface. For instance, when the zoom magnification currently used by the mobile phone is "1×", the user can change it to "5×" by clicking the zoom magnification indication 303 one or more times, and the mobile phone then displays the shooting interface shown in (1) in fig. 4. In the shooting interface shown in (1) in fig. 4, the framing range of the image previewed in the viewfinder 302 is obviously smaller than that in the viewfinder 302 in (2) in fig. 3, but the photographic subject (for example, a bird) previewed in the viewfinder 302 appears larger than the subject previewed in the viewfinder 302 in (2) in fig. 3. In some examples, the shooting interface shown in (1) in fig. 4 may continue to display the zoom magnification indication 303, which now shows "5×" so that the user knows the current zoom magnification.
Another example is: as shown in (3) in fig. 3, the user may decrease the zoom magnification used by the mobile phone through a gesture of pinching two fingers (or three fingers) in the photographing interface, or increase the zoom magnification used by the mobile phone through a gesture of sliding two fingers (or three fingers) outward (in the opposite direction to the pinching).
Another example is: as shown in (4) in fig. 3, the user may also change the zoom magnification used by the mobile phone by dragging the zoom scale 304 in the shooting interface.
Another example is: the user can also change the zoom magnification of the mobile phone by switching the currently used camera in the shooting interface or the shooting setting interface. For example, if the user selects to switch to a telephoto camera, the mobile phone automatically increases the zoom magnification.
Another example is: the user may change the zoom magnification of the mobile phone by selecting an option of a telephoto shooting scene or an option of a long-distance shooting scene in the shooting interface or the shooting setting interface.
In other embodiments, the mobile phone may also automatically identify a specific scene of the image captured by the camera, and automatically adjust the zoom magnification according to the identified specific scene. Such as: if the mobile phone recognizes that the image captured by the camera is a scene with a large visual field range, such as a sea, a mountain, a forest and the like, the zoom factor can be automatically reduced. For another example: if the mobile phone recognizes that the image captured by the camera is a distant object, for example, a distant bird, a player on a sports ground, etc., the zoom factor may be automatically increased, which is not limited in the present application.
When the zoom magnification of the mobile phone is adjusted to be greater than or equal to the preset magnification, the mobile phone enters the telephoto shooting scene.
2. Image preview in a telephoto shooting scene
In some embodiments of the present application, when the zoom magnification of the mobile phone is adjusted to be greater than or equal to a preset magnification, for example, "10×", the mobile phone displays the shooting interface shown in (2) in fig. 4. In this shooting interface, a mark frame 401 is included in the viewfinder 302 for marking the photographic subject recognized by the mobile phone, for example, a bird. Note that the photographic subject shown in the figure is in a moving state; the method provided by the embodiments of the present application applies regardless of whether the subject is moving or static. In some examples, the mobile phone may further display a prompt box 402 for recommending to the user an image processing mode corresponding to the category of the photographic subject. For example, if the recognized subject is a bird, the user is asked whether to turn on the image processing mode corresponding to birds (also referred to as bird mode or bird effect). After the corresponding image processing mode is started in response to the user's selection, the mobile phone performs corresponding image processing on the acquired image and displays the shooting interface shown in (3) in fig. 4. In other examples, after the mobile phone identifies the category and position of the photographic subject, it may automatically process the image using the image processing mode corresponding to that category. That is, when the zoom magnification of the mobile phone is adjusted to be greater than or equal to the preset magnification, the mobile phone directly displays the shooting interface shown in (3) in fig. 4.
In the shooting interface shown in (3) in fig. 4, the image displayed in the finder frame 302 is an image subjected to corresponding image processing. As can be seen, the identified shooting object is clearer and has higher definition. Optionally, the mobile phone may further display a tag 403 corresponding to the identified shooting object. Optionally, the mobile phone may also virtualize the background of the identified shooting object, so that the processed image has artistic feeling, and the visual effect of the image is improved.
Of course, in still other examples, after the mobile phone uses the image processing mode corresponding to the category of the photographic subject by default, or after the user manually turns on that image processing mode, the user may also manually turn it off. In addition, when the recognized category of the photographic subject has no corresponding image processing mode, the mobile phone automatically turns off image processing based on the category of the photographic subject.
In a specific implementation, the ISP or another processor of the mobile phone performs image analysis on the image acquired by the camera, identifies from the image the photographic subject that the user intends to shoot, and further identifies the category and position of the subject. The categories of photographic subjects may be defined arbitrarily based on objects that users usually photograph, and may be organized in a single level or in multiple levels. For example, the categories of photographic subjects may include humans, animals, plants, and the like. The animal category may further include a second-level classification, such as dogs, cats, squirrels, and birds. Further, the bird category may include a third-level classification, such as sparrow, magpie, gull, seabird, pigeon, and wild goose.
When the image acquired by the camera includes a plurality of objects, the object with the largest area may be the photographic subject by default, or the object located in the central area of the image may be the photographic subject, or the object selected by the user (for example, the object selected by a click operation) may be the photographic subject. Of course, a plurality of objects may be defaulted or selected as the shooting objects, which is not limited in the embodiment of the present application.
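A minimal sketch of the subject-selection rules just described (largest detected object by default, or the object the user tapped) is given below; the detection record format, function name, and tie-breaking behavior are assumptions for illustration only.

```python
def pick_subject(detections, tap=None):
    """detections: list of dicts, each with a 'box' = (x0, y0, x1, y1).
    tap: optional (x, y) position of a user click; overrides the default rule."""
    if not detections:
        return None

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    def center(b):
        return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)

    if tap is not None:
        # User-selected subject: prefer boxes containing the tap; otherwise
        # fall back to the detection whose center is closest to the tap.
        hits = [d for d in detections
                if d["box"][0] <= tap[0] <= d["box"][2]
                and d["box"][1] <= tap[1] <= d["box"][3]]
        pool = hits or detections
        return min(pool, key=lambda d: (center(d["box"])[0] - tap[0]) ** 2 +
                                       (center(d["box"])[1] - tap[1]) ** 2)
    # Default rule: the detected object with the largest area.
    return max(detections, key=lambda d: area(d["box"]))
```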
In some examples, the ISP or another processor of the mobile phone may employ a deep-learning-based target detection technique, such as a Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, or Faster R-CNN, to identify the category and position of the photographic subject. Optionally, the category and position of the photographic subject can be marked in the viewfinder of the mobile phone, so that the user can conveniently confirm whether the marked subject is the intended subject and whether the mobile phone has identified its category and position accurately. If the category and position automatically identified by the mobile phone are confirmed to be correct, the corresponding image processing mode recommended by the mobile phone can be used. Otherwise, the user can choose not to use the image processing mode recommended by the mobile phone, or change the photographic subject, and so on, so as to avoid incorrect image processing.
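As a hedged illustration of such deep-learning-based detection, the following sketch runs a pretrained Faster R-CNN from torchvision (a recent version is assumed) on a single frame and keeps high-confidence boxes. The generic COCO-trained model, the input file name, and the 0.6 threshold are placeholders; the embodiment's actual detector, categories, and training data are not specified here.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained Faster R-CNN (COCO classes), used here only to show how
# candidate boxes, class labels, and scores are obtained from a frame.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("preview_frame.jpg").convert("RGB")  # hypothetical frame
with torch.no_grad():
    output = model([to_tensor(image)])[0]

# Keep confident detections; each entry provides a box, a label, and a score.
keep = output["scores"] > 0.6
boxes = output["boxes"][keep]
labels = output["labels"][keep]
```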
After the class and position of the photographic subject are identified, the image of the photographic subject portion may be processed. For example, a partial image corresponding to the photographic subject may be segmented from the captured image based on an image segmentation technique, and then the segmented photographic subject may be subjected to detail enhancement.
It can be understood that the mobile phone may pre-train, for each category of photographic subject, an AI model for enhancing the details of subjects of that category. Of course, the mobile phone may also obtain the trained AI model directly from a server. When training an AI model that enhances the details of a photographic subject based on its category, a large number of training samples of photographic subjects, including training samples of each category, need to be input. For example, the training samples for magpies in the bird category include images of different species of magpie, images of magpies of the same species with different sizes or colors, images of magpies of the same species in different postures, images of magpies of the same species in different environments, and the like. Detail enhancement of the photographic subject may include: using the AI model to perform intelligent pixel filling on blurred areas of the subject, so as to improve image sharpness; using the AI model to perform intelligent pixel filling on missing parts of the subject, so as to repair them; and improving the overall sharpness of the image of the subject portion, and the like.
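The crop-enhance-paste flow described above can be sketched as follows. The segmentation mask, the enhance_net callable, and its input/output conventions are hypothetical placeholders standing in for whatever trained AI model is used; only the overall structure is being illustrated.

```python
import numpy as np

def enhance_subject(image: np.ndarray, mask: np.ndarray, enhance_net):
    """Crop the subject region given a binary mask, run a (hypothetical)
    category-specific enhancement network on it, and paste the result back.

    image: HxWx3 uint8 frame; mask: HxW bool array marking the subject;
    enhance_net: callable taking and returning an HxWx3 uint8 patch.
    """
    ys, xs = np.where(mask)
    if ys.size == 0:
        return image
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = image[y0:y1, x0:x1].copy()
    enhanced = enhance_net(patch)          # detail enhancement / pixel filling
    out = image.copy()
    region_mask = mask[y0:y1, x0:x1]
    # Replace only the subject pixels so the background is left untouched here.
    out[y0:y1, x0:x1][region_mask] = enhanced[region_mask]
    return out
```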
In other examples, before performing detail enhancement on the photographic subject, the mobile phone may further perform Auto Focus (AF) and Auto Exposure (AE) based on the identified subject, so as to highlight the subject and improve image quality. It should be noted that, in the prior art, focusing and exposure are generally performed based on a fixed position, or based on a position selected by the user (for example, the touch position of a click operation). In the embodiments of the present application, however, focusing and exposure are performed based on the position of the photographic subject automatically identified by the mobile phone, so as to improve the overall sharpness of the subject. It should also be noted that, in the prior art, the size of the focus frame is fixed and is usually a default setting of the mobile phone, whereas in the subject-based autofocus of the embodiments of the present application, the size of the focus frame is automatically adjusted according to the size of the subject identified by the mobile phone. In some examples, the focus frame may be the same as the mark frame 401 shown in (2) in fig. 4.
Optionally, since the method provided by the embodiments of the present application is applied to a telephoto shooting scene, a closest search distance, for example, 10 meters, may be set for autofocus. That is, during autofocus the searched distance range is from 10 meters away to infinity; in other words, the mobile phone does not need to search the range from 10 meters down to macro distance. Therefore, the autofocus time of the mobile phone can be reduced, and the efficiency of image processing is improved.
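A small sketch of the two points above: sizing the focus window from the detected subject box instead of a fixed default, and restricting the autofocus search to the range from the closest search distance (for example, 10 meters) to infinity. Expressing the range in diopters (the reciprocal of distance) is a common convention; the FocusRequest structure, field names, and example numbers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FocusRequest:
    roi: tuple           # (x0, y0, x1, y1) focus window in pixels
    near_diopter: float  # 1 / nearest search distance in meters
    far_diopter: float   # 1 / farthest search distance (0.0 = infinity)

def focus_request_for_subject(box, min_distance_m=10.0):
    """Build a focus request whose window matches the subject box and whose
    search range is restricted to [min_distance_m, infinity)."""
    return FocusRequest(
        roi=tuple(box),
        near_diopter=1.0 / min_distance_m,  # e.g. 0.1 for 10 meters
        far_diopter=0.0,                    # infinity
    )

req = focus_request_for_subject((860, 420, 1240, 760))
```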
In still other examples, before performing detail enhancement on the photographic subject, the mobile phone may capture multiple (for example, 6) images containing the subject based on the parameters determined during autofocus and auto exposure. A high-definition image is then synthesized from these images using a super-resolution technique, and detail enhancement of the subject is performed on the synthesized high-definition image. In this way, the overall sharpness of the image is further improved.
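The multi-frame step might be sketched as follows: several consecutive frames containing the subject are registered to the first frame and fused into one cleaner image. Plain ECC alignment plus averaging is used here only as a stand-in for the super-resolution technique mentioned above; the frame count, motion model, and iteration settings are assumptions.

```python
import cv2
import numpy as np

def merge_frames(frames):
    """Align frames[1:] to frames[0] with an affine ECC model and average them.
    A stand-in for multi-frame fusion: noise drops as the frames are merged."""
    ref = frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = ref.astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)
        # Estimate the motion between the reference frame and this frame.
        _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                       cv2.MOTION_AFFINE, criteria)
        aligned = cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned.astype(np.float32)
    return (acc / len(frames)).astype(np.uint8)
```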
In still other examples, a background extraction algorithm may further be used to identify the background of the photographic subject based on the identified subject, and the background may then be blurred. In this way, the photographic subject is more prominent, and the overall artistic appeal of the image is improved. The background extraction algorithm includes, but is not limited to, the inter-frame difference method, the background difference method, and background modeling (environment) algorithms such as ViBe and ViBe+, which is not limited in the embodiments of the present application. To blur the background, the image of the area where the background is located may be processed using a blur algorithm to obtain a background image with a blurring effect. The blur algorithm includes any one or more of Gaussian filtering, circular filtering, guided filtering, and domain filtering.
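A minimal sketch of the background-blurring step: given a binary mask of the identified subject (obtained, for example, from the segmentation described earlier), the rest of the frame is blurred and composited back. Gaussian filtering is used here; the kernel sizes and the mask source are assumptions, and any of the other listed filters or background extraction algorithms could be substituted.

```python
import cv2
import numpy as np

def blur_background(image: np.ndarray, subject_mask: np.ndarray,
                    kernel_size: int = 31) -> np.ndarray:
    """Keep the subject sharp and apply a Gaussian blur to everything else."""
    blurred = cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)
    # Feather the mask edge slightly so the transition looks natural.
    soft = cv2.GaussianBlur(subject_mask.astype(np.float32), (15, 15), 0)
    soft = soft[..., None]  # HxWx1 so it broadcasts over the color channels
    out = soft * image.astype(np.float32) + (1.0 - soft) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```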
In other embodiments of the present application, the mobile phone may further adjust the position of the identified photographic subject in the image. In a specific implementation, the background around the subject may be cropped or padded based on the identified position of the subject in the image, so that the subject is located at the center of the image. Alternatively, the subject may be placed at another position in the image, for example, a certain distance to the left or right of the center, which is not limited in the embodiments of the present application.
For example, as in the shooting interface shown in (4) in fig. 4, the subject in the image displayed in the finder frame 302 is located in the center area of the finder frame. Therefore, the shooting object can be highlighted, and the visual effect of the image is further improved.
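The repositioning described above can be sketched as translating the frame so that the subject box becomes centered, filling the uncovered border with black; in a full pipeline the exposed edge would instead be filled from the background or the crop tightened, which is omitted here. The function name and border behavior are assumptions.

```python
import numpy as np

def center_subject(image: np.ndarray, box) -> np.ndarray:
    """Translate the frame so the subject box is centered, filling the
    uncovered border with black. box = (x0, y0, x1, y1)."""
    h, w = image.shape[:2]
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    dx, dy = int(w / 2 - cx), int(h / 2 - cy)
    out = np.zeros_like(image)
    # Source and destination windows after the shift, clipped to the frame.
    src_x0, src_x1 = max(0, -dx), min(w, w - dx)
    src_y0, src_y1 = max(0, -dy), min(h, h - dy)
    dst_x0, dst_y0 = max(0, dx), max(0, dy)
    out[dst_y0:dst_y0 + (src_y1 - src_y0),
        dst_x0:dst_x0 + (src_x1 - src_x0)] = image[src_y0:src_y1, src_x0:src_x1]
    return out
```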
In still other embodiments of the present application, during preview the user may also switch between the image processing effect corresponding to the category of the photographic subject and no processing (i.e., no effect). For example, as shown in (1) in fig. 5, the mobile phone may display options 502. If the user selects the bird effect option in the options 502, the image displayed in the viewfinder 302 is an image processed in the bird effect mode. If the user instead selects no effect in the options 502, as in the interface shown in (2) in fig. 5, the image displayed in the viewfinder 302 is an image that has not been processed in the bird effect mode. By contrast, in the interface shown in (1) in fig. 5, the image (i.e., the subject) inside the mark frame 501 in the viewfinder 302 has a higher sharpness than the image inside the mark frame 501 in (2) in fig. 5. In another example, in the interface shown in (1) in fig. 5, the sharpness of the image (i.e., the background) outside the mark frame 501 in the viewfinder 302 is lower than that of the image outside the mark frame 501 in (2) in fig. 5. In yet another example, the position of the mark frame 501 in the viewfinder 302 in the interface shown in (1) in fig. 5 also differs from its position in the interface shown in (2) in fig. 5. Therefore, the user can switch between the effects before and after image processing through the options 502, compare them, and select the preferred one for shooting, which improves the user experience.
In still other embodiments of the present application, after the user selects to use the image processing corresponding to the shooting object category, the user may further select whether to adjust the position of the shooting object. For example, as in the interface shown in (3) in fig. 5, an option control 503 may also be displayed. When the user does not select the option control 503, the mobile phone retains the original position of the shooting object. When the user selects the option control 503, the mobile phone displays an interface as shown in (4) in fig. 5, and the mark box 501 is located in the central area of the viewfinder 302. That is, the mobile phone adjusts the position of the photographic subject to the image center area.
In still other embodiments of the present application, the user may also manually turn on the image processing function based on the shooting object category provided in the embodiments of the present application. For example, as the interface shown in (1) in fig. 6, in response to detecting that the user operates the setting control 601, the cellular phone displays a shooting setting interface 602 shown in (2) in fig. 6. A control 603 for starting an image processing function based on a photographic subject category is displayed in the photographic setting interface 602. That is to say, after the user starts the function, the method provided by the embodiment of the present application is automatically adopted when the mobile phone is in the telephoto shooting scene, so as to process the acquired image. Of course, the user can also manually turn off the image processing function based on the shooting object category in the telephoto scene through the control 603.
Optionally, as shown in (3) in fig. 6, several common category controls 605 may be displayed in the shooting setting interface 604. The user can turn on the image processing function of the corresponding category through a category control 605. That is to say, after the user turns on the image processing function corresponding to a certain category, the mobile phone automatically processes the acquired image in the corresponding image processing mode when it is in the telephoto shooting scene. For example, if the user turns on the bird effect, the mobile phone processes the acquired image in the image processing mode corresponding to birds.
Alternatively, in the setting interface 606 shown in (4) in fig. 6, options for a plurality of categories are displayed. The user can select the corresponding category according to the actual scene, so that the mobile phone processes the acquired image in the corresponding image processing mode.
In this way, after the user selects the processing mode of a specific category, the mobile phone can directly use the corresponding image processing mode without automatically identifying the category of the photographic subject, so that image processing can be accelerated. In addition, if the mobile phone identifies the category of the photographic subject incorrectly, the user can manually select the image processing mode corresponding to the correct category.
3. Taking pictures or videos
The user can decide the timing of shooting based on the preview in the finder frame. After the shooting operation executed by the user is detected, the mobile phone carries out corresponding image processing based on the image acquired at the current moment to obtain a processed image. The image processing method may refer to image processing during preview, and is not described herein again.
The mobile phone can save the image processed by the image processing based on the category of the photographic subject in the album. In some examples, the mobile phone may also save images that have not been subjected to image processing based on the photographic subject category in the album at the same time.
When the mobile phone is in another mode such as continuous shooting, slow-motion shooting, or video recording, the method provided by the present application can also be used to process the acquired images. For example, during continuous shooting, the camera acquires multiple images, and the mobile phone can process each image based on the photographic subject to obtain multiple processed images. As another example, when the mobile phone shoots slow motion or records a video, the slow-motion clip or video is composed of frames of images; each frame can be processed based on the photographic subject, and the processed frames are then composed into a new slow-motion clip or video.
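For continuous shooting, slow motion, or video recording, the same subject-based processing can be applied frame by frame and the results re-encoded, as in the following hedged sketch; process_frame stands in for the processing described above, and the codec and file paths are placeholders.

```python
import cv2

def process_video(in_path: str, out_path: str, process_frame):
    """Read a recorded video, apply subject-based processing to each frame,
    and write the processed frames into a new video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(process_frame(frame))
    cap.release()
    writer.release()
```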
When an image stored in the mobile phone, or an image received from another device, is an image captured in a telephoto shooting scene, the method provided by the present application can also be used to process the image based on the photographic subject.
Embodiments of the present application further provide a chip system, as shown in fig. 7, the chip system includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 and the interface circuit 1102 may be interconnected by wires. For example, the interface circuit 1102 may be used to receive signals from other devices, such as a memory of the mobile terminal 100. As another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101). Illustratively, the interface circuit 1102 may read instructions stored in the memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause the mobile terminal to perform the various steps performed by the mobile terminal 100 (e.g., a handset) in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The embodiment of the present application further provides an apparatus, where the apparatus is included in a mobile terminal, and the apparatus has a function of implementing a behavior of the mobile terminal in any one of the above-mentioned embodiments. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the above functions. Such as detection modules or units, and determination modules or units, etc.
Embodiments of the present application further provide a computer-readable storage medium, which includes computer instructions, and when the computer instructions are executed on a mobile terminal, the mobile terminal is caused to execute any one of the methods in the foregoing embodiments.
The embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to execute any one of the methods in the above embodiments.
It is to be understood that the above-mentioned terminal and the like include hardware structures and/or software modules corresponding to the respective functions for realizing the above-mentioned functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
In the embodiment of the present application, the terminal and the like may be divided into functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A shooting method in a long-focus scene, applied to a mobile terminal comprising a camera, the method comprising:
the mobile terminal starts the camera, displays a view-finding frame, the view-finding frame is used for displaying a first preview picture, and the zoom magnification of the camera corresponding to the first preview picture is a first magnification;
receiving a first operation of increasing the camera zoom magnification input by a user;
responding to the first operation, the view frame displays a second preview picture, the zooming magnification of the camera corresponding to the second preview picture is a second magnification, and the second magnification is larger than the first magnification;
if the second magnification is larger than or equal to a preset magnification, before the mobile terminal displays the second preview picture, the mobile terminal performs first image processing on an image acquired by the camera to generate the second preview picture;
wherein the first image processing includes:
identifying a target shooting object and a background of the target shooting object according to the image acquired by the camera;
improving the image definition of the area where the target shooting object is located, and adding a blurring effect to the image of the area where the background of the target shooting object is located.
2. The method of claim 1, further comprising:
the mobile terminal displays a mark frame on the second preview picture, wherein the mark frame is used for marking the area where the target shooting object is located.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the mobile terminal displays prompt information on the second preview picture, wherein the prompt information is used for recommending the first image processing corresponding to the target shooting object.
4. The method of claim 3, wherein the first image processing performed on the image captured by the camera by the mobile terminal comprises:
the mobile terminal receives a second operation input by a user and used for selecting the first image processing;
in response to the second operation, the mobile terminal performs the first image processing on the image acquired by the camera.
5. The method of any of claims 1-2, or 4, wherein the first image processing further comprises:
adjusting the second preview picture so that the target shooting object is positioned in the central area of the viewfinder frame.
6. The method of any of claims 1-2 or 4, wherein after the mobile terminal displays the second preview screen, the method further comprises:
receiving a third operation input by a user, wherein the third operation is used for indicating the mobile terminal to close the first image processing function corresponding to the target shooting object;
in response to the third operation, the mobile terminal determines not to perform the first image processing on the image captured by the camera.
7. The method according to any one of claims 1 to 2 or 4, wherein the improving of the image sharpness of the region where the target photographic object is located in the first image processing is specifically:
identifying the category of the target shooting object, and segmenting a first image of the target shooting object from the image acquired by the camera;
and inputting the first image into a neural network model corresponding to the category of the target photographic object, and outputting a second image of the target photographic object, wherein the definition of the second image is greater than that of the first image.
8. The method according to claim 7, wherein the neural network model corresponding to the category of the target photographic subject is trained according to images of a plurality of photographic subjects under the category of the target photographic subject.
9. The method according to any one of claims 1-2, 4 or 8, wherein the first image processing identifies a background of the target photographic subject, specifically:
using a background extraction algorithm to identify the background of the target shooting object from the image collected by the camera, wherein the background extraction algorithm comprises any one or more of: an inter-frame difference method, a background difference method, and an environment (ViBe) algorithm.
10. The method of any of claims 1-2, 4, or 8, wherein the first image processing further comprises:
focusing is automatically performed based on the target photographic object when the camera acquires an image containing the target photographic object.
11. The method of any of claims 1-2, 4, or 8, wherein the first image processing further comprises:
controlling the camera to continuously acquire N images containing the target shooting object, and synthesizing the N images containing the target shooting object into a third image by adopting a super-resolution technology;
improving the image definition of the area where the target shooting object is located based on the third image, and adding a blurring effect to the image of the area where the background of the target shooting object is located.
12. The method according to any one of claims 1 to 2, 4 or 8, wherein blurring effects are added to the image of the area where the background of the target photographic subject is located in the first image processing, specifically:
processing the image of the area where the background of the target shooting object is located by using a blur algorithm, wherein the blur algorithm comprises any one or more of Gaussian filtering, circular filtering, guided filtering, and domain filtering.
13. A mobile terminal, comprising: a processor, a memory and a touch screen, the memory and the touch screen being coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions, the processor reading the computer instructions from the memory to cause the mobile terminal to perform the photographing method in a tele scene according to any of claims 1-12.
14. A computer-readable storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform a method of capturing in a tele scene according to any of claims 1-12.
15. A chip system, comprising one or more processors which, when executing instructions, perform a method for capturing in a tele scene as claimed in any of claims 1-12.
CN202010038444.XA 2020-01-14 2020-01-14 Shooting method in long-focus scene and mobile terminal Active CN113194242B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010038444.XA CN113194242B (en) 2020-01-14 2020-01-14 Shooting method in long-focus scene and mobile terminal
US17/779,876 US20220417416A1 (en) 2020-01-14 2020-10-28 Photographing method in telephoto scenario and mobile terminal
EP20914044.1A EP4020967B1 (en) 2020-01-14 2020-10-28 Photographic method in long focal length scenario, and mobile terminal
PCT/CN2020/124545 WO2021143269A1 (en) 2020-01-14 2020-10-28 Photographic method in long focal length scenario, and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010038444.XA CN113194242B (en) 2020-01-14 2020-01-14 Shooting method in long-focus scene and mobile terminal

Publications (2)

Publication Number Publication Date
CN113194242A CN113194242A (en) 2021-07-30
CN113194242B true CN113194242B (en) 2022-09-20

Family

ID=76863529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038444.XA Active CN113194242B (en) 2020-01-14 2020-01-14 Shooting method in long-focus scene and mobile terminal

Country Status (4)

Country Link
US (1) US20220417416A1 (en)
EP (1) EP4020967B1 (en)
CN (1) CN113194242B (en)
WO (1) WO2021143269A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11212449B1 (en) * 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
CN113645408B (en) * 2021-08-12 2023-04-14 荣耀终端有限公司 Photographing method, photographing apparatus, and storage medium
CN115914823A (en) * 2021-08-12 2023-04-04 荣耀终端有限公司 Shooting method and electronic equipment
CN115529413A (en) * 2022-08-26 2022-12-27 华为技术有限公司 Shooting method and related device
CN117336597A (en) * 2023-01-04 2024-01-02 荣耀终端有限公司 Video shooting method and related equipment
CN116582743A (en) * 2023-07-10 2023-08-11 荣耀终端有限公司 Shooting method, electronic equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3101889A2 (en) * 2015-06-02 2016-12-07 LG Electronics Inc. Mobile terminal and controlling method thereof
CN107948516A (en) * 2017-11-30 2018-04-20 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN108307106A (en) * 2017-12-29 2018-07-20 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN109951633A (en) * 2019-02-18 2019-06-28 华为技术有限公司 A kind of method and electronic equipment shooting the moon
CN110445978A (en) * 2019-06-24 2019-11-12 华为技术有限公司 A kind of image pickup method and equipment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009193193A (en) * 2008-02-13 2009-08-27 Seiko Epson Corp Image reproduction device, and control method and control program for image reproduction device
JP5036599B2 (en) * 2008-03-05 2012-09-26 株式会社リコー Imaging device
US8134597B2 (en) * 2008-12-05 2012-03-13 Sony Ericsson Mobile Communications Ab Camera system with touch focus and method
KR101663227B1 (en) * 2010-09-13 2016-10-06 삼성전자주식회사 Method and apparatus for processing image
CN103871051B (en) * 2014-02-19 2017-01-18 小米科技有限责任公司 Image processing method, device and electronic equipment
KR102560780B1 (en) * 2016-10-05 2023-07-28 삼성전자주식회사 Image processing system including plurality of image sensors and electronic device including thereof
CN108024056B (en) * 2017-11-30 2019-10-29 Oppo广东移动通信有限公司 Imaging method and device based on dual camera
CN108833768A (en) * 2018-05-10 2018-11-16 信利光电股份有限公司 A kind of image pickup method of multi-cam, camera terminal and readable storage medium storing program for executing
CN110248081A (en) * 2018-10-12 2019-09-17 华为技术有限公司 Image capture method and electronic equipment
CN110572581B (en) * 2019-10-14 2021-04-30 Oppo广东移动通信有限公司 Zoom blurring image acquisition method and device based on terminal equipment
CN115914826A (en) * 2020-05-30 2023-04-04 华为技术有限公司 Image content removing method and related device

Also Published As

Publication number Publication date
EP4020967A1 (en) 2022-06-29
CN113194242A (en) 2021-07-30
EP4020967A4 (en) 2022-11-16
US20220417416A1 (en) 2022-12-29
EP4020967B1 (en) 2024-04-10
WO2021143269A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
CN109951633B (en) Method for shooting moon and electronic equipment
CN113132620B (en) Image shooting method and related device
CN112532857B (en) Shooting method and equipment for delayed photography
CN113194242B (en) Shooting method in long-focus scene and mobile terminal
CN113556461B (en) Image processing method, electronic equipment and computer readable storage medium
CN110506416B (en) Method for switching camera by terminal and terminal
WO2020073959A1 (en) Image capturing method, and electronic device
CN113489894B (en) Shooting method and terminal in long-focus scene
CN114915726A (en) Shooting method and electronic equipment
CN112887583B (en) Shooting method and electronic equipment
CN113747048B (en) Image content removing method and related device
CN113497881B (en) Image processing method and device
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN113891009B (en) Exposure adjusting method and related equipment
CN113709354A (en) Shooting method and electronic equipment
CN112580400A (en) Image optimization method and electronic equipment
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN114466101B (en) Display method and electronic equipment
CN116709018B (en) Zoom bar segmentation method and electronic equipment
CN115802144B (en) Video shooting method and related equipment
CN115268742A (en) Method for generating cover and electronic equipment
CN113452895A (en) Shooting method and equipment
CN115460343A (en) Image processing method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant