CN115086558B - Focusing method, image pickup apparatus, terminal apparatus, and storage medium - Google Patents


Info

Publication number
CN115086558B
Authority
CN
China
Prior art keywords
image
focusing
pixel point
evaluation value
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210669584.6A
Other languages
Chinese (zh)
Other versions
CN115086558A (en)
Inventor
权威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210669584.6A
Publication of CN115086558A
Application granted
Publication of CN115086558B
Legal status: Active

Abstract

The application is applicable to the technical field of image capturing and provides a focusing method. By acquiring images of a picture to be processed at different image distances, the method generates, when the picture is captured, an image corresponding to each image distance, obtaining rich light field information about the picture to be processed. By determining the evaluation value of the pixel point at each pixel position in every image, the unit of analysis is reduced to the individual pixel point, and the appearance of each pixel position across the different images is quantified, improving the depth-analysis capability over images at different image distances. A focusing image is then output according to the target image corresponding to the target pixel point in the focusing area: obtained at the image distance of the target image, the focusing image maximizes the sharpness of the target pixel point, while the other pixel points blur naturally when imaged at that distance. Blurring distortion caused by insufficient light information is thereby avoided, and the blurring effect is improved together with imaging sharpness.

Description

Focusing method, image pickup apparatus, terminal apparatus, and storage medium
Technical Field
The present application relates to the field of image capturing technologies, and in particular, to a focusing method, an image capturing apparatus, a terminal apparatus, and a storage medium.
Background
With the rapid development of digital imaging and multimedia technology, users' requirements on the shooting capability of electronic devices such as mobile phones, tablet computers and wearable devices keep rising. Constrained by their physical size, such devices can only use cameras and light sensors of limited size. Compared with imaging equipment of strong light-sensing capability, such as half-frame or full-frame cameras, they therefore capture limited light information when shooting and are prone to distortion when generating a blurring effect, so the imaging effect is unnatural.
Disclosure of Invention
In view of the above, embodiments of the present application provide a focusing method, an image capturing apparatus, a terminal device, and a storage medium, to solve the problem that existing electronic devices, limited in size and hence in the size of camera and light sensor they can use, capture limited light information when shooting and easily distort when generating a blurring effect, resulting in an unnatural imaging effect.
A first aspect of an embodiment of the present application provides a focusing method, including:
Acquiring images of a picture to be processed under different image distances;
determining an evaluation value of a pixel point at the same pixel position in each image;
outputting a focusing image according to a target image corresponding to a target pixel point in the focusing area;
the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation value of other pixel points in the focusing area, and the evaluation value of the target pixel point in the target image is maximum; and the image distance of the focusing image is the same as the image distance of the target image.
According to the focusing method, by acquiring images of the picture to be processed at different image distances, an image corresponding to each image distance is generated when the picture is captured, obtaining rich light field information about the picture to be processed; by determining the evaluation value of the pixel point at each pixel position in every image, the unit of analysis is reduced to the individual pixel point, and the appearance of each pixel position across the different images is quantified, improving the depth-analysis capability over images at different image distances; and by outputting a focusing image according to the target image corresponding to the target pixel point in the focusing area, the sharpness of the target pixel point is maximized at the image distance of the target image while the other pixel points blur naturally at that distance, avoiding blurring distortion caused by insufficient light information and improving the blurring effect together with imaging sharpness.
A second aspect of the embodiments of the present application provides an image pickup apparatus including a processor, a memory, a computer program stored in the memory and executable on the processor, and a light field camera, the processor implementing the steps of the focusing method provided by the first aspect of the embodiments of the present application when the computer program is executed;
the light field camera is used for capturing the picture to be processed and recording, at capture time, the propagation path of light rays arriving from any direction.
A third aspect of the embodiments of the present application provides a terminal device, including the image capturing apparatus provided in the second aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the focusing method provided by the first aspect of the embodiments of the present application.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a focusing method according to an embodiment of the present application;
fig. 2 is a schematic view of a scene of an object distance and an image distance when an image capturing apparatus provided by an embodiment of the present application captures an image;
fig. 3 is a schematic view of a scene in which an imaging device provided by an embodiment of the present application collects different light rays of an imaging plane through a micro lens array;
FIG. 4 is a first schematic diagram of a frame to be processed according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a maximum evaluation value heat map corresponding to a to-be-processed picture according to an embodiment of the present application;
fig. 6 is a schematic diagram of a second flow of a focusing method according to an embodiment of the present application;
fig. 7 is a first schematic diagram of object identification on a to-be-processed picture according to an embodiment of the present application;
fig. 8 is a second schematic diagram of object recognition on a to-be-processed picture according to an embodiment of the present application;
FIG. 9 is a second schematic diagram of a frame to be processed according to an embodiment of the present application;
fig. 10 is a schematic diagram of performing saliency recognition on a picture to be processed according to an embodiment of the present application;
fig. 11 is a third flowchart of a focusing method according to an embodiment of the present application;
fig. 12 is a schematic diagram showing a recommended focusing area 210 displayed on a terminal device 200 according to an embodiment of the present application;
Fig. 13 is a fourth flowchart of a focusing method according to an embodiment of the present application;
fig. 14 is a fifth flowchart of a focusing method according to an embodiment of the present application;
fig. 15 is a schematic structural view of an image pickup apparatus provided by an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In application, electronic devices are limited by their physical size and can only use cameras and light sensors of limited size; compared with imaging equipment of strong light-sensing capability, such as half-frame or full-frame cameras, they capture limited light information when imaging and easily distort when generating a blurring effect, so the imaging effect is unnatural.
Aiming at this technical problem, an embodiment of the present application provides a focusing method. By acquiring images of the picture to be processed at different image distances, an image corresponding to each image distance is generated when the picture is captured, yielding rich light field information about the picture to be processed. By determining the evaluation value of the pixel point at each pixel position in every image, the unit of analysis is reduced to the individual pixel point, and the appearance of each pixel position across the different images is quantified, improving the depth-analysis capability over images at different image distances. A focusing image is then output according to the target image corresponding to the target pixel point in the focusing area: obtained at the image distance of the target image, it maximizes the sharpness of the target pixel point while the other pixel points blur naturally at that distance, avoiding blurring distortion caused by insufficient light information and improving the blurring effect together with imaging sharpness.
The focusing method provided by the embodiments of the present application can be applied to any image capturing apparatus, or to a terminal device carrying such an apparatus. The image capturing apparatus may be a camera or a video camera; the camera may be a digital single-lens reflex camera (DSLR), a mirrorless camera, or the like. The terminal device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like. The embodiments of the present application place no limitation on the specific types of the image capturing apparatus and the terminal device.
As shown in fig. 1, the focusing method provided by the embodiment of the application includes the following steps S101 to S103:
step S101, obtaining images of a picture to be processed under different image distances.
In application, the picture to be processed may be captured by an image capturing apparatus. When the apparatus captures the picture, light from the picture is collected through a main lens and converged onto an image sensor, producing the image of the picture to be processed mapped onto the sensor. The distance between the in-focus object (or focusing plane) in the picture and the main lens is the object distance (Object Distance); the distance between the image of the picture and the main lens is the image distance (Image Distance). Generally speaking, object distance and image distance are conjugate (Conjugate): when the apparatus captures a picture, the farther the object distance, the shorter the image distance, and the closer the object distance, the longer the image distance. The specific content and frame size of the picture to be processed are determined by the picture actually captured by the image capturing apparatus.
In application, when the light of the picture to be processed is collected through the main lens, rays with different incidence angles are obtained, and thus rays converging onto different imaging planes, each plane having its own image distance. The distance between the main lens and the image sensor may be defined as the image distance of the actual imaging plane; an image distance not equal to this distance belongs to a virtual imaging plane. Processing according to the image distance of the actual imaging plane yields the actual image of the picture to be processed at that image distance, and processing according to the image distance of a virtual imaging plane yields a virtual image of the picture at that image distance.
Specifically, collecting light that converges onto different imaging planes could be realized by changing the distance between the image sensor and the main lens; it can also be realized through a micro lens array (Micro Lens Array) comprising a plurality of micro lens units. Light collected by the main lens is transmitted to the micro lens array, and each micro lens unit maps the light it receives onto the image sensor to form a micro-lens sub-image. The rays converging onto the actual imaging plane and the virtual imaging planes are thus all mapped onto the image sensor as a set of micro-lens sub-images, from which the actual image and the virtual images, that is, the images of the picture to be processed at different image distances, can be obtained by image processing.
In one embodiment, step S101 includes:
refocusing the picture to be processed under different image distances to obtain images of the picture to be processed under different image distances.
In application, after the rays converging onto the actual imaging plane and the virtual imaging planes have been mapped onto the image sensor, the picture to be processed is refocused (Refocus) according to the image distance of each virtual imaging plane and the actual image of the picture, that is, a refocusing algorithm is applied to the actual image together with the image distance of the virtual imaging plane, obtaining virtual images of the picture to be processed at the different image distances. The actual image of the picture to be processed denotes the image that is focused on the in-focus object in the picture and mapped onto the image sensor (the actual imaging plane). The specific working principle of the refocusing algorithm is described below.
Fig. 2 exemplarily shows the relationship between object distance and image distance when the image capturing apparatus captures a picture, where 10 is the picture area of the picture to be processed, 20 is the main lens, 30 is the image sensor, F is the object distance, and f is the image distance.
Fig. 3 schematically shows the image capturing apparatus collecting light of different imaging planes through the micro lens array, where 40 is the micro lens array, F1 is the 1st object distance, f1 is the 1st image distance corresponding to the 1st object distance, F2 is the 2nd object distance, and f2 is the 2nd image distance corresponding to the 2nd object distance.
In one embodiment, step S101 includes:
acquiring the image distance of the ith virtual imaging surface according to the ith refocusing parameter and the image distance of the actual imaging surface;
acquiring an ith image of a picture to be processed under the image distance of an ith virtual imaging surface according to the light information of the actual imaging surface, the light information of the ith virtual imaging surface, the ith refocusing parameter and the image distance of the actual imaging surface;
wherein i=1, 2, …, n, n is an integer greater than or equal to 1.
In application, a plurality of refocusing parameters may be preset; from each refocusing parameter and the image distance of the actual imaging plane, the image distance of the corresponding virtual imaging plane is obtained. It should be noted that the specific values of the refocusing parameters may be determined according to the virtual image distances supported by the image capturing apparatus. The image distance of a virtual imaging plane may be calculated as:
ImageDistance_i = ImageDistance * α_i
where ImageDistance_i denotes the image distance of the i-th virtual imaging plane, ImageDistance denotes the image distance of the actual imaging plane, and α_i denotes the i-th refocusing parameter.
For example, assuming the image distance of the actual imaging plane of the image capturing apparatus is 50 mm and the supported virtual image distances are 20 mm, 30 mm, 40 mm, 60 mm and 70 mm, the refocusing parameters may be 2/5, 3/5, 4/5, 6/5 and 7/5, respectively. Alternatively, if the supported virtual image distances range over [20 mm, 50 mm) and (50 mm, 70 mm], the refocusing parameters range over [2/5, 1) and (1, 7/5]. The embodiments of the present application place no limitation on the specific values or ranges of the refocusing parameters.
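A minimal Python sketch of this mapping (the function name is ours, not the patent's) reproduces the 50 mm example:

```python
# Sketch of ImageDistance_i = ImageDistance * alpha_i described above.
def virtual_image_distances(actual_image_distance_mm, refocus_params):
    """Image distance of each virtual imaging plane, one per refocusing parameter."""
    return [actual_image_distance_mm * alpha for alpha in refocus_params]

# The 50 mm example from the text: parameters 2/5, 3/5, 4/5, 6/5, 7/5
# yield virtual image distances of 20, 30, 40, 60 and 70 mm.
print(virtual_image_distances(50, [2/5, 3/5, 4/5, 6/5, 7/5]))
```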
In application, the light information of the actual imaging plane, the light information of the i-th virtual imaging plane, the i-th refocusing parameter and the image distance of the actual imaging plane are fed into the refocusing algorithm to obtain the i-th image of the picture to be processed at the image distance of the i-th virtual imaging plane. The light information of the actual imaging plane may include the numbers of rows and columns of its meta pixels, and the light information of the virtual imaging plane may include the numbers of rows and columns of its macro pixels. The row/column count of the meta pixels gives the rows/columns of photosensitive units (arranged on the image sensor) covered by the corresponding micro lens unit when virtual imaging is performed; the row/column count of the macro pixels corresponds to the rows/columns of the micro lens array. From the macro-pixel rows/columns, the specific position of the micro lens unit whose light contributes to the i-th image at the i-th virtual image distance can be determined; from the meta-pixel rows/columns, the specific brightness of that light can be determined.
In application, the refocusing algorithm may specifically take the form:
E_i(s, t) = (1 / (α_i² · D²)) ∬ L_D(u, v, u + (s − u)/α_i, v + (t − v)/α_i) du dv
where E_i denotes the i-th image of the picture to be processed at the image distance of the i-th virtual imaging plane, u and v index the rows and columns of the meta pixels of the actual imaging plane, s and t index the rows and columns of the macro pixels of the virtual imaging plane, D denotes the image distance of the actual imaging plane, and L_D denotes the light field recorded on it.
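For intuition, this integral can be approximated by a shift-and-add over sub-aperture images. The following Python sketch is a simplified illustration of that idea, assuming a 4-D light field laid out as light_field[u, v, s, t]; it folds the 1/α spatial rescaling into a pure translation and is not the patent's implementation:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    """Simplified shift-and-add refocusing of a 4-D light field (U, V, S, T)."""
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture image is translated in proportion to its
            # offset from the aperture centre, then accumulated.
            du = (u - U / 2) * (1 - 1 / alpha)
            dv = (v - V / 2) * (1 - 1 / alpha)
            out += shift(light_field[u, v], (du, dv), order=1)
    return out / (U * V)
```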
Step S102, determining the evaluation value of the pixel point at the same pixel position in each image.
In an application, an image of a to-be-processed picture includes a plurality of pixel points, the specific number of the pixel points is determined according to the resolution of the image of the to-be-processed picture (for example, the resolution of the image is 1920×1080, the number of the corresponding pixel points is 2073600), and the pixel positions of the pixel points can be pixel coordinates of the pixel points in the image. The evaluation value of the pixel point in the image can be used for reflecting the pixel quality of the pixel point in the image, and the higher the evaluation value is, the higher the pixel quality of the corresponding pixel point is.
In the application, after the images of the picture to be processed at different image distances have been obtained, the evaluation value of the pixel point at the same pixel position in each image can be determined. Specifically, the evaluation values of all pixel points may be calculated image by image and then collected per pixel position; alternatively, the evaluation values of the pixel points at the same pixel position may be calculated across all images at once. The embodiment of the application does not limit the specific calculation flow of the evaluation value of the pixel point.
In one embodiment, step S102 further includes:
obtaining the maximum evaluation value of each pixel point, and recording the corresponding image in which each pixel point obtains its maximum evaluation value.
In application, after the multiple evaluation values of any pixel point across all the images have been obtained, the largest of them is taken as the maximum evaluation value of that pixel point, and the image in which that maximum is attained is recorded, so as to establish a correspondence table between each pixel point and an image. By table lookup, the image in which each pixel point has the highest pixel quality, and hence the image distance at which its quality is highest, can be determined rapidly. Before the correspondence table is established, the images may be numbered, specifically sorted and numbered by image distance. For example, if the picture to be processed has images corresponding one to one to image distances of 20 mm, 30 mm, 40 mm and 50 mm, the image at 20 mm may be called the 1st image, the image at 30 mm the 2nd image, the image at 40 mm the 3rd image, and the image at 50 mm the 4th image.
In the application, a maximum-evaluation-value heat map can be generated according to the maximum evaluation value and the pixel position of each pixel point, the magnitude of the maximum evaluation value being represented by color at each pixel position, so that the maximum evaluation value of every pixel point can be displayed intuitively. After the heat map is obtained, it may be smoothed with a filter (Filter), which may be a box filter, to smooth the color transitions of the heat map and produce clear peaks.
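A compact way to realize this per-pixel bookkeeping, assuming the evaluation values have been stacked into an array scores[i, y, x] for the i-th image (a sketch with placeholder data, not the patent's code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

scores = np.random.rand(5, 1080, 1920)     # placeholder evaluation values

best_image_index = scores.argmax(axis=0)   # correspondence table: pixel -> image number
max_evaluation = scores.max(axis=0)        # values behind the heat map

# Box-filter smoothing of the maximum-evaluation-value heat map.
smoothed_heatmap = uniform_filter(max_evaluation, size=15)
```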
Fig. 4 and fig. 5 exemplarily show an image of the picture to be processed at one image distance and the corresponding maximum-evaluation-value heat map. It should be noted that the heat map of fig. 5 is merely exemplary and has undergone gray-scale processing; the embodiment of the application does not limit the correspondence between colors and maximum evaluation values.
Step S103, outputting a focusing image according to a target image corresponding to a target pixel point in the focusing area;
the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation value of other pixels in the focusing area, and the evaluation value of the target pixel point in the target image is maximum; the image distance of the focusing image is the same as the image distance of the target image.
In the application, the focusing area can be the complete picture to be processed, or a partial area within the picture. After the focusing area of the picture to be processed is determined, the target pixel point in the focusing area can be obtained; the maximum evaluation value of the target pixel point must be greater than or equal to the maximum evaluation value of the other pixel points in the focusing area. When the focusing area includes a plurality of target pixel points with the same maximum evaluation value, the target image corresponding to any one of them may be selected for single-point focusing to output the focusing image, or the target images corresponding to any number of them may be selected for multi-point focusing to output the focusing image.
In the application, the target image is an image corresponding to the target pixel point when the maximum evaluation value is calculated, after the target pixel point is determined, the target image corresponding to the target pixel point can be determined through a corresponding relation table of each pixel point and the image, and the focusing image can be output according to the target image.
In the application, the image distance of the focusing image is the same as that of the target image, the target pixel point in the focusing image can reach the clearest state, and other pixel points can generate blurring effect according to the image distance of the target image.
In one embodiment, step S103 includes:
focusing the picture to be processed according to the image distance of the target image corresponding to the target pixel point in the focusing area, and outputting a focusing image;
or, acquiring a target image recorded by the target pixel point in the focusing area when the maximum evaluation value is obtained, and outputting the target image as a focusing image.
In application, two output modes for the focusing image are available. In the first mode, the image distance of the target image is read, the picture to be processed is focused according to that image distance, and the focusing image is output. In the second mode, the target image is read from memory and output directly as the focusing image; it should be noted that this mode requires storing in memory, for each pixel point, the image in which it obtains its highest evaluation value. When the second mode is used, if the focusing area includes one target pixel point, the corresponding target image is output directly as the focusing image; if the focusing area includes a plurality of target pixel points, the target images corresponding one to one to those pixel points can be fused by image processing into a multi-point focusing image, which is output as the focusing image.
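The two modes might be dispatched as follows; lookup, images and refocus_at are hypothetical stand-ins for the correspondence table, the cached images and a refocusing routine, none of which are named by the patent:

```python
def output_focusing_image(target_px, mode, lookup, images, refocus_at):
    target = images[lookup[target_px]]    # image recorded at the max evaluation value
    if mode == "refocus":                 # first mode: focus at the target image distance
        return refocus_at(target.image_distance)
    return target                         # second mode: output the cached target image
```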
In application, by acquiring images of the picture to be processed at different image distances, an image corresponding to each image distance is generated when the picture is captured, including the actual image mapped onto the image sensor and a plurality of virtual images at different image distances, so that rich light field (Light Field) information about the picture to be processed is obtained and the light-field acquisition capability of a single shot is improved. By determining the evaluation value of the pixel point at each pixel position in every image, the unit of analysis is reduced to the individual pixel point, and the appearance of each pixel position across the different images is quantified, improving the depth-analysis capability over images at different image distances. A focusing image is output according to the target image corresponding to the target pixel point in the focusing area, where the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation value of the other pixel points in the focusing area: obtained at the image distance of the target image, the focusing image maximizes the sharpness of the target pixel point, while the other pixel points blur naturally when imaged at that distance, avoiding blurring distortion caused by insufficient light information and improving the blurring effect together with imaging sharpness.
As shown in fig. 6, in one embodiment, based on the embodiment corresponding to fig. 1, the following steps S601 to S606 are included:
step S601, obtaining images of a picture to be processed under different image distances.
In application, the focusing method provided in step S601 is consistent with the focusing method provided in step S101, and will not be described herein.
Step S602, performing object recognition on the image, and determining a first evaluation value of each pixel point in the image based on the object recognition result.
In application, the image can be processed by a target object recognition algorithm, and the first evaluation value of each pixel point in the image determined based on the recognition result. Specifically, the algorithm first recognizes each object in the image and generates a mask (Mask) around it, then identifies within each mask the pixel points belonging to the object, and determines the specific position of the object in the image from those pixel positions. In addition, the algorithm may recognize the type of each object, and a correspondence between object types and first evaluation values may be preset: for example, the types may include person, pet, car, mobile phone and computer, with a person scoring 20 points, a pet 10 points, a car 8 points, and a mobile phone or computer 5 points. The embodiment of the application does not limit the specific types of target objects or the correspondence between types and first evaluation values.
In application, the target object recognition algorithm may be built based on one or more different types of network structures such as a convolutional neural network (Convolutional Neural Networks, CNN), a target detection convolutional neural network (Region-Convolutional Neural Networks, R-CNN), a full convolutional network (Fully Convolutional Networks, FCN), a target detection full convolutional network (Region-Fully Convolutional Networks, R-FCN), a feature pyramid network (Feature Pyramid Networks, FPN), and the like, and the specific network structure of the target object recognition algorithm is not limited in any way in the embodiment of the present application.
Fig. 7 and 8 are schematic diagrams schematically illustrating object recognition on an image.
Step S603, performing sharpness recognition on the image, and determining a second evaluation value of each pixel in the image based on the sharpness recognition result.
In application, the image may be processed by a sharpness recognition algorithm, and the second evaluation value of each pixel point in the image determined based on the sharpness recognition result. After the image has been processed by the object recognition algorithm, the specific positions of the objects can be taken from the object recognition result, sharpness recognition performed only at those positions, and a second evaluation value output for each pixel point within each object; alternatively, sharpness recognition may be performed directly on the complete image, outputting the second evaluation values corresponding to all pixel points in the image.
In application, the construction and selection of the network structure of the sharpness recognition algorithm parallels that of the object recognition algorithm and is not described in detail here. Specifically, the sharpness recognition algorithm can be built on a Gaussian pyramid (Gaussian pyramid), and works as follows: first, the image is converted into a gray image and decomposed downward or upward through the Gaussian pyramid, obtaining several gray images of decreasing or increasing resolution; lateral and longitudinal edge detection is performed on each gray image, for example with the Sobel operator, obtaining a lateral gradient and a longitudinal gradient; a gradient pyramid is obtained from the lateral and longitudinal gradients of each gray image; and the gradient pyramid is reconstructed in order of increasing resolution when the Gaussian pyramid was decomposed downward, or of decreasing resolution when it was decomposed upward, until the resolution of the output equals the resolution of the image, giving the sharpness recognition result of the image.
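A rough sketch of such a pyramid-plus-Sobel sharpness measure in OpenCV; the level count and the resize-and-accumulate combination rule are our assumptions, not details given by the patent:

```python
import cv2
import numpy as np

def sharpness_map(image_bgr, levels=3):
    """Per-pixel sharpness from gradient magnitudes across pyramid levels."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = np.zeros_like(gray)
    level = gray
    for _ in range(levels):
        gx = cv2.Sobel(level, cv2.CV_32F, 1, 0)   # lateral gradient
        gy = cv2.Sobel(level, cv2.CV_32F, 0, 1)   # longitudinal gradient
        mag = cv2.magnitude(gx, gy)
        # Bring the gradient magnitude back to the input resolution and
        # accumulate it into the per-pixel sharpness map.
        acc += cv2.resize(mag, (gray.shape[1], gray.shape[0]))
        level = cv2.pyrDown(level)                # next, coarser pyramid level
    return acc / levels
```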
Step S604, performing saliency recognition on the image, and determining a third evaluation value of each pixel point in the image based on the saliency recognition result.
In application, the image may be processed by a saliency recognition algorithm, and the third evaluation value of each pixel point in the image determined based on the saliency recognition result. Specifically, the preset saliency recognition rules can be adjusted so that the algorithm outputs a corresponding third evaluation value when a preset object is recognized, or the objects attracting high attention in the image can be analyzed under the algorithm's default recognition rules.
In application, the construction and selection of the network structure of the saliency recognition algorithm is consistent with the construction and selection of the network structure of the definition recognition algorithm, and will not be described in detail herein.
Fig. 9 and 10 are schematic diagrams illustrating saliency recognition of an image, wherein the higher the brightness of the pixel position in fig. 10, the higher the corresponding saliency.
Step S605, determining an evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image.
In application, after the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image have been obtained, the evaluation value of the pixel point at that position in each image is determined by combining them:
Score_{x,y} = δ_1 · Type_{x,y} + δ_2 · Clarity_{x,y} + δ_3 · Significance_{x,y}
where x and y denote the pixel abscissa and ordinate of the pixel point, Score_{x,y} denotes the evaluation value of the pixel point, Type_{x,y} its first evaluation value, Clarity_{x,y} its second evaluation value, Significance_{x,y} its third evaluation value, and δ_1, δ_2 and δ_3 are the first, second and third coefficients.
The first coefficient, the second coefficient and the third coefficient may be set according to actual evaluation requirements, for example, the first coefficient, the second coefficient and the third coefficient may be 1, 0.8 and 0.4, respectively.
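Written out in code, the weighted combination is direct; the coefficient values below follow the example in the preceding paragraph:

```python
DELTA_1, DELTA_2, DELTA_3 = 1.0, 0.8, 0.4  # first, second and third coefficients

def evaluation_value(type_score, clarity_score, significance_score):
    """Score = d1*Type + d2*Clarity + d3*Significance, per the formula above."""
    return (DELTA_1 * type_score
            + DELTA_2 * clarity_score
            + DELTA_3 * significance_score)
```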
In the application, when the focusing area includes a plurality of target pixel points with equal maximum evaluation values, a target image corresponding to the target pixel point with the maximum first, second or third evaluation value can be selected to perform single-point focusing to output a focusing image.
It should be noted that evaluating the quality of a pixel point according to the type, sharpness and saliency of the target object is merely exemplary; evaluation conditions may be added, removed or modified according to actual requirements, and may further include, for example, the vividness of color or the light intensity.
Step S606, outputting a focusing image according to the target image corresponding to the target pixel point in the focusing area.
In application, the focusing method provided in step S606 is consistent with the focusing method provided in step S103, and will not be described herein.
In application, the quality of each pixel point can be effectively quantified by evaluating the quality of the pixel point according to the type, definition and significance of the target object, so that the depth analysis capability of the picture to be processed is further improved.
As shown in fig. 11, in one embodiment, based on the embodiment corresponding to fig. 6, the following steps S1101 to S1108 are included:
step S1101, obtaining images of a to-be-processed picture under different image distances;
step S1102, performing object recognition on the image, and determining a first evaluation value of each pixel point in the image based on the object recognition result;
step S1103, performing definition recognition on the image, and determining a second evaluation value of each pixel point in the image based on the definition recognition result;
step S1104, performing saliency recognition on the image, and determining a third evaluation value of each pixel point in the image based on a saliency recognition result;
step S1105, determining an evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image.
In application, the focusing method provided in step S1101 to step S1105 is identical to the focusing method provided in step S601 to step S605, and will not be described here.
Step S1106, when the automatic focusing mode is in, taking the complete picture to be processed as a focusing area;
step S1107, when in the manual focusing mode, selecting at least one pixel point according to the manual focusing instruction as a focusing area.
In the application, before outputting the focusing image according to the target image corresponding to the target pixel point in the focusing area, the focusing mode can be judged to determine the selected range of the focusing area. The focusing mode can comprise an automatic focusing mode and a manual focusing mode, when the automatic focusing mode is adopted, a complete picture to be processed can be used as a focusing area, and when the manual focusing mode is adopted, at least one pixel point selected according to a manual focusing instruction can be used as the focusing area.
In one embodiment, step S1107 includes:
when the image processing device is in a manual focusing mode, acquiring a preset number of recommended pixel points according to the maximum evaluation value of each pixel point, wherein the maximum evaluation value of the recommended pixel points is larger than or equal to the maximum evaluation value of other pixel points in the image to be processed;
Generating and displaying a recommended focusing area according to the pixel position of the recommended pixel point;
and receiving a manual focusing instruction, and taking at least one pixel point selected according to the manual focusing instruction as a focusing area.
In application, when in the manual focusing mode, a preset number of recommended pixel points with the largest maximum evaluation values can be obtained; the preset number may be set according to actual needs. A recommended focusing area is generated and displayed according to the pixel positions of the recommended pixel points, offering the user an area with a better focusing effect to select. When the manual focusing instruction is received, at least one pixel point selected by the instruction is taken as the focusing area, which may lie inside or outside the recommended focusing area, improving the flexibility of manual focusing.
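One plausible construction of the recommended focusing area from the recommended pixel points, assuming the maximum-evaluation map from step S102; the preset number k and the bounding-box shape are illustrative assumptions:

```python
import numpy as np

def recommended_region(max_evaluation, k=50):
    """Bounding box around the k pixel points with the largest maximum evaluation values."""
    flat = np.argpartition(max_evaluation.ravel(), -k)[-k:]
    ys, xs = np.unravel_index(flat, max_evaluation.shape)
    # Displayed to the user as the recommended focusing area.
    return xs.min(), ys.min(), xs.max(), ys.max()
```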
Fig. 12 exemplarily shows a schematic diagram of displaying the recommended focusing area 210 on the terminal device 200.
Step S1108, outputting a focusing image according to the target image corresponding to the target pixel point in the focusing area.
In application, the focusing method provided in step S1108 is consistent with the focusing method provided in step S606, and will not be described herein.
In the application, the focusing area can be flexibly adjusted according to the selected focusing mode, a complete picture to be processed can be used as the focusing area in the automatic focusing mode, a recommended focusing area with good focusing effect can be provided in the manual focusing mode, the use experience of manual focusing is improved, and the focusing flexibility is improved.
As shown in fig. 13, in one embodiment, based on the embodiment corresponding to fig. 11, the method includes the following steps S1301 to S1310:
step S1301, obtaining images of a to-be-processed picture under different image distances;
step S1302, performing object recognition on the image, and determining a first evaluation value of each pixel point in the image based on the object recognition result;
step S1303, performing definition recognition on the image, and determining a second evaluation value of each pixel point in the image based on the definition recognition result;
step S1304, performing saliency recognition on the image, and determining a third evaluation value of each pixel point in the image based on a saliency recognition result;
step S1305, determining an evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image;
Step S1306, when the automatic focusing mode is in, taking the complete picture to be processed as a focusing area;
step S1307, when in the manual focusing mode, selecting at least one pixel point according to the manual focusing instruction as a focusing area.
In application, the focusing method provided in step S1301 to step S1307 is consistent with the focusing method provided in step S1101 to step S1107 described above, and will not be described here again.
Step S1308, obtaining the image distance of the current display image;
step S1309, determining the image distance of the focusing image according to the image distance of the target image corresponding to the target pixel point in the focusing area;
step S1310, switching the image distance of the current display image to the image distance of the focusing image according to the preset switching speed or the preset switching time, so as to output the focusing image.
In the application, the image distance of the currently displayed image can be obtained, and the image distance of the focusing image determined according to the image distance of the target image corresponding to the target pixel point in the focusing area. When the focusing image is output, the displayed image distance is switched to the image distance of the focusing image at a preset switching speed or within a preset switching time. This avoids the stutter in image switching caused by an excessively large change in image distance when switching directly, and passing through interval image distances during the switch improves its smoothness.
The preset switching speed or the preset switching time can be set according to actual needs, and the specific size of the preset switching speed or the preset switching time is not limited in the embodiment of the application.
In one embodiment, step S1310 includes:
acquiring an interval image distance between the image distance of the current display image and the image distance of the focusing image;
focusing the picture to be processed according to the q-th interval image distance when switching to the q-th interval image distance, and outputting a q-th transition image;
wherein q=1, 2, …, m, m is an integer greater than or equal to 1.
In the application, the interval image distances between the image distance of the currently displayed image and the image distance of the focusing image can be obtained; when switching to the q-th interval image distance, the picture to be processed is focused according to the q-th interval image distance and the q-th transition image is output. The number of interval image distances may be obtained by dividing the preset switching time into units of time (specifically, 10 ms or 100 ms) and rounding up, which gives the number of transition images to be output per unit time. By outputting transition images, images at different image distances are displayed during the switch, making it convenient for the user to observe the image sharpness at each image distance during zooming, and improving the flexibility and effect of zooming.
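A sketch of stepping through the interval image distances, assuming the interval count is derived from the preset switching time and a 10 ms unit as described above:

```python
import numpy as np

def transition_distances(current_mm, target_mm, switch_time_ms, unit_ms=10):
    """m interval image distances, excluding the start and ending at the target."""
    m = max(1, int(np.ceil(switch_time_ms / unit_ms)))
    return np.linspace(current_mm, target_mm, m + 1)[1:]

# Example: switching from 50 mm to 70 mm over a 100 ms preset switching time.
for q, d in enumerate(transition_distances(50.0, 70.0, switch_time_ms=100), start=1):
    print(f"transition image {q} at image distance {d:.1f} mm")
```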
As shown in fig. 14, in one embodiment, based on the embodiment corresponding to fig. 11, the method includes the following steps S1401 to S1410:
step S1401, obtaining images of a picture to be processed under different image distances;
step S1402, performing object recognition on the image, and determining a first evaluation value of each pixel point in the image based on the object recognition result;
step S1403, performing sharpness recognition on the image, and determining a second evaluation value of each pixel point in the image based on the sharpness recognition result;
step S1404, performing saliency recognition on the image, and determining a third evaluation value of each pixel point in the image based on the saliency recognition result;
step S1405, determining an evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image;
step S1406, when in the auto-focus mode, taking the complete to-be-processed picture as a focus area;
step S1407, when the manual focusing mode is in, at least one pixel point selected according to the manual focusing instruction is used as a focusing area;
in step S1408, a focused image is output according to the target image corresponding to the target pixel point in the focused region.
In application, the focusing methods provided in steps S1401 to S1408 are consistent with the focusing methods provided in steps S1101 to S1108, and will not be described herein.
Step S1409, judging whether the pixel position of the target pixel point in the focusing area is replaced or not according to the preset frequency;
step S1410, when the target pixel point is replaced at the pixel position in the focusing area, the method returns to obtain the images of the to-be-processed frame at different image distances, so as to update the focusing image according to the replaced target pixel point.
In the application, whether the position of the target pixel point changes can be detected at a preset frequency. Specifically, the maximum evaluation values of all pixel points can be re-obtained via steps S1401 to S1405, and it is judged whether the maximum evaluation value of the current target pixel point in the focusing area is still greater than or equal to the maximum evaluation values of the other pixel points in the focusing area. If so, the position of the target pixel point has not changed, and the focusing image continues to be output according to the target image corresponding to the target pixel point; if not, the position of the target pixel point has changed, and the process returns to step S1401 to update the pixel position of the target pixel point, so as to refresh the focusing image according to the replaced target pixel point. The preset frequency, for example once every second, may be set according to actual needs.
In application, whether the position of the target pixel point has been replaced is detected in real time, and when it has, the pixel position of the target pixel point is updated quickly, so that the focusing image is output according to the target image corresponding to the updated target pixel point, improving the response speed of focusing and the automaticity of zooming.
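The periodic check might look like the following loop; the helper functions are hypothetical stand-ins, not APIs from the patent:

```python
import time

def find_target_pixel(region):
    ...  # recompute maximum evaluation values (steps S1401-S1405), return the argmax

def refresh_images():
    ...  # re-acquire images of the picture to be processed at all image distances

def watch_focus(region, interval_s=1.0):
    current = find_target_pixel(region)
    while True:
        time.sleep(interval_s)       # preset detection frequency
        latest = find_target_pixel(region)
        if latest != current:        # target pixel position was replaced
            refresh_images()
            current = latest         # update, then output the new focusing image
```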
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
As shown in fig. 15, an image capturing apparatus 100 provided in an embodiment of the present application includes a processor 110, a memory 120, a computer program 121 stored in the memory 120 and executable on the processor 110, and a light field camera 130, and the steps of the focusing method provided in the above embodiment are implemented when the processor 110 executes the computer program 121;
the light field camera 130 is used for capturing the picture to be processed and recording, at capture time, the propagation path of light rays arriving from any direction.
In application, the processor 110 may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In applications, the memory 120 may in some embodiments be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory 120 may also be an external storage device of the terminal device in other embodiments, such as a plug-in hard disk provided on the terminal device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), etc. Further, the memory 120 may also include both an internal storage unit of the terminal device and an external storage device. The memory 120 is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs, etc., such as program code of a computer program, etc. The memory 120 may also be used to temporarily store data that has been output or is to be output.
In application, the light field camera 130 may include a main lens, a micro lens array, and an image sensor; the functions of the main lens, the micro lens array, and the image sensor are described in the related descriptions of the above method embodiments and are not repeated here. The embodiment of the present application does not impose any limitation on the specific structure of the light field camera 130.
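As a non-limiting illustration, refocusing from the light field recorded by such a main-lens/micro-lens-array design can be sketched as a shift-and-add over sub-aperture images. This minimal sketch assumes a 4D light field L(u, v, s, t) stored as an array of grayscale sub-aperture views, and the common convention that the refocusing parameter alpha scales the actual image distance; the patent's exact refocusing formula may differ.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    """lightfield: (U, V, H, W) grayscale sub-aperture images; returns (H, W)."""
    U, V, H, W = lightfield.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each sub-aperture view toward the virtual imaging surface
            du = (1.0 - 1.0 / alpha) * (u - (U - 1) / 2.0)
            dv = (1.0 - 1.0 / alpha) * (v - (V - 1) / 2.0)
            acc += nd_shift(lightfield[u, v], (du, dv), order=1, mode="nearest")
    return acc / (U * V)

# images at n different image distances, one per refocusing parameter alpha_i
# focal_stack = [refocus(lf, a) for a in np.linspace(0.8, 1.2, 9)]
```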
As shown in fig. 16, a terminal apparatus 200 provided in an embodiment of the present application includes the image pickup apparatus 100 provided in the above-described embodiment.
It is to be understood that the configuration illustrated in the embodiment of the present application does not constitute a specific limitation on the image pickup apparatus 100 and the terminal apparatus 200. In other embodiments of the application, the image pickup apparatus 100 and the terminal apparatus 200 may include more or fewer components than illustrated, may combine certain components, or may use different components; for example, they may also include input-output devices, network access devices, etc. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the focusing method embodiments described above.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include at least: any entity or device capable of carrying the computer program code to the photographing terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other manners. For example, the above-described embodiments of the terminal device are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division in actual implementation, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or modules, which may be in electrical, mechanical or other forms.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (12)

1. A focusing method, characterized by comprising:
acquiring images of a picture to be processed under different image distances;
determining an evaluation value of a pixel point at the same pixel position in each image;
outputting a focusing image according to a target image corresponding to a target pixel point in the focusing area;
the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation value of other pixel points in the focusing area, and the evaluation value of the target pixel point in the target image is maximum; the image distance of the focusing image is the same as the image distance of the target image;
the obtaining the images of the to-be-processed picture under different image distances comprises the following steps:
Refocusing the picture to be processed under different image distances to obtain images of the picture to be processed under different image distances; refocusing the to-be-processed picture under different image distances to obtain an image of the to-be-processed picture under different image distances, wherein the refocusing comprises the following steps:
acquiring the image distance of the ith virtual imaging surface according to the ith refocusing parameter and the image distance of the actual imaging surface;
acquiring an ith image of a picture to be processed under the image distance of the ith virtual imaging surface according to the light information of the actual imaging surface, the light information of the ith virtual imaging surface, the ith refocusing parameter and the image distance of the actual imaging surface;
wherein i=1, 2, …, n, n is an integer greater than or equal to 1.
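As a non-limiting illustration of the image-distance bookkeeping in claim 1, the following sketch assumes the common light-field relation b_i = alpha_i * b between the image distance b of the actual imaging surface and that of the i-th virtual imaging surface; the claim itself does not fix this formula, and the values shown are hypothetical.

```python
def virtual_image_distances(b_actual, alphas):
    # one virtual imaging surface per refocusing parameter alpha_i, i = 1..n
    return [alpha * b_actual for alpha in alphas]

# e.g. n = 5 refocusing parameters bracketing the actual imaging surface
# distances = virtual_image_distances(b_actual=50.0,
#                                     alphas=[0.9, 0.95, 1.0, 1.05, 1.1])
```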
2. The focusing method of claim 1, wherein determining the evaluation value of the pixel point at the same pixel position in each image comprises:
performing object recognition on the image, and determining a first evaluation value of each pixel point in the image based on an object recognition result;
performing definition recognition on the image, and determining a second evaluation value of each pixel point in the image based on a definition recognition result;
performing saliency recognition on the image, and determining a third evaluation value of each pixel point in the image based on a saliency recognition result;
and determining the evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image.
3. The focusing method of claim 1, wherein after determining the evaluation value of the pixel point at the same pixel position in each image, further comprises:
and obtaining the maximum evaluation value of each pixel point, and recording the corresponding image when each pixel point obtains the maximum evaluation value.
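As a non-limiting illustration of claims 2 and 3, the following numpy sketch combines the three recognition results into a per-pixel evaluation value and records, for each pixel, the image attaining the maximum; the weighted-sum rule and the weights are assumptions, as the claims leave the combination open.

```python
import numpy as np

def evaluation_values(object_score, sharpness_score, saliency_score,
                      w1=0.3, w2=0.4, w3=0.3):
    """Each input: (n_images, H, W) per-pixel scores; returns (n_images, H, W)."""
    return w1 * object_score + w2 * sharpness_score + w3 * saliency_score

# claim 3: per pixel, the maximum evaluation value and the image attaining it
# evals = evaluation_values(obj, sharp, sal)   # (n, H, W)
# max_eval = evals.max(axis=0)                 # (H, W) maximum evaluation values
# best_image_index = evals.argmax(axis=0)      # (H, W) recorded image per pixel
```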
4. The focusing method of claim 1, wherein before outputting the focused image according to the target image corresponding to the target pixel point in the focusing area, the focusing method further comprises:
when in the automatic focusing mode, taking the complete picture to be processed as the focusing area;
and when in the manual focusing mode, taking at least one pixel point selected according to a manual focusing instruction as the focusing area.
5. The focusing method of claim 4, wherein the taking, when in the manual focusing mode, at least one pixel point selected according to the manual focusing instruction as the focusing area comprises:
when in the manual focusing mode, acquiring a preset number of recommended pixel points according to the maximum evaluation value of each pixel point, wherein the maximum evaluation value of each recommended pixel point is greater than or equal to the maximum evaluation values of the other pixel points in the picture to be processed;
generating and displaying a recommended focusing area according to the pixel positions of the recommended pixel points;
and receiving a manual focusing instruction, and taking at least one pixel point selected according to the manual focusing instruction as the focusing area.
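As a non-limiting illustration of claim 5's recommendation step, the following sketch takes the "preset number" k of recommended pixel points to be the k pixels with the largest maximum evaluation values; the helper name and the value of k are hypothetical.

```python
import numpy as np

def recommended_pixels(max_eval, k=3):
    """max_eval: (H, W) maximum evaluation value per pixel; returns k (y, x)."""
    flat = np.argpartition(max_eval.ravel(), -k)[-k:]   # top-k, unordered
    ys, xs = np.unravel_index(flat, max_eval.shape)
    return list(zip(ys.tolist(), xs.tolist()))
```

A recommended focusing area could then be generated and displayed around the returned positions.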
6. The focusing method of claim 1, wherein outputting the focused image according to the target image corresponding to the target pixel point in the focusing area comprises:
focusing the picture to be processed according to the image distance of the target image corresponding to the target pixel point in the focusing area, and outputting a focusing image;
or, acquiring a target image recorded by the target pixel point in the focusing area when the maximum evaluation value is obtained, and outputting the target image as a focusing image.
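As a non-limiting illustration, claim 6's two alternatives can be sketched as follows, reusing the focal stack and the per-pixel argmax from the earlier sketches; all names are assumptions.

```python
def output_focused(focal_stack, best_image_index, target_pixel,
                   lightfield=None, alphas=None, refocus=None):
    idx = best_image_index[target_pixel]         # image attaining the maximum
                                                 # evaluation value at the target
    if refocus is not None:                      # alternative 1: refocus the
        return refocus(lightfield, alphas[idx])  # picture anew at that distance
    return focal_stack[idx]                      # alternative 2: output the
                                                 # recorded target image directly
```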
7. The focusing method of claim 1, wherein outputting the focused image according to the target image corresponding to the target pixel point in the focusing area comprises:
acquiring the image distance of a current display image;
determining the image distance of a focusing image according to the image distance of a target image corresponding to a target pixel point in the focusing area;
and switching the image distance of the current display image to the image distance of the focusing image according to the preset switching speed or the preset switching time so as to output the focusing image.
8. The focusing method of claim 7, wherein switching the image distance of the currently displayed image to the image distance of the focused image according to a preset switching speed or a preset switching time to output the focused image comprises:
acquiring an interval image distance between the image distance of the current display image and the image distance of the focusing image;
when switching to the q-th interval image distance, focusing the picture to be processed according to the q-th interval image distance, and outputting a q-th transition image;
wherein q=1, 2, …, m, m is an integer greater than or equal to 1.
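As a non-limiting illustration of claims 7 and 8, the following sketch steps through m interval image distances between the current and target image distances and outputs a transition image at each step; the linear spacing and the refocus() helper from the earlier sketch are assumptions.

```python
import numpy as np

def transition_images(lightfield, b_current, b_target, b_actual, m, refocus):
    for b_q in np.linspace(b_current, b_target, m + 1)[1:]:  # q = 1, ..., m
        alpha_q = b_q / b_actual            # refocusing parameter for this step
        yield refocus(lightfield, alpha_q)  # the q-th transition image
```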
9. The focusing method according to any one of claims 1 to 8, wherein after outputting the focused image according to the target image corresponding to the target pixel point in the focused region, the focusing method further comprises:
judging whether the pixel position of the target pixel point in the focusing area is replaced or not according to a preset frequency;
and returning to the step of acquiring images of the picture to be processed under different image distances when the pixel position of the target pixel point in the focusing area is replaced, so as to update the focusing image according to the replaced target pixel point.
10. An image pickup apparatus comprising a processor, a memory, a computer program stored in the memory and executable on the processor, and a light field camera, the processor implementing the steps of the focusing method according to any one of claims 1 to 9 when executing the computer program;
the light field camera is used for capturing a picture to be processed, and for recording the propagation paths of light rays arriving from various directions when the picture to be processed is captured.
11. A terminal apparatus comprising the image pickup apparatus according to claim 10.
12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the focusing method according to any one of claims 1 to 9.
CN202210669584.6A 2022-06-14 2022-06-14 Focusing method, image pickup apparatus, terminal apparatus, and storage medium Active CN115086558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669584.6A CN115086558B (en) 2022-06-14 2022-06-14 Focusing method, image pickup apparatus, terminal apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN115086558A CN115086558A (en) 2022-09-20
CN115086558B true CN115086558B (en) 2023-12-01

Family

ID=83252316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669584.6A Active CN115086558B (en) 2022-06-14 2022-06-14 Focusing method, image pickup apparatus, terminal apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN115086558B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101547306A (en) * 2008-03-28 2009-09-30 鸿富锦精密工业(深圳)有限公司 Video camera and focusing method thereof
CN104104869A (en) * 2014-06-25 2014-10-15 华为技术有限公司 Photographing method and device and electronic equipment
CN110602397A (en) * 2019-09-16 2019-12-20 RealMe重庆移动通信有限公司 Image processing method, device, terminal and storage medium
CN112351196A (en) * 2020-09-22 2021-02-09 北京迈格威科技有限公司 Image definition determining method, image focusing method and device
CN114554085A (en) * 2022-02-08 2022-05-27 维沃移动通信有限公司 Focusing method and device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013012820A (en) * 2011-06-28 2013-01-17 Sony Corp Image processing apparatus, image processing apparatus control method, and program for causing computer to execute the method
JP6047025B2 (en) * 2013-02-01 2016-12-21 キヤノン株式会社 Imaging apparatus and control method thereof
US10277889B2 (en) * 2016-12-27 2019-04-30 Qualcomm Incorporated Method and system for depth estimation based upon object magnification

Also Published As

Publication number Publication date
CN115086558A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
KR102278776B1 (en) Image processing method, apparatus, and apparatus
KR102480245B1 (en) Automated generation of panning shots
WO2021073331A1 (en) Zoom blurred image acquiring method and device based on terminal device
KR102279436B1 (en) Image processing methods, devices and devices
US7606442B2 (en) Image processing method and apparatus
JP6347675B2 (en) Image processing apparatus, imaging apparatus, image processing method, imaging method, and program
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
KR102229811B1 (en) Filming method and terminal for terminal
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR20110053348A (en) System and method to generate depth data using edge detection
CN110677621B (en) Camera calling method and device, storage medium and electronic equipment
EP3490252A1 (en) Method and device for image white balance, storage medium and electronic equipment
JP5766077B2 (en) Image processing apparatus and image processing method for noise reduction
CN110324532A (en) A kind of image weakening method, device, storage medium and electronic equipment
CN108012078A (en) Brightness of image processing method, device, storage medium and electronic equipment
CN112367459A (en) Image processing method, electronic device, and non-volatile computer-readable storage medium
US9020269B2 (en) Image processing device, image processing method, and recording medium
CN110392211B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
JP2017060010A (en) Imaging device, method of controlling imaging device, and program
CN111669492A (en) Method for processing shot digital image by terminal and terminal
CN106878604B (en) Image generation method based on electronic equipment and electronic equipment
CN106878606B (en) Image generation method based on electronic equipment and electronic equipment
CN115086558B (en) Focusing method, image pickup apparatus, terminal apparatus, and storage medium
JP2011193066A (en) Image sensing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant