CN114390201A - Focusing method and device thereof - Google Patents

Focusing method and device thereof

Info

Publication number
CN114390201A
CN114390201A CN202210034270.9A
Authority
CN
China
Prior art keywords
image
focusing
region
preview image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210034270.9A
Other languages
Chinese (zh)
Inventor
陈典浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210034270.9A priority Critical patent/CN114390201A/en
Publication of CN114390201A publication Critical patent/CN114390201A/en
Pending legal-status Critical Current

Classifications

    • H04N23/67: Focus control based on electronic image sensor signals (H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof; H04N23/60 Control of cameras or camera modules)
    • H04N23/611: Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/675: Focus control based on electronic image sensor signals comprising setting of focusing regions

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a focusing method and a focusing device, belonging to the technical field of electronic equipment. The method comprises the following steps: acquiring a preview image displayed on a shooting preview interface; under the condition that the preview image comprises human face features, performing face instance segmentation on the preview image to obtain a region of interest, wherein the shape of the region of interest changes along with the shape of the face features in the preview image; determining a focusing position according to the region of interest; and adjusting the focusing center of a camera of the electronic equipment according to the focusing position.

Description

Focusing method and device thereof
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to a focusing method and a device thereof.
Background
With the rapid development of electronic devices, more and more users take pictures with mobile terminal devices. To meet different user requirements, mobile terminal developers have developed various photographing functions, such as a face auto-focus function.
In the related art, during shooting, if a face appears in the shooting scene, a face area is selected and a focusing position is determined according to the face area, so that face auto-focusing is realized. However, the face area selected in this way includes not only face information but also partial background information, and the background information may interfere with the calculation of the focusing position and affect the focusing accuracy.
Disclosure of Invention
The embodiments of the present application aim to provide a focusing method and a focusing device, which can solve the problems in the prior art that the precision of the focusing position is poor and accurate face focusing is difficult to realize.
In a first aspect, an embodiment of the present application provides a focusing method, where the method includes:
acquiring a preview image displayed on a shooting preview interface;
under the condition that the preview image comprises human face features, performing face instance segmentation on the preview image to obtain a region of interest, wherein the shape of the region of interest changes along with the shape of the face features in the preview image;
determining a focusing position according to the region of interest;
and adjusting the focusing center of a camera of the electronic equipment according to the focusing position.
In a second aspect, an embodiment of the present application provides a focusing apparatus, including:
the first acquisition module is used for acquiring a preview image displayed on a shooting preview interface;
the second acquisition module is used for performing face instance segmentation on the preview image, under the condition that the preview image comprises face features, to obtain a region of interest, wherein the shape of the region of interest changes along with the shape of the face features in the preview image;
the first determining module is used for determining a focusing position according to the region of interest;
and the first focusing module is used for adjusting the focusing center of a camera of the electronic equipment according to the focusing position.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, in the process of shooting an image, face instance segmentation is performed on a preview image to determine a region of interest, a focusing position is determined according to the region of interest, and the focusing center of a camera of the electronic equipment is adjusted according to the focusing position. Because the region of interest is determined based on face instance segmentation, the obtained region of interest comprises all the face information but no background information beyond the face. This avoids interference of background information with the calculation of the focusing position, improves the calculation accuracy of the focusing position, and enables fast and accurate face auto-focusing.
Drawings
Fig. 1 is a flowchart of a focusing method provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a focusing device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The focusing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Please refer to fig. 1, which is a flowchart illustrating a focusing method according to an embodiment of the present application. The method can be applied to electronic equipment, and the electronic equipment can be a mobile phone, a tablet computer, a notebook computer and the like. As shown in FIG. 1, the method may include steps 1100-1400, described in detail below.
Step 1100, obtaining a preview image displayed on a shooting preview interface.
In this embodiment, the capture preview interface may be a capture preview interface of a camera application of the electronic device. The preview image is an image which is collected by a camera and is used for a user to preview and view in the shooting and previewing process.
Step 1200, under the condition that the preview image includes face features, performing face instance segmentation on the preview image to obtain a region of interest, wherein the shape of the region of interest changes along with the shape of the face features in the preview image.
In the present embodiment, the preview image includes a human face feature, that is, a human face appears in the shooting scene. The human face features can be a front face or a side face. In specific implementation, the preview image can be detected through a face detection algorithm to determine whether face features exist in the preview image.
In the present embodiment, instance segmentation is a pixel-level classification method. Instance segmentation classifies all pixels in an image, with each individual instance forming its own class, so as to obtain the set of pixel locations of each individual in the image. Face segmentation is a binary classification algorithm: it classifies all pixels in the image into two classes, foreground pixels and background pixels. All pixels belonging to a face area are foreground pixels, and pixels not belonging to a face area are background pixels. Face instance segmentation is a multi-class algorithm that performs multi-class classification on all pixels in the image. Face instance segmentation further divides the foreground pixels on the basis of face segmentation, so that the pixels of each distinct face area form one class. If a plurality of face features exist among the foreground pixels, the foreground pixels can be divided into a plurality of pixel classes, each face feature being classified into its own class. Based on this, performing face instance segmentation on the preview image yields a face region, and the face region is used as a Region of Interest (ROI).
In a specific implementation, face instance segmentation can be performed through a fully convolutional neural network: the preview image is input into the fully convolutional neural network, the face instance segmentation result is output, and the pixel set of the face features is extracted as the region of interest.
The region of interest obtained by performing face instance segmentation on the preview image may be an adaptive region of interest, that is, the region of interest is bounded by the contour of the face feature, and the shape of the region of interest may change along with the shape of the face feature in the preview image.
In some embodiments of the present application, when the preview image includes N face features, the preview image is subjected to face instance segmentation to obtain N regions of interest corresponding to the N face features, where N is an integer greater than 1.
In this embodiment, one region of interest corresponds to one face feature. In a specific implementation, the preview image including the N face features is input into a fully convolutional neural network to obtain a face instance segmentation result, and then the regions of interest corresponding to the different face features are extracted from the face instance segmentation result.
Taking a preview image that includes three face features as an example, performing face instance segmentation on the preview image to obtain regions of interest proceeds as follows: traverse the labels label_{i,j} of all pixels in the face instance segmentation result I_predict, where label_{i,j} stores the classification result of pixel_{i,j}. If pixel_{i,j} is a background pixel, its label_{i,j} is 0; if pixel_{i,j} belongs to the first face feature, its label_{i,j} is 1; if pixel_{i,j} belongs to the second face feature, its label_{i,j} is 2; and if pixel_{i,j} belongs to the third face feature, its label_{i,j} is 3. Based on this, during the traversal, the label of the current pixel is checked: if label_{i,j} of the current pixel is 1, the pixel is added to the pixel set ROI_face1 corresponding to the first face feature; if label_{i,j} is 2, it is added to the pixel set ROI_face2 corresponding to the second face feature; if label_{i,j} is 3, it is added to the pixel set ROI_face3 corresponding to the third face feature; and if label_{i,j} is 0, the current pixel is skipped and the traversal moves on to the next pixel. After all pixels in the preview image have been traversed, the ROI regions corresponding to the different face features are obtained, namely:
The region of interest corresponding to the first face feature is: ROI_face1 = {(i, j) | label_{i,j} == 1};
The region of interest corresponding to the second face feature is: ROI_face2 = {(i, j) | label_{i,j} == 2};
The region of interest corresponding to the third face feature is: ROI_face3 = {(i, j) | label_{i,j} == 3}.
In this embodiment, when the preview image includes a plurality of face features, a plurality of regions of interest corresponding to the plurality of face features may be determined by performing face instance segmentation on the preview image, and a focusing position is determined based on the plurality of regions of interest in combination with subsequent steps, so that the focusing accuracy may be improved.
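The label-map traversal described above can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: the helper name, array shapes, and toy label map are illustrative assumptions; only the label convention (0 for background, 1..N for face instances) comes from the text.

```python
import numpy as np

def extract_face_rois(label_map: np.ndarray) -> dict:
    """Split a face-instance label map into per-face pixel sets (ROIs).

    label_map[i, j] == 0 marks a background pixel; values 1..N mark
    pixels belonging to the 1st..Nth face feature, as in the text.
    Returns {face_id: set of (i, j) coordinates}.
    """
    rois = {}
    for face_id in np.unique(label_map):
        if face_id == 0:          # background pixels are skipped
            continue
        coords = np.argwhere(label_map == face_id)  # rows of (i, j)
        rois[int(face_id)] = {tuple(int(v) for v in c) for c in coords}
    return rois

# Toy 4x4 segmentation result containing two face instances.
labels = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 2, 2],
    [0, 0, 2, 2],
])
rois = extract_face_rois(labels)
```

Here `np.argwhere` replaces the explicit per-pixel loop of the text with a vectorized lookup; the resulting pixel sets play the role of ROI_face1, ROI_face2, and so on.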
After step 1200, step 1300 is performed to determine a focus position according to the region of interest.
In this embodiment, the in-focus position may be an in-focus position of the captured image. The focusing position is the position where the image is most clear in the region of interest of the preview image, that is, the image with the clearest portrait part can be obtained by shooting at the focusing position.
In some embodiments of the present application, the determining a focus position according to the region of interest may further include: acquiring a first image and a second image under the condition that the preview image is displayed on the shooting preview interface, wherein the first image is a first phase image output by an image sensor of the electronic equipment, and the second image is a second phase image output by the image sensor of the electronic equipment; determining a phase difference value of the first image and the second image in the region of interest; and determining the focusing position according to the phase difference value.
In the present embodiment, the first phase image and the second phase image may be generated from image signals output by phase pixels of the image sensor. Illustratively, the first phase image may be a left phase image output by an image sensor of the electronic device, i.e., a left-pixel PD (Phase Detection) raw image, and the second phase image may be a right phase image output by the image sensor, i.e., a right-pixel PD raw image. As another example, the first phase image may be an upper phase image output by the image sensor and the second phase image a lower phase image.
In a specific implementation, the focusing position can be determined using the PDAF (Phase Detection Auto Focus) principle. Specifically, left and right PD raw images are obtained from the PD pixels on the image sensor of the electronic device, and the focusing position is determined from the phase difference of the left and right PD raw images.
The process of calculating the phase difference between the left and right PD raw images is as follows. Assume the acquired left and right PD raw images are PD_left and PD_right, respectively. By shifting PD_left and measuring the similarity of the two images within the ROI region, the phase difference of the left and right PD raw images can be determined. The smaller the similarity value, the closer the left and right PD raw images are within the ROI region, and the closer the current position is to the in-focus position. Based on this, the shift at which the ROI-region similarity of the left and right PD raw images reaches its minimum is taken as the phase difference of the left and right PD raw images.
Specifically, a shift parameter may be set, representing the number of pixels by which PD_left is shifted; for example, a shift value of 1 means PD_left is shifted right by one pixel as a whole, and a shift value of -1 means PD_left is shifted left by one pixel as a whole. PD_left is shifted according to each set shift value shift_i to obtain PD_left^{shift_i}, and for each shift_i the similarity SAD_i between the shifted PD_left^{shift_i} and PD_right within the ROI region is calculated. The similarity SAD_i can be obtained according to the following formula (1):
SAD_i = Σ_{(x,y)∈ROI} | PD_left^{shift_i}(x, y) - PD_right(x, y) |    (1)
Then, the three smallest similarity values, SAD_min1, SAD_min2 and SAD_min3, and their corresponding shift values, shift_min1, shift_min2 and shift_min3, are selected. From these three shift values, the phase difference (PD value) of the left and right PD raw images is determined. Illustratively, a quadratic curve is fitted through the three (shift, SAD) points; the similarity value at the symmetry axis of the quadratic curve is the minimum, which corresponds to the in-focus position, so the shift value at the symmetry axis of the quadratic curve is taken as the phase difference (PD value) of the left and right PD raw images. The focusing position is then calculated from the phase difference of the left and right PD raw images.
In this embodiment, after the region of interest of the preview image is determined by face instance segmentation, a first phase image and a second phase image output by an image sensor of the electronic device are obtained, and a focusing position is determined according to a phase difference value of the first phase image and the second phase image in the region of interest, so that the accuracy of calculating the focusing position can be further improved, the response speed can be improved, and the accuracy of focusing the face can be improved.
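The SAD search and parabola fit described above can be sketched as follows. This is a minimal sketch under simplifying assumptions: it operates on 1-D phase signals rather than 2-D PD raw images restricted to a segmented ROI, and the function name, shift range, and synthetic data are illustrative, not from the patent. The vertex of a quadratic fitted through the three lowest-SAD points yields a sub-pixel phase difference.

```python
import numpy as np

def phase_difference(pd_left, pd_right, max_shift=4):
    """Estimate the phase difference between left/right PD signals.

    Computes SAD_i = sum |shift(pd_left, s) - pd_right| for each shift s,
    keeps the three smallest SAD values, fits a quadratic through them,
    and returns the shift at the parabola's symmetry axis (formula (1)
    followed by the quadratic-fit step of the text).
    """
    shifts = np.arange(-max_shift, max_shift + 1)
    sads = []
    for s in shifts:
        shifted = np.roll(pd_left, s)          # whole-signal shift by s pixels
        # Compare only the region valid under the shift (stand-in for the ROI).
        lo, hi = max(s, 0), len(pd_left) + min(s, 0)
        sads.append(np.abs(shifted[lo:hi] - pd_right[lo:hi]).sum())
    sads = np.asarray(sads, dtype=float)

    best3 = np.argsort(sads)[:3]               # three smallest SAD values
    a, b, _ = np.polyfit(shifts[best3], sads[best3], 2)
    return -b / (2 * a)                        # symmetry axis of the parabola

# Synthetic check: the right signal is the left signal shifted by 2 pixels,
# so the recovered phase difference should be close to 2.
x = np.linspace(0, 6 * np.pi, 200)
left = np.sin(x)
right = np.roll(left, 2)
pd = phase_difference(left, right)
```

In a real PDAF pipeline the sum in formula (1) would run only over the pixels of the segmented face ROI, which is exactly what ties the instance-segmentation step to the focus calculation.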
In some embodiments of the present application, before determining the focusing position according to the region of interest, the method may further include: under the condition that the preview image comprises N face features, performing face instance segmentation on the preview image to obtain N regions of interest corresponding to the N face features, wherein N is an integer greater than 1. Determining a focusing position according to the region of interest then includes: determining the focusing position according to a target region of interest among the N regions of interest, wherein the target region of interest is the region of interest corresponding to the face feature closest to the camera.
In this embodiment, the face feature closest to the camera of the electronic device is usually the object that the user wants to photograph. The target region of interest may be an image region corresponding to a facial feature closest to the camera.
Continuing with the example in which the preview image includes three face features, assume that the pd value corresponding to the first face feature is pd_face1, the pd value corresponding to the second face feature is pd_face2, and the pd value corresponding to the third face feature is pd_face3. The focusing position is calculated according to the pd value of the face feature closest to the camera of the electronic device.
In this embodiment, when the preview image includes a plurality of face features, the focusing position may be calculated according to the face feature closest to the camera of the electronic device, so that the electronic device performs face auto-focusing on an object that the user may want to shoot, and the focusing manner is more flexible.
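How the "closest" face is identified is not specified in this passage. One simple, purely hypothetical proxy is sketched below: treat the face with the largest segmented pixel area as the nearest subject, since nearer faces generally project onto more pixels. The function name, the per-face dictionaries, and the proxy itself are illustrative assumptions, not the patent's method.

```python
def pick_target_face(rois: dict, pd_values: dict) -> float:
    """Pick the pd value of the (assumed) nearest face.

    rois:      {face_id: set of (i, j) pixels from instance segmentation}
    pd_values: {face_id: phase difference measured within that face's ROI}
    Proxy assumption: the nearest face covers the most pixels.
    """
    nearest = max(rois, key=lambda fid: len(rois[fid]))
    return pd_values[nearest]

# Face 2 covers more pixels, so its pd value is used for focusing.
rois = {1: {(0, 0), (0, 1)}, 2: {(5, 5), (5, 6), (6, 5), (6, 6)}}
pd_values = {1: 0.8, 2: -1.5}
target_pd = pick_target_face(rois, pd_values)
```

Any distance cue could be substituted here (depth sensor, defocus magnitude at the current lens position, etc.); the point is only that one ROI's pd value is selected before the motor step that follows.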
After step 1300, step 1400 is executed to adjust a focus center of a camera of the electronic device according to the focus position.
In this embodiment, the driving of the focusing motor is controlled so that the focusing center of the camera of the electronic device is at the focusing position. In a specific implementation, a Code value of the focusing motor is determined according to the phase difference (PD value) of the left and right PD raw images, and the operation of the focusing motor is controlled according to this Code value so that the focusing center of the camera of the electronic device is at the focusing position. The Code value of the focusing motor can be obtained according to the following formula:
Code = pd * DCC    (2)
where pd is the phase difference between the left and right PD raw images, and DCC (defocus conversion coefficient) is the conversion coefficient of the focusing motor Code. The DCC value may be set according to simulation test results, which is not limited in this application.
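Formula (2) is a single multiply; the sketch below adds only integer rounding and range clamping, which real voice-coil motors need. The DCC value and the 10-bit code range are illustrative assumptions (the text leaves DCC to simulation/tuning), not values from the patent.

```python
def motor_code(pd: float, dcc: float, code_min: int = 0, code_max: int = 1023) -> int:
    """Convert a phase difference to a focus-motor Code value (formula (2)).

    Code = pd * DCC, rounded to an integer motor step and clamped to the
    motor's travel range. [0, 1023] is an assumed 10-bit VCM range.
    """
    code = round(pd * dcc)
    return max(code_min, min(code_max, code))

# With an assumed DCC of 20 codes per unit of phase difference:
step = motor_code(pd=2.5, dcc=20.0)   # 2.5 * 20 = 50
```

The clamp simply keeps an out-of-range phase estimate from driving the motor past its mechanical stops.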
In some embodiments of the present application, after acquiring the preview image displayed by the shooting preview interface, the method may further include: receiving a first input of a user to the shooting preview interface; determining a touch position in response to the first input; and carrying out automatic focusing processing on the preview image according to the touch position.
In this embodiment, the first input may be an input for determining the focusing position. Illustratively, the user clicks on the shooting preview interface. The click input in the embodiments of the application can be a single-click input, a double-click input, a click input of any number of times, or the like, and can also be a long-press input or a short-press input.
In this embodiment, a plurality of focusing modes are provided, the face automatic focusing mode is used for focusing when the preview image includes the face feature and the user does not click the shooting preview interface, the face automatic focusing mode is closed when the click input of the user on the shooting preview interface is received, and the focusing position of the camera is adjusted according to the touch position, so that the user can select the focusing position according to actual needs, and the use is more flexible.
In some embodiments of the present application, after adjusting a focus center of a camera of an electronic device according to the focus position, the method may further include: receiving a second input of the user; and responding to the second input, and processing the preview image according to the focusing position to obtain a target image.
In this embodiment, the target image may be an image captured with the in-focus position as a focus. The second input may be an input for capturing a target image. The second input may be, for example, a click input of a user on a target control of the shooting preview interface, or a voice input made by the user, or a specific gesture input by the user, which may be determined according to actual use requirements, and is not limited in this embodiment of the application.
The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture and a dragging gesture; the click input in the embodiment of the application can be click input, double-click input, click input of any number of times and the like, and can also be long-time press input or short-time press input.
In this embodiment, after the focus center of the camera of the electronic device is adjusted according to the focus position, the second input of the user is received, and the image is shot based on the focus position in response to the second input, so that the shooting quality of the image can be improved.
In the embodiment of the application, in the process of shooting an image, face instance segmentation is performed on a preview image to determine a region of interest, a focusing position is determined according to the region of interest, and the focusing center of a camera of the electronic equipment is adjusted according to the focusing position. Because the region of interest is determined based on face instance segmentation, the obtained region of interest comprises all the face information but no background information beyond the face. This avoids interference of background information with the calculation of the focusing position, improves the calculation accuracy of the focusing position, and enables fast and accurate face auto-focusing.
According to the focusing method provided by the embodiment of the application, the execution main body can be a focusing device. In the embodiments of the present application, a method for performing focusing by a focusing device is taken as an example to describe the focusing device provided in the embodiments of the present application.
Corresponding to the above embodiments, referring to fig. 2, an embodiment of the present application further provides a focusing apparatus 200, where the focusing apparatus 200 includes a first obtaining module 201, a second obtaining module 202, a first determining module 203, and a first focusing module 204.
The first obtaining module 201 is configured to obtain a preview image displayed on a shooting preview interface;
the second obtaining module 202 is configured to, when the preview image includes a face feature, perform face instance segmentation on the preview image to obtain an area of interest, where a shape of the area of interest changes along with a shape of the face feature in the preview image;
the first determining module 203 is configured to determine a focus position according to the region of interest;
the first focusing module 204 is configured to adjust a focusing center of a camera of the electronic device according to the focusing position.
In the embodiment of the application, in the process of shooting an image, face instance segmentation is performed on a preview image to determine a region of interest, a focusing position is determined according to the region of interest, and the focusing center of a camera of the electronic equipment is adjusted according to the focusing position. Because the region of interest is determined based on face instance segmentation, the obtained region of interest comprises all the face information but no background information beyond the face. This avoids interference of background information with the calculation of the focusing position, improves the calculation accuracy of the focusing position, and enables fast and accurate face auto-focusing.
Optionally, the second obtaining module 202 is further configured to, under the condition that the preview image includes N face features, perform face instance segmentation on the preview image to obtain N regions of interest corresponding to the N face features, where N is an integer greater than 1; the first determining module 203 is specifically configured to determine a focusing position according to a target region of interest among the N regions of interest, wherein the target region of interest is the region of interest corresponding to the face feature closest to the camera.
In this embodiment, when the preview image includes a plurality of face features, the focusing position may be calculated according to the face feature closest to the camera of the electronic device, so that the electronic device performs face auto-focusing on an object that the user may want to shoot, and the focusing manner is more flexible.
Optionally, the first determining module 203 includes: the acquisition unit is used for acquiring a first image and a second image under the condition that the preview image is displayed on the shooting preview interface, wherein the first image is a first phase image output by an image sensor of the electronic equipment, and the second image is a second phase image output by the image sensor of the electronic equipment; a first determining unit, configured to determine a phase difference value of the first image and the second image in the region of interest; and the second determining unit is used for determining the focusing position according to the phase difference value.
In this embodiment, after the region of interest of the preview image is determined by face instance segmentation, the first phase image and the second phase image output by the image sensor of the electronic device are obtained, and the focusing position is determined according to the phase difference value of the two phase images within the region of interest. This further improves the accuracy of the focusing-position calculation, shortens the response time, and improves the accuracy of focusing on the face.
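A toy version of the phase-difference step: collapse each phase image (already cropped to the region of interest) to a 1-D horizontal profile and search for the integer shift that best correlates the two. Real PDAF pipelines use calibrated sub-pixel estimators and map the disparity to a lens position; this sketch only illustrates the disparity search, and all names are illustrative.

```python
import numpy as np

def phase_difference(first, second, max_shift=8):
    """Estimate the horizontal phase shift between two phase images.

    first, second : H x W float arrays (first/second phase image, already
    cropped to the region of interest). Returns the integer shift, in
    pixels, that best aligns the two horizontal intensity profiles.
    """
    a = first.mean(axis=0)   # collapse rows into 1-D horizontal profiles
    b = second.mean(axis=0)
    a = a - a.mean()         # remove DC so correlation compares structure
    b = b - b.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            overlap_a, overlap_b = a[s:], b[:b.size - s]
        else:
            overlap_a, overlap_b = a[:a.size + s], b[-s:]
        score = float(np.dot(overlap_a, overlap_b)) / overlap_a.size
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```

In an actual camera the returned disparity would then be converted, via sensor calibration data, into a defocus amount and a target lens position.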
Optionally, the focusing device 200 further includes: the first receiving module is used for receiving first input of a user to the shooting preview interface; a second determining module for determining a touch position in response to the first input; and the second focusing module is used for carrying out automatic focusing processing on the preview image according to the touch position.
In this embodiment, a plurality of focusing modes are provided. Face auto-focusing is used when the preview image includes a face feature and the user has not tapped the shooting preview interface; when a tap input on the shooting preview interface is received, face auto-focusing is disabled and the focusing position of the camera is adjusted according to the touch position. The user can thus select the focusing position according to actual needs, which makes the device more flexible to use.
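The mode arbitration described here can be captured by a small decision function. The function name and mode strings are illustrative only; the patent describes the behavior, not this interface.

```python
def choose_focus_mode(has_face, touch_pos):
    """Arbitrate between the focusing modes described above.

    has_face  : True if the preview image contains a face feature
    touch_pos : (x, y) of the user's tap on the preview interface, or None
    A tap always overrides face auto-focus; with no tap, face auto-focus
    runs whenever a face is present, otherwise default AF is used.
    """
    if touch_pos is not None:
        return ("touch", touch_pos)   # user picked a position: face AF off
    if has_face:
        return ("face_auto", None)    # face present, no tap: face auto-focus
    return ("default", None)          # fall back to ordinary auto-focus
```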
Optionally, the focusing device 200 further includes: the second receiving module is used for receiving a second input of the user; and the shooting module is used for responding to the second input and processing the preview image according to the focusing position to obtain a target image.
In this embodiment, after the focusing center of the camera of the electronic device is adjusted according to the focusing position, a second input of the user is received, and an image is shot based on the focusing position in response to the second input, so that the shooting quality of the image can be improved.
The focusing device in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the embodiment of the present application is not specifically limited thereto.
The focusing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiment of the present application is not specifically limited.
The focusing device provided in the embodiment of the present application can implement each process implemented in the embodiment of the method in fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an embodiment of the present application further provides an electronic device 300, including a processor 301, a memory 302, and a program or instruction stored in the memory 302 and executable on the processor 301, where the program or instruction, when executed by the processor 301, implements each process of the foregoing focusing method embodiment and achieves the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 410 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
Wherein, the processor 410 is configured to: acquire a preview image displayed on a shooting preview interface; under the condition that the preview image includes a face feature, perform face instance segmentation on the preview image to obtain a region of interest, where the shape of the region of interest changes with the shape of the face feature in the preview image; determine a focusing position according to the region of interest; and adjust the focusing center of a camera of the electronic device according to the focusing position.
In the embodiment of the application, in the process of shooting an image, face instance segmentation is performed on a preview image to determine a region of interest, a focusing position is determined according to the region of interest, and the focusing center of a camera of the electronic device is adjusted according to the focusing position. Because the region of interest is determined based on face instance segmentation, it contains all of the face information and none of the background information outside the face, so interference of background information with the focusing-position calculation is avoided, the accuracy of that calculation is improved, and face auto-focusing can be performed quickly and accurately.
Optionally, the processor 410 is further configured to, before the determining of the focusing position according to the region of interest: under the condition that the preview image includes N face features, perform face instance segmentation on the preview image to obtain N regions of interest corresponding to the N face features, where N is an integer greater than 1. When determining the focusing position according to the region of interest, the processor 410 is configured to: determine the focusing position according to a target region of interest in the N regions of interest, where the target region of interest is the region of interest corresponding to the face feature closest to the camera.
In this embodiment, when the preview image includes a plurality of face features, the focusing position is calculated from the face feature closest to the camera of the electronic device, so that the electronic device performs face auto-focusing on the object the user most likely intends to shoot, which makes the focusing manner more flexible.
Optionally, the processor 410, when determining the focusing position according to the region of interest, is configured to: acquire a first image and a second image under the condition that the preview image is displayed on the shooting preview interface, wherein the first image is a first phase image output by an image sensor of the electronic device, and the second image is a second phase image output by the image sensor; determine a phase difference value of the first image and the second image in the region of interest; and determine the focusing position according to the phase difference value.
In this embodiment, after the region of interest of the preview image is determined by face instance segmentation, the first phase image and the second phase image output by the image sensor of the electronic device are obtained, and the focusing position is determined according to the phase difference value of the two phase images within the region of interest. This further improves the accuracy of the focusing-position calculation, shortens the response time, and improves the accuracy of focusing on the face.
Optionally, after the obtaining of the preview image displayed on the shooting preview interface, the user input unit 407 is configured to receive a first input of the shooting preview interface by a user; processor 410, further configured to: determining a touch position in response to the first input; and carrying out automatic focusing processing on the preview image according to the touch position.
In this embodiment, a plurality of focusing modes are provided. Face auto-focusing is used when the preview image includes a face feature and the user has not tapped the shooting preview interface; when a tap input on the shooting preview interface is received, face auto-focusing is disabled and the focusing position of the camera is adjusted according to the touch position. The user can thus select the focusing position according to actual needs, which makes the device more flexible to use.
Optionally, after the focus center of the camera of the electronic device is adjusted according to the focus position, the user input unit 407 is configured to receive a second input of the user; processor 410, further configured to: and responding to the second input, and processing the preview image according to the focusing position to obtain a target image.
In this embodiment, after the focusing center of the camera of the electronic device is adjusted according to the focusing position, a second input of the user is received, and an image is shot based on the focusing position in response to the second input, so that the shooting quality of the image can be improved.
It should be understood that, in the embodiment of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the GPU 4041 processes image data of a still picture or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071, also referred to as a touch screen, and other input devices 4072. The touch panel 4071 may include two parts: a touch detection device and a touch controller. The other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 409 may be used to store software programs and various data. The memory 409 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and an application program or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 409 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 409 in the embodiment of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing focusing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above focusing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiment of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing focusing method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A focusing method, the method comprising:
acquiring a preview image displayed on a shooting preview interface;
under the condition that the preview image comprises a face feature, performing face instance segmentation on the preview image to obtain a region of interest, wherein the shape of the region of interest changes with the shape of the face feature in the preview image;
determining a focusing position according to the region of interest;
and adjusting a focusing center of a camera of an electronic device according to the focusing position.
2. The method of claim 1, wherein before the determining a focusing position according to the region of interest, the method comprises:
under the condition that the preview image comprises N face features, performing face instance segmentation on the preview image to obtain N regions of interest corresponding to the N face features, wherein N is an integer greater than 1;
the determining a focusing position according to the region of interest comprises:
determining the focusing position according to a target region of interest in the N regions of interest;
wherein the target region of interest is the region of interest corresponding to the face feature closest to the camera.
3. The method of claim 1, wherein determining a focus position from the region of interest comprises:
acquiring a first image and a second image under the condition that the preview image is displayed on the shooting preview interface, wherein the first image is a first phase image output by an image sensor of the electronic device, and the second image is a second phase image output by the image sensor of the electronic device;
determining a phase difference value of the first image and the second image in the region of interest;
and determining the focusing position according to the phase difference value.
4. The method of claim 1, wherein after the acquiring a preview image displayed on the shooting preview interface, the method further comprises:
receiving a first input of a user to the shooting preview interface;
determining a touch position in response to the first input;
and carrying out automatic focusing processing on the preview image according to the touch position.
5. The method of claim 1, wherein after the adjusting a focusing center of a camera of an electronic device according to the focusing position, the method further comprises:
receiving a second input of the user;
and responding to the second input, and processing the preview image according to the focusing position to obtain a target image.
6. A focusing device, comprising:
the first acquisition module is used for acquiring a preview image displayed on a shooting preview interface;
the second acquisition module is used for performing face instance segmentation on the preview image under the condition that the preview image comprises a face feature, to obtain a region of interest, wherein the shape of the region of interest changes with the shape of the face feature in the preview image;
the first determining module is used for determining a focusing position according to the region of interest;
and the first focusing module is used for adjusting a focusing center of a camera of an electronic device according to the focusing position.
7. The apparatus according to claim 6, wherein the second acquisition module is further configured to, when the preview image includes N face features, perform face instance segmentation on the preview image to obtain N regions of interest corresponding to the N face features, where N is an integer greater than 1;
the first determining module is specifically configured to determine a focusing position according to a target region of interest of the N regions of interest;
wherein the target region of interest is the region of interest corresponding to the face feature closest to the camera.
8. The apparatus of claim 6, wherein the first determining module comprises:
the acquisition unit is used for acquiring a first image and a second image under the condition that the preview image is displayed on the shooting preview interface, wherein the first image is a first phase image output by an image sensor of the electronic device, and the second image is a second phase image output by the image sensor of the electronic device;
a first determining unit, configured to determine a phase difference value of the first image and the second image in the region of interest;
and the second determining unit is used for determining the focusing position according to the phase difference value.
9. The apparatus of claim 6, further comprising:
the first receiving module is used for receiving first input of a user to the shooting preview interface;
a second determining module for determining a touch position in response to the first input;
and the second focusing module is used for carrying out automatic focusing processing on the preview image according to the touch position.
10. The apparatus of claim 6, further comprising:
the second receiving module is used for receiving a second input of the user;
and the shooting module is used for responding to the second input and processing the preview image according to the focusing position to obtain a target image.
CN202210034270.9A 2022-01-12 2022-01-12 Focusing method and device thereof Pending CN114390201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210034270.9A CN114390201A (en) 2022-01-12 2022-01-12 Focusing method and device thereof

Publications (1)

Publication Number Publication Date
CN114390201A true CN114390201A (en) 2022-04-22

Family

ID=81201053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210034270.9A Pending CN114390201A (en) 2022-01-12 2022-01-12 Focusing method and device thereof

Country Status (1)

Country Link
CN (1) CN114390201A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107257433A (en) * 2017-06-16 2017-10-17 广东欧珀移动通信有限公司 Focusing method, device, terminal and computer-readable recording medium
US20190132520A1 (en) * 2017-11-02 2019-05-02 Adobe Inc. Generating image previews based on capture information
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product
CN110418064A (en) * 2019-09-03 2019-11-05 北京字节跳动网络技术有限公司 Focusing method, device, electronic equipment and storage medium
CN113556466A (en) * 2021-06-29 2021-10-26 荣耀终端有限公司 Focusing method and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097985A (en) * 2023-10-11 2023-11-21 荣耀终端有限公司 Focusing method, electronic device and computer readable storage medium
CN117097985B (en) * 2023-10-11 2024-04-02 荣耀终端有限公司 Focusing method, electronic device and computer readable storage medium
CN117690177A (en) * 2024-01-31 2024-03-12 荣耀终端有限公司 Face focusing method, face focusing device, electronic equipment and storage medium
CN117676331A (en) * 2024-02-01 2024-03-08 荣耀终端有限公司 Automatic focusing method and electronic equipment

Similar Documents

Publication Publication Date Title
CN109565551B (en) Synthesizing images aligned to a reference frame
TWI706379B (en) Method, apparatus and electronic device for image processing and storage medium thereof
CN114390201A (en) Focusing method and device thereof
JP2009522591A (en) Method and apparatus for controlling autofocus of a video camera by tracking a region of interest
CN111091590A (en) Image processing method, image processing device, storage medium and electronic equipment
CN116324878A (en) Segmentation for image effects
CN113099122A (en) Shooting method, shooting device, shooting equipment and storage medium
WO2016097468A1 (en) Method, apparatus and computer program product for blur estimation
CN113873166A (en) Video shooting method and device, electronic equipment and readable storage medium
CN114125305A (en) Shooting method, device and equipment
CN113866782A (en) Image processing method and device and electronic equipment
CN113747067A (en) Photographing method and device, electronic equipment and storage medium
CN113283319A (en) Method and device for evaluating face ambiguity, medium and electronic equipment
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN115623313A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN114339051A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN113691731B (en) Processing method and device and electronic equipment
CN113364985B (en) Live broadcast lens tracking method, device and medium
CN117671473B (en) Underwater target detection model and method based on attention and multi-scale feature fusion
CN117750215A (en) Shooting parameter updating method and electronic equipment
CN117541507A (en) Image data pair establishing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination