WO2022016550A1 - Shooting method, shooting device and storage medium - Google Patents

Shooting method, shooting device and storage medium

Info

Publication number
WO2022016550A1
WO2022016550A1 (PCT/CN2020/104596)
Authority
WO
WIPO (PCT)
Prior art keywords
focus, frame image, matching, area, areas
Application number
PCT/CN2020/104596
Other languages
English (en)
French (fr)
Inventor
程正喜
封旭阳
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080006539.1A (published as CN113170053A)
Priority to PCT/CN2020/104596
Publication of WO2022016550A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/67 Focus control based on electronic image sensor signals

Definitions

  • the present application relates to the field of photographing technologies, and in particular, to a photographing method, a photographing device, and a storage medium.
  • Continuous autofocus (AFC, Auto Focus-Continuous) means that the camera always performs the focusing operation, regardless of whether the shutter is pressed halfway.
  • AFC can be roughly divided into the following two categories: (1) the continuous autofocus automatic mode AFC-Auto, in which the user does not explicitly point out a focus area or target but only composes the image, and AF (Auto Focus) is controlled automatically by the system; (2) the continuous autofocus tracking mode AFC-Tracking, i.e. continuous tracking focus, in which the user explicitly specifies the focus target and the camera keeps focusing on the target selected by the user during shooting.
  • In AFC-Auto mode the focus can jump randomly during the focusing process, jumping directly from one target to another, so the picture is often seen hunting back and forth; the advantage of AFC-Tracking mode over AFC-Auto mode is that it can track a relatively stable focus area, which gives the user a better focusing experience. However, existing cameras cannot automatically switch from AFC-Auto mode to AFC-Tracking mode.
  • the present application provides a photographing method, a photographing device, and a storage medium.
  • the present application provides a shooting method, including:
  • the present application provides a photographing apparatus, the apparatus comprising: a memory and a processor;
  • the memory is used to store computer programs
  • the processor is configured to execute the computer program and implement the following steps when executing the computer program:
  • the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the processor implements the above-mentioned shooting method.
  • the embodiments of the present application provide a shooting method, a shooting device, and a storage medium.
  • N focus areas of N frames of images in an image sequence are acquired; whether to switch the current focus mode is determined according to the focus targets in the N focus areas; and shooting is performed in the determined focus mode. Because the N focus areas of the N frames of images are acquired in the current focus mode, the scene currently shot by the user, and hence the focus scene, can be recognized automatically and relatively stably from the focus targets in the N focus areas; whether to switch the current focus mode is then determined from the recognition result. In this way, a better and smoother focusing experience is provided without the user performing any operation.
  • FIG. 1 is a schematic flowchart of an embodiment of a shooting method of the present application.
  • FIG. 2 is a schematic diagram of an embodiment of focus mode switching in the shooting method of the present application.
  • FIG. 3 is a schematic flowchart of another embodiment of the photographing method of the present application.
  • FIG. 5 is a schematic diagram of the principle of an embodiment of matching using the center distance in the shooting method of the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a photographing device of the present application.
  • Continuous AF AFC can be roughly divided into the following two categories: continuous AF auto mode AFC-Auto and continuous AF tracking mode AFC-Tracking.
  • AFC-Tracking mode over AFC-Auto mode is that it can track a relatively stable focus area, which can bring users a better focusing experience.
  • the camera cannot automatically switch from AFC-Auto mode to AFC-Tracking mode.
  • the embodiment of the present application acquires N focus areas of N frames of images in an image sequence, determines whether to switch the current focus mode according to the focus targets in the N focus areas, and shoots in the determined focus mode. Because the N focus areas of the N frames of images are acquired in the current focus mode, the scene currently shot by the user, and hence the focus scene, can be recognized automatically and relatively stably from the focus targets in the N focus areas; whether to switch the current focus mode is then determined from the recognition result, and shooting is performed in the determined focus mode. In this way, a better and smoother focusing experience is provided without the user performing any operation. For example, whether to switch automatically to the AFC-Tracking mode can be determined according to whether the focus targets in the N focus areas include the same target; the switch is recognized and performed automatically, without any user operation, bringing the user a better and smoother AF experience.
  • FIG. 1 is a schematic flowchart of an embodiment of a shooting method of the present application. The method includes:
  • Step S101: In the current focus mode, acquire N focus areas of N frames of images in the image sequence.
  • Step S102: Determine whether to switch the current focus mode according to the focus targets in the N focus areas.
  • Step S103: Shoot in the determined focus mode.
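The three steps above can be sketched as a small control loop. This is a minimal illustration, not code from the patent: the mode names, the choice of N, the `decide_mode` helper, and the idea that each frame reports a per-frame target identity (`None` when no stable target is found) are all assumptions.

```python
from collections import deque

N = 5  # number of recent frames to examine (chosen for illustration)

def decide_mode(current_mode, recent_targets):
    """Step S102: decide whether to switch the focus mode, based on
    the focus targets found in the last N focus areas."""
    if len(recent_targets) < N:
        return current_mode
    all_same = len(set(recent_targets)) == 1 and None not in recent_targets
    if current_mode == "AFC-Auto" and all_same:
        return "AFC-Tracking"          # same target in all N focus areas
    if current_mode == "AFC-Tracking" and recent_targets[-1] is None:
        return "AFC-Auto"              # tracked target left the frame
    return current_mode

# Step S101: collect the focus target of each incoming frame;
# step S103 would then shoot in the mode decide_mode() returns.
recent = deque(maxlen=N)
mode = "AFC-Auto"
for target in ["cat", "cat", "cat", "cat", "cat"]:
    recent.append(target)
    mode = decide_mode(mode, list(recent))
print(mode)  # -> AFC-Tracking
```

The switch back to AFC-Auto when the tracked target disappears mirrors the rule described later for leaving AFC-Tracking mode.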
  • The so-called focusing refers to the process of adjusting the focusing mechanism of the photographing device to change the object distance and image distance so that the image of the subject to be photographed is clear.
  • the shooting device has a variety of focus modes, including a manual focus mode and an autofocus (AF, Auto Focus) mode; different subjects and scenes suit different focus modes, and choosing an appropriate focus mode is more conducive to shooting a good image.
  • the focusing mode in the embodiment of the present application may be a manual focusing mode or an automatic focusing mode.
  • when the photographing device does not detect manual focusing, if it detects that the focus target of the previous N frames of images is the same target, it can automatically switch to the autofocus mode; as soon as manual focusing is detected, it switches back to the manual focus mode. Since most ordinary users use autofocus in most application scenarios, the focus mode in the embodiment of the present application may be the widely used autofocus mode.
  • Autofocus modes include, but are not limited to: single autofocus (AFS, Auto Focus-Single) mode, continuous autofocus mode, and so on.
  • Single autofocus (AFS, Auto Focus-Single) is the most basic focusing mode, in which the focusing operation is performed after half-pressing the shutter. The basic steps are: framing, composing, half-pressing the shutter, focusing, and shooting.
  • Continuous autofocus (AFC) means that the camera always performs the focusing operation, regardless of whether the shutter is pressed halfway.
  • AFC can be roughly divided into the following two categories: (1) AFC-Auto, in which the user does not explicitly point out a focus area or target but only composes the image, and AF is controlled automatically by the system; (2) AFC-Tracking, continuous tracking focus, in which the user explicitly specifies the focus target and focus is maintained on the user-selected target during shooting.
  • the N frames of images in the image sequence can be used to identify the scene currently shot by the user and hence the focus scene; on this basis, the user's current area or object of interest can be determined, providing a basis for deciding whether to switch the focus mode later.
  • the focus area of a frame image may be the focus area of that frame in the current focus mode, or it may be an output matching area that matches the focus area of that frame in the current focus mode.
  • the output matching area can be partially or completely encompassed by the focus area.
  • the match between an output matching area and a focus area may mean that the matching degree between the output matching area of the frame image and the focus area of that frame in the current focus mode is greater than or equal to a preset matching degree, or that the overlap between them is greater than or equal to a preset overlap degree, and so on.
  • N can be set according to the user's needs. Setting N relatively large means that the decision on whether to switch the focus mode is triggered only after focusing has lasted for a relatively long time.
  • the N frames of images may be consecutive N frames of images, or may be non-consecutive N frames of images. Usually, it is more common to use consecutive N frames of images to identify the scene currently shot by the user.
  • Each focusing is a process of making the target (subject) image clear.
  • Each frame of image records the result of each focusing.
  • In this way, the scene currently shot by the user can be identified automatically, the focus scene can be recognized automatically, and whether to switch the current focus mode is then determined according to the recognition result; the user is thus given a better and smoother focusing experience without performing any operation.
  • For example, if the current focus mode is single autofocus, N focus areas of N consecutive frames of images in the image sequence are obtained, and the focus targets in the N focus areas are detected to be the same, the camera can automatically switch to continuous autofocus mode at this point.
  • Step S102, determining whether to switch the current focus mode according to the focus targets in the N focus areas, may include: determining whether to switch the current focus mode according to whether the focus targets in the N focus areas include the same target.
  • the embodiment of the present application is an optimization solution of the focusing system.
  • the embodiment of the present application can automatically detect that the user is "always focusing on the same object" and can automatically switch the focus mode according to the detection result.
  • Step S102, determining whether to switch the current focus mode according to whether the focus targets in the N focus areas include the same target, may include: if the focus targets in the N focus areas include the same target and the current focus mode is the continuous autofocus automatic mode AFC-Auto, controlling the camera to switch to the continuous autofocus tracking mode AFC-Tracking.
  • AFC-Auto keeps trying to refocus during the focusing process; if the focus target is unclear or the focusing system cannot lock focus, the picture can easily be seen hunting back and forth. AFC-Tracking keeps focusing on the area or target specified by the user; even if the target keeps moving, it can stay focused on the target, because it is based on the ability to track the target stably.
  • the advantage of AFC-Tracking over AFC-Auto is that it can track a relatively stable focus area, whereas AFC-Auto focuses on whichever area is conspicuous, and that area may jump randomly from one target to another during the focusing process; AFC-Tracking can therefore bring the user a better focusing experience than AFC-Auto.
  • the method may further include: if the current focus mode is the AFC-Tracking mode and it is detected that the focus target of the focus area in AFC-Tracking mode is no longer in the frame image, controlling the camera to switch to the AFC-Auto mode.
  • FIG. 2 is a schematic diagram of an embodiment of focus mode switching in the shooting method of the present application.
  • Frame 0 to Frame N represent N consecutive frames of images in the image sequence, and each of rows 2 to 4 represents the focus areas of those consecutive N frames.
  • ROI-1, ROI-2 and ROI-3 in the figure represent three different focus areas containing three different focus targets; the AFC-Auto and AFC-Tracking labels on either side represent the focus mode before Frame 0 and after Frame N:
  • the focus areas of Frame 0 to Frame N in the second row are ROI-1, ROI-3, ROI-1, ..., ROI-3;
  • the focus areas of Frame 0 to Frame N in the third row are all ROI-2;
  • the focus areas of Frame 0 to Frame N in the fourth row are ROI-2, ROI-1, ROI-1, ..., ROI-1.
  • In an embodiment in which the focus area can be acquired directly, step S101 (in the current focus mode, acquiring N focus areas of N frames of images in the image sequence) may include:
  • respectively acquiring the N focus areas of N consecutive frames of images in the image sequence.
  • Step S102, determining whether to switch the current focus mode according to whether the focus targets in the N focus areas include the same target, may include sub-step S102A1, sub-step S102A2 and sub-step S102A3, as shown in FIG. 3.
  • Sub-step S102A1: In the current focus mode, extract the feature points of the focus area of each frame image in the image sequence.
  • Sub-step S102A2: Match the feature points of the N focus areas of the N consecutive frames of images.
  • Sub-step S102A3: If the feature points of the N focus areas of the N consecutive frames of images match each other successfully, determine that the focus targets in the N focus areas include the same target.
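Sub-steps S102A1 to S102A3 can be sketched as follows. A real system would extract descriptors with a detector such as ORB or SIFT; here the toy 2-D descriptors, the greedy nearest-neighbour pairing, and the `max_dist` and `min_ratio` thresholds are all illustrative assumptions.

```python
def match_features(desc_a, desc_b, max_dist=10.0):
    """Greedily pair each descriptor in desc_a with its nearest unused
    descriptor in desc_b; a pair counts as matched when the distance is
    below max_dist. Returns the fraction of desc_a that found a match."""
    matched = 0
    used = set()
    for da in desc_a:
        best, best_d = None, float("inf")
        for j, db in enumerate(desc_b):
            if j in used:
                continue
            d = sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None and best_d < max_dist:
            used.add(best)
            matched += 1
    return matched / max(len(desc_a), 1)

def same_target(frames_descriptors, min_ratio=0.6):
    """Sub-step S102A3: the N focus areas contain the same target if the
    feature points of consecutive focus areas match each other."""
    return all(
        match_features(a, b) >= min_ratio
        for a, b in zip(frames_descriptors, frames_descriptors[1:])
    )

# Toy descriptors of the focus area in three consecutive frames:
# the target drifts slightly, so the descriptors stay close.
frames = [
    [(10, 20), (30, 40), (50, 60)],
    [(11, 21), (31, 41), (51, 61)],
    [(12, 22), (32, 42), (52, 62)],
]
print(same_target(frames))  # -> True
```

If the descriptors of consecutive focus areas diverge (the focus jumped to another target), `same_target` returns False and no switch to tracking would be triggered.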
  • Feature points are points where the image gray value changes drastically, or points of large curvature on image edges (i.e. the intersections of two edges); they usually include color feature points and texture feature points.
  • the feature points of an image can reflect the essential characteristics of the image and can identify the target in the image.
  • feature point extraction methods usually include linear projection analysis and nonlinear feature extraction.
  • image matching can be completed through feature point matching: if the feature points of the N focus areas match each other successfully, it is determined that the focus targets in the N focus areas include the same target.
  • In this embodiment, the focus area of each frame image is acquired and its feature points are extracted; the feature points of the N focus areas of the N consecutive frames of images are then matched, and if they match each other successfully, it is determined that the targets in the N focus areas include the same target.
  • In an embodiment in which the photographing device has target recognition and target tracking functions and can output areas containing targets, step S101 (in the current focus mode, acquiring N focus areas of N frames of images in the image sequence) may include sub-step S101A1 and sub-step S101A2.
  • Sub-step S101A1: In the current focus mode, acquire N output matching areas of N consecutive frames of images in the image sequence, where the output matching area of each frame image matches the focus area of the corresponding frame image in the current focus mode.
  • Sub-step S101A2: Use the N output matching areas of the N consecutive frames of images in the image sequence as the N focus areas of the N frames of images in the image sequence.
  • Sub-step S101A1 (in the current focus mode, acquiring N output matching areas of N consecutive frames of images in the image sequence) may include sub-step S101A11 and sub-step S101A12, as shown in FIG. 4.
  • Sub-step S101A11: In the current focus mode, acquire multiple output areas of each frame image in the image sequence through a multi-target tracking algorithm.
  • Sub-step S101A12: Match the focus area of each frame image with the multiple output areas of the corresponding frame image to obtain the output matching area that matches the focus area of each frame image, thereby obtaining the N output matching areas of the N consecutive frames of images in the image sequence.
  • Target tracking means selecting, in a given real-time image sequence, the target to be followed in a certain frame and calculating the size and position of that target in subsequent frames; multiple object tracking (MOT) refers to tracking multiple targets at the same time.
  • multi-target tracking algorithms include, but are not limited to, tracking algorithms based on a Siamese network structure (Fully-Convolutional Siamese Networks for Object Tracking), the correlation-filter-based DSST algorithm (Accurate Scale Estimation for Robust Visual Tracking), and so on. It should be noted that many multi-target tracking algorithms are available, and different algorithms can be selected according to the application platform and computing resources.
  • the multi-target tracking algorithm is used to obtain multiple output areas of each frame image in the image sequence.
  • each output area includes one target, and the multiple targets in the multiple output areas may include the focus target of the focus area. The focus area of each frame image is matched with the multiple output areas of the corresponding frame image (that is, the same frame image) to obtain the output matching area that matches the focus area of that frame image, thereby obtaining, for the N consecutive frames of images in the image sequence, the N output matching areas that match the focus areas of the corresponding frame images.
  • In this way, the automatic switching of the focus mode is completed without the user switching it manually.
  • One classic scene is the long shot.
  • In a long shot, the user may focus on different targets in different time segments.
  • Suppose the long shot consists of time segments T1, T2, ..., TN: in T1 the focus target may be ROI-1, but ROI-1 may no longer appear in T2, during which the user only pays attention to ROI-2.
  • The focus mode can be switched according to this scene recognition: in T1 the camera tracks and focuses on ROI-1 in AFC-Tracking mode; when ROI-1 is lost, it switches to AFC-Auto; entering T2, it automatically detects the new focus target ROI-2 and switches back to AFC-Tracking to track and focus on ROI-2.
  • Sub-step S101A12, matching the focus area of each frame image with the multiple output areas of the corresponding frame image to obtain the output matching area that matches the focus area of each frame image, may further include: if the matching is successful, marking the target in the matched output matching area of each frame image to obtain the marked output matching area of each frame image.
  • Determining whether the focus targets in the N focus areas include the same target may include: judging whether the targets marked in the N marked output matching areas of the N consecutive frames of images in the image sequence are the same target; if so, determining that the focus targets in the N focus areas include the same target.
  • the target in each successfully matched output matching area is marked; if the N successfully matched output matching areas of the N consecutive frames of images are all marked with the same target, it is determined that the focus targets in the N focus areas include the same target.
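The marking check can be sketched as comparing per-frame track IDs. The dict layout (`target_id`, `box`) is an assumed representation of a marked output matching area, not a structure defined by the patent; a tracker would normally supply such an ID per tracked target.

```python
def marks_agree(marked_areas):
    """Each frame's matched output area carries the ID of the target it
    was marked with; the focus targets include the same target iff every
    mark is present and all marks are identical."""
    ids = [area.get("target_id") for area in marked_areas]
    return all(i is not None for i in ids) and len(set(ids)) == 1

# Marked output matching areas of three consecutive frames:
# the same tracked target (ID 7) drifting slightly.
frames = [{"target_id": 7, "box": (120, 80, 60, 60)},
          {"target_id": 7, "box": (124, 82, 60, 60)},
          {"target_id": 7, "box": (129, 85, 60, 60)}]
print(marks_agree(frames))  # -> True
```

A single unmarked frame (no successful match) or a differing ID makes the check fail, so no switch to tracking would be triggered.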
  • In an embodiment, matching the focus area of each frame image with the multiple output areas of the corresponding frame image may include: determining the multiple center distances between the center of the focus area of each frame image and the centers of the multiple output areas of the corresponding frame image.
  • Determining whether the matching is successful may include: comparing the multiple center distances with a distance threshold; if among the multiple center distances there is a center distance smaller than the distance threshold, the matching is determined to be successful, and the output area corresponding to the center distance smaller than the distance threshold is the output matching area matched with the focus area.
  • The focus area AA has a center O, and each output area BB1, BB2, BB3, ..., BBn has a center O1, O2, O3, ..., On; the center distances between O and O1, O2, O3, ..., On simply and conveniently represent the distances between the focus area and the output areas.
  • When the center distance between the center of the focus area and the center of an output area is less than half the side length of the focus area, the focus area and that output area overlap; on this basis a distance threshold can be determined.
  • If the center distance between the center of the focus area of a frame image and the center of an output area of the same frame image is less than the distance threshold, the focus area can be considered to match that output area, and the output area corresponding to that center distance is the output matching area matched with the focus area.
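The center-distance criterion can be sketched as follows. The `(x, y, w, h)` box convention is an assumption, and using half the shorter side of the focus area as the threshold is one illustrative reading of the overlap argument above.

```python
def center(box):
    x, y, w, h = box          # box as (x, y, width, height)
    return (x + w / 2.0, y + h / 2.0)

def match_by_center(focus_box, output_boxes):
    """Pick the output area whose center is nearest the focus area's
    center; accept it only if that distance is below the threshold
    (half the focus area's side, per the overlap argument)."""
    fx, fy = center(focus_box)
    threshold = min(focus_box[2], focus_box[3]) / 2.0
    best, best_d = None, float("inf")
    for i, bb in enumerate(output_boxes):
        ox, oy = center(bb)
        d = ((fx - ox) ** 2 + (fy - oy) ** 2) ** 0.5
        if d < best_d:
            best, best_d = i, d
    return best if best_d < threshold else None

focus = (100, 100, 80, 80)           # AA: the current focus area
outputs = [(400, 50, 60, 60),        # BB1: far from the focus area
           (110, 105, 80, 80)]       # BB2: nearly coincident with it
print(match_by_center(focus, outputs))  # -> 1
```

Returning `None` corresponds to an unsuccessful match, in which case the focus area could be handed to the multi-target tracker as a new output area, as described below.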
  • In an embodiment, matching the focus area of each frame image with the multiple output areas of the corresponding frame image may include: extracting the feature points of the focus area of each frame image and of the multiple output areas of the corresponding frame image, and matching the feature points of the focus area with the feature points of each of the multiple output areas to obtain multiple matching degrees.
  • Determining whether the matching is successful may include: if among the multiple matching degrees there is a matching degree greater than or equal to a threshold matching degree, determining that the matching is successful; the output area corresponding to that matching degree is the output matching area matched with the focus area.
  • A threshold matching degree is predetermined; the feature points of the focus area of the frame image are matched with the feature points of the multiple output areas of the corresponding frame image to obtain multiple matching degrees. If a matching degree is greater than or equal to the threshold matching degree, the matching is successful, and the corresponding output area is the output matching area matched with the focus area.
  • In an embodiment, the center distance between the center of the focus area and the center of an output area and the matching degree between the focus area and the output area are combined to determine whether the focus area matches the output area.
  • Matching the focus area of each frame image with the multiple output areas of the corresponding frame image may include: determining the multiple center distances between the center of the focus area of each frame image and the centers of the multiple output areas of the corresponding frame image and comparing them with a distance threshold; and extracting the feature points of the focus area of each frame image and of the multiple output areas of the corresponding frame image, then matching the feature points of the focus area with those of the multiple output areas to obtain multiple matching degrees.
  • If among the multiple center distances there is a center distance smaller than the distance threshold, and among the multiple matching degrees there is a matching degree greater than or equal to the threshold matching degree, the matching is determined to be successful, and the output area for which both the center distance is smaller than the distance threshold and the matching degree is greater than or equal to the threshold matching degree is the output matching area matched with the focus area.
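The combined criterion can be sketched by assuming each candidate output area arrives with its center distance and feature-matching degree already computed (by routines like the earlier sketches); the dict layout, the tie-break by smallest distance, and the threshold values are all illustrative assumptions.

```python
def combined_match(candidates, dist_thresh, match_thresh):
    """Accept a candidate output area only when BOTH its center distance
    is below dist_thresh AND its feature-matching degree reaches
    match_thresh; among acceptable candidates, the one with the
    smallest center distance wins."""
    ok = [c for c in candidates
          if c["center_dist"] < dist_thresh
          and c["match_degree"] >= match_thresh]
    if not ok:
        return None  # no output area satisfies both criteria
    return min(ok, key=lambda c: c["center_dist"])["id"]

candidates = [
    {"id": "BB1", "center_dist": 12.0, "match_degree": 0.4},   # close but dissimilar
    {"id": "BB2", "center_dist": 18.0, "match_degree": 0.9},   # close and similar
    {"id": "BB3", "center_dist": 90.0, "match_degree": 0.95},  # similar but far away
]
print(combined_match(candidates, dist_thresh=40.0, match_thresh=0.6))  # -> BB2
```

Requiring both criteria filters out nearby but dissimilar areas as well as similar but distant ones, which is the point of combining the two tests.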
  • the method may further include: if the matching is unsuccessful, taking the focus area of the frame image for which the matching was unsuccessful as a new output area of the multi-target tracking algorithm.
  • In an embodiment in which the focus target includes a designated target, the method may further include: if the current focus mode is the AFC-Tracking mode, assisting in tracking the designated target and focusing on it through a designated-target detection algorithm.
  • In an embodiment, the designated target includes a human face.
  • In this case the focus area may always be the face area. After automatically switching to the AFC-Tracking focus mode, face detection can be performed and the current focus area matched with the face detection results to judge whether the current focus area is a human face; if it is, a face detection algorithm can be introduced in the subsequent process to assist in tracking the face and focusing on it.
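Judging whether the current focus area is a face can be sketched by overlapping it with the detector's face boxes. The intersection-over-union measure and the 0.5 threshold are illustrative assumptions; the patent only requires matching the focus area with the face detection results.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def focus_is_face(focus_box, face_boxes, thresh=0.5):
    """The current focus area is judged to be a face when it overlaps
    any detected face box strongly enough."""
    return any(iou(focus_box, f) >= thresh for f in face_boxes)

faces = [(48, 30, 64, 64)]                     # boxes from a face detector
print(focus_is_face((50, 32, 64, 64), faces))  # -> True
print(focus_is_face((300, 10, 64, 64), faces)) # -> False
```

When the check succeeds, subsequent frames could prefer the detector's face box as the tracking target, which is the "assist" role described above.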
  • FIG. 6 is a schematic structural diagram of an embodiment of a photographing device of the present application. It should be noted that the photographing device of this embodiment can execute the steps in the above photographing method; for a detailed description of the relevant content, please refer to the above photographing method, which will not be repeated here.
  • the photographing device 100 includes: a memory 1 and a processor 2; the processor 2 and the memory 1 are connected through a bus.
  • the processor 2 may be a microcontroller unit, a central processing unit or a digital signal processor, and so on.
  • the memory 1 may be a Flash chip, a read-only memory, a magnetic disk, an optical disk, a USB flash drive, a portable hard disk, and the like.
  • the memory 1 is used to store a computer program; the processor 2 is used to execute the computer program and implement the following steps when executing the computer program:
  • in the current focus mode, acquire N focus areas of N frames of images in the image sequence; determine whether to switch the current focus mode according to the focus targets in the N focus areas; and shoot in the determined focus mode.
  • the processor when executing the computer program, implements the following steps: determining whether to switch the current focus mode according to whether the focus targets in the N focus areas include the same target.
  • the processor, when executing the computer program, implements the following steps: if the focus targets in the N focus areas include the same target and the current focus mode is the continuous autofocus automatic mode AFC-Auto, controlling the camera to switch to the continuous autofocus tracking mode AFC-Tracking.
  • the processor, when executing the computer program, implements the following steps: if the current focus mode is the AFC-Tracking mode and it is detected that the focus target of the focus area in AFC-Tracking mode is not in the frame image, controlling the camera to switch to the AFC-Auto mode.
  • the processor when executing the computer program, implements the following steps: in the current focus mode, extracting the feature points of the focus area of each frame image in the image sequence respectively; The feature points of the focus areas are matched; if the feature points of the N focus areas of consecutive N frames of images are successfully matched with each other, it is determined that the focus targets in the N focus areas include the same target.
  • the processor when executing the computer program, implements the following steps: in the current focus mode, acquiring N output matching regions of consecutive N frames of images in the image sequence, and the output matching regions of each frame image are related to the corresponding frame The focus areas of the images in the current focus mode are matched; the N output matching areas of the N consecutive frames of images in the image sequence are used as the N focus areas of the N frames of images in the image sequence.
  • the processor, when executing the computer program, implements the following steps: in the current focus mode, obtaining multiple output areas of each frame image in the image sequence through a multi-target tracking algorithm; matching the focus area of each frame image with the multiple output areas of the corresponding frame image to obtain the output matching area that matches the focus area of each frame image, thereby obtaining the N output matching areas of the N consecutive frames of images in the image sequence.
  • the processor, when executing the computer program, implements the following steps: matching the focus area of each frame image with the multiple output areas of the corresponding frame image; and, if the matching is successful, marking the target in the matched output matching area of each frame image to obtain the marked output matching area of each frame image.
  • the processor, when executing the computer program, implements the following steps: judging whether the targets marked in the N marked output matching areas of the N consecutive frame images in the image sequence are the same target; and, if so, determining that the focus targets in the N focus areas include the same target.
  • the processor, when executing the computer program, implements the following steps: determining multiple center distances between the center of the focus area of each frame image and the centers of the multiple output areas of the corresponding frame image.
  • the processor, when executing the computer program, implements the following steps: comparing the multiple center distances with a distance threshold; and, if any of the multiple center distances is smaller than the distance threshold, determining that the matching is successful, the output area whose center distance is smaller than the distance threshold being the output matching area.
  • the processor, when executing the computer program, implements the following steps: extracting the feature points of the focus area of each frame image and of the multiple output areas of the corresponding frame image; and matching the feature points of the focus area of each frame image with the feature points of the multiple output areas of the corresponding frame image to obtain multiple matching degrees.
  • the processor, when executing the computer program, implements the following steps: if any of the multiple matching degrees is greater than or equal to a threshold matching degree, determining that the matching is successful, the output area whose matching degree is greater than or equal to the threshold matching degree being the output matching area.
  • the processor, when executing the computer program, implements the following steps: determining multiple center distances between the center of the focus area of each frame image and the centers of the multiple output areas of the corresponding frame image, and comparing the multiple center distances with a distance threshold; and extracting the feature points of the focus area of each frame image and of the multiple output areas of the corresponding frame image, and matching the feature points of the focus area of each frame image with the feature points of the multiple output areas of the corresponding frame image to obtain multiple matching degrees.
  • the processor, when executing the computer program, implements the following steps: if any of the multiple center distances is smaller than the distance threshold and any of the multiple matching degrees is greater than or equal to the threshold matching degree, determining that the matching is successful, the output area whose center distance is smaller than the distance threshold and whose matching degree is greater than or equal to the threshold matching degree being the output matching area.
  • when the processor executes the computer program, the following steps are implemented: if the matching is unsuccessful, taking the focus area of the frame image whose matching is unsuccessful as a new output area of the multi-target tracking algorithm.
  • the focus target includes a designated target.
  • the processor, when executing the computer program, implements the following steps: if the current focus mode is the AFC-Tracking mode, assisting in tracking the designated target through a designated-target detection algorithm and focusing on the designated target.
  • the designated target includes a human face.
  • the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, causes the processor to implement the photographing method described in any one of the above.
  • the computer-readable storage medium may be an internal storage unit of the above-mentioned photographing apparatus, such as a hard disk or a memory.
  • the computer-readable storage medium may also be an external storage device, such as a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, and the like.


Abstract

A photographing method, a photographing apparatus and a storage medium. The method comprises: in a current focus mode, acquiring N focus areas of N frame images in an image sequence (S101); determining, according to focus targets in the N focus areas, whether to switch the current focus mode (S102); and photographing in the determined focus mode (S103).

Description

拍摄方法、拍摄装置及存储介质 技术领域
本申请涉及拍摄技术领域,尤其涉及一种拍摄方法、拍摄装置及存储介质。
背景技术
连续自动对焦(AFC,Auto Focus-Continuous)是指不管是否半按快门,拍摄装置始终执行对焦操作。对于AFC,可以大致分为以下两类:(1)连续自动对焦自动模式AFC-Auto,即用户不明确指出对焦区域或者目标,只进行构图,自动对焦(AF,Auto Focus)由系统自动控制;(2)连续自动对焦跟踪模式AFC-Tracking,连续跟踪对焦,即用户明确指定对焦目标,并在拍摄过程中保持对焦到用户选择的目标。
AFC-Auto模式下,对焦过程中随机跳动,从某一个目标直接跳到另外一个目标,容易看到画面不断来回对焦;AFC-Tracking模式相对AFC-Auto模式的优势是能够跟踪到一个比较稳定的对焦区域,能给用户带来更好的对焦体验。但是,拍摄装置不能从AFC-Auto模式自动切换到AFC-Tracking模式。
发明内容
基于此,本申请提供一种拍摄方法、拍摄装置及存储介质。
第一方面,本申请提供了一种拍摄方法,包括:
在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域;
根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;
以确定后的对焦模式进行拍摄。
第二方面,本申请提供了拍摄装置,所述装置包括:存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:
在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域;
根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;
以确定后的对焦模式进行拍摄。
第三方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如上所述的拍摄方法。
本申请实施例提供了一种拍摄方法、拍摄装置及存储介质,在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域;根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;以确定后的对焦模式进行拍摄。由于在当前对焦模式下获取N帧图像的N个对焦区域,根据N个对焦区域中的对焦目标,能够比较稳定地自动识别出用户当前拍摄的场景,自动识别出对焦场景,进而根据识别结果确定是否对当前对焦模式进行切换,通过这种方式,能够在用户不进行任何操作的情况下,给用户带来更好更顺畅的对焦体验。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
附图说明
为了更清楚地说明本申请实施例技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请拍摄方法一实施例的流程示意图;
图2是本申请拍摄方法中对焦模式切换一实施例的示意图;
图3是本申请拍摄方法另一实施例的流程示意图;
图4是本申请拍摄方法又一实施例的流程示意图;
图5是本申请拍摄方法中利用中心距离进行匹配一实施例的原理示意图;
图6是本申请拍摄装置一实施例的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
附图中所示的流程图仅是示例说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解、组合或部分合并,因此实际执行的顺序有可能根据实际情况改变。
连续自动对焦AFC可以大致分为以下两类:连续自动对焦自动模式AFC-Auto和连续自动对焦跟踪模式AFC-Tracking。AFC-Tracking模式相对AFC-Auto模式的优势是能够跟踪到一个比较稳定的对焦区域,能给用户带来更好的对焦体验。但是,拍摄装置不能从AFC-Auto模式自动切换到AFC-Tracking模式。
本申请实施例获取图像序列中N帧图像的N个对焦区域;根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;以确定后的对焦模式进行拍摄。由于在当前对焦模式下获取N帧图像的N个对焦区域,根据N个对焦区域中的对焦目标,能够比较稳定地自动识别出用户当前拍摄的场景,自动识别出对焦场景,进而根据识别结果确定是否对当前对焦模式进行切换,并以确定后的对焦模式进行拍摄,通过这种方式,能够在用户不进行任何操作的情况下,给用户带来更好更顺畅的对焦体验。例如,可以根据N个对焦区域中的对焦目标是否包括同一个目标,确定是否自动切换到AFC-Tracking模式,用户不进行任何操作,即可自动识别自动切换,能够给用户带来更好更顺畅的自动对焦体验。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
参见图1,图1是本申请拍摄方法一实施例的流程示意图,所述方法包括:
步骤S101:在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域。
步骤S102:根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换。
步骤S103:以确定后的对焦模式进行拍摄。
所谓对焦,是指调整拍摄装置对焦机构改变物距和像距的位置,使被拍物(拍摄对象或拍摄主体)成像清晰的过程。通常拍摄装置有多种对焦模式,包括手动对焦模式和自动对焦(AF,Auto Focus)模式;根据拍摄的主体和场景不同,所适用的对焦模式也不同,选择合适的对焦模式更有利于拍摄出好的图像。本申请实施例的对焦模式可以是手工对焦模式,也可以是自动对焦模式。例如:拍摄装置没有检测到手工对焦时,如果检测到前面N帧图像的对焦目标为同一个目标,可以自动切换为自动对焦模式,只要检测到手工对焦,则切换回手工对焦模式。由于较多的应用场景下大部分普通用户通常采用自动对焦模式,因此本申请实施例的对焦模式可以是应用较多的自动对焦模式。
自动对焦模式包括但不限于:单次自动对焦(AFS,Auto Focus-Single)模式、连续自动对焦模式,等等。单次自动对焦AFS是一种最基本的对焦模式,就是半按快门才进行对焦操作,基本步骤是:取景、构图、半按快门、对焦、拍摄。连续自动对焦AFC是指,不管是否半按快门,拍摄装置始终执行对焦操作。对于AFC,可以大致分为以下两类:(1)AFC-Auto,即用户不明确指出对焦区域或者目标,只进行构图,AF由系统自动控制;(2)AFC-Tracking,连续跟踪对焦,即用户明确指定对焦目标,并在拍摄过程中保持对焦到用户选择的目标。
图像序列中N帧图像可以用来识别出用户当前拍摄的场景,进而可以识别出对焦场景,据此可以判断用户当前感兴趣区域或感兴趣对象,为后续是否切换对焦模式提供依据。
一个帧图像的对焦区域可以是在当前对焦模式下该帧图像中的对焦区域,一个帧图像的对焦区域还可以是与该帧图像在所述当前对焦模式下的对焦区域匹配的输出匹配区域,输出匹配区域可以是部分包括或完全包括对焦区域。 本实施例中,输出匹配区域与对焦区域匹配可以是该帧图像的输出匹配区域与该帧图像在所述当前对焦模式下的对焦区域之间的匹配度大于或等于预设匹配度,或者可以是该帧图像的输出匹配区域与该帧图像在所述当前对焦模式下的对焦区域之间的重叠程度大于或等于预设重叠程度,等等。
其中,N可以根据用户的需求进行设置,N如果设置比较大,可以表示对焦时间比较久的时候才触发确定是否进行对焦模式的切换。N帧图像可以是连续的N帧图像,也可以是非连续的N帧图像。通常情况下,采用连续的N帧图像识别用户当前拍摄的场景更为常见。
每一次对焦,都是使目标(被拍物)成像清晰的过程,每个帧图像记录着每一次对焦的结果,根据述N个对焦区域中的对焦目标,可以比较稳定地自动识别出用户当前拍摄的场景,自动识别出对焦场景,进而根据识别结果确定是否对当前对焦模式进行切换,通过这种方式,能够在用户不进行任何操作的情况下,给用户带来更好更顺畅的对焦体验。
例如:如果当前对焦模式为单次自动对焦,在单次自动对焦模式下,得到图像序列中连续N帧图像的N个对焦区域,检测到N个对焦区域中的对焦目标是一样的,此时可以自动切换到连续自动对焦模式。
本申请实施例获取图像序列中N帧图像的N个对焦区域;根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;以确定后的对焦模式进行拍摄。由于在当前对焦模式下获取N帧图像的N个对焦区域,根据N个对焦区域中的对焦目标,能够比较稳定地自动识别出用户当前拍摄的场景,自动识别出对焦场景,进而根据识别结果确定是否对当前对焦模式进行切换,并以确定后的对焦模式进行拍摄,通过这种方式,能够在用户不进行任何操作的情况下,给用户带来更好更顺畅的对焦体验。例如,可以根据N个对焦区域中的对焦目标是否包括同一个目标,确定是否自动切换到AFC-Tracking模式,用户不进行任何操作,即可自动识别自动切换,能够给用户带来更好更顺畅的自动对焦体验。
在一实施例中,步骤S102,根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换,可以包括:根据所述N个对焦区域中的对焦目标是否包括同一个目标,确定是否对所述当前对焦模式进行切换。
本申请实施例是一种对焦系统的优化方案,在用户不进行任何操作的时候,本申请实施例可以自动检测出用户“一直对焦同一个物体”的情形,并能够根据检测结果自动切换对焦模式。
在一实施例中,步骤S102,所述根据所述N个对焦区域中的对焦目标是否包括同一个目标,确定是否对所述当前对焦模式进行切换,可以包括:若所述N个对焦区域中的对焦目标包括同一个目标,且所述当前对焦模式为连续自动对焦的自动模式AFC-Auto,则控制摄像装置切换到连续自动对焦的跟踪模式AFC-Tracking。
AFC-Auto在对焦的过程中对焦系统会不断来回尝试对焦,如果此时对焦目标不明确,或者对焦系统无法固定抓到焦点,此时可能容易看到画面不断来回对焦;AFC-Tracking则会持续对焦到用户指定的区域或目标,即使目标在不断运动的时候也能够对焦到目标上,它建立在能够稳定跟踪到目标的基础上。AFC-Tracking相对AFC-Auto的优势是能够跟踪到一个比较稳定的对焦区域,而AFC-Auto则是对焦到一个显著的区域,这个区域在对焦的过程中是可能随机跳动的,从某一个物体直接跳到另外一个目标,AFC-Tracking相对AFC-Auto,能给用户带来更好的对焦体验。
其中,若所述当前对焦模式为AFC-Tracking模式,所述方法还可以包括:若所述当前对焦模式为AFC-Tracking模式,且检测到所述AFC-Tracking模式下对焦区域中的对焦目标不在帧图像中,则控制所述摄像装置切换到AFC-Auto模式。
如图2所示,图2是本申请拍摄方法中对焦模式切换一实施例的示意图。第1行中,帧0-帧N(Frame 0-Frame N)表示图像序列中某个连续的N帧图像,第2-4行的每一行表示连续N帧的对焦情况,图中ROI-1、ROI-2、ROI-3表示3个不同的对焦区域,3个不同的对焦区域中包括3个不同的对焦目标,两侧的AFC-Auto,AFC-Tracking表示Frame 0之前和Frame N之后的对焦模式变化:
第2行中Frame 0-Frame N的对焦区域依次是ROI-1、ROI-3、ROI-1、…、ROI-3;
第3行中Frame 0-Frame N的对焦区域都是一个ROI-2;
第4行中Frame 0-Frame N的对焦区域依次是ROI-2、ROI-1、ROI-1、…、ROI-1;
通常情况下,如果连续N帧的对焦区域为同一个目标,可以认为此时用户比较关注该目标,此时可以触发对焦模式的切换,由AFC-Auto模式切换为AFC-Tracking模式,即图2中第3行的情形;如果此前AFC-Tracking中的对焦区域ROI-2中的对焦目标丢失,对焦区域ROI-2丢失(比如遮挡等情况导致目标不在镜头画面中),可以认为此时可以自动退出AFC-Tracking模式。通过识别上述连续N帧图像内对焦区域的变化情况,判断用户是否对同一个目标持续对焦,进而确定是否切换对焦模式。
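The switching rule described above — enter AFC-Tracking once N consecutive frames focus on the same target, and fall back to AFC-Auto when the tracked target is no longer in the frame image — can be sketched as a small state machine. This is an illustrative sketch only, not the patent's implementation; the class and method names and the choice of N are assumptions:

```python
AFC_AUTO = "AFC-Auto"
AFC_TRACKING = "AFC-Tracking"


class FocusModeController:
    """Toy controller for the AFC-Auto <-> AFC-Tracking switch."""

    def __init__(self, n_frames=5):
        self.mode = AFC_AUTO
        self.n_frames = n_frames
        self.recent = []  # target IDs of the last N focus areas

    def on_frame(self, focus_target_id):
        """focus_target_id: ID of the target in this frame's focus area,
        or None when the (tracked) target is not in the frame image."""
        if self.mode == AFC_TRACKING:
            if focus_target_id is None:
                # Tracked target lost (e.g. occluded): exit tracking mode.
                self.mode = AFC_AUTO
                self.recent = []
            return self.mode

        # AFC-Auto: accumulate the focus targets of consecutive frames.
        if focus_target_id is None:
            self.recent = []
        else:
            self.recent = (self.recent + [focus_target_id])[-self.n_frames:]
            if len(self.recent) == self.n_frames and len(set(self.recent)) == 1:
                # Same target focused for N consecutive frames: start tracking.
                self.mode = AFC_TRACKING
        return self.mode
```

With this sketch, a sequence like the third row of Fig. 2 (ROI-2 for N consecutive frames) would trigger the switch to AFC-Tracking, while rows whose focus area keeps jumping would stay in AFC-Auto.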
在一实施例中,如果拍摄装置没有目标识别、目标跟踪功能,不能输出包括目标的输出区域时,可以直接获取对焦区域,步骤S101,所述在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域,可以包括:
在当前对焦模式下,分别获取图像序列中连续N帧图像的N个所述对焦区域。
此时,步骤S102,所述根据所述N个对焦区域中的对焦目标是否包括同一个目标,包括:子步骤S102A1、子步骤S102A2以及子步骤S102A3,如图3所示。
子步骤S102A1:在当前对焦模式下,分别提取图像序列中每个帧图像的所述对焦区域的特征点。
子步骤S102A2:将连续N帧图像的N个所述对焦区域的特征点进行匹配。
子步骤S102A3:若连续N帧图像的N个所述对焦区域的特征点互相匹配成功,则确定N个对焦区域中的对焦目标包括同一个目标。
特征点可以是图像灰度值发生剧烈变化的点或者在图像边缘上曲率较大的点(即两个边缘的交点),通常包括颜色特征点和纹理特征点。图像的特征点能够反映图像本质特征,能够标识图像中的目标。特征点的提取方法通常包括线性投影分析和非线性特征抽取。通过特征点的匹配能够完成图像的匹配,如果N个所述对焦区域的特征点互相匹配成功,则确定N个对焦区域中的对焦目标包括同一个目标。
本实施例在当前对焦模式下，获取每个帧图像的对焦区域，并提取每个帧图像的对焦区域的特征点，然后将连续N帧图像的N个所述对焦区域的特征点进行匹配，如果N个所述对焦区域的特征点互相匹配成功，则确定N个对焦区域中的目标包括同一个目标。
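As a rough illustration of this feature-point check, the sketch below treats each focus area as a set of feature descriptors and decides that the N focus areas contain the same target when every consecutive pair of areas shares a sufficient fraction of descriptors. A real implementation would extract ORB/SIFT-style descriptors and use a nearest-neighbour matcher; the set representation and the 0.6 threshold here are assumptions made for illustration:

```python
def match_ratio(desc_a, desc_b):
    """Fraction of descriptors in desc_a that also appear in desc_b."""
    if not desc_a:
        return 0.0
    shared = set(desc_a) & set(desc_b)
    return len(shared) / len(desc_a)


def same_target(focus_area_descriptors, threshold=0.6):
    """True if the feature points of the N focus areas match each other,
    i.e. every consecutive pair of areas exceeds the match threshold."""
    pairs = zip(focus_area_descriptors, focus_area_descriptors[1:])
    return all(match_ratio(a, b) >= threshold for a, b in pairs)
```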
在另一实施例中,如果拍摄装置有目标识别、目标跟踪功能,能够输出包括目标的输出区域时,步骤S101,所述在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域,可以包括:子步骤S101A1和子步骤S101A2。
子步骤S101A1:在当前对焦模式下,获取图像序列中连续N帧图像的N个输出匹配区域,每个帧图像的输出匹配区域与对应帧图像在所述当前对焦模式下的对焦区域匹配。
子步骤S101A2:将所述图像序列中连续N帧图像的N个输出匹配区域作为图像序列中N帧图像的N个对焦区域。
其中,子步骤S101A1,在当前对焦模式下,获取图像序列中连续N帧图像的N个输出匹配区域,可以包括:子步骤S101A11和子步骤S101A12,如图4所示。
子步骤S101A11:在当前对焦模式下,通过多目标跟踪算法获取图像序列中每个帧图像的多个输出区域。
子步骤S101A12:将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,得到每个帧图像与对应帧图像的对焦区域匹配的输出匹配区域,进而得到所述图像序列中连续N帧图像的N个输出匹配区域。
目标跟踪可以是在给定的实时图像序列中,在某一帧图像中框选出想要跟随的目标,在后续帧图像中计算出该目标的大小和位置;多目标跟踪(MOT,Multiple Object Tracking)指同时对多个目标进行跟踪。其中,多目标跟踪算法包括但不限于:基于孪生网络结构的跟踪算法(Fully-Convolutional Siamese Networks for Object Tracking)、基于相关滤波器的DSST算法(Accurate Scale Estimation for Robust Visual Tracking),等等。需要说明的是,现有可用的多目标跟踪算法有很多,可以根据不同的应用平台和计算资源选择不同的多目标跟踪算法。
本实施例通过多目标跟踪算法获取图像序列中每个帧图像的多个输出区域,通常情况下,每个输出区域会包括一个目标,这多个输出区域的多个目标 有可能包括对焦区域的对焦目标,将每个帧图像的对焦区域与对应帧图像(即该帧图像)的多个输出区域进行匹配,可以得到每个帧图像与对应帧图像的对焦区域匹配的输出匹配区域,进而得到所述图像序列中连续N帧图像的N个与对应帧图像的对焦区域匹配的输出匹配区域。
在很多场景下,用户不需要手动切换对焦模式就可以完成对焦模式的自动切换。其中一个比较经典的场景是长镜头,用户在整个拍摄的过程中,不同的时间片段用户可能关注着不同的目标,假设长镜头由T1、T2、……、TN个时间片段构成,在T1中对焦的目标可能是ROI-1,在T2中可能没有ROI-1,用户只关注ROI-2,此时也可以根据这种场景识别实现对焦模式的切换,即在T1中对焦ROI-1的模式为AFC-Tracking,当ROI-1跟丢的时候切换到AFC-Auto,进入T2时刻,自动检测到对焦目标ROI-2,切换到AFC-Tracking实现对ROI-2的跟踪对焦。
其中,子步骤S101A12,所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,得到每个帧图像与对应帧图像的对焦区域匹配的输出区域,还可以包括:
(A1)将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配。
(A2)若匹配成功,将每个帧图像中匹配的输出匹配区域中对应的目标进行标记,得到每个帧图像与对应帧图像的对焦区域匹配的标记输出匹配区域。
此时,步骤S102中,所述根据所述N个对焦区域中的对焦目标是否包括同一个目标,可以包括:判断所述图像序列中连续N帧图像的N个标记输出匹配区域中标记的目标是否为同一个目标;若是,则确定所述N个对焦区域中的对焦目标包括同一个目标。
由于匹配成功时,会对匹配成功的输出匹配区域中的目标进行标记,按照顺序连续N帧图像的N个匹配成功的输出匹配区域均标记同一个目标,则确定所述N个对焦区域中的目标包括同一个目标。
在一实施例中，A1，所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配，可以包括：确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离。此时，A2中，所述若匹配成功，可以包括：判断所述多个中心距离与距离阈值的大小；若所述多个中心距离中存在小于距离阈值的中心距离，则确定匹配成功，中心距离小于距离阈值对应的输出区域为与所述对焦区域匹配的输出匹配区域。
如图5所示,对于每个帧图像来说,对焦区域AA有个中心O,每个输出区域BB1、BB2、BB3、……、BBn也各有一个中心O1、O2、O3、……、On,对焦区域AA的中心O与各个输出区域BB1、BB2、BB3、……、BBn的中心O1、O2、O3、……、On之间的中心距离的大小,可以简单方便地表示对焦区域与输出区域之间的距离,当对焦区域的中心与输出区域的中心之间中心距离小于对焦区域边长的一半时,对焦区域与输出区域会重叠,对焦区域与输出区域重叠部分越多,对焦区域与输出区域匹配程度越高,对焦区域的中心与输出区域的中心之间距离越小,甚至可能两个中心重合。据此可以确定一个距离阈值,当帧图像的对焦区域的中心与对应帧图像的输出区域的中心之间的中心距离小于距离阈值时,可以认为对焦区域与该输出区域匹配,中心距离小于距离阈值对应的输出区域为与所述对焦区域匹配的输出匹配区域。
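The centre-distance test illustrated by Fig. 5 might be sketched as follows: compute the distance from the focus area's centre O to each output area's centre O1…On, and accept the nearest output area whose centre distance falls below the distance threshold. The region representation and the threshold value are illustrative assumptions:

```python
import math


def match_by_center(focus_center, output_centers, distance_threshold):
    """Return (index, distance) of the nearest output area whose centre
    lies within distance_threshold of the focus centre, or None if no
    output area is close enough (matching unsuccessful)."""
    best = None
    for i, (cx, cy) in enumerate(output_centers):
        d = math.hypot(cx - focus_center[0], cy - focus_center[1])
        if d < distance_threshold and (best is None or d < best[1]):
            best = (i, d)
    return best
```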
在另一实施例中,A1,所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,可以包括:分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点;将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度。此时,A2中,所述若匹配成功,可以包括:若所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述匹配度大于或等于阈值匹配度对应的输出区域为与所述对焦区域匹配的输出匹配区域。
本实施例中,预先确定一个阈值匹配度,将帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度,如果多个匹配度中存在匹配度大于或等于阈值匹配度,则匹配成功,匹配度大于或等于阈值匹配度对应的输出区域为与所述对焦区域匹配的输出匹配区域。
在又一实施例中,为了提高匹配的准确度,将对焦区域的中心与输出区域的中心之间的中心距离和对焦区域与输出区域之间的匹配度结合起来一起判断对焦区域与输出区域的匹配情况。
A1，所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配，可以包括：确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离，判断所述多个中心距离与距离阈值的大小；分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点，将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配，得到多个匹配度。此时，A2中，所述若匹配成功，可以包括：若所述多个中心距离中存在小于距离阈值的中心距离，且所述多个匹配度中存在大于或等于阈值匹配度的匹配度，则确定匹配成功，所述中心距离小于距离阈值、且所述匹配度大于或等于阈值匹配度对应的输出区域为与所述对焦区域匹配的输出匹配区域。
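A minimal sketch of this combined criterion, assuming the centre and feature-matching degree of each output area have already been computed: an output area becomes the output matching area only when its centre distance is below the distance threshold and its matching degree reaches the threshold matching degree. The data layout is an assumption for illustration:

```python
import math


def match_combined(focus_center, candidates, dist_thresh, score_thresh):
    """candidates: list of ((cx, cy), match_score) per output area.
    Returns the index of the first output area satisfying both criteria,
    or None if the matching is unsuccessful."""
    for i, ((cx, cy), score) in enumerate(candidates):
        d = math.hypot(cx - focus_center[0], cy - focus_center[1])
        if d < dist_thresh and score >= score_thresh:
            return i
    return None
```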
在一实施例中,如果匹配不成功,说明当前的多目标跟踪算法不能识别、跟踪对焦区域中的对焦目标,此时可以使多目标跟踪算法学习识别该对焦区域中的对焦目标,在后续帧图像中能够输出对应的输出区域。因此所述方法还可以包括:若匹配不成功,则将匹配不成功的帧图像的对焦区域作为所述多目标跟踪算法新的输出区域。
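The fallback just described — when matching fails, feed the unmatched focus area to the multi-target tracking algorithm as a new output area so that later frames can produce a matching region — might look like the sketch below; the tracker interface (`add_track`) is hypothetical:

```python
class MultiTargetTracker:
    """Stand-in for a real multi-target tracker."""

    def __init__(self):
        self.tracks = []  # region boxes the tracker will output per frame

    def add_track(self, region):
        # Learn a new target from the given region.
        self.tracks.append(region)
        return len(self.tracks) - 1  # new track ID


def handle_unmatched(tracker, focus_region, matched_index):
    """If matching failed (matched_index is None), seed a new track from
    the unmatched frame's focus area; otherwise keep the matched index."""
    if matched_index is None:
        return tracker.add_track(focus_region)
    return matched_index
```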
在较多的应用场景下,所述对焦目标包括指定目标,所述方法还可以包括:若所述当前对焦模式为AFC-Tracking模式,通过指定目标检测算法辅助跟踪所述指定目标并对焦所述指定目标。
其中,所述指定目标包括人脸。
比如:在对某个人进行专访的时候对焦区域可能就一直是人脸区域;如果自动切换到AFC-Tracking对焦模式中,可以进行人脸检测,将当前对焦区域和人脸检测结果进行匹配,判断当前对焦区域是否为人脸,如果为人脸,那么在后续的过程中可以引入人脸检测算法辅助跟踪人脸并对焦人脸。
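As an illustration of face-detection-assisted tracking focus, the sketch below snaps the tracked region to the best-overlapping detected face in each frame. The IoU threshold and the `(x, y, w, h)` box format are assumptions, and `face_boxes` could come from any face detector:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0


def refine_with_faces(track_box, face_boxes, min_iou=0.3):
    """Snap the tracked box to the best-overlapping detected face, if any;
    otherwise keep the tracker's own box."""
    best = max(face_boxes, key=lambda f: iou(track_box, f), default=None)
    if best is not None and iou(track_box, best) >= min_iou:
        return best
    return track_box
```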
参见图6,图6是本申请拍摄装置一实施例的结构示意图,需要说明的是,本实施例的拍摄装置能够执行上述拍摄方法中的步骤,相关内容的详细说明,请参见上述拍摄方法的相关内容,在此不再赘叙。
所述拍摄装置100包括:存储器1和处理器2;处理器2与存储器1通过总线连接。
其中,处理器2可以是微控制单元、中央处理单元或数字信号处理器,等等。
其中，存储器1可以是Flash芯片、只读存储器、磁盘、光盘、U盘或者移动硬盘等等。
所述存储器1用于存储计算机程序;所述处理器2用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:
在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域;根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;以确定后的对焦模式进行拍摄。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:根据所述N个对焦区域中的对焦目标是否包括同一个目标,确定是否对所述当前对焦模式进行切换。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:若所述N个对焦区域中的对焦目标包括同一个目标,且所述当前对焦模式为连续自动对焦的自动模式AFC-Auto,则控制摄像装置切换到连续自动对焦的跟踪模式AFC-Tracking。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:若所述当前对焦模式为AFC-Tracking模式,且检测到所述AFC-Tracking模式下对焦区域中的对焦目标不在帧图像中,则控制所述摄像装置切换到AFC-Auto模式。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:在当前对焦模式下,分别获取图像序列中连续N帧图像的N个所述对焦区域。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:在当前对焦模式下,分别提取图像序列中每个帧图像的所述对焦区域的特征点;将连续N帧图像的N个所述对焦区域的特征点进行匹配;若连续N帧图像的N个所述对焦区域的特征点互相匹配成功,则确定N个对焦区域中的对焦目标包括同一个目标。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:在当前对焦模式下,获取图像序列中连续N帧图像的N个输出匹配区域,每个帧图像的输出匹配区域与对应帧图像在所述当前对焦模式下的对焦区域匹配;将所述图像序列中连续N帧图像的N个输出匹配区域作为图像序列中N帧图像的N个对焦区域。
其中，所述处理器在执行所述计算机程序时，实现如下步骤：在当前对焦模式下，通过多目标跟踪算法获取图像序列中每个帧图像的多个输出区域；将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配，得到每个帧图像与对应帧图像的对焦区域匹配的输出匹配区域，进而得到所述图像序列中连续N帧图像的N个输出匹配区域。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配;若匹配成功,将每个帧图像中的输出匹配区域中对应的目标进行标记,得到每个帧图像的标记输出匹配区域。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:判断所述图像序列中连续N帧图像的N个标记输出匹配区域中标记的目标是否为同一个目标;若是,则确定所述N个对焦区域中的对焦目标包括同一个目标。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:判断所述多个中心距离与距离阈值的大小;若所述多个中心距离中存在小于距离阈值的中心距离,则确定匹配成功,中心距离小于距离阈值对应的输出区域为所述输出匹配区域。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点;将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:若所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述匹配度大于或等于阈值匹配度对应的输出区域为所述输出匹配区域。
其中，所述处理器在执行所述计算机程序时，实现如下步骤：确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离，判断所述多个中心距离与距离阈值的大小；分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点，将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配，得到多个匹配度。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:若所述多个中心距离中存在小于距离阈值的中心距离,且所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述中心距离小于距离阈值、且所述匹配度大于或等于阈值匹配度对应的输出区域为所述输出匹配区域。
其中,所述处理器在执行所述计算机程序时,实现如下步骤:若匹配不成功,则将匹配不成功的帧图像的对焦区域作为所述多目标跟踪算法新的输出区域。
其中,所述对焦目标包括指定目标,所述处理器在执行所述计算机程序时,实现如下步骤:若所述当前对焦模式为AFC-Tracking模式,通过指定目标检测算法辅助跟踪所述指定目标并对焦所述指定目标。
其中,所述指定目标包括人脸。
本申请还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如上任一项所述的拍摄方法。相关内容的详细说明请参见上述相关内容部分,在此不再赘叙。
其中,该计算机可读存储介质可以是上述拍摄装置的内部存储单元,例如硬盘或内存。该计算机可读存储介质也可以是外部存储设备,例如配备的插接式硬盘、智能存储卡、安全数字卡、闪存卡,等等。
应当理解,在本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
以上所述,仅为本申请的具体实施例,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (39)

  1. 一种拍摄方法,其特征在于,所述方法包括:
    在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域;
    根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;
    以确定后的对焦模式进行拍摄。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换,包括:
    根据所述N个对焦区域中的对焦目标是否包括同一个目标,确定是否对所述当前对焦模式进行切换。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述N个对焦区域中的对焦目标是否包括同一个目标,确定是否对所述当前对焦模式进行切换,包括:
    若所述N个对焦区域中的对焦目标包括同一个目标,且所述当前对焦模式为连续自动对焦的自动模式AFC-Auto,则控制摄像装置切换到连续自动对焦的跟踪模式AFC-Tracking。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    若所述当前对焦模式为AFC-Tracking模式,且检测到所述AFC-Tracking模式下对焦区域中的对焦目标不在帧图像中,则控制所述摄像装置切换到AFC-Auto模式。
  5. 根据权利要求2所述的方法,其特征在于,所述在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域,包括:
    在当前对焦模式下,分别获取图像序列中连续N帧图像的N个对焦区域。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述N个对焦区域中的对焦目标是否包括同一个目标,包括:
    在当前对焦模式下,分别提取图像序列中每个帧图像的所述对焦区域的特征点;
    将连续N帧图像的N个所述对焦区域的特征点进行匹配;
    若连续N帧图像的N个所述对焦区域的特征点互相匹配成功,则确定N个对焦区域中的对焦目标包括同一个目标。
  7. 根据权利要求2所述的方法,其特征在于,所述在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域,包括:
    在当前对焦模式下,获取图像序列中连续N帧图像的N个输出匹配区域,每个帧图像的输出匹配区域与对应帧图像在所述当前对焦模式下的对焦区域匹配;
    将所述图像序列中连续N帧图像的N个输出匹配区域作为图像序列中N帧图像的N个对焦区域。
  8. 根据权利要求7所述的方法,其特征在于,所述在当前对焦模式下,获取图像序列中连续N帧图像的N个输出匹配区域,包括:
    在当前对焦模式下,通过多目标跟踪算法获取图像序列中每个帧图像的多个输出区域;
    将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,得到每个帧图像与对应帧图像的对焦区域匹配的输出匹配区域,进而得到所述图像序列中连续N帧图像的N个输出匹配区域。
  9. 根据权利要求8所述的方法,其特征在于,所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,得到每个帧图像与对应帧图像的对焦区域匹配的输出匹配区域,包括:
    将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配;
    若匹配成功,将每个帧图像中的输出匹配区域中对应的目标进行标记,得到每个帧图像的标记输出匹配区域。
  10. 根据权利要求9所述的方法,其特征在于,所述根据所述N个对焦区域中的对焦目标是否包括同一个目标,包括:
    判断所述图像序列中连续N帧图像的N个标记输出匹配区域中标记的目标是否为同一个目标;
    若是,则确定所述N个对焦区域中的对焦目标包括同一个目标。
  11. 根据权利要求9所述的方法,其特征在于,所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,包括:
    确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离。
  12. 根据权利要求11所述的方法,其特征在于,所述若匹配成功,包括:
    判断所述多个中心距离与距离阈值的大小;
    若所述多个中心距离中存在小于距离阈值的中心距离,则确定匹配成功,中心距离小于距离阈值对应的输出区域为所述输出匹配区域。
  13. 根据权利要求9所述的方法,其特征在于,所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,包括:
    分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点;
    将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度。
  14. 根据权利要求13所述的方法,其特征在于,所述若匹配成功,包括:
    若所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述匹配度大于或等于阈值匹配度对应的输出区域为所述输出匹配区域。
  15. 根据权利要求9所述的方法,其特征在于,所述将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,包括:
    确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离,判断所述多个中心距离与距离阈值的大小;
    分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点,将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度。
  16. 根据权利要求15所述的方法,其特征在于,所述若匹配成功,包括:
    若所述多个中心距离中存在小于距离阈值的中心距离,且所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述中心距离小于距离阈值、且所述匹配度大于或等于阈值匹配度对应的输出区域为所述输出匹配区域。
  17. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    若匹配不成功,则将匹配不成功的帧图像的对焦区域作为所述多目标跟踪算法新的输出区域。
  18. 根据权利要求3所述的方法,其特征在于,所述对焦目标包括指定目标,所述方法还包括:
    若所述当前对焦模式为AFC-Tracking模式,通过指定目标检测算法辅助跟踪所述指定目标并对焦所述指定目标。
  19. 根据权利要求18所述的方法,其特征在于,所述指定目标包括人脸。
  20. 一种拍摄装置,其特征在于,所述装置包括:存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器用于执行所述计算机程序并在执行所述计算机程序时,实现如下步骤:
    在当前对焦模式下,获取图像序列中N帧图像的N个对焦区域;
    根据所述N个对焦区域中的对焦目标,确定是否对所述当前对焦模式进行切换;
    以确定后的对焦模式进行拍摄。
  21. 根据权利要求20所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    根据所述N个对焦区域中的对焦目标是否包括同一个目标,确定是否对所述当前对焦模式进行切换。
  22. 根据权利要求21所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    若所述N个对焦区域中的对焦目标包括同一个目标,且所述当前对焦模式为连续自动对焦的自动模式AFC-Auto,则控制摄像装置切换到连续自动对焦的跟踪模式AFC-Tracking。
  23. 根据权利要求22所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    若所述当前对焦模式为AFC-Tracking模式,且检测到所述AFC-Tracking模式下对焦区域中的对焦目标不在帧图像中,则控制所述摄像装置切换到AFC-Auto模式。
  24. 根据权利要求21所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    在当前对焦模式下,分别获取图像序列中连续N帧图像的N个所述对焦区域。
  25. 根据权利要求24所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    在当前对焦模式下,分别提取图像序列中每个帧图像的所述对焦区域的特征点;
    将连续N帧图像的N个所述对焦区域的特征点进行匹配;
    若连续N帧图像的N个所述对焦区域的特征点互相匹配成功,则确定N个对焦区域中的对焦目标包括同一个目标。
  26. 根据权利要求21所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    在当前对焦模式下,获取图像序列中连续N帧图像的N个输出匹配区域,每个帧图像的输出匹配区域与对应帧图像在所述当前对焦模式下的对焦区域匹配;
    将所述图像序列中连续N帧图像的N个输出匹配区域作为图像序列中N帧图像的N个对焦区域。
  27. 根据权利要求26所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    在当前对焦模式下,通过多目标跟踪算法获取图像序列中每个帧图像的多个输出区域;
    将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配,得到每个帧图像与对应帧图像的对焦区域匹配的输出匹配区域,进而得到所述图像序列中连续N帧图像的N个输出匹配区域。
  28. 根据权利要求27所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    将每个帧图像的对焦区域与对应帧图像的多个输出区域进行匹配;
    若匹配成功,将每个帧图像中的输出匹配区域中对应的目标进行标记,得到每个帧图像的标记输出匹配区域。
  29. 根据权利要求28所述的拍摄装置，其特征在于，所述处理器在执行所述计算机程序时，实现如下步骤：
    判断所述图像序列中连续N帧图像的N个标记输出匹配区域中标记的目标是否为同一个目标;
    若是,则确定所述N个对焦区域中的对焦目标包括同一个目标。
  30. 根据权利要求28所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离。
  31. 根据权利要求30所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    判断所述多个中心距离与距离阈值的大小;
    若所述多个中心距离中存在小于距离阈值的中心距离,则确定匹配成功,中心距离小于距离阈值对应的输出区域为所述输出匹配区域。
  32. 根据权利要求28所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点;
    将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度。
  33. 根据权利要求32所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    若所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述匹配度大于或等于阈值匹配度对应的输出区域为所述输出匹配区域。
  34. 根据权利要求28所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    确定每个帧图像的对焦区域的中心分别与对应帧图像的多个输出区域的中心之间的多个中心距离,判断所述多个中心距离与距离阈值的大小;
    分别提取每个帧图像的对焦区域与对应帧图像的多个输出区域的特征点,将每个帧图像的对焦区域的特征点分别与对应帧图像的多个输出区域的特征点进行匹配,得到多个匹配度。
  35. 根据权利要求34所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    若所述多个中心距离中存在小于距离阈值的中心距离,且所述多个匹配度中存在大于或等于阈值匹配度的匹配度,则确定匹配成功,所述中心距离小于距离阈值、且所述匹配度大于或等于阈值匹配度对应的输出区域为所述输出匹配区域。
  36. 根据权利要求28所述的拍摄装置,其特征在于,所述处理器在执行所述计算机程序时,实现如下步骤:
    若匹配不成功,则将匹配不成功的帧图像的对焦区域作为所述多目标跟踪算法新的输出区域。
  37. 根据权利要求22所述的拍摄装置,其特征在于,所述对焦目标包括指定目标,所述处理器在执行所述计算机程序时,实现如下步骤:
    若所述当前对焦模式为AFC-Tracking模式,通过指定目标检测算法辅助跟踪所述指定目标并对焦所述指定目标。
  38. 根据权利要求37所述的拍摄装置,其特征在于,所述指定目标包括人脸。
  39. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如权利要求1-19任一项所述的拍摄方法。
PCT/CN2020/104596 2020-07-24 2020-07-24 拍摄方法、拍摄装置及存储介质 WO2022016550A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080006539.1A CN113170053A (zh) 2020-07-24 2020-07-24 拍摄方法、拍摄装置及存储介质
PCT/CN2020/104596 WO2022016550A1 (zh) 2020-07-24 2020-07-24 拍摄方法、拍摄装置及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/104596 WO2022016550A1 (zh) 2020-07-24 2020-07-24 拍摄方法、拍摄装置及存储介质

Publications (1)

Publication Number Publication Date
WO2022016550A1 true WO2022016550A1 (zh) 2022-01-27

Family

ID=76879306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104596 WO2022016550A1 (zh) 2020-07-24 2020-07-24 拍摄方法、拍摄装置及存储介质

Country Status (2)

Country Link
CN (1) CN113170053A (zh)
WO (1) WO2022016550A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080181460A1 (en) * 2007-01-31 2008-07-31 Masaya Tamaru Imaging apparatus and imaging method
US20100086292A1 (en) * 2008-10-08 2010-04-08 Samsung Electro- Mechanics Co., Ltd. Device and method for automatically controlling continuous auto focus
CN102096925A (zh) * 2010-11-26 2011-06-15 中国科学院上海技术物理研究所 一种机动目标的实时闭环预测跟踪方法
WO2013170754A1 (zh) * 2012-05-18 2013-11-21 华为终端有限公司 一种自动切换终端对焦模式的方法及终端
CN103905717A (zh) * 2012-12-27 2014-07-02 联想(北京)有限公司 一种切换方法、装置及电子设备
CN104902182A (zh) * 2015-05-28 2015-09-09 努比亚技术有限公司 一种实现连续自动对焦的方法和装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI413846B (zh) * 2009-09-16 2013-11-01 Altek Corp Continuous focus method of digital camera
CN107257433B (zh) * 2017-06-16 2020-01-17 Oppo广东移动通信有限公司 对焦方法、装置、终端和计算机可读存储介质
CN108777767A (zh) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 拍照方法、装置、终端及计算机可读存储介质
CN110572573B (zh) * 2019-09-17 2021-11-09 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质


Also Published As

Publication number Publication date
CN113170053A (zh) 2021-07-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945973

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945973

Country of ref document: EP

Kind code of ref document: A1