CN112740649A - Photographing method, photographing apparatus, and computer-readable storage medium - Google Patents


Info

Publication number
CN112740649A
CN112740649A (application CN201980059476.3A)
Authority
CN
China
Prior art keywords
angle area
definition
view angle
shooting
statistic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980059476.3A
Other languages
Chinese (zh)
Inventor
韩守谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN112740649A publication Critical patent/CN112740649A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Abstract

A photographing method, a photographing apparatus (300), and a computer-readable storage medium. The method comprises: acquiring a sharpness statistic for each of a plurality of view angle regions (S101); determining depth-of-field information of each view angle region according to the sharpness statistics (S102); and performing a photographing control operation according to the depth-of-field information (S103). The method can improve the photographing effect of images and improve user experience.

Description

Photographing method, photographing apparatus, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of photographing control technologies, and in particular, to a photographing method, a photographing apparatus, and a computer-readable storage medium.
Background
At present, users can photograph people and scenery with photographing devices such as smartphones, single-lens reflex cameras, digital cameras, and tablet computers. Generally, such devices support an autofocus (AF) function, which lets users conveniently capture sharp photographs of people and scenery. However, current photographing devices usually compute a sharpness statistic only for the focus area and do not record the distribution of sharpness statistics over the whole frame. As a result, users lack a grasp of the shooting parameters of the overall area when shooting, which degrades the shooting effect and the user experience.
Disclosure of Invention
Based on this, the present application provides a photographing method, a photographing device, and a computer-readable storage medium, aiming to improve the photographing effect of images and the user experience.
In a first aspect, the present application provides a shooting method applied to a shooting device, the method including:
acquiring a sharpness statistic for each of a plurality of view angle regions;
determining depth-of-field information of each view angle region according to the sharpness statistic; and
performing a photographing control operation according to the depth-of-field information.
In a second aspect, the present application further provides a photographing apparatus, including a photographing device, a memory, and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a sharpness statistic for each of a plurality of view angle regions;
determining depth-of-field information of each view angle region according to the sharpness statistic; and
controlling the photographing device to perform a photographing control operation according to the depth-of-field information.
In a third aspect, the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the photographing method as described above.
Embodiments of the present application provide a photographing method, a photographing device, and a computer-readable storage medium, which obtain the depth-of-field information of each view angle region from the sharpness statistics of a plurality of view angle regions and perform photographing control operations based on that information. This lets users quickly grasp the shooting parameters while shooting, capture better images, and greatly improves user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a photographing method according to an embodiment of the present application;
FIG. 2 is a block diagram of a full view angle region of a shooting device in an embodiment of the present application;
fig. 3 is a schematic view of a plurality of viewing angle regions of the photographing apparatus in the embodiment of the present application;
FIG. 4 is a flow diagram illustrating sub-steps of the photographing method of FIG. 1;
FIG. 5 is a flow chart illustrating steps of another photographing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a position before a focus point is changed in the embodiment of the present application;
FIG. 7 is a schematic view of a position after the focus point is changed in the embodiment of the present application;
FIG. 8 is a schematic view of another position after the focus is changed in the embodiment of the present application;
FIG. 9 is a schematic diagram of a position of an in-focus point before a photographing apparatus moves in an embodiment of the present application;
FIG. 10 is a schematic diagram of a position of an in-focus point after a photographing device moves in an embodiment of the present application;
fig. 11 is a block diagram schematically illustrating a structure of a photographing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating steps of a shooting method according to an embodiment of the present application. The photographing method may be applied to a photographing device to perform photographing control operations. The photographing device includes a mobile phone, a tablet computer, a notebook computer, a movable platform equipped with a camera, a handheld gimbal equipped with a camera, and the like. The movable platform includes unmanned aerial vehicles and unmanned vehicles; the unmanned aerial vehicle may be a rotary-wing UAV, such as a quadrotor, hexarotor, or octorotor UAV, a fixed-wing UAV, or a hybrid rotary-wing/fixed-wing UAV, which is not limited herein.
Specifically, as shown in fig. 1, the photographing method includes steps S101 to S103.
S101, obtaining a definition statistic value of each visual angle area in a plurality of visual angle areas.
The plurality of view angle regions may be some or all of the view angle blocks in the full view angle region of the photographing device, with each block corresponding to one view angle region. Referring to fig. 2, fig. 2 is a block diagram of the full view angle region of a photographing device in an embodiment of the present application; as shown in fig. 2, the full view angle region is divided into 9 view angle blocks.
The photographing device acquires a sharpness statistic for each of the plurality of view angle regions; that is, the sharpness statistic of each region is computed by a sharpness evaluation function. Evaluation functions include, but are not limited to, the Brenner gradient function, the Tenengrad gradient function, the Laplacian gradient function, the gray-level variance function, and the gray-level variance product function.
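As a concrete illustration, the sketch below applies the Brenner gradient evaluation function per region; the function names and the slice-based region layout are assumptions for illustration, not from the patent:

```python
import numpy as np

def brenner_sharpness(gray: np.ndarray) -> float:
    """Brenner gradient: sum of squared differences between pixels two
    columns apart. A larger statistic indicates a sharper region."""
    d = gray[:, 2:].astype(np.float64) - gray[:, :-2].astype(np.float64)
    return float(np.sum(d * d))

def region_sharpness_stats(gray: np.ndarray, regions: dict) -> dict:
    """Evaluate the sharpness statistic of each view angle region, where
    `regions` maps a region name to a (row_slice, col_slice) pair."""
    return {name: brenner_sharpness(gray[rs, cs])
            for name, (rs, cs) in regions.items()}
```

Any of the other evaluation functions named above (Tenengrad, Laplacian, gray-level variance) could be substituted for `brenner_sharpness` without changing the per-region bookkeeping.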
In an embodiment, the plurality of view angle regions includes overlapping view angle regions, adjacent view angle regions, and/or spaced view angle regions; that is, the plurality may contain only one of these three kinds, any two of them, or all three together. It is understood that overlapping, adjacent, and spaced view angle regions may be combined arbitrarily to obtain the plurality of view angle regions, which is not specifically limited in this application.
Referring to fig. 3, fig. 3 is a schematic diagram of a plurality of viewing angle regions of a shooting device in the embodiment of the present application, as shown in fig. 3, a viewing angle region a, a viewing angle region B, and a viewing angle region C are spaced from each other, a viewing angle region D, a viewing angle region E, and a viewing angle region F are spaced from each other, and a viewing angle region H, a viewing angle region I, and a viewing angle region G are spaced from each other; the view angle area A is adjacent to the view angle area D, the view angle area B is adjacent to the view angle area E, and the view angle area C is adjacent to the view angle area F; the view area D overlaps the view area H, the view area E overlaps the view area I, and the view area F overlaps the view area G.
S102, determining depth information of each visual angle area according to the definition statistic value.
After the sharpness statistic of each view angle region is determined, the depth-of-field information of each region is determined according to its sharpness statistic. The depth-of-field information includes the depth level of the scene in the region; for example, in one embodiment the depth levels include foreground, middle ground, and background. In other embodiments the depth levels need not be limited to a fixed number; multiple levels may be defined according to actual needs, such as the current shooting scene and/or shooting strategy. The above examples are illustrative, not limiting.
In one embodiment, as shown in fig. 4, step S102 includes sub-steps S1021 to S1022.
And S1021, determining the depth of field type of each visual angle area according to the definition statistic value.
Specifically, the sharpness statistic of each view angle region is analyzed to determine how the sharpness of objects in that region varies; the variation may be no change, a monotonic rise, a monotonic fall, or a sharpness peak. If the sharpness does not change, the depth-of-field type of the corresponding region is a first preset type; if the sharpness rises monotonically, falls monotonically, or has a peak, the depth-of-field type of the corresponding region is a second preset type.
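The classification in sub-step S1021 can be sketched as follows; the function names, the epsilon tolerance, and the label strings are illustrative assumptions:

```python
def classify_sharpness_curve(values, eps=1e-6):
    """Classify how the sharpness statistic varies over a focus sweep.
    'flat' corresponds to the first preset depth-of-field type;
    'rising', 'falling', and 'peak' to the second preset type."""
    deltas = [b - a for a, b in zip(values, values[1:])]
    if all(abs(d) <= eps for d in deltas):
        return "flat"
    if all(d >= -eps for d in deltas):
        return "rising"
    if all(d <= eps for d in deltas):
        return "falling"
    return "peak"  # sharpness rises then falls: at least one peak

def depth_type(values):
    """Map the curve class to the patent's two preset depth types."""
    return 1 if classify_sharpness_curve(values) == "flat" else 2
```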
And S1022, determining depth information of each view angle area according to the depth type.
Specifically, if the depth of field type of the view angle area is a first preset type, the depth of field information of the view angle area is a null value; if the depth of field type of the visual angle area is a second preset type, determining a definition peak value or a definition change trend according to the definition statistic value; and determining the depth of field information of the visual angle area according to the definition peak value or the definition change trend. The depth information of each view angle area can be rapidly determined through the depth type.
In one embodiment, the sharpness statistic of a view angle region whose depth-of-field type is the second preset type is analyzed to obtain how the sharpness of the object in that region varies. If there is a sharpness peak, the peak is extracted from the sharpness statistic; if the sharpness rises or falls monotonically, the sharpness trend of the region is recorded as monotonically rising or monotonically falling, respectively. There is at least one sharpness peak.
In an embodiment, the object distance or object distance range corresponding to the sharpness peak is obtained, and the depth-of-field information of the view angle region is determined from it: the larger the object distance, the larger the depth of field, and the smaller the object distance, the smaller the depth of field. In another embodiment, an object distance trend can be determined from the sharpness trend, and the depth-of-field information of the view angle region is determined from the object distance trend. For example, when the focus motor moves from infinity toward the nearest end, a monotonically rising sharpness implies a monotonically decreasing object distance, and a monotonically falling sharpness implies a monotonically increasing object distance. Conversely, when the motor moves from the nearest end toward infinity, a monotonically rising sharpness implies a monotonically increasing object distance, and a monotonically falling sharpness implies a monotonically decreasing one. In either case, a monotonically increasing object distance indicates a larger depth of field for the object, and a monotonically decreasing object distance indicates a smaller one. This is by way of illustration, not limitation.
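The direction-dependent mapping described above can be written out directly; this is a sketch, and the string labels are illustrative:

```python
def object_distance_trend(sharpness_trend, sweep_direction):
    """Infer the object-distance trend from the sharpness trend during a
    focus sweep. Sweeping from infinity toward the nearest end, rising
    sharpness implies a decreasing object distance; the mapping flips
    for a sweep from the nearest end toward infinity."""
    assert sharpness_trend in ("rising", "falling")
    assert sweep_direction in ("inf_to_near", "near_to_inf")
    if sweep_direction == "inf_to_near":
        return "decreasing" if sharpness_trend == "rising" else "increasing"
    return "increasing" if sharpness_trend == "rising" else "decreasing"

def relative_depth(distance_trend):
    """A monotonically increasing object distance indicates a larger
    depth of field; a decreasing one indicates a smaller depth."""
    return "larger" if distance_trend == "increasing" else "smaller"
```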
And S103, executing shooting control operation according to the depth of field information.
After the depth-of-field information of each view angle region is determined, photographing control operations are performed based on it. This lets users quickly grasp the shooting parameters while shooting, capture better images, and greatly improves user experience.
In one embodiment, the contrast of the photographed object in each view angle region is obtained, the depth-of-field information is screened according to the contrast, and the photographing control operation is performed according to the screened depth-of-field information. The screening is specifically: compare the contrast of each view angle region with a preset contrast, and remove the depth-of-field information of regions whose contrast is below the preset contrast. The preset contrast may be set according to actual conditions, which is not limited in this application. Screening the depth-of-field information by region contrast removes information that does not meet the contrast requirement; performing control operations on the remaining information further helps users grasp the shooting parameters, capture better images, and improves user experience.
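The contrast-based screening reduces to a dictionary filter; a minimal sketch, where the names and the 0.0 default for a missing contrast are assumptions:

```python
def screen_depth_info(depth_info, contrast, preset_contrast):
    """Remove the depth-of-field entries of view angle regions whose
    subject contrast is below the preset contrast threshold."""
    return {region: info for region, info in depth_info.items()
            if contrast.get(region, 0.0) >= preset_contrast}
```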
In one embodiment, a photographing control strategy of the device is set according to the depth-of-field information, and the control operation corresponding to that strategy is performed. The strategy includes at least one of: a focus control strategy, a foreground/background selection strategy, and a full-view depth scanning strategy. The focus control strategy automatically focuses on the photographed object; the foreground/background selection strategy selects foreground and background regions among the plurality of view angle regions, and includes selecting a foreground region in a view angle region for focusing and/or selecting a background region for focusing; the full-view depth scanning strategy acquires the depth-of-field information of the object in each view angle region.
Specifically, if the object distance of the object is less than or equal to a preset object distance, the foreground region is selected; if it is greater, the background region is selected. Alternatively, if the contrast of the object is greater than or equal to a preset contrast, the foreground region is selected; otherwise the background region is selected.
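Both selection rules are simple threshold tests; a sketch follows, with the function names and labels as illustrative assumptions:

```python
def select_by_distance(object_distance, preset_distance):
    """Foreground if the object is at or nearer than the preset object
    distance, otherwise background."""
    return "foreground" if object_distance <= preset_distance else "background"

def select_by_contrast(contrast, preset_contrast):
    """Foreground if the object's contrast meets the preset contrast,
    otherwise background."""
    return "foreground" if contrast >= preset_contrast else "background"
```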
In one embodiment, the photographed object is focused according to the sharpness statistic of each view angle region. The view angle regions used may be some or all of the regions in the full view angle region; for example, the object may be focused according to the sharpness statistics of view angle regions A, B, and C in fig. 3, or according to the sharpness statistics of all of view angle regions A through I in fig. 3. The number of view angle regions into which the full view angle region is divided may be set according to actual conditions, which is not limited in this application. Focusing the photographed object using the sharpness statistics of multiple view angle regions improves the focusing success rate and helps ensure a sharp image of the object.
In one embodiment, the view angle region to which the photographed object belongs is determined and taken as the target view angle region; the view angle regions adjacent to the target region are obtained and taken as candidate view angle regions; and the photographed object is focused according to the sharpness statistic of each candidate region. Focusing via the sharpness statistics of the regions adjacent to the object's own region further improves the focusing success rate and helps ensure a sharp image of the object. The candidate view angle regions may also be chosen according to actual conditions and are not limited to the regions adjacent to the target region.
Specifically, a target sharpness statistic is determined from the sharpness statistics of the candidate view angle regions, and the photographed object is focused according to the target sharpness statistic. The weights of the candidate regions may be equal or unequal and may be set according to actual conditions, which is not specifically limited in this application.
Illustratively, the difference between the sharpness statistics of every two candidate view angle regions is calculated, and it is determined whether each difference is less than or equal to a first preset threshold. If every difference is less than or equal to the first preset threshold, the sharpness statistic of any one candidate region is taken as the target sharpness statistic; if at least one difference exceeds the first preset threshold, the average of the sharpness statistics of all candidate regions is calculated and taken as the target sharpness statistic. The first preset threshold may be set according to actual conditions, which is not limited in this application.
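The target-statistic rule above can be sketched as follows; the function name is an assumption:

```python
from itertools import combinations

def target_sharpness(stats, first_threshold):
    """If every pairwise difference between the candidate regions'
    sharpness statistics is within the first preset threshold, any
    single statistic serves as the target; otherwise fall back to the
    mean of all candidate statistics."""
    if all(abs(a - b) <= first_threshold for a, b in combinations(stats, 2)):
        return stats[0]
    return sum(stats) / len(stats)
```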
In one embodiment, a confidence index for the sharpness statistic of each candidate view angle region is determined from that region's depth-of-field information, and the photographed object is focused according to the sharpness statistics of the candidate regions whose confidence index is greater than or equal to a preset confidence index. The confidence index is determined as follows: obtain an object distance from the depth-of-field information of the candidate region and record it as the first object distance; determine the object distance of the object in the region from its sharpness statistic and record it as the second object distance; compute the difference between the first and second object distances; and obtain the confidence index corresponding to that difference. The smaller the difference between the two object distances, the larger the confidence index, and the larger the difference, the smaller the index. The preset confidence index may be set according to actual conditions, which is not limited in this application. Screening the candidate regions and focusing on the basis of the remaining regions' sharpness statistics further improves the focusing success rate.
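One monotone choice for the confidence index is sketched below; the exact functional form is an assumption, since the patent only requires that a smaller distance difference yields a larger index:

```python
def confidence_index(first_object_distance, second_object_distance):
    """Confidence in a region's sharpness statistic, from the agreement
    between the object distance taken from its depth-of-field info (the
    first object distance) and the one derived from its sharpness
    statistic (the second). 1/(1+|diff|) is one simple monotone map."""
    return 1.0 / (1.0 + abs(first_object_distance - second_object_distance))

def credible_regions(first_dist, second_dist, preset_index):
    """Keep only regions whose confidence index meets the preset index."""
    return [r for r in first_dist
            if confidence_index(first_dist[r], second_dist[r]) >= preset_index]
```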
With the photographing method provided by this embodiment, the depth-of-field information of each view angle region is obtained from the sharpness statistics of a plurality of view angle regions, and photographing control operations are performed based on that information. Users can thus quickly grasp the shooting parameters while shooting, capture better images, and enjoy a greatly improved experience.
Referring to fig. 5, fig. 5 is a flowchart illustrating steps of another photographing method according to an embodiment of the present application.
Specifically, as shown in fig. 5, the photographing method includes steps S201 to S203.
S201, in the process of focusing the shooting object, when the position of the focusing point of the shooting equipment is detected to be changed, determining whether the shooting equipment moves.
Specifically, during focusing of the photographed object, the photographing device detects whether the position of its focus point changes; when a change is detected, it determines whether the device itself has moved. Whether the device has moved may be determined by its inertial measurement unit.
Illustratively, the position information acquired by the inertial measurement unit at the current system time is obtained, together with the historical position information recorded a preset time before the current system time. The two are compared: if they are the same, the photographing device has not moved; if they differ, it has moved. The preset time may be set according to actual conditions and is not limited in this application; optionally, it is 1 second.
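The movement check above is a positional comparison within a tolerance; a sketch, where the tolerance and the coordinate-tuple representation are assumptions:

```python
def device_moved(current_pos, historical_pos, tol=1e-3):
    """Compare the IMU position sampled at the current system time with
    the position recorded a preset time (e.g. 1 s) earlier; the device
    is taken to have moved if any coordinate differs beyond `tol`."""
    return any(abs(a - b) > tol for a, b in zip(current_pos, historical_pos))
```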
S202, if the shooting equipment does not move, determining the view angle area to which the focus currently belongs, and taking the view angle area to which the focus currently belongs as a target view angle area.
If the device has not moved, the photographing device can infer that the change in focus position was caused by a user touch on a view angle region or by movement of the photographed object. The device then determines the view angle region to which the focus point currently belongs and takes it as the target view angle region.
In one embodiment, the touch position of a user's touch operation on the plurality of view angle regions is determined, and the view angle region to which the focus point currently belongs is determined from that touch position; alternatively, the view angle region to which the photographed object belongs is determined and taken as the region to which the focus point currently belongs.
Specifically, the position coordinate of the touch position and the stored coordinate set of each view angle region are obtained; the view angle region whose coordinate set contains the position coordinate is taken as the region to which the touch position belongs; and that region is taken as the region to which the focus point currently belongs. The position coordinate is the coordinate of the touch position within the viewfinder area, and the photographing device stores the coordinate set of each view angle region.
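The lookup from touch coordinate to view angle region is a point-in-set test; in the sketch below each region's stored coordinate set is approximated by a bounding box, which is an assumption for illustration:

```python
def region_of_touch(touch_xy, region_boxes):
    """Return the view angle region whose stored coordinate set contains
    the touch position; here each region's set is represented by a
    bounding box (x0, y0, x1, y1). Returns None if no region matches."""
    x, y = touch_xy
    for name, (x0, y0, x1, y1) in region_boxes.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```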
As shown in fig. 6, the focus point is initially located in view angle region E. When the user touches view angle region D, the position of the focus point changes, as shown in fig. 7. When the animal in view angle region E moves, the focus point also changes position; as shown in fig. 8, the focus point then lies in view angle region F.
S203, focusing the shot object according to the definition statistic value and/or the depth information of the target view angle area.
After determining the target view angle region, the photographing device may focus the photographed object based on the sharpness statistic and/or the depth-of-field information of that region. The object can thus be refocused quickly after the focus point changes position, improving the focusing success rate.
In one embodiment, at least one historical sharpness statistic of the target view angle region and the sharpness statistic at the current moment are obtained, where a historical sharpness statistic is one recorded before the current moment. Whether the photographed object has moved is determined from the current statistic and the historical statistics; if the object has not moved, it is focused according to the sharpness statistic and/or the depth-of-field information of the target region.
Specifically, whether the sharpness of the photographed object has changed may be determined from the sharpness statistic and at least one historical sharpness statistic: if the current statistic equals the historical statistic, the sharpness has not changed; if they differ, it has changed. If the sharpness of the photographed object has changed, it may be determined that the object has moved; if the sharpness has not changed, it may be determined that the object has not moved.
In one embodiment, if the photographed object moves, the moving trend of the object is determined according to the sharpness statistic and at least one historical sharpness statistic, and the object is focused according to the moving trend and the sharpness statistic of the target view angle area. The moving trend of the photographed object includes moving away from the photographing apparatus and moving closer to it. Focusing based on the moving trend and the sharpness statistic after the object moves realizes focus tracking of the moving object, improves the focusing success rate for moving objects, allows a clear image of the moving object to be captured, and improves the user experience.
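The movement check in the two embodiments above — unchanged statistics meaning a stationary subject, a monotonic drift meaning movement — can be sketched as below. This is an assumed concretisation: the tolerance parameter and the trend labels are illustrative, and mapping a falling statistic to "moving away" versus "moving closer" would additionally require the current focus distance, which the text does not specify.

```python
def movement_trend(current, history, tol=1e-6):
    """Classify subject movement from sharpness statistics.

    Returns 'stationary' when the statistic is unchanged across the
    recorded values, 'sharpness_rising' or 'sharpness_falling' for a
    monotonic trend (i.e. the subject is judged to be moving), and
    'mixed' otherwise.
    """
    seq = list(history) + [current]          # oldest first, current last
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if all(abs(d) <= tol for d in diffs):
        return "stationary"
    if all(d >= -tol for d in diffs):
        return "sharpness_rising"
    if all(d <= tol for d in diffs):
        return "sharpness_falling"
    return "mixed"
```

Any result other than "stationary" corresponds to the "photographed object moves" branch of the text.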
In one embodiment, if the photographing apparatus moves, the movement displacement of the apparatus is determined; a target view angle area is determined according to the movement displacement and the view angle area to which the focus point belonged at the previous moment; and the photographed object is focused according to the sharpness statistic and/or the depth of field information of the target view angle area. The movement displacement, which comprises a moving direction and a moving distance, may be determined from an inertial measurement unit and/or an image recognition device of the photographing apparatus. Focusing the photographed object in this way after the apparatus moves improves the focusing success rate.
Specifically, the view angle area to which the focus point belonged at the previous moment is taken as a historical view angle area, and the view angle areas other than the historical view angle area are taken as candidate view angle areas. The position relation and separation distance between each candidate view angle area and the historical view angle area are acquired, and the target view angle area is determined from the moving direction and moving distance of the movement displacement together with these position relations and separation distances. The separation distance is the distance between the center points of the two view angle areas, and the position relation and separation distance between each candidate view angle area and the historical view angle area are stored in the photographing apparatus.
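One plausible way to combine the displacement with the stored position relations is a nearest-offset rule: pick the candidate whose stored offset from the historical area best matches the measured displacement. This selection rule, the vector representation of the position relation, and the region names are assumptions; the text only states that direction, distance, and the stored relations are used.

```python
import math

def target_region_after_move(history_region, displacement, relations):
    """Pick the view angle area reached from `history_region` by moving
    through `displacement` (dx, dy).

    `relations` maps each candidate area name to its stored offset
    vector from the historical area's center -- encoding both the
    "position relation" (direction) and "separation distance" (length).
    """
    dx, dy = displacement
    best, best_err = history_region, float("inf")
    for name, (rx, ry) in relations.items():
        err = math.hypot(rx - dx, ry - dy)  # mismatch to the measured move
        if err < best_err:
            best, best_err = name, err
    return best
```

With the grid of figs. 9 and 10 in mind, a roughly diagonal displacement from area E would select the diagonally adjacent candidate.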
In one embodiment, a definition statistic value of a view angle area to which a focus belongs at the previous moment is acquired; calculating a difference absolute value between the definition statistic value of the target visual angle area and the definition statistic value of the visual angle area to which the focusing point belongs at the previous moment; determining whether the absolute value of the difference is less than or equal to a second preset threshold; if the absolute value of the difference is smaller than or equal to a second preset threshold, focusing the shot object according to the definition statistic and/or the depth information of the target visual angle area; if the absolute value of the difference is larger than a second preset threshold, acquiring the definition statistic value of each visual angle area in the multiple visual angle areas again; and according to the newly acquired definition statistic value, refocusing the shot object. It should be noted that the second preset threshold may be set based on actual conditions, and this application is not limited to this specifically.
After the photographing apparatus moves, if the sharpness statistic of the view angle area corresponding to the focus point has changed only slightly, the photographed object can be refocused based on that statistic; if the statistic has changed significantly, the sharpness statistics need to be acquired anew, and the object is refocused based on the newly acquired values. This further improves the focusing success rate.
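The threshold test described in the two paragraphs above reduces to a single comparison. The sketch below is illustrative only; the function name and return labels are assumptions, and the second preset threshold is, as the text notes, set based on actual conditions.

```python
def refocus_decision(target_stat, previous_stat, second_threshold):
    """Decide whether the stored statistics remain usable after the
    apparatus moved.

    Returns 'focus' when the absolute difference between the target
    area's statistic and the previous focus area's statistic is within
    the second preset threshold, else 'reacquire', meaning the sharpness
    statistics of all view angle areas must be collected anew.
    """
    if abs(target_stat - previous_stat) <= second_threshold:
        return "focus"
    return "reacquire"
```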
As shown in fig. 9, before the photographing apparatus moves, the focus point is located in view angle area E; when the apparatus moves, the focus point moves correspondingly, and as shown in fig. 10, after the move the focus point is located in view angle area I.
According to the shooting method provided by this embodiment, when the position of the focus point of the photographing apparatus changes and the apparatus itself has not moved, the photographed object is focused based on the sharpness statistic and/or the depth of field information of the view angle area to which the focus point currently belongs. The object can thus be quickly focused after the focus point changes position, which improves the focusing success rate and greatly improves the user experience.
Referring to fig. 11, fig. 11 is a schematic block diagram of a photographing apparatus according to an embodiment of the present application. In one embodiment, the photographing apparatus includes a mobile phone, a tablet computer, a notebook computer, a movable platform equipped with a photographing device, a handheld gimbal equipped with a photographing device, and the like. The movable platform includes an unmanned aerial vehicle and an unmanned vehicle; the unmanned aerial vehicle may be a rotary-wing unmanned aerial vehicle, such as a quadrotor, hexarotor, or octorotor unmanned aerial vehicle, a fixed-wing unmanned aerial vehicle, or a combination of rotary-wing and fixed-wing types, which is not limited herein.
Further, the photographing apparatus 300 includes a processor 301, a memory 302, and a photographing device 303, which are connected by a bus 304, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 301 may be a Microcontroller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the memory 302 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash disk, a removable hard disk, or the like.
Specifically, the photographing device 303 may be a digital camera, a video camera, or a single-lens reflex camera.
Wherein the processor 301 is configured to run a computer program stored in the memory 302, and when executing the computer program, implement the following steps:
acquiring a definition statistic value of each visual angle area in a plurality of visual angle areas;
determining depth of field information of each visual angle area according to the definition statistic value;
and controlling the shooting device to execute shooting control operation according to the depth of field information.
Optionally, when determining the depth information of each of the viewing angle regions according to the sharpness statistics, the processor is configured to:
determining the depth of field type of each visual angle area according to the definition statistic value;
and determining the depth information of each visual angle area according to the depth type.
Optionally, when determining the depth information of each view angle region according to the depth type, the processor is configured to:
if the depth of field type of the visual angle area is a first preset type, the depth of field information of the visual angle area is a null value;
if the depth of field type of the visual angle area is a second preset type, determining a definition peak value or a definition change trend according to the definition statistic value;
and determining the depth information of the visual angle area according to the definition peak value or the definition change trend.
Optionally, the processor is configured to, when controlling the shooting device to execute a shooting control operation according to the depth of field information, implement:
acquiring the contrast of a shot object in each visual angle area, and screening the depth of field information according to the contrast;
and controlling the shooting device to execute shooting control operation according to the depth-of-field information after the screening processing.
Optionally, the plurality of view angle regions comprises overlapping view angle regions, adjacent view angle regions, and/or spaced view angle regions.
Optionally, the processor is configured to, when controlling the shooting device to execute a shooting control operation according to the depth of field information, implement:
setting a shooting control strategy of the shooting equipment according to the depth of field information;
and controlling the shooting device to execute shooting control operation corresponding to the shooting control strategy.
Optionally, the shooting control strategy includes at least one of: a focusing control strategy, a foreground and background selection strategy, and a panoramic depth-of-field scanning strategy, wherein the focusing control strategy is used for automatically focusing on a photographed object, the foreground and background selection strategy is used for selecting foreground and background areas among the plurality of view angle areas, and the panoramic depth-of-field scanning strategy is used for acquiring the depth of field information of the object in each view angle area.
Optionally, the processor is configured to, when controlling the shooting device to perform a shooting control operation, implement:
and controlling the shooting device to focus the shot object according to the definition statistic value of each visual angle area.
Optionally, the processor is configured to control the photographing device to focus on the photographed object according to the sharpness statistic of each of the view angle regions, so as to implement:
determining a visual angle area to which the shooting object belongs, and taking the visual angle area to which the shooting object belongs as a target visual angle area;
acquiring the view angle area adjacent to the target view angle area, and taking the view angle area adjacent to the target view angle area as a candidate view angle area;
and controlling the shooting device to focus a shot object according to the definition statistic value of each candidate visual angle area.
Optionally, the processor is configured to control the photographing device to focus on a photographed object according to the sharpness statistic of each candidate view angle region, so as to implement:
determining a target sharpness statistic according to the sharpness statistic of each candidate view angle region;
and controlling the shooting device to focus the shot object according to the target definition statistic value.
Optionally, the weight values of the candidate view angle regions are the same or different.
Optionally, the processor is configured to, when determining a target sharpness statistic according to the sharpness statistic of each candidate view angle region, implement:
calculating the difference value of the definition statistics values of every two candidate view angle areas according to the definition statistics value of each candidate view angle area;
determining whether the difference of the sharpness statistic of each two candidate view angle regions is less than or equal to a first preset threshold;
and if each difference value is less than or equal to the first preset threshold, taking the definition statistic value of any candidate view angle area as the target definition statistic value.
Optionally, after determining whether the difference between the sharpness statistics of each two candidate view angle regions is less than or equal to the first preset threshold, the processor is further configured to implement:
if at least one difference value is larger than the first preset threshold, calculating an average definition statistic value according to the definition statistic value of each candidate view angle area, and taking the average definition statistic value as the target definition statistic value.
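The pairwise-difference rule above can be sketched compactly. This is an illustrative sketch only: the function name is an assumption, and "any candidate's statistic" is concretised here as the first one in the list.

```python
from itertools import combinations

def target_sharpness(stats, first_threshold):
    """Compute the target sharpness statistic from the candidate view
    angle areas' statistics.

    If every pairwise absolute difference is within the first preset
    threshold, any single statistic may serve as the target (the first
    is used here); otherwise fall back to the average statistic.
    """
    if all(abs(a - b) <= first_threshold for a, b in combinations(stats, 2)):
        return stats[0]                 # any candidate's statistic
    return sum(stats) / len(stats)      # average sharpness statistic
```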
Optionally, the processor is configured to control the photographing device to focus on a photographed object according to the sharpness statistic of each candidate view angle region, so as to implement:
determining a confidence index of the sharpness statistic of each candidate view angle region according to the depth information of each candidate view angle region;
and controlling the shooting device to focus a shot object according to the definition statistic value of the candidate visual angle area of which the credibility index is greater than or equal to a preset credibility index.
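The confidence-based filtering just described amounts to discarding candidates below the preset confidence index. How the confidence index is derived from the depth of field information is not specified in the text, so the sketch below takes the index as a given input; the pair representation and function name are assumptions.

```python
def reliable_stats(candidates, preset_confidence):
    """Keep the sharpness statistics of candidate view angle areas whose
    confidence index reaches the preset confidence index.

    `candidates` is a list of (sharpness_statistic, confidence_index)
    pairs, where the index is assumed to have been derived from each
    area's depth of field information.
    """
    return [stat for stat, conf in candidates if conf >= preset_confidence]
```

Focusing would then proceed using only the statistics that survive the filter.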
Optionally, the processor is further configured to implement:
in the process of focusing the shooting object, when the change of the position of the focusing point of the shooting equipment is detected, determining whether the shooting equipment moves;
if the shooting equipment does not move, determining the view angle area to which the focus currently belongs, and taking the view angle area to which the focus currently belongs as a target view angle area;
and controlling the shooting device to focus the shot object according to the definition statistic value and/or the depth information of the target visual angle area.
Optionally, the processor, when determining the view angle region to which the focus currently belongs, is configured to:
determining touch positions of touch operation of a user on a plurality of visual angle areas, and determining the visual angle area to which the focus belongs currently according to the touch positions; or
determining the view angle area to which the shooting object belongs, and taking the view angle area to which the shooting object belongs as the view angle area to which the focus currently belongs.
Optionally, when determining the view angle area to which the focus currently belongs according to the touch position, the processor is configured to implement:
acquiring the position coordinates of the touch position and the position coordinate set of each visual angle area;
determining the view angle area to which the touch position belongs according to the position coordinate and the position coordinate set;
and taking the visual angle area to which the touch position belongs as the visual angle area to which the focus currently belongs.
Optionally, the processor is configured to, before controlling the shooting device to focus on the shot object according to the sharpness statistic and/or the depth information of the target view angle region, further:
acquiring at least one historical definition statistic of the target view angle area and the definition statistic at the current moment, wherein the historical definition statistic is a definition statistic recorded before the current moment;
determining whether the shot object moves according to the definition statistic and at least one historical definition statistic;
and if the shot object does not move, controlling the shooting device to focus the shot object according to the definition statistic value and/or the depth information of the target visual angle area.
Optionally, the processor is configured to, after determining whether the photographic object moves according to the sharpness statistics and at least one of the historical sharpness statistics, further:
if the shot object moves, determining the moving trend of the shot object according to the definition statistic and at least one historical definition statistic;
and controlling the shooting device to focus the shot object according to the moving trend and the definition statistic value of the target visual angle area.
Optionally, after determining whether the shooting device moves, the processor is further configured to:
if the shooting equipment moves, determining the movement displacement of the shooting equipment;
determining a target visual angle area according to the movement displacement and the visual angle area to which the focus belongs at the previous moment;
and controlling the shooting device to focus the shot object according to the definition statistic value and/or the depth information of the target visual angle area.
Optionally, the movement displacement of the photographing apparatus is determined according to an inertial measurement unit and/or an image recognition device of the photographing apparatus.
Optionally, the processor is configured to, before controlling the shooting device to focus on the shot object according to the sharpness statistic and/or the depth information of the target view angle region, further:
acquiring the definition statistic value of the view angle area to which the focus belongs at the previous moment;
calculating a difference absolute value between the definition statistic of the target view angle area and the definition statistic of the view angle area to which the focusing point belongs at the previous moment;
determining whether the absolute value of the difference is less than or equal to a second preset threshold;
and focusing the shot object according to the definition statistic and/or the depth information of the target visual angle area if the absolute value of the difference is smaller than or equal to a second preset threshold.
Optionally, after determining whether the absolute value of the difference is less than or equal to a second preset threshold, the processor is further configured to:
if the absolute value of the difference is larger than a second preset threshold, acquiring the definition statistic value of each of the plurality of view angle areas again;
and controlling the shooting device to refocus the shot object according to the newly acquired definition statistic value.
Optionally, the photographing device includes at least one of: a digital camera, a video camera, and a single-lens reflex camera.
It should be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the shooting device described above may refer to the corresponding process in the foregoing shooting method embodiment, and is not described herein again.
In an embodiment of the present application, a computer-readable storage medium is further provided, where a computer program is stored in the computer-readable storage medium, where the computer program includes program instructions, and the processor executes the program instructions to implement the steps of the shooting method provided in the foregoing embodiment.
The computer-readable storage medium may be an internal storage unit of the photographing apparatus described in any of the foregoing embodiments, for example, a hard disk or a memory of the photographing apparatus. The computer-readable storage medium may also be an external storage device of the photographing apparatus, such as a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the photographing apparatus.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (48)

1. A shooting method is applied to a shooting device, and the method comprises the following steps:
acquiring a definition statistic value of each visual angle area in a plurality of visual angle areas;
determining depth of field information of each visual angle area according to the definition statistic value;
and executing shooting control operation according to the depth of field information.
2. The shooting method according to claim 1, wherein the determining depth information of each of the view angle regions according to the sharpness statistic includes:
determining the depth of field type of each visual angle area according to the definition statistic value;
and determining the depth information of each visual angle area according to the depth type.
3. The shooting method according to claim 2, wherein the determining depth information for each of the view angle regions according to the depth type includes:
if the depth of field type of the visual angle area is a first preset type, the depth of field information of the visual angle area is a null value;
if the depth of field type of the visual angle area is a second preset type, determining a definition peak value or a definition change trend according to the definition statistic value;
and determining the depth information of the visual angle area according to the definition peak value or the definition change trend.
4. The photographing method according to claim 1, wherein the performing of the photographing control operation according to the depth information includes:
acquiring the contrast of a shot object in each visual angle area, and screening the depth of field information according to the contrast;
and executing shooting control operation according to the screened depth of field information.
5. The photographing method according to claim 1, wherein the plurality of view angle regions include overlapping view angle regions, adjacent view angle regions, and/or spaced view angle regions.
6. The photographing method according to claim 1, wherein the performing of the photographing control operation according to the depth information includes:
setting a shooting control strategy of the shooting equipment according to the depth of field information;
and executing shooting control operation corresponding to the shooting control strategy.
7. The photographing method according to claim 6, wherein the photographing control strategy includes at least one of: a focusing control strategy, a foreground and background selection strategy, and a panoramic depth-of-field scanning strategy, wherein the focusing control strategy is used for automatically focusing on a photographed object, the foreground and background selection strategy is used for selecting foreground and background areas among the plurality of view angle areas, and the panoramic depth-of-field scanning strategy is used for acquiring the depth of field information of the object in each view angle area.
8. The photographing method according to any one of claims 1 to 7, wherein the performing of the photographing control operation includes:
and focusing the shot object according to the definition statistic value of each visual angle area.
9. The shooting method according to claim 8, wherein focusing the shot object according to the sharpness statistics of each of the view angle regions comprises:
determining a visual angle area to which the shooting object belongs, and taking the visual angle area to which the shooting object belongs as a target visual angle area;
acquiring the view angle area adjacent to the target view angle area, and taking the view angle area adjacent to the target view angle area as a candidate view angle area;
focusing the shot object according to the definition statistic value of each candidate visual angle area.
10. The shooting method according to claim 9, wherein focusing a shot object according to the sharpness statistic for each of the candidate view angle regions comprises:
determining a target sharpness statistic according to the sharpness statistic of each candidate view angle region;
and focusing the shot object according to the target definition statistic value.
11. The photographing method according to claim 10, wherein weight values of the respective candidate view angle regions are the same or different.
12. The shooting method according to claim 10, wherein said determining a target sharpness statistic from the sharpness statistic for each of the candidate view angle regions comprises:
calculating the difference value of the definition statistics values of every two candidate view angle areas according to the definition statistics value of each candidate view angle area;
determining whether the difference of the sharpness statistic of each two candidate view angle regions is less than or equal to a first preset threshold;
and if each difference value is less than or equal to the first preset threshold, taking the definition statistic value of any candidate view angle area as the target definition statistic value.
13. The shooting method according to claim 12, wherein after determining whether the difference between the sharpness statistics of each two of the candidate view angle regions is smaller than or equal to the first preset threshold, the method further comprises:
if at least one difference value is larger than the first preset threshold, calculating an average definition statistic value according to the definition statistic value of each candidate view angle area, and taking the average definition statistic value as the target definition statistic value.
14. The shooting method according to claim 9, wherein focusing a shot object according to the sharpness statistic for each of the candidate view angle regions comprises:
determining a confidence index of the sharpness statistic of each candidate view angle region according to the depth information of each candidate view angle region;
focusing the shot object according to the definition statistic value of the candidate visual angle area of which the credibility index is greater than or equal to a preset credibility index.
15. The photographing method according to claim 8, further comprising:
in the process of focusing the shooting object, when the change of the position of the focusing point of the shooting equipment is detected, determining whether the shooting equipment moves;
if the shooting equipment does not move, determining the view angle area to which the focus currently belongs, and taking the view angle area to which the focus currently belongs as a target view angle area;
focusing the shot object according to the definition statistic value and/or the depth information of the target visual angle area.
16. The shooting method according to claim 15, wherein the determining the view angle region to which the in-focus point currently belongs includes:
determining touch positions of touch operation of a user on a plurality of visual angle areas, and determining the visual angle area to which the focus belongs currently according to the touch positions; or
determining the view angle area to which the shooting object belongs, and taking the view angle area to which the shooting object belongs as the view angle area to which the focus currently belongs.
17. The shooting method according to claim 16, wherein the determining the view angle area to which the focus currently belongs according to the touch position comprises:
acquiring the position coordinates of the touch position and the position coordinate set of each visual angle area;
determining the view angle area to which the touch position belongs according to the position coordinate and the position coordinate set;
and taking the visual angle area to which the touch position belongs as the visual angle area to which the focus currently belongs.
18. The shooting method according to claim 15, wherein before focusing the shooting object according to the sharpness statistic and/or the depth information of the target view angle region, the method further comprises:
acquiring at least one historical definition statistic of the target view angle area and the definition statistic at the current moment, wherein the historical definition statistic is a definition statistic recorded before the current moment;
determining whether the shot object moves according to the definition statistic and at least one historical definition statistic;
and if the shot object does not move, focusing the shot object according to the definition statistic value and/or the depth information of the target visual angle area.
19. The shooting method according to claim 18, wherein after determining whether the shot object moves according to the definition statistic and at least one of the historical definition statistics, the method further comprises:
if the shot object moves, determining the moving trend of the shot object according to the definition statistic and at least one historical definition statistic;
focusing the shot object according to the moving trend and the definition statistic value of the target visual angle area.
20. The photographing method according to claim 15, wherein after determining whether the photographing apparatus is moved, further comprising:
if the shooting equipment moves, determining the movement displacement of the shooting equipment;
determining a target visual angle area according to the movement displacement and the visual angle area to which the focus belongs at the previous moment;
focusing the shot object according to the definition statistic value and/or the depth information of the target visual angle area.
21. The photographing method according to claim 20, wherein the movement displacement of the photographing apparatus is determined according to an inertial measurement unit and/or an image recognition device of the photographing apparatus.
22. The shooting method according to claim 20, wherein before focusing the shooting object according to the sharpness statistic and/or the depth information of the target view angle region, the method further comprises:
acquiring the definition statistic value of the view angle area to which the focus belongs at the previous moment;
calculating a difference absolute value between the definition statistic of the target view angle area and the definition statistic of the view angle area to which the focusing point belongs at the previous moment;
determining whether the absolute value of the difference is less than or equal to a second preset threshold;
and focusing the shot object according to the definition statistic and/or the depth information of the target visual angle area if the absolute value of the difference is smaller than or equal to a second preset threshold.
23. The photographing method according to claim 22, further comprising, after determining whether the absolute difference is less than or equal to the second preset threshold:
if the absolute difference is greater than the second preset threshold, re-acquiring the sharpness statistic of each of the plurality of view angle areas; and
refocusing on the photographed object according to the re-acquired sharpness statistics.
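The keep-or-reacquire decision of claims 22 and 23 can be sketched as follows (an illustrative sketch only; the function name and return labels are assumptions, not part of the claims):

```python
def choose_focus_action(target_stat, previous_stat, second_preset_threshold):
    """Compare the sharpness statistic of the target view angle area with
    that of the area the focus point belonged to at the previous moment."""
    if abs(target_stat - previous_stat) <= second_preset_threshold:
        # Close enough: focus using the target area's statistic and/or
        # depth of field information (claim 22).
        return "focus_with_target_area"
    # Too different: re-acquire the sharpness statistic of every view
    # angle area and refocus (claim 23).
    return "reacquire_all_areas"
```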
24. A photographing apparatus, characterized in that the photographing apparatus comprises a photographing device, a memory, and a processor;
wherein the memory is configured to store a computer program; and
the processor is configured to execute the computer program and, when executing the computer program, to implement the following steps:
acquiring a sharpness statistic of each of a plurality of view angle areas;
determining depth of field information of each view angle area according to the sharpness statistics; and
controlling the photographing device to perform a photographing control operation according to the depth of field information.
25. The photographing apparatus according to claim 24, wherein, when determining the depth of field information of each view angle area according to the sharpness statistics, the processor is configured to implement:
determining a depth of field type of each view angle area according to the sharpness statistics; and
determining the depth of field information of each view angle area according to the depth of field type.
26. The photographing apparatus according to claim 25, wherein, when determining the depth of field information of each view angle area according to the depth of field type, the processor is configured to implement:
if the depth of field type of a view angle area is a first preset type, setting the depth of field information of the view angle area to a null value;
if the depth of field type of a view angle area is a second preset type, determining a sharpness peak value or a sharpness variation trend according to the sharpness statistics; and
determining the depth of field information of the view angle area according to the sharpness peak value or the sharpness variation trend.
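A minimal sketch of the depth determination in claim 26, assuming the sharpness statistics of a view angle area come from a focus sweep and that the index of the sharpest sample stands in for the in-focus lens position (both assumptions beyond the claim text):

```python
def depth_of_field_info(depth_type, stats_over_focus_sweep):
    """First preset type (e.g. a textureless area with no usable peak):
    return a null value. Second preset type: derive the depth cue from
    the sharpness peak across the sweep."""
    if depth_type == "first":
        return None  # null value: no depth of field information
    # Index of the sharpest sample, used here as a stand-in for the
    # lens position at which the area is in focus.
    peak_index = max(range(len(stats_over_focus_sweep)),
                     key=lambda i: stats_over_focus_sweep[i])
    return peak_index
```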
27. The photographing apparatus according to claim 24, wherein, when controlling the photographing device to perform the photographing control operation according to the depth of field information, the processor is configured to implement:
acquiring the contrast of the photographed object in each view angle area, and screening the depth of field information according to the contrast; and
controlling the photographing device to perform the photographing control operation according to the screened depth of field information.
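The contrast-based screening of claim 27 might look like the following sketch (the dictionary layout, function name, and threshold parameter are assumptions):

```python
def screen_depth_by_contrast(depth_by_area, contrast_by_area, min_contrast):
    """Discard depth of field information from view angle areas whose
    subject contrast is too low for the statistic to be reliable."""
    return {area: depth
            for area, depth in depth_by_area.items()
            if contrast_by_area.get(area, 0.0) >= min_contrast}
```

The photographing control operation would then consult only the surviving entries.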
28. The photographing apparatus according to claim 24, wherein the plurality of view angle areas comprise overlapping, adjacent, and/or alternate view angle areas.
29. The photographing apparatus according to claim 24, wherein, when controlling the photographing device to perform the photographing control operation according to the depth of field information, the processor is configured to implement:
setting a photographing control strategy of the photographing apparatus according to the depth of field information; and
controlling the photographing device to perform the photographing control operation corresponding to the photographing control strategy.
30. The photographing apparatus according to claim 29, wherein the photographing control strategy comprises at least one of a focus control strategy, a foreground/background selection strategy, and a panoramic deep-focus scanning strategy, wherein the focus control strategy is used for automatically focusing on the photographed object, the foreground/background selection strategy is used for selecting foreground and background areas among the plurality of view angle areas, and the panoramic deep-focus scanning strategy is used for acquiring the depth of field information of the object in each view angle area.
31. The photographing apparatus according to any one of claims 24 to 30, wherein, when controlling the photographing device to perform the photographing control operation, the processor is configured to implement:
controlling the photographing device to focus on the photographed object according to the sharpness statistic of each view angle area.
32. The photographing apparatus according to claim 31, wherein, when controlling the photographing device to focus on the photographed object according to the sharpness statistic of each view angle area, the processor is configured to implement:
determining the view angle area to which the photographed object belongs, and taking the view angle area to which the photographed object belongs as a target view angle area;
acquiring the view angle areas adjacent to the target view angle area, and taking the view angle areas adjacent to the target view angle area as candidate view angle areas; and
controlling the photographing device to focus on the photographed object according to the sharpness statistic of each candidate view angle area.
33. The photographing apparatus according to claim 32, wherein, when controlling the photographing device to focus on the photographed object according to the sharpness statistic of each candidate view angle area, the processor is configured to implement:
determining a target sharpness statistic according to the sharpness statistic of each candidate view angle area; and
controlling the photographing device to focus on the photographed object according to the target sharpness statistic.
34. The photographing apparatus according to claim 33, wherein the weight values of the candidate view angle areas are the same or different.
35. The photographing apparatus according to claim 33, wherein, when determining the target sharpness statistic according to the sharpness statistic of each candidate view angle area, the processor is configured to implement:
calculating the difference between the sharpness statistics of every two candidate view angle areas according to the sharpness statistic of each candidate view angle area;
determining whether the difference between the sharpness statistics of every two candidate view angle areas is less than or equal to a first preset threshold; and
taking the sharpness statistic of any candidate view angle area as the target sharpness statistic if each difference is less than or equal to the first preset threshold.
36. The photographing apparatus according to claim 35, wherein, after determining whether the difference between the sharpness statistics of every two candidate view angle areas is less than or equal to the first preset threshold, the processor is further configured to implement:
if at least one difference is greater than the first preset threshold, calculating an average sharpness statistic according to the sharpness statistic of each candidate view angle area, and taking the average sharpness statistic as the target sharpness statistic.
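Claims 35 and 36 together describe a target-statistic selection that can be sketched as follows (illustrative; the function name is an assumption):

```python
from itertools import combinations

def target_sharpness(candidate_stats, first_preset_threshold):
    """If every pairwise difference between the candidate areas' sharpness
    statistics is within the first preset threshold, any one statistic may
    serve as the target (claim 35); otherwise fall back to the average of
    all candidates (claim 36)."""
    if all(abs(a - b) <= first_preset_threshold
           for a, b in combinations(candidate_stats, 2)):
        return candidate_stats[0]  # any candidate will do
    return sum(candidate_stats) / len(candidate_stats)
```

Per claim 34, unequal weights could replace the plain average with a weighted one; the sketch uses equal weights.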
37. The photographing apparatus according to claim 32, wherein, when controlling the photographing device to focus on the photographed object according to the sharpness statistic of each candidate view angle area, the processor is configured to implement:
determining a confidence index of the sharpness statistic of each candidate view angle area according to the depth of field information of each candidate view angle area; and
controlling the photographing device to focus on the photographed object according to the sharpness statistics of the candidate view angle areas whose confidence index is greater than or equal to a preset confidence index.
38. The photographing apparatus according to claim 31, wherein the processor is further configured to implement:
in the process of focusing on the photographed object, when a change in the position of the focus point of the photographing apparatus is detected, determining whether the photographing apparatus has moved;
if the photographing apparatus has not moved, determining the view angle area to which the focus point currently belongs, and taking the view angle area to which the focus point currently belongs as a target view angle area; and
controlling the photographing device to focus on the photographed object according to the sharpness statistic and/or the depth of field information of the target view angle area.
39. The photographing apparatus according to claim 38, wherein, when determining the view angle area to which the focus point currently belongs, the processor is configured to implement:
determining a touch position of a user's touch operation on the plurality of view angle areas, and determining the view angle area to which the focus point currently belongs according to the touch position; or
determining the view angle area to which the photographed object belongs, and taking the view angle area to which the photographed object belongs as the view angle area to which the focus point currently belongs.
40. The photographing apparatus according to claim 39, wherein, when determining the view angle area to which the focus point currently belongs according to the touch position, the processor is configured to implement:
acquiring the position coordinates of the touch position and the position coordinate set of each view angle area;
determining the view angle area to which the touch position belongs according to the position coordinates and the position coordinate sets; and
taking the view angle area to which the touch position belongs as the view angle area to which the focus point currently belongs.
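The touch-to-area lookup of claim 40 can be sketched with axis-aligned rectangles standing in for each area's "position coordinate set" (the rectangle representation and names are assumptions):

```python
def area_of_touch(touch_xy, area_rects):
    """Map a touch coordinate to the view angle area containing it.
    area_rects maps an area id to its bounding box (x0, y0, x1, y1)."""
    x, y = touch_xy
    for area_id, (x0, y0, x1, y1) in area_rects.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return area_id
    return None  # touch landed outside every view angle area
```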
41. The photographing apparatus according to claim 38, wherein, when controlling the photographing device to focus on the photographed object according to the sharpness statistic and/or the depth of field information of the target view angle area, the processor is further configured to implement:
acquiring at least one historical sharpness statistic of the target view angle area and the sharpness statistic at the current moment, wherein a historical sharpness statistic is a sharpness statistic recorded before the current moment;
determining whether the photographed object has moved according to the sharpness statistic and the at least one historical sharpness statistic; and
if the photographed object has not moved, controlling the photographing device to focus on the photographed object according to the sharpness statistic and/or the depth of field information of the target view angle area.
42. The photographing apparatus according to claim 41, wherein, after determining whether the photographed object has moved according to the sharpness statistic and the at least one historical sharpness statistic, the processor is further configured to implement:
if the photographed object has moved, determining a movement trend of the photographed object according to the sharpness statistic and the at least one historical sharpness statistic; and
controlling the photographing device to focus on the photographed object according to the movement trend and the sharpness statistic of the target view angle area.
43. The photographing apparatus according to claim 38, wherein, after determining whether the photographing apparatus has moved, the processor is further configured to implement:
if the photographing apparatus has moved, determining a movement displacement of the photographing apparatus;
determining a target view angle area according to the movement displacement and the view angle area to which the focus point belonged at the previous moment; and
controlling the photographing device to focus on the photographed object according to the sharpness statistic and/or the depth of field information of the target view angle area.
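One way to realize the displacement-based re-targeting of claim 43, assuming the view angle areas tile the frame as a regular grid of equal cells (the grid layout, pixel units, and names are assumptions):

```python
def shifted_target_area(prev_area, displacement_px,
                        grid_cols, grid_rows, cell_w, cell_h):
    """prev_area is the (col, row) cell the focus point belonged to at
    the previous moment; displacement_px is the device displacement in
    pixels (e.g. from an IMU or image recognition, per claim 44). The
    scene translates opposite to the device motion, so the target cell
    shifts against the displacement."""
    col, row = prev_area
    dx, dy = displacement_px
    new_col = col - round(dx / cell_w)
    new_row = row - round(dy / cell_h)
    if 0 <= new_col < grid_cols and 0 <= new_row < grid_rows:
        return (new_col, new_row)
    return None  # subject left the frame; statistics must be re-acquired
```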
44. The photographing apparatus according to claim 43, wherein the movement displacement of the photographing apparatus is determined according to an inertial measurement unit and/or an image recognition device of the photographing apparatus.
45. The photographing apparatus according to claim 43, wherein, before controlling the photographing device to focus on the photographed object according to the sharpness statistic and/or the depth of field information of the target view angle area, the processor is further configured to implement:
acquiring the sharpness statistic of the view angle area to which the focus point belonged at the previous moment;
calculating an absolute difference between the sharpness statistic of the target view angle area and the sharpness statistic of the view angle area to which the focus point belonged at the previous moment;
determining whether the absolute difference is less than or equal to a second preset threshold; and
focusing on the photographed object according to the sharpness statistic and/or the depth of field information of the target view angle area if the absolute difference is less than or equal to the second preset threshold.
46. The photographing apparatus according to claim 45, wherein, after determining whether the absolute difference is less than or equal to the second preset threshold, the processor is further configured to implement:
if the absolute difference is greater than the second preset threshold, re-acquiring the sharpness statistic of each of the plurality of view angle areas; and
controlling the photographing device to refocus on the photographed object according to the re-acquired sharpness statistics.
47. The photographing apparatus according to claim 24, wherein the photographing device comprises at least one of a digital camera, a video camera, and a single-lens reflex camera.
48. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the photographing method according to any one of claims 1 to 23.
CN201980059476.3A 2019-12-12 2019-12-12 Photographing method, photographing apparatus, and computer-readable storage medium Pending CN112740649A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/124960 WO2021114194A1 (en) 2019-12-12 2019-12-12 Photography method, photography device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112740649A true CN112740649A (en) 2021-04-30

Family

ID=75589255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980059476.3A Pending CN112740649A (en) 2019-12-12 2019-12-12 Photographing method, photographing apparatus, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112740649A (en)
WO (1) WO2021114194A1 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973978A (en) * 2014-04-17 2014-08-06 华为技术有限公司 Method and electronic device for achieving refocusing
CN104184935A (en) * 2013-05-27 2014-12-03 鸿富锦精密工业(深圳)有限公司 Image shooting device and method
CN106204554A (en) * 2016-07-01 2016-12-07 厦门美图之家科技有限公司 Depth of view information acquisition methods based on multiple focussing image, system and camera terminal
US20180316870A1 (en) * 2016-01-15 2018-11-01 Olympus Corporation Focus control device, endoscope apparatus, and method for operating focus control device
CN109141823A (en) * 2018-08-16 2019-01-04 南京理工大学 A kind of microscopic system depth of field measuring device and method based on clarity evaluation
CN110278383A (en) * 2019-07-25 2019-09-24 浙江大华技术股份有限公司 Focus method, device and electronic equipment, storage medium
CN110455258A (en) * 2019-09-01 2019-11-15 中国电子科技集团公司第二十研究所 A kind of unmanned plane Terrain Clearance Measurement method based on monocular vision

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9596401B2 (en) * 2006-10-02 2017-03-14 Sony Corporation Focusing an image based on a direction of a face of a user
TW200928544A (en) * 2007-12-19 2009-07-01 Altek Corp Method of automatically adjusting the depth of field
CN103167226B (en) * 2011-12-12 2016-06-01 华晶科技股份有限公司 Produce method and the device of panoramic deep image
JP5893713B1 (en) * 2014-11-04 2016-03-23 オリンパス株式会社 Imaging apparatus, imaging method, and processing program
US10033917B1 (en) * 2015-11-13 2018-07-24 Apple Inc. Dynamic optical shift/tilt lens


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117288168A (en) * 2023-11-24 2023-12-26 山东中宇航空科技发展有限公司 Unmanned aerial vehicle city building system of taking photo by plane of low-power consumption
CN117288168B (en) * 2023-11-24 2024-01-30 山东中宇航空科技发展有限公司 Unmanned aerial vehicle city building system of taking photo by plane of low-power consumption

Also Published As

Publication number Publication date
WO2021114194A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
KR101503333B1 (en) Image capturing apparatus and control method thereof
JP6271990B2 (en) Image processing apparatus and image processing method
US9432575B2 (en) Image processing apparatus
US9501834B2 (en) Image capture for later refocusing or focus-manipulation
US10115178B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
US10659676B2 (en) Method and apparatus for tracking a moving subject image based on reliability of the tracking state
US9865064B2 (en) Image processing apparatus, image processing method, and storage medium
US9204034B2 (en) Image processing apparatus and image processing method
US8493493B2 (en) Imaging apparatus, imaging apparatus control method, and computer program
TWI471677B (en) Auto focus method and auto focus apparatus
US20150003676A1 (en) Image processing apparatus for performing object recognition focusing on object motion, and image processing method therefor
US20200267309A1 (en) Focusing method and device, and readable storage medium
CN104333748A (en) Method, device and terminal for obtaining image main object
US9615019B2 (en) Image capturing apparatus and control method for image capturing apparatus with particle filter for main object detection and selecting focus detection area based on priority
US20150201182A1 (en) Auto focus method and auto focus apparatus
JP6512907B2 (en) Shift element control device, shift element control program and optical apparatus
CN108710192B (en) Automatic focusing system and method based on statistical data
US20140184792A1 (en) Image processing apparatus and image processing method
CN113572958B (en) Method and equipment for automatically triggering camera to focus
JP6525809B2 (en) Focus detection apparatus and control method thereof
JP2015012482A (en) Image processing apparatus and image processing method
JP5968379B2 (en) Image processing apparatus and control method thereof
CN106922181B (en) Direction-aware autofocus
US10277795B2 (en) Image pickup apparatus for taking static image, control method therefor, and storage medium storing control program therefor
CN112740649A (en) Photographing method, photographing apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210430)