WO2021114194A1 - Photographing method, photographing device, and computer-readable storage medium - Google Patents

Photographing method, photographing device, and computer-readable storage medium

Info

Publication number
WO2021114194A1
WO2021114194A1 (international application PCT/CN2019/124960)
Authority
WO
WIPO (PCT)
Prior art keywords
viewing angle
sharpness
statistical value
angle area
photographing device
Prior art date
Application number
PCT/CN2019/124960
Other languages
English (en)
French (fr)
Inventor
韩守谦
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/124960 priority Critical patent/WO2021114194A1/zh
Priority to CN201980059476.3A priority patent/CN112740649A/zh
Publication of WO2021114194A1 publication Critical patent/WO2021114194A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Definitions

  • This application relates to the technical field of shooting control, and in particular to a shooting method, a shooting device, and a computer-readable storage medium.
  • Users can take photos of people and landscapes with smartphones, SLR cameras, digital cameras, and tablet computers.
  • Many shooting devices support an Auto Focus (AF) function, which lets users obtain clear photos of people and landscapes through automatic focusing.
  • However, current shooting equipment usually only calculates the sharpness statistical value of the focus area and does not record the global distribution of sharpness statistical values. As a result, the user lacks a grasp of the shooting parameters for the global area when shooting, which affects the shooting effect and degrades the user experience.
  • the present application provides a shooting method, a shooting device, and a computer-readable storage medium, which aim to improve the shooting effect of images and improve user experience.
  • this application provides a photographing method applied to a photographing device, and the method includes:
  • a shooting control operation is performed.
  • the present application also provides photographing equipment, which includes a photographing device, a memory, and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • control the photographing device to perform photographing control operations.
  • the present application also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, it causes the processor to implement the shooting method described above.
  • the embodiments of the present application provide a shooting method, a shooting device, and a computer-readable storage medium.
  • By obtaining the sharpness statistical values of multiple viewing angle areas, the depth of field information of each viewing angle area can be determined from those statistical values, and shooting control operations can then be performed based on the depth information of each viewing angle area. This makes it convenient for the user to quickly grasp the shooting parameters during shooting and to capture images with better effects, greatly improving the user experience.
  • FIG. 1 is a schematic flowchart of steps of a shooting method provided by an embodiment of the present application
  • FIG. 2 is a block diagram of a full-view area of a photographing device in an embodiment of the present application
  • FIG. 3 is a schematic diagram of multiple viewing angle areas of a photographing device in an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of sub-steps of the shooting method in FIG. 1;
  • FIG. 5 is a schematic flowchart of steps of another shooting method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a position before the focus point changes in the embodiment of the present application.
  • FIG. 7 is a schematic diagram of a position after the focus point is changed in the embodiment of the present application.
  • FIG. 8 is a schematic diagram of another position after the focus point is changed in the embodiment of the present application.
  • FIG. 9 is a schematic diagram of a position of the focus point of the photographing device before moving in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a position of the focus point of the shooting device after moving in an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of the structure of a photographing device according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of steps of a photographing method according to an embodiment of the present application.
  • the shooting method can be applied to a shooting device to perform shooting control operations.
  • The shooting equipment includes mobile phones, tablet computers, notebook computers, movable platforms equipped with shooting devices, and handheld gimbals equipped with shooting devices.
  • Movable platforms include unmanned aerial vehicles and unmanned vehicles.
  • Unmanned aerial vehicles include rotary-wing types, such as quad-rotor, hexa-rotor, and octo-rotor aircraft, as well as fixed-wing aircraft; a combination of rotary-wing and fixed-wing types is also possible, which is not limited here.
  • the shooting method includes step S101 to step S103.
  • The multiple viewing angle areas may be some or all of the viewing angle area blocks in the full viewing angle area of the photographing device, where one viewing angle area block corresponds to one viewing angle area.
  • FIG. 2 is a block diagram of the full viewing angle area of the shooting device in an embodiment of the present application. As shown in FIG. 2, the full viewing angle area of the shooting device is divided into 9 viewing angle area blocks.
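  • The 3x3 division of the full viewing angle area described above can be sketched as a minimal Python example. The function name and grid parameters are illustrative, not from the application; any grid size could be used.

```python
import numpy as np

def split_into_view_areas(image, rows=3, cols=3):
    """Divide the full viewing angle area of an image into rows x cols
    viewing angle area blocks, mirroring the 9-block example of Fig. 2.

    `image` is a 2-D grayscale array; each returned block is a view of
    one viewing angle area.
    """
    h, w = image.shape
    areas = []
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            areas.append(block)
    return areas
```

Per-area sharpness statistics can then be computed over each block independently.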
  • the photographing device obtains the sharpness statistical value of each of the multiple viewing angle regions, that is, determines the sharpness statistical value of each viewing angle area through the evaluation function of the sharpness statistical value.
  • the evaluation function of the sharpness statistical value includes, but is not limited to, the Brenner gradient function, the Tenengrad gradient function, the Laplacian gradient function, the gray-level variance function, and the gray-level variance product function.
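  • Three of the named evaluation functions can be sketched directly in Python. These are standard textbook forms of the Brenner gradient, Laplacian gradient, and gray-level variance measures; the exact formulations used by the application are not specified, so treat these as illustrative.

```python
import numpy as np

def brenner(img):
    """Brenner gradient: sum of squared differences between pixels
    two columns apart; larger values indicate a sharper image."""
    d = img[:, 2:].astype(np.float64) - img[:, :-2].astype(np.float64)
    return float(np.sum(d ** 2))

def laplacian_energy(img):
    """Laplacian gradient: apply a 4-neighbour Laplacian kernel to the
    interior pixels and sum the squared responses."""
    f = img.astype(np.float64)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(np.sum(lap ** 2))

def gray_variance(img):
    """Gray-level variance: variance of pixel intensities; a perfectly
    uniform (defocused) patch scores zero."""
    f = img.astype(np.float64)
    return float(np.mean((f - f.mean()) ** 2))
```

A sharp edge scores higher than a uniform patch under all three measures, which is what makes them usable as focus evaluation functions.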
  • The multiple viewing angle areas include overlapping viewing angle areas, adjacent viewing angle areas, and/or spaced viewing angle areas. That is, the multiple viewing angle areas may consist only of overlapping areas, only of adjacent areas, or only of spaced areas; or they may combine overlapping and adjacent areas, overlapping and spaced areas, adjacent and spaced areas, or all three. It is understood that overlapping, adjacent, and spaced viewing angle areas can be combined arbitrarily to obtain the multiple viewing angle areas, which is not specifically limited in this application.
  • FIG. 3 is a schematic diagram of multiple viewing angle areas of the photographing device in an embodiment of the present application.
  • Viewing angle area A, viewing angle area B, and viewing angle area C are spaced from each other; viewing angle area D, viewing angle area E, and viewing angle area F are spaced from each other; viewing angle area G, viewing angle area H, and viewing angle area I are spaced from each other;
  • the viewing area A is adjacent to the viewing area D
  • the viewing area B is adjacent to the viewing area E
  • the viewing area C is adjacent to the viewing area F
  • the depth information of each viewing angle area is determined according to the sharpness statistical value of each viewing angle area.
  • the depth of field information includes the depth of field of the scene in the viewing angle area.
  • For example, the depth of field includes foreground, middle ground, and background; of course, in other embodiments, the number of depth-of-field levels is not limited.
  • Multiple depth-of-field levels can be defined according to actual needs, such as the current shooting scene and/or shooting strategy. It is understood that the above embodiments are only exemplary descriptions and are not limiting.
  • step S102 includes sub-steps S1021 to S1022.
  • The sharpness statistical values of each viewing angle area are analyzed to determine how the sharpness of the object in each viewing angle area changes, where the change may be: no change in sharpness, a monotonic rise in sharpness, a monotonic decline in sharpness, or the presence of a sharpness peak. If the sharpness does not change, the depth type of the corresponding viewing angle area is the first preset type; if the sharpness rises monotonically, declines monotonically, or has a sharpness peak, the depth type of the corresponding viewing angle area is the second preset type.
  • If the depth type of the viewing angle area is the first preset type, the depth information of that viewing angle area is a null value; if the depth type is the second preset type, the sharpness peak or the sharpness change trend is determined according to the sharpness statistical values.
  • The sharpness statistical values of a viewing angle area whose depth type is the second preset type are analyzed to obtain the sharpness change of the object in that area. If the change has a sharpness peak, the sharpness peak value of the viewing angle area is extracted from the sharpness statistical values; if the sharpness rises or declines monotonically, the sharpness change trend of the viewing angle area is a monotonic rise or a monotonic decline. There is at least one sharpness peak.
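  • The classification just described can be sketched as a small Python helper over a focus-sweep series of sharpness statistical values. The function names and return labels ('flat', 'rising', 'falling', 'peak') are illustrative stand-ins for the first/second preset types, not terminology from the application.

```python
def classify_sharpness_change(stats):
    """Classify a focus-sweep sharpness series for one viewing angle area.

    'flat' corresponds to the first preset type (no change in sharpness);
    'rising', 'falling', and 'peak' correspond to the second preset type.
    """
    diffs = [b - a for a, b in zip(stats, stats[1:])]
    if all(d == 0 for d in diffs):
        return 'flat'
    if all(d >= 0 for d in diffs):
        return 'rising'
    if all(d <= 0 for d in diffs):
        return 'falling'
    return 'peak'

def sharpness_peak(stats):
    """Index and value of the sharpness peak in the series."""
    i = max(range(len(stats)), key=lambda k: stats[k])
    return i, stats[i]
```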
  • The object distance or object distance range corresponding to the sharpness peak is obtained, and the depth information of the viewing angle area is determined based on that object distance or range; that is, the greater the object distance, the greater the depth of field, and the smaller the object distance, the smaller the depth of field.
  • the change trend of the object distance can be determined, and the depth information of the viewing angle area can be determined according to the change trend of the object distance.
  • For example, when the zoom motor moves from infinity to the nearest end: if the sharpness change trend is a monotonic rise in sharpness, the object distance change trend is a monotonic decrease in object distance; if the sharpness change trend is a monotonic decline in sharpness, the object distance change trend is a monotonic increase in object distance. Further, if the object distance increases monotonically, the depth of field of the object is larger; if the object distance decreases monotonically, the depth of field of the object is smaller. It is understood that this is only an exemplary description and is not limiting.
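  • The trend mapping above can be written out as a tiny lookup, assuming (as the text does) a sweep from infinity to the nearest end; the opposite sweep direction would invert the mapping. The function and label names are illustrative only.

```python
def object_distance_trend(sharpness_trend, sweep='inf_to_near'):
    """Map a sharpness change trend to an object-distance change trend.

    With the motor sweeping from infinity to the nearest end, rising
    sharpness implies a decreasing object distance and falling sharpness
    implies an increasing one; a near-to-infinity sweep inverts this.
    """
    mapping = {'rising': 'decreasing', 'falling': 'increasing'}
    trend = mapping[sharpness_trend]
    if sweep == 'near_to_inf':
        trend = 'increasing' if trend == 'decreasing' else 'decreasing'
    return trend

def depth_trend(distance_trend):
    """Greater object distance means greater depth of field, and vice versa."""
    return 'larger' if distance_trend == 'increasing' else 'smaller'
```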
  • the shooting control operation is performed based on the depth information of each viewing angle area, which is convenient for the user to quickly grasp the shooting parameters during shooting, and can shoot images with better effects, which greatly improves the user experience.
  • The contrast of the photographed object in each viewing angle area is acquired, the depth information is filtered according to the contrast, and the shooting control operation is performed according to the filtered depth information.
  • the specific method of the screening process is: comparing the contrast of each viewing angle area with a preset contrast, and removing the depth information of the viewing angle area whose contrast is less than the preset contrast.
  • the aforementioned preset contrast can be set based on actual conditions, which is not specifically limited in this application. Filtering the depth information of each viewing angle area through the contrast of the viewing angle area can remove the depth information that does not meet the contrast requirements.
  • The shooting control operation is then performed, which makes it convenient for the user to grasp the shooting parameters during shooting, further capture images with better effects, and improve the user experience.
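  • The contrast-based screening step can be sketched as follows; the dict-based representation and function name are illustrative assumptions, since the application only specifies the rule "remove depth information of areas whose contrast is below the preset contrast".

```python
def filter_depth_by_contrast(depth_info, contrasts, preset_contrast):
    """Keep depth information only for viewing angle areas whose
    contrast meets the preset contrast.

    `depth_info` and `contrasts` are dicts keyed by viewing angle
    area id; areas with contrast below `preset_contrast` are dropped.
    """
    return {area: depth for area, depth in depth_info.items()
            if contrasts.get(area, 0.0) >= preset_contrast}
```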
  • the shooting control strategy of the shooting device is set, and the shooting control operation corresponding to the shooting control strategy is executed.
  • The shooting control strategy includes at least one of the following: a focus control strategy, a front and back scene selection strategy, and a panoramic focus-sweep strategy. The focus control strategy is used to automatically focus on a photographed object; the front and back scene selection strategy is used to select front and back scene areas among the multiple viewing angle areas; and the panoramic focus-sweep strategy is used to obtain the depth information of the objects in each viewing angle area. The front and back scene selection strategy includes selecting the foreground area in a viewing angle area for focusing and/or selecting the background area in a viewing angle area for focusing.
  • Whether the selected current area is the foreground area or the background area is determined according to the object distance and/or contrast of the object in the viewing angle area. Specifically, if the object distance of the object is less than or equal to the preset object distance, the foreground area is selected; if the object distance is greater than the preset object distance, the background area is selected. Alternatively, if the contrast of the object is greater than or equal to the preset contrast, the foreground area is selected; if the contrast of the object is less than the preset contrast, the background area is selected.
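  • The two selection rules above can be condensed into one helper. The function signature and the 'distance'/'contrast' switch are illustrative; the application only fixes the comparison rules themselves.

```python
def select_scene_area(object_distance, contrast,
                      preset_distance, preset_contrast,
                      by='distance'):
    """Front/back scene selection for a viewing angle area.

    Distance rule: distance <= preset -> foreground, else background.
    Contrast rule: contrast >= preset -> foreground, else background.
    """
    if by == 'distance':
        return 'foreground' if object_distance <= preset_distance else 'background'
    return 'foreground' if contrast >= preset_contrast else 'background'
```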
  • The multiple viewing angle areas may cover part of or the entire full viewing angle area.
  • For example, the subject can be focused according to the sharpness statistical values of viewing angle areas A, B, and C in FIG. 3, or according to the sharpness statistical values of viewing angle areas A, B, C, D, E, F, G, H, and I in FIG. 3. It is understood that the number of viewing angle areas obtained by dividing the full viewing angle area can be set based on actual conditions, which is not specifically limited in this application. Focusing on the subject using the sharpness statistical values of multiple viewing angle areas can improve the focus success rate and also ensure the sharpness of the object's imaging.
  • The viewing angle area to which the photographed object belongs is determined and taken as the target viewing angle area; the viewing angle areas adjacent to the target viewing angle area are acquired and used as candidate viewing angle areas; and the photographed object is focused according to the sharpness statistical value of each candidate viewing angle area. Focusing on the photographed object using the sharpness statistical values of the viewing angle areas adjacent to the area to which it belongs can further improve the focus success rate and ensure the imaging sharpness of the object.
  • the candidate viewing angle area may also be set according to actual conditions, and is not limited to the viewing angle area adjacent to the target viewing angle area.
  • The target sharpness statistical value is determined according to the sharpness statistical value of each candidate viewing angle area, and the photographed object is focused according to the target sharpness statistical value.
  • the weight value of each candidate viewing angle area is the same or different, and can be set based on actual conditions, which is not specifically limited in this application.
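  • Combining the candidate areas' sharpness statistical values with equal or unequal weights can be sketched as a weighted average. The averaging form is an assumption for illustration; the application only states that a target value is derived from the candidate values and their weights.

```python
def target_sharpness(stats, weights=None):
    """Weighted target sharpness statistical value over candidate
    viewing angle areas.

    With no weights given, each candidate area counts equally; unequal
    weights can emphasise particular areas.
    """
    if weights is None:
        weights = [1.0] * len(stats)
    total = sum(weights)
    return sum(s * w for s, w in zip(stats, weights)) / total
```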
  • the above-mentioned first preset threshold may be set based on actual conditions, which is not specifically limited in this application.
  • The credibility index of the sharpness statistical value of each candidate viewing angle area is determined, and the subject is focused according to the sharpness statistical values of the candidate viewing angle areas whose credibility index is greater than or equal to the preset credibility index.
  • The method for determining the credibility index is specifically: obtain the object distance from the depth information of the candidate viewing angle area and record it as the first object distance; determine the object distance of the object in the candidate viewing angle area based on the sharpness statistical values of that area and record it as the second object distance; then calculate the difference between the first object distance and the second object distance and obtain the credibility index corresponding to the difference.
  • the above-mentioned preset credibility index can be set based on actual conditions, which is not specifically limited in this application.
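  • The credibility check can be sketched as below. The 1/(1 + |diff|) mapping is only an illustrative monotone choice: the application says the credibility index corresponds to the distance difference but does not fix a formula, and the dict field names are hypothetical.

```python
def credibility_index(first_object_distance, second_object_distance):
    """Credibility of a candidate area's sharpness statistic: the
    smaller the gap between the distance implied by the depth
    information and the distance implied by the sharpness statistics,
    the higher the credibility."""
    diff = abs(first_object_distance - second_object_distance)
    return 1.0 / (1.0 + diff)

def credible_areas(areas, preset_index):
    """Keep candidate areas whose credibility index meets the preset.

    Each area is a dict with hypothetical keys 'd1' (first object
    distance) and 'd2' (second object distance).
    """
    return [a for a in areas
            if credibility_index(a['d1'], a['d2']) >= preset_index]
```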
  • Performing the shooting control operation in this way facilitates the user's grasp of the shooting parameters during shooting and allows images with better effects to be captured, greatly improving the user experience.
  • FIG. 5 is a schematic flowchart of the steps of another shooting method according to an embodiment of the present application.
  • the shooting method includes steps S201 to S203.
  • the photographing device detects whether the position of the focusing point of the photographing device has changed, and when detecting that the position of the focusing point of the photographing device has changed, it is determined whether the photographing device has moved.
  • the inertial measurement unit of the photographing device can be used to determine whether the photographing device moves.
  • The position information collected by the inertial measurement unit at the current system time is acquired, along with the historical position information collected a preset time before the current system time, where the historical position information is position information collected by the inertial measurement unit before the current system time. The position information is compared with the historical position information: if they are the same, it is determined that the photographing device has not moved; if they are not the same, it is determined that the photographing device has moved.
  • the foregoing preset time can be set based on actual conditions, and this application does not specifically limit this.
  • the preset time is 1 second.
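  • The IMU comparison can be sketched in a few lines. The tolerance parameter is an assumed noise allowance not spelled out in the text, which compares readings for strict equality.

```python
def has_moved(current_position, historical_position, tol=0.0):
    """Decide whether the photographing device has moved.

    Compares the position sampled at the current system time against
    the one sampled a preset interval (e.g. 1 s) earlier; readings
    differing by more than `tol` on any axis count as movement.
    """
    return any(abs(c - h) > tol
               for c, h in zip(current_position, historical_position))
```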
  • If the shooting device has not moved, it can be concluded that the change in the focus point's position was caused by the user's touch operation on a viewing angle area or by movement of the photographed object.
  • The shooting device determines the viewing angle area to which the focus point currently belongs and takes that viewing angle area as the target viewing angle area.
  • The touch position of the user's touch operation on the multiple viewing angle areas is determined, and the viewing angle area to which the focus point currently belongs is determined according to the touch position; or the viewing angle area to which the photographed object belongs is determined, and that viewing angle area is used as the viewing angle area to which the focus point currently belongs.
  • The position coordinates of the touch position and the position coordinate set of each viewing angle area are acquired; the viewing angle area to which the touch position belongs is determined from the position coordinates and the coordinate sets, that is, the viewing angle area whose position coordinate set contains the position coordinates is taken as the area to which the touch position belongs; and that viewing angle area is taken as the viewing angle area to which the focus point currently belongs.
  • The position coordinates are the coordinates of the touch position within the display area, and the photographing device stores a set of position coordinates for each viewing angle area.
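  • Looking up the viewing angle area that contains a touch position can be sketched as a point-in-rectangle search. Representing each area's stored coordinate set as an (x0, y0, x1, y1) rectangle is an assumption for illustration.

```python
def view_area_of_touch(touch_xy, area_bounds):
    """Find the viewing angle area whose coordinate set contains the
    touch position.

    `area_bounds` maps an area id to its (x0, y0, x1, y1) rectangle,
    standing in for the stored position coordinate set of that area.
    """
    x, y = touch_xy
    for area, (x0, y0, x1, y1) in area_bounds.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return area
    return None  # touch fell outside every stored area
```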
  • the focus point is located in the view area E.
  • the position of the focus point changes.
  • the position of the focus point after the change is shown in Figure 7, and the focus point is located in the view area D.
  • If the photographed object in viewing angle area E moves, the position of the focus point changes.
  • the position of the focus point after the position change is shown in FIG. 8, and the focus point is located in the view area F.
  • S203 Focusing on the photographed object according to the sharpness statistical value of the target viewing angle area and/or the depth information.
  • the photographing device may focus on the photographed object based on the sharpness statistical value and/or depth information of the target viewing angle area. It can quickly focus on the subject after the position of the focus point changes, improving the focus success rate.
  • At least one historical sharpness statistical value and the current sharpness statistical value of the target viewing angle area are acquired, where a historical sharpness statistical value is a sharpness statistical value recorded before the current time. Whether the photographed object has moved is determined from the current sharpness statistical value and the at least one historical sharpness statistical value; if the photographed object has not moved, it is focused according to the sharpness statistical value and/or depth information of the target viewing angle area.
  • Based on the current sharpness statistical value and at least one historical sharpness statistical value, it can be determined whether the sharpness of the photographed object has changed: if the current value is the same as the historical value, the sharpness of the photographed object has not changed; if they differ, the sharpness has changed. If the sharpness of the photographed object has changed, it can be determined that the photographed object is moving; if it has not changed, it can be determined that the subject has not moved.
  • If the photographed object has moved, its movement trend is determined from the current sharpness statistical value and at least one historical sharpness statistical value, and the photographed object is focused according to the movement trend and the sharpness statistical value of the target viewing angle area. The movement trend of the photographed object includes moving away from the photographing equipment and approaching the photographing equipment. After the subject moves, focusing based on the subject's movement trend and sharpness statistics achieves focus tracking of the moving object, improves the focus success rate for moving objects, and yields clear images of them, improving the user experience.
  • If the photographing device has moved, the movement displacement of the photographing device is determined; the target viewing angle area is determined according to the movement displacement and the viewing angle area to which the focus point belonged at the previous moment; and the photographed object is focused according to the sharpness statistical value and/or depth of field information of the target viewing angle area.
  • the movement displacement of the photographing device may be determined according to the inertial measurement unit and/or the image recognition device of the photographing device, and the movement displacement includes a moving direction and a moving distance.
  • The viewing angle area to which the focus point belonged at the previous moment is taken as the historical viewing angle area, and the viewing angle areas outside the historical viewing angle area are used as candidate viewing angle areas. The positional relationship and separation distance between each candidate viewing angle area and the historical viewing angle area are obtained, and the target viewing angle area is determined based on the moving direction and moving distance in the movement displacement together with those positional relationships and separation distances. Here, the separation distance is the distance between the center points of the two viewing angle areas, and the photographing device stores the positional relationship and separation distance between each candidate viewing angle area and the historical viewing angle area.
  • The sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment is obtained; the difference between the sharpness statistical value of the target viewing angle area and that value is calculated; and it is determined whether the absolute value of the difference is less than or equal to the second preset threshold. If it is, the photographed object is focused according to the sharpness statistical value and/or depth information of the target viewing angle area; if the absolute value of the difference is greater than the second preset threshold, the sharpness statistical value of each of the multiple viewing angle areas is re-acquired, and the object is refocused according to the re-acquired sharpness statistical values.
  • the foregoing second preset threshold may be set based on actual conditions, which is not specifically limited in this application.
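  • The threshold decision can be sketched as a small helper; the function name and return labels are illustrative, and the second preset threshold is whatever value the implementation chooses.

```python
def refocus_decision(target_stat, previous_stat, second_threshold):
    """Choose between direct focusing and a full re-sweep.

    A small change in the target area's sharpness statistic means the
    existing statistic/depth info can be reused for focusing; a large
    change means the sharpness statistics of all viewing angle areas
    must be re-acquired before refocusing.
    """
    if abs(target_stat - previous_stat) <= second_threshold:
        return 'focus_with_existing'
    return 'reacquire_and_refocus'
```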
  • That is, when the shooting device has moved, if the sharpness statistical value of the viewing angle area corresponding to the focus point changes little, the object can be refocused based on that area's sharpness statistical value; if the sharpness statistical value of the area corresponding to the focus point changes greatly, the sharpness statistical values need to be re-acquired and the subject refocused based on them, which can further improve the focus success rate.
  • the focus point of the camera before moving is located in the view area E.
  • the focus point also moves correspondingly.
  • the focus point of the camera after moving is located in the view area I.
  • Focusing on the photographed object in this way allows quick focusing after the position of the focus point changes, improves the focus success rate, and greatly improves the user experience.
  • The photographing equipment includes a mobile phone, a tablet computer, a notebook computer, a movable platform equipped with a photographing device, a handheld gimbal equipped with a photographing device, and the like.
  • Movable platforms include unmanned aerial vehicles and unmanned vehicles.
  • Unmanned aerial vehicles include rotary-wing types, such as quad-rotor, hexa-rotor, and octo-rotor aircraft, as well as fixed-wing aircraft; a combination of rotary-wing and fixed-wing types is also possible, which is not limited here.
  • The photographing equipment 300 includes a processor 301, a memory 302, and a photographing device 303.
  • the processor 301, the memory 302, and the photographing device 303 are connected by a bus 304, such as an I2C (Inter-integrated Circuit) bus.
  • The processor 301 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • The memory 302 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, or a removable hard disk.
  • The photographing device 303 may be a digital camera, a video camera, or a single-lens reflex camera.
  • the processor 301 is configured to run a computer program stored in the memory 302, and implement the following steps when the computer program is executed:
  • control the photographing device to perform photographing control operations.
  • When the processor determines the depth information of each of the viewing angle areas according to the sharpness statistical values, it is used to implement:
  • the depth information of each of the viewing angle regions is determined.
  • When the processor determines the depth information of each viewing angle area according to the depth type, it is used to implement:
  • if the depth type of the viewing angle area is the first preset type, the depth information of the viewing angle area is a null value; if the depth type of the viewing angle area is the second preset type, determine the sharpness peak or sharpness change trend according to the sharpness statistical values.
  • When the processor controls the photographing device to perform photographing control operations according to the depth information, it is used to implement:
  • the camera is controlled to perform a shooting control operation.
  • the plurality of viewing angle areas include overlapping viewing angle areas, adjacent viewing angle areas, and/or spaced viewing angle areas.
  • when the processor controls the photographing apparatus to perform shooting control operations according to the depth of field information, the processor is configured to: set a shooting control strategy of the shooting device according to the depth of field information, and control the photographing apparatus to perform the shooting control operation corresponding to the strategy.
  • the shooting control strategy includes at least one of the following: a focus control strategy, a foreground/background selection strategy, and a full-depth focus-sweep strategy; the focus control strategy is used to automatically focus on a photographed object, the foreground/background selection strategy is used to select foreground and background areas among the plurality of viewing angle areas, and the focus-sweep strategy is used to obtain the depth of field information of the objects in each viewing angle area.
  • when the processor controls the photographing apparatus to perform shooting control operations, the processor is configured to control the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the viewing angle areas.
  • when the processor controls the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the viewing angle areas, the processor is configured to take the viewing angle areas adjacent to the target viewing angle area as candidate viewing angle areas and to control the photographing apparatus to focus on the photographed object according to their sharpness statistical values.
  • when the processor controls the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to determine a target sharpness statistical value and to control the photographing apparatus to focus on the photographed object according to it.
  • the weight value of each candidate viewing angle area is the same or different.
  • when the processor determines the target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to: if every pairwise difference between the sharpness statistical values of the candidate viewing angle areas is less than or equal to a first preset threshold, take the sharpness statistical value of any one candidate viewing angle area as the target sharpness statistical value.
  • after determining whether every pairwise difference between the sharpness statistical values of the candidate viewing angle areas is less than or equal to the first preset threshold, the processor is further configured to: if at least one difference is greater than the first preset threshold, calculate an average sharpness statistical value from the sharpness statistical values of the candidate viewing angle areas and take the average as the target sharpness statistical value.
  • when the processor controls the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to control the photographing apparatus to focus on the photographed object.
  • the processor is further configured to: when a change in the position of the focus point of the shooting device is detected during focusing on the photographed object, determine whether the shooting device has moved;
  • if the shooting device has not moved, determine the viewing angle area to which the focus point currently belongs, and take that viewing angle area as the target viewing angle area;
  • control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  • when the processor determines the viewing angle area to which the focus point currently belongs, the processor is configured to determine it according to the touch position of the user's touch operation, or according to the viewing angle area to which the photographed object belongs.
  • when the processor determines the viewing angle area to which the focus point currently belongs according to the touch position, the processor is configured to take the viewing angle area to which the touch position belongs as the viewing angle area to which the focus point currently belongs.
  • before focusing according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the processor is further configured to determine whether the photographed object has moved according to the sharpness statistical value and at least one historical sharpness statistical value, and to control the photographing apparatus to focus on the photographed object accordingly.
  • after determining whether the shooting device has moved, the processor is further configured to determine the movement displacement of the shooting device and to control the photographing apparatus to focus on the photographed object.
  • the movement displacement of the shooting device is determined according to an inertial measurement unit and/or an image recognition apparatus of the shooting device.
  • before focusing according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the processor determines whether the absolute value of the difference is less than or equal to a second preset threshold; if the difference is greater, the processor reacquires the sharpness statistical values and controls the photographing apparatus to refocus on the photographed object.
  • the photographing apparatus includes at least one of the following: a digital camera, a video camera, and a single-lens reflex camera.
  • the embodiments of the present application also provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the shooting method provided in the foregoing embodiments.
  • the computer-readable storage medium may be the internal storage unit of the photographing device described in any of the foregoing embodiments, such as the hard disk or memory of the photographing device.
  • the computer-readable storage medium may also be an external storage device of the photographing device, such as a plug-in hard disk equipped on the photographing device, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card.

Abstract

A shooting method, a shooting device (300), and a computer-readable storage medium. The method includes: obtaining a sharpness statistical value of each of a plurality of viewing angle areas (S101); determining depth of field information of each viewing angle area according to the sharpness statistical values (S102); and performing a shooting control operation according to the depth of field information (S103). The method can improve the image shooting effect and the user experience.

Description

Shooting Method, Shooting Device, and Computer-Readable Storage Medium
Technical Field
This application relates to the technical field of shooting control, and in particular to a shooting method, a shooting device, and a computer-readable storage medium.
Background
At present, users can photograph people and scenery with shooting devices such as smartphones, SLR cameras, digital cameras, and tablet computers. Generally, shooting devices support an auto focus (AF) function, which helps users obtain clear photos of people and scenery. However, current shooting devices usually compute sharpness statistical values only for the focus area and do not record the distribution of sharpness statistical values over the whole field, so users lack a grasp of the shooting parameters of the global area when shooting, which affects the shooting effect and results in a poor user experience.
Summary
Based on this, the present application provides a shooting method, a shooting device, and a computer-readable storage medium, aiming to improve the image shooting effect and the user experience.
In a first aspect, the present application provides a shooting method applied to a shooting device, the method including:
obtaining a sharpness statistical value of each of a plurality of viewing angle areas;
determining depth of field information of each viewing angle area according to the sharpness statistical values; and
performing a shooting control operation according to the depth of field information.
In a second aspect, the present application further provides a shooting device, the shooting device including a photographing apparatus, a memory, and a processor;
the memory is configured to store a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
obtaining a sharpness statistical value of each of a plurality of viewing angle areas;
determining depth of field information of each viewing angle area according to the sharpness statistical values; and
controlling the photographing apparatus to perform a shooting control operation according to the depth of field information.
In a third aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the shooting method described above.
The embodiments of the present application provide a shooting method, a shooting device, and a computer-readable storage medium. By obtaining the sharpness statistical values of a plurality of viewing angle areas, the depth of field information of each viewing angle area can be derived from those values, and a shooting control operation is performed based on the depth of field information of each viewing angle area. This helps the user quickly grasp the shooting parameters, allows images with a better effect to be captured, and greatly improves the user experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of the steps of a shooting method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the blocks of the full viewing angle area of the shooting device in an embodiment of the present application;
FIG. 3 is a schematic diagram of a plurality of viewing angle areas of the shooting device in an embodiment of the present application;
FIG. 4 is a schematic flowchart of the sub-steps of the shooting method in FIG. 1;
FIG. 5 is a schematic flowchart of the steps of another shooting method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a position before the focus point changes in an embodiment of the present application;
FIG. 7 is a schematic diagram of a position after the focus point changes in an embodiment of the present application;
FIG. 8 is a schematic diagram of another position after the focus point changes in an embodiment of the present application;
FIG. 9 is a schematic diagram of a position of the focus point before the shooting device moves in an embodiment of the present application;
FIG. 10 is a schematic diagram of a position of the focus point after the shooting device moves in an embodiment of the present application;
FIG. 11 is a schematic structural block diagram of a shooting device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor be executed in the described order. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
Some embodiments of the present application are described in detail below with reference to the drawings. In the absence of conflict, the following embodiments and the features in the embodiments may be combined with one another.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the steps of a shooting method provided by an embodiment of the present application. The shooting method can be applied in a shooting device to perform shooting control operations. The shooting device includes a mobile phone, a tablet computer, a laptop computer, a movable platform equipped with a photographing apparatus, a handheld gimbal equipped with a photographing apparatus, and the like. Movable platforms include unmanned aerial vehicles and unmanned vehicles; unmanned aerial vehicles include rotary-wing unmanned aerial vehicles, such as quad-rotor, hexa-rotor, and octo-rotor unmanned aerial vehicles, as well as fixed-wing unmanned aerial vehicles and combinations of rotary-wing and fixed-wing unmanned aerial vehicles, which are not limited here.
Specifically, as shown in FIG. 1, the shooting method includes steps S101 to S103.
S101: Obtain a sharpness statistical value of each of a plurality of viewing angle areas.
The plurality of viewing angle areas may be some or all of the viewing angle area blocks of the full viewing angle area of the shooting device, with one viewing angle area block corresponding to one viewing angle area. Referring to FIG. 2, which is a schematic diagram of the blocks of the full viewing angle area of the shooting device in an embodiment of the present application, the full viewing angle area of the shooting device is divided into nine viewing angle area blocks.
The shooting device obtains the sharpness statistical value of each of the plurality of viewing angle areas, that is, determines the sharpness statistical value of each viewing angle area through an evaluation function. Evaluation functions for the sharpness statistical value include, but are not limited to, the Brenner gradient function, the Tenengrad gradient function, the Laplacian gradient function, the gray-level variance function, and the gray-level variance product function.
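As a minimal sketch of the step above, the Brenner gradient (one of the evaluation functions named in the text) can be computed per block of a 3x3 grid like the one in FIG. 2. The function names, the pure-Python list-of-lists image representation, and the 3x3 default are illustrative assumptions, not part of the original disclosure:

```python
def brenner_sharpness(img):
    """Brenner gradient: sum of squared differences between pixels two columns apart."""
    h, w = len(img), len(img[0])
    return sum((img[y][x + 2] - img[y][x]) ** 2
               for y in range(h) for x in range(w - 2))

def split_into_blocks(img, rows=3, cols=3):
    """Divide a full-view image into rows*cols viewing angle area blocks."""
    h, w = len(img), len(img[0])
    bh, bw = h // rows, w // cols
    return [[row[c * bw:(c + 1) * bw] for row in img[r * bh:(r + 1) * bh]]
            for r in range(rows) for c in range(cols)]

def per_block_sharpness(img, rows=3, cols=3):
    """One sharpness statistical value per viewing angle area block."""
    return [brenner_sharpness(b) for b in split_into_blocks(img, rows, cols)]
```

A flat (textureless) block scores 0, while a block containing an edge scores high, which is what makes the statistic usable for contrast-detection focusing.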
In one embodiment, the plurality of viewing angle areas include overlapping viewing angle areas, adjacent viewing angle areas, and/or spaced viewing angle areas. That is, the plurality of viewing angle areas may consist only of overlapping areas, only of adjacent areas, only of spaced areas, or of any combination of the three kinds. It can be understood that overlapping, adjacent, and spaced viewing angle areas can be combined arbitrarily to obtain the plurality of viewing angle areas, which is not specifically limited in this application.
Referring to FIG. 3, which is a schematic diagram of a plurality of viewing angle areas of the shooting device in an embodiment of the present application: viewing angle areas A, B, and C are spaced from one another; areas D, E, and F are spaced from one another; areas H, I, and G are spaced from one another; area A is adjacent to area D, area B is adjacent to area E, and area C is adjacent to area F; area D overlaps area H, area E overlaps area I, and area F overlaps area G.
S102: Determine depth of field information of each viewing angle area according to the sharpness statistical values.
After the sharpness statistical value of each viewing angle area is determined, the depth of field information of each viewing angle area is determined according to those values. The depth of field information includes the depth level of the scene in the viewing angle area; for example, in one embodiment the depth levels include foreground, middle ground, and background. In other embodiments the number of depth levels is not limited; multiple depth levels may be defined as actually needed, for example according to the current shooting scene and/or shooting strategy. It can be understood that the above embodiment is merely an exemplary illustration and is not limiting.
In one embodiment, as shown in FIG. 4, step S102 includes sub-steps S1021 and S1022.
S1021: Determine the depth of field type of each viewing angle area according to the sharpness statistical values.
Specifically, the sharpness statistical values of each viewing angle area are analyzed to determine how the sharpness of the objects in that area changes. The sharpness change cases include: no change, monotonic increase, monotonic decrease, and having a sharpness peak. If the sharpness does not change, the depth of field type of the corresponding viewing angle area is a first preset type; if the sharpness increases monotonically, decreases monotonically, or has a peak, the depth of field type of the corresponding viewing angle area is a second preset type.
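The four sharpness change cases above can be sketched as a small classifier over a focus-sweep curve. The function name, the tolerance `eps`, and the string labels are illustrative assumptions; "flat" corresponds to the first preset type and the other three labels to the second preset type:

```python
def classify_sharpness_curve(values, eps=1e-6):
    """Classify a focus-sweep sharpness curve: 'flat' (first preset type),
    or 'rising' / 'falling' / 'peaked' (second preset type)."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    if all(abs(d) <= eps for d in diffs):
        return "flat"
    if all(d >= -eps for d in diffs):
        return "rising"
    if all(d <= eps for d in diffs):
        return "falling"
    return "peaked"
```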
S1022: Determine the depth of field information of each viewing angle area according to the depth of field type.
Specifically, if the depth of field type of a viewing angle area is the first preset type, the depth of field information of that area is a null value. If the depth of field type is the second preset type, a sharpness peak or a sharpness change trend is determined according to the sharpness statistical values, and the depth of field information of the viewing angle area is determined according to that sharpness peak or sharpness change trend. The depth of field type thus allows the depth of field information of each viewing angle area to be determined quickly.
In one embodiment, the sharpness statistical values of a viewing angle area whose depth of field type is the second preset type are analyzed to obtain the sharpness change of the objects in that area. If the sharpness has a peak, the sharpness peak of the viewing angle area is extracted from the sharpness statistical values; if the sharpness increases or decreases monotonically, the sharpness change trend of the area is a monotonic increase or a monotonic decrease, respectively. There is at least one sharpness peak.
In one embodiment, the object distance or object distance range corresponding to the sharpness peak is obtained, and the depth of field information of the viewing angle area is determined based on it: the larger the object distance, the greater the depth of field; the smaller the object distance, the smaller the depth of field. In another embodiment, the object distance trend can be determined from the sharpness change trend, and the depth of field information of the viewing angle area is determined from the object distance trend. For example, when the zoom motor moves from infinity toward the nearest end, a monotonically rising sharpness trend corresponds to a monotonically falling object distance, and a monotonically falling sharpness trend corresponds to a monotonically rising object distance. Further, a monotonically rising object distance indicates a greater depth of field of the object, while a monotonically falling object distance indicates a smaller depth of field. Conversely, when the zoom motor moves from the nearest end toward infinity, a monotonically rising sharpness trend corresponds to a monotonically rising object distance, and a monotonically falling sharpness trend corresponds to a monotonically falling object distance, with the same implications for depth of field. It can be understood that this is merely an exemplary illustration and is not limiting.
S103: Perform a shooting control operation according to the depth of field information.
After the depth of field information of each viewing angle area is determined, a shooting control operation is performed based on it, which helps the user quickly grasp the shooting parameters, allows images with a better effect to be captured, and greatly improves the user experience.
In one embodiment, the contrast of the photographed object in each viewing angle area is obtained, and the depth of field information is filtered according to the contrast; the shooting control operation is then performed according to the filtered depth of field information. The filtering is specifically: comparing the contrast of each viewing angle area with a preset contrast and removing the depth of field information of the viewing angle areas whose contrast is lower than the preset contrast. It should be noted that the preset contrast can be set according to the actual situation, which is not specifically limited in this application. Filtering the depth of field information by the contrast of the viewing angle areas removes depth of field information that does not meet the contrast requirement, and performing the shooting control operation with the filtered information helps the user grasp the shooting parameters, further improving the captured images and the user experience.
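The contrast-based filtering step can be sketched as follows; the dict-based representation and the function name are assumptions for illustration only:

```python
def filter_depth_by_contrast(depth_info, contrasts, preset_contrast):
    """Keep depth-of-field entries only for viewing angle areas whose contrast
    meets the preset contrast; entries below it are removed, per the text."""
    return {area: depth for area, depth in depth_info.items()
            if contrasts[area] >= preset_contrast}
```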
In one embodiment, a shooting control strategy of the shooting device is set according to the depth of field information, and the shooting control operation corresponding to that strategy is executed. The shooting control strategy includes at least one of the following: a focus control strategy, a foreground/background selection strategy, and a full-depth focus-sweep strategy. The focus control strategy is used to automatically focus on a photographed object; the foreground/background selection strategy is used to select foreground and background areas among the plurality of viewing angle areas, and includes selecting a foreground area and/or a background area of the viewing angle areas for focusing; the focus-sweep strategy is used to obtain the depth of field information of the objects in each viewing angle area.
Whether the currently selected area is a foreground area or a background area can be determined according to the object distance and/or contrast of the objects in the viewing angle area. Specifically, if the object distance is less than or equal to a preset object distance, the foreground area is selected; if it is greater than the preset object distance, the background area is selected. Alternatively, if the contrast of the object is greater than or equal to a preset contrast, the foreground area is selected; if it is lower, the background area is selected.
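The foreground/background decision rule above can be sketched directly; the function name and the default threshold values are illustrative placeholders, not values from the disclosure:

```python
def select_scene_area(object_distance=None, contrast=None,
                      preset_distance=5.0, preset_contrast=0.5):
    """Pick 'foreground' or 'background' by object distance, falling back to
    contrast when no distance is available. Thresholds are placeholders."""
    if object_distance is not None:
        return "foreground" if object_distance <= preset_distance else "background"
    if contrast is not None:
        return "foreground" if contrast >= preset_contrast else "background"
    raise ValueError("need object_distance or contrast")
```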
In one embodiment, the photographed object is focused according to the sharpness statistical values of the viewing angle areas, which may be some or all of the areas in the full viewing angle area. For example, the object may be focused according to the sharpness statistical values of areas A, B, and C in FIG. 3, or according to those of areas A, B, C, D, E, F, H, I, and G. It can be understood that the number of viewing angle areas into which the full viewing angle area is divided can be set according to the actual situation, which is not specifically limited in this application. Focusing on the photographed object with the sharpness statistical values of multiple viewing angle areas improves the focusing success rate and ensures the imaging sharpness of the object.
In one embodiment, the viewing angle area to which the photographed object belongs is determined and taken as the target viewing angle area; the viewing angle areas adjacent to the target viewing angle area are obtained and taken as candidate viewing angle areas; and the photographed object is focused according to the sharpness statistical values of the candidate viewing angle areas. Focusing with the sharpness statistical values of the areas adjacent to the object's area further improves the focusing success rate and ensures imaging sharpness. It can be understood that the candidate viewing angle areas can also be set according to the actual situation and are not limited to the areas adjacent to the target viewing angle area.
Specifically, a target sharpness statistical value is determined according to the sharpness statistical values of the candidate viewing angle areas, and the photographed object is focused according to the target sharpness statistical value. The weight values of the candidate viewing angle areas may be the same or different and can be set according to the actual situation, which is not specifically limited in this application.
Exemplarily, the difference between the sharpness statistical values of every two candidate viewing angle areas is calculated, and it is determined whether all of these differences are less than or equal to a first preset threshold. If every difference is less than or equal to the first preset threshold, the sharpness statistical value of any one candidate viewing angle area is taken as the target sharpness statistical value; if at least one difference is greater than the first preset threshold, an average sharpness statistical value is calculated from the sharpness statistical values of the candidate viewing angle areas and taken as the target sharpness statistical value. It should be noted that the first preset threshold can be set according to the actual situation, which is not specifically limited in this application.
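The pairwise-difference rule for the target sharpness statistical value can be sketched as below; the function name is an assumption for illustration:

```python
from itertools import combinations

def target_sharpness(stats, first_threshold):
    """If all pairwise differences are within the first preset threshold, any
    area's value serves as the target; otherwise fall back to the mean."""
    if all(abs(a - b) <= first_threshold for a, b in combinations(stats, 2)):
        return stats[0]
    return sum(stats) / len(stats)
```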
In one embodiment, a confidence index of the sharpness statistical value of each candidate viewing angle area is determined according to the depth of field information of that area, and the photographed object is focused according to the sharpness statistical values of the candidate viewing angle areas whose confidence index is greater than or equal to a preset confidence index. The confidence index is determined as follows: an object distance, denoted the first object distance, is obtained from the depth of field information of the candidate viewing angle area; the object distance of the objects in the area, denoted the second object distance, is determined based on the area's sharpness statistical value; the difference between the first and second object distances is then calculated and the confidence index corresponding to that difference is obtained. It can be understood that the smaller the difference between the first and second object distances, the larger the confidence index, and the larger the difference, the smaller the confidence index. It should be noted that the preset confidence index can be set according to the actual situation, which is not specifically limited in this application. Screening the candidate viewing angle areas and focusing on the photographed object based on the sharpness statistical values of the screened areas further improves the focusing success rate.
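A sketch of the confidence screening follows. The text only requires that the confidence index decrease as the distance discrepancy grows; the reciprocal mapping, the `scale` parameter, and the function names here are assumptions chosen to satisfy that monotonicity:

```python
def confidence_index(first_distance, second_distance, scale=1.0):
    """Map the discrepancy between the two object distances to (0, 1];
    a smaller gap yields a higher confidence. The mapping is illustrative."""
    return 1.0 / (1.0 + abs(first_distance - second_distance) / scale)

def screen_candidates(areas, preset_confidence):
    """areas: list of (area_id, first_distance, second_distance) tuples.
    Keep only areas whose confidence meets the preset confidence index."""
    return [a for a, d1, d2 in areas
            if confidence_index(d1, d2) >= preset_confidence]
```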
The shooting method provided by the above embodiment obtains the sharpness statistical values of a plurality of viewing angle areas, derives the depth of field information of each viewing angle area from them, and performs a shooting control operation based on that depth of field information, which helps the user grasp the shooting parameters, allows better images to be captured, and greatly improves the user experience.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of the steps of another shooting method provided by an embodiment of the present application.
Specifically, as shown in FIG. 5, the shooting method includes steps S201 to S203.
S201: When a change in the position of the focus point of the shooting device is detected during focusing on the photographed object, determine whether the shooting device has moved.
Specifically, while focusing on the photographed object, the shooting device detects whether the position of its focus point changes; when a change is detected, it determines whether the shooting device has moved, which can be done through the inertial measurement unit of the shooting device.
Exemplarily, the position information collected by the inertial measurement unit at the current system time is obtained, together with historical position information collected a preset time before the current system time, the historical position information being position information collected by the inertial measurement unit before the current system time. The position information is compared with the historical position information: if they are the same, the shooting device is determined not to have moved; if they differ, the shooting device is determined to have moved. It can be understood that the preset time can be set according to the actual situation, which is not specifically limited in this application. Optionally, the preset time is 1 second.
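The IMU comparison can be sketched as below. The text compares positions for exact equality; the small tolerance `tol` is an added robustness assumption (real IMU samples are noisy), as is the function name:

```python
def has_moved(current_pos, historical_pos, tol=1e-3):
    """Compare IMU positions sampled a preset interval apart; a displacement
    beyond the tolerance on any axis means the shooting device moved."""
    return any(abs(c - h) > tol for c, h in zip(current_pos, historical_pos))
```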
S202: If the shooting device has not moved, determine the viewing angle area to which the focus point currently belongs and take it as the target viewing angle area.
If the shooting device has not moved, it can be known that the change in the focus point's position was caused by the user's touch operation on a viewing angle area or by the movement of the photographed object. The shooting device therefore determines the viewing angle area to which the focus point currently belongs and takes it as the target viewing angle area.
In one embodiment, the touch position of the user's touch operation on the plurality of viewing angle areas is determined, and the viewing angle area to which the focus point currently belongs is determined according to the touch position; alternatively, the viewing angle area to which the photographed object belongs is determined and taken as the area to which the focus point currently belongs.
Specifically, the position coordinates of the touch position and the position coordinate set of each viewing angle area are obtained; the viewing angle area to which the touch position belongs is determined from them, that is, the viewing angle area whose position coordinate set contains the touch coordinates is taken as the area to which the touch position belongs, and that area is taken as the area to which the focus point currently belongs. The position coordinates are the coordinates of the touch position within the viewing angle areas, and the shooting device stores the position coordinate set of each viewing angle area.
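The touch-to-area lookup can be sketched as a containment test. Representing each stored coordinate set as an axis-aligned rectangle, and the function name itself, are simplifying assumptions:

```python
def area_of_touch(touch, area_bounds):
    """Return the id of the viewing angle area whose bounds contain the touch.
    area_bounds: {area_id: (x0, y0, x1, y1)} rectangles standing in for the
    stored per-area position coordinate sets."""
    x, y = touch
    for area, (x0, y0, x1, y1) in area_bounds.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return area
    return None
```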
As shown in FIG. 6, the focus point is located in viewing angle area E. When the user touches area D, the position of the focus point changes; as shown in FIG. 7, the focus point is then located in area D. When the animal in area E moves, the position of the focus point also changes; as shown in FIG. 8, the focus point is then located in area F.
S203: Focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
After the target viewing angle area is determined, the shooting device can focus on the photographed object based on the sharpness statistical value and/or depth of field information of the target viewing angle area, so that the photographed object can be refocused quickly after the focus point's position changes, improving the focusing success rate.
In one embodiment, at least one historical sharpness statistical value of the target viewing angle area and the sharpness statistical value at the current moment are obtained, the historical values being sharpness statistical values recorded before the current moment. Whether the photographed object has moved is determined from the current and historical statistical values; if it has not moved, the photographed object is focused according to the sharpness statistical value and/or depth of field information of the target viewing angle area.
Specifically, whether the sharpness of the photographed object has changed can be determined from the current statistical value and at least one historical value: if they are the same, the object's sharpness has not changed; if they differ, it has changed. If the sharpness has changed, the photographed object can be determined to have moved; if it has not changed, the object can be determined not to have moved.
In one embodiment, if the photographed object has moved, its movement trend is determined from the current and historical sharpness statistical values, and the object is focused according to the movement trend and the sharpness statistical value of the target viewing angle area. The movement trend includes moving away from the shooting device and moving toward it. After the object moves, focusing based on its movement trend and the sharpness statistical values achieves focus tracking of the moving object, improves the focusing success rate for moving objects, allows clear images of them to be captured, and improves the user experience.
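A minimal sketch of the movement check from sharpness history follows. Mapping a sharpness change to "away"/"toward" additionally depends on the lens state, which the text does not fix, so this sketch only reports the direction of the sharpness change; the function name and labels are assumptions:

```python
def movement_trend(history, current):
    """Infer subject movement from sharpness history (oldest to newest):
    'still' if the latest value matches the last recorded one, otherwise
    report whether the subject became sharper or blurrier."""
    if not history:
        return "unknown"
    if current == history[-1]:
        return "still"
    return "sharper" if current > history[-1] else "blurrier"
```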
In one embodiment, if the shooting device has moved, its movement displacement is determined; the target viewing angle area is determined from the displacement and the viewing angle area to which the focus point belonged at the previous moment; and the photographed object is focused according to the sharpness statistical value and/or depth of field information of the target viewing angle area. The movement displacement of the shooting device can be determined from its inertial measurement unit and/or image recognition apparatus, and includes a movement direction and a movement distance. After the shooting device moves, focusing based on the sharpness statistical value and/or depth of field information of the area to which the focus point currently belongs improves the focusing success rate.
Specifically, the viewing angle area to which the focus point belonged at the previous moment is taken as the historical viewing angle area, and the viewing angle areas other than the historical viewing angle area are taken as candidate viewing angle areas. The positional relationship and the spacing distance between each candidate area and the historical area are obtained, and the target viewing angle area is determined from the movement direction and movement distance in the displacement together with those positional relationships and spacing distances. The spacing distance is the distance between the center points of two viewing angle areas, and the shooting device stores the positional relationship and spacing distance between each candidate area and the historical area.
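One way to sketch this area re-selection is a nearest-center match: shift the historical area's center by the displacement and pick the area whose center is closest. The sign convention (adding the displacement directly) and the function name are assumptions of this sketch, not details from the disclosure:

```python
def target_area_after_move(history_area, displacement, centers):
    """Pick the viewing angle area whose stored center point best matches the
    historical area's center shifted by the device displacement (dx, dy)."""
    hx, hy = centers[history_area]
    dx, dy = displacement
    px, py = hx + dx, hy + dy
    return min(centers,
               key=lambda a: (centers[a][0] - px) ** 2 + (centers[a][1] - py) ** 2)
```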
In one embodiment, the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment is obtained; the absolute value of the difference between the sharpness statistical value of the target viewing angle area and that of the previous area is calculated; and it is determined whether this absolute difference is less than or equal to a second preset threshold. If it is, the photographed object is focused according to the sharpness statistical value and/or depth of field information of the target viewing angle area; if it is greater than the second preset threshold, the sharpness statistical value of each of the plurality of viewing angle areas is reacquired and the photographed object is refocused according to the reacquired values. It should be noted that the second preset threshold can be set according to the actual situation, which is not specifically limited in this application.
After the shooting device moves, when the sharpness statistical value of the viewing angle area corresponding to the focus point changes only slightly, the photographed object can be refocused based on that area's sharpness statistical value; when it changes greatly, the sharpness statistical values need to be reacquired and the object refocused based on the reacquired values, which further improves the focusing success rate.
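The threshold decision above reduces to a small branch; the function name and the string return values are illustrative assumptions:

```python
def refocus_decision(target_stat, previous_stat, second_threshold):
    """Small sharpness change: reuse the target area's statistics to focus.
    Large change: rescan all viewing angle areas before refocusing."""
    if abs(target_stat - previous_stat) <= second_threshold:
        return "focus_with_target_area"
    return "reacquire_all_areas"
```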
As shown in FIG. 9, before the shooting device moves, the focus point is located in viewing angle area E; after the device moves, the focus point moves correspondingly and, as shown in FIG. 10, is located in area I.
In the shooting method provided by the above embodiment, when the position of the focus point of the shooting device changes but the shooting device has not moved, the photographed object is focused based on the sharpness statistical value and/or depth of field information of the viewing angle area to which the focus point currently belongs, so that the object can be refocused quickly after the focus point changes, improving the focusing success rate and greatly improving the user experience.
Referring to FIG. 11, FIG. 11 is a schematic structural block diagram of a shooting device provided by an embodiment of the present application. In one implementation, the shooting device includes a mobile phone, a tablet computer, a laptop computer, a movable platform equipped with a photographing apparatus, a handheld gimbal equipped with a photographing apparatus, and the like. Movable platforms include unmanned aerial vehicles and unmanned vehicles; unmanned aerial vehicles include rotary-wing unmanned aerial vehicles, such as quad-rotor, hexa-rotor, and octo-rotor unmanned aerial vehicles, as well as fixed-wing unmanned aerial vehicles and combinations of rotary-wing and fixed-wing unmanned aerial vehicles, which are not limited here.
Further, the shooting device 300 includes a processor 301, a memory 302, and a photographing apparatus 303, which are connected through a bus 304, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 301 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 302 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
Specifically, the photographing apparatus 303 may be a digital camera, a video camera, or a single-lens reflex camera.
The processor 301 is configured to run a computer program stored in the memory 302 and, when executing the computer program, implement the following steps:
obtaining a sharpness statistical value of each of a plurality of viewing angle areas;
determining depth of field information of each viewing angle area according to the sharpness statistical values; and
controlling the photographing apparatus to perform a shooting control operation according to the depth of field information.
Optionally, when determining the depth of field information of each viewing angle area according to the sharpness statistical values, the processor is configured to:
determine the depth of field type of each viewing angle area according to the sharpness statistical values; and
determine the depth of field information of each viewing angle area according to the depth of field type.
Optionally, when determining the depth of field information of each viewing angle area according to the depth of field type, the processor is configured to:
if the depth of field type of the viewing angle area is a first preset type, set the depth of field information of the viewing angle area to a null value;
if the depth of field type of the viewing angle area is a second preset type, determine a sharpness peak or a sharpness change trend according to the sharpness statistical values; and
determine the depth of field information of the viewing angle area according to the sharpness peak or the sharpness change trend.
Optionally, when controlling the photographing apparatus to perform a shooting control operation according to the depth of field information, the processor is configured to:
obtain the contrast of the photographed object in each viewing angle area, and filter the depth of field information according to the contrast; and
control the photographing apparatus to perform the shooting control operation according to the filtered depth of field information.
Optionally, the plurality of viewing angle areas include overlapping viewing angle areas, adjacent viewing angle areas, and/or spaced viewing angle areas.
Optionally, when controlling the photographing apparatus to perform a shooting control operation according to the depth of field information, the processor is configured to:
set a shooting control strategy of the shooting device according to the depth of field information; and
control the photographing apparatus to perform the shooting control operation corresponding to the shooting control strategy.
Optionally, the shooting control strategy includes at least one of the following: a focus control strategy, a foreground/background selection strategy, and a full-depth focus-sweep strategy; the focus control strategy is used to automatically focus on a photographed object, the foreground/background selection strategy is used to select foreground and background areas among the plurality of viewing angle areas, and the focus-sweep strategy is used to obtain the depth of field information of the objects in each viewing angle area.
Optionally, when controlling the photographing apparatus to perform a shooting control operation, the processor is configured to:
control the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the viewing angle areas.
Optionally, when controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the viewing angle areas, the processor is configured to:
determine the viewing angle area to which the photographed object belongs, and take it as the target viewing angle area;
obtain the viewing angle areas adjacent to the target viewing angle area, and take them as candidate viewing angle areas; and
control the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas.
Optionally, when controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to:
determine a target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas; and
control the photographing apparatus to focus on the photographed object according to the target sharpness statistical value.
Optionally, the weight values of the candidate viewing angle areas are the same or different.
Optionally, when determining the target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to:
calculate the difference between the sharpness statistical values of every two candidate viewing angle areas;
determine whether these differences are all less than or equal to a first preset threshold; and
if every difference is less than or equal to the first preset threshold, take the sharpness statistical value of any one candidate viewing angle area as the target sharpness statistical value.
Optionally, after determining whether the differences are all less than or equal to the first preset threshold, the processor is further configured to:
if at least one difference is greater than the first preset threshold, calculate an average sharpness statistical value from the sharpness statistical values of the candidate viewing angle areas, and take the average sharpness statistical value as the target sharpness statistical value.
Optionally, when controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to:
determine a confidence index of the sharpness statistical value of each candidate viewing angle area according to the depth of field information of each candidate viewing angle area; and
control the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas whose confidence index is greater than or equal to a preset confidence index.
Optionally, the processor is further configured to:
when a change in the position of the focus point of the shooting device is detected during focusing on the photographed object, determine whether the shooting device has moved;
if the shooting device has not moved, determine the viewing angle area to which the focus point currently belongs, and take it as the target viewing angle area; and
control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
Optionally, when determining the viewing angle area to which the focus point currently belongs, the processor is configured to:
determine the touch position of the user's touch operation on the plurality of viewing angle areas, and determine the viewing angle area to which the focus point currently belongs according to the touch position; or
determine the viewing angle area to which the photographed object belongs, and take it as the viewing angle area to which the focus point currently belongs.
Optionally, when determining the viewing angle area to which the focus point currently belongs according to the touch position, the processor is configured to:
obtain the position coordinates of the touch position and the position coordinate set of each viewing angle area;
determine the viewing angle area to which the touch position belongs according to the position coordinates and the position coordinate sets; and
take the viewing angle area to which the touch position belongs as the viewing angle area to which the focus point currently belongs.
Optionally, before controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the processor is further configured to:
obtain at least one historical sharpness statistical value of the target viewing angle area and the sharpness statistical value at the current moment, the historical sharpness statistical values being sharpness statistical values recorded before the current moment;
determine whether the photographed object has moved according to the sharpness statistical value and the at least one historical sharpness statistical value; and
if the photographed object has not moved, control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
Optionally, after determining whether the photographed object has moved according to the sharpness statistical value and the at least one historical sharpness statistical value, the processor is further configured to:
if the photographed object has moved, determine the movement trend of the photographed object according to the sharpness statistical value and the at least one historical sharpness statistical value; and
control the photographing apparatus to focus on the photographed object according to the movement trend and the sharpness statistical value of the target viewing angle area.
Optionally, after determining whether the shooting device has moved, the processor is further configured to:
if the shooting device has moved, determine the movement displacement of the shooting device;
determine the target viewing angle area according to the movement displacement and the viewing angle area to which the focus point belonged at the previous moment; and
control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
Optionally, the movement displacement of the shooting device is determined according to the inertial measurement unit and/or image recognition apparatus of the shooting device.
Optionally, before controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the processor is further configured to:
obtain the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment;
calculate the absolute value of the difference between the sharpness statistical value of the target viewing angle area and the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment;
determine whether this absolute difference is less than or equal to a second preset threshold; and
if the absolute difference is less than or equal to the second preset threshold, focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
Optionally, after determining whether the absolute difference is less than or equal to the second preset threshold, the processor is further configured to:
if the absolute difference is greater than the second preset threshold, reacquire the sharpness statistical value of each of the plurality of viewing angle areas; and
control the photographing apparatus to refocus on the photographed object according to the reacquired sharpness statistical values.
Optionally, the photographing apparatus includes at least one of the following: a digital camera, a video camera, and a single-lens reflex camera.
It should be noted that those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the shooting device described above may refer to the corresponding process in the foregoing shooting method embodiments, which is not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, the computer program including program instructions; a processor executes the program instructions to implement the steps of the shooting method provided by the foregoing embodiments.
The computer-readable storage medium may be an internal storage unit of the shooting device described in any of the foregoing embodiments, such as a hard disk or memory of the shooting device. The computer-readable storage medium may also be an external storage device of the shooting device, such as a plug-in hard disk equipped on the shooting device, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card.
It should be understood that the terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present application, and these modifications or replacements shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (48)

  1. A shooting method, applied to a shooting device, the method comprising:
    obtaining a sharpness statistical value of each of a plurality of viewing angle areas;
    determining depth of field information of each viewing angle area according to the sharpness statistical values; and
    performing a shooting control operation according to the depth of field information.
  2. The shooting method according to claim 1, wherein the determining depth of field information of each viewing angle area according to the sharpness statistical values comprises:
    determining a depth of field type of each viewing angle area according to the sharpness statistical values; and
    determining the depth of field information of each viewing angle area according to the depth of field type.
  3. The shooting method according to claim 2, wherein the determining the depth of field information of each viewing angle area according to the depth of field type comprises:
    if the depth of field type of the viewing angle area is a first preset type, setting the depth of field information of the viewing angle area to a null value;
    if the depth of field type of the viewing angle area is a second preset type, determining a sharpness peak or a sharpness change trend according to the sharpness statistical values; and
    determining the depth of field information of the viewing angle area according to the sharpness peak or the sharpness change trend.
  4. The shooting method according to claim 1, wherein the performing a shooting control operation according to the depth of field information comprises:
    obtaining a contrast of a photographed object in each viewing angle area, and filtering the depth of field information according to the contrast; and
    performing the shooting control operation according to the filtered depth of field information.
  5. The shooting method according to claim 1, wherein the plurality of viewing angle areas comprise overlapping viewing angle areas, adjacent viewing angle areas, and/or spaced viewing angle areas.
  6. The shooting method according to claim 1, wherein the performing a shooting control operation according to the depth of field information comprises:
    setting a shooting control strategy of the shooting device according to the depth of field information; and
    performing the shooting control operation corresponding to the shooting control strategy.
  7. The shooting method according to claim 6, wherein the shooting control strategy comprises at least one of the following: a focus control strategy, a foreground/background selection strategy, and a full-depth focus-sweep strategy; the focus control strategy is used to automatically focus on a photographed object, the foreground/background selection strategy is used to select foreground and background areas among the plurality of viewing angle areas, and the focus-sweep strategy is used to obtain depth of field information of the objects in each viewing angle area.
  8. The shooting method according to any one of claims 1 to 7, wherein the performing a shooting control operation comprises:
    focusing on a photographed object according to the sharpness statistical values of the viewing angle areas.
  9. The shooting method according to claim 8, wherein the focusing on a photographed object according to the sharpness statistical values of the viewing angle areas comprises:
    determining the viewing angle area to which the photographed object belongs, and taking it as a target viewing angle area;
    obtaining the viewing angle areas adjacent to the target viewing angle area, and taking them as candidate viewing angle areas; and
    focusing on the photographed object according to the sharpness statistical values of the candidate viewing angle areas.
  10. The shooting method according to claim 9, wherein the focusing on the photographed object according to the sharpness statistical values of the candidate viewing angle areas comprises:
    determining a target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas; and
    focusing on the photographed object according to the target sharpness statistical value.
  11. The shooting method according to claim 10, wherein the weight values of the candidate viewing angle areas are the same or different.
  12. The shooting method according to claim 10, wherein the determining a target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas comprises:
    calculating the difference between the sharpness statistical values of every two candidate viewing angle areas;
    determining whether the differences are all less than or equal to a first preset threshold; and
    if every difference is less than or equal to the first preset threshold, taking the sharpness statistical value of any one candidate viewing angle area as the target sharpness statistical value.
  13. The shooting method according to claim 12, wherein after the determining whether the differences are all less than or equal to the first preset threshold, the method further comprises:
    if at least one difference is greater than the first preset threshold, calculating an average sharpness statistical value from the sharpness statistical values of the candidate viewing angle areas, and taking the average sharpness statistical value as the target sharpness statistical value.
  14. The shooting method according to claim 9, wherein the focusing on the photographed object according to the sharpness statistical values of the candidate viewing angle areas comprises:
    determining a confidence index of the sharpness statistical value of each candidate viewing angle area according to the depth of field information of each candidate viewing angle area; and
    focusing on the photographed object according to the sharpness statistical values of the candidate viewing angle areas whose confidence index is greater than or equal to a preset confidence index.
  15. The shooting method according to claim 8, further comprising:
    when a change in the position of a focus point of the shooting device is detected during focusing on the photographed object, determining whether the shooting device has moved;
    if the shooting device has not moved, determining the viewing angle area to which the focus point currently belongs, and taking it as a target viewing angle area; and
    focusing on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  16. The shooting method according to claim 15, wherein the determining the viewing angle area to which the focus point currently belongs comprises:
    determining a touch position of a user's touch operation on the plurality of viewing angle areas, and determining the viewing angle area to which the focus point currently belongs according to the touch position; or
    determining the viewing angle area to which the photographed object belongs, and taking it as the viewing angle area to which the focus point currently belongs.
  17. The shooting method according to claim 16, wherein the determining the viewing angle area to which the focus point currently belongs according to the touch position comprises:
    obtaining position coordinates of the touch position and a position coordinate set of each viewing angle area;
    determining the viewing angle area to which the touch position belongs according to the position coordinates and the position coordinate sets; and
    taking the viewing angle area to which the touch position belongs as the viewing angle area to which the focus point currently belongs.
  18. The shooting method according to claim 15, wherein before the focusing on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the method further comprises:
    obtaining at least one historical sharpness statistical value of the target viewing angle area and the sharpness statistical value at the current moment, wherein the historical sharpness statistical values are sharpness statistical values recorded before the current moment;
    determining whether the photographed object has moved according to the sharpness statistical value and the at least one historical sharpness statistical value; and
    if the photographed object has not moved, focusing on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  19. The shooting method according to claim 18, wherein after the determining whether the photographed object has moved according to the sharpness statistical value and the at least one historical sharpness statistical value, the method further comprises:
    if the photographed object has moved, determining a movement trend of the photographed object according to the sharpness statistical value and the at least one historical sharpness statistical value; and
    focusing on the photographed object according to the movement trend and the sharpness statistical value of the target viewing angle area.
  20. The shooting method according to claim 15, wherein after the determining whether the shooting device has moved, the method further comprises:
    if the shooting device has moved, determining a movement displacement of the shooting device;
    determining the target viewing angle area according to the movement displacement and the viewing angle area to which the focus point belonged at the previous moment; and
    focusing on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  21. The shooting method according to claim 20, wherein the movement displacement of the shooting device is determined according to an inertial measurement unit and/or an image recognition apparatus of the shooting device.
  22. The shooting method according to claim 20, wherein before the focusing on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the method further comprises:
    obtaining the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment;
    calculating an absolute value of the difference between the sharpness statistical value of the target viewing angle area and the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment;
    determining whether the absolute value of the difference is less than or equal to a second preset threshold; and
    if the absolute value of the difference is less than or equal to the second preset threshold, focusing on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  23. The shooting method according to claim 22, wherein after the determining whether the absolute value of the difference is less than or equal to the second preset threshold, the method further comprises:
    if the absolute value of the difference is greater than the second preset threshold, reacquiring the sharpness statistical value of each of the plurality of viewing angle areas; and
    refocusing on the photographed object according to the reacquired sharpness statistical values.
  24. A shooting device, comprising a photographing apparatus, a memory, and a processor;
    the memory is configured to store a computer program;
    the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
    obtaining a sharpness statistical value of each of a plurality of viewing angle areas;
    determining depth of field information of each viewing angle area according to the sharpness statistical values; and
    controlling the photographing apparatus to perform a shooting control operation according to the depth of field information.
  25. The shooting device according to claim 24, wherein when determining the depth of field information of each viewing angle area according to the sharpness statistical values, the processor is configured to:
    determine a depth of field type of each viewing angle area according to the sharpness statistical values; and
    determine the depth of field information of each viewing angle area according to the depth of field type.
  26. The shooting device according to claim 25, wherein when determining the depth of field information of each viewing angle area according to the depth of field type, the processor is configured to:
    if the depth of field type of the viewing angle area is a first preset type, set the depth of field information of the viewing angle area to a null value;
    if the depth of field type of the viewing angle area is a second preset type, determine a sharpness peak or a sharpness change trend according to the sharpness statistical values; and
    determine the depth of field information of the viewing angle area according to the sharpness peak or the sharpness change trend.
  27. The shooting device according to claim 24, wherein when controlling the photographing apparatus to perform a shooting control operation according to the depth of field information, the processor is configured to:
    obtain a contrast of a photographed object in each viewing angle area, and filter the depth of field information according to the contrast; and
    control the photographing apparatus to perform the shooting control operation according to the filtered depth of field information.
  28. The shooting device according to claim 24, wherein the plurality of viewing angle areas comprise overlapping viewing angle areas, adjacent viewing angle areas, and/or spaced viewing angle areas.
  29. The shooting device according to claim 24, wherein when controlling the photographing apparatus to perform a shooting control operation according to the depth of field information, the processor is configured to:
    set a shooting control strategy of the shooting device according to the depth of field information; and
    control the photographing apparatus to perform the shooting control operation corresponding to the shooting control strategy.
  30. The shooting device according to claim 29, wherein the shooting control strategy comprises at least one of the following: a focus control strategy, a foreground/background selection strategy, and a full-depth focus-sweep strategy; the focus control strategy is used to automatically focus on a photographed object, the foreground/background selection strategy is used to select foreground and background areas among the plurality of viewing angle areas, and the focus-sweep strategy is used to obtain depth of field information of the objects in each viewing angle area.
  31. The shooting device according to any one of claims 24 to 30, wherein when controlling the photographing apparatus to perform a shooting control operation, the processor is configured to:
    control the photographing apparatus to focus on a photographed object according to the sharpness statistical values of the viewing angle areas.
  32. The shooting device according to claim 31, wherein when controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the viewing angle areas, the processor is configured to:
    determine the viewing angle area to which the photographed object belongs, and take it as a target viewing angle area;
    obtain the viewing angle areas adjacent to the target viewing angle area, and take them as candidate viewing angle areas; and
    control the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas.
  33. The shooting device according to claim 32, wherein when controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to:
    determine a target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas; and
    control the photographing apparatus to focus on the photographed object according to the target sharpness statistical value.
  34. The shooting device according to claim 33, wherein the weight values of the candidate viewing angle areas are the same or different.
  35. The shooting device according to claim 33, wherein when determining the target sharpness statistical value according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to:
    calculate the difference between the sharpness statistical values of every two candidate viewing angle areas;
    determine whether the differences are all less than or equal to a first preset threshold; and
    if every difference is less than or equal to the first preset threshold, take the sharpness statistical value of any one candidate viewing angle area as the target sharpness statistical value.
  36. The shooting device according to claim 35, wherein after determining whether the differences are all less than or equal to the first preset threshold, the processor is further configured to:
    if at least one difference is greater than the first preset threshold, calculate an average sharpness statistical value from the sharpness statistical values of the candidate viewing angle areas, and take the average sharpness statistical value as the target sharpness statistical value.
  37. The shooting device according to claim 32, wherein when controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas, the processor is configured to:
    determine a confidence index of the sharpness statistical value of each candidate viewing angle area according to the depth of field information of each candidate viewing angle area; and
    control the photographing apparatus to focus on the photographed object according to the sharpness statistical values of the candidate viewing angle areas whose confidence index is greater than or equal to a preset confidence index.
  38. The shooting device according to claim 31, wherein the processor is further configured to:
    when a change in the position of a focus point of the shooting device is detected during focusing on the photographed object, determine whether the shooting device has moved;
    if the shooting device has not moved, determine the viewing angle area to which the focus point currently belongs, and take it as a target viewing angle area; and
    control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  39. The shooting device according to claim 38, wherein when determining the viewing angle area to which the focus point currently belongs, the processor is configured to:
    determine a touch position of a user's touch operation on the plurality of viewing angle areas, and determine the viewing angle area to which the focus point currently belongs according to the touch position; or
    determine the viewing angle area to which the photographed object belongs, and take it as the viewing angle area to which the focus point currently belongs.
  40. The shooting device according to claim 39, wherein when determining the viewing angle area to which the focus point currently belongs according to the touch position, the processor is configured to:
    obtain position coordinates of the touch position and a position coordinate set of each viewing angle area;
    determine the viewing angle area to which the touch position belongs according to the position coordinates and the position coordinate sets; and
    take the viewing angle area to which the touch position belongs as the viewing angle area to which the focus point currently belongs.
  41. The shooting device according to claim 38, wherein before controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the processor is further configured to:
    obtain at least one historical sharpness statistical value of the target viewing angle area and the sharpness statistical value at the current moment, wherein the historical sharpness statistical values are sharpness statistical values recorded before the current moment;
    determine whether the photographed object has moved according to the sharpness statistical value and the at least one historical sharpness statistical value; and
    if the photographed object has not moved, control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  42. The shooting device according to claim 41, wherein after determining whether the photographed object has moved according to the sharpness statistical value and the at least one historical sharpness statistical value, the processor is further configured to:
    if the photographed object has moved, determine a movement trend of the photographed object according to the sharpness statistical value and the at least one historical sharpness statistical value; and
    control the photographing apparatus to focus on the photographed object according to the movement trend and the sharpness statistical value of the target viewing angle area.
  43. The shooting device according to claim 38, wherein after determining whether the shooting device has moved, the processor is further configured to:
    if the shooting device has moved, determine a movement displacement of the shooting device;
    determine the target viewing angle area according to the movement displacement and the viewing angle area to which the focus point belonged at the previous moment; and
    control the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  44. The shooting device according to claim 43, wherein the movement displacement of the shooting device is determined according to an inertial measurement unit and/or an image recognition apparatus of the shooting device.
  45. The shooting device according to claim 43, wherein before controlling the photographing apparatus to focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area, the processor is further configured to:
    obtain the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment;
    calculate an absolute value of the difference between the sharpness statistical value of the target viewing angle area and the sharpness statistical value of the viewing angle area to which the focus point belonged at the previous moment;
    determine whether the absolute value of the difference is less than or equal to a second preset threshold; and
    if the absolute value of the difference is less than or equal to the second preset threshold, focus on the photographed object according to the sharpness statistical value and/or the depth of field information of the target viewing angle area.
  46. The shooting device according to claim 45, wherein after determining whether the absolute value of the difference is less than or equal to the second preset threshold, the processor is further configured to:
    if the absolute value of the difference is greater than the second preset threshold, reacquire the sharpness statistical value of each of the plurality of viewing angle areas; and
    control the photographing apparatus to refocus on the photographed object according to the reacquired sharpness statistical values.
  47. The shooting device according to claim 24, wherein the photographing apparatus comprises at least one of the following: a digital camera, a video camera, and a single-lens reflex camera.
  48. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the shooting method according to any one of claims 1 to 23.
PCT/CN2019/124960 2019-12-12 2019-12-12 拍摄方法、拍摄设备及计算机可读存储介质 WO2021114194A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/124960 WO2021114194A1 (zh) 2019-12-12 2019-12-12 拍摄方法、拍摄设备及计算机可读存储介质
CN201980059476.3A CN112740649A (zh) 2019-12-12 2019-12-12 拍摄方法、拍摄设备及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/124960 WO2021114194A1 (zh) 2019-12-12 2019-12-12 拍摄方法、拍摄设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021114194A1 true WO2021114194A1 (zh) 2021-06-17

Family

ID=75589255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124960 WO2021114194A1 (zh) 2019-12-12 2019-12-12 拍摄方法、拍摄设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN112740649A (zh)
WO (1) WO2021114194A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117288168B (zh) * 2023-11-24 2024-01-30 山东中宇航空科技发展有限公司 一种低功耗的无人机城市建筑航拍系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090162044A1 (en) * 2007-12-19 2009-06-25 Altek Corporation Method of automatically adjusting the depth of field
CN103167226A (zh) * 2011-12-12 2013-06-19 华晶科技股份有限公司 产生全景深影像的方法及装置
US20140308988A1 (en) * 2006-10-02 2014-10-16 Sony Corporation Focused areas in an image
CN104184935A (zh) * 2013-05-27 2014-12-03 鸿富锦精密工业(深圳)有限公司 影像拍摄设备及方法
CN106797434A (zh) * 2014-11-04 2017-05-31 奥林巴斯株式会社 摄像装置、摄像方法、处理程序
US10200596B1 (en) * 2015-11-13 2019-02-05 Apple Inc. Dynamic optical shift/tilt lens

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973978B (zh) * 2014-04-17 2018-06-26 华为技术有限公司 一种实现重对焦的方法和电子设备
JP6670854B2 (ja) * 2016-01-15 2020-03-25 オリンパス株式会社 フォーカス制御装置、内視鏡装置及びフォーカス制御装置の作動方法
CN106204554A (zh) * 2016-07-01 2016-12-07 厦门美图之家科技有限公司 基于多聚焦图像的景深信息获取方法、系统及拍摄终端
CN109141823A (zh) * 2018-08-16 2019-01-04 南京理工大学 一种基于清晰度评价的显微镜系统景深测量装置和方法
CN110278383B (zh) * 2019-07-25 2021-06-15 浙江大华技术股份有限公司 聚焦方法、装置以及电子设备、存储介质
CN110455258B (zh) * 2019-09-01 2021-08-10 中国电子科技集团公司第二十研究所 一种基于单目视觉的无人机离地高度测量方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140308988A1 (en) * 2006-10-02 2014-10-16 Sony Corporation Focused areas in an image
US20090162044A1 (en) * 2007-12-19 2009-06-25 Altek Corporation Method of automatically adjusting the depth of field
CN103167226A (zh) * 2011-12-12 2013-06-19 华晶科技股份有限公司 产生全景深影像的方法及装置
CN104184935A (zh) * 2013-05-27 2014-12-03 鸿富锦精密工业(深圳)有限公司 影像拍摄设备及方法
CN106797434A (zh) * 2014-11-04 2017-05-31 奥林巴斯株式会社 摄像装置、摄像方法、处理程序
US10200596B1 (en) * 2015-11-13 2019-02-05 Apple Inc. Dynamic optical shift/tilt lens

Also Published As

Publication number Publication date
CN112740649A (zh) 2021-04-30

Similar Documents

Publication Publication Date Title
US9998650B2 (en) Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map
WO2019105214A1 (zh) 图像虚化方法、装置、移动终端和存储介质
CN108076278B (zh) 一种自动对焦方法、装置及电子设备
US9521311B2 (en) Quick automatic focusing method and image acquisition apparatus
WO2019114617A1 (zh) 快速抓拍的方法、装置及系统
JP6042434B2 (ja) 立体画像ペアを獲得するためのシステムおよび方法
WO2015180609A1 (zh) 一种实现自动拍摄的方法、装置及计算机存储介质
US20200267309A1 (en) Focusing method and device, and readable storage medium
CN104333748A (zh) 获取图像主体对象的方法、装置及终端
US10659676B2 (en) Method and apparatus for tracking a moving subject image based on reliability of the tracking state
TWI471677B (zh) 自動對焦方法及自動對焦裝置
WO2016065991A1 (en) Methods and apparatus for controlling light field capture
US9865064B2 (en) Image processing apparatus, image processing method, and storage medium
US20150350523A1 (en) Image processing device, image processing method, and program
CN104363378A (zh) 相机对焦方法、装置及终端
US9628717B2 (en) Apparatus, method, and storage medium for performing zoom control based on a size of an object
CN103297696A (zh) 拍摄方法、装置和终端
JP4780205B2 (ja) 撮像装置、画角調節方法、及び、プログラム
US9838594B2 (en) Irregular-region based automatic image correction
CN108710192B (zh) 基于统计数据的自动对焦系统和方法
JP6447840B2 (ja) 画像装置、及び画像装置における自動的な焦点合わせのための方法、並びに対応するコンピュータプログラム
CN104363377A (zh) 对焦框的显示方法、装置及终端
JP2015012482A (ja) 画像処理装置及び画像処理方法
JP5968379B2 (ja) 画像処理装置およびその制御方法
CN106922181B (zh) 方向感知自动聚焦

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19955545

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19955545

Country of ref document: EP

Kind code of ref document: A1