WO2021204020A1 - Device, imaging device, imaging system, mobile body, method, and program - Google Patents

Device, imaging device, imaging system, mobile body, method, and program

Info

Publication number
WO2021204020A1
WO2021204020A1 (PCT/CN2021/083913)
Authority
WO
WIPO (PCT)
Prior art keywords
quadrant
information
optical system
imaging
image
Prior art date
Application number
PCT/CN2021/083913
Other languages
English (en)
French (fr)
Inventor
高宫诚
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Publication of WO2021204020A1 publication Critical patent/WO2021204020A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/571Depth or shape recovery from multiple images from focus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/221Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment

Definitions

  • The present invention relates to a device, an imaging device, an imaging system, a mobile body, a method, and a program.
  • Patent Document 1 describes the principle of distance calculation based on the DFD method.
  • Patent Document 1: JP 2013-242617 A.
  • An apparatus includes a circuit configured to acquire distance information of an imaging subject based on a plurality of image data obtained by imaging at different focal positions of an optical system included in an imaging device, and on blur characteristic information of the optical system.
  • the circuit is configured to store blur characteristic information of the optical system corresponding to a specific quadrant on the imaging surface of the imaging device.
  • The circuit is configured so that, when acquiring distance information of the imaging subject corresponding to another quadrant, it performs quadrant conversion processing on one of (a) the blur characteristic information of the optical system in the specific quadrant and (b) the image information corresponding to the other quadrant in each of the plurality of image data, and acquires the distance information of the imaging subject corresponding to the other quadrant based on the information obtained by the conversion processing and on the other of (a) and (b).
  • The circuit may be configured so that, when acquiring the distance information of the imaging subject corresponding to another quadrant, it converts the image information corresponding to the other quadrant in each of the plurality of image data into the specific quadrant, and acquires the distance information of the imaging subject corresponding to the other quadrant based on the image information obtained by this conversion and on the blur characteristic information of the optical system in the specific quadrant.
  • The circuit may alternatively be configured so that, when acquiring the distance information of the imaging subject corresponding to another quadrant, it converts the blur characteristic information of the optical system in the specific quadrant into the other quadrant, and acquires the distance information of the imaging subject corresponding to the other quadrant based on the blur characteristic information obtained by this conversion and on the image information corresponding to the other quadrant.
  • the blur characteristic information of the optical system can be a point spread function.
  • The circuit may store point spread functions of the optical system for a plurality of points included in the specific quadrant.
  • The specific quadrant may be the first quadrant: of the four quadrants defined by mutually perpendicular first and second coordinate axes whose origin is the point corresponding to the optical axis of the optical system, the region in which the coordinate values on both the first coordinate axis and the second coordinate axis are positive.
  • The circuit may convert the image information corresponding to the second quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is positive, into the first quadrant by line-symmetric transformation about the second coordinate axis.
  • The circuit may convert the image information corresponding to the third quadrant, in which the coordinate values on both the first and second coordinate axes are negative, into the first quadrant by point-symmetric transformation about the origin.
  • The circuit may convert the image information corresponding to the fourth quadrant, in which the coordinate value on the first coordinate axis is positive and the coordinate value on the second coordinate axis is negative, into the first quadrant by line-symmetric transformation about the first coordinate axis.
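  • Expressed in coordinates (a compact restatement of the three conversions above, with the first coordinate axis written as x and the second as y), the quadrant conversions map a point (x, y) as:

    $$Q_2 \to Q_1: (x, y) \mapsto (-x, y); \qquad Q_3 \to Q_1: (x, y) \mapsto (-x, -y); \qquad Q_4 \to Q_1: (x, y) \mapsto (x, -y)$$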
  • the blur characteristic information of the optical system can be a point spread function.
  • The circuit may store point spread functions of the optical system for one or more points in the first quadrant, one or more points on the first coordinate axis, and one or more points on the second coordinate axis.
  • The circuit may store blur characteristic information of the optical system for each of a first area and a second area in the first quadrant.
  • The circuit may acquire, based on the image information of the first quadrant and on blur characteristic information generated by interpolating between the blur characteristic information of the optical system in the first area and that in the second area, the distance information of the imaging subject corresponding to the area between the first area and the second area in the first quadrant.
  • The circuit may acquire, based on image information obtained by line-symmetric transformation of the image information of the second quadrant about the second coordinate axis, and on blur characteristic information generated by the same interpolation, the distance information of the imaging subject corresponding to the area between the first area and the second area in the second quadrant.
  • The circuit may acquire, based on image information obtained by point-symmetric transformation of the image information of the third quadrant about the origin, and on blur characteristic information generated by interpolating between the blur characteristic information of the optical system in the first area and that in the second area, the distance information of the imaging subject corresponding to the area between the first area and the second area in the third quadrant.
  • The circuit may acquire, based on image information obtained by line-symmetric transformation of the image information of the fourth quadrant about the first coordinate axis, and on blur characteristic information generated by interpolating between the blur characteristic information of the optical system in the first area and that in the second area, the distance information of the imaging subject corresponding to the area between the first area and the second area in the fourth quadrant.
  • the circuit can be configured to adjust the focus of the optical system based on the distance information of the imaging object.
  • An imaging device includes the above-mentioned device and an image sensor having an imaging surface.
  • An imaging system includes the above-mentioned imaging device and a support mechanism that supports the imaging device and can control its posture.
  • the mobile body according to one aspect of the present invention may be a mobile body that mounts the above-mentioned imaging device and moves.
  • a method includes a stage of acquiring distance information of an imaging subject based on a plurality of image data obtained by performing imaging with different focal positions of an optical system included in an imaging device and blur characteristic information of the optical system.
  • the stage of acquiring the distance information of the imaging object includes a stage of storing the blur characteristic information of the optical system corresponding to a specific quadrant in the imaging surface of the imaging device.
  • The stage of acquiring the distance information includes, when acquiring the distance information of the imaging subject corresponding to another quadrant, a stage of performing quadrant conversion processing on one of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data.
  • The stage of acquiring the distance information further includes a stage of acquiring the distance information of the imaging subject corresponding to the other quadrant based on the information obtained by the conversion processing and on the other of those two kinds of information.
  • the program related to one aspect of the present invention may be a program that causes a computer to execute the above-mentioned method.
  • The storage capacity required for blur characteristic information can be reduced.
  • FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
  • FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
  • Fig. 3 is an example of a curve showing the relationship between the amount of blur (Cost) of the image and the position of the focus lens.
  • Fig. 4 is a flowchart showing an example of a distance calculation process in the BDAF method.
  • Fig. 5 shows the calculation process of the subject distance.
  • Fig. 6 schematically shows the position of an ROI (Region of Interest) set as an image region.
  • FIG. 7 shows a graph of the detection error of the defocus amount calculated using the test subject 700.
  • FIG. 8 shows a graph of the detection error of the defocus amount calculated using the test subject 800.
  • FIG. 9 shows a graph of the detection error of the defocus amount calculated using the test subject 900.
  • FIG. 10 shows the calculation results of the vignetting shape for each ROI position.
  • Fig. 11 shows both the observation result of vignetting and the calculation result of the vignetting shape.
  • FIG. 12 shows the detection error of the defocus amount calculated by applying the PSF data of each ROI to the DFD operation using the test subject 700.
  • FIG. 13 shows the detection error of the defocus amount calculated by applying the PSF data of each ROI to the DFD operation using the test subject 800.
  • FIG. 14 shows the detection error of the defocus amount calculated by applying the PSF data of each ROI to the DFD operation using the test subject 900.
  • FIG. 15 shows PSF data pre-stored in the imaging control unit 110.
  • FIG. 16 schematically shows the quadrant conversion.
  • FIG. 17 is a flowchart showing the processing procedure of the focus control executed by the imaging control unit 110.
  • Figure 18 shows an example of an unmanned aerial vehicle (UAV).
  • FIG. 19 shows an example of a computer 1200 that can embody aspects of the present invention in whole or in part.
  • The blocks may represent (1) stages of a process in which operations are performed or (2) "parts" of a device that perform operations. Specific stages and "parts" may be implemented by programmable circuits and/or processors.
  • Dedicated circuits may include digital and/or analog hardware circuits, and may include integrated circuits (ICs) and/or discrete circuits.
  • Programmable circuits may include reconfigurable hardware circuits.
  • Reconfigurable hardware circuits can include logic operations such as logical AND, logical OR, logical XOR, logical NAND, and logical NOR, as well as memory elements such as flip-flops, registers, field programmable gate arrays (FPGA), and programmable logic arrays (PLA).
  • the computer-readable medium may include any tangible device that can store instructions to be executed by a suitable device.
  • the computer-readable medium on which instructions are stored includes a product that includes instructions that can be executed to create means for performing operations specified by the flowchart or block diagram.
  • Computer-readable media may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like.
  • The computer-readable medium may include a floppy (registered trademark) disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a static random access memory (SRAM), a flash memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a Blu-ray (registered trademark) disc, a memory stick, an integrated circuit card, and the like.
  • the computer-readable instructions may include any one of source code or object code described in any combination of one or more programming languages.
  • The source code or object code may include assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, an object-oriented programming language such as Smalltalk (registered trademark), JAVA (registered trademark), or C++, or a conventional procedural programming language such as the "C" programming language or a similar programming language.
  • Computer-readable instructions may be provided to a processor or programmable circuit of a general-purpose computer, a special-purpose computer, or another programmable data processing device, locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet.
  • the processor or programmable circuit can execute computer-readable instructions to create means for performing the operations specified in the flowchart or block diagram.
  • Examples of processors include computer processors, processing units, microprocessors, digital signal processors, controllers, microcontrollers, and so on.
  • FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
  • FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
  • the imaging device 100 includes an imaging unit 102 and a lens unit 200.
  • the imaging unit 102 includes an image sensor 120, an imaging control unit 110, a memory 130, an instruction unit 162, and a display unit 160.
  • the image sensor 120 may be composed of CCD or CMOS.
  • the image sensor 120 receives light through the lens 210 included in the lens part 200.
  • the image sensor 120 outputs image data of the optical image formed by the lens 210 to the imaging control unit 110.
  • the imaging control unit 110 may be constituted by a microprocessor such as a CPU or an MPU, a microcontroller such as an MCU, or the like.
  • The memory 130 may be a computer-readable recording medium, and may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory.
  • the imaging control unit 110 corresponds to a circuit.
  • the memory 130 stores programs and the like necessary for the imaging control unit 110 to control the image sensor 120 and the like.
  • the memory 130 may be provided inside the housing of the imaging device 100.
  • The memory 130 may be configured to be detachable from the housing of the imaging device 100.
  • the instruction unit 162 is a user interface that accepts instructions to the imaging device 100 from the user.
  • the display unit 160 displays images captured by the image sensor 120 and processed by the imaging control unit 110, various setting information of the imaging device 100, and the like.
  • the display part 160 may be composed of a touch panel.
  • The imaging control unit 110 controls the lens unit 200 and the image sensor 120. For example, the imaging control unit 110 controls the focal position and the focal length of the lens 210.
  • the imaging control unit 110 outputs a control command to the lens control unit 220 included in the lens unit 200 based on information indicating the user's instruction, thereby controlling the lens unit 200.
  • The lens unit 200 includes one or more lenses 210, a lens driving unit 212, a lens control unit 220, and a memory 222.
  • one or more lenses 210 are collectively referred to as “lens 210”.
  • the lens 210 may include a focus lens and a zoom lens. At least a part or all of the lenses included in the lens 210 are arranged to be movable along the optical axis of the lens 210.
  • The lens unit 200 may be an interchangeable lens detachably attached to the imaging unit 102.
  • the lens driving unit 212 moves at least a part or all of the lens 210 along the optical axis of the lens 210.
  • The lens control unit 220 drives the lens driving unit 212 in accordance with lens control commands from the imaging unit 102 to move the entire lens 210, or the zoom lens or focus lens included in the lens 210, along the optical axis, thereby performing at least one of a zoom operation and a focus operation.
  • the lens control commands are, for example, zoom control commands and focus control commands.
  • The lens driving unit 212 may include a voice coil motor (VCM) that moves at least some or all of the lenses 210 in the optical axis direction.
  • The lens driving unit 212 may include a motor such as a DC motor, a coreless motor, or an ultrasonic motor.
  • the lens driving unit 212 can transmit the power from the motor to at least a part or all of the plurality of lenses 210 via mechanism components such as cam rings and guide shafts, so as to move at least a part or all of the lenses 210 along the optical axis.
  • the memory 222 stores control values for the focus lens and zoom lens that are moved by the lens drive unit 212.
  • The memory 222 may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory.
  • the imaging control unit 110 outputs a control command to the image sensor 120 based on the information indicating the user's instruction acquired by the instruction unit 162 and the like to perform control including the control of the imaging operation on the image sensor 120.
  • the imaging control unit 110 acquires an image captured by the image sensor 120.
  • the imaging control unit 110 performs image processing on the image acquired from the image sensor 120 and stores it in the memory 130.
  • The imaging control unit 110 acquires the distance information of the imaging subject based on a plurality of image data obtained by imaging at different focal positions of the lens 210 included in the imaging device 100, and on blur characteristic information of the lens 210. Specifically, the imaging control unit 110 stores blur characteristic information of the lens 210 corresponding to a specific quadrant on the imaging surface of the imaging device 100. When acquiring the distance information of the imaging subject corresponding to another quadrant, the imaging control unit 110 performs quadrant conversion processing on one of the blur characteristic information of the lens 210 in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data, and acquires the distance information of the imaging subject corresponding to the other quadrant based on the information obtained by the conversion processing and on the other of those two kinds of information.
  • The imaging control unit 110 performs focus adjustment of the lens 210 based on the distance information of the imaging subject.
  • For example, when acquiring the distance information of the imaging subject corresponding to another quadrant, the imaging control unit 110 converts the image information corresponding to the other quadrant in each of the plurality of image data into the specific quadrant, and acquires the distance information of the imaging subject corresponding to the other quadrant based on the image information obtained by this conversion and on the blur characteristic information of the lens 210 in the specific quadrant.
  • Alternatively, when acquiring the distance information of the imaging subject corresponding to another quadrant, the imaging control unit 110 converts the blur characteristic information of the lens 210 in the specific quadrant into the other quadrant, and acquires the distance information of the imaging subject corresponding to the other quadrant based on the blur characteristic information obtained by this conversion and on the image information corresponding to the other quadrant.
  • the blur characteristic information of the lens 210 is, for example, a point spread function.
  • The imaging control unit 110 has an internal non-volatile storage medium in which it stores point spread functions of the lens 210 for a plurality of points included in the specific quadrant.
  • The imaging control unit 110 may store point spread functions of the lens 210 for one or more points in the first quadrant, one or more points on the first coordinate axis, and one or more points on the second coordinate axis.
  • The specific quadrant may be the first quadrant: of the four quadrants defined by mutually perpendicular first and second coordinate axes whose origin is the point corresponding to the optical axis of the lens 210, the region in which the coordinate values on both coordinate axes are positive.
  • The imaging control unit 110 performs the conversion processing into the first quadrant by applying a line-symmetric transformation about the second coordinate axis to the image information corresponding to the second quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is positive.
  • The imaging control unit 110 performs the conversion processing into the first quadrant by applying a point-symmetric transformation about the origin to the image information corresponding to the third quadrant, in which the coordinate values on both the first and second coordinate axes are negative.
  • The imaging control unit 110 performs the conversion processing into the first quadrant by applying a line-symmetric transformation about the first coordinate axis to the image information corresponding to the fourth quadrant, in which the coordinate value on the first coordinate axis is positive and the coordinate value on the second coordinate axis is negative.
  • The imaging control unit 110 stores blur characteristic information of the lens 210 for each of a first area and a second area in the first quadrant.
  • Based on the image information of the first quadrant and on blur characteristic information generated by interpolating between the blur characteristic information of the lens 210 in the first area and that in the second area, the imaging control unit 110 acquires the distance information of the imaging subject corresponding to the area between the first area and the second area in the first quadrant.
  • Based on image information obtained by line-symmetric transformation of the image information of the second quadrant about the second coordinate axis, and on blur characteristic information generated by interpolating between the blur characteristic information of the lens 210 in the first area and that in the second area, the imaging control unit 110 acquires the distance information of the imaging subject corresponding to the area between the first area and the second area in the second quadrant.
  • Based on image information obtained by point-symmetric transformation of the image information of the third quadrant about the origin, and on blur characteristic information generated by interpolating between the blur characteristic information of the lens 210 in the first area and that in the second area, the imaging control unit 110 acquires the distance information of the imaging subject corresponding to the area between the first area and the second area in the third quadrant.
  • Based on image information obtained by line-symmetric transformation of the image information of the fourth quadrant about the first coordinate axis, and on blur characteristic information generated by interpolating between the blur characteristic information of the lens 210 in the first area and that in the second area, the imaging control unit 110 acquires the distance information of the imaging subject corresponding to the area between the first area and the second area in the fourth quadrant.
  • The imaging device 100 determines the distance from the lens 210 to the subject (the subject distance).
  • One method of determining the subject distance is to move the focus lens and make the determination based on the blur amounts of a plurality of images captured in states in which the positional relationship between the focus lens and the light-receiving surface of the image sensor 120 differs.
  • AF using this method is called the Bokeh Detection Auto Focus (BDAF) method, and the underlying distance calculation is known as Depth From Defocus (DFD).
  • The blur amount of the image can be expressed using a Gaussian function by the following formula (1), where x represents the pixel position in the horizontal direction and σ represents the standard deviation:

    $$C(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right) \qquad (1)$$

  • That is, the distribution of the blur amount can be calculated by applying formula (1) to an image acquired at a given focus lens position.
  • FIG. 3 shows an example of a curve representing the relationship between the Cost and the focus lens position, calculated using formula (1).
  • the focus lens is moved to two different positions to calculate the distribution of the amount of blur.
  • Figure 3 depicts a curve passing through these two points.
  • C1 is the Cost of the image obtained when the focus lens is at x1.
  • C2 is the Cost of the image obtained when the focus lens is located at x2. The subject can be brought into focus by moving the focus lens to the lens position x0 corresponding to the minimum point 502 of the curve 500, which is determined from C1 and C2 in consideration of the optical characteristics of the lens 210.
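  • As a minimal sketch of this step, assume a locally quadratic cost model C(x) = a·(x − x0)² + c whose curvature a is known from lens calibration; the quadratic model and the function below are illustrative assumptions, since the patent states only that the optical characteristics of the lens 210 are taken into account.

```python
# Minimal sketch: recover the cost-minimizing lens position x0 from two
# samples (x1, C1) and (x2, C2), assuming C(x) = a*(x - x0)**2 + c with the
# curvature `a` known from lens calibration (an assumption, not the patent's
# stated model). From C1 - C2 = a*(x1 - x2)*(x1 + x2 - 2*x0):
def focus_position(x1: float, c1: float, x2: float, c2: float, a: float) -> float:
    return (x1 + x2) / 2.0 - (c1 - c2) / (2.0 * a * (x1 - x2))

# Illustrative values only: cost samples at lens positions 10 and 20.
print(focus_position(10.0, 5.0, 20.0, 3.0, a=0.01))  # -> 25.0, the estimated x0
```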
  • Fig. 4 is a flowchart showing an example of a distance calculation process in the BDAF method.
  • the imaging control unit 110 takes a first image and stores it in the memory 130 in a state where the lens 210 and the imaging surface of the image sensor 120 are in a first positional relationship.
  • The imaging control unit 110 then moves the lens 210 in the optical axis direction to bring the lens 210 and the imaging surface into the second positional relationship, captures a second image with the imaging device 100, and stores it in the memory 130 (S201).
  • the imaging control unit 110 changes the positional relationship between the lens 210 and the imaging surface from the first positional relationship to the second positional relationship by moving the focus lens along the optical axis direction.
  • the amount of movement of the lens may be, for example, about 10 ⁇ m.
  • the imaging control unit 110 divides the first image into a plurality of regions (S202).
  • the imaging control unit 110 may calculate a feature amount for each pixel in the first image, and divide the first image into a plurality of regions by taking a group of pixels having similar feature amounts as one region.
  • the imaging control unit 110 may divide the pixel group set as the range of the AF processing frame in the first image into a plurality of regions.
  • the imaging control unit 110 divides the second image into a plurality of regions corresponding to the plurality of regions of the first image.
  • The imaging control unit 110 calculates the blur amount of each of the plurality of regions and, based on the blur amounts of the plurality of regions of the first image and the blur amounts of the corresponding regions of the second image, calculates the distance to the subject contained in each of the plurality of regions (S203).
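  • As a rough sketch of S202 and S203, the following divides a grayscale image into a grid of regions and computes a per-region sharpness value; variance of a 4-neighbour Laplacian is used here as a stand-in blur metric (an assumption; the patent's metric is the Gaussian-based formula (1)). Comparing the per-region values of the first and second images drives the distance estimate.

```python
import numpy as np

def region_blur_map(img: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Return a grid of sharpness values for a 2-D grayscale image."""
    h, w = img.shape
    gh, gw = h // grid[0], w // grid[1]
    out = np.zeros(grid)
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = img[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].astype(float)
            # 4-neighbour Laplacian; lower variance suggests stronger blur.
            lap = (np.roll(block, 1, 0) + np.roll(block, -1, 0)
                   + np.roll(block, 1, 1) + np.roll(block, -1, 1) - 4 * block)
            out[i, j] = lap.var()
    return out

# Usage: compare region_blur_map(first_image) with region_blur_map(second_image).
```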
  • the method of changing the positional relationship between the lens 210 and the imaging surface of the image sensor 120 is not limited to the method of moving the focus lens included in the lens 210.
  • the imaging control unit 110 may move the entire lens 210 in the optical axis direction.
  • the imaging control unit 110 can move the imaging surface of the image sensor 120 in the optical axis direction.
  • the imaging control unit 110 may move at least a part of the lens included in the lens 210 and the imaging surface of the image sensor 120 along the optical axis direction.
  • the imaging control unit 110 may adopt any method for optically changing the relative positional relationship between the focal point of the lens 210 and the imaging surface of the image sensor 120.
  • the calculation process of the subject distance will be further explained with reference to FIG. 5.
  • Let the distance from the principal point of the lens L to the subject 510 (object plane) be A, let the distance from the principal point of the lens L to the position at which the light from the subject 510 forms an image (image plane) be B, and let the focal length of the lens L be F.
  • According to the lens formula, the relationship between the distance A, the distance B, and the focal length F can be expressed by the following formula (2):

    $$\frac{1}{A} + \frac{1}{B} = \frac{1}{F} \qquad (2)$$

  • The focal length F is determined by the positions of the lenses included in the lens L. Therefore, if the distance B at which the light from the subject 510 forms an image can be determined, formula (2) can be used to determine the distance A from the principal point of the lens L to the subject 510.
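  • A minimal worked example of formula (2): once the image distance B is known, the subject distance A follows directly (the numeric values below are illustrative, not from the patent).

```python
def subject_distance(b: float, f: float) -> float:
    """Solve 1/A + 1/B = 1/F for A, with all distances in meters."""
    return 1.0 / (1.0 / f - 1.0 / b)

print(subject_distance(b=0.052, f=0.05))  # -> ~1.3 m for a 50 mm lens
```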
  • The image I1 at a position at distance D1 from the imaging surface and the image I2 at a position at distance D2 from the imaging surface are each blurred.
  • For the image I1, if the point spread function is PSF1 and the subject image is Id1, then I1 can be expressed by the following equation (3) as a convolution:

    $$I_1 = PSF_1 * I_{d1} \qquad (3)$$

  • The image I2 can likewise be represented as a convolution with PSF2.
  • Let the Fourier transform of the subject image be f, and let the optical transfer functions obtained by Fourier-transforming the point spread functions PSF1 and PSF2 be OTF1 and OTF2. Taking the ratio of the Fourier transforms of the two images gives the following equation (4):

    $$C = \frac{OTF_2 \cdot f}{OTF_1 \cdot f} = \frac{OTF_2}{OTF_1} \qquad (4)$$
  • The value C in equation (4) is the amount of change between the blur amount of the image at a position at distance D1 from the principal point of the lens L and that of the image at a position at distance D2; that is, C corresponds to the difference between the blur amounts of the two images.
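  • A minimal sketch of equation (4) in code: the ratio of the Fourier transforms of the two captured images cancels the unknown subject spectrum f and leaves OTF2/OTF1. The epsilon guard is an implementation detail assumed here, not specified by the patent.

```python
import numpy as np

def blur_change_ratio(img1: np.ndarray, img2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Elementwise C = F(I2) / F(I1) = OTF2 / OTF1 for same-size grayscale images."""
    f1 = np.fft.fft2(img1)
    f2 = np.fft.fft2(img2)
    return f2 / (f1 + eps)  # eps avoids division by zero at spectral nulls
```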
  • FIG. 5 illustrates the case where the positional relationship between the lens L and the imaging surface is changed by moving the imaging surface toward the lens L.
  • When this positional relationship changes, the blur amount also changes.
  • Images with different blur amounts are acquired, the DFD calculation is performed on the acquired images to obtain DFD calculation values indicating the defocus amount, and the lens movement amount is then calculated from the DFD calculation values.
  • Fig. 6 schematically shows the position of an ROI (Region of Interest) set as an image region.
  • the image area 600 shown in FIG. 6 shows the entire area of the image captured by the imaging device 100.
  • 15 regions of ROI00 to ROI14 are set in the image region 600.
  • FIGS. 7 to 9 show the detection errors of the defocus amount calculated by the DFD operation using test subjects.
  • FIG. 7 shows a graph of the detection error of the defocus amount calculated using the test subject 700.
  • the test subject 700 is a plurality of bar patterns extending in the vertical direction.
  • FIG. 8 shows a graph of the detection error of the defocus amount calculated using the test subject 800.
  • the test subject 800 is a plurality of bar patterns extending in the horizontal direction.
  • FIG. 9 shows a graph of the detection error of the defocus amount calculated using the test subject 900.
  • the test subject 900 is a mixed pattern containing horizontal and vertical components.
  • In each graph shown in FIGS. 7 to 9, the horizontal axis is the defocus amount and the vertical axis is the error of the defocus amount calculated by the DFD calculation; the unit on both axes is fδ.
  • FIGS. 7 to 9 each show 15 graphs. These 15 graphs correspond one-to-one to the 15 ROIs shown in FIG. 6 and are arranged on the page in the same layout as the corresponding ROIs.
  • In these figures, the same PSF is used for all ROIs to calculate the defocus amount; specifically, the circular PSF of ROI07, the ROI at the position corresponding to the optical axis, is used.
  • In the peripheral ROIs, the magnitude of the error varies depending on the pattern of the subject. This means that when the subject pattern changes, the DFD calculation results in the peripheral ROIs change. Therefore, even if the DFD calculation result is corrected using a correction value corresponding to each ROI, the detection accuracy may still degrade depending on the subject.
  • FIG. 10 shows the calculation results of the vignetting shape for each ROI position.
  • Two circles corresponding to each ROI are shown in FIG. 10. These circles indicate the ranges restricted by the apertures of the optical system and the like; the overlapping part of the two circles gives the PSF shape.
  • FIG. 11 shows both the observation result 1100 of the vignetting and the calculated vignetting shape for ROI05.
  • The observed vignetting 1100 agrees closely with the calculated result.
  • Therefore, the imaging control unit 110 applies a different PSF for each image area when performing the DFD calculation.
  • The PSF used in the DFD calculation must be stored as multiple PSF data sets corresponding to the defocus amount. For example, when using PSFs over a defocus range from -100fδ to +100fδ with a lens drive interval of 2fδ, 101 PSF data sets are required. In terms of storage capacity, it is therefore unrealistic to hold PSF data for the entire image area. Instead, for example, the following method is adopted: PSF data for 15 ROIs covering the image area are stored, and for coordinates between ROIs the PSF data of adjacent ROIs are interpolated before the DFD operation is performed.
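  • A minimal sketch of this interpolation step: the PSF for a coordinate lying between two stored ROI positions is generated by weighting the two neighbouring PSFs (linear weighting is an assumption; the patent does not specify the interpolation kernel).

```python
import numpy as np

def interpolate_psf(psf_a: np.ndarray, psf_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two stored PSFs; t=0 returns psf_a, t=1 returns psf_b."""
    psf = (1.0 - t) * psf_a + t * psf_b
    return psf / psf.sum()  # re-normalize so the PSF still integrates to 1
```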
  • The shape of the PSF is roughly symmetric about the optical axis.
  • An image sensor usually has pixels arranged at a fixed pitch in the horizontal and vertical directions. Therefore, even if the distance (image height) of a PSF from the optical axis center is the same, for an ROI of, for example, 512×512 pixels the PSF shape differs between the diagonal, horizontal, and vertical image heights.
  • FIG. 15 schematically shows the range in which the imaging control unit 110 stores PSF data.
  • the imaging control unit 110 stores PSF data in consideration of aperture erosion in a nonvolatile storage medium.
  • the imaging control unit 110 stores 15 PSF data corresponding to the first quadrant in a nonvolatile storage medium.
  • the imaging control unit 110 does not store PSF data corresponding to the second quadrant, the third quadrant, and the fourth quadrant in a nonvolatile storage medium.
  • The imaging control unit 110 stores, in the non-volatile storage medium, the PSF data in the first quadrant, the PSF data on the X axis bordering the first quadrant, and the PSF data on the Y axis bordering the first quadrant.
  • For an imaging object in the first quadrant, the DFD calculation is performed using the PSF data corresponding to the first quadrant stored in the non-volatile storage medium and the image information corresponding to the first quadrant.
  • FIG. 16 schematically shows the quadrant conversion.
  • When performing the DFD calculation for an imaging object in the second quadrant, the imaging control unit 110 uses the image information 1622 obtained by quadrant conversion of the image information 1620 of the second quadrant into the first quadrant, together with the PSF data corresponding to the first quadrant. Specifically, the imaging control unit 110 performs the conversion into the first quadrant by applying a line-symmetric transformation about the Y axis to the image information 1620; that is, by flipping the image information 1620 of the second quadrant horizontally about the Y axis.
  • When performing the DFD calculation for an imaging object in the third quadrant, the imaging control unit 110 uses the image information 1632 obtained by quadrant conversion of the image information 1630 of the third quadrant into the first quadrant, together with the PSF data corresponding to the first quadrant. Specifically, the imaging control unit 110 performs the conversion into the first quadrant by applying a point-symmetric transformation about the origin to the image information 1630; that is, by rotating the image information 1630 of the third quadrant by 180 degrees around the origin.
  • When performing the DFD calculation for an imaging object in the fourth quadrant, the imaging control unit 110 uses the image information 1642 obtained by quadrant conversion of the image information 1640 of the fourth quadrant into the first quadrant, together with the PSF data corresponding to the first quadrant. Specifically, the imaging control unit 110 performs the conversion into the first quadrant by applying a line-symmetric transformation about the X axis to the image information 1640; that is, by flipping the image information 1640 of the fourth quadrant vertically about the X axis.
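  • A minimal sketch of the three conversions of FIG. 16, applied to an image block whose local origin is the optical-axis point (the array-axis conventions are assumptions):

```python
import numpy as np

def to_first_quadrant(block: np.ndarray, quadrant: int) -> np.ndarray:
    """Map image information from the given quadrant into the first quadrant."""
    if quadrant == 2:   # line symmetry about the Y axis: horizontal flip
        return np.flip(block, axis=1)
    if quadrant == 3:   # point symmetry about the origin: 180-degree rotation
        return np.flip(np.flip(block, axis=0), axis=1)
    if quadrant == 4:   # line symmetry about the X axis: vertical flip
        return np.flip(block, axis=0)
    return block        # quadrant 1 needs no conversion
```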
  • The imaging control unit 110 performs the quadrant conversion of the image information of the second, third, and fourth quadrants so as to preserve the vertical image height and the horizontal image height. Therefore, the imaging control unit 110 does not need to hold PSF data for the second, third, and fourth quadrants.
  • the imaging control unit 110 stores 15 PSF data as PSF data corresponding to the first quadrant.
  • By performing the interpolation operation, the PSF of a surrounding ROI can be obtained more accurately, so the detection accuracy of the DFD operation can be improved. If the PSF data of all quadrants were stored in order to obtain the same detection accuracy, 45 PSF data sets would need to be stored. According to this embodiment, the storage capacity for PSF data can therefore be reduced to one third.
  • FIG. 17 is a flowchart showing the processing procedure of the focus control executed by the imaging control unit 110.
  • the imaging control unit 110 executes in parallel a focus control process 1700 implemented by an algorithm for performing focus control, and a DFD process 1750 implemented by an algorithm for performing DFD calculation.
  • the area that is the target of focus adjustment may be an area designated by the user.
  • the area targeted for focus adjustment may be an area where the imaging control unit 110 detects the main subject through image analysis such as face detection.
  • the area targeted for focus adjustment may be a plurality of areas where the imaging control unit 110 detects the main subject through image analysis or the like.
  • the method of specifying the target area of focus adjustment is not particularly limited.
  • In the focus control process 1700, the imaging control unit 110 causes the image sensor 120 to capture an image at the current lens position and acquires image data of a first image (S1702).
  • In the DFD process 1750, the imaging control unit 110 performs quadrant conversion processing on the image information of the second, third, and fourth quadrants as necessary (S1752). For example, when the area targeted for focus adjustment is contained in any of the second, third, or fourth quadrants, the imaging control unit 110 performs quadrant conversion on the image information of the quadrant containing that area.
  • The imaging control unit 110 uses the PSF data of the first quadrant to perform a convolution operation on the image information of the first image after the quadrant conversion processing of S1752 (S1754).
  • the imaging control unit 110 performs a convolution operation for each defocus amount, and stores the convolution operation result for each defocus amount in the memory.
  • the imaging control unit 110 may use adjacent PSF data to generate PSF data corresponding to the area targeted for focus adjustment by performing interpolation processing based on the position of the area targeted for focus adjustment, and perform convolution operation.
  • The convolution operation is best implemented in hardware, but it can also be implemented in software.
  • the imaging control unit 110 moves the position of the focus lens of the lens 210 by a preset movement amount (S1704), causes the image sensor 120 to take an image, and acquires image data of the second image (S1706).
  • Regarding the image data of the second image, in the DFD processing 1750 the imaging control unit 110 performs quadrant conversion processing on the image information of the second, third, and fourth quadrants as necessary (S1752), and uses the PSF data of the first quadrant to perform a convolution operation on the image information of the second image after the quadrant conversion processing of S1752 (S1754).
  • The imaging control unit 110 uses the convolution operation results of the first image and the second image to evaluate the blur amount with the blur amount function, calculates the defocus amount that minimizes the blur amount function, and calculates the lens movement amount based on the defocus amount of the first image and the second image.
  • the imaging control unit 110 moves the position of the focus lens of the lens 210 by a preset movement amount (S1708), causes the image sensor 120 to take an image, and acquires image data of the third image (S1710).
  • the imaging control unit 110 performs quadrant conversion processing on the image information of the second, third, and fourth quadrants as necessary (S1762).
  • the imaging control unit 110 uses the PSF data of the first quadrant to perform a convolution operation on the image information obtained by subjecting the third image to the quadrant conversion processing of S1762 (S1764).
  • The imaging control unit 110 uses the convolution operation results of the second image and the third image to evaluate the blur amount with the blur amount function, thereby calculating the defocus amount that minimizes the blur amount function (S1766). Based on the defocus amount calculated in S1766, it can be determined whether the imaging subject is in focus.
  • The imaging control unit 110 determines whether the imaging subject is in focus based on the calculation result of S1766 (S1768). When it determines that the subject is in focus, the focus control ends.
  • When it determines that the subject is not in focus, the lens shift amount is calculated from the defocus amount (S1770), and the process moves to S1708 of the focus control process 1700.
  • the DFD process 1750 and the movement of the focus lens are repeated until the focus is reached.
  • The processing of S1766 and S1768 uses the convolution operation results of the two most recent images.
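  • The overall loop of FIG. 17 can be summarized as the following sketch; all callables and the proportional lens-shift rule are illustrative assumptions, since the patent defines the processing steps but no programming interface.

```python
def focus_loop(capture, move, estimate_defocus, step: float, tolerance: float = 0.5):
    """capture() returns an image; move(amount) drives the focus lens;
    estimate_defocus(a, b) wraps the quadrant conversion and DFD convolution
    steps (S1752-S1766) for the two most recent images."""
    previous = capture()                          # S1702: first image
    while True:
        move(step)                                # S1704 / S1708: shift focus lens
        current = capture()                       # S1706 / S1710: next image
        defocus = estimate_defocus(previous, current)
        if abs(defocus) < tolerance:              # S1768: focused?
            return
        step = -defocus                           # S1770: lens shift from defocus
        previous = current                        # keep only the latest two images
```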
  • A reliability evaluation value of the DFD operation may be calculated. When the reliability evaluation value is smaller than a preset value, the DFD-based focus control may be stopped and another focus control method, such as contrast AF, may be used instead.
  • the imaging control section 110 may calculate the reliability evaluation value of the DFD operation based on the blur amount of the subject in the image.
  • the imaging control unit 110 may calculate the reliability evaluation value of the DFD calculation based on the blur amount of the image represented by the equation (1), for example.
  • the imaging control unit 110 may increase the reliability evaluation value as the amount of blur is smaller.
  • When the reliability evaluation value is small, the accuracy of the DFD calculation can sometimes still be high if DFD calculations are performed over a plurality of images without changing the position of the focus lens. Therefore, when the reliability evaluation value is smaller than the preset value, the DFD-based focus control may be continued by performing DFD calculations over multiple images without changing the position of the focus lens.
  • In the embodiment described above, the image information of the quadrants other than the first quadrant is converted into the first quadrant, and the DFD calculation is performed using the quadrant-converted image information and the PSF data of the first quadrant.
  • Alternatively, the PSF data of the first quadrant may be converted into the second quadrant, and the DFD operation performed using the quadrant-converted PSF data and the image information of the second quadrant.
  • Likewise, the PSF data of the first quadrant may be converted into the third quadrant, with the DFD operation using the quadrant-converted PSF data and the image information of the third quadrant; or into the fourth quadrant, with the DFD operation using the quadrant-converted PSF data and the image information of the fourth quadrant.
  • With the imaging device 100, focus control can be performed on the entire image area using PSF data corresponding to a specific quadrant. Therefore, the storage capacity for PSF data can be reduced while the accuracy of the DFD operation is maintained.
  • the aforementioned imaging device 100 may be mounted on a mobile body.
  • the imaging device 100 can be mounted on an unmanned aerial vehicle (UAV) as shown in Fig. 18.
  • The UAV 10 may include a UAV main body 20, a gimbal 50, a plurality of imaging devices 60, and the imaging device 100.
  • The gimbal 50 and the imaging device 100 are an example of an imaging system.
  • The UAV 10 is an example of a mobile body propelled by a propulsion unit.
  • The concept of a mobile body includes, in addition to UAVs, flying objects such as airplanes moving in the air, vehicles moving on the ground, ships moving on water, and the like.
  • the UAV main body 20 includes a plurality of rotors. Multiple rotors are an example of a propulsion section.
  • the UAV main body 20 makes the UAV 10 fly by controlling the rotation of a plurality of rotors.
  • the UAV main body 20 uses, for example, four rotors to make the UAV 10 fly.
  • the number of rotors is not limited to four.
  • UAV10 can also be a fixed-wing aircraft without rotors.
  • the imaging device 100 is an imaging camera that captures a subject included in a desired imaging range.
  • The gimbal 50 rotatably supports the imaging device 100.
  • The gimbal 50 is an example of a support mechanism.
  • The gimbal 50 uses an actuator to rotatably support the imaging device 100 about the pitch axis.
  • The gimbal 50 uses actuators to further rotatably support the imaging device 100 about the roll axis and the yaw axis, respectively.
  • the gimbal 50 can change the posture of the camera device 100 by rotating the camera device 100 around at least one of the yaw axis, the pitch axis, and the roll axis.
  • the plurality of imaging devices 60 are sensing cameras that photograph the surroundings of the UAV 10 in order to control the flight of the UAV 10.
  • the two camera devices 60 can be installed on the nose of the UAV 10, that is, on the front side.
  • the other two camera devices 60 may be installed on the bottom surface of the UAV 10.
  • the two imaging devices 60 on the front side may be paired to function as a so-called stereo camera.
  • the two imaging devices 60 on the bottom side may also be paired to function as a stereo camera.
  • the three-dimensional spatial data around the UAV 10 can be generated based on the images captured by the plurality of camera devices 60.
  • the number of imaging devices 60 included in the UAV 10 is not limited to four.
  • the UAV 10 may include at least one camera device 60.
  • the UAV 10 may also include at least one camera 60 on the nose, tail, side, bottom and top surfaces of the UAV 10, respectively.
  • the viewing angle that can be set in the imaging device 60 may be larger than the viewing angle that can be set in the imaging device 100.
  • the imaging device 60 may have a single focus lens or a fisheye lens.
  • the remote operation device 300 communicates with the UAV 10 to remotely operate the UAV 10.
  • the remote operation device 300 can wirelessly communicate with the UAV 10.
  • the remote operation device 300 transmits instruction information indicating various instructions related to the movement of the UAV 10 such as ascending, descending, accelerating, decelerating, forwarding, retreating, and rotating to the UAV 10.
  • the instruction information includes, for example, instruction information for raising the height of the UAV 10.
  • The instruction information may indicate the height at which the UAV 10 should be located.
  • the UAV 10 moves to be positioned at the height indicated by the instruction information received from the remote operation device 300.
  • The instruction information may include an ascent instruction to raise the UAV 10. The UAV 10 ascends while receiving the ascent instruction. When the height of the UAV 10 has reached its upper limit, the ascent of the UAV 10 may be restricted even if the ascent instruction is accepted.
  • FIG. 19 shows an example of a computer 1200 that can embody aspects of the present invention in whole or in part.
  • The program installed on the computer 1200 can cause the computer 1200 to function as one or more "parts" of the device according to the embodiments of the present invention, or to execute operations associated with that device or those "parts".
  • a program installed on the computer 1200 can make the computer 1200 function as the imaging control unit 110.
  • the program can enable the computer 1200 to perform related operations or related functions of one or more "parts".
  • This program enables the computer 1200 to execute the process or stages of the process involved in the embodiment of the present invention.
  • Such a program may be executed by the CPU 1212, so that the computer 1200 executes specified operations associated with some or all of the blocks in the flowcharts and block diagrams described in this specification.
  • the computer 1200 of this embodiment includes a CPU 1212 and a RAM 1214, which are connected to each other through a host controller 1210.
  • The computer 1200 further includes a communication interface 1222 and an input/output unit, which are connected to the host controller 1210 through the input/output controller 1220.
  • the computer 1200 also includes a ROM 1230.
  • the CPU 1212 operates in accordance with programs stored in the ROM 1230 and RAM 1214 to control each unit.
  • the communication interface 1222 communicates with other electronic devices through the network.
  • the hard disk drive can store programs and data used by the CPU 1212 in the computer 1200.
  • the ROM 1230 stores therein a boot program executed by the computer 1200 during operation, and/or a program dependent on the hardware of the computer 1200.
  • the program is provided through a computer-readable recording medium such as a CD-ROM, USB memory, or IC card, or through a network.
  • the program is installed in RAM 1214 or ROM 1230 which is also an example of a computer-readable recording medium, and is executed by CPU 1212.
  • the information processing described in these programs is read by the computer 1200 and causes cooperation between the programs and the various types of hardware resources described above.
  • the apparatus or method can be constituted by realizing the operation or processing of information according to the use of the computer 1200.
  • the CPU 1212 can execute a communication program loaded in the RAM 1214, and based on the processing described in the communication program, instruct the communication interface 1222 to perform communication processing.
  • under the control of the CPU 1212, the communication interface 1222 reads the transmission data stored in a transmission buffer provided in a recording medium such as the RAM 1214 or a USB memory and sends the read transmission data to the network, or writes the reception data received from the network into a receiving buffer provided in the recording medium.
  • the CPU 1212 can make the RAM 1214 read all or necessary parts of files or databases stored in an external recording medium such as a USB memory, and perform various types of processing on the data on the RAM 1214. Then, the CPU 1212 can write the processed data back to the external recording medium.
  • the CPU 1212 can perform, on data read from the RAM 1214, the various types of processing described in various places in this disclosure and specified by the instruction sequences of the programs, including various types of operations, information processing, condition determination, conditional branching, unconditional branching, and information retrieval/replacement, and write the results back to the RAM 1214.
  • the CPU 1212 can search for information in files, databases, and the like in the recording medium. For example, when multiple entries each having an attribute value of a first attribute associated with an attribute value of a second attribute are stored in the recording medium, the CPU 1212 may retrieve from the multiple entries an entry matching the condition that specifies the attribute value of the first attribute, read the attribute value of the second attribute stored in that entry, and thereby obtain the attribute value of the second attribute associated with the first attribute that meets the predetermined condition.
  • the programs or software modules described above may be stored on the computer 1200 or on a computer-readable storage medium near the computer 1200.
  • a recording medium such as a hard disk or RAM provided in a server system connected to a dedicated communication network or the Internet can be used as a computer-readable storage medium so that the program can be provided to the computer 1200 via the network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

An apparatus including a circuit, the circuit being configured to: acquire distance information of an imaging object based on a plurality of image data obtained by imaging at different focal positions of an optical system included in an imaging device and on blur characteristic information of the optical system. The circuit is configured to: store the blur characteristic information of the optical system corresponding to a specific quadrant of the imaging surface of the imaging device. The circuit is configured to: when acquiring distance information of the imaging object corresponding to another quadrant, perform a quadrant conversion process on one of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data, and acquire the distance information of the imaging object corresponding to the other quadrant based on the information obtained by the conversion process and on the other of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant.

Description

Device, imaging device, imaging system, mobile body, method, and program
Technical Field
The present invention relates to a device, an imaging device, an imaging system, a mobile body, a method, and a program.
Background Art
Patent Document 1 describes the principle of distance calculation based on the DFD method.
[Patent Document 1] Japanese Patent Application Publication No. 2013-242617.
Summary of the Invention
An apparatus according to one aspect of the present invention includes a circuit configured to acquire distance information of an imaging object based on a plurality of image data obtained by imaging at different focal positions of an optical system included in an imaging device and on blur characteristic information of the optical system. The circuit is configured to store the blur characteristic information of the optical system corresponding to a specific quadrant of the imaging surface of the imaging device. The circuit is configured to: when acquiring distance information of the imaging object corresponding to another quadrant, perform a quadrant conversion process on one of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data, and acquire the distance information of the imaging object corresponding to the other quadrant based on the information obtained by the conversion process and on the other of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant.
The circuit may be configured to: when acquiring the distance information of the imaging object corresponding to the other quadrant, perform a conversion process to the specific quadrant on the image information corresponding to the other quadrant in each of the plurality of image data, and acquire the distance information of the imaging object corresponding to the other quadrant based on the image information obtained by the conversion process to the specific quadrant and the blur characteristic information of the optical system in the specific quadrant.
The circuit may be configured to: when acquiring the distance information of the imaging object corresponding to the other quadrant, perform a conversion process to the other quadrant on the blur characteristic information of the optical system in the specific quadrant, and acquire the distance information of the imaging object corresponding to the other quadrant based on the blur characteristic information obtained by the conversion process to the other quadrant and the image information corresponding to the other quadrant.
The blur characteristic information of the optical system may be a point spread function. The circuit may be configured to store point spread functions of the optical system at a plurality of points included in the specific quadrant.
The specific quadrant may include the first quadrant, i.e., the region in which, among the four quadrants defined by mutually perpendicular first and second coordinate axes with the point corresponding to the optical axis of the optical system as the origin, the coordinate value on the first coordinate axis and the coordinate value on the second coordinate axis are both positive. The circuit may be configured to perform the conversion process to the first quadrant by applying a line-symmetric transformation about the second coordinate axis to the image information corresponding to the second quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is positive. The circuit may be configured to perform the conversion process to the first quadrant by applying a point-symmetric transformation about the origin to the image information corresponding to the third quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is negative. The circuit may be configured to perform the conversion process to the first quadrant by applying a line-symmetric transformation about the first coordinate axis to the image information corresponding to the fourth quadrant, in which the coordinate value on the first coordinate axis is positive and the coordinate value on the second coordinate axis is negative.
The blur characteristic information of the optical system may be a point spread function. The circuit may be configured to store point spread functions of the optical system at one or more points within the first quadrant, at one or more points on the first coordinate axis, and at one or more points on the second coordinate axis.
The circuit may be configured to store the blur characteristic information of the optical system in each of a first region and a second region within the first quadrant. The circuit may be configured to acquire distance information of the imaging object between the first region and the second region based on the image information of the first quadrant and on blur characteristic information generated by interpolating between the blur characteristic information of the optical system in the first region and the blur characteristic information of the optical system in the second region. The circuit may be configured to acquire distance information of the imaging object corresponding to the region between the first region and the second region in the second quadrant based on image information obtained by applying a line-symmetric transformation about the second coordinate axis to the image information of the second quadrant, and on blur characteristic information generated by the same interpolation. The circuit may be configured to acquire distance information of the imaging object corresponding to the region between the first region and the second region in the third quadrant based on image information obtained by applying a point-symmetric transformation about the origin to the image information of the third quadrant, and on blur characteristic information generated by the same interpolation. The circuit may be configured to acquire distance information of the imaging object corresponding to the region between the first region and the second region in the fourth quadrant based on image information obtained by applying a line-symmetric transformation about the first coordinate axis to the image information of the fourth quadrant, and on blur characteristic information generated by the same interpolation.
The circuit may be configured to perform focus adjustment of the optical system based on the distance information of the imaging object.
An imaging device according to one aspect of the present invention includes the above apparatus and an image sensor having the imaging surface.
An imaging system according to one aspect of the present invention includes the above imaging device and a support mechanism that supports the imaging device such that the attitude of the imaging device can be controlled.
A mobile body according to one aspect of the present invention may be a mobile body that moves with the above imaging device mounted thereon.
A method according to one aspect of the present invention includes a stage of acquiring distance information of an imaging object based on a plurality of image data obtained by imaging at different focal positions of an optical system included in an imaging device and on blur characteristic information of the optical system. The stage of acquiring the distance information of the imaging object includes a stage of storing the blur characteristic information of the optical system corresponding to a specific quadrant of the imaging surface of the imaging device. The stage of acquiring the distance information of the imaging object includes a stage of, when acquiring distance information of the imaging object corresponding to another quadrant, performing a quadrant conversion process on one of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data. The stage of acquiring the distance information of the imaging object includes a stage of acquiring the distance information of the imaging object corresponding to the other quadrant based on the information obtained by the conversion process and on the other of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant.
A program according to one aspect of the present invention may be a program that causes a computer to execute the above method.
According to one aspect of the present invention, the storage capacity for blur characteristic information can be reduced.
The above summary of the invention does not enumerate all the necessary features of the present invention. Sub-combinations of these feature groups may also constitute inventions.
Brief Description of the Drawings
FIG. 1 shows an example of an external perspective view of the imaging device 100 according to the present embodiment.
FIG. 2 shows functional blocks of the imaging device 100 according to the present embodiment.
FIG. 3 shows an example of a curve representing the relationship between the blur amount (Cost) of an image and the position of the focus lens.
FIG. 4 is a flowchart showing an example of the distance calculation process in the BDAF method.
FIG. 5 illustrates the process of calculating the subject distance.
FIG. 6 schematically shows the positions of ROIs (Regions of Interest) set in the image area.
FIG. 7 is a chart of the detection error of the defocus amount calculated using test subject 700.
FIG. 8 is a chart of the detection error of the defocus amount calculated using test subject 800.
FIG. 9 is a chart of the detection error of the defocus amount calculated using test subject 900.
FIG. 10 shows calculation results of the vignetting shape for each ROI position.
FIG. 11 shows an observation result of vignetting together with the calculated vignetting shape.
FIG. 12 shows the detection error of the defocus amount calculated by DFD computation applying per-ROI PSF data, using test subject 700.
FIG. 13 shows the detection error of the defocus amount calculated by DFD computation applying per-ROI PSF data, using test subject 800.
FIG. 14 shows the detection error of the defocus amount calculated by DFD computation applying per-ROI PSF data, using test subject 900.
FIG. 15 shows the PSF data pre-stored by the imaging control unit 110.
FIG. 16 schematically shows quadrant conversion.
FIG. 17 is a flowchart showing the focus-control processing procedure executed by the imaging control unit 110.
FIG. 18 shows an example of an unmanned aerial vehicle (UAV).
FIG. 19 shows an example of a computer 1200 that may embody aspects of the present invention in whole or in part.
[Description of Reference Numerals]
10 UAV
20 UAV body
50 gimbal
60 imaging device
100 imaging device
102 imaging unit
110 imaging control unit
120 image sensor
130 memory
160 display unit
162 instruction unit
200 lens unit
210 lens
212 lens drive unit
220 lens control unit
222 memory
300 remote operation device
600 image area
700, 800, 900 test subjects
1620, 1622, 1630, 1632, 1640, 1642 image information
1700 focus control processing
1750 DFD processing
1200 computer
1210 host controller
1212 CPU
1214 RAM
1220 input/output controller
1222 communication interface
1230 ROM
Detailed Description of the Embodiments
Hereinafter, the present invention will be described through embodiments of the invention, but the following embodiments do not limit the invention according to the claims. Moreover, not all combinations of the features described in the embodiments are necessarily essential to the solution of the invention. It will be apparent to those of ordinary skill in the art that various changes or improvements can be made to the following embodiments. It is apparent from the description of the claims that forms to which such changes or improvements are applied can be included in the technical scope of the present invention.
The claims, the description, the drawings, and the abstract include matters subject to copyright protection. The copyright holder will not object to the reproduction of these documents by anyone, as long as it is done as shown in the files or records of the Patent Office. However, in all other cases, all copyrights are reserved.
Various embodiments of the present invention may be described with reference to flowcharts and block diagrams, where a block may represent (1) a stage of a process in which an operation is performed or (2) a "part" of a device that plays the role of performing the operation. Specific stages and "parts" may be implemented by programmable circuits and/or processors. Dedicated circuits may include digital and/or analog hardware circuits, and may include integrated circuits (ICs) and/or discrete circuits. Programmable circuits may include reconfigurable hardware circuits. Reconfigurable hardware circuits may include logical AND, logical OR, logical XOR, logical NAND, logical NOR, and other logical operations, as well as memory elements such as flip-flops, registers, field programmable gate arrays (FPGAs), and programmable logic arrays (PLAs).
A computer-readable medium may include any tangible device that can store instructions to be executed by a suitable device. As a result, a computer-readable medium having instructions stored thereon constitutes a product that includes instructions that can be executed to create means for performing the operations specified in the flowcharts or block diagrams. Examples of the computer-readable medium may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like. More specific examples of the computer-readable medium may include a floppy (registered trademark) disk, a diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an electrically erasable programmable read-only memory (EEPROM), a static random access memory (SRAM), a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a Blu-ray (registered trademark) disc, a memory stick, an integrated circuit card, and the like.
Computer-readable instructions may include either source code or object code described in any combination of one or more programming languages. The source code or object code includes conventional procedural programming languages. The conventional procedural programming languages may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or object-oriented programming languages such as Smalltalk (registered trademark), JAVA (registered trademark), and C++, as well as the "C" programming language or similar programming languages. The computer-readable instructions may be provided, locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet, to a processor or programmable circuit of a general-purpose computer, special-purpose computer, or other programmable data processing device. The processor or programmable circuit may execute the computer-readable instructions to create means for performing the operations specified in the flowcharts or block diagrams. Examples of the processor include a computer processor, a processing unit, a microprocessor, a digital signal processor, a controller, a microcontroller, and the like.
FIG. 1 shows an example of an external perspective view of the imaging device 100 according to the present embodiment. FIG. 2 shows functional blocks of the imaging device 100 according to the present embodiment.
The imaging device 100 includes an imaging unit 102 and a lens unit 200. The imaging unit 102 includes an image sensor 120, an imaging control unit 110, a memory 130, an instruction unit 162, and a display unit 160.
The image sensor 120 may be composed of a CCD or CMOS. The image sensor 120 receives light through the lens 210 included in the lens unit 200. The image sensor 120 outputs the image data of the optical image formed through the lens 210 to the imaging control unit 110.
The imaging control unit 110 may be composed of a microprocessor such as a CPU or MPU, a microcontroller such as an MCU, or the like. The memory 130 may be a computer-readable recording medium, and may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory. The imaging control unit 110 corresponds to the circuit. The memory 130 stores the programs and the like necessary for the imaging control unit 110 to control the image sensor 120 and other components. The memory 130 may be provided inside the housing of the imaging device 100, or may be provided so as to be detachable from the housing of the imaging device 100.
The instruction unit 162 is a user interface that receives instructions for the imaging device 100 from the user. The display unit 160 displays images captured by the image sensor 120 and processed by the imaging control unit 110, various setting information of the imaging device 100, and the like. The display unit 160 may be composed of a touch panel.
The imaging control unit 110 controls the lens unit 200 and the image sensor 120. For example, the imaging control unit 110 controls the focus position and the focal length of the lens 210. Based on information indicating the user's instructions, the imaging control unit 110 outputs control commands to the lens control unit 220 included in the lens unit 200, thereby controlling the lens unit 200.
The lens unit 200 includes one or more lenses 210, a lens drive unit 212, a lens control unit 220, and a memory 222. In the present embodiment, the one or more lenses 210 are collectively referred to as the "lens 210". The lens 210 may include a focus lens and a zoom lens. At least some or all of the lenses included in the lens 210 are arranged so as to be movable along the optical axis of the lens 210. The lens unit 200 may be an interchangeable lens detachably provided on the imaging unit 102.
The lens drive unit 212 moves at least some or all of the lens 210 along the optical axis of the lens 210. In accordance with lens control commands from the imaging unit 102, the lens control unit 220 drives the lens drive unit 212 to move the lens 210 as a whole, or the zoom lens or focus lens included in the lens 210, along the optical axis direction, thereby executing at least one of a zoom operation and a focus operation. The lens control commands are, for example, zoom control commands and focus control commands.
The lens drive unit 212 may include a voice coil motor (VCM) that moves at least some or all of the plurality of lenses 210 along the optical axis direction. The lens drive unit 212 may include an electric motor such as a DC motor, a coreless motor, or an ultrasonic motor. The lens drive unit 212 may transmit the power from the motor to at least some or all of the plurality of lenses 210 via mechanical components such as a cam ring and guide shafts, to move at least some or all of the lens 210 along the optical axis.
The memory 222 stores the control values for the focus lens and the zoom lens moved by the lens drive unit 212. The memory 222 may include at least one of SRAM, DRAM, EPROM, EEPROM, and flash memory such as USB memory.
Based on information indicating the user's instructions acquired through the instruction unit 162 or the like, the imaging control unit 110 outputs control commands to the image sensor 120 to execute control of the image sensor 120, including control of its imaging operations. The imaging control unit 110 acquires the images captured by the image sensor 120, applies image processing to the images acquired from the image sensor 120, and stores them in the memory 130.
The operation of the imaging control unit 110 in the present embodiment will now be described. The imaging control unit 110 acquires distance information of the imaging object based on a plurality of image data obtained by imaging at different focal positions of the lens 210 included in the imaging device 100 and on the blur characteristic information of the lens 210. Specifically, the imaging control unit 110 stores the blur characteristic information of the lens 210 corresponding to a specific quadrant of the imaging surface of the imaging device 100. When acquiring distance information of the imaging object corresponding to another quadrant, the imaging control unit 110 performs a quadrant conversion process on one of the blur characteristic information of the lens 210 in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data, and acquires the distance information of the imaging object corresponding to the other quadrant based on the information obtained by the conversion process and on the other of the blur characteristic information of the lens 210 in the specific quadrant and the image information corresponding to the other quadrant. The imaging control unit 110 performs focus adjustment of the lens 210 based on the distance information of the imaging object.
For example, when acquiring the distance information of the imaging object corresponding to the other quadrant, the imaging control unit 110 performs a conversion process to the specific quadrant on the image information corresponding to the other quadrant in each of the plurality of image data, and acquires the distance information of the imaging object corresponding to the other quadrant based on the image information obtained by the conversion process to the specific quadrant and the blur characteristic information of the lens 210 in the specific quadrant.
Alternatively, when acquiring the distance information of the imaging object corresponding to the other quadrant, the imaging control unit 110 performs a conversion process to the other quadrant on the blur characteristic information of the lens 210 in the specific quadrant, and acquires the distance information of the imaging object corresponding to the other quadrant based on the blur characteristic information obtained by the conversion process to the other quadrant and the image information corresponding to the other quadrant.
The blur characteristic information of the lens 210 is, for example, a point spread function. The imaging control unit 110 is configured to store the point spread functions of the lens 210 at a plurality of points included in the specific quadrant. For example, the imaging control unit 110 has an internal non-volatile storage medium in which it stores the point spread functions of the lens 210 at the plurality of points included in the specific quadrant. Furthermore, the imaging control unit 110 may store the point spread functions of the lens 210 at one or more points within the first quadrant, at one or more points on the first coordinate axis, and at one or more points on the second coordinate axis.
The specific quadrant may include the first quadrant, i.e., the region in which, among the four quadrants defined by mutually perpendicular first and second coordinate axes with the point corresponding to the optical axis of the lens 210 as the origin, the coordinate value on the first coordinate axis and the coordinate value on the second coordinate axis are both positive. The imaging control unit 110 performs the conversion process to the first quadrant by applying a line-symmetric transformation about the second coordinate axis to the image information corresponding to the second quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is positive. The imaging control unit 110 performs the conversion process to the first quadrant by applying a point-symmetric transformation about the origin to the image information corresponding to the third quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is negative. The imaging control unit 110 performs the conversion process to the first quadrant by applying a line-symmetric transformation about the first coordinate axis to the image information corresponding to the fourth quadrant, in which the coordinate value on the first coordinate axis is positive and the coordinate value on the second coordinate axis is negative.
The imaging control unit 110 stores the blur characteristic information of the lens 210 in each of a first region and a second region within the first quadrant. The imaging control unit 110 acquires distance information of the imaging object between the first region and the second region based on the image information of the first quadrant and on blur characteristic information generated by interpolating between the blur characteristic information of the lens 210 in the first region and the blur characteristic information of the lens 210 in the second region. The imaging control unit 110 acquires distance information of the imaging object corresponding to the region between the first region and the second region in the second quadrant based on image information obtained by applying a line-symmetric transformation about the second coordinate axis to the image information of the second quadrant, and on blur characteristic information generated by the same interpolation. The imaging control unit 110 acquires distance information of the imaging object corresponding to the region between the first region and the second region in the third quadrant based on image information obtained by applying a point-symmetric transformation about the origin to the image information of the third quadrant, and on blur characteristic information generated by the same interpolation. The imaging control unit 110 acquires distance information of the imaging object corresponding to the region between the first region and the second region in the fourth quadrant based on image information obtained by applying a line-symmetric transformation about the first coordinate axis to the image information of the fourth quadrant, and on blur characteristic information generated by the same interpolation.
Here, the AF method executed by the imaging device 100 will be described. To execute AF processing, the imaging device 100 determines the distance from the lens 210 to the subject (the subject distance). One method for determining the subject distance is to move the focus lens and determine the distance based on the blur amounts of a plurality of images captured in states in which the positional relationship between the focus lens and the light-receiving surface of the image sensor 120 differs. Herein, AF using this method is referred to as the Bokeh Detection Auto Focus (BDAF) method. Specifically, in BDAF, AF is performed by carrying out a DFD (Depth From Defocus) computation.
For example, the blur amount of an image can be expressed with a Gaussian function by the following formula (1). That is, the distribution of the blur amount can be calculated by applying formula (1) to an image acquired at a particular focus-lens position. In formula (1), x represents the pixel position in the horizontal direction, and σ represents the standard deviation.
[Formula 1]
$$C(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)\quad\ldots(1)$$
FIG. 3 shows an example of a curve representing the relationship between the Cost calculated using formula (1) and the position of the focus lens. For example, the focus lens is moved to two different positions and the distribution of the blur amount is calculated at each; FIG. 3 depicts the curve passing through these two points. C1 is the Cost of the image obtained when the focus lens is located at x1, and C2 is the Cost of the image obtained when the focus lens is located at x2. The subject can be brought into focus by aligning the focus lens with the lens position x0 corresponding to the minimum point 502 of the curve 500, which is determined from C1 and C2 in consideration of the optical characteristics of the lens 210.
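As one way to make this concrete, the following is a minimal sketch under an assumption of ours (not stated in the patent) that curve 500 is locally a parabola C(x) = a(x - x0)^2 + c whose curvature a is known from the lens characteristics; the two samples (x1, C1) and (x2, C2) then determine the minimum point x0 in closed form:

```python
# Minimal sketch: estimate the lens position x0 at the minimum of the
# Cost curve from two samples, assuming a parabolic model with known
# curvature a (an illustrative assumption, not the patented method).
def estimate_focus_position(x1: float, c1: float,
                            x2: float, c2: float,
                            a: float) -> float:
    # C1 - C2 = a*((x1 - x0)**2 - (x2 - x0)**2) is linear in x0.
    return (a * (x1 ** 2 - x2 ** 2) - (c1 - c2)) / (2.0 * a * (x1 - x2))
```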
FIG. 4 is a flowchart showing an example of the distance calculation process in the BDAF method. With the lens 210 and the imaging surface of the image sensor 120 in a first positional relationship, the imaging control unit 110 captures a first image and stores it in the memory 130. By moving the lens 210 along the optical axis direction so that the lens 210 and the imaging surface are in a second positional relationship, the imaging control unit 110 captures a second image with the imaging device 100 and stores it in the memory 130 (S201). For example, the imaging control unit 110 changes the positional relationship between the lens 210 and the imaging surface from the first positional relationship to the second positional relationship by moving the focus lens along the optical axis direction. The lens movement amount may be, for example, about 10 μm.
Next, the imaging control unit 110 divides the first image into a plurality of regions (S202). The imaging control unit 110 may calculate a feature amount for each pixel in the first image and divide the first image into a plurality of regions by treating a group of pixels with similar feature amounts as one region. Alternatively, the imaging control unit 110 may divide the group of pixels in the range set as the AF processing frame in the first image into a plurality of regions. The imaging control unit 110 divides the second image into a plurality of regions corresponding to the plurality of regions of the first image. Based on the blur amount of each of the plurality of regions of the first image and the blur amount of each of the plurality of regions of the second image, the imaging control unit 110 calculates, for each of the plurality of regions, the distance to the subject corresponding to the object contained in that region (S203).
Moreover, the method of changing the positional relationship between the lens 210 and the imaging surface of the image sensor 120 is not limited to moving the focus lens included in the lens 210. For example, the imaging control unit 110 may move the entire lens 210 along the optical axis direction. The imaging control unit 110 may move the imaging surface of the image sensor 120 along the optical axis direction. The imaging control unit 110 may move both at least some of the lenses included in the lens 210 and the imaging surface of the image sensor 120 along the optical axis direction. The imaging control unit 110 may employ any method for optically changing the relative positional relationship between the focal point of the lens 210 and the imaging surface of the image sensor 120.
The process of calculating the subject distance will be further described with reference to FIG. 5. Let the distance from the principal point of the lens L to the subject 510 (the object plane) be A, let the distance from the principal point of the lens L to the position at which the light beam from the subject 510 forms an image (the image plane) be B, and let the focal length of the lens L be F. In this case, the relationship among the distance A, the distance B, and the focal length F can be expressed by the following formula (2) according to the lens formula.
[Formula 2]
$$\frac{1}{A}+\frac{1}{B}=\frac{1}{F}\quad\ldots(2)$$
The focal length F is determined by the positions of the individual lenses included in the lens L. Therefore, if the distance B at which the light beam from the subject 510 forms an image can be determined, the distance A from the principal point of the lens L to the subject 510 can be determined using formula (2).
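As a sanity check of formula (2), the following is a minimal sketch that solves it for the subject distance A; the numerical values are illustrative only:

```python
# Minimal sketch of formula (2): 1/A + 1/B = 1/F solved for the
# subject distance A, given image distance B and focal length F
# (all in the same unit; the values below are illustrative).
def subject_distance(B: float, F: float) -> float:
    return 1.0 / (1.0 / F - 1.0 / B)

print(subject_distance(B=52.0, F=50.0))  # ≈ 1300 for a 50 mm lens
```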
Here, suppose that the positional relationship between the lens L and the imaging surface is changed by moving the imaging surface of the image sensor toward the lens L side. As shown in FIG. 5, if the imaging surface is located at the distance D1 from the principal point of the lens L or at the distance D2 from the principal point of the lens L, the image of the subject 510 projected onto the imaging surface is blurred. The position at which the subject 510 forms an image can be calculated from the blur sizes (the circles of confusion 512 and 514) of the image of the subject 510 projected onto the imaging surface, whereby the distance B can be determined and the distance A can further be determined. That is, considering that the blur size (blur amount) is proportional to the distance between the imaging surface and the imaging position, the imaging position can be determined from the difference between the blur amounts.
Here, the image I1 at the position at the distance D1 and the image I2 at the position at the distance D2 are each blurred. For the image I1, let the point spread function be PSF1 and the subject image be Id1; then the image I1 can be expressed by a convolution operation as the following formula (3).
[Formula 3]
$$I_{1}=\mathrm{PSF}_{1}*I_{d1}\quad\ldots(3)$$
The image I2 can likewise be expressed by a convolution operation with PSF2. Let the Fourier transform of the subject image be f, and let the optical transfer functions obtained by Fourier-transforming the point spread functions PSF1 and PSF2 be OTF1 and OTF2; then the ratio shown in the following formula (4) is obtained.
[Formula 4]
$$C=\frac{\mathrm{OTF}_{2}\cdot f}{\mathrm{OTF}_{1}\cdot f}=\frac{\mathrm{OTF}_{2}}{\mathrm{OTF}_{1}}\quad\ldots(4)$$
The C value shown in formula (4) is the amount of change between the blur amounts of the image at the position at the distance D1 from the principal point of the lens L and the image at the position at the distance D2 from the principal point of the lens L; that is, the C value corresponds to the difference between the blur amounts of those two images.
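To illustrate how the subject spectrum f drops out of the ratio in formula (4), the following is a minimal sketch, assuming two registered captures i1 and i2 of the same scene as 2-D numpy arrays; the function name and the eps regularization are our additions:

```python
# Minimal sketch of the DFD ratio in formula (4): the Fourier
# transforms of the two blurred captures are OTF1*f and OTF2*f,
# so their ratio approximates OTF2/OTF1 (the C value) per frequency.
import numpy as np

def dfd_ratio(i1: np.ndarray, i2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    F1 = np.fft.fft2(i1)    # = OTF1 * f
    F2 = np.fft.fft2(i2)    # = OTF2 * f
    return F2 / (F1 + eps)  # ~ OTF2 / OTF1; eps guards zero denominators
```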
In FIG. 5, the case in which the positional relationship between the lens L and the imaging surface is changed by moving the imaging surface toward the lens L side was described. The blur amount also changes when the positional relationship between the focal position of the lens L and the imaging surface is changed by moving the focus lens relative to the imaging surface. In the present embodiment, images with different blur amounts are acquired mainly by moving the focus lens relative to the imaging surface, a DFD computation is performed based on the acquired images to obtain a DFD computation value representing the defocus amount, and the target value of the focus-lens position for focusing on the subject is calculated based on the DFD computation value.
FIG. 6 schematically shows the positions of ROIs (Regions of Interest) set in the image area. The image area 600 shown in FIG. 6 represents the entire area of an image captured by the imaging device 100. In the example of FIG. 6, 15 regions, ROI00 to ROI14, are set in the image area 600.
FIGS. 7 to 9 show the detection errors of the defocus amount calculated by DFD computation using test subjects. FIG. 7 is a chart of the detection error of the defocus amount calculated using test subject 700. Test subject 700 is a pattern of multiple bars extending in the vertical direction. FIG. 8 is a chart of the detection error of the defocus amount calculated using test subject 800. Test subject 800 is a pattern of multiple bars extending in the horizontal direction. FIG. 9 is a chart of the detection error of the defocus amount calculated using test subject 900. Test subject 900 is a mixed pattern containing horizontal and vertical components.
In each of the charts of FIGS. 7 to 9, the horizontal axis is the defocus amount and the vertical axis is the error of the defocus amount calculated by the DFD computation. The unit of both axes is fδ. Each of FIGS. 7 to 9 presents 15 charts, which correspond one-to-one to the 15 ROIs shown in FIG. 6 and, as viewed on the page, are arranged in the same layout as the corresponding ROIs in FIG. 6.
In this DFD computation, the defocus amount was calculated using the same PSF for all ROIs. Specifically, the defocus amount was calculated using the circular PSF of ROI07, the position corresponding to the optical axis.
As the charts of FIGS. 7 to 9 show, for the ROI corresponding to the optical axis the detection error is small regardless of the test pattern. However, for ROIs at positions far from the optical axis (hereinafter referred to as "peripheral ROIs"), the magnitude of the error differs depending on the pattern of the subject. This means that when the subject pattern changes, the result of the DFD computation in the peripheral ROIs changes. It follows that even if the result of the DFD computation were corrected using a correction value corresponding to each ROI, the detection accuracy could still degrade depending on the subject.
FIG. 10 shows the calculation results of the vignetting shape for each ROI position. FIG. 10 shows two circles corresponding to each ROI. These two circles represent the ranges limited by the diameters and the like of the optical system. The overlapping portion of the two circles forms the PSF shape.
FIG. 11 shows the observation result 1100 of vignetting in ROI05 together with the calculated vignetting shape. As shown in FIG. 11, the observation result 1100 of vignetting agrees almost exactly with the calculation result. In general, it can be predicted that as the image height increases, the vignetting becomes the overlap of multiple displaced circles. As predicted, the calculation results of FIG. 10 show the two circles shifting as the image height increases.
FIGS. 12, 13, and 14, using test subjects 700, 800, and 900 respectively, show the detection errors of the defocus amount calculated by performing the DFD computation with the per-ROI PSF data shown in FIG. 10 applied. In each of the charts of FIGS. 12 to 14, the horizontal axis is the defocus amount and the vertical axis is the error of the defocus amount calculated by the DFD computation. The unit of both axes is fδ.
As the charts of FIGS. 12 to 14 show, even in the peripheral ROIs, the detection error of the DFD computation is small regardless of the subject pattern. It follows that the main cause of the subject-pattern-dependent differences in the detection error of the DFD computation in the peripheral ROIs, shown in FIGS. 7 to 9, is the difference in PSFs explained in connection with FIG. 10.
Here, the imaging control unit 110 performs the DFD computation applying a different PSF to each image region. The PSFs used in the DFD computation must be stored as multiple PSF data corresponding to different defocus amounts. For example, when using PSFs for defocus amounts in the range from -100fδ to +100fδ with a lens drive interval of 2fδ, 101 PSF data are required. Therefore, in terms of storage capacity, holding PSF data for the entire image region is impractical. One approach, therefore, is to store PSF data for 15 ROIs covering the whole image region and, for coordinates between ROIs, to interpolate the PSF data of neighboring ROIs for the DFD computation. Even when storing the PSFs of 15 ROIs, 15 × 101 = 1515 PSF data must be stored, so increasing the number of ROIs for higher DFD accuracy would strain the storage capacity of the non-volatile storage medium used for storing PSF data.
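The following is a minimal sketch of the interpolation idea just described, assuming PSFs stored only at discrete ROI positions; the linear weighting and the function name are illustrative assumptions rather than the patented procedure:

```python
# Minimal sketch: generate the PSF for a coordinate between two
# neighboring ROIs by linearly interpolating their stored PSF data
# (for one defocus amount). t in [0, 1] is the normalized position
# between ROI A (t = 0) and ROI B (t = 1).
import numpy as np

def interpolate_psf(psf_a: np.ndarray, psf_b: np.ndarray, t: float) -> np.ndarray:
    psf = (1.0 - t) * psf_a + t * psf_b
    return psf / psf.sum()  # renormalize so the PSF integrates to 1
```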
In general, the shape of a PSF is roughly symmetric about the optical axis. On the other hand, the pixels on an image sensor are usually arranged at fixed intervals in the horizontal and vertical directions. Therefore, even if the distance (image height) from the optical-axis center is the same for, e.g., 512 × 512-pixel ROIs, the PSF shape differs because the diagonal, horizontal, and vertical image heights differ.
FIG. 15 schematically shows the range for which the imaging control unit 110 stores PSF data. As shown in FIG. 15, the imaging control unit 110 stores PSF data that take vignetting into account in a non-volatile storage medium. The imaging control unit 110 stores the 15 PSF data corresponding to the first quadrant in the non-volatile storage medium. The imaging control unit 110 does not store PSF data corresponding to the second, third, and fourth quadrants in the non-volatile storage medium.
More specifically, the imaging control unit 110 stores, in the non-volatile storage medium, the PSF data within the first quadrant, the PSF data on the X axis adjoining the first quadrant, and the PSF data on the Y axis adjoining the first quadrant. When performing the DFD computation for an imaging object in the first quadrant, the DFD computation is performed using the PSF data corresponding to the first quadrant stored in the non-volatile storage medium and the image information corresponding to the first quadrant. When performing the DFD computation for an imaging object in another quadrant, the image information of the other quadrant is first converted to the first quadrant by quadrant conversion, and the DFD computation is then performed using the PSF data of the first quadrant.
FIG. 16 schematically shows quadrant conversion. When performing the DFD computation for an imaging object in the second quadrant, the imaging control unit 110 performs the DFD computation using the image information 1622 obtained by quadrant-converting the image information 1620 of the second quadrant to the first quadrant, and the PSF data corresponding to the first quadrant. Specifically, the imaging control unit 110 performs the conversion process to the first quadrant by applying to the image information 1620 of the second quadrant a line-symmetric transformation about the Y axis, that is, by flipping the image information 1620 of the second quadrant horizontally about the Y axis.
Likewise, when performing the DFD computation for an imaging object in the third quadrant, the imaging control unit 110 performs the DFD computation using the image information 1632 obtained by quadrant-converting the image information 1630 of the third quadrant to the first quadrant, and the PSF data corresponding to the first quadrant. Specifically, the imaging control unit 110 performs the conversion process to the first quadrant by applying to the image information 1630 of the third quadrant a point-symmetric transformation about the origin, that is, by rotating the image information 1630 of the third quadrant 180 degrees about the origin.
Similarly, when performing the DFD computation for an imaging object in the fourth quadrant, the DFD computation is performed using the image information 1642 obtained by quadrant-converting the image information 1640 of the fourth quadrant to the first quadrant, and the PSF data corresponding to the first quadrant. Specifically, the imaging control unit 110 performs the conversion process to the first quadrant by applying to the image information 1640 of the fourth quadrant a line-symmetric transformation about the X axis, that is, by flipping the image information 1640 of the fourth quadrant vertically about the X axis.
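The three conversions above amount to simple flips of the pixel array. The following is a minimal sketch for a 2-D image patch stored as a numpy array with row 0 at the top; the function name and the quadrant-numbering convention are our assumptions:

```python
# Minimal sketch of the quadrant conversion: map a patch from any
# quadrant onto the first quadrant so that the stored first-quadrant
# PSF data can be reused.
import numpy as np

def convert_to_first_quadrant(patch: np.ndarray, quadrant: int) -> np.ndarray:
    if quadrant == 1:
        return patch                        # stored quadrant; no change
    if quadrant == 2:
        return np.flip(patch, axis=1)       # horizontal flip about the Y axis
    if quadrant == 3:
        return np.flip(patch, axis=(0, 1))  # 180-degree rotation about the origin
    if quadrant == 4:
        return np.flip(patch, axis=0)       # vertical flip about the X axis
    raise ValueError("quadrant must be 1, 2, 3, or 4")
```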
According to the present embodiment, the imaging control unit 110 performs quadrant conversion of the image information for the second, third, and fourth quadrants so that the vertical image height and the horizontal image height are preserved. Therefore, the imaging control unit 110 need not hold PSF data for the second, third, and fourth quadrants. As shown in FIG. 15, the imaging control unit 110 stores 15 PSF data as the PSF data corresponding to the first quadrant. Thus, compared with the case of setting 15 ROIs over the entire image region as shown in FIG. 6, for example, the PSF of a peripheral ROI can be obtained more accurately by performing interpolation. Accordingly, the detection accuracy of the DFD computation can be improved. If the PSF data of all quadrants were stored to obtain the same level of detection accuracy, 45 PSF data would have to be stored. Therefore, according to the present embodiment, the storage capacity for PSF data can be reduced to 1/3.
FIG. 17 is a flowchart showing the focus-control processing procedure executed by the imaging control unit 110. The imaging control unit 110 executes in parallel the focus control processing 1700, implemented by the algorithm for performing focus control, and the DFD processing 1750, implemented by the algorithm for performing the DFD computation.
The region targeted for focus adjustment may be a region designated by the user. The region targeted for focus adjustment may be a region in which the imaging control unit 110 has detected the main subject by image analysis such as face detection. The region targeted for focus adjustment may also be multiple regions in which the imaging control unit 110 has detected main subjects by image analysis or the like. In the present embodiment, the method of determining the region targeted for focus adjustment is not particularly limited.
In the focus control processing 1700, the imaging control unit 110 causes the image sensor 120 to capture an image at the current lens position and acquires the image data of a first image (S1702).
In the DFD processing 1750, for the image data of the first image, the imaging control unit 110 performs quadrant conversion processing on the image information of the second, third, and fourth quadrants as necessary (S1752). For example, when the region targeted for focus adjustment is included in any of the second, third, and fourth quadrants, the imaging control unit 110 performs quadrant conversion on the image information of the quadrant that includes the region targeted for focus adjustment.
In S1754, the imaging control unit 110 performs a convolution operation, using the PSF data of the first quadrant, on the image information of the first image that has undergone the quadrant conversion processing of S1752. The imaging control unit 110 performs the convolution operation for each defocus amount and stores the per-defocus-amount convolution results in memory. Moreover, depending on the position of the region targeted for focus adjustment, the imaging control unit 110 may generate PSF data corresponding to the region targeted for focus adjustment by performing interpolation processing using neighboring PSF data, and perform the convolution operation with the generated PSF data. The convolution operation is preferably implemented in hardware, but may also be implemented in software.
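The following is a minimal sketch of the per-defocus convolution in S1754, assuming the stored first-quadrant PSFs are available as a mapping from defocus amount to PSF array; the helper name and the use of scipy are our assumptions:

```python
# Minimal sketch of S1754: convolve the quadrant-converted ROI with
# the PSF of every candidate defocus amount and keep the results,
# keyed by defocus amount, for the later blur-amount evaluation.
import numpy as np
from scipy.signal import fftconvolve

def convolve_over_defocus(roi: np.ndarray, psfs_by_defocus: dict) -> dict:
    return {defocus: fftconvolve(roi, psf, mode="same")
            for defocus, psf in psfs_by_defocus.items()}
```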
In the focus control processing 1700, the imaging control unit 110 moves the position of the focus lens of the lens 210 by a preset movement amount (S1704), causes the image sensor 120 to capture an image, and acquires the image data of a second image (S1706). In the DFD processing 1750, for the image data of the second image, the imaging control unit 110 performs quadrant conversion processing on the image information of the second, third, and fourth quadrants as necessary (S1752), and performs a convolution operation, using the PSF data of the first quadrant, on the image information of the second image that has undergone the quadrant conversion processing of S1752 (S1754).
Next, in S1756, using the convolution results of the first image and the second image, the imaging control unit 110 calculates the blur amounts with the blur-amount function, thereby calculating the defocus amount that minimizes the blur-amount function, and calculates the lens movement amount based on the defocus amounts of the first image and the second image.
In the focus control processing 1700, the imaging control unit 110 moves the position of the focus lens of the lens 210 by a preset movement amount (S1708), causes the image sensor 120 to capture an image, and acquires the image data of a third image (S1710).
In the DFD processing 1750, for the image data of the third image, as in S1752, the imaging control unit 110 performs quadrant conversion processing on the image information of the second, third, and fourth quadrants as necessary (S1762). Using the PSF data of the first quadrant, the imaging control unit 110 performs a convolution operation on the image information of the third image that has undergone the quadrant conversion processing of S1762 (S1764). In S1766, using the convolution results of the second image and the third image, the imaging control unit 110 calculates the blur amounts with the blur-amount function, thereby calculating the defocus amount that minimizes the blur-amount function. Based on the defocus amount calculated in S1766, it can be determined whether the imaging object is in focus.
In S1768, the imaging control unit 110 determines, based on the computation result of S1766, whether the imaging object is in focus. When it is determined that the imaging object is in focus, the focus control ends.
When it is determined in S1768 that the imaging object is not in focus, the lens movement amount is calculated based on the defocus amount (S1770), and the processing moves to S1708 of the focus control processing 1700. Then, following the flowchart, the DFD processing 1750 and the movement of the focus lens are repeated until the in-focus state is reached. In the DFD processing 1750, every time a new image is captured, the processing of S1766 and S1768 is performed using the convolution results of the latest two images. Furthermore, in the DFD processing 1750, a reliability evaluation value of the DFD computation is calculated; when the reliability evaluation value is smaller than a preset value, the DFD-based focus control may be stopped and switched to another focus-control method such as contrast AF. The imaging control unit 110 may calculate the reliability evaluation value of the DFD computation based on the blur amount of the subject in the image, for example based on the blur amount of the image expressed by formula (1). The imaging control unit 110 may raise the reliability evaluation value as the blur amount becomes smaller. Moreover, even when the reliability evaluation value is small, the accuracy of the DFD computation can still be high if the DFD computation is performed over multiple images, even without changing the position of the focus lens. Therefore, when the reliability evaluation value is smaller than the preset value, the DFD-based focus control may also be continued by performing the DFD computation over multiple images without changing the position of the focus lens.
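Putting the loop of FIG. 17 together, the following is a minimal sketch of the repeated capture/DFD/lens-movement cycle (S1708 to S1770); capture(), dfd_defocus(), and move_lens() are hypothetical helpers, and the in-focus threshold is an illustrative assumption:

```python
# Minimal sketch of the focus-control loop: DFD over the latest two
# images, then either finish (in focus) or move the lens and repeat.
IN_FOCUS_THRESHOLD = 0.5  # in fδ; illustrative value

def focus_control_loop(capture, dfd_defocus, move_lens, initial_step):
    prev_image = capture()
    move_lens(initial_step)                           # preset movement (S1704/S1708)
    while True:
        cur_image = capture()
        defocus = dfd_defocus(prev_image, cur_image)  # S1766
        if abs(defocus) < IN_FOCUS_THRESHOLD:         # S1768
            return                                    # in focus; control ends
        move_lens(defocus)                            # movement from defocus (S1770)
        prev_image = cur_image
```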
In FIGS. 15 to 17, the method of quadrant-converting the image information of the quadrants other than the first quadrant to the first quadrant and performing the DFD computation using the quadrant-converted image information and the PSF data of the first quadrant has been described. However, it is also possible to quadrant-convert the PSF data of the first quadrant to the second quadrant and perform the DFD computation using the quadrant-converted PSF data and the image information of the second quadrant. Likewise, the PSF data of the first quadrant may be quadrant-converted to the third quadrant and the DFD computation performed using the quadrant-converted PSF data and the image information of the third quadrant; and the PSF data of the first quadrant may be quadrant-converted to the fourth quadrant and the DFD computation performed using the quadrant-converted PSF data and the image information of the fourth quadrant.
As described above, according to the imaging device 100, focus control can be performed for the entire image region using the PSF data corresponding to a specific quadrant. Therefore, the storage capacity for PSF data can be reduced. Moreover, the accuracy of the DFD computation can be maintained, and focus control can be performed for the entire image region.
The above-described imaging device 100 may be mounted on a mobile body. The imaging device 100 may be mounted on an unmanned aerial vehicle (UAV) as shown in FIG. 18. The UAV 10 may include a UAV body 20, a gimbal 50, a plurality of imaging devices 60, and the imaging device 100. The gimbal 50 and the imaging device 100 are an example of an imaging system. The UAV 10 is an example of a mobile body propelled by a propulsion unit. The concept of a mobile body includes, in addition to UAVs, flying bodies such as aircraft moving in the air, vehicles moving on the ground, ships moving on the water, and the like.
The UAV body 20 includes a plurality of rotors. The plurality of rotors are an example of the propulsion unit. The UAV body 20 makes the UAV 10 fly by controlling the rotation of the plurality of rotors. The UAV body 20 uses, for example, four rotors to make the UAV 10 fly. The number of rotors is not limited to four. The UAV 10 may also be a fixed-wing aircraft without rotors.
The imaging device 100 is an imaging camera that images a subject included in a desired imaging range. The gimbal 50 rotatably supports the imaging device 100. The gimbal 50 is an example of a support mechanism. For example, the gimbal 50 supports the imaging device 100 rotatably about the pitch axis using an actuator. The gimbal 50 further supports the imaging device 100 rotatably about each of the roll axis and the yaw axis using actuators. The gimbal 50 can change the attitude of the imaging device 100 by rotating the imaging device 100 about at least one of the yaw axis, the pitch axis, and the roll axis.
The plurality of imaging devices 60 are sensing cameras that image the surroundings of the UAV 10 in order to control the flight of the UAV 10. Two imaging devices 60 may be provided on the nose, i.e., the front, of the UAV 10. The other two imaging devices 60 may be provided on the bottom surface of the UAV 10. The two imaging devices 60 on the front side may be paired to function as a so-called stereo camera. The two imaging devices 60 on the bottom side may also be paired to function as a stereo camera. Three-dimensional spatial data of the surroundings of the UAV 10 can be generated based on the images captured by the plurality of imaging devices 60. The number of imaging devices 60 included in the UAV 10 is not limited to four; it suffices for the UAV 10 to include at least one imaging device 60. The UAV 10 may include at least one imaging device 60 on each of the nose, tail, sides, bottom surface, and top surface of the UAV 10. The angle of view settable in the imaging devices 60 may be larger than the angle of view settable in the imaging device 100. The imaging devices 60 may have a fixed-focus lens or a fisheye lens.
The remote operation device 300 communicates with the UAV 10 to remotely operate the UAV 10. The remote operation device 300 may communicate wirelessly with the UAV 10. The remote operation device 300 transmits to the UAV 10 instruction information indicating various commands related to the movement of the UAV 10, such as ascending, descending, accelerating, decelerating, moving forward, moving backward, and rotating. The instruction information includes, for example, instruction information for raising the altitude of the UAV 10. The instruction information may indicate the altitude at which the UAV 10 should be located. The UAV 10 moves so as to be located at the altitude indicated by the instruction information received from the remote operation device 300. The instruction information may include an ascend command to raise the UAV 10. The UAV 10 ascends while receiving the ascend command. When the altitude of the UAV 10 has reached the upper-limit altitude, the ascent of the UAV 10 can be restricted even if the ascend command is accepted.
FIG. 19 shows an example of a computer 1200 that may embody aspects of the present invention in whole or in part. A program installed on the computer 1200 can cause the computer 1200 to function as the operations associated with the apparatus according to the embodiment of the present invention, or as one or more "parts" of that apparatus. For example, a program installed on the computer 1200 can cause the computer 1200 to function as the imaging control unit 110. Alternatively, the program can cause the computer 1200 to execute the relevant operations or the functions of the relevant one or more "parts". The program can cause the computer 1200 to execute the process, or the stages of the process, according to the embodiment of the present invention. Such a program may be executed by the CPU 1212 to cause the computer 1200 to execute the specified operations associated with some or all of the blocks in the flowcharts and block diagrams described in this specification.
The computer 1200 of the present embodiment includes a CPU 1212 and a RAM 1214, which are connected to each other through a host controller 1210. The computer 1200 further includes a communication interface 1222 and an input/output unit, which are connected to the host controller 1210 through an input/output controller 1220. The computer 1200 also includes a ROM 1230. The CPU 1212 operates in accordance with the programs stored in the ROM 1230 and the RAM 1214, thereby controlling each unit.
The communication interface 1222 communicates with other electronic devices through a network. A hard disk drive may store the programs and data used by the CPU 1212 in the computer 1200. The ROM 1230 stores therein a boot program or the like executed by the computer 1200 at startup, and/or programs dependent on the hardware of the computer 1200. The programs are provided through a computer-readable recording medium such as a CD-ROM, USB memory, or IC card, or through a network. The programs are installed in the RAM 1214 or the ROM 1230, which are also examples of computer-readable recording media, and are executed by the CPU 1212. The information processing described in these programs is read by the computer 1200 and brings about cooperation between the programs and the above-described various types of hardware resources. An apparatus or method may be constituted by realizing the operation or processing of information in accordance with the use of the computer 1200.
For example, when communication is executed between the computer 1200 and an external device, the CPU 1212 may execute the communication program loaded in the RAM 1214 and, based on the processing described in the communication program, instruct the communication interface 1222 to perform communication processing. Under the control of the CPU 1212, the communication interface 1222 reads the transmission data stored in a transmission buffer provided in a recording medium such as the RAM 1214 or a USB memory and sends the read transmission data to the network, or writes the reception data received from the network into a reception buffer or the like provided in the recording medium.
In addition, the CPU 1212 can cause the RAM 1214 to read all or the necessary parts of files or databases stored in an external recording medium such as a USB memory, and perform various types of processing on the data in the RAM 1214. The CPU 1212 can then write the processed data back to the external recording medium.
Various types of information, such as various types of programs, data, tables, and databases, may be stored in a recording medium and subjected to information processing. On the data read from the RAM 1214, the CPU 1212 can perform the various types of processing described in various places in this disclosure and specified by the instruction sequences of the programs, including various types of operations, information processing, condition determination, conditional branching, unconditional branching, information retrieval/replacement, and the like, and write the results back to the RAM 1214. In addition, the CPU 1212 can retrieve information in files, databases, and the like in the recording medium. For example, when a plurality of entries each having an attribute value of a first attribute associated with an attribute value of a second attribute are stored in the recording medium, the CPU 1212 can retrieve from the plurality of entries an entry matching the condition specifying the attribute value of the first attribute, read the attribute value of the second attribute stored in that entry, and thereby obtain the attribute value of the second attribute associated with the first attribute satisfying the predetermined condition.
The programs or software modules described above may be stored on the computer 1200 or on a computer-readable storage medium near the computer 1200. In addition, a recording medium such as a hard disk or RAM provided in a server system connected to a dedicated communication network or the Internet can be used as a computer-readable storage medium, so that the programs can be provided to the computer 1200 via the network.
The present invention has been described above using embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It will be apparent to those of ordinary skill in the art that various changes or improvements can be made to the above embodiments. It is apparent from the description of the claims that forms to which such changes or improvements are applied can be included in the technical scope of the present invention.
It should be noted that the order of execution of each process, such as the operations, procedures, steps, and stages, in the devices, systems, programs, and methods shown in the claims, the description, and the drawings may be realized in any order, as long as "before", "prior to", or the like is not explicitly indicated and as long as the output of a preceding process is not used in a subsequent process. Even if the operation flows in the claims, the description, and the drawings are described using "first", "next", and the like for convenience, it does not mean that they must be implemented in this order.

Claims (13)

  1. An apparatus, characterized by comprising a circuit, the circuit being configured to: acquire distance information of an imaging object based on a plurality of image data obtained by imaging at different focal positions of an optical system included in an imaging device and on blur characteristic information of the optical system,
    wherein the circuit is configured to: store the blur characteristic information of the optical system corresponding to a specific quadrant of an imaging surface of the imaging device; and, when acquiring distance information of the imaging object corresponding to another quadrant, perform a quadrant conversion process on one of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data, and acquire the distance information of the imaging object corresponding to the other quadrant based on the information obtained by the conversion process and on the other of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant.
  2. The apparatus according to claim 1, characterized in that the circuit is configured to: when acquiring the distance information of the imaging object corresponding to the other quadrant, perform a conversion process to the specific quadrant on the image information corresponding to the other quadrant in each of the plurality of image data, and acquire the distance information of the imaging object corresponding to the other quadrant based on the image information obtained by the conversion process to the specific quadrant and the blur characteristic information of the optical system in the specific quadrant.
  3. The apparatus according to claim 1, characterized in that the circuit is configured to: when acquiring the distance information of the imaging object corresponding to the other quadrant, perform a conversion process to the other quadrant on the blur characteristic information of the optical system in the specific quadrant, and acquire the distance information of the imaging object corresponding to the other quadrant based on the blur characteristic information obtained by the conversion process to the other quadrant and the image information corresponding to the other quadrant.
  4. The apparatus according to any one of claims 1 to 3, characterized in that the blur characteristic information of the optical system is a point spread function,
    and the circuit is configured to: store point spread functions of the optical system at a plurality of points included in the specific quadrant.
  5. The apparatus according to claim 1 or 2, characterized in that the specific quadrant includes the first quadrant, i.e., the region in which, among four quadrants defined by mutually perpendicular first and second coordinate axes with the point corresponding to the optical axis of the optical system as the origin, the coordinate value on the first coordinate axis and the coordinate value on the second coordinate axis are positive,
    and the circuit is configured to:
    perform the conversion process to the first quadrant by applying a line-symmetric transformation about the second coordinate axis to the image information corresponding to the second quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is positive;
    perform the conversion process to the first quadrant by applying a point-symmetric transformation about the origin to the image information corresponding to the third quadrant, in which the coordinate value on the first coordinate axis is negative and the coordinate value on the second coordinate axis is negative; and
    perform the conversion process to the first quadrant by applying a line-symmetric transformation about the first coordinate axis to the image information corresponding to the fourth quadrant, in which the coordinate value on the first coordinate axis is positive and the coordinate value on the second coordinate axis is negative.
  6. The apparatus according to claim 5, characterized in that the blur characteristic information of the optical system is a point spread function,
    and the circuit is configured to: store point spread functions of the optical system at one or more points within the first quadrant, at one or more points on the first coordinate axis, and at one or more points on the second coordinate axis.
  7. The apparatus according to claim 5, characterized in that the circuit is configured to:
    store the blur characteristic information of the optical system in each of a first region and a second region within the first quadrant;
    acquire distance information of the imaging object between the first region and the second region based on the image information of the first quadrant and on blur characteristic information generated by performing interpolation processing on the blur characteristic information of the optical system in the first region and the blur characteristic information of the optical system in the second region;
    acquire distance information of the imaging object corresponding to the region between the first region and the second region in the second quadrant based on image information obtained by applying a line-symmetric transformation about the second coordinate axis to the image information of the second quadrant and on blur characteristic information generated by performing interpolation processing on the blur characteristic information of the optical system in the first region and the blur characteristic information of the optical system in the second region;
    acquire distance information of the imaging object corresponding to the region between the first region and the second region in the third quadrant based on image information obtained by applying a point-symmetric transformation about the origin to the image information of the third quadrant and on blur characteristic information generated by performing interpolation processing on the blur characteristic information of the optical system in the first region and the blur characteristic information of the optical system in the second region; and
    acquire distance information of the imaging object corresponding to the region between the first region and the second region in the fourth quadrant based on image information obtained by applying a line-symmetric transformation about the first coordinate axis to the image information of the fourth quadrant and on blur characteristic information generated by performing interpolation processing on the blur characteristic information of the optical system in the first region and the blur characteristic information of the optical system in the second region.
  8. The apparatus according to any one of claims 1 to 3, characterized in that the circuit is configured to: perform focus adjustment of the optical system based on the distance information of the imaging object.
  9. An imaging device, characterized by comprising: the apparatus according to any one of claims 1 to 3; and
    an image sensor having the imaging surface.
  10. An imaging system, characterized by comprising: the imaging device according to claim 9; and
    a support mechanism that supports the imaging device such that the attitude of the imaging device can be controlled.
  11. A mobile body, characterized in that it moves with the imaging device according to claim 9 mounted thereon.
  12. A method, characterized by comprising a stage of acquiring distance information of an imaging object based on a plurality of image data obtained by imaging at different focal positions of an optical system included in an imaging device and on blur characteristic information of the optical system,
    wherein the stage of acquiring the distance information of the imaging object includes the following stages: storing the blur characteristic information of the optical system corresponding to a specific quadrant of an imaging surface of the imaging device;
    when acquiring distance information of the imaging object corresponding to another quadrant, performing a quadrant conversion process on one of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant in each of the plurality of image data; and
    acquiring the distance information of the imaging object corresponding to the other quadrant based on the information obtained by the conversion process and on the other of the blur characteristic information of the optical system in the specific quadrant and the image information corresponding to the other quadrant.
  13. A program, characterized in that it is a program that causes a computer to execute the method according to claim 12.
PCT/CN2021/083913 2020-04-07 2021-03-30 Device, imaging device, imaging system, mobile body, method, and program WO2021204020A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-068865 2020-04-07
JP2020068865A JP7019895B2 (ja) Device, imaging device, imaging system, mobile body, method, and program

Publications (1)

Publication Number Publication Date
WO2021204020A1 true WO2021204020A1 (zh) 2021-10-14

Family

ID=78022078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083913 WO2021204020A1 (zh) Device, imaging device, imaging system, mobile body, method, and program

Country Status (2)

Country Link
JP (1) JP7019895B2 (zh)
WO (1) WO2021204020A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101040332A (zh) * 2004-10-15 2007-09-19 Koninklijke Philips Electronics N.V. Multi-dimensional optical scanner
CN102472619A (zh) * 2010-06-15 2012-05-23 Panasonic Corporation Imaging device and imaging method
CN108235815A (zh) * 2017-04-07 2018-06-29 SZ DJI Technology Co., Ltd. Imaging control device, imaging device, imaging system, mobile body, imaging control method, and program
CN109923854A (zh) * 2016-11-08 2019-06-21 Sony Corporation Image processing device, image processing method, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4915166B2 (ja) 2006-08-04 2012-04-11 NEC Corporation Blur filter design method
JP6071860B2 (ja) 2013-12-09 2017-02-01 Canon Inc. Image processing method, image processing apparatus, imaging apparatus, and image processing program
JP2015204566A (ja) 2014-04-15 2015-11-16 Toshiba Corporation Camera system
WO2018070017A1 (ja) 2016-10-13 2018-04-19 SZ DJI Technology Co., Ltd. Optical system and mobile body
JP7204387B2 (ja) 2018-09-14 2023-01-16 Canon Inc. Image processing apparatus and control method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101040332A (zh) * 2004-10-15 2007-09-19 Koninklijke Philips Electronics N.V. Multi-dimensional optical scanner
CN102472619A (zh) * 2010-06-15 2012-05-23 Panasonic Corporation Imaging device and imaging method
CN109923854A (zh) * 2016-11-08 2019-06-21 Sony Corporation Image processing device, image processing method, and program
CN108235815A (zh) * 2017-04-07 2018-06-29 SZ DJI Technology Co., Ltd. Imaging control device, imaging device, imaging system, mobile body, imaging control method, and program

Also Published As

Publication number Publication date
JP7019895B2 (ja) 2022-02-16
JP2021166340A (ja) 2021-10-14

Similar Documents

Publication Publication Date Title
CN108235815B (zh) Imaging control device, imaging device, imaging system, mobile body, imaging control method, and medium
WO2020011230A1 (zh) Control device, mobile body, control method, and program
JP6733106B2 (ja) Determination device, mobile body, determination method, and program
JP2019110462A (ja) Control device, system, control method, and program
WO2021013143A1 (zh) Device, imaging device, mobile body, method, and program
WO2021204020A1 (zh) Device, imaging device, imaging system, mobile body, method, and program
JP6503607B2 (ja) Imaging control device, imaging device, imaging system, mobile body, imaging control method, and program
WO2020216037A1 (zh) Control device, imaging device, mobile body, control method, and program
WO2021031833A1 (zh) Control device, imaging system, control method, and program
WO2019061887A1 (zh) Control device, imaging device, flying body, control method, and program
CN111357271B (zh) Control device, mobile body, control method
WO2020108284A1 (zh) Determination device, mobile body, determination method, and program
JP2021129141A (ja) Control device, imaging device, control method, and program
WO2020107487A1 (zh) Image processing method and unmanned aerial vehicle
WO2020156085A1 (zh) Image processing device, imaging device, unmanned aerial vehicle, image processing method, and program
JP6569157B1 (ja) Control device, imaging device, mobile body, control method, and program
WO2020244440A1 (zh) Control device, imaging device, imaging system, control method, and program
JP2020016703A (ja) Control device, mobile body, control method, and program
WO2021249245A1 (zh) Device, imaging device, imaging system, and mobile body
JP7043706B2 (ja) Control device, imaging system, control method, and program
WO2021052216A1 (zh) Control device, imaging device, control method, and program
WO2021143425A1 (zh) Control device, imaging device, mobile body, control method, and program
WO2020063770A1 (zh) Control device, imaging device, mobile body, control method, and program
JP2019008060A (ja) Determination device, imaging device, imaging system, mobile body, determination method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21785569

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21785569

Country of ref document: EP

Kind code of ref document: A1