CN107409205B - Apparatus and method for focus adjustment and depth map determination - Google Patents

Apparatus and method for focus adjustment and depth map determination

Info

Publication number
CN107409205B
CN107409205B (application CN201580077679.7A)
Authority
CN
China
Prior art keywords
interest
scene
imaging
energy function
disparity
Prior art date
Legal status
Expired - Fee Related
Application number
CN201580077679.7A
Other languages
Chinese (zh)
Other versions
CN107409205A (en)
Inventor
朱振宇
赵丛
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Priority to CN202010110338.8A (published as CN111371986A)
Publication of CN107409205A
Application granted
Publication of CN107409205B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/246 Calibration of cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/958 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications
    • B64U2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

Apparatus for focus adjustment and depth map determination and methods of implementing and using the same are provided. The focus adjustment apparatus includes a distance component configured to determine a distance between an object of interest and an imaging mechanism for imaging the object of interest. The focus adjustment apparatus may further include a focus assembly configured to automatically adjust a focus of the imaging mechanism according to the determined distance. The focus adjustment device advantageously enables automatic focus adjustment at a lower cost than solutions using laser cameras.

Description

Apparatus and method for focus adjustment and depth map determination
Technical Field
Embodiments disclosed herein relate generally to digital imaging and more particularly, but not by way of limitation, to apparatus and methods for automatically adjusting focus and/or determining a depth map for an image.
Background
Stereoscopic imaging is becoming increasingly common in many fields, whereby a three-dimensional image is formed by stereoscopic vision using a plurality of imaging devices. Stereoscopic imaging is particularly useful in robotics, where it is often desirable to gather three-dimensional information about the operating environment of a machine. Stereoscopic imaging simulates binocular vision of the human eye and applies the principles of stereoscopic vision to achieve depth perception. This technique can be reproduced by a man-made imaging device by using multiple imaging devices to view a given object of interest from slightly different vantage points. The differences between the different views of the object of interest convey depth information about the position of the object, thereby supporting three-dimensional imaging of the object.
During the shooting of a video or a television show, focus pulling is a difficult and technical task. Mainstream follow-focus devices require an experienced focus puller to adjust the focus of the camera in real time according to the monitor screen and the shooting scene.
For some imaging applications, manual focus adjustment is cumbersome and may not be practical in the case of remotely operating the imaging device. Accordingly, it is desirable for a focus adjustment system to automatically adjust the focus to track an object of interest that is moving.
Disclosure of Invention
According to a first aspect disclosed herein, a method for automatic focus adjustment is set forth, comprising:
determining a distance between the object of interest and the imaging mechanism; and
automatically adjusting the focal length of the imaging mechanism according to the determined distance.
In an exemplary embodiment of the disclosed method, the determining comprises determining a distance between an object of interest in the scene and the imaging mechanism.
In an exemplary embodiment of the disclosed method, the determining comprises imaging the scene with a first imaging device and a second imaging device comprised in the imaging mechanism.
In an exemplary embodiment of the disclosed method, the determining further comprises:
obtaining a depth map of the scene;
selecting an object of interest in the scene; and
calculating a distance of the object of interest from the depth map of the scene.
In an exemplary embodiment of the disclosed method, the obtaining comprises calculating a disparity of the scene images from the first and second imaging devices.
In an exemplary embodiment of the disclosed method, said calculating the disparity comprises optimizing a global energy function.
In an exemplary embodiment of the disclosed method, the optimizing the global energy function comprises summing a disparity energy function and a scaling smoothing term.
In an exemplary embodiment of the disclosed method, the disparity energy function is represented by a Birchfield-Tomasi term.
In an exemplary embodiment of the disclosed method, the Birchfield-Tomasi term is defined by accumulating a minimum disparity of pixel coordinates in a first image of the scene captured by the first imaging device and a second image of the scene captured by the second imaging device.
Exemplary embodiments of the disclosed method further comprise: accumulating a scaling trigger function of disparity between two neighboring pixels for all neighbors of a pixel to obtain the smoothing term.
In an exemplary embodiment of the disclosed method, the accumulating comprises: the scaling trigger function of the disparity between two adjacent pixels from the four-neighborhood is accumulated for all neighbors of the pixel.
In an exemplary embodiment of the disclosed method, the smoothing term is obtained by accumulating a scaling trigger function of the disparities of all neighbors of each pixel.
Exemplary embodiments of the disclosed method further comprise:
aggregating data items in a plurality of directions to obtain an energy function for each of the directions; and
accumulating the energy functions in the directions to obtain the energy function.
In an exemplary embodiment of the disclosed method, the aggregating comprises obtaining an energy function in a predetermined number of directions.
In exemplary embodiments of the disclosed method, the aggregating comprises obtaining an energy function in four directions or eight directions.
In an exemplary embodiment of the disclosed method, the aggregating the data items comprises: the energy function in a direction is obtained by summing the corresponding smoothing term with the dynamic programming in that direction.
In an exemplary embodiment of the disclosed method, said summing a corresponding smoothing term with the dynamic programming in said direction comprises: representing the dynamic programming in the direction based on a recursion of the energy function of its neighbors in the direction.
In an exemplary embodiment of the disclosed method, the direction comprises a horizontal direction.
In an exemplary embodiment of the disclosed method, the aggregating data items in the horizontal direction comprises: the energy is calculated by recursion based on the energy function of its neighbors in the horizontal direction.
Exemplary embodiments of the disclosed method further include obtaining an optimal depth.
In an exemplary embodiment of the disclosed method, said obtaining the optimal depth comprises seeking a disparity value that minimizes a sum of energies in a plurality of directions.
In an exemplary embodiment of the disclosed method, said obtaining the optimal depth comprises seeking the disparity value based on an energy function in one direction.
Exemplary embodiments of the disclosed method further comprise reducing noise by performing at least one of: matching scene images from the first and second imaging devices while setting the disparity to -1, and identifying respective distinctiveness of the scene images.
Exemplary embodiments of the disclosed method further comprise compensating for the error based on at least factors selected from the group consisting of: a distance between centerlines of the two imaging devices, an actual distance of two adjacent pixels, focal lengths of the two imaging devices, and a depth between the object of interest and the first and second imaging devices.
Exemplary embodiments of the disclosed method further comprise optimizing the depth map by using a non-local optimization equation.
Exemplary embodiments of the disclosed method further comprise obtaining Jacobi iterations of the non-local optimization equations by using recursive filtering.
In an exemplary embodiment of the disclosed method, said selecting an object of interest in said scene comprises receiving an external instruction to select said object of interest.
In an exemplary embodiment of the disclosed method, the receiving instructions includes identifying a selected object of interest on either of the images of the scene from the first imaging device or the second imaging device.
In an exemplary embodiment of the disclosed method, said identifying the selected object of interest comprises: sensing a box framing the object of interest on any of the scene images or sensing a click on the object of interest on any of the scene images.
In an exemplary embodiment of the disclosed method, said receiving an external instruction comprises receiving a voice instruction (optionally a preset name of said object of interest) to determine said object of interest.
In an exemplary embodiment of the disclosed method, said selecting an object of interest in said scene comprises: a determination is made under at least one preset rule and the object of interest is automatically determined based on the determination.
In an exemplary embodiment of the disclosed method, said determining under at least one preset rule comprises determining whether said object is approaching said imaging mechanism or is within a certain distance of said imaging mechanism.
In an exemplary embodiment of the disclosed method, the automatically adjusting the focus comprises automatically adjusting the focus of the imaging mechanism in real time with tracking-learning-detection (TLD) based on gray level information of the object of interest.
According to another aspect disclosed herein, a stereoscopic imaging system is set forth that is configured for performing automatic focus adjustment according to any of the above methods.
According to another aspect disclosed herein, there is set forth a focus adjustment apparatus comprising:
a distance component for determining a distance between an object of interest in a scene and an imaging mechanism for imaging the scene; and
a focus component for automatically adjusting a focus of the imaging mechanism according to the determined distance.
In an exemplary embodiment of the disclosed apparatus, the distance component is configured to determine a distance between an object of interest in a scene and the imaging mechanism.
In an exemplary embodiment of the disclosed apparatus, the imaging mechanism includes first and second imaging devices that image the scene to obtain first and second scene images.
In an exemplary embodiment of the disclosed apparatus, either one of the first and second imaging devices is a camera or a sensor.
In an exemplary embodiment of the disclosed apparatus, the first and second imaging devices are selected from the group consisting of: laser cameras, infrared cameras, ultrasound cameras, and time-of-flight cameras.
In an exemplary embodiment of the disclosed apparatus, the first and second imaging devices are red-green-blue (RGB) cameras.
In an exemplary embodiment of the disclosed apparatus, the distance assembly comprises:
a depth estimation mechanism for obtaining a depth map of the scene;
an object determination mechanism for determining an object of interest in the scene; and
a computing mechanism for computing a distance of the object of interest from a depth map of the scene.
In an exemplary embodiment of the disclosed apparatus, the depth map is obtained based on a disparity of the first scene image and the second scene image.
In an exemplary embodiment of the disclosed apparatus, the depth estimation mechanism optimizes a global energy function.
In an exemplary embodiment of the disclosed apparatus, the global energy function is defined as a sum of a disparity energy function and a scaling smoothing term.
In an exemplary embodiment of the disclosed apparatus, the disparity energy function comprises a Birchfield-Tomasi data item.
In an exemplary embodiment of the disclosed apparatus, the Birchfield-Tomasi data item is defined based on a minimum disparity of pixel coordinates in a first image of the scene captured by the first imaging device and a second image of the scene captured by the second imaging device.
In an exemplary embodiment of the disclosed apparatus, the smoothing term employs an energy function of the disparity differences.
In an exemplary embodiment of the disclosed apparatus, the smoothing term is a sum of scaling trigger functions of the disparity between two neighboring pixels for all neighbors of a pixel having coordinates (x, y).
In an exemplary embodiment of the disclosed apparatus, the neighbors are pixels from the four-neighborhood.
In an exemplary embodiment of the disclosed apparatus, the smoothing term is defined based on a scaling trigger function of the disparities of all neighbors of each pixel.
In an exemplary embodiment of the disclosed apparatus, the global energy function is optimized by:
aggregating data items in a plurality of directions to obtain a directional energy function for each of the directions; and
accumulating the directional energy functions in the directions to obtain the energy function.
In an exemplary embodiment of the disclosed apparatus, the directions comprise a predetermined number of directions.
In exemplary embodiments of the disclosed apparatus, the predetermined number of directions includes four directions or eight directions.
In an exemplary embodiment of the disclosed apparatus, the energy function in one direction is based on a dynamic programming in that direction.
In an exemplary embodiment of the disclosed apparatus, the energy function in one direction is obtained by summing a corresponding smoothing term with the dynamic programming in that direction.
In an exemplary embodiment of the disclosed apparatus, the dynamic planning in the direction is based on a recursion of energy functions of its neighbors in the direction.
In exemplary embodiments of the disclosed apparatus, the direction comprises a horizontal direction.
In an exemplary embodiment of the disclosed apparatus, the energy function in the horizontal direction is obtained by a recursion based on said energy function of its neighbors in the horizontal direction.
In an exemplary embodiment of the disclosed apparatus, the optimal depth is obtained by seeking a disparity value that minimizes the sum of the energies in the multiple directions.
In an exemplary embodiment of the disclosed apparatus, the optimal depth is obtained based on an energy function in one direction.
In an exemplary embodiment of the disclosed apparatus, noise is reduced by matching the first and second scene images and/or identifying the uniqueness of each of the first and second scene images when the disparity is set to -1.
In an exemplary embodiment of the disclosed apparatus, the error is compensated based on at least factors selected from the group consisting of: a distance between centerlines of the two imaging devices, an actual distance of two adjacent pixels, focal lengths of the two imaging devices, and a depth between the object of interest and the first and second imaging devices.
In an exemplary embodiment of the disclosed apparatus, the depth map is optimized by using a non-local optimization equation.
An exemplary embodiment of the disclosed apparatus includes Jacobi iterations of the non-local optimization equations obtained by recursive filtering.
In an exemplary embodiment of the disclosed apparatus, the object determination mechanism receives an external instruction to determine the object of interest.
In an exemplary embodiment of the disclosed apparatus, the object determination mechanism enables identification of a selected object of interest on either of the first scene image and the second scene image.
In an exemplary embodiment of the disclosed apparatus, the object determination mechanism enables identification of the object of interest by at least one of: sensing a box on any one of the first and second scene images to frame the object of interest, and sensing a click on the object of interest on any one of the first and second scene images.
In an exemplary embodiment of the disclosed apparatus, the object determination mechanism receives an external voice command (optionally a preset name of the object of interest) to determine the object of interest.
In an exemplary embodiment of the disclosed apparatus, the object determination mechanism automatically determines the object of interest based on a determination under at least one preset rule.
In an exemplary embodiment of the disclosed apparatus, the preset rule includes: whether the object of interest is approaching the first imaging device and the second imaging device or is within a certain distance.
In an exemplary embodiment of the disclosed apparatus, the focus component automatically adjusts the focus of the imaging mechanism in real time with tracking-learning-detection (TLD) based on gray level information of the object of interest.
According to another aspect disclosed herein, a mobile platform is set forth that performs any of the methods according to the above.
According to another aspect disclosed herein, a mobile platform is set forth comprising any of the above-described apparatus.
According to another aspect disclosed herein, the mobile platform is an Unmanned Aerial Vehicle (UAV).
According to another aspect disclosed herein, the mobile platform is a self-stabilizing platform.
According to another aspect disclosed herein, a method for obtaining a depth map of a scene is set forth, comprising:
capturing a plurality of scene images of the scene; and
calculating a disparity of the plurality of scene images.
In an exemplary embodiment of the disclosed method, said capturing a plurality of scene images comprises capturing said plurality of scene images via a first imaging device and a second imaging device.
In an exemplary embodiment of the disclosed method, said calculating the disparity comprises optimizing a global energy function.
In an exemplary embodiment of the disclosed method, the optimizing the global energy function comprises summing a disparity energy function and a scaling smoothing term.
In an exemplary embodiment of the disclosed method, the disparity energy function is represented by a Birchfield-Tomasi term.
In an exemplary embodiment of the disclosed method, wherein the Birchfield-Tomasi term is defined by accumulating a minimum disparity of pixel coordinates in a first image of the scene captured by the first imaging device and a second image of the scene captured by the second imaging device.
Exemplary embodiments of the disclosed method further comprise: accumulating a scaling trigger function of disparity between two neighboring pixels to a selected pixel to obtain the smoothing term for all neighbors of the selected pixel.
In an exemplary embodiment of the disclosed method, the accumulating comprises: accumulating a scaling trigger function for disparity between two adjacent pixels from the four-neighborhood for all neighbors of the selected pixel.
In an exemplary embodiment of the disclosed method, the smoothing term is obtained by accumulating a scaling trigger function of the disparities of all neighbors of each pixel.
Exemplary embodiments of the disclosed method further comprise:
aggregating data items in a plurality of directions to obtain a directional energy function for each of the directions; and
accumulating the directional energy functions in the directions to obtain the energy function.
In an exemplary embodiment of the disclosed method, the aggregating comprises obtaining an energy function in a predetermined number of directions.
In exemplary embodiments of the disclosed method, the aggregating comprises obtaining an energy function in four directions or eight directions.
In an exemplary embodiment of the disclosed method, the aggregating the data items comprises: the energy function in a selected direction is obtained by summing the corresponding smoothing term with the dynamic programming in the selected direction.
In an exemplary embodiment of the disclosed method, said summing a corresponding smoothing term with the dynamic programming in the direction comprises: the dynamic programming in the direction is represented by a recursion based on the energy function of its neighbors in the direction.
In an exemplary embodiment of the disclosed method, the direction comprises a horizontal direction.
In an exemplary embodiment of the disclosed method, the aggregating data items in the horizontal direction comprises: the energy is calculated by recursion based on the energy function of its neighbors in the horizontal direction.
Exemplary embodiments of the disclosed method further include obtaining an optimal depth.
In an exemplary embodiment of the disclosed method, said obtaining the optimal depth comprises seeking a disparity value that minimizes a sum of energies in a plurality of directions.
In an exemplary embodiment of the disclosed method, said obtaining the optimal depth comprises seeking the disparity value based on an energy function in one direction.
Exemplary embodiments of the disclosed method further comprise reducing noise by performing at least one of: matching scene images from the first and second imaging devices while setting the disparity to -1, and identifying respective distinctiveness of the scene images.
Exemplary embodiments of the disclosed method further comprise compensating for the error based on at least factors selected from the group consisting of: a distance between centerlines of the two imaging devices, an actual distance of two adjacent pixels, focal lengths of the two imaging devices, and a depth between the object of interest and the first and second imaging devices.
Exemplary embodiments of the disclosed method further comprise optimizing the depth map by using a non-local optimization equation.
Exemplary embodiments of the disclosed method further comprise obtaining Jacobi iterations of the non-local optimization equations by using recursive filtering.
According to another aspect disclosed herein, an apparatus for obtaining a depth map of a scene is set forth, comprising:
an imaging system for capturing a plurality of images of a scene; and
a depth component for calculating a disparity of the plurality of scene images.
In an exemplary embodiment of the disclosed apparatus, the imaging system includes a first imaging device and a second imaging device.
In an exemplary embodiment of the disclosed apparatus, the depth component is configured to optimize a global energy function.
In an exemplary embodiment of the disclosed apparatus, the global energy function is defined as a sum of a disparity energy function and a scaling smoothing term.
In an exemplary embodiment of the disclosed apparatus, the disparity energy function comprises a Birchfield-Tomasi data item.
In an exemplary embodiment of the disclosed apparatus, the Birchfield-Tomasi data item is defined based on a minimum disparity of pixel coordinates in a first image of the scene captured by the first imaging device and a second image of the scene captured by the second imaging device.
In an exemplary embodiment of the disclosed apparatus, the smoothing term employs an energy function of the disparity differences.
In an exemplary embodiment of the disclosed apparatus, the smoothing term is a sum of scaling trigger functions of the disparity between two neighboring pixels for all neighbors of a selected pixel having coordinates (x, y).
In an exemplary embodiment of the disclosed apparatus, the neighbors are pixels from the four-neighborhood.
In an exemplary embodiment of the disclosed apparatus, the smoothing term is defined based on a scaling trigger function of the disparities of all neighbors of each pixel.
In an exemplary embodiment of the disclosed apparatus, the global energy function is optimized by:
aggregating data items in a plurality of directions to obtain an energy function for each of the directions; and
accumulating the energy functions in the directions to obtain the energy function.
In an exemplary embodiment of the disclosed apparatus, the directions comprise a predetermined number of directions.
In exemplary embodiments of the disclosed apparatus, the predetermined number of directions includes four directions or eight directions.
In an exemplary embodiment of the disclosed apparatus, the energy function in one direction is based on a dynamic programming in that direction.
In an exemplary embodiment of the disclosed apparatus, the energy function in one direction is obtained by summing a corresponding smoothing term with the dynamic programming in that direction.
In an exemplary embodiment of the disclosed apparatus, the dynamic planning in the direction is based on a recursion of energy functions of its neighbors in the direction.
In exemplary embodiments of the disclosed apparatus, the direction comprises a horizontal direction.
In an exemplary embodiment of the disclosed apparatus, the energy function in the horizontal direction is obtained by a recursion based on said energy function of its neighbors in the horizontal direction.
In an exemplary embodiment of the disclosed apparatus, the depth component is configured to obtain the optimal depth by seeking a disparity value that minimizes a sum of energies in a plurality of directions.
In an exemplary embodiment of the disclosed apparatus, the optimal depth is obtained based on an energy function in one direction.
In an exemplary embodiment of the disclosed apparatus, the depth component is configured to reduce noise by matching the plurality of images and/or identifying a uniqueness of each of the plurality of images when the disparity is set to -1.
In an exemplary embodiment of the disclosed apparatus, the depth component is configured to compensate for the error based on at least factors selected from the group consisting of: a distance between centerlines of the two imaging devices, an actual distance of two adjacent pixels, focal lengths of the two imaging devices, and a depth between the object of interest and the first and second imaging devices.
In an exemplary embodiment of the disclosed apparatus, the depth map is optimized by using a non-local optimization equation.
In an exemplary embodiment of the disclosed apparatus, the depth component is configured to obtain Jacobi iterations of the non-local optimization equations by recursive filtering.
Drawings
Fig. 1 is an exemplary top-level block diagram illustrating an embodiment of a focus adjustment apparatus and an imaging mechanism having a first imaging device and a second imaging device.
Fig. 2 and 3 show examples of two images of a scene obtained by the first and second imaging devices of fig. 1.
Fig. 4 is an exemplary block diagram illustrating an embodiment of a first imaging device and a second imaging device.
Fig. 5 is an exemplary diagram illustrating an alternative embodiment of the first and second imaging devices of fig. 1, wherein the first and second imaging devices are mounted on an Unmanned Aerial Vehicle (UAV).
Fig. 6 schematically illustrates a process of calculating a distance between an object of interest and the first and second imaging devices of fig. 1 via triangulation.
FIG. 7 is an exemplary top-level block diagram illustrating an embodiment of a system for adjusting focus, wherein the system includes a distance component.
Fig. 8 is an exemplary depth map obtained by the focus adjustment apparatus of fig. 1.
FIG. 9 is an exemplary top-level flow chart illustrating an embodiment of a method for focus adjustment.
FIG. 10 is an exemplary flow chart illustrating a process of determining a distance between an imaging mechanism and an object of interest.
Fig. 11 to 13 are exemplary diagrams for illustrating an error of the focus adjusting apparatus of fig. 1 and an effect of compensating for the error.
It should be noted that the figures are not drawn to scale and that elements having similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It should also be noted that the figures are only intended to help describe preferred embodiments. The drawings do not illustrate every aspect of the described embodiments and do not limit the scope of the disclosure.
Detailed Description
Because currently available focus adjustment systems are not capable of providing automatic focus adjustment for imaging systems, focus adjustment apparatus and methods are provided herein for automatically adjusting focus and serve as the basis for a wide range of applications, such as applications on Unmanned Aerial Vehicles (UAVs) and other mobile platforms. According to one embodiment disclosed herein, this result may be achieved by a focus adjustment device 110 as illustrated in fig. 1.
Fig. 1 depicts an illustrative embodiment of a focus adjustment device 110. As shown in fig. 1, the focus adjustment device 110 may be coupled with the imaging mechanism 130. The imaging mechanism 130 may generate one or more images of the scene 100 in which the object of interest 120 is located in the scene 100.
Examples of images 199 of the scene 100 obtained by the imaging mechanism 130 are shown in fig. 2 and 3. The generated image of the scene 100 may be processed by the focus adjustment device 110 to generate a signal for adjusting the focus of the imaging mechanism 130. The focal length of the imaging mechanism 130 may be adjusted in any suitable manner, preferably in real time.
Although the imaging mechanism 130 is shown and described with reference to fig. 1 as including two imaging devices 131, 132 for illustrative purposes only, the imaging mechanism 130 may include any number of imaging devices 133. For example, the imaging mechanism 130 may have 2, 3, 4, 5, 6, or even a greater number of imaging devices. For imaging mechanisms 130 having more than two imaging devices, the automatic focus adjustment illustrated herein may be applicable to any imaging device pair.
The imaging devices 131, 132 of fig. 1 may be arranged in any desired manner in the imaging mechanism 130. The specific arrangement of the imaging devices 131, 132 may depend on the imaging application concerned. In some embodiments, for example, the imaging devices 131, 132 may be positioned side-by-side such that the imaging devices 131, 132 have parallel optical axes. In other embodiments, the imaging devices 131, 132 may be positioned such that the optical axes of the imaging devices 131, 132 are not parallel.
Each of the imaging devices 131, 132 may sense light and convert the sensed light into an electronic signal, which may ultimately be presented as an image. Exemplary imaging devices 131, 132 suitable for use with the focal length adjustment apparatus 110 include, but are not limited to, commercially available cameras (color cameras and/or monochrome cameras) and camcorders. Suitable imaging devices 131, 132 may include analog imaging devices (e.g., camera tubes) and/or digital imaging devices (e.g., Charge Coupled Device (CCD), Complementary Metal Oxide Semiconductor (CMOS), and N-type Metal Oxide Semiconductor (NMOS) imaging devices, and hybrids/variations thereof). For example, a digital imaging device may include a two-dimensional array of photosensitive elements (not shown), each of which may capture image information for one pixel. Either of the imaging devices 131, 132 may be, for example, a photosensor, a thermal/infrared sensor, a color or monochrome sensor, a multispectral imaging sensor, a spectrophotometer, a spectrometer, a thermometer, and/or a luminometer. Further, either of the imaging devices 131, 132 may be, for example, a red-green-blue (RGB) camera, a laser camera, an infrared camera, an ultrasonic camera, or a time-of-flight camera. The imaging devices 131, 132 may be of different types or, alternatively, of the same type. Similarly, the focal lengths of the imaging devices 131, 132 may be the same and/or different without limiting the scope of the present disclosure.
An exemplary first imaging device 131 and second imaging device 132 are shown in fig. 4. The distance D between the first imaging device 131 and the second imaging device 132 may be adjustable according to an object distance Z (shown in fig. 6) between the imaging devices 131, 132 and the object of interest 120. In an embodiment, the first and second imaging devices 131 and 132 may be mounted on a portable pan-tilt head ("cradle head") 150. Once the object of interest 120 is determined, the focal lengths of the first and second imaging devices 131, 132 may be automatically adjusted based on the object distance Z. By adjusting the focal length of the imaging devices 131, 132, the object of interest 120 may become clearly visible.
In some embodiments, the focal length adjustment device 110 (shown in fig. 1) may be physically located adjacent to the imaging mechanism 130 (shown in fig. 1), in which case data between the focal length adjustment device 110 and the imaging mechanism 130 may be communicated locally. The advantage of local communication is that transmission delays can be reduced to facilitate real-time focus adjustment, image processing, and parameter calibration. In other embodiments, the focal length adjustment device 110 may be located remotely from the imaging mechanism 130. Remote processing may be preferred due to, for example, weight limitations or other reasons related to the operating environment of the focus adjustment device 110. By way of non-limiting example, if the imaging mechanism 130 is mounted on a mobile platform such as an Unmanned Aerial Vehicle (UAV) (shown in fig. 5), it may be desirable to transmit the imaging data to a remote terminal (not shown), such as a ground terminal or base station, for centralized processing. Centralized processing may be desirable, for example, where multiple unmanned aerial vehicles are imaging a given object of interest 120 in a coordinated manner. Fig. 5 illustrates an exemplary embodiment of the focus adjustment apparatus 110, with the imaging devices 131, 132 mounted on the unmanned aerial vehicle 400.
Although the mobile platform is shown and described in fig. 5 as being an unmanned aerial vehicle 400 for exemplary purposes only, the mobile platform may be any kind of such platform, including but not limited to any self-stabilizing mobile platform.
Various communication methods may be used for remote communication between the imaging mechanism 130 and the focus adjustment device 110. Suitable communication methods include, for example, radio, wireless fidelity (Wi-Fi), cellular, satellite, and broadcast. Exemplary wireless communication technologies include, but are not limited to, Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband CDMA (W-CDMA), CDMA2000, IMT Single Carrier, Enhanced Data rates for GSM Evolution (EDGE), Long Term Evolution (LTE), LTE-Advanced, Time-Division LTE (TD-LTE), High Performance Radio Local Area Network (HiperLAN), High Performance Radio Wide Area Network (HiperWAN), High Performance Radio Metropolitan Area Network (HiperMAN), Local Multipoint Distribution Service (LMDS), Worldwide Interoperability for Microwave Access (WiMAX), ZigBee, Bluetooth, Fast Orthogonal Frequency Division Multiplexing (Flash-OFDM), High Capacity Spatial Division Multiple Access (HC-SDMA), iBurst, Universal Mobile Telecommunications System (UMTS), UMTS Time-Division Duplexing (UMTS-TDD), Evolved High Speed Packet Access (HSPA+), Time Division Synchronous Code Division Multiple Access (TD-SCDMA), Evolution-Data Optimized (EV-DO), Digital Enhanced Cordless Telecommunications (DECT), etc.
Alternatively and/or additionally, the imaging mechanism 130 may be at least partially incorporated into the focal length adjustment device 110. Therefore, the imaging mechanism 130 can advantageously serve as a component of the focal length adjustment device 110.
As shown in fig. 1, the imaging mechanism 130 may be engaged with the focus adjustment device 110. For example, the imaging devices 131, 132 of the imaging mechanism 130 may respectively acquire images 199 (shown in fig. 2 and 3) of the scene 100 and may relay the acquired images to the focus adjustment apparatus 110 locally and/or remotely via a data communication system (not shown). For example, the focal length adjustment device 110 may be configured to reconstruct a three-dimensional depiction of the object of interest 120 via stereo vision using the two images. Accordingly, the focus adjustment apparatus 110 may determine whether focus adjustment would be advantageous and/or whether to transmit a calibration signal to the imaging mechanism 130 for focus adjustment based on the object distance Z between the imaging mechanism 130 and the object of interest 120. Additionally and/or alternatively, the focus adjustment device 110 may advantageously be configured for automatically calibrating one or more external parameters for stereoscopic imaging.
Referring now to fig. 6, images 199 acquired by imaging devices 131, 132 may include images 199A and 199B. Image 199A (left, denoted as l in the following equations) and image 199B (right, denoted as r in the following equations) may be compared to determine an object distance Z between imaging devices 131, 132 and object of interest 120. The object distance Z can be determined using triangulation with a binocular disparity d between the two images 199A and 199B. Specifically, the coordinates (X_i, Y_i, Z_i) of pixel i in image 199A (left) may be given by the following equations:

X_i = (x_i − c_x) · T / d  (Equation 1)

Y_i = (y_i − c_y) · T / d  (Equation 2)

Z_i = f · T / d  (Equation 3)

wherein c_x and c_y represent the center coordinates of imaging devices 131, 132; x_i and y_i represent the coordinates of the object of interest 120 in one or both of images 199A (left) and 199B (right), respectively; T is the baseline (in other words, the distance between the center coordinates of imaging devices 131, 132); f is the corrected focal length of imaging devices 131, 132; i is an index over a plurality of objects of interest 120 and/or a plurality of selected points of the object of interest 120 that can be used to determine the object distance Z; and d is the binocular disparity between images 199A (l) and 199B (r), here denoted as:

d = x_i^l − x_i^r  (Equation 4)
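As an illustration of Equations 1-4, the following Python sketch (function and parameter names are illustrative only, not part of the disclosure) recovers the coordinates of a scene point from a pair of matched pixel locations:

```python
def triangulate_point(x_l, y_l, x_r, f, T, c_x, c_y):
    """Recover (X, Y, Z) of a scene point from matched pixel columns
    x_l (left image) and x_r (right image) on the same row y_l.

    f        : corrected focal length (in pixels)
    T        : baseline between the two imaging devices
    c_x, c_y : center (principal point) coordinates of the imaging devices
    """
    d = x_l - x_r                      # binocular disparity (Equation 4)
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    X = (x_l - c_x) * T / d            # Equation 1
    Y = (y_l - c_y) * T / d            # Equation 2
    Z = f * T / d                      # Equation 3
    return X, Y, Z

# Example: pixel (640, 360) in the left image matched to (628, 360) in the right image
print(triangulate_point(640, 360, 628, f=700.0, T=0.12, c_x=640.0, c_y=360.0))
```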
the focus adjustment device 110 may include any processing hardware and/or software necessary to perform image acquisition, focus adjustment, calibration, and any other functions and operations described herein. Without limitation, the focal length adjustment device 110 may include one or more general purpose microprocessors (e.g., single-core or multi-core processors), application specific integrated circuits, dedicated instruction set processors, graphics processing units, physical processing units, digital signal processing units, co-processors, network processing units, audio processing units, cryptographic processing units, and the like. In some embodiments, the focus adjustment device 110 may include an image processing engine or media processing unit that may include specialized hardware for increasing the speed and efficiency of image capture, filtering, and processing operations. Such operations include, for example, Bayer (Bayer) transforms, demosaicing operations, noise reduction operations, and/or image sharpening/softening operations.
In some embodiments, the focus adjustment apparatus 110 may include specialized hardware and/or software for performing focus adjustment and parameter calibration. For example, specialized hardware and/or software may be provided for functions including, but not limited to: reconstructing a three-dimensional depiction of the object of interest 120 via stereo vision using the two-dimensional image, determining whether a focus adjustment is required based on a distance between the imaging mechanism 130 and the object of interest 120, determining an optimal focus, and transmitting control signals to any component of the focus adjustment device 110 for focus adjustment.
In some embodiments, the focal length adjustment device 110 may include one or more additional hardware components (not shown) as desired. Exemplary additional hardware components include, but are not limited to, memory (e.g., Random Access Memory (RAM), static RAM, dynamic RAM, Read Only Memory (ROM), programmable ROM, erasable programmable ROM, electrically erasable programmable ROM, flash memory, Secure Digital (SD) cards, etc.) and/or one or more input/output interfaces (e.g., Universal Serial Bus (USB), Digital Video Interface (DVI), displayport, serial ata (sata), IEEE 1394 interface (also known as firewire), serial port, Video Graphics Array (VGA), advanced video graphics array (SVAG), Small Computer System Interface (SCSI), High Definition Multimedia Interface (HDMI), audio port, and/or proprietary input/output interfaces). Without limitation, one or more input/output devices (e.g., buttons, a keyboard, a keypad, a trackball, a display, and a monitor) may also be included in the focal length adjustment apparatus 110 as desired.
In some embodiments, image acquisition, focus adjustment, calibration, and any other functions and operations described herein for the focus adjustment apparatus 110 may be implemented by software running on a conventional processor or a general purpose computer (such as a personal computer). The software may operate with the appropriate hardware discussed above as desired. For example, the software may take any form of source code, object code, executable code, and machine-readable code. The source code can be written in any form of high-level programming language, including but not limited to C++, Java, Pascal, Visual Basic, and the like.
Turning now to fig. 7, an exemplary block diagram illustrating an alternative embodiment of the focus adjustment apparatus 110 of fig. 1 is shown. The focus adjustment apparatus 110 in fig. 7 comprises a distance component 701, which distance component 701 is used to determine an object distance Z, i.e. the distance between the object of interest 120 in the scene 100 and the imaging mechanism 130. The focus adjustment apparatus 110 further comprises a focus assembly 702, the focus assembly 702 being configured to automatically adjust the focus of the imaging mechanism 130 according to the distance determined by the distance assembly 701.
As seen from fig. 7, the distance component 701 is shown as including: a depth estimation mechanism 7011 for obtaining a depth map of the scene 100; an object determination mechanism 7012 for determining an object of interest 120 in the scene 100; and a calculating mechanism 7013 for calculating the distance of the object of interest 120 from the depth map of the scene 100 from the depth estimating mechanism 7011.
In an embodiment of the present disclosure, the depth estimation mechanism 7011 receives a first image 199A (shown in fig. 2) of the scene 100 from the first imaging device 131 and a second image 199B (shown in fig. 3) of the scene 100 from the second imaging device 132. Based on the first and second images 199A, 199B of the scene 100 as shown in fig. 2 and 3, the depth estimation mechanism 7011 obtains their disparity, based on which the depth map 800 (shown in fig. 8) is acquired. The specific operation of the depth estimation mechanism 7011 for obtaining a depth map will be described in detail below with reference to fig. 8.
An exemplary depth map 800 is depicted in fig. 8. Each pixel (not shown) in the depth map 800 is associated with a value representing a distance between the point corresponding to that pixel in the scene 100 (shown in fig. 1) and the imaging mechanism 130 (shown in fig. 1). For example, in some embodiments, luminance values are used to represent the distance between a point in the scene (whether on the object of interest 120 or not) and the imaging device imaging the scene. Alternatively and/or additionally, different color values may be assigned to the pixels to represent the distance. In the luminance value example, as seen from fig. 8, the lighter regions 810 indicate points in the scene 100 that are closer to the imaging mechanism 130 (shown in fig. 1), the darker regions 820 indicate points in the scene 100 that are farther from the imaging mechanism 130, and the gray regions 830 indicate points in the scene 100 that are between the near and far distances. If the object of interest 120 moves within the scene 100, the pixel brightness of the object of interest 120 may vary based on the distance between the object of interest 120 and the imaging mechanism 130. As shown in fig. 8, selected pixels of the object of interest 120 may become brighter as the distance between the object of interest 120 and the imaging mechanism 130 decreases, and may become darker as the distance increases.
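A minimal sketch of this brightness convention, assuming metric depths and an illustrative depth range (both are assumptions made for the example only):

```python
import numpy as np

def depth_to_brightness(depth_map, d_min=0.5, d_max=20.0):
    """Map per-pixel depth (in meters) to an 8-bit brightness image in which
    nearer points appear lighter and farther points appear darker."""
    clipped = np.clip(depth_map, d_min, d_max)
    # Invert so that small depths (close points) map to high brightness.
    normalized = (d_max - clipped) / (d_max - d_min)
    return (normalized * 255).astype(np.uint8)

# Example: a synthetic 2x3 depth map (meters)
depth = np.array([[1.0, 5.0, 10.0],
                  [2.0, 8.0, 19.0]])
print(depth_to_brightness(depth))
```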
Returning to fig. 7, the object determination mechanism 7012 may receive an external instruction to determine an object of interest 120 (shown in fig. 1) by identifying the selected object of interest 120 in either of the first and second images of the scene 100. The external instruction may be given by an operator of the focus adjustment device 110, for example. Selecting the object of interest 120 may be accomplished by framing the object of interest 120 in either of the first and second images of the scene 100. The first and second images of the scene 100 may be displayed on one or more display screens (not shown) to an operator of the focal length adjustment device 110 for selection. Alternatively or additionally, selecting the object of interest 120 may be performed by clicking on the object of interest 120 in either of the first and second images of the scene 100 displayed on the one or more display screens. The object determination mechanism 7012 may sense the framing operation and/or the click operation to identify that the object of interest 120 is being selected.
In another embodiment, the object determination mechanism 7012 can receive external verbal instructions from, for example, an operator of the focal adjustment device 110. Alternatively, the verbal instruction may be a preset name of the object of interest 120.
Alternatively and/or additionally, the object determination mechanism 7012 may be enabled to automatically determine the object of interest 120 based on a determination made under at least one preset rule. Any rule for judgment may be set as necessary. For example, the preset rules may include: the object of interest 120 is determined if the object of interest 120 is approaching the first and second imaging devices 131, 132 and/or if the object of interest 120 is within a certain distance from the first and second imaging devices 131, 132.
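A minimal sketch of such a preset rule, assuming candidate objects have already been segmented and their distances measured (the candidate list, thresholds, and names are illustrative assumptions, not part of the disclosure):

```python
def select_object_of_interest(candidates, max_distance=5.0, approach_margin=0.2):
    """Pick the first candidate that is approaching the imaging devices or
    is already within max_distance (meters) of them.

    candidates: list of dicts with keys 'name', 'distance', 'prev_distance'.
    """
    for obj in candidates:
        approaching = obj["prev_distance"] - obj["distance"] > approach_margin
        within_range = obj["distance"] <= max_distance
        if approaching or within_range:
            return obj
    return None

candidates = [
    {"name": "tree", "distance": 30.0, "prev_distance": 30.0},
    {"name": "cyclist", "distance": 12.0, "prev_distance": 14.5},
]
print(select_object_of_interest(candidates))  # -> the cyclist (it is approaching)
```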
Based on the depth map from the depth estimation mechanism 7011 and the information about the object of interest 120 from the object determination mechanism 7012, the calculation mechanism 7013 is enabled to calculate the distance between the imaging mechanism 130 and the object of interest 120. The calculation means 7013 preferably calculates the distance in real time.
Based on the calculated distance, the focus component 702 is enabled to automatically adjust (preferably in real time) the focus of the imaging mechanism 130 with a tracking-learning-detection (TLD) method, with the gray scale information of the object of interest 120 serving as an initial value.
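The following Python sketch illustrates how such a loop could look; `grab_frames`, `track_object`, `compute_depth_map`, `object_distance`, and `set_focus` are hypothetical stand-ins (not APIs defined by this disclosure) for the image source, a TLD-style tracker operating on gray levels, the depth estimation mechanism 7011, the calculation mechanism 7013, and the focus assembly 702, respectively.

```python
def focus_loop(grab_frames, track_object, compute_depth_map, object_distance,
               set_focus, initial_box):
    """Illustrative real-time loop: track the object of interest, read its
    distance Z from the depth map, and adjust the focus accordingly."""
    box = initial_box                          # (x, y, width, height) selected by the user
    for left, right in grab_frames():          # image pairs from devices 131 and 132
        box = track_object(left, box)          # TLD-style tracking on gray-level information
        depth_map = compute_depth_map(left, right)
        z = object_distance(depth_map, box)    # object distance Z of the tracked region
        set_focus(z)                           # focus assembly 702 acts on the distance
```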
If the user wants to focus the imaging mechanism 130 on a particular object, for example, the object determination mechanism 7012 may enable the user to draw a box on a display showing an image of the scene 100 to frame the object of interest 120 to be tracked. The box may be any suitable size or shape, including but not limited to rectangular, square, circular, or even irregular shapes. Optionally, the user is enabled to click on one or more display screens to confirm the selection. By using the depth map (shown in fig. 8) obtained by the depth estimation mechanism 7011, the calculation mechanism 7013 is enabled to calculate the distance of the object being selected by the user, and the focus component 702 is enabled to automatically adjust the focus of the imaging mechanism 130 according to the object distance Z (shown in fig. 6), which is preferably acquired in real time.
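As a minimal sketch of turning a user-drawn box into an object distance (this fills in the hypothetical `object_distance` helper assumed in the previous sketch; the depth map is assumed to hold metric depths with invalid pixels marked by non-positive values):

```python
import numpy as np

def object_distance(depth_map, box):
    """Estimate the distance Z of the object framed by box = (x, y, w, h),
    using the median of the valid depth values inside the box."""
    x, y, w, h = box
    roi = depth_map[y:y + h, x:x + w]
    valid = roi[roi > 0]                 # ignore invalidated pixels (e.g., set to -1)
    if valid.size == 0:
        raise ValueError("no valid depth values inside the selected box")
    return float(np.median(valid))

depth_map = np.full((480, 640), 12.0)
depth_map[200:280, 300:380] = 4.5        # the framed object is about 4.5 m away
print(object_distance(depth_map, (300, 200, 80, 80)))  # -> 4.5
```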
Referring now to FIG. 9, one embodiment of a method 900 for focus adjustment is illustrated. In 901, an object distance between the imaging mechanism 130 (shown in FIG. 1) and the object of interest 120 (shown in FIG. 1) is determined. The object distance Z (shown in fig. 6) may be determined using any of a number of different methods, as desired. In some embodiments, object distance Z may be determined via stereo vision by using a plurality of imaging devices 133 (shown in fig. 1) in imaging mechanism 130. For example, two imaging devices 131, 132 (shown in fig. 1) of the imaging mechanism 130 may respectively acquire images of the object of interest 120 (shown in fig. 2 and 3), and may analyze overlapping portions of the acquired images to assess the depth of the object of interest 120. Alternatively and/or additionally, object distances may be acquired using one or more non-stereoscopic methods, such as by using a laser and/or using ultrasound. In 902, the focal length of the imaging mechanism 130 is automatically adjusted according to the object distance Z determined in step 901.
A detailed process 901 of determining the distance between the imaging mechanism 130 and the object of interest 120 is illustrated in fig. 10. In 9011, a depth map of the scene 100 is obtained. Any process of obtaining a depth map may be applied herein without limitation. Exemplary procedures are illustrated in detail below.
In an embodiment, the depth map is obtained by calculating a disparity of the scene images from the first imaging device 131 and the second imaging device 132. In the field of obtaining depth maps, the energy function is typically computed over a subset of the entire image. In one embodiment, the global energy function is optimized to obtain the disparity global energy. In particular, the disparity global energy may be calculated by summing a disparity energy function with a scaling smoothing term.
An exemplary optimization can be illustrated by the following equation:

E(d) = E_d(d) + p · E_s(d)  (Equation 5)

where d indicates the disparity between the first and second images of the scene, E_d(d) is the data term indicating a disparity energy function, E_s(d) indicates the smoothing term, and p is a scaling weight.
The data term E_d(d) includes a Birchfield-Tomasi data item, which accumulates the Birchfield-Tomasi sub-pixel dissimilarity over all pixels and may be obtained according to the following equations:

E_d(d) = Σ_(x,y) min{ d̄(x, x − d, I_L, I_R), d̄(x − d, x, I_R, I_L) }  (Equation 6)

wherein,

d̄(x, x′, I_L, I_R) = max{ 0, I_L(x) − I_max(x′), I_min(x′) − I_L(x) }  (Equation 7)

I_max(x′) = max( I_R⁻(x′), I_R⁺(x′), I_R(x′) )  (Equation 8)

I_min(x′) = min( I_R⁻(x′), I_R⁺(x′), I_R(x′) )  (Equation 9)

I_R^±(x′) = ½ ( I_R(x′) + I_R(x′ ± 1) )  (Equation 10)

and d̄(x − d, x, I_R, I_L) is defined symmetrically with the roles of the two images exchanged; wherein I_L represents a first image of the scene 100 captured by the first imaging device 131 and I_R represents a second image of the scene 100 captured by the second imaging device 132; x in I_L(x) denotes the horizontal coordinate of a pixel in the first image and x′ in I_R(x′) denotes the horizontal coordinate of a pixel in the second image; and (x, y) represents the coordinates of the pixels in the scene 100.
The Birchfield-Tomasi data item is a commonly used data item in image sampling and matching, introduced to address the problem of erroneous image matching by exploiting sub-pixel matching accuracy; it is a measure of pixel dissimilarity that is insensitive to image sampling. The contents of the article in IEEE Transactions on Pattern Analysis and Machine Intelligence (1998) explaining the Birchfield-Tomasi data item are incorporated herein by reference. When calculating the disparity global energy, other data items may be employed as the data item in Equation 5.
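A compact sketch of the Birchfield-Tomasi dissimilarity on a single scanline, following the published measure (a generic rendering for illustration, not necessarily the exact form used in this disclosure):

```python
import numpy as np

def bt_cost(left_row, right_row, x, d):
    """Birchfield-Tomasi dissimilarity between left pixel x and right pixel x - d
    on one scanline; symmetric form, insensitive to image sampling."""
    def half_pixel_extrema(row, i):
        lo = 0.5 * (row[i] + row[max(i - 1, 0)])
        hi = 0.5 * (row[i] + row[min(i + 1, len(row) - 1)])
        vals = (lo, hi, row[i])
        return min(vals), max(vals)

    xr = x - d
    i_min_r, i_max_r = half_pixel_extrema(right_row, xr)
    d_lr = max(0.0, left_row[x] - i_max_r, i_min_r - left_row[x])
    i_min_l, i_max_l = half_pixel_extrema(left_row, x)
    d_rl = max(0.0, right_row[xr] - i_max_l, i_min_l - right_row[xr])
    return min(d_lr, d_rl)

left = np.array([10., 12., 80., 82., 20.])
right = np.array([10., 80., 82., 20., 18.])
print(bt_cost(left, right, x=2, d=1))  # near zero: the bright edge matches at disparity 1
```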
The smoothing term Es(d) may be represented by an energy function of the disparity difference, which may be obtained by summing the scaled trigger functions of the disparity difference between two adjacent pixels over all the neighbors of a pixel with coordinates (x, y). Specifically, the smoothing term Es(d) can be obtained according to the following equation:
[Equation 11, which defines the smoothing term Es(d), appears as an image in the original publication.]
wherein (x, y) represents pixel coordinates in the first image and (x', y') represents pixel coordinates in the second image; p2 and p1 are two adjustable weights, and usually p2 ≥ p1; the summation runs over all neighbors of (x, y), where a four-neighborhood is typically used; and T is the trigger function, which triggers when the condition in parentheses is true.
To optimize the smoothing term Es(d), fast dynamic programming is utilized by aggregating data items in a plurality of directions to obtain a directional energy function for each of the directions, and accumulating the directional energy functions in the directions to obtain the global energy function. In an embodiment, four directions or eight directions are selected for aggregating the data items. Other numbers of directions, such as three, five, or ten directions, may be selected without limitation.
In an embodiment, the directional energy function in one direction may be obtained by summing the corresponding smoothing term with the dynamic programming in that direction. Dynamic programming in one direction can be represented by a recursion based on the energy functions of its neighbors in that direction. For example, the energy functions of the neighbors of a pixel in the horizontal direction are denoted L(x-1, y, d), L(x-1, y, d+1), L(x-1, y, d-1), and L(x-1, y, d'). Specifically, the energy function in the horizontal direction is obtained according to the following equation:
[Equation 12, which defines the energy function aggregated along the horizontal direction, appears as an image in the original publication.]
where the energy accumulated at coordinates (x, y, d) is defined as L(x, y, d), which may be expressed as a recursion over the neighbor energies L(x-1, y, d), L(x-1, y, d+1), L(x-1, y, d-1), and L(x-1, y, d').
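A hedged sketch of this directional aggregation is given below for the horizontal direction, in the spirit of semi-global matching. The cost volume layout, the penalties p1 and p2, and the normalisation by the previous column's minimum are assumptions consistent with the description above, not a reproduction of equation 12:

import numpy as np

def aggregate_horizontal(cost, p1, p2):
    """Aggregate a matching-cost volume along the +x direction (sketch).

    cost : array of shape (H, W, D) holding the data term Ed(x, y, d).
    Implements the recursion
        L(x, y, d) = Ed(x, y, d) + min( L(x-1, y, d),
                                        L(x-1, y, d-1) + p1,
                                        L(x-1, y, d+1) + p1,
                                        min_d' L(x-1, y, d') + p2 )
                     - min_d' L(x-1, y, d')
    where the final subtraction is the usual normalisation that keeps the
    aggregated values bounded.
    """
    H, W, D = cost.shape
    L = np.empty((H, W, D), dtype=float)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                              # (H, D)
        best_prev = prev.min(axis=1, keepdims=True)        # min over all d'
        minus = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + p1
        plus = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + p1
        L[:, x, :] = cost[:, x, :] + np.minimum(
            np.minimum(prev, best_prev + p2),
            np.minimum(minus, plus)) - best_prev
    return L

# Repeating this over four or eight directions and summing gives the total
# energy; the disparity map is then the per-pixel winner (cf. equation 13):
#     d_star = sum_of_directional_L.argmin(axis=2)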
Furthermore, the optimal depth is obtained by seeking a disparity value that minimizes the sum of the energies in the multiple directions. In an embodiment, the optimal depth is obtained according to the following equation:
d* = argmin_d Σ L(x, y, d)    (equation 13)
where d* indicates the optimal disparity, from which the optimal depth follows, and L(x, y, d) indicates the energy function in one direction, the summation running over the directions.
In an embodiment, noise is reduced by matching the first scene image and the second scene image while setting the disparity to -1, and/or by identifying the uniqueness of each of the first scene image and the second scene image.
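One common way to realise such a noise-reduction step is a left-right consistency check that marks pixels whose two disparity estimates disagree as invalid (-1). The sketch below is an assumption about how this could look, not the patent's specified procedure:

import numpy as np

def left_right_consistency(disp_left, disp_right, max_diff=1):
    """Invalidate pixels whose left/right disparity estimates disagree.

    disp_left[y, x]  : disparity with the first image as reference.
    disp_right[y, x] : disparity with the second image as reference.
    A pixel survives only if following its disparity into the other image
    leads back to (approximately) the same value; all others are set to -1.
    """
    H, W = disp_left.shape
    ys, xs = np.mgrid[0:H, 0:W]
    shifted = xs - np.round(disp_left).astype(int)          # matching column in the other image
    xr = np.clip(shifted, 0, W - 1)
    in_range = shifted >= 0
    agree = np.abs(disp_left - disp_right[ys, xr]) <= max_diff
    out = np.full((H, W), -1.0)
    keep = in_range & agree
    out[keep] = disp_left[keep]
    return out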
The present disclosure also optimizes the depth map obtained by the depth estimation mechanism 7011. As described above, the global energy function is optimized by compensating for errors of the depth estimation mechanism 7011. Specifically, the error may be compensated by the following equation:
[Equations 14 through 19 appear as images in the original publication. Combining equation 14 and equation 15 gives equation 16; likewise, combining equation 17 and equation 18 gives equation 19. Their geometric derivation from fig. 13 is described below.] In equations 14 through 19:
f = focal length (mm)
D = depth (distance between the object and the imaging plane, mm)
a = baseline (distance between the center lines of the two imaging devices, mm)
b = actual distance between two adjacent pixels (mm)
d = disparity (measured in pixels)
Thus, the depth estimation error is in the range of [-x, x], and the estimated depth is in the range of [D-x, D+x]. The above error analysis and compensation are based on the following assumptions: the error estimate is a theoretical value that assumes the camera calibration parameters are perfectly correct and the average error of the disparity map is 1 pixel. Actual calibration may introduce errors, and the disparity error may exceed 1 pixel; therefore, the above error data reflects only the trend. For reference, using the top-ranked algorithm on the Middlebury stereo matching benchmark, the average disparity error over the four test images is (1.29 + 0.14 + 6.47 + 5.70) / 4 ≈ 3.4 pixels. Even in the absence of noise, with no beam transformation, and with correct calibration parameters, the current optimal process thus exhibits an average error of 3.4 pixels over all points.
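Because equations 14-19 are reproduced only as images, the sketch below uses the standard rectified-stereo relationship implied by the similar-triangle derivation of fig. 13, namely D = a·f/(b·d), to show how a one-pixel disparity error translates into the depth error band [D-x, D+x]; the numeric values are purely illustrative:

def depth_from_disparity(f_mm, a_mm, b_mm, d_px):
    """Standard rectified-stereo depth: D = a * f / (b * d)."""
    return a_mm * f_mm / (b_mm * d_px)

def depth_error_for_one_pixel(f_mm, a_mm, b_mm, d_px):
    """Depth change caused by a +/-1 pixel disparity error."""
    D = depth_from_disparity(f_mm, a_mm, b_mm, d_px)
    nearer = depth_from_disparity(f_mm, a_mm, b_mm, d_px + 1)    # disparity overestimated
    farther = depth_from_disparity(f_mm, a_mm, b_mm, d_px - 1)   # disparity underestimated
    return D - nearer, farther - D

# Illustrative numbers only: a 10 mm lens, 100 mm baseline, 5 micron pixels
# and a 30 px disparity give D = 100*10/(0.005*30) ~ 6667 mm, with a +/-1 px
# error band of roughly (215, 230) mm -- the error grows non-linearly with
# depth and shrinks as the baseline grows, consistent with figs. 11 and 12.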
Several schematic diagrams illustrating the error of the focus adjustment apparatus 110 and the effect of compensating for the error are depicted in fig. 11-13.
Fig. 11 illustrates an exemplary relationship between the estimation error of the focus adjustment device 110 and the measured depth D of the scene 100 when only the measured depth D is changed. In other words, when the measurement depth D changes, the baseline a, the distance b of two adjacent pixels, and the focal length f in equations 16, 17 remain constant. As shown in fig. 11, the horizontal axis represents the measured depth D in mm, and the vertical axis represents the estimation error of the focus adjustment device 110. As shown in fig. 11, when the measurement depth D changes, the estimation error of the focus adjustment device 110 also changes in a nonlinear relationship with respect to the measurement depth D. For example, when the measured depth D is 3000mm, the estimation error is about 5mm according to fig. 11, but when the measured depth D is increased to 6000mm, the corresponding estimation error increases to more than 20 mm.
Although the relationship of the estimation error to the measured depth D is shown and described in fig. 11 as being non-linear for exemplary purposes only, the relationship may be any linear relationship and/or non-linear relationship.
Fig. 12 illustrates an exemplary relationship between the estimation error of the focus adjustment apparatus 110 and the baseline when only the baseline a changes. Here, when the baseline a is changed, the measurement depth D, the distance b of two adjacent pixels, and the focal length f in equations 16 and 17 remain constant. In fig. 12, the horizontal axis represents the baseline a in mm, and the vertical axis represents the estimation error in mm. As shown in fig. 12, when the baseline a increases, the estimation error may decrease according to a non-linear relationship between the estimation error and the baseline a. For example, when baseline a is 500mm, the estimation error may be as high as 35mm according to fig. 12, but when baseline a is increased to 2000mm, the estimation error decreases to about 8 mm.
Although the relationship of the estimation error to the baseline is shown and described in fig. 12 as being non-linear for exemplary purposes only, the relationship may be any linear relationship and/or non-linear relationship.
Fig. 13 is an illustrative example showing the typical correspondence between the symbols in the figure and the variables contained in equations 14-19. In other words, equations 14-19 can be derived from the representation illustrated in fig. 13. Here, the imaging devices 131, 132 (shown in fig. 1) are represented by two cameras Cam 1 and Cam 2 having a baseline "a" and a focal length "f".
As shown in fig. 13, triangle ABO2 is similar to triangle CDO2 because AB is parallel to CD, which gives equation 14. In addition, since CD is parallel to O1O2, triangle CDE is similar to triangle O1O2E, which gives equation 15. Combining equation 14 and equation 15 yields equation 16.
for the same reason, equation 19 can be derived from a combination of the similarity relationship of triangular AHO2 and the similarity relationship of FGO2 with O1O 2C. As shown in fig. 13, in equations 16 and 19, D is the actual depth between the scene 100 and the imaging plane, a is the baseline between the two imaging devices 131, 132, b is the distance of the two adjacent pixels, and f is the focal length of the imaging devices 131, 132.
Although, for exemplary purposes only, the estimation errors -x and +x are shown and described in fig. 13 as having the same absolute value, the estimation errors -x and +x may have different absolute values as a result of equations 16 and 19, in which case b may take a different value in each equation.
Based on the characteristics of the estimation errors shown in fig. 11 and 12, the depth map may also be optimized by applying a non-local optimization equation, whose Jacobi iteration may be obtained by recursive filtering. In particular, the non-local optimization equation is defined according to the following equation:
E(d) = Σ|d(x,y) - d*(x,y)|^2 + Σ exp(|IL(x,y) - IL(x',y')| + |x'-x| + |y'-y|)·|d(x,y) - d(x',y')|    (equation 20)
wherein d*(x, y) indicates an optimal depth map and d(x, y) is an estimated depth map; IL(x, y) represents the image intensity; x, y are the coordinates of a pixel in image coordinates; and x', y' are the coordinates of the pixels neighboring (x, y) in the same image.
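A hedged sketch of such a refinement is given below. It assumes the exponential weight in equation 20 is meant to decay with the intensity and spatial differences (the printed formula omits the minus sign), and it replaces the absolute smoothness term with a quadratic surrogate so that a closed-form Jacobi update exists; the recursive-filtering implementation mentioned above is not reproduced:

import numpy as np

def jacobi_refine(disp, guide, iters=10, lam=1.0, sigma_i=10.0, sigma_s=1.0):
    """Edge-aware Jacobi smoothing of a disparity/depth map (sketch of Eq. 20).

    disp  : initial estimate d(x, y) from the depth estimation step.
    guide : grey-level image IL used to weight each neighbour; the weight
            exp(-|dI|/sigma_i - (|dx|+|dy|)/sigma_s) assumes the exponent in
            Eq. 20 decays with dissimilarity.  Border wrap-around from
            np.roll is ignored for brevity.
    """
    d0 = disp.astype(float)
    d = d0.copy()
    g = guide.astype(float)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # 4-neighbourhood
    for _ in range(iters):
        num = d0.copy()                                   # data term pulls toward d0
        den = np.ones_like(d0)
        for dy, dx in offsets:
            gq = np.roll(g, (dy, dx), axis=(0, 1))        # neighbour intensity
            dq = np.roll(d, (dy, dx), axis=(0, 1))        # neighbour disparity
            w = lam * np.exp(-np.abs(g - gq) / sigma_i
                             - (abs(dx) + abs(dy)) / sigma_s)
            num += w * dq
            den += w
        d = num / den                                     # Jacobi update: all pixels at once
    return d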
Similar to the operations performed by the object determination mechanism 7012 and the calculation mechanism 7013, in step 9012 of fig. 10, the object of interest 120 is determined, and in step 9013 of fig. 10, the distance between the object of interest 120 and the imaging mechanism 130 is calculated, which in turn serves as a basis for automatically adjusting the focal length of the imaging mechanism 130.
According to any embodiment of the present disclosure, a stereoscopic imaging system configured to perform the foregoing operations to perform automatic focus adjustment may be obtained.
Furthermore, according to an embodiment of the present disclosure, a computer program product may be obtained comprising instructions for automatically adjusting the focal length of a stereoscopic imaging system having at least two imaging devices according to the aforementioned operations. Preferably, the method for automatically adjusting the focal length according to the present disclosure may be implemented by a general computing device, such as a personal computer and/or a microcomputer.
The disclosed embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the disclosed embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the disclosed embodiments are to cover all modifications, equivalents, and alternatives.

Claims (63)

1. A method for focus adjustment, comprising:
determining a distance between the object of interest and the imaging mechanism; and
automatically adjusting the focal length of the imaging mechanism according to the determined distance;
wherein the determining comprises determining a distance between an object of interest in the scene and the imaging mechanism;
wherein the determining comprises imaging the scene with a first imaging device and a second imaging device included in the imaging mechanism;
wherein the determining further comprises: obtaining a depth map of the scene;
wherein the obtaining comprises calculating a disparity of the scene images from the first and second imaging devices;
wherein the computing the disparity comprises optimizing a global energy function;
the method further comprises the following steps:
aggregating data items in a plurality of directions to obtain a directional energy function for each of the directions; and
accumulating the directional energy functions in the directions to obtain the global energy function.
2. The method of claim 1, wherein the determining further comprises:
selecting an object of interest in the scene; and
calculating a distance of the object of interest from the depth map of the scene.
3. The method of claim 2, wherein the optimizing the global energy function comprises summing a disparity energy function and a scaled smoothing term, the scaled smoothing term representing a weighted adjustment to a smoothing term.
4. The method of claim 3, wherein the disparity energy function is represented by a Birchfield-Tomasi term.
5. The method of claim 4, wherein the Birchfield-Tomasi term is defined by accumulating a minimum disparity of pixel coordinates in a first image of the scene captured by the first imaging device and a second image of the scene captured by the second imaging device.
6. The method of any of claims 3-5, further comprising: accumulating a scaled trigger function of the disparity between two adjacent pixels for all neighbors of a pixel to obtain the smoothing term, the scaled trigger function representing a weighted adjustment of the trigger function.
7. The method of claim 6, wherein the accumulating comprises: accumulating the scaled trigger function of the disparity between two adjacent pixels from the four-neighborhood for all neighbors of the pixel.
8. The method of claim 6, wherein the smoothing term is obtained by accumulating a scaled trigger function of the disparities of all neighbors of each pixel.
9. The method of claim 2, wherein the aggregating comprises obtaining a directional energy function in a predetermined number of directions.
10. The method of claim 2 or claim 9, wherein the aggregating comprises obtaining directional energy functions in four directions or eight directions.
11. The method of claim 2 or 9, wherein the aggregated data item comprises: the directional energy function in a direction is obtained by summing the corresponding smoothing term with the dynamic programming in that direction.
12. The method of claim 11, wherein the summing of the corresponding smoothing terms with the dynamic plan in the direction comprises: the dynamic programming in that direction is represented by a recursion based on the directional energy function of its neighbors in that direction.
13. The method of claim 12, wherein the direction comprises a horizontal direction.
14. The method of claim 13, wherein the aggregating data items in a horizontal direction comprises: the energy is calculated by recursion based on the directional energy function of its neighbors in the horizontal direction.
15. The method of claim 2 or 9, further comprising obtaining an optimal depth.
16. The method of claim 15, wherein the obtaining an optimal depth comprises seeking a disparity value that minimizes a sum of energies in multiple directions.
17. The method of claim 15, wherein the obtaining an optimal depth comprises seeking the disparity value based on a directional energy function in one direction.
18. The method of any of claims 2-5, further comprising reducing noise by performing at least one of: matching scene images from the first and second imaging devices while setting the disparity to -1, and identifying respective distinctiveness of the scene images.
19. The method of any of claims 1-5, further comprising compensating for errors based on at least factors selected from the group consisting of: a distance between centerlines of the two imaging devices, an actual distance of two adjacent pixels, focal lengths of the two imaging devices, and a depth between the object of interest and the first and second imaging devices.
20. The method of any of claims 2-5, further comprising optimizing the depth map by using a non-local optimization equation.
21. The method of claim 20, further comprising obtaining Jacobi iterations of the non-local optimization equations by using recursive filtering.
22. The method of one of claims 2-5, wherein said selecting an object of interest in said scene comprises receiving an external instruction to select said object of interest.
23. The method of claim 22, wherein the receiving instructions includes identifying an object of interest selected on either of the images of the scene from the first or second imaging devices.
24. The method of claim 23, wherein the identifying the selected object of interest comprises: sensing a box framing the object of interest on any of the scene images or sensing a click on the object of interest on any of the scene images.
25. The method of claim 24, wherein the receiving an external instruction comprises receiving a voice instruction to determine the object of interest, the voice instruction being a preset name of the object of interest.
26. The method of any of claims 2-5, wherein the selecting an object of interest in the scene comprises: a determination is made under at least one preset rule and the object of interest is automatically determined based on the determination.
27. The method of claim 26, wherein said determining under at least one preset rule comprises determining whether said object is approaching or within a certain distance of said imaging mechanism.
28. The method of any of claims 1-5, wherein the automatically adjusting the focal length comprises automatically adjusting the focal length of the imaging mechanism in real-time with a tracking learning detection based on gray level information of the object of interest.
29. A focus adjustment apparatus comprising:
a distance component for determining a distance between an object of interest in a scene and an imaging mechanism for imaging the scene; and
a focus component for automatically adjusting a focus of the imaging mechanism according to the determined distance;
wherein the imaging mechanism comprises a first imaging device and a second imaging device that image the scene to obtain a first scene image and a second scene image for determining the distance;
wherein the distance assembly comprises:
a depth estimation mechanism for obtaining a depth map of the scene;
wherein the depth map is obtained based on a disparity of the first scene image and a second scene image;
wherein the depth estimation mechanism optimizes a global energy function;
wherein the global energy function is optimized by:
aggregating data items in a plurality of directions to obtain a directional energy function for each of the directions; and
accumulating the directional energy functions in the directions to obtain the global energy function.
30. The apparatus of claim 29, wherein either of the first and second imaging devices is a camera or a sensor.
31. The apparatus of claim 29 or claim 30, wherein the first and second imaging devices are selected from the group comprising: laser cameras, infrared cameras, ultrasound cameras, and time-of-flight cameras.
32. The apparatus of any of claims 29-30, wherein the first and second imaging devices are red-green-blue cameras.
33. The apparatus of claim 29, wherein the distance component comprises:
an object determination mechanism for determining an object of interest in the scene; and
a computing mechanism for computing a distance of the object of interest from a depth map of the scene.
34. The apparatus of claim 33, wherein the global energy function is defined as a sum of a disparity energy function and a scaled smoothing term, the scaled smoothing term representing a weighted adjustment of a smoothing term.
35. The device of claim 34, wherein the disparity energy function comprises a Birchfield-Tomasi data item.
36. The apparatus of claim 35, wherein the Birchfield-Tomasi data item is defined based on a minimum disparity of pixel coordinates in a first image of the scene captured by the first imaging device and a second image of the scene captured by the second imaging device.
37. The apparatus of any of claims 34-36, wherein the smoothing term employs an energy function of the disparity differences.
38. The apparatus of any one of claims 34-36, wherein the smoothing term is a sum of scaled trigger functions of the disparity between two neighboring pixels for all neighbors of a pixel having coordinates (x, y), the scaled trigger functions representing weighted adjustments to the trigger functions.
39. The apparatus of claim 38, wherein the neighbors are pixels from the four-neighborhood.
40. The device of claim 38, wherein the smoothing term is defined based on a scaled trigger function of the disparities of all neighbors of each pixel.
41. The device of claim 33, wherein the directions comprise a predetermined number of directions.
42. The device of claim 41, wherein the predetermined number of directions comprises four directions or eight directions.
43. The apparatus of claim 33, wherein the directional energy function in one direction is based on dynamic programming in that direction.
44. The apparatus of claim 43, wherein the directional energy function in one direction is obtained by summing a corresponding smoothing term with a dynamic plan in that direction.
45. The apparatus of claim 43, wherein the dynamic programming in the direction is based on a recursion of directional energy functions of its neighbors in the direction.
46. The apparatus of claim 45, wherein the direction comprises a horizontal direction.
47. The apparatus of claim 46, wherein the directional energy function in a horizontal direction is obtained by a recursion based on the directional energy function of its neighbors in the horizontal direction.
48. The apparatus of claim 33, wherein the optimal depth is obtained by seeking a disparity value that minimizes the sum of the energies in the multiple directions.
49. The apparatus of claim 48, wherein the optimal depth is obtained based on a directional energy function in one direction.
50. The apparatus according to any of claims 33-36, wherein noise is reduced by matching the first and second scene images and/or identifying the respective distinctiveness of the first and second scene images when setting the disparity to -1.
51. The apparatus of any of claims 33-36, wherein the error is compensated based on at least factors selected from the group consisting of: a distance between centerlines of the two imaging devices, an actual distance of two adjacent pixels, focal lengths of the two imaging devices, and a depth between the object of interest and the first and second imaging devices.
52. The apparatus of any one of claims 33-36, wherein the depth map is optimized by using a non-local optimization equation.
53. The apparatus of claim 52, comprising Jacobi iterations of the non-local optimization equations obtained by recursive filtering.
54. The apparatus of claim 33, wherein the object determination mechanism receives an external instruction to determine the object of interest.
55. The apparatus of claim 54, wherein the object determination mechanism enables identification of a selected object of interest on either of the first scene image and the second scene image.
56. The apparatus of claim 55, wherein the object determination mechanism enables identification of the object of interest by at least one of: sensing a box on any one of the first and second scene images to frame the object of interest, and sensing a click on the object of interest on any one of the first and second scene images.
57. The apparatus of claim 55, wherein the object determination mechanism receives an external voice command to determine the object of interest, the external voice command being a preset name of the object of interest.
58. The apparatus of claim 33, wherein the object determination mechanism automatically determines the object of interest based on a determination under at least one preset rule.
59. The apparatus of claim 58, wherein the preset rules comprise: whether the object of interest is approaching the first imaging device and the second imaging device or is within a certain distance.
60. The apparatus of claim 29, wherein the focus component automatically adjusts the focus of the imaging mechanism in real-time with a tracking learning detection based on gray level information of the object of interest.
61. A mobile platform comprising the apparatus of any one of claims 29-60.
62. The mobile platform of claim 61, in which the mobile platform is an unmanned aerial vehicle.
63. The mobile platform of claim 61 or claim 62, wherein the mobile platform is a self-stabilizing platform.
CN201580077679.7A 2015-03-16 2015-03-16 Apparatus and method for focus adjustment and depth map determination Expired - Fee Related CN107409205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010110338.8A CN111371986A (en) 2015-03-16 2015-03-16 Apparatus and method for focus adjustment and depth map determination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/074336 WO2016145602A1 (en) 2015-03-16 2015-03-16 Apparatus and method for focal length adjustment and depth map determination

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202010110338.8A Division CN111371986A (en) 2015-03-16 2015-03-16 Apparatus and method for focus adjustment and depth map determination

Publications (2)

Publication Number Publication Date
CN107409205A CN107409205A (en) 2017-11-28
CN107409205B true CN107409205B (en) 2020-03-20

Family

ID=56918392

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201580077679.7A Expired - Fee Related CN107409205B (en) 2015-03-16 2015-03-16 Apparatus and method for focus adjustment and depth map determination
CN202010110338.8A Pending CN111371986A (en) 2015-03-16 2015-03-16 Apparatus and method for focus adjustment and depth map determination

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010110338.8A Pending CN111371986A (en) 2015-03-16 2015-03-16 Apparatus and method for focus adjustment and depth map determination

Country Status (4)

Country Link
US (2) US10574970B2 (en)
EP (1) EP3108653A1 (en)
CN (2) CN107409205B (en)
WO (1) WO2016145602A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016206107A1 (en) 2015-06-26 2016-12-29 SZ DJI Technology Co., Ltd. System and method for selecting an operation mode of a mobile platform
CN108323190B (en) * 2017-12-15 2022-07-29 深圳市道通智能航空技术股份有限公司 Obstacle avoidance method and device and unmanned aerial vehicle
CN108989687A (en) * 2018-09-07 2018-12-11 北京小米移动软件有限公司 camera focusing method and device
CN109856015B (en) * 2018-11-26 2021-08-17 深圳辉煌耀强科技有限公司 Rapid processing method and system for automatic diagnosis of cancer cells
CN109451241B (en) * 2018-12-10 2021-06-18 无锡祥生医疗科技股份有限公司 Automatic focus adjusting method and device for ultrasonic image
JP6798072B2 (en) * 2019-04-24 2020-12-09 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd Controls, mobiles, control methods, and programs
CN112995516B (en) * 2019-05-30 2022-07-29 深圳市道通智能航空技术股份有限公司 Focusing method and device, aerial camera and unmanned aerial vehicle
US10895637B1 (en) * 2019-07-17 2021-01-19 BGA Technology LLC Systems and methods for mapping manmade objects buried in subterranean surfaces using an unmanned aerial vehicle integrated with radar sensor equipment
CN112154650A (en) * 2019-08-13 2020-12-29 深圳市大疆创新科技有限公司 Focusing control method and device for shooting device and unmanned aerial vehicle
CN111579082B (en) * 2020-05-09 2021-07-30 上海交通大学 Automatic error compensation method for infrared thermal imaging temperature measurement system
US20220021822A1 (en) * 2020-07-14 2022-01-20 International Business Machines Corporation Guided multi-spectral inspection
WO2022147703A1 (en) * 2021-01-07 2022-07-14 深圳市大疆创新科技有限公司 Focus following method and apparatus, and photographic device and computer-readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101840146A (en) * 2010-04-20 2010-09-22 夏佳梁 Method and device for shooting stereo images by automatically correcting parallax error

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6148270A (en) 1996-10-30 2000-11-14 Yamatake-Honeywell Co., Ltd. Fast target distance measuring device and high-speed moving image measuring device
US6701081B1 (en) 2000-06-06 2004-03-02 Air Controls, Inc. Dual camera mount for stereo imaging
US7929801B2 (en) * 2005-08-15 2011-04-19 Sony Corporation Depth information for auto focus using two pictures and two-dimensional Gaussian scale space theory
WO2007050776A2 (en) 2005-10-25 2007-05-03 University Of Kentucky Research Foundation System and method for 3d imaging using structured light illumination
EP2293588A1 (en) 2009-08-31 2011-03-09 Robert Bosch GmbH Method for using a stereovision camera arrangement
EP2336027A1 (en) 2009-12-18 2011-06-22 EADS Construcciones Aeronauticas, S.A. Method and system for enhanced vision in aerial refuelling operations
CN102117576A (en) 2009-12-31 2011-07-06 鸿富锦精密工业(深圳)有限公司 Digital photo frame
CN201803697U (en) 2010-06-03 2011-04-20 蒋安邦 Sensor capable of determining position of moving light source target in three-dimensional space with two cameras
KR101012691B1 (en) 2010-07-05 2011-02-09 주훈 3d stereo thermal imaging camera system
JP5870510B2 (en) 2010-09-14 2016-03-01 株式会社リコー Stereo camera device, calibration method and program
KR20120070129A (en) 2010-12-21 2012-06-29 한국전자통신연구원 Apparatus and method for photographing stereoscopic image
TWI507807B (en) 2011-06-24 2015-11-11 Mstar Semiconductor Inc Auto focusing mthod and apparatus
CN102592117B (en) 2011-12-30 2014-04-16 杭州士兰微电子股份有限公司 Three-dimensional object identification method and system
CN102609936A (en) * 2012-01-10 2012-07-25 四川长虹电器股份有限公司 Stereo image matching method based on belief propagation
CN102591532B (en) 2012-01-22 2015-01-21 南京先能光电科技有限公司 Dual-reflector cross-positioning electronic whiteboard device
US9230306B2 (en) * 2012-02-07 2016-01-05 Semiconductor Components Industries, Llc System for reducing depth of field with digital image processing
CN102779274B (en) 2012-07-19 2015-02-25 冠捷显示科技(厦门)有限公司 Intelligent television face recognition method based on binocular camera
TWI471677B (en) 2013-04-11 2015-02-01 Altek Semiconductor Corp Auto focus method and auto focus apparatus
JP6020923B2 (en) * 2013-05-21 2016-11-02 パナソニックIpマネジメント株式会社 Viewer having variable focus lens and video display system
CN103350281B (en) * 2013-06-20 2015-07-29 大族激光科技产业集团股份有限公司 Laser marking machine automatic focusing device and automatic focusing method
US20150010236A1 (en) * 2013-07-08 2015-01-08 Htc Corporation Automatic image refocusing method
CN103595916A (en) 2013-11-11 2014-02-19 南京邮电大学 Double-camera target tracking system and implementation method thereof
US9571819B1 (en) * 2014-09-16 2017-02-14 Google Inc. Efficient dense stereo computation
KR102251483B1 (en) * 2014-10-23 2021-05-14 삼성전자주식회사 Electronic device and method for processing image
US9704250B1 (en) * 2014-10-30 2017-07-11 Amazon Technologies, Inc. Image optimization techniques using depth planes
US9292926B1 (en) * 2014-11-24 2016-03-22 Adobe Systems Incorporated Depth map generation

Also Published As

Publication number Publication date
EP3108653A4 (en) 2016-12-28
US20170374354A1 (en) 2017-12-28
EP3108653A1 (en) 2016-12-28
CN111371986A (en) 2020-07-03
WO2016145602A1 (en) 2016-09-22
US10574970B2 (en) 2020-02-25
CN107409205A (en) 2017-11-28
US20200195908A1 (en) 2020-06-18

Similar Documents

Publication Publication Date Title
CN107409205B (en) Apparatus and method for focus adjustment and depth map determination
US12056886B2 (en) Systems and methods for depth estimation using generative models
US11790481B2 (en) Systems and methods for fusing images
US11798147B2 (en) Image processing method and device
US10244164B1 (en) Systems and methods for image stitching
EP3158532B1 (en) Local adaptive histogram equalization
US9300946B2 (en) System and method for generating a depth map and fusing images from a camera array
CN106960454B (en) Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle
US20210314543A1 (en) Imaging system and method
US9886769B1 (en) Use of 3D depth map with low and high resolution 2D images for gesture recognition and object tracking systems
CN108510540A (en) Stereoscopic vision video camera and its height acquisition methods
CN109661815B (en) Robust disparity estimation in the presence of significant intensity variations of the camera array
WO2019144269A1 (en) Multi-camera photographing system, terminal device, and robot
JP4394487B2 (en) Stereo image processing device
WO2021049281A1 (en) Image processing device, head-mounted display, and spatial information acquisition method
CN102760296B (en) Movement analyzing method for objects in multiple pictures
KR101823657B1 (en) Stereo image rectification method for mono cameras
CN116363156A (en) Image ranging method, device, equipment and storage medium
JP6103767B2 (en) Image processing apparatus, method, and program
GB2624542A (en) Stabilised 360 degree depth imaging system without image rectification
EP2913797A1 (en) Accurate parallax calculation in image regions with low texture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20200320