WO2017028652A1 - A lens, camera, package detection system, and image processing method - Google Patents

A lens, camera, package detection system, and image processing method

Info

Publication number
WO2017028652A1
WO2017028652A1 (PCT/CN2016/090685)
Authority
WO
WIPO (PCT)
Prior art keywords
filter
image
lens
target
photosensitive chip
Prior art date
Application number
PCT/CN2016/090685
Other languages
English (en)
French (fr)
Inventor
何品将
朱勇
张文聪
谢明强
管达
张振华
Original Assignee
杭州海康机器人技术有限公司 (Hangzhou Hikrobot Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康机器人技术有限公司 (Hangzhou Hikrobot Technology Co., Ltd.)
Priority to US15/753,804 priority Critical patent/US10386632B2/en
Priority to EP16836516.1A priority patent/EP3340600B1/en
Publication of WO2017028652A1 publication Critical patent/WO2017028652A1/zh

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00 Optical elements other than lenses
    • G02B5/20 Filters
    • G02B5/201 Filters in the form of arrays
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B11/00 Filters or other obturators specially adapted for photographic purposes
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B43/00 Testing correct operation of photographic apparatus or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/958 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B1/00 Optical elements characterised by the material of which they are made; Optical coatings for optical elements
    • G02B1/10 Optical coatings produced by application to, or surface treatment of, optical elements
    • G02B1/11 Anti-reflection coatings
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/42 Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect
    • G02B27/44 Grating systems; Zone plate systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00 Optical elements other than lenses
    • G02B5/18 Diffraction gratings
    • G02B5/1876 Diffractive Fresnel lenses; Zone plates; Kinoforms
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00 Optical elements other than lenses
    • G02B5/20 Filters
    • G02B5/208 Filters for use with infrared or ultraviolet radiation, e.g. for separating visible light from infrared and/or ultraviolet radiation

Definitions

  • the present application relates to the field of machine vision, and in particular to a lens, a camera, a package detection system, and an image processing method.
  • Depth of field is the range of object distances, in front of and behind the plane of sharpest focus, over which a camera lens can image clearly. That is, when the lens is focused sharply on an object, every point on the object plane perpendicular to the optical axis forms a clear image on the receiver, and points within a certain range in front of and behind that plane also image clearly; the distance spanned by this front-to-back range is the camera's depth of field. A larger depth of field means the camera can image objects sharply over a wider range of distances, so depth-of-field control has great practical significance in machine vision and video surveillance.
  • Figure 1 is a schematic illustration of the depth of field of a camera.
  • For an object at the nominal object distance, the emitted light passes through the lens and is focused sharply on the nominal image surface.
  • Light emitted by objects in front of and behind the nominal object distance converges in front of or behind the nominal image surface after passing through the lens, forming a diffuse spot of a certain size on the nominal image surface. If the diffuse spot is small enough, the object can still be considered sharply imaged. Therefore, objects between the far object distance and the near object distance in Fig. 1 can be regarded as imaged clearly.
  • The axial distance between the far object distance and the near object distance is the depth of field of the lens.
  • When shooting, if the subjects sit at different object distances and the spread exceeds the camera's depth of field, some of them will be imaged blurred. Moreover, in some scenarios (such as intelligent transportation), the camera must be installed at an oblique angle to the monitored scene; because the scene then contains both near and far regions, it may be impossible to keep everything in focus at once, i.e. it cannot be guaranteed that both the near and far parts of the scene fall within the camera's depth of field, resulting in poor sharpness.
  • Lens focal length: the longer the focal length, the smaller the depth of field; the shorter the focal length, the larger the depth of field.
  • Pixel size of the photosensitive element: the larger the pixel, the larger the depth of field.
  • The lens aperture can usually be adjusted, so in many imaging situations that call for a larger depth of field the aperture is simply stopped down as far as possible.
  • However, the light energy reaching the photosensitive element falls with the square of the aperture, so too small an aperture makes the image very dark.
  • Moreover, with a very small aperture the diffraction of light becomes significant: a point that would otherwise be imaged sharply gradually spreads into a large diffuse spot, reducing image sharpness.
  • one method is to increase the depth of field of the camera by means of liquid lens focusing.
  • The principle is that the focal length of a liquid lens can be adjusted dynamically by a DC voltage.
  • As the driving voltage changes, the focus of the lens moves with it, so a voltage signal can be used to control which object the lens focuses on.
  • This focusing method resembles the human eye and offers fast response and long life, but the lens is expensive, which hinders large-scale adoption; moreover, although a liquid lens refocuses rapidly, it cannot recognize objects at different distances simultaneously within the same frame, so its range of application is limited.
  • Another method is image processing by deconvolution.
  • From the perspective of signal processing, the defocus blur of an image can be regarded as the result of convolving the input image with the point spread function of the lens at the out-of-focus position.
  • Since the point spread function of lens defocus has a relatively simple mathematical model, it can be estimated and modeled in advance, and the Wiener filtering method can then be used to restore the input image. After capturing an out-of-focus image, applying different deconvolution kernels recovers sharp images at different object distances.
  • The advantage of this method is its wide adaptability: no additional optical components are needed, and clear images at different object distances can be obtained from a single capture; but its disadvantages are also very obvious.
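As an aside, the Wiener-filter restoration described above can be sketched in a few lines. The following is a minimal illustrative example, not the patent's implementation; it assumes a known, shift-invariant PSF and a constant noise-to-signal ratio `k`:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Restore an image blurred by a known PSF using a Wiener filter.

    k approximates the noise-to-signal power ratio; a larger k suppresses
    noise amplification at frequencies where the PSF response is weak.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # PSF transfer function
    G = np.fft.fft2(blurred)                # spectrum of the blurred image
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Simulate defocus-like blur with a 5x5 box PSF, then restore it.
sharp = np.zeros((32, 32))
sharp[8:12, 8:12] = 1.0                     # a small bright square
psf = np.ones((5, 5)) / 25.0                # normalized box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = wiener_deconvolve(blurred, psf, k=1e-4)
```

As the text notes, repeating the restoration with different kernels (PSFs estimated for different defocus distances) recovers sharpness at different object distances from the same capture.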
  • the present application provides a lens, a camera, a package detection system, and an image processing method to at least solve the technical problem that the depth of field of the lens in the related art is small.
  • According to one aspect, a lens is provided, comprising an optical lens, a photosensitive chip, and a filter that includes a first filter portion and a second filter portion. The filter is disposed between the optical lens and the photosensitive chip, such that a first object point is imaged on the photosensitive chip via the optical lens and the first filter portion, and a second object point is imaged on the photosensitive chip via the optical lens and the second filter portion, the thickness of the first filter portion being greater than that of the second filter portion.
  • the first filter portion and the second filter portion constitute a stepped structure.
  • The ratio of the areas of the incident surfaces of the first and second filter portions equals the ratio of the field of view of the far-end depth-of-field region to that of the near-end depth-of-field region.
  • The filter comprises a plurality of transparent flat-plate filters bonded into a stepped structure with optical glue.
  • the filter comprises a transparent stepped structure filter.
  • The filter is connected to a control component via a transmission component so that it can be moved under control to a target position; at the target position, the imaging light path of the first object point passes through the first filter portion and the imaging light path of the second object point passes through the second filter portion.
  • the surface of the photosensitive chip is provided with a protective glass, and the filter is glued to the surface of the protective glass.
  • the incident surface and the exit surface of the filter are plated with an optical anti-reflection film and/or an infrared cut-off coating.
  • a lens is also provided.
  • The lens comprises an optical lens and a photosensitive chip, and further comprises a filter disposed between the optical lens and the photosensitive chip, wherein a first object point is imaged on the photosensitive chip via the optical lens and the filter, while a second object point is imaged on the photosensitive chip via the optical lens alone.
  • the central axis of the filter is parallel to the optical axis of the optical lens, and the central axis has a predetermined distance from the optical axis.
  • the filter is connected to the control component via the transmission component to be controlled and moved to the target position, wherein, at the target location, the imaging light path of the first object point passes through the filter, The imaging path of the second object point does not pass through the filter.
  • a video camera is also provided.
  • the camera includes any of the lenses provided herein.
  • a parcel detection system is also provided.
  • the package inspection system includes any of the cameras provided herein.
  • According to another aspect, an image processing method includes: acquiring a first target image and a second target image respectively, wherein the first target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the first filter portion, and the second target image is the part formed via the optical lens and the second filter portion; the image to be processed is obtained by photographing a target object to be measured through a lens that includes an optical lens, a photosensitive chip, and a filter disposed between the optical lens and the photosensitive chip, the filter comprising a first filter portion and a second filter portion with the first thicker than the second; determining whether an overlapping area of the target object image exists in the first target image and the second target image; and if such an overlapping area exists, performing de-duplication processing on the overlapping area in the image to be processed.
  • Optionally, before acquiring the first and second target images, the method further includes: acquiring a first ratio and a second ratio, where the first ratio is the proportion of the image of the object to be measured within a first region of the image to be processed and the second ratio is the proportion within a second region; determining whether the difference between the first ratio and the second ratio reaches a preset threshold, the first ratio being greater than the second; and, if it does, adjusting the position of the filter to increase the second ratio and reacquiring the image to be processed.
  • Adjusting the position of the filter to increase the second ratio comprises: determining a target movement amount from the first ratio, the second ratio, the area of the first region, and the area of the second region; and moving the filter by the target movement amount in the target direction, i.e. the direction from the first region toward the second region.
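The decision logic above can be sketched as follows. The patent does not give a concrete formula for the target movement amount, so the proportional policy below (the imbalance expression and the `gain` parameter) is purely hypothetical:

```python
def plan_filter_move(r1, r2, area1, area2, threshold=0.2, gain=1.0):
    """Return a (hypothetical) movement amount toward the second region.

    r1, r2        : proportion of the target object's image in region 1 / 2
    area1, area2  : areas of the two regions
    A move is planned only when the imbalance r1 - r2 reaches the preset
    threshold; its size here is proportional to the covered-area imbalance.
    """
    if r1 - r2 < threshold:
        return 0.0                  # ratios balanced enough: stay put
    covered1 = r1 * area1           # object area seen in region 1
    covered2 = r2 * area2           # object area seen in region 2
    return gain * (covered1 - covered2) / (area1 + area2)

# Object mostly in region 1: plan a move toward region 2.
move = plan_filter_move(0.8, 0.1, 100.0, 100.0)   # 0.35
```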
  • According to another aspect, an image processing method includes: acquiring a first target image and a second target image respectively, wherein the first target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the filter, and the second target image is the part formed on the photosensitive chip via the optical lens without the filter; the image to be processed is obtained by photographing a target object to be measured through a lens that includes an optical lens, a photosensitive chip, and a filter of uniform thickness disposed between the optical lens and the photosensitive chip.
  • The method further includes: determining whether an overlapping area of the target object image exists in the first target image and the second target image; and, if such an overlapping area exists, performing de-duplication processing on the overlapping area in the image to be processed.
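A minimal sketch of the overlap/de-duplication step, assuming the target has already been segmented into boolean masks for the two target images (the patent specifies neither the segmentation method nor which copy to keep; here the first image's copy is kept):

```python
import numpy as np

def deduplicate_overlap(mask1, mask2):
    """Remove duplicated target pixels from the second target image.

    mask1, mask2: boolean masks of the target object as seen in the first
    and second target images, registered to the same frame.
    Returns (mask1, cleaned mask2, whether any overlap existed).
    """
    overlap = mask1 & mask2                  # target present in both images
    if not overlap.any():
        return mask1, mask2, False           # nothing duplicated
    return mask1, mask2 & ~overlap, True     # keep the first image's copy

# A parcel straddling the boundary between the two regions:
m1 = np.zeros((6, 6), dtype=bool); m1[0:4, 2:4] = True
m2 = np.zeros((6, 6), dtype=bool); m2[3:6, 2:4] = True   # row 3 overlaps
kept, cleaned, had_overlap = deduplicate_overlap(m1, m2)
```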
  • According to another aspect, an application program is provided for executing, at runtime, any of the image processing methods described above.
  • a storage medium for storing an application for executing any of the image processing methods described above at runtime.
  • In summary, the lens provided by the present application comprises an optical lens and a photosensitive chip, and further comprises a filter having a first filter portion and a second filter portion disposed between the optical lens and the photosensitive chip; a first object point is imaged on the photosensitive chip via the optical lens and the first filter portion, and a second object point via the optical lens and the second filter portion, the first filter portion being thicker than the second. This solves the technical problem in the related art that the depth of field of a lens is small: by placing filter portions of different thicknesses between the optical lens and the photosensitive chip, the depth of field of the lens is increased, so that objects at different object distances can each be imaged clearly in different regions of the photosensitive chip surface.
  • Figure 1 is a schematic view of the depth of field of the camera
  • FIG. 2 is a schematic view of a lens according to a first embodiment of the present application.
  • FIG. 3 is a schematic diagram of the optical path by which the filter extends the lens back focus, according to the first embodiment of the present application;
  • Figure 4 (a) is a schematic view showing the imaging of a general optical system
  • Figure 4 (b) is a schematic view of the optical path of the telephoto point in the optical system
  • Figure 4 (c) is a schematic view showing the imaging of an optical system using a stepped filter
  • Figure 5 (a) is a cross-sectional view of a two-step type filter and a corresponding top view
  • Figure 5 (b) is a cross-sectional view of the three-step type filter and a corresponding top view
  • Figure 5 (c) is a cross-sectional view of the four-step type filter and corresponding top view
  • FIG. 6(a) is a schematic view of the relative positions of the filter and the photosensitive chip when the boundary line of the filter runs in the vertical direction, according to the first embodiment of the present application;
  • FIG. 6(b) is a schematic view of the relative positions of the filter and the photosensitive chip when the boundary line of the filter runs in the horizontal direction, according to the first embodiment of the present application;
  • FIG. 7(a) is a schematic view showing a filter mounting structure for mounting a filter in a glued manner according to a first embodiment of the present application
  • FIG. 7(b) is a schematic view showing a filter mounting structure for mounting a filter in a bracket manner according to a first embodiment of the present application
  • Figure 8 (a) is a cross-sectional view of a lens having a step type filter according to a first embodiment of the present application
  • Figure 8 (b) is a plan view of a drive system of a lens having a step type filter according to a first embodiment of the present application;
  • FIG. 9 is a schematic view of a lens according to a second embodiment of the present application.
  • Figure 10 (a) is a cross-sectional view of a lens having a filter of uniform thickness in accordance with a second embodiment of the present application;
  • Figure 10 (b) is a plan view of a drive system of a lens having a filter of uniform thickness according to a second embodiment of the present application;
  • FIG. 12 is a schematic diagram of a parcel detection system in accordance with an embodiment of the present application.
  • FIG. 13 is a flowchart of an image processing method according to a first embodiment of the present application.
  • FIG. 14 is a flowchart of an image processing method according to a second embodiment of the present application.
  • FIG. 16 is a schematic diagram of an image processing method applied on an assembly line according to an embodiment of the present application.
  • a lens is provided below.
  • the lens includes an optical lens 1, a filter 2, and a photosensitive chip 3.
  • the filter 2 includes a first filter portion and a second filter portion.
  • The filter 2 is disposed between the optical lens 1 and the photosensitive chip 3, wherein the first object point is imaged on the photosensitive chip 3 via the optical lens 1 and the first filter portion, and the second object point is imaged on the photosensitive chip 3 via the optical lens 1 and the second filter portion, the thickness of the first filter portion being greater than that of the second filter portion.
  • In order to increase the depth of field of the camera, the filter 2 is configured to include a first filter portion and a second filter portion, the first being thicker than the second.
  • the incident surface of the first filter portion corresponds to the field of view of the distal depth of field region
  • the incident surface of the second filter portion corresponds to the field of view of the proximal depth of field region.
  • the far-end depth of field area is an area where the object distance (the vertical distance of the object point from the lens) is greater than the preset distance
  • the near-end depth of field area is the area where the object distance is less than the preset distance
  • the preset distance is the nominal object distance
  • the object distance of the first object point is greater than the object distance of the second object point.
  • the first object point may be an object point located in a field of view of the far-end depth of field region
  • the second object point may be an object point within a field of view of the near-end depth of field region.
  • Comparing the first and second object points, the first object point can be regarded as the far (distal) object point and the second as the near (proximal) object point.
  • the first object point is the distal object point A
  • the second object point is the proximal object point B.
  • The imaging light path of the distal object point A passes through the thicker end of the filter 2 (the first filter portion), while the imaging light path of the proximal object point B passes through the thinner end (the second filter portion).
  • the imaging beam of the near object point B is indicated by a thick solid line
  • the imaging beam of the far object point A is indicated by a thin solid line.
  • the image point corresponding to the near object point B is located at B1
  • the image point corresponding to the far object point A is located at A1, and A1 and B1 are located on the photosensitive chip 3, that is, on the same image plane.
  • Because the filter 2 has filter portions of different thicknesses, object points at both larger and smaller object distances can be imaged clearly on the image surface (photosensitive chip 3), so the total depth of field L3 (the sum of the near-end depth of field L1 and the far-end depth of field L2) is greatly increased.
  • Let the object distance be u, the image distance v, and the focal length of the lens f. The thin-lens imaging equation is
  • 1/f = 1/u + 1/v (1)
  • From equation (1), the image distance v corresponding to any object distance u can be calculated as v = uf/(u − f). It can be seen that as u increases, v decreases, i.e. the object and the image move in the same direction. A longer object distance therefore corresponds to a shorter image distance, and in that case a thicker filter 2 is required so that the effective image distance is correspondingly extended.
  • In this way, for different object distances, the change in image distance and the filter-2 thickness needed to compensate for it can be calculated.
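This compensation can be illustrated numerically. A plane-parallel plate of thickness t and refractive index n shifts an image-side focus backward by approximately t·(1 − 1/n); combining this with equation (1) gives the thickness step that equalizes the back focus for two object distances. The refractive index n = 1.5 used below is an assumed typical value for optical glass, not a figure from the patent, and the function names are illustrative:

```python
def image_distance(u_mm, f_mm):
    """Thin-lens image distance from equation (1): v = u*f / (u - f)."""
    return u_mm * f_mm / (u_mm - f_mm)

def plate_focus_shift(t_mm, n=1.5):
    """Backward focus shift caused by a plane-parallel plate of thickness t."""
    return t_mm * (1.0 - 1.0 / n)

def thickness_step(u_far_mm, u_near_mm, f_mm, n=1.5):
    """Filter-thickness difference that lets both object distances focus on
    the same image plane (the far point passing the thicker portion)."""
    dv = image_distance(u_near_mm, f_mm) - image_distance(u_far_mm, f_mm)
    return dv / (1.0 - 1.0 / n)

# 16 mm lens, far/near object distances roughly spanning the worked example
# that follows in the text (2.6 m and 1.62 m):
step = thickness_step(2600.0, 1620.0, 16.0)   # about 0.18 mm
```

The result, about 0.18 mm, is consistent with the roughly 0.2 mm thickness step used in the numerical example later in the text.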
  • the filter 2 disposed in the lens according to the embodiment of the present application may be a fully permeable filter, or may be a filter that selectively transmits light of a specific wavelength.
  • When the filter 2 has only two filter portions of different thicknesses, the depth of field of the lens after adding the filter 2 can be calculated from the thickness difference between the first and second filter portions together with the lens's original depth of field.
  • For example, with a 16 mm lens on a 3 MP 1/1.8" industrial camera, if the aperture is set to 4.0 and the lens is focused on a target 2 meters away, the actual depth-of-field range is approximately 1.62 to 2.6 meters.
  • the difference in thickness between the first filter portion and the second filter portion is 0.2 mm, and the lens is refocused.
  • The image area corresponding to the first filter portion still focuses most sharply on an object 2 meters away, with a depth-of-field range of 1.62 to 2.6 meters; meanwhile, in the image area corresponding to the second filter portion, the sharpest focus falls at about 1.37 meters, with a depth-of-field range of 1.18 to 1.62 meters. Therefore, for a lens whose original depth-of-field range is 1.62 to 2.6 meters, using the filter 2 increases the total depth-of-field range to 1.18 to 2.6 meters.
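The ranges quoted above can be roughly reproduced with standard depth-of-field formulas. The circle-of-confusion value used below (0.007 mm, a few pixels of a 1/1.8" 3 MP sensor) is an assumption chosen for illustration, not a number from the patent:

```python
def dof_limits(f_mm, n_stop, coc_mm, focus_mm):
    """Near/far limits of the depth of field (thin-lens approximation)."""
    hyper = f_mm * f_mm / (n_stop * coc_mm) + f_mm   # hyperfocal distance
    near = focus_mm * (hyper - f_mm) / (hyper + focus_mm - 2 * f_mm)
    far = (focus_mm * (hyper - f_mm) / (hyper - focus_mm)
           if focus_mm < hyper else float("inf"))
    return near, far

# 16 mm lens at f/4 focused at 2 m, assumed CoC of 0.007 mm:
near_mm, far_mm = dof_limits(16.0, 4.0, 0.007, 2000.0)   # ~1.64 m .. ~2.55 m
```

With this assumed circle of confusion the computed limits (about 1.64 m to 2.55 m) land close to the 1.62 to 2.6 m range quoted in the example; the formulas also confirm the trends listed earlier (stopping down, or shortening the focal length, enlarges the depth of field).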
  • In summary, the lens further includes a filter 2 with a first filter portion and a second filter portion, disposed between the optical lens 1 and the photosensitive chip 3. The first object point (the far object point) is imaged on the photosensitive chip 3 via the optical lens 1 and the first filter portion, and the second object point (the near object point) via the optical lens 1 and the second filter portion, the first filter portion being thicker than the second. This solves the technical problem in the related art that the depth of field of a camera lens is small: disposing the filter 2 between the optical lens 1 and the photosensitive chip 3 increases the camera's depth of field, so that objects at different object distances can be imaged clearly in different regions of the surface of the photosensitive chip 3.
  • the first filter portion and the second filter portion constitute a stepped structure.
  • the stepped structure may include two or more filter portions, and object points of different object distances may be imaged in different regions of the photosensitive chip 3 through filter portions of different thicknesses, respectively.
  • The number and thicknesses of the filter portions of the stepped filter can be set according to the specific shooting scene; by using the thickness of the stepped filter to adjust the focus position of the lens, objects at different object distances can be imaged clearly in different regions of the photosensitive surface.
  • The number of filter portions included in the filter 2 is not limited to the two shown in FIG. 2; there may be two or more, and two or more filter portions can likewise form a stepped structure. For example, the number of filter portions may be three, as shown in FIG. 5(b), or four, as shown in FIG. 5(c). In this way, by using the thickness variation of the stepped filter to adjust the position of the lens's focus point, objects at different object distances can be imaged clearly in different regions of the photosensitive surface.
  • The ratio of the areas of the incident surfaces of the first and second filter portions equals the ratio of the field of view of the far-end depth-of-field region to that of the near-end depth-of-field region.
  • the far-end depth of field area is an area where the object distance (the vertical distance of the object point from the lens) is greater than the preset distance
  • the near-end depth of field area is the area where the object distance is less than the preset distance.
  • For example, if the far-end and near-end depth-of-field regions have equal fields of view, the ratio of the areas of the incident surfaces of the first and second filter portions is 1:1;
  • if the ratio of the field of view of the far-end depth-of-field region to that of the near-end region is 2:1, the ratio of the areas of the incident surfaces of the first and second filter portions is likewise 2:1.
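As a sketch of this sizing rule (the function name and dimensions are illustrative, not from the patent), the incident-surface widths can be split in proportion to the two fields of view:

```python
def split_widths(total_width_mm, fov_far, fov_near):
    """Split the filter's total width between the thick (far-field) and
    thin (near-field) portions in proportion to their fields of view."""
    far_w = total_width_mm * fov_far / (fov_far + fov_near)
    return far_w, total_width_mm - far_w

# Equal fields of view -> 1:1 split; a 2:1 field of view -> 2:1 split.
equal = split_widths(12.0, 1, 1)     # (6.0, 6.0)
wide_far = split_widths(12.0, 2, 1)  # (8.0, 4.0)
```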
  • Fig. 4(a) is a schematic view showing the imaging of a general optical system.
  • Fig. 4(b) is a schematic view showing the optical path of the telephoto point in the optical system.
  • Fig. 4(c) is a schematic view showing the imaging of an optical system using a stepped filter.
  • the imaging beams of the solid and dashed portions in the figure represent the imaging beams between the object images of the left and right fields of view, respectively, and the object distances on both sides are the same, so the image distance is also equal (where 1 is an optical lens, and 3 is a photosensitive chip).
  • the object point D on the left side of the object side of the optical system forms an image point D1 on the photosensitive chip 3, and the object point C on the right side of the object side of the optical system is in a defocused position, such as As indicated by the dotted line in the figure, because the object distance increases, according to the imaging optical theorem, the image distance is shortened correspondingly, that is, the imaging beam has already converged before the surface of the photosensitive chip 3 (the image point is C1), and the surface of the photosensitive chip 3 corresponds. The imaging beam at object point C has been diverged into a small round spot. The image is unclear at this time. As shown in FIG.
  • the object point D on the left of the object side of the optical system forms an image point D2 on the photosensitive chip 3; because a stepped filter is used in the optical system, although the object point C on the right of the object side is at a defocused position, on the corresponding image side its light passes through the thicker end of the filter 2, so the convergence point moves backward (toward the image side) and forms the image point C2 on the photosensitive chip.
  • the greater the thickness of the filter 2, the farther backward the convergence point of the light is shifted.
  • the objects on the left and right sides of the field of view have different object distances, and the image distances are also different.
  • adding the filter 2 on the image side extends the back focus; by giving the filter different thicknesses in the left and right fields of view, the back focus on both sides can be made exactly equal, so that objects in the left and right fields of view, even at different object distances, can be focused clearly on the same image plane at the same time.
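The backward shift of the convergence point caused by a transparent plate follows the standard paraxial result for a plane-parallel plate in a converging beam; the patent gives no formula, so the sketch below assumes the usual relation Δ = t·(1 − 1/n) and an illustrative refractive index:

```python
def focal_shift(thickness_mm, n=1.5):
    """Backward shift of the convergence point caused by a plane-parallel
    transparent plate of the given thickness and refractive index n,
    using the paraxial relation delta = t * (1 - 1/n)."""
    return thickness_mm * (1.0 - 1.0 / n)

# A thicker step shifts the image point farther back, which is why the
# thick end of filter 2 compensates for the shorter image distance of a
# far object point. For n = 1.5, each millimetre of glass adds ~1/3 mm.
print(round(focal_shift(3.0), 6))  # 1.0
```

Under this assumption, making one step of the filter 1.5 mm thicker than the other moves the corresponding convergence point back by about 0.5 mm, which is how the two fields of view come to share one image plane.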
  • the filter 2 comprises a filter having a transparent stepped structure.
  • Fig. 5(a) is a cross-sectional view of a two-step type filter and a corresponding plan view thereof.
  • Fig. 5(b) is a cross-sectional view of a three-step type filter and a corresponding plan view.
  • Fig. 5(c) is a cross-sectional view of a four-step type filter and a corresponding plan view.
  • a two-step filter includes two filter portions; correspondingly, a three-step filter includes three filter portions, and a four-step filter includes four filter portions.
  • the filter can be made from two layers of flat glass (or another flat transparent optical material) bonded together with optical glue (see the two figures on the left side of Figure 5(a): the upper left figure is a top view of the filter, and the lower left figure is its corresponding cross-sectional view).
  • the size ratio of the thick and thin regions of the filter can be determined according to the actual application scenario. For example, if the far-end depth of field region and the near-end depth of field region to be detected have the same field of view, the width ratio of the thick region to the thin region of the filter can be set to 1:1.
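The sizing rule above can be sketched as a small helper that splits the filter width in proportion to the two fields of view; the function name and the total-width parameter are illustrative, not from the patent:

```python
def step_widths(total_width_mm, far_fov, near_fov):
    """Split the filter width between the thick region (covering the
    far-end depth of field area) and the thin region (covering the
    near-end area) in proportion to the two fields of view."""
    thick = total_width_mm * far_fov / (far_fov + near_fov)
    return thick, total_width_mm - thick

print(step_widths(20.0, 1, 1))  # equal fields of view -> 1:1 split
print(step_widths(20.0, 2, 1))  # 2:1 fields of view -> 2:1 split
```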
  • the filter can also be cut into a stepped shape from a single piece of the same transparent optical material (as shown in the two figures on the right side of Figure 5(a): the upper right figure is the top view of the filter, and the lower right figure is the corresponding cross-sectional view).
  • FIG. 5(b) and 5(c) are similar to FIG. 5(a) and will not be described here.
  • the two figures on the left side in FIG. 5(b) are a plan view and a cross-sectional view of a filter made of a flat type transparent optical material
  • the two figures on the right side are a plan view and a cross-sectional view of a filter made by cutting a single piece of the same transparent optical material into a stepped shape, where the upper drawing is the plan view and the lower drawing is the cross-sectional view.
  • if the filter is given a multi-step (at least three-step) structure, the depth of field of the camera can be increased further.
  • however, the portion of the camera's photosensitive area that monitors each particular depth of field region will shrink accordingly.
  • four or more filter portions may be included, and are not specifically limited herein.
  • the surface of the photosensitive chip 3 is provided with a cover glass, and the filter is glued to the surface of the cover glass.
  • the filter is optionally secured to the printed circuit board by a bracket.
  • Fig. 7 (a) is a schematic view showing the mounting structure of the filter 2 for mounting the filter 2 in a gluing manner according to the first embodiment of the present application.
  • Fig. 7 (b) is a schematic view showing the mounting structure of the filter 2 in which the filter 2 is mounted in a bracket manner according to the first embodiment of the present application.
  • the filter 2 is fixed to the surface of the photosensitive chip by gluing. Since the surface of the photosensitive chip generally carries a protective cover glass, the stepped filter 2 can be bonded directly to the surface of the cover glass with the optical adhesive 10. Special care must be taken when gluing not to let the adhesive overflow, as overflowing adhesive would noticeably degrade the image.
  • the filter 2 is held by a holder 7, and the holder 7 is fixed to the printed circuit board 8 (PCB) with screws 9.
  • the photosensitive chip 3 is glued to the printed circuit board 8.
  • the filter 2 is connected to the control member via a transmission member so that it can be controlled and moved to a target position; at the target position, the imaging optical path of the first object point passes through the first filter portion and the imaging optical path of the second object point passes through the second filter portion.
  • Figure 8(a) is a cross-sectional view of a lens having a step type filter 2 according to a first embodiment of the present application.
  • Fig. 8 (b) is a plan view of a drive system of a lens having a step type filter 2 according to the first embodiment of the present application.
  • the stepped filter 2 is fixed to the carrier member 14.
  • the photosensitive chip 3 is glued to the PCB board 16.
  • the thin and thick regions of the stepped filter correspond to the region M and the region N, respectively; that is, the thin region of the stepped filter corresponds to region M and the thick region corresponds to region N.
  • the stepped filter 2 is connected to the lead screw 18 via a carrier member 14.
  • a screw-nut pair 19 on the lead screw 18 is connected to the motor 20; the motor 20 can drive the lead screw 18 to move along the sliding guide rail 22 (through the sliding hole 23), which moves the carrier member 14 and thereby moves the stepped filter 2 to the target position.
  • the incident surface and the exit surface of the filter 2 are plated with an optical anti-reflection film and/or an infrared cut-off coating.
  • the optical anti-reflection film can reduce the reflection of incident light and improve the optical imaging quality of the camera.
  • the lens includes an optical lens 1, a photosensitive chip 3, and a filter 4.
  • the filter 4 is disposed between the optical lens 1 and the photosensitive chip 3, wherein the first object point is imaged on the photosensitive chip 3 via the optical lens 1 and the filter 4, and the second object point is imaged on the photosensitive chip 3 via the optical lens 1 alone.
  • the object distance of the first object point is greater than the object distance of the second object point; that is, comparing the first object point and the second object point, the first can be regarded as the far object point and the second as the near object point.
  • the filter 4 is a filter of uniform thickness disposed between the optical lens 1 and the photosensitive chip 3; the imaging position of the first object point (E) on the photosensitive chip 3 through the optical lens 1 and the filter 4 is the first image point (E1), the imaging position of the second object point (F) on the photosensitive chip 3 through the optical lens 1 is the second image point (F1), and the object distance of the first object point is greater than the object distance of the second object point.
  • E1 and F1 are both located above the photosensitive chip 3, that is, object points of different object distances are imaged on the same image surface.
  • when the image point falls on the photosensitive chip 3, or lies within a certain (sufficiently small) distance of it, the image can be considered clear.
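The "sufficiently small distance" criterion can be made concrete with a circle-of-confusion argument: by similar triangles, a beam converging a distance d behind (or in front of) the sensor spreads into a spot of roughly D·d/v on the sensor, where D is the aperture diameter and v the image distance. The numbers and the sharpness threshold below are illustrative assumptions, not values from the patent:

```python
def blur_spot_diameter(aperture_mm, image_dist_mm, defocus_mm):
    """Approximate diameter of the blur spot on the sensor for a beam that
    converges defocus_mm away from the sensor plane (similar triangles)."""
    return aperture_mm * defocus_mm / image_dist_mm

def looks_sharp(aperture_mm, image_dist_mm, defocus_mm, coc_mm=0.02):
    """The image counts as 'clear' while the blur spot stays below an
    assumed circle-of-confusion threshold (here 0.02 mm)."""
    return blur_spot_diameter(aperture_mm, image_dist_mm, defocus_mm) <= coc_mm

print(looks_sharp(10.0, 50.0, 0.05))  # small defocus: still sharp
print(looks_sharp(10.0, 50.0, 0.50))  # large defocus: blurred
```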
  • because the filter 4 is disposed between the optical lens 1 and the photosensitive chip 3, the imaging position of the first object point, which has the larger object distance, is shifted backward relative to the optical lens, so that it is imaged on the photosensitive chip 3.
  • This embodiment increases the depth of field of the original camera.
  • suppose the optical path through the first object point E, the optical lens 1, the filter 4, and the first image point E1 is the first optical path, and its depth of field is called the first depth of field (L4);
  • the optical path through the second object point F, the optical lens 1, and the second image point F1 is the second optical path, and its depth of field is called the second depth of field (L5).
  • if the near limit of the first optical path is closer than the far limit of the second optical path, the depths of field of the two optical paths partially overlap, but the total depth of field is still greater than the depth of field of either the first or the second optical path alone (the total depth of field L6 is greater than both L4 and L5); in this case, the effect of increasing the depth of field of the camera is still achieved.
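The overlap argument can be checked numerically; the interval endpoints below are invented for illustration (the patent gives no actual distances):

```python
def total_depth_of_field(path1, path2):
    """Union of two overlapping depth-of-field intervals, each given as
    (near limit, far limit) in object distance. The assertion enforces the
    condition in the text: the ranges must overlap (or touch) so that the
    combined depth of field is one continuous interval."""
    (n1, f1), (n2, f2) = path1, path2
    assert max(n1, n2) <= min(f1, f2), "depth of field ranges must overlap"
    return (min(n1, n2), max(f1, f2))

# First path (through the filter) covers 3-5 m, second path 2-3.5 m;
# the total L6 = 2-5 m exceeds both L4 and L5 individually.
print(total_depth_of_field((3.0, 5.0), (2.0, 3.5)))  # (2.0, 5.0)
```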
  • the lens provided by this embodiment includes an optical lens 1, a photosensitive chip 3, and a filter 4 disposed between them, wherein the first object point (the far object point) is imaged on the photosensitive chip 3 via the optical lens 1 and the filter 4, and the second object point (the near object point) is imaged on the photosensitive chip 3 via the optical lens 1 alone; this solves the technical problem in the related art that the depth of field of a camera lens is small. By providing a filter 4 of uniform thickness between the optical lens 1 and the photosensitive chip 3, the depth of field of the lens is increased, so that objects at different object distances can be imaged clearly in different regions of the surface of the photosensitive chip 3.
  • because the filter 4 is a filter of uniform thickness, it is easier to obtain and easier to handle.
  • the central axis of the filter 4 is parallel to the optical axis of the optical lens, and the central axis has a predetermined distance from the optical axis.
  • so that the imaging light path of the first object point passes through the filter 4 while the imaging light path of the second object point does not, the central axis of the filter 4 must be adjusted to be parallel to the optical axis of the optical lens, with a preset distance between the central axis and the optical axis; at the preset distance, as much as possible of the imaging light path of the far object point passes through the filter 4.
  • the value of the preset distance can be adjusted according to the ratio of the far-object-end field of view to the near-object-end field of view, so that the ratio of the area of the optical path passing through the filter 4 to the area of the optical path not passing through the filter 4 is close to the ratio of the far-object-end field of view (corresponding to the field of view of the far-end depth of field region in the first embodiment of the present application) to the near-object-end field of view (corresponding to the field of view of the near-end depth of field region in the first embodiment of the present application).
  • the filter 4 is connected to the control unit via a transmission member so that it can be controlled and moved to a target position; at the target position, the imaging light path of the first object point passes through the filter 4 and the imaging light path of the second object point does not pass through the filter 4.
  • Figure 10 (a) is a cross-sectional view of a lens of a filter 4 having a uniform thickness according to a second embodiment of the present application.
  • Figure 10 (b) is a plan view of a drive system of a lens having a filter of uniform thickness in accordance with a second embodiment of the present application.
  • the filter 4 of uniform thickness is fixed to the carrier member.
  • the photosensitive chip 3 is glued to the PCB board 16.
  • the filter of uniform thickness corresponds to the area P (the area other than the P on the carrier is air).
  • the filter 4 of uniform thickness is connected to the lead screw 18 via the carrier member 14.
  • a screw-nut pair 19 on the lead screw 18 is connected to the motor 20; the motor 20 can drive the lead screw 18 to move along the guide rail 22 (through the sliding hole 23), which moves the carrier member 14 so that the filter 4 of uniform thickness can be moved to the target position.
  • a camera is further provided, which includes any of the lenses provided by the embodiments of the present application.
  • the camera further comprises: an integrated circuit chip connected to the photosensitive chip, for performing processing on the electrical signal generated on the photosensitive chip to obtain a first processing result; at least one voltage signal conversion circuit connected to the integrated circuit chip, for performing conversion on the voltage signals input to or output from the integrated circuit chip; and a color coding circuit connected to the integrated circuit chip, for performing color coding processing on the first processing result to obtain a second processing result.
  • FIG. 11 is a schematic diagram of a video camera in accordance with an embodiment of the present application.
  • the camera includes a lens 24, an integrated circuit chip 25, a first voltage signal conversion circuit 26, a second voltage signal conversion circuit 27, and a color coding circuit 28, wherein the lens 24 includes an optical lens 112, a filter 114, and a photosensitive chip 116.
  • the filter 114 may be a step filter or a filter of uniform thickness.
  • the integrated circuit chip 25 is connected to the photosensitive chip 116 for performing processing on the electrical signal generated on the photosensitive chip to obtain a first processing result;
  • the first voltage signal conversion circuit 26 is connected to the integrated circuit chip 25, for performing conversion on a voltage signal input to the integrated circuit chip 25;
  • the second voltage signal conversion circuit 27 is connected to the integrated circuit chip 25, for performing conversion on a voltage signal output from the integrated circuit chip 25;
  • the color coding circuit 28 is connected to the integrated circuit chip 25, for performing color coding processing on the first processing result to obtain a second processing result.
  • the photosensitive chip 116 converts the received optical signal into an electrical signal
  • the integrated circuit chip 25 performs processing on the electrical signal generated on the photosensitive chip.
  • the voltage signal conversion circuits can convert the voltage signal generated by the integrated circuit chip 25 so that it can be transmitted to other processing modules, or convert signals generated by other processing modules into signals that the integrated circuit chip 25 can receive.
  • the color encoding circuit 28 can encode the processing result output by the integrated circuit chip 25 (for example, RGB, YUV, etc.).
  • the first voltage signal conversion circuit 26 or the second voltage signal conversion circuit 27 can be implemented by an IO port module, and the integrated circuit chip 25 can be implemented by a SOC (System on Chip) module.
  • a parcel detection system is also provided.
  • the package inspection system includes any of the cameras provided herein.
  • the package detection system comprises a camera that includes a lens, an integrated circuit chip, at least one voltage signal conversion circuit, and a color coding circuit. The lens includes an optical lens, a photosensitive chip, and a filter; the filter includes a first filter portion and a second filter portion and is disposed between the optical lens and the photosensitive chip, wherein the first object point is imaged on the photosensitive chip via the optical lens and the first filter portion, the second object point is imaged on the photosensitive chip via the optical lens and the second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion. The integrated circuit chip is connected to the photosensitive chip, for performing processing on the electrical signal generated on the photosensitive chip to obtain a first processing result; the at least one voltage signal conversion circuit is connected to the integrated circuit chip, for performing conversion on the voltage signals input to or output from the integrated circuit chip; and the color coding circuit is connected to the integrated circuit chip, for performing color coding processing on the first processing result to obtain a second processing result.
  • FIG. 12 is a schematic illustration of a package inspection system in accordance with an embodiment of the present application.
  • the package detection system includes: a camera module 32, a post-processing circuit 29, a laser trigger 30, and a fill light 31.
  • the camera module 32 includes a lens 24, an integrated circuit chip 25, a first voltage signal conversion circuit 26, a second voltage signal conversion circuit 27, and a color coding circuit 28; the lens 24 includes an optical lens 112, a filter 114, and a photosensitive chip 116.
  • the filter 114 may be a step filter or a filter of uniform thickness.
  • the color coding circuit 28 of the camera module 32 is connected to the post-processing circuit 29, the laser trigger 30 is connected to the first voltage signal conversion circuit 26 of the camera module 32, and the fill light 31 is connected to the second voltage signal conversion circuit 27 of the camera module 32.
  • the object to be tested is imaged on the photosensitive chip 116 through the optical lens 112 and the filter 114.
  • the laser trigger 30 sends a trigger signal; after receiving the signal, the camera module 32 takes a picture of the object to be tested, encodes the captured picture through the color coding circuit 28, and outputs the processing result to the post-processing circuit 29.
  • the post-processing circuit 29 performs defect detection, color discrimination, size measurement, three-dimensional imaging, barcode recognition, counting, and the like according to actual detection requirements (ie, performs preset processing).
  • the fill light 31 receives, through the second voltage signal conversion circuit 27, control information generated by the integrated circuit chip 25, to achieve synchronization between illumination and capture, or to control its illumination intensity, color, on/off state, regional illumination, etc., to satisfy diverse lighting needs.
  • an image processing method is further provided.
  • FIG. 13 is a flowchart of an image processing method according to a first embodiment of the present application. As shown in FIG. 13, the method includes the following steps:
  • Step S1302: acquire a first target image and a second target image, respectively, wherein the first target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the first filter portion, and the second target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the second filter portion; the image to be processed is obtained by photographing the target object to be measured through a lens, the lens includes an optical lens, a photosensitive chip, and a filter, the filter is disposed between the optical lens and the photosensitive chip, the filter includes a first filter portion and a second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion.
  • the image preferred in the present application is the one in which the far end of the scene is imaged on the photosensitive chip through the first filter portion and the near end through the second filter portion. However, an object point at the far end may also form a blurred image on the photosensitive chip through the second filter portion, and similarly the near end may form a blurred image on the photosensitive chip through the first filter portion. Therefore, after the camera captures the image (the image to be processed), the image must be further processed to obtain the image formed on the photosensitive chip through the first filter portion for the far end and through the second filter portion for the near end; the final image obtained in this way is the clearest.
  • Step S1304: determine whether there is an overlapping area of the target object image in the first target image and the second target image.
  • the first target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the first filter portion
  • the second target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the second filter portion
  • Step S1306: if it is determined that there is an overlapping area of the target object image in the first target image and the second target image, perform deduplication processing on the overlapping area in the image to be processed.
  • suppose it is determined that there are overlapping regions of the image of the target object to be tested in the first target image and the second target image. For example, both image 1 and image 2 contain an image of the first end of the object to be tested (where image 1 is relatively blurred); the two are superimposed together, reducing the sharpness of the image to be processed. According to the embodiment of the present application, image 1 should be removed and only image 2 retained in the image to be processed.
  • whether there is an overlapping area of the target object image in the first target image and the second target image is related to factors such as the relative position of the lens and the shooting scene, and the shooting scene itself. This application does not limit the specific reasons for overlapping areas.
  • the image processing method includes: acquiring the first target image and the second target image, respectively, wherein the first target image is the part of the image to be processed formed on the photosensitive chip via the optical lens and the first filter portion, and the second target image is the part formed on the photosensitive chip via the optical lens and the second filter portion; the image to be processed is obtained by photographing the object to be tested through the lens, the lens includes an optical lens, a photosensitive chip, and a filter disposed between the optical lens and the photosensitive chip, the filter comprises a first filter portion and a second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion; determining whether there is an overlapping area of the target object image in the first target image and the second target image; and, if so, performing deduplication processing on the overlapping area in the image to be processed. By deduplicating the images formed through the filter portions of different thickness, different regions of the finally acquired image are guaranteed to correspond to object points at different object distances; that is, the acquired image contains the imaging of the far object point and the near object point on different regions of the photosensitive chip, and because the depth of field of the lens is increased, the acquired image is clearer.
  • the method further comprises: acquiring a first ratio and a second ratio, respectively, wherein the first ratio is the proportion of the image of the object to be tested in the first region of the image to be processed, and the second ratio is the proportion of the image of the object to be tested in the second region of the image to be processed; determining whether the difference between the first ratio and the second ratio reaches a preset threshold, wherein the first ratio is greater than the second ratio; and, if it is determined that the difference reaches the preset threshold, adjusting the position of the filter to increase the second ratio and reacquiring the image to be processed.
  • the filter of the camera is disposed between the optical lens of the camera and the photosensitive chip of the camera.
  • when an object point is imaged through the filter, its corresponding image point is shifted backward relative to the optical lens of the camera; that is, the filter extends the back focus of the camera. If, when shooting, the far end of the object to be measured (the end farther from the camera) is imaged through the filter, the imaging of the far end can be made clearer.
  • the filter is disposed between the optical lens and the photosensitive chip, there is a possibility that the camera cannot recognize the entire object to be measured when the size of the object to be measured is large.
  • the position of the filter can be adjusted according to the imaging condition of the initial image to ensure that the entire object to be tested is imaged on the photosensitive chip. After the filter is adjusted, the image to be processed can be captured.
  • the initial image is divided into different regions according to a preset rule.
  • the initial image may be evenly divided into two regions (ie, the first region and the second region), or the initial image may be divided into a plurality of regions (including the first region and the second region).
  • the first ratio is the proportion of the object to be tested in the first region
  • the second ratio is the proportion of the object to be tested in the second region
  • the thicker end of the filter corresponds to the far end of the scene, and the thinner end of the filter corresponds to the near end.
  • otherwise, the imaging area corresponding to the near end may not be able to image the entire near end completely.
  • the filter movement may be controlled by a motor; that is, when it is determined that the difference between the first ratio and the second ratio reaches the preset threshold, a motor control signal is generated and sent to the motor, the motor drives the transmission member, and the transmission member in turn moves the filter.
  • initially, the filter may be set to a position where the central axis of the filter coincides with the optical axis of the optical lens.
  • the filter is moved according to the shooting effect of the initial image.
  • adjusting the position of the filter to increase the second ratio comprises: determining a target movement amount according to the first ratio, the second ratio, and the areas of the first region and the second region (or the first ratio, the second ratio, and the areas of the image of the object to be tested on the first region and the second region); and moving the filter by the target movement amount in a target direction, wherein the target direction is the direction in which the first region points toward the second region. In this way, the target movement amount x can be solved.
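The patent only states that x "can be solved" from these quantities without giving the relation; one plausible sketch, under the assumption that moving the filter edge transfers object-image area between the two regions linearly, so that x is chosen to balance the two proportions:

```python
def target_movement(first_ratio, second_ratio, area1, area2, region_width_mm):
    """Illustrative solve for the target movement amount x. The
    object-image areas in the two regions are first_ratio*area1 and
    second_ratio*area2; the filter is moved toward the second region by a
    fraction of the region width proportional to the imbalance between
    the two areas. This formula is an assumption for illustration, not
    the patent's derivation."""
    s1 = first_ratio * area1
    s2 = second_ratio * area2
    imbalance = (s1 - s2) / (s1 + s2)
    return region_width_mm * imbalance / 2.0

# First region 80% covered, second only 20% (equal region areas): move
# the filter toward the second region by 0.3 * half the region width.
print(target_movement(0.8, 0.2, 100.0, 100.0, 10.0))  # 3.0
```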
  • FIG. 14 is a flowchart of an image processing method according to a second embodiment of the present application. This embodiment can be used as a preferred embodiment of the embodiment shown in FIG. As shown in FIG. 14, the method includes the following steps:
  • Step S1402: image acquisition.
  • the image is the image to be processed as described above.
  • the image is divided into a first target image and a second target image according to the setting of the filter (or the thickness of the filter).
  • the first target image is an image formed on the photosensitive chip by the thicker region of the filter
  • the second target image is an image formed on the photosensitive chip by the thinner region of the filter.
  • the repeated object images in the first target image and the second target image are excluded according to their morphological features (that is, deduplication processing is performed).
  • Step S1412: the result is output.
  • the first target image and the second target image are obtained by partitioning the image; the object to be tested is identified in both the first target image and the second target image, and repeated object images are judged and screened out, so that the finally obtained image is clearer.
  • the filter is reset to a preset initial position each time the initial image is acquired. Since the position of the filter when shooting different scenes is uncertain (it is determined by the near and far ends of the object to be measured), the position of the filter relative to the object to be tested must be readjusted when the scene changes. For cameras used in a fixed position, the filter can be adjusted once to a fixed effective position. For cameras whose position is not fixed, the filter is reset to the preset initial position each time an initial image is acquired.
  • FIG. 15 is a schematic diagram of an image processing method applied in intelligent transportation according to an embodiment of the present application.
  • Cameras used in intelligent transportation must be installed obliquely with respect to the monitored scene. Because the scene the camera faces then contains both a near view and a distant view, it may not be possible to bring both into focus at the same time.
  • the camera uses a stepped filter (or a filter of uniform thickness), which allows both the near and distant parts of the scene to fall within the camera's depth of field. In this case, the filter can be adjusted to the optimal position for the scene being shot when the camera is first used, and need not be readjusted in later use.
  • the image processing method of the present application can be utilized to process the image to achieve image sharpening.
  • FIG. 16 is a schematic diagram of an image processing method applied on a production line according to an embodiment of the present application.
  • the left side is a cross-sectional view of the object to be tested, and includes an object R to be measured and an object S to be measured, wherein the height of the object R to be measured is greater than the height of the object S to be tested.
  • the camera is mounted above the production line (a taller object to be measured is closer to the camera, and a shorter object is farther from it); as an object passes through the camera's field of view, it can be counted, quality-inspected, barcode-recognized, etc. by intelligent algorithms.
  • a step-type filter (or a filter of uniform thickness) can be used on the camera, so that a part of the photographic element of the camera monitors a distant object. And another part of the pixel detects a closer object.
  • the measured objects passing through the entire total depth of field can be effectively identified and monitored.
  • the image processing method of the embodiment of the present application can be utilized to obtain a clearer image of the measured object.
  • According to an embodiment of the present application, an image processing method is further provided. This method is applied to an image to be processed captured by a camera having a filter of uniform thickness.
  • Step S1702: respectively acquire a first target image and a second target image, wherein the first target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens and the filter, and the second target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens without passing through the filter; the image to be processed is obtained by photographing a target object through a lens, the lens comprising an optical lens, a photosensitive chip, and a filter of uniform thickness disposed between the optical lens and the photosensitive chip.
  • Step S1704: determine whether the images of the target object in the first target image and the second target image have an overlapping area.
  • Step S1706: if it is determined that the images of the target object in the first target image and the second target image have an overlapping area, perform deduplication processing on the overlapping area in the image to be processed.
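The overlap test and deduplication of steps S1702 to S1706 can be sketched with bounding boxes. This is a minimal sketch, not the patent's implementation; the (x1, y1, x2, y2) box representation and the 0.5 threshold are assumptions:

```python
def overlap_area(box_a, box_b):
    """Intersection area of two boxes given as (x1, y1, x2, y2)."""
    w = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    h = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(0, w) * max(0, h)

def deduplicate(filtered_dets, unfiltered_dets, threshold=0.5):
    """Keep a detection from the unfiltered image only if it does not
    substantially overlap a detection already found in the filtered image."""
    kept = list(filtered_dets)
    for box in unfiltered_dets:
        area = (box[2] - box[0]) * (box[3] - box[1])
        if all(overlap_area(box, other) < threshold * area for other in kept):
            kept.append(box)
    return kept

# One object detected in both images plus one seen only without the filter.
merged = deduplicate([(0, 0, 10, 10)], [(1, 1, 9, 9), (20, 20, 30, 30)])
```

The duplicate of the near-region detection is dropped, while a detection with no counterpart survives, which is the effect step S1706 describes.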
  • The image processing method according to this embodiment includes: respectively acquiring a first target image and a second target image, wherein the first target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens and the filter, the second target image is the portion formed on the photosensitive chip through the optical lens without passing through the filter, the image to be processed is obtained by photographing a target object through a lens, and the lens comprises an optical lens, a photosensitive chip, and a filter of uniform thickness disposed between the optical lens and the photosensitive chip; determining whether the images of the target object in the first target image and the second target image have an overlapping area; and, if so, performing deduplication processing on the overlapping area in the image to be processed.
  • This solves the technical problem in the related art that the small depth of field of the camera results in poor definition of the captured image. By deduplicating the region where the portion of the image formed through the filter overlaps the portion formed without the filter, it is ensured that different regions of the final image correspond to object points at different object distances; that is, the acquired image consists of the image points of far and near object points on different regions of the photosensitive chip. Because the depth of field of the lens is increased, the captured image is clearer.
  • In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • For example, the division into units may be a division by logical function; in actual implementation there may be other division manners: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • According to an embodiment of the present application, an application program is further provided for executing, at runtime, the image processing method provided by the embodiments of the present application.
  • The image processing method provided by the first embodiment of the present application includes: respectively acquiring a first target image and a second target image, wherein the first target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens and the first filter portion, the second target image is the portion formed on the photosensitive chip through the optical lens and the second filter portion, the image to be processed is obtained by photographing a target object through a lens, and the lens comprises an optical lens, a photosensitive chip, and a filter disposed between the optical lens and the photosensitive chip, the filter comprising a first filter portion and a second filter portion, the thickness of the first filter portion being greater than that of the second filter portion; determining whether the images of the target object in the first target image and the second target image have an overlapping area; and, if so, performing deduplication processing on the overlapping area in the image to be processed.
  • In a specific implementation of the present application, before respectively acquiring the first target image and the second target image, the method further includes: respectively acquiring a first ratio and a second ratio, wherein the first ratio is the proportion occupied by the target object image in a first region of the image to be processed, and the second ratio is the proportion occupied by the target object image in a second region of the image to be processed; determining whether the difference between the first ratio and the second ratio reaches a preset threshold, the first ratio being greater than the second ratio; and, if the difference reaches the preset threshold, adjusting the position of the filter to increase the second ratio and reacquiring the image to be processed.
  • In a specific implementation of the present application, adjusting the position of the filter to increase the second ratio includes: determining a target movement amount according to the first ratio, the second ratio, the area of the first region, and the area of the second region; and moving the filter in a target direction by the target movement amount, wherein the target direction is the direction from the first region to the second region.
  • The image processing method provided by the second embodiment of the present application includes: respectively acquiring a first target image and a second target image, wherein the first target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens and the filter, the second target image is the portion formed on the photosensitive chip through the optical lens without passing through the filter, the image to be processed is obtained by photographing a target object through a lens, and the lens comprises an optical lens, a photosensitive chip, and a filter of uniform thickness disposed between the optical lens and the photosensitive chip; determining whether the images of the target object in the first target image and the second target image have an overlapping area; and, if so, performing deduplication processing on the overlapping area in the image to be processed.
  • The application program embodiment is substantially similar to the method embodiments, so its description is relatively brief; for relevant details, refer to the description of the method embodiments.
  • It is easy to see that running the application program provided by the embodiments of the present application can increase the depth of field of the camera, so that both object points at a large object distance and object points at a small object distance can be clearly imaged on the photosensitive chip of the camera lens.
  • According to an embodiment of the present application, a storage medium is further provided for storing an application program, the application program being used to execute, at runtime, the image processing method provided by the embodiments of the present application.
  • The image processing method provided by the first embodiment of the present application includes: respectively acquiring a first target image and a second target image, wherein the first target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens and the first filter portion, the second target image is the portion formed on the photosensitive chip through the optical lens and the second filter portion, the image to be processed is obtained by photographing a target object through a lens, and the lens comprises an optical lens, a photosensitive chip, and a filter disposed between the optical lens and the photosensitive chip, the filter comprising a first filter portion and a second filter portion, the thickness of the first filter portion being greater than that of the second filter portion; determining whether the images of the target object in the first target image and the second target image have an overlapping area; and, if so, performing deduplication processing on the overlapping area in the image to be processed.
  • In a specific implementation of the present application, before respectively acquiring the first target image and the second target image, the method further includes: respectively acquiring a first ratio and a second ratio, wherein the first ratio is the proportion occupied by the target object image in a first region of the image to be processed, and the second ratio is the proportion occupied by the target object image in a second region of the image to be processed; determining whether the difference between the first ratio and the second ratio reaches a preset threshold, the first ratio being greater than the second ratio; and, if the difference reaches the preset threshold, adjusting the position of the filter to increase the second ratio and reacquiring the image to be processed.
  • In a specific implementation of the present application, adjusting the position of the filter to increase the second ratio includes: determining a target movement amount according to the first ratio, the second ratio, the area of the first region, and the area of the second region; and moving the filter in a target direction by the target movement amount, wherein the target direction is the direction from the first region to the second region.
  • The image processing method provided by the second embodiment of the present application includes: respectively acquiring a first target image and a second target image, wherein the first target image is the portion of the image to be processed formed on the photosensitive chip through the optical lens and the filter, the second target image is the portion formed on the photosensitive chip through the optical lens without passing through the filter, the image to be processed is obtained by photographing a target object through a lens, and the lens comprises an optical lens, a photosensitive chip, and a filter of uniform thickness disposed between the optical lens and the photosensitive chip; determining whether the images of the target object in the first target image and the second target image have an overlapping area; and, if so, performing deduplication processing on the overlapping area in the image to be processed.
  • The storage medium embodiment is substantially similar to the method embodiments, so its description is relatively brief; for relevant details, refer to the description of the method embodiments.
  • It is easy to see that running the application program stored in the storage medium provided by the embodiments of the present application can increase the depth of field of the camera, so that both object points at a large object distance and object points at a small object distance can be clearly imaged on the photosensitive chip of the camera lens.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Blocking Light For Cameras (AREA)
  • Studio Devices (AREA)
  • Lenses (AREA)

Abstract

The present invention discloses a lens, a camera, a package detection system, and an image processing method. The lens comprises an optical lens and a photosensitive chip, and further comprises a filter comprising a first filter portion and a second filter portion, the filter being disposed between the optical lens and the photosensitive chip, wherein a first object point is imaged on the photosensitive chip through the optical lens and the first filter portion, a second object point is imaged on the photosensitive chip through the optical lens and the second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion. The present invention solves the technical problem in the related art that the depth of field of a lens is small.

Description

一种镜头、摄像机、包裹检测系统和图像处理方法
本申请要求于2015年8月18日提交中国专利局、申请号为201510508464.8、发明名称为“镜头、摄像机、包裹检测系统和图像处理方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及机器视觉领域,具体而言,涉及一种镜头、摄像机、包裹检测系统和图像处理方法。
背景技术
景深是摄影机镜头能清晰成像的前提下,被摄物体前后允许的距离范围,也即当摄像机的镜头对某一物体聚焦清晰时,垂直镜头光轴的同一平面(即物面)上的物方点,都可以在接收器上形成清晰的图像,物面前后一定范围的点也可以形成较清晰的像,该前后范围的间距为摄像机的景深。摄像机的景深越大,意味着可以对更大深度范围内的物体清晰成像,因此景深的控制在机器视觉,视频监控等领域都有重大的现实意义。图1是摄像机的景深的示意图。如图1所示,位于标称物距的物体,发出的光线经过镜头后清晰聚焦在标称像面上。位于标称物距前后两侧的物体发出的光线,经过镜头后分别会聚在标称像面的前后两侧,在标称像面上则形成一定尺寸的弥散像斑。如果弥散像斑足够小,那么也可以认为物体也是清晰成像的。因此图1中处于远物距和近物距之间的物体,都可以认为是清晰成像。远物距和近物距之间的轴向距离,即为镜头的景深。
摄像机在进行拍摄时,如果被拍摄物的物距不同且变化范围超出了摄像机的景深,则会导致物体成像模糊。或者,在某些场景下(例如智能交通),摄像机必须相对被监测的场景倾斜安装,此时,因为摄像机所对准的场景既有近景也有远景,可能无法同时兼顾聚焦清楚,也即无法保证远近场景都在摄像机的景深范围以内,进而导致拍摄的清晰度很差。
一般情况下,影响景深的主要因素有以下4个:
1)镜头光圈:光圈越小,即光圈值(F#)越大,景深越大;
2)镜头焦距:镜头焦距越长,景深越小,焦距越短,景深越大;
3)拍摄距离:拍摄距离越远,景深越大,拍摄距离越近,景深越小;
4)感光元件像元的尺寸:像元尺寸越大,景深越大。
一般而言,在选定了摄像机并确定拍摄场景后,后面三个参数可以改变的余地不大,通常可以改变的是镜头的光圈。因为该原因,在许多需要提升景深的成像条件下,都会把光圈尽量缩到最小。但光圈缩小主要有两个问题:一是进入感光元件的光能量随光圈的平方下降,光圈过小会导致图像变得非常暗;另外,光圈小到一定程度后,光的衍射效应变得明显,原来清晰成像的像点会逐渐变成一个较大的弥散斑,从而导致图像清晰度的下降。
在相关技术中,一种方法是采用液态镜头调焦的方式来增加摄像机的景深,其原理为:液态镜头的焦距可以通过直流电压动态地进行调节,当驱动电压变化时,镜头的焦点随之前后移动,因此可以通过电压信号来控制镜头所聚焦的物体。其调焦方式类似于人眼,具有响应速度快,寿命长等优点,但缺点是镜头价格昂贵,不利于大规模推广;另外液态镜头虽然变焦迅速,但是并不能在同一幅拍摄的画面中同时识别远近不同的物体,其应用范围受到一定限制。
另外一种方法是通过反卷积进行图像处理。图像的离焦模糊,从信号处理的角度来说,可以看作镜头处于离焦位置的点扩散函数与输入图像进行卷积运算的结果。因为镜头离焦的点扩散函数有相对简单的数学模型,可以预先进行估计和建模;利用维纳滤波的方法,可以把输入图像还原出来。拍摄一幅离焦图像以后,利用不同的反卷积的核,可以还原出不同物距处的清晰图像。该方法的优点是适应性广,不需要增加额外的光学元件,利用一幅图像即可获得不同物距的清晰像;但缺点也非常明显,一是反卷积运算的计算量非常大,需要消耗大量的计算资源,造成硬件成本的增加;另外反卷积运算获取的过程中,图像中的噪声也会随之放大,导致图像质量的严重下降。
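The Wiener-filter deconvolution mentioned above can be sketched in a few lines. This is a minimal sketch assuming a known point-spread function, circular convolution, and a scalar noise-to-signal constant k; it is an illustration of the general technique, not the patent's implementation:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-8):
    """Restore an image blurred by a known PSF using a Wiener filter.

    blurred: 2-D array (circularly convolved with psf)
    psf:     2-D point-spread function, smaller than the image
    k:       noise-to-signal power ratio (regularization constant)
    """
    H = np.fft.fft2(psf, s=blurred.shape)     # transfer function of the blur
    G = np.fft.fft2(blurred)                  # spectrum of the degraded image
    # Wiener filter: conj(H) / (|H|^2 + k) approximates 1/H but damps the
    # frequencies where H is near zero (where noise would be amplified).
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Demonstration: blur a test image with a 3x3 box PSF, then restore it.
img = np.arange(256, dtype=float).reshape(16, 16) / 255.0
psf = np.full((3, 3), 1.0 / 9.0)
H = np.fft.fft2(psf, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, psf)
```

The division blows up wherever |H| is near zero, which is exactly where the k term damps the filter; a larger k trades sharpness for noise suppression, matching the noise-amplification drawback noted in the text.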
针对相关技术中镜头的景深较小的技术问题,目前尚未提出有效的解决方案。
发明内容
本申请提供了一种镜头、摄像机、包裹检测系统和图像处理方法,以至少解决相关技术中镜头的景深较小的技术问题。
根据本申请的一个方面,提供了一种镜头,该镜头包括:光学透镜和感光芯片,该镜头还包括:滤光片,包括第一滤光部和第二滤光部,滤光片设置在光学透镜和感光芯片之间,其中,第一物点经光学透镜和第一滤光部在感光芯片上成像,第二物点经光学透镜和第二滤光部在感光芯片上成像,其中,第一滤光部的厚度大于第二滤光部的厚度。
在本申请的一种具体实现方式中,第一滤光部和第二滤光部构成阶梯型结构。
在本申请的一种具体实现方式中,第一滤光部和第二滤光部的入射面的面积之比为远端景深区域的视场范围与近端景深区域的视场范围之比。
在本申请的一种具体实现方式中,滤光片包括多个透明的平板型滤光片,其中,多个透明的平板型滤光片通过光学胶粘接成阶梯型结构。
在本申请的一种具体实现方式中,滤光片包括一个透明的阶梯型结构的滤光片。
在本申请的一种具体实现方式中,滤光片,经由传输部件与控制部件相连接以受控并移动至目标位置,其中,在目标位置,第一物点的成像光路经过第一滤光部,第二物点的成像光路经过第二滤光部。
在本申请的一种具体实现方式中,感光芯片的表面设置有保护玻璃,并且滤光片胶合于保护玻璃的表面。
在本申请的一种具体实现方式中,滤光片的入射面和出射面镀有光学减反射膜和/或红外截止镀膜。
根据本申请的另一方面,还提供了一种镜头。该镜头包括光学透镜和感光芯片,该镜头还包括:滤光片,设置在光学透镜和感光芯片之间,其中,第一物点经光学透镜和滤光片在感光芯片上成像,第二物点经光学透镜在感光芯片上成像。
在本申请的一种具体实现方式中,滤光片的中轴线与光学透镜的光轴平行,并且中轴线与光轴之间具有预设距离。
在本申请的一种具体实现方式中,滤光片,经由传输部件与控制部件相连接以受控并移动至目标位置,其中,在目标位置,第一物点的成像光路经过滤光片,第二物点的成像光路不经过滤光片。
根据本申请的另一方面,还提供了一种摄像机。该摄像机包括本申请提供的任一种镜头。
根据本申请的另一方面,还提供了一种包裹检测系统。该包裹检测系统包括本申请提供的任一种摄像机。
根据本申请的另一方面,还提供了一种图像处理方法。该方法包括:分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经光学透镜和第一滤光部在感光芯片上形成的图像,第二目标图像为待处理图像中经光学透镜和第二滤光部在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片包括第一滤光部和第二滤光部,第一滤光部的厚度大于第二滤光部的厚度;判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域;以及如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理。
在本申请的一种具体实现方式中,在分别获取第一目标图像和第二目标图像之前,该方法还包括:分别获取第一比例和第二比例,其中,第一比例为待处理图像的第一区域中目标待测物体图像所占的比例,第二比例为待处理图像的第二区域中目标待测物体图像所占的比例;判断第一比例与第二比例之差是否达到预设阈值,其中,第一比例大于第二比例;以及如果判断出第一比例与第二比例之差达到预设阈值,则调节滤光片的位置以增大第二比例,并重新获取待处理图像。
在本申请的一种具体实现方式中,调节滤光片的位置以增大第二比例包括:根据第一比例、第二比例、第一区域的面积、第二区域的面积确定目标移动量;将滤光片朝目标方向移动目标移动量,其中,目标方向为第一区域指向第二区域的方向。
根据本申请的另一方面,还提供了一种图像处理方法。该方法包括:分 别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经过光学透镜和滤光片在感光芯片上形成的图像,第二目标图像为待处理图像中经过光学透镜、未经过滤光片在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片为均匀厚度的滤光片;判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域;以及如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理。
根据本申请的另一个方面,还提供了一种应用程序,所述应用程序用于在运行时执行上述任一种图像处理方法。
根据本申请的另一个方面,还提供了一种存储介质,所述存储介质用于存储应用程序,所述应用程序用于在运行时执行上述任一种图像处理方法。
本申请提供的镜头,由于包括光学透镜和感光芯片,还包括滤光片,该滤光片包括第一滤光部和第二滤光部,该滤光片设置在光学透镜和感光芯片之间,其中,第一物点经光学透镜和第一滤光部在感光芯片上成像,第二物点经光学透镜和第二滤光部在感光芯片上成像,其中,第一滤光部的厚度大于第二滤光部的厚度,解决了相关技术中镜头的景深较小的技术问题,进而通过在光学透镜和感光芯片之间设置不同厚度滤光部的滤光片,增大了镜头的景深,使得感光芯片表面的不同区域上,对应不同物距的物体能够分别清晰成像。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是摄像机的景深的示意图;
图2是根据本申请第一实施例的镜头的示意图;
图3是根据本申请第一实施例的滤光片使镜头后焦延长的光路原理图;
图4(a)是普通光学系统的成像示意图;
图4(b)是光学系统中远物距点离焦的光路示意图;
图4(c)是利用阶梯形滤光片的光学系统的成像示意图;
图5(a)是二阶梯型滤光片的剖视图及对应的俯视图;
图5(b)是三阶梯型滤光片的剖视图及对应的俯视图;
图5(c)是四阶梯型滤光片的剖视图及对应的俯视图;
图6(a)是根据本申请第一实施例的滤光片厚薄的分界线在竖直方向时滤光片和感光芯片的相对位置的示意图;
图6(b)是根据本申请第一实施例的滤光片厚薄的分界线在水平方向时滤光片和感光芯片的相对位置的示意图;
图7(a)是根据本申请第一实施例的以胶合方式安装滤光片的滤光片安装结构示意图;
图7(b)是根据本申请第一实施例的以支架方式安装滤光片的滤光片安装结构示意图;
图8(a)是根据本申请第一实施例的具有阶梯型滤光片的镜头的剖面图;
图8(b)是根据本申请第一实施例的具有阶梯型滤光片的镜头的驱动系统的俯视图;
图9是根据本申请第二实施例的镜头的示意图;
图10(a)是根据本申请第二实施例的具有均匀厚度的滤光片的镜头的剖面图;
图10(b)是根据本申请第二实施例的具有均匀厚度的滤光片的镜头的驱动系统的俯视图;
图11是根据本申请实施例的摄像机的示意图;
图12是根据本申请实施例的包裹检测系统的示意图;
图13是根据本申请第一实施例的图像处理方法的流程图;
图14是根据本申请第二实施例的图像处理方法的流程图;
图15是根据本申请实施例的图像处理方法在智能交通中应用的示意图;以及
图16是根据本申请实施例的图像处理方法在流水线中应用的示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
下面根据本申请的实施例,提供了一种镜头。
图2是根据本申请第一实施例的镜头的示意图,如图2所示,该镜头包括:光学透镜1、滤光片2以及感光芯片3。
其中,滤光片2包括第一滤光部和第二滤光部,滤光片2设置在光学透镜1和感光芯片3之间,其中,第一物点经光学透镜1和第一滤光部在感光芯片3上成像,第二物点经光学透镜1和第二滤光部在感光芯片3上成像,其中,第一滤光部的厚度大于第二滤光部的厚度。
在该实施例中,为了增大摄像机的景深,滤光片2设置为包括第一滤光部和第二滤光部,第一滤光部的厚度大于第二滤光部的厚度。第一滤光部的入射面对应的是远端景深区域的视场范围,第二滤光部的入射面对应的是近端景深区域的视场范围。相对于未经过滤光片2的情况而言,物点经较厚的滤光部成像时,像点后移较大,物点经较薄的滤光部成像时,像点后移较小。通过该实施例,使得摄像机的景深增大。
其中,远端景深区域为物距(物点距离镜头的垂直距离)大于预设距离的物点所在的区域,近端景深区域为物距小于预设距离的物点所在的区域, 并且,预设距离为标称物距。
需要强调的是,本实施例中,第一物点的物距大于第二物点的物距。具体地,第一物点可以为位于远端景深区域的视场范围内的物点,第二物点可以为位于近端景深区域的视场范围内的物点。这样,第一物点和第二物点两者相比较而言,第一物点可以认为是远端物点,第二物点可以认为是近端物点。
具体地,假设第一物点为远端物点A,第二物点为近端物点B,如图2所示,远端物点A的成像光路经过该滤光片2厚度大的一端(第一滤光部),近端物点B的成像光路经过该滤光片2厚度小的一端(第二滤光部)。近端物点B的成像光束,用粗实线表示;远端物点A的成像光束,用细实线表示。近端物点B对应的像点位于B1,远端物点A对应的像点位于A1,A1和B1位于感光芯片3上,也即位于同一像面。通过具有不同厚度滤光部的滤光片2,使得无论是物距较大的物点还是物距较小的物点,都可以清晰的成像在像面(感光芯片3)上,总的景深L3(近端景深L1和远端景深L2之和)大大增加了。
图3是根据本申请第一实施例的滤光片使镜头后焦延长的光路原理图。如图3所示,光线通过滤光片2(第一滤光部或者第二滤光部)聚焦,其中,虚线表示镜头中未设置滤光片2时的成像光路,实线表示设置有滤光片2时的成像光路。其中,△表示两种情况下焦点的移动量,d为滤光片2的厚度。
假设物距为u,像距为v,镜头的焦距为f,根据成像的高斯公式:
1/u+1/v=1/f    (1)
根据(1)式可以计算出任意物距u对应的像距v。从(1)式可以看出,当u增加时,v减少,也就是物和像朝同一个方向运动。较长的物距对应较短的像距,此时需要使用较厚的滤光片2,使得像距作相应的延长。
假设滤光片2的材质折射率为n，厚度为d，根据几何光学的折射定理，成像的会聚光束垂直入射经过该滤光片2之后，会聚点相应的向后移动的移动量为：
Δ=(n-1)*d    (2)
因此,根据式(1)和(2),可以计算出在物距不同的情况下,像距的变化量以及用来补偿像距的滤光片2的厚度。
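Formulas (1) and (2) can be checked numerically. The sketch below is a minimal Python illustration (not part of the embodiment), using the 16 mm focal length and 2 m object distance from the worked example in the text, together with n = 1.5 and d = 0.2 mm:

```python
def image_distance(u, f):
    """Image distance v from the Gaussian imaging formula (1): 1/u + 1/v = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / u)

def back_focus_shift(n, d):
    """Rearward shift of the convergence point from formula (2): delta = (n - 1) * d."""
    return (n - 1.0) * d

v = image_distance(2000.0, 16.0)       # 2 m object, 16 mm lens: v is about 16.13 mm
delta = back_focus_shift(1.5, 0.2)     # n = 1.5, d = 0.2 mm: focus moves back 0.1 mm
```

A thicker filter portion thus compensates for the shorter image distance of a more distant object, which is how the stepped filter stacks two depth-of-field ranges onto one sensor plane.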
需要说明的是,该实施例中的滤光片2使得后焦延长的原因是滤光片2与空气的折射率(即n)不同。根据本申请实施例的镜头内设置的滤光片2,可以为全透的滤光片,或者也可以为选择性透过特定波长的光的滤光片。
假设滤光片2只具备两个不同厚度的滤光部,结合滤光片2第一滤光部和第二滤光部的厚度差,以及镜头原来的景深,可以计算增加滤光片2之后镜头的景深。例如,用16mm的镜头搭配3MP 1/1.8”规格的工业相机,如果镜头光圈设置为4.0,且镜头对准2米远的目标调焦,此时实际的景深范围大约在1.62到2.6米范围内。使用具有两个滤光部的滤光片2,假定滤光片2的折射率为1.5,第一滤光部和第二滤光部的厚度差为0.2mm,并重新对镜头调焦,使得第一滤光部所对应的图像区域仍然对2米远的物体聚焦最清楚,那么第一滤光部对应的景深范围也是1.62到2.6米范围内;此时第二滤光部所对应的图像区域,聚焦最清晰点的距离大约为1.37米,景深范围则大约在1.18到1.62米的范围内。因此,对于原来景深范围本来在1.62到2.6米范围的镜头,使用滤光片2后,可以将总景深范围增加到1.18到2.6米范围。
根据该实施例的镜头,由于包括光学透镜1和感光芯片3,还包括滤光片2,该滤光片2包括第一滤光部和第二滤光部,滤光片2设置在光学透镜1和感光芯片3之间,其中,第一物点,即远端物点经光学透镜1和第一滤光部在感光芯片3上成像,第二物点,即近端物点经光学透镜1和第二滤光部在感光芯片3上成像,其中,第一滤光部的厚度大于第二滤光部的厚度,解决了相关技术中摄像机镜头的景深较小的技术问题,进而通过在光学透镜1和感光芯片3之间设置滤光片2,增大了摄像机的景深,使得感光芯片3表面的不同区域上,对应不同物距的物体能够分别清晰成像。
优选地,第一滤光部和第二滤光部构成阶梯型结构。该阶梯型结构可以包括2个或者2个以上的滤光部,不同的物距的物点可以分别通过不同厚度的滤光部在感光芯片3的不同的区域成像。阶梯型滤光片的滤光部的个数以及厚度可以按照具体拍摄的场景设定。利用阶梯型滤光片的厚度变化来调节镜头的聚焦点位置,可使表面的不同区域上,对应不同物距的物体分别实现 清晰成像。
需要说明的是,滤光片2中所包含的滤光部的数量并不局限于图2中所示的两个,滤光部2的数量也可以为2个以上,这2个以上的滤光部也可以构成阶梯型结构。举例而言,滤光部的数量可以为图5(b)中所示的3个,或者图5(c)中所示的四个,这都是可行的。这样,通过利用阶梯型滤光片的厚度变化来调节镜头的聚焦点位置,可使表面的不同区域上,对应不同物距的物体分别实现清晰成像。
优选地,第一滤光部和第二滤光部的入射面的面积之比为远端景深区域的视场范围与近端景深区域的视场范围之比。其中,远端景深区域为物距(物点距离镜头的垂直距离)大于预设距离的物点所在的区域,近端景深区域为物距小于预设距离的物点所在的区域。例如,需要检测的远端景深区域和近端景深区域的视场范围相等,那么第一滤光部和第二滤光部的入射面的面积之比为1:1;如果需要检测的远端景深区域和近端景深区域的视场范围比例为2:1,那么第一滤光部和第二滤光部的入射面的面积之比为2:1。
图4(a)是普通光学系统的成像示意图。图4(b)是光学系统中远物距点离焦的光路示意图。图4(c)是利用阶梯形滤光片的光学系统的成像示意图。如图4(a)所示,图中的实线和虚线部分的成像光束分别代表左右两个视场的物像之间的成像光束,两边的物距相同,所以像距也相等(其中,1为光学透镜,3为感光芯片)。如图4(b)所示,光学系统的物方的左侧的物点D在感光芯片3上形成像点D1,光学系统的物方的右侧的物点C处于离焦的位置,如图上虚线所示,因为物距增加,根据成像光学定理,像距相应缩短,即成像光束在还未到感光芯片3表面之前就已经会聚了(像点为C1),而感光芯片3表面对应于物点C的成像光束已经发散为一个小的圆斑。此时图像是不清楚的。如图4(c)所示,光学系统的物方的左侧的物点D在感光芯片3上形成像点D2,由于光学系统中使用了阶梯形滤光片,虽然物方右侧的物点C处于离焦的位置,但是对应的像方,光线穿过了滤光片2较厚的一端,其会聚点会相应的向后(像方)移动,在感光芯片上形成像点C2,其中,滤光片2厚度越大,光线会聚点延后的距离也越大。本来左右两边视场的物体物距不同,像距也相应的不同,但通过在像方增加滤光片2来延长后焦,且在左右视场两方对 应不同的厚度,就可以使得两边的后焦变得完全相同,从而使得左右视场的物体,即使处于不同的物距,也有实现了同时清晰地聚焦在同一个像面上。
可选地,滤光片2包括一个透明的阶梯型结构的滤光片。
在上述实施例中,滤光片2可以由多片平板型滤光片粘接而成,或者也可以由同一块透明光学材料切割研磨成阶梯形状。本申请不对阶梯型结构滤光片的具体成型过程作具体的限定。
图5(a)是二阶梯型滤光片的剖视图及对应的俯视图。图5(b)是三阶梯型滤光片的剖视图及对应的俯视图。图5(c)是四阶梯型滤光片的剖视图及对应的俯视图。二阶梯型滤光片也即包含两个滤光部,三阶梯型滤光片也即包含三个滤光部,四阶梯型滤光片也即包含四个滤光部。在图5(a)中,滤光片可使用两层平板玻璃(或者其他平板的透明光学材料)制作而成,中间使用光学胶粘接在一起(如图5(a)左侧的两个图所示,左上图为滤光片的俯视图,左下图为其对应的剖视图)。滤光片厚区和薄区的尺寸比例,可以根据实际的应用场景来确定。例如,需检测的远端景深区域和近端景深区域的视场范围相等,则滤光片厚区和薄区的宽度比例可设置为1:1。该滤光片也可由同一块透明光学材料切割研磨成阶梯的形状(如图5(a)右侧的两个图所示,右上图为滤光片的俯视图,右下图为其对应的剖视图)。
图5(b)、图5(c)与图5(a)类似,这里不予赘述。需要说明的是,图5(b)(或者图5(c))中左侧的两个图为使用平板型透明光学材料制成的滤光片的俯视图和剖视图,右侧的两个图为使用同一块透明光学材料切割研磨成阶梯的形状所制成的滤光片的俯视图和剖视图,其中,上图均为俯视图,下图均为剖视图。将滤光片设置为多阶梯(至少3个阶梯)的结构时,可以更进一步地增加摄像机的景深。但相应地,监测某个特定景深范围的摄像机感光区域也会相应缩小。此外也可以包含4个以上的滤光部,这里不作具体限定。
图6(a)是根据本申请第一实施例的滤光片厚薄的分界线在竖直方向时滤光片和感光芯片的相对位置的示意图。图6(b)是根据本申请第一实施例的滤光片厚薄的分界线在水平方向时滤光片和感光芯片的相对位置的示意图。在图6(a)中,左边的图为不带滤光片的感光芯片,右边为带滤光片的情况。 填充斜线部分表示滤光片的薄区,而网格线部分为厚区。图6(b)与图6(a)类似,不予赘述。
可选地,感光芯片3的表面设置有保护玻璃,并且滤光片胶合于保护玻璃的表面。或者,可选地,滤光片通过支架固定于印制电路板。
图7(a)是根据本申请第一实施例的以胶合方式安装滤光片2的滤光片2安装结构示意图。图7(b)是根据本申请第一实施例的以支架方式安装滤光片2的滤光片2安装结构示意图。在图7(a)中,滤光片2用胶合方式固定在感光芯片表面。因为感光芯片表面一般都有一层保护玻璃,利用光学胶10可以把阶梯型滤光片2直接胶合在保护玻璃的表面。胶合时要特别注意不要让胶水溢出,否则对图像会有较大影响。在图7(b)中,利用一个支架7夹住滤光片2,支架7用螺钉9固定在印制电路板8(PCB)上。感光芯片3胶合于印制电路板8。
优选地,滤光片2经由传输部件与控制部件相连接以受控并移动至目标位置,其中,在目标位置,第一物点的成像光路经过第一滤光部,第二物点的成像光路经过第二滤光部。
图8(a)是根据本申请第一实施例的具有阶梯型滤光片2的镜头的剖面图。图8(b)是根据本申请第一实施例的具有阶梯型滤光片2的镜头的驱动系统的俯视图。如图8(a)所示,在具有阶梯型滤光片2的镜头的剖面图(未示出马达20、导轨22和丝杆18)中,阶梯形滤光片2固定在载体部件14上,感光芯片3胶合于PCB板16上。如图8(b)所示,在该驱动系统的俯视图中(未示出感光芯片3和PCB板16),阶梯形滤光片的薄区和厚区分别对应区域M和区域N,即阶梯型滤光片的薄区对应区域M,阶梯型滤光片的厚区对应区域N。阶梯形滤光片2通过载体部件14与丝杆18相连接,丝杆18上有丝杆副(爪子)19,丝杆18与马达20相连接,通过马达20可以驱动丝杆18沿滑动导轨22(穿过滑动孔23)运动,以带动载体部件14运动,从而阶梯形滤光片2可被移动到目标位置。
优选地,滤光片2的入射面和出射面镀有光学减反射膜和/或红外截止镀膜。光学减反射膜(增透膜)可以减少入射光的反射,能够提升摄像机的光学成像的品质。
图9是根据本申请第二实施例的镜头的示意图,如图9所示,该镜头包括:光学透镜1、感光芯片3以及滤光片4。
其中,滤光片4设置在光学透镜1和感光芯片3之间,其中,第一物点经光学透镜1和滤光片4在感光芯片3上成像,第二物点经光学透镜1在感光芯片3上成像。
与本申请第一实施例相似，在本申请第二实施例中，第一物点的物距大于第二物点的物距。也就是说，第一物点和第二物点两者相比较而言，第一物点可以认为是远端物点，第二物点可以认为是近端物点。
具体地,滤光片4为均匀厚度的滤光片,滤光片4设置在光学透镜1和感光芯片3之间,其中,第一物点(E)经光学透镜1和滤光片4在感光芯片3上的成像位置为第一像点(E1),第二物点(F)经光学透镜1在感光芯片3上的成像位置为第二像点(F1),第一物点的物距大于第二物点的物距。E1和F1均位于感光芯片3之上,也即不同物距的物点在同一像面上成像。一般来说,像点落在感光芯片3上或者与感光芯片3具有一定(足够小的)的距离时,可认为会成清晰的像。在该实施例中,由于在光学透镜1、感光芯片3之间设置了滤光片4,使得物距较大的第一物点的成像位置相对光学透镜后延,实现了在感光芯片3上成像。
该实施例增大了原有摄像机的景深。具体而言,设经过第一物点E、光学透镜1、滤光片4,成像为第一像点E1的光路,为第一光路,其景深称为第一景深(L4);经过第二物点F、光学透镜1,成像为第二像点F1的光路,为第二光路,其景深称为第二景深(L5)。如图9所示,第一光路的近物距和第二光路的远物距相等,相当于第一光路和第二光路的景深拼接到一起(总景深L6=L4+L5),增大了摄像机的景深。或者,在一些情况下,第一光路的近物距比第二光路的远物距小,两者的景深有部分重合,但总的景深仍然大于第一光路或者第二光路单独的景深(总景深L6大于L4,并且大于L5),进而在该情况下也将达到增大摄像机景深的效果。
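The depth-of-field stitching described above (total depth L6 obtained from L4 and L5, whether the two ranges are contiguous or partially overlapping) amounts to taking the union of two intervals. A minimal sketch, with the (near, far) tuple representation assumed for illustration:

```python
def total_depth_of_field(dof_a, dof_b):
    """Combined depth of field of two optical paths, each given as
    (near_limit, far_limit) in meters.  Contiguous or overlapping ranges
    merge into one continuous range, so the total never exceeds L4 + L5."""
    near_a, far_a = dof_a
    near_b, far_b = dof_b
    if max(near_a, near_b) <= min(far_a, far_b):   # ranges touch or overlap
        return max(far_a, far_b) - min(near_a, near_b)
    return (far_a - near_a) + (far_b - near_b)     # disjoint ranges: sum of spans

# Contiguous ranges from the earlier worked example: 1.18-1.62 m and 1.62-2.6 m.
total = total_depth_of_field((1.18, 1.62), (1.62, 2.6))
```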
该实施例提供的镜头,由于包括光学透镜1、感光芯片3,还包括滤光片4,其中,滤光片4设置在光学透镜1和感光芯片3之间,其中,第一物点,即远端物点经光学透镜1和滤光片4在感光芯片3上成像,第二物点,即近 端物点经光学透镜1在感光芯片3上成像,解决了相关技术中摄像机镜头的景深较小的技术问题,进而通过在光学透镜1和感光芯片3之间设置均匀厚度的滤光片4,增大了镜头的景深,使得感光芯片3表面的不同区域上,对应不同物距的物体能够分别清晰成像。在该实施例中,由于滤光片4为均匀厚度的滤光片,因此更易获取,操作起来更加简单易行。
优选地,滤光片4的中轴线与光学透镜的光轴平行,并且中轴线与光轴之间具有预设距离。
在该实施例中,为了使得第一物点的成像光路经过滤光片4,而第二物点的成像光路不经过滤光片4,需要调整滤光片4的中轴线与光学透镜的光轴平行,并且中轴线与光轴之间具有预设距离。在预设距离处,有尽可能多的远物距端物点的成像光路经过滤光片4。对于不同的拍摄场景,根据远物距端视场和近物距端视场的比值可以相应调整该预设距离的值,以使得光路中经过滤光片4的区域和未经过滤光片4的区域的比例接近于远物距端视场范围(相当于本申请第一实施例中的远端景深区域的视场范围)和近物距端视场范围(相当于本申请第一实施例中的近端景深区域的视场范围)的比值。
优选地,该滤光片4经由传输部件与控制部件相连接以受控并移动至目标位置,其中,在目标位置,第一物点的成像光路经过滤光片4,第二物点的成像光路不经过滤光片4。
图10(a)是根据本申请第二实施例的具有均匀厚度的滤光片4的镜头的剖面图。图10(b)是根据本申请第二实施例的具有均匀厚度的滤光片的镜头的驱动系统的俯视图。如图10(a)所示,在具有均匀厚度的滤光片4的镜头的剖面图中(未示出马达20、导轨22和丝杆18),均匀厚度的滤光片4固定在载体部件14上,感光芯片3胶合于PCB板16上。如图10(b)所示,在驱动系统的俯视图中(未示出感光芯片3和PCB板16),均匀厚度的滤光片对应区域P(载体上P之外的区域为空气)。均匀厚度的滤光片4通过载体部件14与丝杆18相连接,丝杆18上有丝杆副(爪子)19,丝杆18与马达20相连接,通过马达20可以驱动丝杆18沿滑动导轨22(穿过滑动孔23)运动,以带动载体部件14运动,从而均匀厚度的滤光片4可被移动到目标位置。
下面根据本申请的实施例,还提供了一种摄像机,该摄像机包括本申请 实施例提供的任意一种镜头。
优选地,该摄像机还包括:集成电路芯片,与感光芯片相连接,用于对感光芯片上产生的电信号执行处理,得到第一处理结果;至少一个电压信号转换电路,与集成电路芯片相连接,用于对集成电路芯片输入或者输出的电压信号执行转换;以及颜色编码电路,与集成电路芯片相连接,用于对第一处理结果执行颜色编码处理,得到第二处理结果。
图11是根据本申请实施例的摄像机的示意图。如图11所示,该摄像机包括:镜头24、集成电路芯片25、第一电压信号转换电路26、第二电压信号转换电路27和颜色编码电路28,其中,镜头24包括光学透镜112、滤光片114和感光芯片116。其中,滤光片114可以为阶梯型滤光片或者均匀厚度的滤光片。具体地,集成电路芯片25,与感光芯片116相连接,用于对感光芯片上产生的电信号执行处理,得到第一处理结果;第一电压信号转换电路26,与集成电路芯片25相连接,用于对输入集成电路芯片25的电压信号执行转换;第二电压信号转换电路26,与集成电路芯片25相连接,用于对从集成电路芯片25输出的电压信号执行转换;以及颜色编码电路28,与集成电路芯片25相连接,用于对第一处理结果执行颜色编码处理,得到第二处理结果。
其中,感光芯片116将接收到的光信号转换为了电信号,集成电路芯片25对感光芯片上产生的电信号执行处理。电压信号转换电路可以将集成电路芯片25产生的电压信号进行转换,以将电压信号传输至其他的处理模块;或者也可以将其他处理模块产生的电信号转换为集成电路芯片25能够接收的电信号。颜色编码电路28可对集成电路芯片25输出的处理结果进行编码处理(例如,RGB、YUV等)。
第一电压信号转换电路26或者第二电压信号转换电路27可通过IO口模块实现,集成电路芯片25可通过SOC(System on chip)模块实现。
另外,根据本申请的实施例,还提供了一种包裹检测系统。该包裹检测系统包括本申请提供的任意一种摄像机。
优选地,该包裹检测系统包括:摄像机,包括镜头、集成电路芯片、至少一个电压信号转换电路和颜色编码电路,其中,镜头包括光学透镜、感光芯片和滤光片,滤光片包括第一滤光部和第二滤光部,滤光片设置在光学透镜 和感光芯片之间,其中,第一物点经光学透镜和第一滤光部在感光芯片上成像,第二物点经光学透镜和第二滤光部在感光芯片上成像,其中,第一滤光部的厚度大于第二滤光部的厚度,集成电路芯片与感光芯片相连接,用于对感光芯片上产生的电信号执行处理,得到第一处理结果,至少一个电压信号转换电路与集成电路芯片相连接,用于对集成电路芯片输入或者输出的电压信号执行转换,颜色编码电路与集成电路芯片相连接,用于对第一处理结果执行颜色编码处理,得到第二处理结果,至少一个电压信号转换电路包括第一电压信号转换电路和第二电压信号转换电路;激光触发器,与第一电压信号转换电路相连接,用于触发摄像机进行拍摄;补光灯,与第二电压信号转换电路相连接,用于为摄像机补光;以及后处理电路,与颜色编码电路相连接,用于对第二处理结果执行预设处理。
图12是根据本申请实施例的包裹检测系统的示意图。如图12所示,该包裹检测系统包括:摄像机模块32、后处理电路29、激光触发器30、补光灯31,其中,摄像机模块32包括镜头24、集成电路芯片25、第一电压信号转换电路26、第二电压信号转换电路27以及颜色编码电路28,镜头24包括光学透镜112、滤光片114和感光芯片116。其中,滤光片114可以为阶梯型滤光片或者均匀厚度的滤光片。
具体地,摄像机模块32的颜色编码电路28与后处理电路29相连接,激光触发器30与摄像机模块32的第一电压信号转换电路26相连接,补光灯31与摄像机模块32的第二电压信号转换电路27相连接。待测物体通过光学透镜112和滤光片114,在感光芯片116上成像。当待测物体通过检测区域时,激光触发器30发出一个触发信号,摄像机模块32在接收到该信号后拍摄待测物体的图片,并通过颜色编码电路28对拍摄的图形进行编码处理,并将处理结果输出至后处理电路29。后处理电路29根据实际检测需求来完成缺陷检测、颜色判别、尺寸测量、三维成像、条码识别、计数等工作(即执行预设处理)。补光灯31通过第二电压信号转换电路27接收集成电路芯片25生成的控制信息,以实现照明和抓拍之间的同步,或者控制其发光强度、颜色、开关、区域照明等,可满足多样化的照明需求。
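The trigger-and-capture flow of the package detection system in FIG. 12 (laser trigger, capture, color encoding, post-processing) can be sketched as follows; the class and parameter names are illustrative assumptions, not taken from the embodiment:

```python
class PackageDetectionSystem:
    """Sketch of the FIG. 12 flow: laser trigger -> capture -> color encode
    -> post-processing (counting, barcode recognition, quality inspection)."""

    def __init__(self, capture, post_process):
        self.capture = capture                # callable returning a raw frame
        self.post_process = post_process      # e.g. counting or barcode reading

    def on_laser_trigger(self):
        frame = self.capture()                           # camera shoots on the trigger signal
        encoded = {"format": "YUV", "pixels": frame}     # color-encoding circuit
        return self.post_process(encoded)                # post-processing circuit

# A trivial stand-in camera and a post-processor that just counts pixels.
result = PackageDetectionSystem(lambda: [1, 2, 3],
                                lambda e: len(e["pixels"])).on_laser_trigger()
```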
下面根据本申请实施例,还提供了一种图像处理方法。
图13是根据本申请第一实施例的图像处理方法的流程图。如图13所示,该方法包括如下步骤:
步骤S1302,分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经光学透镜和第一滤光部在感光芯片上形成的图像,第二目标图像为待处理图像中经光学透镜和第二滤光部在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片包括第一滤光部和第二滤光部,第一滤光部的厚度大于第二滤光部的厚度。
在待测物体经过滤光片在感光芯片上成像时,本申请所优选的图像部分是远景端经过第一滤光部在感光芯片上所形成的图像,以及近景端经过第二滤光部在感光芯片上所形成的图像。但是,实际在成像的过程中,远景端的物点有可能经过第二滤光部在感光芯片上形成一个模糊的像,同样地,近景端也有可能经过第一滤光部在感光芯片上形成一个模糊的像。因此,在利用摄像机拍摄到图像(待处理图像)后,需要对该图像进行进一步的处理,以获取到远近端经过第一滤光部在感光芯片上所形成的图像,以及近景端经过第二滤光部在感光芯片上所形成的图像,这样得到的最终的图像是最为清晰的图像。
步骤S1304,判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域。
在该步骤中,第一目标图像为待处理图像中经光学透镜和第一滤光部在感光芯片上形成的图像,第二目标图像为待处理图像中经光学透镜和第二滤光部在感光芯片上形成的图像。通过步骤S1302中的分析,可以获知第一目标图像和第二目标图像中可能存在重叠的区域。例如,待测物体的第一端(远景端)通过光学透镜和第一滤光部在感光芯片上形成一个清晰的像,同时,该第一端还通过光学透镜和第二滤光部在感光芯片上形成一个模糊的像(在后续处理中,需要将该模糊的像清除掉)。
步骤S1306,如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理。
在该步骤中,如果判断出第一目标图像和第二目标图像中的目标待测物 体图像存在重叠的区域,例如,存在图像1和图像2均为待测物体的第一端的像(其中,图像1较为模糊),二者叠加在了一起,导致待处理图像清晰度下降。则根据本申请实施例,应该去除图像1,只保留待处理图像中的图像2。
需要说明的是,第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域,与镜头和拍摄场景的相对位置、拍摄场景本身等因素有关。本申请不对重叠区域产生的具体原因进行限定。
根据该实施例的图像处理方法,由于包括:分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经光学透镜和第一滤光部在感光芯片上形成的图像,第二目标图像为待处理图像中经光学透镜和第二滤光部在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片包括第一滤光部和第二滤光部,第一滤光部的厚度大于第二滤光部的厚度;判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域;如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理,通过将待处理图像中通过不同厚度滤光部而形成的图形进行去重处理,可以保证最终获取的图像的不同的区域对应不同物距的物点,也即获取的图像为远物距物点和近物距物点分别在感光芯片的不同区域上的成像点,由于增大了镜头的景深,因此使得获取到的图像更加清晰。
优选地,为了提升成像质量,在分别获取第一目标图像和第二目标图像之前,该方法还包括:分别获取第一比例和第二比例,其中,第一比例为待处理图像的第一区域中目标待测物体图像所占的比例,第二比例为待处理图像的第二区域中目标待测物体图像所占的比例;判断第一比例与第二比例之差是否达到预设阈值,其中,第一比例大于第二比例;以及如果判断出第一比例与第二比例之差达到预设阈值,则调节滤光片的位置以增大第二比例,并重新获取待处理图像。
在获取待测物体的初始图像时,摄像机的滤光片设置在摄像机的光学镜头和摄像机的感光芯片之间。在待测物体成像的过程中,物点经滤光片成像,会使该物点对应的像点相对摄像机的光学镜头后延,也即,滤光片使该摄像 机的后焦延长。如果在拍摄时,待测物体的远景端(物距较远的一端)成像时经过滤光片,则可使远景端的成像更加的清晰。但是,由于在光学镜头和感光芯片之间设置了滤光片,又有可能造成在待测物体的尺寸较大时摄像机无法对整个待测物体进行识别的问题。在该实施例中,在获取初始图像后,可根据初始图像的成像情况,对滤光片进行位置的调节,以保证整个待测物体在感光芯片上成像。在对滤光片进行调节之后,即可拍摄获取待处理图像。
在获取到初始图像后,将初始图像按照预设规则划分为不同的区域。例如,可以将初始图像均匀划分为两个区域(即第一区域和第二区域),或者也可以将初始图像划分为多个区域(包括第一区域和第二区域)。分别获取第一区域中待测物体所占的比例(第一比例),以及第二区域中待测物体所占的比例(第二比例)。通常,滤光片较厚的一端,对应的为远景端;滤光片较薄的一端,对应的为近景端。假设近景端区域中较大区域成像时经过滤光片的厚端,则很可能造成近景端对应的成像区域无法完整成像(也即近景端对应的成像区域可能不能实现对整个近景端成像)。
第一比例与第二比例之差可以表征滤光片对待测物体的成像的影响,也即滤光片的设置位置不同,第一比例与第二比例之差也会发生变化。例如,当待测物体的近景端经滤光片成像,而远景端没有经滤光片成像,此时,将初始图像划分为远景端和近景端对应的两个区域时,这两个区域中,远景端对应区域的待测物体所占的比例会远远小于近景端对应区域的待测物体所占的比例,造成不能完整成像。此时,需要调节滤光片的位置,以增大远景端对应区域的待测物体所占的比例。预设阈值可以根据经验值进行设定。
可选地,为了提升滤光片移动的便捷性,可以通过电机来控制滤光片移动,也即,当判断出第一比例与第二比例之差达到预设阈值,则将生成电机控制信号并将其发送至电机,电机驱动传动部件运动,从而带动滤光片移动。
优选地,初始时,滤光片可以设置预设位置为在滤光片的中心轴线与光学镜头的光轴重合的位置。根据初始图像的拍摄效果,对滤光片进行移动。
优选地,调节滤光片的位置以增大第二比例包括:根据第一比例、第二比例、第一区域的面积、第二区域的面积(或者第一比例、第二比例、第一区域和第二区域上的待测物体的图像的面积)确定目标移动量;将滤光片朝 目标方向移动目标移动量,其中,目标方向为第一区域指向第二区域的方向。
假定第一区域的面积为a1,第二区域的面积为a2,第一比例和第二比例分别为k1和k2;第一区域和第二区域上的待测物体的图像的面积分别为u1和u2,可以通过以下公式计算目标移动量x:
u1=k1*a1,
u2=k2*a2,
假定k1>k2+d,d为预设阈值,滤光片需要朝第一区域指向第二区域的方向移动的目标移动量为x,那么有如下的方程:
(u1-x)/a1=(u2+x)/a2
根据上式即可以求解出目标移动量x。
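The computation of the target movement amount can be illustrated with a simplified linear model in which moving the filter by x transfers an object-image area of x from the first region to the second, so that the two proportions become equal. Both the model and the function names below are illustrative assumptions, not taken verbatim from the embodiment:

```python
def target_movement(k1, k2, a1, a2):
    """Movement x that equalizes the object-image proportions of the two
    regions under the linear model (k1*a1 - x)/a1 = (k2*a2 + x)/a2."""
    u1 = k1 * a1          # object-image area in the first region
    u2 = k2 * a2          # object-image area in the second region
    return (a2 * u1 - a1 * u2) / (a1 + a2)

# Proportions 0.6 and 0.2 over two equal regions of area 100.
x = target_movement(0.6, 0.2, 100.0, 100.0)
```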
或者,也可以采用手动调节的方式,来移动滤光片。例如,滤光片采用丝杆传动的方式来实现移动。滤光片设置在载体部件上,载体部件通过丝杆与调节部件相连接。用户可以手动旋转调节部件,例如,每旋转一圈,可以使滤光片向预设的方向移动预设距离。当判断出初始图像的第一比例与第二比例的差值达到预设阈值,则用户可以旋转调节部件,每旋转一次,即可再次判断是否达到预设阈值,如果依旧达到预设阈值,则可再次旋转调节部件,直到获得的图像的第一比例与第二比例的差值小于预设阈值。
图14是根据本申请第二实施例的图像处理方法的流程图。该实施例可以作为图13所示实施例的一种优选实施方式。如图14所示,该方法包括如下的步骤:
步骤S1402,图像采集。
图像即上述的待处理图像。
步骤S1404,图像分区。
即按照滤光片的设置(或者滤光片的薄厚)将图像分为第一目标图像和第二目标图像。其中,第一目标图像为经过滤光片较厚区域在感光芯片上所形成的图像;第二目标图像为经过滤光片较薄区域在感光芯片上所形成的图像。
步骤S1406,对第一目标图像进行目标识别。
目标识别即待测物体的识别。
步骤S1408,对第二目标图像进行目标识别。
步骤S1410,重复物体的判断和筛除。
根据形貌特征来排除第一目标图像和第二目标图像中的重复的物体图像(也即执行去重处理)。
步骤S1412,结果输出。
根据该实施例的图像处理方法,通过图像分区,得到第一目标图像和第二目标图像(分别为经过滤光片的薄区和厚区得到的图像),对第一目标图像和第二目标图像的待测物体进行识别,判断并筛除重复物体的图像,使得最终获取到的图像更加清晰。
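Steps S1402 to S1412 can be sketched as a short pipeline. This is a minimal sketch; the row-list image representation and the recognition stub are illustrative assumptions standing in for a real capture and detection stage:

```python
def partition(image_rows, split_row):
    """Step S1404: split the captured rows into the regions imaged through
    the thick and thin filter portions; split_row is where the step falls."""
    return image_rows[:split_row], image_rows[split_row:]

def recognize(region):
    """Steps S1406/S1408: placeholder target recognition; a real system
    would run a detector here.  An object is any non-empty label."""
    return [label for label in region if label is not None]

def process(image_rows, split_row):
    far_region, near_region = partition(image_rows, split_row)
    far_objects = recognize(far_region)
    near_objects = recognize(near_region)
    # Step S1410: discard objects detected in both regions (deduplication).
    unique_near = [obj for obj in near_objects if obj not in far_objects]
    return far_objects + unique_near       # Step S1412: output the result

# Objects R and S straddle the filter boundary and appear in both halves.
result = process(["R", None, "S", "R", None, "S"], 3)
```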
优选地,在每次获取初始图像之前,将滤光片复位至预设初始位置。由于拍摄不同的场景时,滤光片的位置是不确定的(由待测物体的近景端和远景端决定),因此,在拍摄的场景不同时,需要重新调整滤光片相对待测物体的位置。对于固定位置使用的摄像机,可以将滤光片的位置调节为一个固定有效的位置。对于位置不固定的摄像机,在每次获取初始图像之前,将滤光片复位至预设初始位置。
下面通过两个具体的应用场景来说明本申请所提供的图像处理方法。
图15是根据本申请实施例的图像处理方法在智能交通中应用的示意图。在智能交通中应用的摄像机必须相对被监测的场景倾斜安装,此时因为摄像机所对准的场景既有近景也有远景,可能无法同时兼顾聚焦清楚,在摄像机使用阶梯状的滤光片(或者均匀厚度的滤光片)可实现远近场景都在摄像机的景深范围以内。在这种情况下,可以在初次使用时,针对拍摄的场景调节滤光片至最佳位置,在以后的使用中可以不再重新调节。在拍摄到交通、车辆或者道路的图像之后,可以利用本申请的图像处理方法,对图像进行处理,以实现图像的清晰化。
图16是根据本申请实施例的图像处理方法在流水线中应用的示意图。如图16所示,左侧为待测物体的剖视图,包括待测物体R和待测物体S,其中,待测物体R的高度大于待测物体S的高度。摄像机架设在流水线上方(当待测物体的高度较高时,相对摄像机来说其为近景端,当待测物体的高度较低 时,相对摄像机来说其为远景端),当被检测物通过摄像机的视场范围时,可以通过智能算法实现计数、品质检测、条码识别等功能。但当被检测物的高度不同且变化范围超出摄像机的景深,就可以在摄像机上使用阶梯型的滤光片(或者均匀厚度的滤光片),使得摄像机的一部分感光像元监测较远的物,而另一部分像元检测较近的物。在整个总景深范围内通过的被测物,都可以被有效识别和监测。在获取到被测物体的图像之后,可以利用本申请实施例的图像处理方法,得到被测物体更为清晰的图像。
下面根据本申请实施例,还提供了一种图像处理方法。该图像处理方法用于具有均匀厚度的滤光片的摄像机所拍摄出的待处理图像。
根据该实施例的图像处理方法包括如下的步骤:
步骤S1702,分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经过光学透镜和滤光片在感光芯片上形成的图像,第二目标图像为待处理图像中经过光学透镜、未经过滤光片在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片为均匀厚度的滤光片。
步骤S1704,判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域。
步骤S1706,如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理。
根据该实施例的图像处理方法,由于包括:分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经过光学透镜和滤光片在感光芯片上形成的图像,第二目标图像为待处理图像中经过光学透镜、未经过滤光片在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片为均匀厚度的滤光片;判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域;如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理,解决了相关技术中由于摄像机的景深较小 导致拍摄出的图像的清晰度较差的技术问题,进而通过将待处理图像中通过滤光部和未通过滤光片形成的图形中的重合区域进行去重处理,可以保证最终获取的图像的不同的区域对应不同物距的物点,也即获取的图像为远物距物点和近物距物点分别在感光芯片的不同区域上的成像点,由于增大了镜头的景深,因此使得获取到的图像更加清晰。
需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的技术内容,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,可以为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的 全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
另外,根据本申请的实施例,还提供了一种应用程序,该应用程序用于在运行时执行本申请实施例提供的图像处理方法。
其中,本申请第一实施例提供的图像处理方法,包括:
分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经光学透镜和第一滤光部在感光芯片上形成的图像,第二目标图像为待处理图像中经光学透镜和第二滤光部在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片包括第一滤光部和第二滤光部,第一滤光部的厚度大于第二滤光部的厚度;
判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域;以及
如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理。
在本申请的一种具体实现方式中,在分别获取第一目标图像和第二目标图像之前,该方法还包括:
分别获取第一比例和第二比例,其中,第一比例为待处理图像的第一区域中目标待测物体图像所占的比例,第二比例为待处理图像的第二区域中目标待测物体图像所占的比例;
判断第一比例与第二比例之差是否达到预设阈值,其中,第一比例大于第二比例;以及
如果判断出第一比例与第二比例之差达到预设阈值,则调节滤光片的位置以增大第二比例,并重新获取待处理图像。
在本申请的一种具体实现方式中,调节滤光片的位置以增大第二比例包 括:
根据第一比例、第二比例、第一区域的面积、第二区域的面积确定目标移动量;以及
将滤光片朝目标方向移动目标移动量,其中,目标方向为第一区域指向第二区域的方向。
其中,本申请第二实施例提供的图像处理方法,包括:
分别获取第一目标图像和第二目标图像,其中,第一目标图像为待处理图像中经过光学透镜和滤光片在感光芯片上形成的图像,第二目标图像为待处理图像中经过光学透镜、未经过滤光片在感光芯片上形成的图像,待处理图像为通过镜头拍摄目标待测物体得到的图像,镜头包括光学透镜、感光芯片和滤光片,滤光片设置于光学透镜和感光芯片之间,滤光片为均匀厚度的滤光片;
判断第一目标图像和第二目标图像中的目标待测物体图像是否存在重叠的区域;以及
如果判断出第一目标图像和第二目标图像中的目标待测物体图像存在重叠的区域,则对待处理图像中重叠的区域执行去重处理。
对于应用程序实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
容易看出,通过本申请实施例提供的应用程序的运行,可以使摄像机的景深增大,这样,无论是物距较大的物点还是物距较小的物点,都可以清晰的成像在摄像机镜头的感光芯片上。
In addition, according to an embodiment of the present application, a storage medium is further provided, the storage medium being configured to store an application program, the application program being configured to, when executed, perform the image processing method provided by the embodiments of the present application.
The image processing method provided by the first embodiment of the present application includes:
acquiring a first target image and a second target image respectively, wherein the first target image is the image in an image to be processed that is formed on a photosensitive chip through an optical lens and a first filter portion, the second target image is the image in the image to be processed that is formed on the photosensitive chip through the optical lens and a second filter portion, the image to be processed is an image obtained by photographing a target object under inspection through a lens, the lens includes the optical lens, the photosensitive chip and a filter, the filter is disposed between the optical lens and the photosensitive chip, the filter includes the first filter portion and the second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion;
determining whether the images of the target object under inspection in the first target image and the second target image have an overlapping region; and
if it is determined that the images of the target object under inspection in the first target image and the second target image have an overlapping region, performing de-duplication processing on the overlapping region in the image to be processed.
In a specific implementation of the present application, before acquiring the first target image and the second target image respectively, the method further includes:
acquiring a first proportion and a second proportion respectively, wherein the first proportion is the proportion occupied by the image of the target object under inspection in a first region of the image to be processed, and the second proportion is the proportion occupied by the image of the target object under inspection in a second region of the image to be processed;
determining whether the difference between the first proportion and the second proportion reaches a preset threshold, wherein the first proportion is greater than the second proportion; and
if it is determined that the difference between the first proportion and the second proportion reaches the preset threshold, adjusting the position of the filter to increase the second proportion, and re-acquiring the image to be processed.
In a specific implementation of the present application, adjusting the position of the filter to increase the second proportion includes:
determining a target movement amount according to the first proportion, the second proportion, the area of the first region and the area of the second region; and
moving the filter in a target direction by the target movement amount, wherein the target direction is the direction from the first region toward the second region.
The image processing method provided by the second embodiment of the present application includes:
acquiring a first target image and a second target image respectively, wherein the first target image is the image in an image to be processed that is formed on the photosensitive chip through the optical lens and the filter, the second target image is the image in the image to be processed that is formed on the photosensitive chip through the optical lens without passing through the filter, the image to be processed is an image obtained by photographing a target object under inspection through a lens, the lens includes the optical lens, the photosensitive chip and the filter, the filter is disposed between the optical lens and the photosensitive chip, and the filter is a filter of uniform thickness;
determining whether the images of the target object under inspection in the first target image and the second target image have an overlapping region; and
if it is determined that the images of the target object under inspection in the first target image and the second target image have an overlapping region, performing de-duplication processing on the overlapping region in the image to be processed.
As the storage medium embodiment is substantially similar to the method embodiments, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the method embodiments.
It can readily be seen that running the application program stored in the storage medium provided by the embodiments of the present application increases the depth of field of the camera, so that both object points at a large object distance and object points at a small object distance can be clearly imaged on the photosensitive chip of the camera lens.
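The depth-of-field gain described above comes from placing glass of different thicknesses in parts of the imaging path: in the paraxial approximation, a plane-parallel plate of thickness t and refractive index n shifts the image plane away from the lens by roughly t·(1 − 1/n), so the thicker filter portion can refocus more distant object points onto the same photosensitive chip. A small sketch of that textbook relation follows; the numeric values in the example are illustrative, not taken from the patent:

```python
def focal_plane_shift(thickness, n):
    """Paraxial longitudinal image-plane shift caused by a plane-parallel
    plate of the given thickness and refractive index n: d = t * (1 - 1/n).
    The result is in the same units as the thickness argument."""
    return thickness * (1.0 - 1.0 / n)

# Example: a 1.5 mm plate of n = 1.5 glass moves the image plane by 0.5 mm,
# while a 0.5 mm plate moves it by about 0.167 mm -- two filter thicknesses
# therefore create two distinct focus positions on a single sensor.
```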
The above are merely preferred embodiments of the present application. It should be noted that those of ordinary skill in the art may make further improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the scope of protection of the present application.

Claims (19)

  1. A lens, comprising an optical lens and a photosensitive chip, characterized in that the lens further comprises:
    a filter, comprising a first filter portion and a second filter portion, the filter being disposed between the optical lens and the photosensitive chip, wherein a first object point is imaged on the photosensitive chip through the optical lens and the first filter portion, a second object point is imaged on the photosensitive chip through the optical lens and the second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion.
  2. The lens according to claim 1, characterized in that the first filter portion and the second filter portion form a stepped structure.
  3. The lens according to claim 1, characterized in that the ratio of the areas of the incident surfaces of the first filter portion and the second filter portion is the ratio of the field-of-view range of the far depth-of-field region to the field-of-view range of the near depth-of-field region.
  4. The lens according to claim 2, characterized in that the filter comprises a plurality of transparent flat-plate filters, wherein the plurality of transparent flat-plate filters are bonded into a stepped structure by optical adhesive.
  5. The lens according to claim 2, characterized in that the filter comprises a single transparent filter having a stepped structure.
  6. The lens according to claim 1, characterized in that
    the filter is connected to a control component via a transmission component so as to be controlled and moved to a target position, wherein, at the target position, the imaging light path of the first object point passes through the first filter portion and the imaging light path of the second object point passes through the second filter portion.
  7. The lens according to claim 1, characterized in that a protective glass is provided on the surface of the photosensitive chip, and the filter is cemented to the surface of the protective glass.
  8. The lens according to claim 1, characterized in that the incident surface and the exit surface of the filter are coated with an optical anti-reflection coating and/or an infrared cut-off coating.
  9. A lens, comprising an optical lens and a photosensitive chip, characterized in that the lens further comprises:
    a filter, disposed between the optical lens and the photosensitive chip, wherein a first object point is imaged on the photosensitive chip through the optical lens and the filter, and a second object point is imaged on the photosensitive chip through the optical lens.
  10. The lens according to claim 9, characterized in that the central axis of the filter is parallel to the optical axis of the optical lens, and there is a preset distance between the central axis and the optical axis.
  11. The lens according to claim 9, characterized in that
    the filter is connected to a control component via a transmission component so as to be controlled and moved to a target position, wherein, at the target position, the imaging light path of the first object point passes through the filter and the imaging light path of the second object point does not pass through the filter.
  12. A camera, characterized by comprising the lens according to any one of claims 1 to 11.
  13. A package inspection system, characterized by comprising the camera according to claim 12.
  14. An image processing method, characterized by comprising:
    acquiring a first target image and a second target image respectively, wherein the first target image is the image in an image to be processed that is formed on a photosensitive chip through an optical lens and a first filter portion, the second target image is the image in the image to be processed that is formed on the photosensitive chip through the optical lens and a second filter portion, the image to be processed is an image obtained by photographing a target object under inspection through a lens, the lens comprises the optical lens, the photosensitive chip and a filter, the filter is disposed between the optical lens and the photosensitive chip, the filter comprises the first filter portion and the second filter portion, and the thickness of the first filter portion is greater than the thickness of the second filter portion;
    determining whether the images of the target object under inspection in the first target image and the second target image have an overlapping region; and
    if it is determined that the images of the target object under inspection in the first target image and the second target image have an overlapping region, performing de-duplication processing on the overlapping region in the image to be processed.
  15. The method according to claim 14, characterized in that, before acquiring the first target image and the second target image respectively, the method further comprises:
    acquiring a first proportion and a second proportion respectively, wherein the first proportion is the proportion occupied by the image of the target object under inspection in a first region of the image to be processed, and the second proportion is the proportion occupied by the image of the target object under inspection in a second region of the image to be processed;
    determining whether the difference between the first proportion and the second proportion reaches a preset threshold, wherein the first proportion is greater than the second proportion; and
    if it is determined that the difference between the first proportion and the second proportion reaches the preset threshold, adjusting the position of the filter to increase the second proportion, and re-acquiring the image to be processed.
  16. The method according to claim 15, characterized in that adjusting the position of the filter to increase the second proportion comprises:
    determining a target movement amount according to the first proportion, the second proportion, the area of the first region and the area of the second region; and
    moving the filter in a target direction by the target movement amount, wherein the target direction is the direction from the first region toward the second region.
  17. An image processing method, characterized by comprising:
    acquiring a first target image and a second target image respectively, wherein the first target image is the image in an image to be processed that is formed on a photosensitive chip through an optical lens and a filter, the second target image is the image in the image to be processed that is formed on the photosensitive chip through the optical lens without passing through the filter, the image to be processed is an image obtained by photographing a target object under inspection through a lens, the lens comprises the optical lens, the photosensitive chip and the filter, the filter is disposed between the optical lens and the photosensitive chip, and the filter is a filter of uniform thickness;
    determining whether the images of the target object under inspection in the first target image and the second target image have an overlapping region; and
    if it is determined that the images of the target object under inspection in the first target image and the second target image have an overlapping region, performing de-duplication processing on the overlapping region in the image to be processed.
  18. An application program, characterized in that the application program is configured to, when executed, perform the image processing method according to any one of claims 14 to 17.
  19. A storage medium, characterized in that the storage medium is configured to store an application program, and the application program is configured to, when executed, perform the image processing method according to any one of claims 14 to 17.
PCT/CN2016/090685 2015-08-18 2016-07-20 Lens, camera, package inspection system and image processing method WO2017028652A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/753,804 US10386632B2 (en) 2015-08-18 2016-07-20 Lens, camera, package inspection system and image processing method
EP16836516.1A EP3340600B1 (en) 2015-08-18 2016-07-20 Lens, camera, package inspection system and image processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510508464.8 2015-08-18
CN201510508464.8A 2015-08-18 Lens, camera, package inspection system and image processing method

Publications (1)

Publication Number Publication Date
WO2017028652A1 true WO2017028652A1 (zh) 2017-02-23

Family

ID=58050772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/090685 WO2017028652A1 (zh) 2015-08-18 2016-07-20 Lens, camera, package inspection system and image processing method

Country Status (4)

Country Link
US (1) US10386632B2 (zh)
EP (1) EP3340600B1 (zh)
CN (1) CN106470299B (zh)
WO (1) WO2017028652A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018169551A1 (en) * 2017-03-17 2018-09-20 Intel Corporation An apparatus for semiconductor package inspection
DE102017115021A1 * 2017-07-05 2019-01-10 Carl Zeiss Microscopy Gmbh Digital determination of the focus position
JP7207888B2 * 2018-07-31 2023-01-18 Canon Inc. Control device, imaging device, control method of control device, and program
CN111901502A (zh) * 2019-05-06 2020-11-06 三赢科技(深圳)有限公司 Camera module
CN110166675B * 2019-06-14 2024-02-13 深圳扑浪创新科技有限公司 Synchronous shooting device and synchronous shooting method
CN112601071B * 2020-11-13 2022-08-26 苏州华兴源创科技股份有限公司 Optical axis calibration method, optical axis calibration apparatus and imaging device

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2002314058A * 2001-04-17 2002-10-25 Sony Corp Solid-state imaging device and manufacturing method thereof
US20070035705A1 * 2005-07-15 2007-02-15 James Hurd System, method and apparatus for enhancing a projected image
CN101877763A * 2009-04-29 2010-11-03 弗莱克斯电子有限责任公司 Image space focusing
JP2014015542A * 2012-07-09 2014-01-30 Nippon Shokubai Co Ltd Phthalocyanine compound
CN204859348U * 2015-08-18 2015-12-09 杭州海康威视数字技术股份有限公司 Lens, camera and package inspection system

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
CN1355506A * 2000-11-28 2002-06-26 力捷电脑股份有限公司 Method for adjusting the conversion value range of a sensing system
JP2002333574A * 2001-05-08 2002-11-22 Konica Corp Digital camera
FI20011272A (fi) * 2001-06-15 2002-12-16 Nokia Corp Camera focusing method and camera
DE102004001556A1 * 2004-01-10 2005-08-04 Robert Bosch Gmbh Night vision system for motor vehicles with a partial optical filter
JP2007103401A * 2005-09-30 2007-04-19 Matsushita Electric Ind Co Ltd Imaging apparatus and image processing apparatus
US8102599B2 * 2009-10-21 2012-01-24 International Business Machines Corporation Fabrication of optical filters integrated with injection molded microlenses
JP2011237638A * 2010-05-11 2011-11-24 Fujifilm Corp Imaging apparatus
KR20140089441A * 2011-10-24 2014-07-14 Asahi Glass Co., Ltd. Optical filter, method for manufacturing same, and imaging device
EP3010392A4 * 2013-06-18 2017-01-25 Delta ID Inc. Iris imaging apparatus and methods for configuring an iris imaging apparatus
CN104486537B * 2014-10-27 2018-09-04 北京智谷技术服务有限公司 Light field acquisition control method and apparatus
CN104394306B * 2014-11-24 2018-02-27 北京中科虹霸科技有限公司 Multi-channel multi-region coated camera module and device for iris recognition


Non-Patent Citations (1)

Title
See also references of EP3340600A4 *

Also Published As

Publication number Publication date
EP3340600B1 (en) 2022-02-16
CN106470299A (zh) 2017-03-01
EP3340600A4 (en) 2019-05-15
US10386632B2 (en) 2019-08-20
CN106470299B (zh) 2022-12-23
EP3340600A1 (en) 2018-06-27
US20180284429A1 (en) 2018-10-04

Similar Documents

Publication Publication Date Title
WO2017028652A1 (zh) Lens, camera, package inspection system and image processing method
US8508652B2 (en) Autofocus method
EP3010393B1 (en) Optimized imaging apparatus for iris imaging
US9838592B2 (en) Lens barrel, imaging device body, and imaging device
TWI471630B Active distance focusing system and method
JP2020056839 (ja) Imaging apparatus
CN103167236A (zh) Image pickup apparatus, image sensor, and focus detection method
EP2208974A1 (en) Wavelength detecting apparatus and focus detecting apparatus having the same
US10356384B2 (en) Image processing apparatus, image capturing apparatus, and storage medium for storing image processing program
KR102480618B1 (ko) Single optic for low light and high light level imaging
US10063795B2 (en) Image capturing apparatus, method for controlling the same, and storage medium
EP2204683A2 (en) Single lens reflex camera comprising a focus detecting apparatus and method of photographing
JP2020057869 (ja) Imaging apparatus
CN204859348U (zh) Lens, camera and package inspection system
JP2016004134 (ja) Imaging apparatus, control method therefor, program, and storage medium
US8823860B2 (en) Apparatus for auto-focusing detection, camera applying the same, and method for calculating distance to subject
US20100182488A1 (en) Photographing apparatus and focus detecting method using the same
KR20200117507 (ko) Camera device and operation method thereof
WO2022022682 (zh) Camera module apparatus, multi-camera module, camera system, electronic device, and automatic zoom imaging method
EP3163369B1 (en) Auto-focus control in a camera to prevent oscillation
WO2018068505 (zh) Terminal device, focusing method and apparatus
CN111258166 (zh) Camera module, periscope camera module thereof, image acquisition method and working method
CN111683195 (zh) Camera device and control method thereof
KR20110027120 (ko) Imaging apparatus
JP5679718 (ja) Imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16836516

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 15753804

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2016836516

Country of ref document: EP