WO2021103297A1 - Compound eye camera device and compound eye system

Compound eye camera device and compound eye system

Info

Publication number: WO2021103297A1
Authority: WO (WIPO PCT)
Prior art keywords: ommatidium, compound eye, array, photosensitive
Application number: PCT/CN2020/071734
Other languages: English (en), French (fr)
Inventors: 张晓林 (Zhang Xiaolin), 徐越 (Xu Yue), 谷宇章 (Gu Yuzhang), 郭爱克 (Guo Aike)
Original Assignee: 中国科学院上海微系统与信息技术研究所 (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences)
Application filed by 中国科学院上海微系统与信息技术研究所
Priority to EP20893641.9A (published as EP4068747A4)
Priority to JP2022528046A (published as JP7393542B2)
Priority to US17/777,403 (published as US20220407994A1)
Publication of WO2021103297A1

Classifications

    • H04N13/232 Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
    • G02B3/0037 Arrays characterized by the distribution or form of lenses
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N23/55 Optical parts specially adapted for electronic image sensors; mounting thereof
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Definitions

  • the invention relates to the field of optical imaging, in particular to a compound eye camera device and a compound eye system.
  • A compound eye is a visual organ composed of a large number of single eyes, found mainly in arthropods such as insects and crustaceans.
  • Each single eye in an insect's compound eye has a lens (similar to a microlens) and photoreceptor cells corresponding to that lens. These photoreceptor cells transmit light-sensing information to the insect's nervous system, which efficiently computes the position of, and distance to, the observed object, that is, achieves stereo vision, enabling compound-eye insects to judge and react quickly.
  • Artificial compound eye technology developed from the inspiration of this unique compound eye structure. Compared with traditional optical systems, an artificial compound eye has the advantages of small size, light weight, large field of view, and high sensitivity, and therefore has broad application prospects.
  • Research on artificial compound eye technology involves radar systems, missile guidance devices, micro air vehicles, ship search and tracking systems, night vision equipment, micro compound-eye cameras, robotics, and other fields.
  • For example, artificial compound eye technology can be used in the vision system of an intelligent robot: by processing the external information collected by the artificial compound eye detector, the system can realize target recognition, tracking, speed measurement, and so on.
  • However, currently disclosed artificial compound eyes usually use multiple cameras or camera arrays, capture corresponding images with different cameras, and then perform algorithmic stereo matching to obtain stereo vision. This differs from the principle of animal compound eyes and cannot be regarded as a compound eye in the true sense; it is not only bulky but also computationally expensive.
  • The invention provides a compound eye camera device that adopts a bionic structural design, aiming to obtain accurate three-dimensional spatial information and achieve a better stereo vision function.
  • the present invention additionally provides a compound eye system including the compound eye imaging device.
  • A compound-eye camera device includes an ommatidium column and a processor. The ommatidium column includes a plurality of ommatidia that do not interfere with one another optically and are arranged in a row. Each ommatidium includes an optical element and at least one photosensitive unit arranged near the focal plane of the optical element; the optical element faces the subject and receives the light beams incident within the field of view. Each ommatidium column corresponds to at least one ommatidium-column view plane, which passes through the optical center of each ommatidium in the column and near the center of at least one photosensitive unit of each ommatidium. Each photosensitive unit intersects at least one ommatidium-column view plane, and the line of sight of each photosensitive unit passes through the center of that unit and the optical center of its ommatidium. The processor is configured to generate an image based on the information received by the photosensitive units in the ommatidia and to process the image to obtain information about the subject.
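  • To make this geometry concrete, the sketch below models an ommatidium column as a set of pinhole elements, each with an optical center and photosensitive-unit centers on the focal plane behind it. This is an illustrative toy model written for this text, not code from the patent; the class name, layout, and numeric values are all assumptions.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Ommatidium:
    """Toy pinhole model of one ommatidium."""
    c: np.ndarray      # optical center, shape (3,)
    units: np.ndarray  # photosensitive-unit centers on the focal plane, shape (n, 3)

    def sight_line(self, k: int):
        """Line of sight of photosensitive unit k: the ray through the unit's
        center and the optical center, extended toward the scene."""
        d = self.c - self.units[k]             # from the pixel toward the scene
        return self.c, d / np.linalg.norm(d)   # (origin, unit direction)


# An ommatidium column: optical centers in a row with pitch l, focal length f,
# and two photosensitive units per ommatidium at offsets +/- h/2 (all assumed).
l, f, h = 1.0, 2.0, 0.2
column = [
    Ommatidium(
        c=np.array([k * l, 0.0, 0.0]),
        units=np.array([[k * l - h / 2, 0.0, -f],
                        [k * l + h / 2, 0.0, -f]]),
    )
    for k in range(8)
]
origin, direction = column[0].sight_line(0)    # the left unit looks to the right
```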
  • In one embodiment, each ommatidium includes one photosensitive unit, and the lines of sight of the photosensitive units do not cross one another within the field of view of the ommatidium column.
  • In another embodiment, each ommatidium includes one or more photosensitive units, and within the field of view of the ommatidium column, the line of sight of at least one photosensitive unit crosses the line of sight of another photosensitive unit.
  • The compound-eye camera device may include two or more ommatidium columns; an angle is formed between the view planes of different ommatidium columns, and each ommatidium belongs to one or more ommatidium columns.
  • A plurality of ommatidium columns adjacent in sequence form an ommatidium array, and the optical elements of the ommatidia in the array are arranged on a honeycomb-like curved surface or in a two-dimensional plane.
  • The processor includes a single compound-eye image imaging unit configured to, after obtaining the information of the photosensitive units in each ommatidium, process all or part of the information of the photosensitive units whose lines of sight do not cross one another and form it into an image, obtaining at least one single compound-eye image.
  • The processor further includes a compound-eye matching unit, a parallax calculation unit, and a position analysis unit. The compound-eye matching unit is configured to match two or more single compound-eye images that belong to the same ommatidium-column view plane and in which the lines of sight of at least some photosensitive units cross one another, obtaining a group of matched compound-eye images; each matched compound-eye image includes pixels generated from the information of the subject at the intersections of the lines of sight. The parallax calculation unit is configured to obtain the parallax between those pixels in the matched compound-eye images. The position analysis unit is configured to obtain the information of the subject located at the intersections of the lines of sight based on the information of the ommatidium column and the parallax information.
  • The single compound-eye images in a group of matched compound-eye images are all obtained from the information of the photosensitive units in the ommatidia acquired at the same time or within the same time period.
  • The position analysis unit is further configured to obtain motion information of the subject within the field of view from the information of the subject at multiple times or time periods.
  • The compound-eye camera device further includes a storage unit and a display unit. The storage unit is configured to store the single compound-eye images, the matched compound-eye images, and the information of the subject. The display unit is configured to output and display the single compound-eye images, or to output and display at least one of the texture color, three-dimensional pose, and shape of the subject calculated from the information of the subject obtained by the position analysis unit.
  • The same ommatidium column includes at least one ommatidium sub-column; each sub-column includes a plurality of ommatidia adjacent in sequence, and a set spacing is left between two adjacent sub-columns.
  • An ommatidium that does not belong to the same ommatidium column as the sub-columns may be arranged between two adjacent sub-columns.
  • The line connecting the optical centers of the ommatidia in each sub-column is a straight line segment or an arc segment.
  • The line of sight of each photosensitive unit has a divergence angle associated with the photosensitive area of the unit. The divergence angle is less than or equal to the angle between the lines of sight of two adjacent photosensitive units in the same ommatidium-column view plane; when the optical centers lie on an arc segment, the divergence angle is also less than or equal to the included angle between the axes of two adjacent ommatidia.
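  • A minimal sketch of these two angle conditions, assuming a simple pinhole model in which a photosensitive unit of active width a at focal length f subtends a divergence angle of 2·atan(a/2f); the function names and numbers below are ours, not the patent's.

```python
import math


def divergence_angle(active_width: float, f: float) -> float:
    """Full divergence angle of one photosensitive unit's line of sight,
    for an active surface of the given width seen through the optical
    center at focal length f (assumed pinhole model)."""
    return 2.0 * math.atan2(active_width / 2.0, f)


def adjacent_sight_angle(pitch: float, f: float) -> float:
    """Paraxial angle between the sight lines of two neighboring
    photosensitive units spaced by `pitch` in the same view plane."""
    return math.atan2(pitch, f)


# The embodiment's condition: divergence <= angle between adjacent sight
# lines. With a full-fill pixel (active width == pitch) the two angles are
# nearly equal, so an active area slightly smaller than the pitch
# satisfies the condition strictly.
f, pitch = 2.0, 0.2
assert divergence_angle(0.8 * pitch, f) <= adjacent_sight_angle(pitch, f)
```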
  • The optical element in each ommatidium is a microlens; the diameters of the microlenses are identical or not all identical, and likewise their focal lengths.
  • the cross-section of the microlens of each ommatidium perpendicular to the viewing plane of the ommatidium array is a circle, an ellipse, or a polygon.
  • the number of the photosensitive units in each ommatidium is the same.
  • the information received by the photosensitive unit includes intensity information and color information of the incident light beam corresponding to the line of sight.
  • The ommatidia in the ommatidium column are integrated on the same semiconductor substrate, and adjacent ommatidia are isolated from each other by a dielectric.
  • the present invention also provides a compound eye system.
  • the compound eye system includes a plurality of the above compound eye imaging devices arranged at a set interval.
  • a plurality of the compound eye camera devices are symmetrical with respect to a center line.
  • the compound eye system further includes a control device for controlling the posture of the ommatidium array, and the control device is connected to the processor of each compound eye camera device.
  • In summary, the compound-eye camera device includes at least one ommatidium column and a processor. The ommatidium column includes a plurality of ommatidia that do not interfere with one another optically and are arranged in a row; each ommatidium includes an optical element and at least one photosensitive unit arranged near the focal plane of the optical element.
  • The compound-eye camera device works much like the compound eye of a compound-eye animal. Specifically, each ommatidium in the ommatidium column plays the role of a single eye in an animal compound eye, and each ommatidium column corresponds to at least one ommatidium-column view plane. The view plane passes through the optical center of each ommatidium in the column and near the center of at least one photosensitive unit of each ommatidium; each photosensitive unit intersects at least one view plane, and the line of sight of each photosensitive unit passes through the center of that unit and the optical center of its ommatidium. The lines of sight of different photosensitive units in the same view plane may or may not cross; correspondingly, images within the field of view can be obtained through the photosensitive units with high definition.
  • The compound-eye camera device provided by the present invention therefore has a bionic design very close to an animal compound eye, which helps obtain accurate three-dimensional spatial information and achieve a better stereo vision function.
  • The compound eye system provided by the present invention includes a plurality of the above compound-eye camera devices arranged at a set interval. The compound-eye camera devices can perform two-dimensional or three-dimensional detection from different directions, which helps obtain accurate two-dimensional and three-dimensional spatial information and achieve better stereo vision. The system can be used in robot vision, aircraft, and other fields, and has broad application prospects.
  • FIG. 1 is a schematic diagram of the structure of a compound eye imaging device according to an embodiment of the present invention.
  • Fig. 2(a) is a schematic diagram of imaging of the ommatidium in an embodiment of the present invention.
  • Fig. 2(b) is a schematic diagram of an ommatidium in Fig. 2(a).
  • Fig. 2(c) is a schematic diagram of the optical cone corresponding to a photosensitive unit in Fig. 2(a) and the surface sensing light beam of the pixel.
  • Fig. 3 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of the ommatidium arranged in a hexagonal shape in a compound-eye imaging device according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the visible range of the compound eye camera device in an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of the visible range of the compound eye camera device in an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of the visible range of the compound eye camera device in an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of the optical cone of the ommatidium array and the surface sensing light beam of the pixel in an embodiment of the present invention.
  • FIG. 13 is a schematic plan view of an ommatidium array in an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of distance measurement of a compound eye camera device according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of distance measurement of a compound eye camera device in an embodiment of the present invention.
  • the present invention provides a compound eye camera device and a compound eye system.
  • The compound-eye camera device proposed by the present invention includes an ommatidium column and a processor. The ommatidium column includes a plurality of ommatidia that do not interfere with one another optically and are arranged in a row; each ommatidium includes an optical element and at least one photosensitive unit arranged near the focal plane of the optical element, the optical element facing the subject and receiving the light beams incident within the field of view. Each ommatidium column corresponds to at least one ommatidium-column view plane that passes through the optical center of each ommatidium in the column and near the center of at least one photosensitive unit of each ommatidium; each photosensitive unit intersects at least one view plane, and the line of sight of each photosensitive unit passes through the center of that unit and the optical center of its ommatidium. The processor is configured to generate an image based on the information received by the photosensitive units in the ommatidia and to process the image to obtain information about the subject.
  • the compound eye system proposed by the present invention includes the compound eye camera device.
  • In a bionic sense, the compound-eye camera device proposed by the present invention is similar to the compound eye of a compound-eye animal, and the compound-eye system is similar to the visual system of such an animal.
  • A sufficient number of ommatidia can be provided in the ommatidium column as required, and "column" may equally be read as "row" here.
  • the optical element of each ommatidium in the ommatidium column is used to receive light, and therefore is arranged on the same side of the ommatidium column, and is used to face the subject during shooting.
  • incident light passes through the optical element and reaches the corresponding photosensitive unit to be sensed and converted into image information.
  • The subject of the compound-eye camera device can be any object or creature in space, and the subject can be regarded as a combination of spatial points with a certain texture, color, three-dimensional pose, and shape.
  • the compound eye camera device has a bionic design that is very close to the compound eye of an animal, which is conducive to obtaining accurate two-dimensional or three-dimensional spatial information, and helps to achieve better stereo vision.
  • This embodiment introduces the main structure and function of the compound eye camera device of the present invention.
  • FIG. 1 is a schematic diagram of the structure of a compound eye imaging device according to an embodiment of the present invention.
  • the compound-eye camera device includes the ommatidium array and the processor described above.
  • The compound-eye camera device may include one or more ommatidium columns; a plurality of ommatidium columns can form an ommatidium array, and an included angle is formed between the view planes of different ommatidium columns. Each ommatidium may belong to one or more ommatidium columns.
  • Each ommatidium may include one or more photosensitive units, and the lines of sight of the photosensitive units lying in the same view plane either cross or do not cross. Crossing here means that the lines of sight intersect at a point in front of the ommatidium column, that is, on the side of the optical elements and photosensitive units that receives the incident light, within the field of view of the ommatidium column.
  • The processor is configured to generate images based on the received information of the photosensitive units in the ommatidia, and to process the images to obtain information about the subject. To process the information of the photosensitive units and obtain information about the subject, the processor may optionally include the following components or modules: a single compound-eye image imaging unit, a compound-eye matching unit, a parallax calculation unit, and a position analysis unit. Their specific functions are described below.
  • The single compound-eye image imaging unit is configured to, after obtaining the information from the photosensitive units of each ommatidium in the column, process all or part of the information of the photosensitive units whose lines of sight do not cross one another and compose it into an image, obtaining at least one single compound-eye image.
  • A single compound-eye image records the subject along different sight directions, one photosensitive unit per direction. Although it has no stereoscopic effect, there is no focusing problem, so, unlike an ordinary 2D camera, no dedicated focusing lens is needed; planar vision with better definition can thus be realized.
  • Since the ommatidium column is a structure composed of multiple ommatidia, the lines of sight of photosensitive units can cross (crossing here means crossing within the field of view of the ommatidium column, excluding intersections at the optical center of the same ommatidium). The parallax can then be obtained by processing the two single compound-eye images containing the two photosensitive units whose lines of sight cross.
  • Optional methods for calculating the parallax include block matching, deep-neural-network learning, and feature matching; published parallax-calculation methods for binocular vision sensors can also be adopted.
  • The position of each intersection is obtained using the information of the photosensitive units whose lines of sight cross, so that the compound-eye camera device can produce stereo vision.
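  • As one concrete instance of the block matching named above, the sketch below computes an integer parallax map between two single compound-eye images from the same view plane by minimizing a sum of absolute differences. It is a generic stereo baseline written for illustration, not the patent's prescribed algorithm; the function name and parameters are assumptions.

```python
import numpy as np


def block_match_parallax(left: np.ndarray, right: np.ndarray,
                         max_disp: int = 16, block: int = 5) -> np.ndarray:
    """Brute-force SAD block matching between two single compound-eye
    images; returns, per pixel of `left`, the horizontal offset of the
    best-matching block in `right`."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    r = block // 2
    pad_l = np.pad(left, r, mode="edge")
    pad_r = np.pad(right, r, mode="edge")
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch = pad_l[y:y + block, x:x + block]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # candidate shifts leftward
                cand = pad_r[y:y + block, x - d:x - d + block]
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```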
  • As described above, the processor also includes a compound-eye matching unit, a parallax calculation unit, and a position analysis unit.
  • The compound-eye matching unit is configured to match the single compound-eye images that belong to the same ommatidium-column view plane and in which the lines of sight of at least some photosensitive units cross one another, obtaining a group of matched compound-eye images. Each single compound-eye image in the group includes pixels formed from the information of the subject at the intersections of the lines of sight.
  • The parallax calculation unit is configured to obtain the parallax between the pixels generated, in the matched compound-eye images, from the information of the subject at the intersections of the lines of sight.
  • the position analysis unit is configured to obtain information of the subject located at the intersection of the lines of sight based on the information of the ommatidium column and the parallax information.
  • the compound-eye camera device may further include a storage unit and a display unit, the storage unit being configured to store the single compound-eye image, the matched compound-eye image, and information of the subject.
  • The storage unit can be implemented with a medium such as a random access memory (RAM), a read-only memory (ROM), a hard disk, a magnetic disk, an optical disc, or a register in a central processing unit (CPU).
  • The display unit is configured to output and display the single compound-eye images, or to output and display, based on the information of the subject acquired by the position analysis unit, at least one of the texture color, three-dimensional pose, and shape of the subject.
  • the display unit may include a display, and the display may be a flat image display or a three-dimensional image display.
  • The processor's single compound-eye image imaging unit, compound-eye matching unit, parallax calculation unit, position analysis unit, and so on can be combined into one module for implementation, or any one of them can be split into multiple modules; alternatively, at least part of the functions of one or more of these units can be combined with at least part of the functions of other units and implemented in one module.
  • At least one of the single compound-eye image imaging unit, the compound-eye matching unit, the parallax calculation unit, and the position analysis unit may be at least partially implemented as a hardware circuit, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system in package, or an application-specific integrated circuit (ASIC), or in hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by an appropriate combination of the three implementation modes of software, hardware, and firmware.
  • The signal processing unit, image processing unit, position analysis unit, storage unit, and output unit may be at least partially implemented as computer program modules; when the program is run by a computer, the function of the corresponding module is executed.
  • the compound-eye camera device can be manufactured using an integrated circuit manufacturing process.
  • the compound-eye camera device is a chip-level device.
  • The ommatidia in each ommatidium column of the compound-eye camera device may be integrated on the same substrate, such as a silicon (Si) substrate, a germanium (Ge) substrate, or another semiconductor substrate, or a ceramic substrate of a material such as alumina, or a quartz or glass substrate.
  • each ommatidium is isolated by a medium, and the medium is preferably a light-blocking material.
  • the compound-eye camera device can realize planar vision (or 2D vision) and stereo vision (or 3D vision).
  • This embodiment mainly introduces the ommatidium array of the compound-eye imaging device of the present invention.
  • Fig. 2(a) is a schematic diagram of imaging of the ommatidium in an embodiment of the present invention.
  • Fig. 2(b) is a schematic diagram of an ommatidium in Fig. 2(a).
  • Fig. 2(c) is a schematic diagram of the optical cone corresponding to a photosensitive unit in Fig. 2(a) and the surface sensing light beam of the pixel.
  • the incident light beams L1, L2, and L3 are incident on the ommatidium column 100 and projected into each ommatidium 10 along the line of sight of the photosensitive unit in each ommatidium.
  • When a parallel light beam enters the ommatidium 10, it is focused onto the focal plane 10a through the optical center C (more specifically, the optical center of the optical element in the ommatidium); that is, any point on the focal plane 10a is the convergence point of the parallel beam whose direction is given by the line through that point and the optical center C.
  • the focal plane 10a of the ommatidium may be a flat surface, or it may have a certain curvature.
  • In Fig. 2(a), three points on the focal plane 10a of each ommatidium 10 are shown; the lines connecting these three points to the optical center C are the axes of the light beams in the corresponding incident directions. In Fig. 2(b), the three points on the focal plane 10a of the single ommatidium 10 of Fig. 2(a) correspond to incident light beams from different directions.
  • A photosensitive element can be arranged near the focal plane 10a of the optical element of the ommatidium 10 to obtain the intensity (or brightness and color) information of the incident beam, with the photosensitive surface 11a of each pixel in the photosensitive element set at the focal plane 10a of the corresponding optical element. The cone formed by the photosensitive surface 11a and the optical center C of the corresponding optical element, extended through the optical center in the opposite direction, is called the optical center cone (indicated by the dotted lines in Fig. 2(c)); it is the cone swept out by the lines through the optical center from all points within the range of the photosensitive surface. The incident beam actually received by the photosensitive element over the photosensitive surface 11a is this optical center cone thickened to the light-transmitting area of the optical element, a frustum-shaped beam that can be called the pixel surface beam corresponding to that photosensitive surface (indicated by the long and short dashed lines in Fig. 2(c)).
  • the compound-eye imaging device of the present invention utilizes the above-mentioned optical principle.
  • The compound-eye camera device includes a plurality of ommatidia, each arranged in a column (or row) together with at least two other ommatidia; such an arrangement is called an ommatidium column.
  • Each ommatidium includes an optical element and at least one photosensitive unit arranged near the focal plane of the optical element, and the optical element is used to face the subject and receive the incident light beam.
  • The ommatidia do not interfere with one another optically, so the light entering an ommatidium is sensed only at that ommatidium's focal plane.
  • Each ommatidium 10 of the ommatidium column 100 includes an optical element and a photosensitive element arranged on one side of the focal plane of the optical element; the photosensitive element may include one or more photosensitive units (also called photosensitive pixels). Referring to Fig. 2(a), three photosensitive units can be arranged at the three convergence points on the focal plane 10a of each ommatidium, and the optical axis determined by the center of a photosensitive unit and the optical center C is the line of sight of that photosensitive unit.
  • Incident beams from different directions are converged by the optical element and sensed by different photosensitive units located near the focal plane; the direction of an incident beam is therefore related to the position of the photosensitive unit. Within the field of view of the ommatidium column 100, the line of sight of a photosensitive unit will cross the lines of sight of some photosensitive units in other ommatidia; the images generated by photosensitive units whose lines of sight cross carry parallax, and the parallax can be obtained by calculation.
  • Each ommatidium column corresponds to at least one ommatidium-column view plane (in Fig. 2(a), the plane parallel to the paper). The view plane passes through the optical center of each ommatidium in the column and near the center of at least one photosensitive unit of each ommatidium; each photosensitive unit intersects at least one view plane, and its line of sight lies in the corresponding view plane. Different photosensitive units may belong to the same view plane or to different view planes, and the lines of sight of the photosensitive units belonging to the same view plane lie in that view plane.
  • The compound-eye camera device may include one or more ommatidium columns, and two or more ommatidium columns can be arranged in a certain pattern to form an ommatidium array.
  • Different ommatidium columns do not share a common view plane; that is, an included angle greater than 0 is formed between their respective view planes. Each ommatidium may belong to one or more ommatidium columns, according to which view planes pass through its optical center and photosensitive units.
  • The ommatidia in an ommatidium column are usually arranged compactly, as shown in Fig. 2, that is, a plurality of ommatidia adjacent to one another in sequence, but the arrangement is not limited to this: spacing can be left between ommatidia, and an ommatidium of another ommatidium column can be placed in that spacing.
  • The same ommatidium column may also include at least one ommatidium sub-column; each sub-column includes a plurality of ommatidia adjacent to each other in sequence, and a predetermined spacing is maintained between adjacent sub-columns.
  • The line connecting the optical centers of the ommatidia can be a straight line segment or an arc segment.
  • Fig. 3 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • In Fig. 3, each ommatidium in the ommatidium column includes one photosensitive unit, and the lines of sight of the photosensitive units do not cross one another within the field of view of the column.
  • A single compound-eye image can be obtained from the information of the photosensitive units.
  • The single compound-eye image is a two-dimensional image of the subject within the field of view of the ommatidium column; an ommatidium column of this structure can therefore be used for two-dimensional vision, like a 2D camera.
  • In one embodiment, a plurality of ommatidium columns as shown in Fig. 3 are arranged on a spherical surface, and a corresponding single compound-eye image can be obtained from the image information captured by the photosensitive units at the same time or within the same time period.
  • Fig. 4 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • In Figs. 4 and 5, each ommatidium in the ommatidium column again includes one photosensitive unit, but, by designing the relative positions of the ommatidia and the positions of the photosensitive units, the lines of sight of the photosensitive units cross within the field of view of the ommatidium column, unlike in Fig. 3. A group of matched compound-eye images can therefore be formed from two or more single compound-eye images, and a stereoscopic visual effect can be realized.
  • In these embodiments, the line connecting the optical centers of the ommatidia in the column is a straight line segment; as the distance from the ommatidium column increases, the ranging accuracy of the column in the depth direction of the field of view remains basically unchanged.
  • Fig. 6 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • In Fig. 6, the ommatidium column includes two sub-columns; each sub-column includes a plurality of ommatidia adjacent in sequence, and within each sub-column every ommatidium includes one photosensitive unit, with the lines of sight of these units not crossing one another. A certain distance is maintained between the two adjacent sub-columns. Taken as a whole, the lines of sight of the photosensitive units in the ommatidium column at least partially cross, so an ommatidium column composed of two or more such sub-columns can also be used to realize stereo vision.
  • Fig. 7 is a schematic diagram of an ommatidium array in an embodiment of the present invention.
  • In Fig. 7, the ommatidium column includes two sub-columns; each sub-column includes a plurality of ommatidia adjacent in sequence, and each ommatidium is provided with more than one photosensitive unit. The lines of sight of the photosensitive units within each sub-column cross in the field of view, and the lines of sight of photosensitive units in different sub-columns also cross in the field of view; compared with the ommatidium column shown in Fig. 6, the stereo vision generated by the crossing sight lines of the two sub-columns in Fig. 7 is more accurate.
  • The stereo result calculated from each sub-column is re-measured by the combination of the two sub-columns, so the error rate is lower than with a single sub-column and the accuracy is greatly improved.
  • Each ommatidium column can serve as one compound eye of the compound-eye camera device, or each sub-column can serve as one, to realize two-dimensional and/or three-dimensional vision.
  • The incident beam detected by each photosensitive unit has a divergence angle associated with the photosensitive area of the unit. The divergence angle is preferably less than or equal to the angle between the lines of sight of two adjacent photosensitive units in the same ommatidium-column view plane; when the optical centers lie on an arc segment, the divergence angle is preferably also less than or equal to the included angle between the axes of two adjacent ommatidia. The axis of an ommatidium is the straight line perpendicular to the incident surface of its optical element and passing through its optical center.
  • In the embodiments of the present invention, the arrangement of the ommatidia in the column and the number of photosensitive units can be set as needed; according to whether the lines of sight of the photosensitive units cross, the compound-eye camera device containing the ommatidium column is equipped with two-dimensional planar vision and/or stereo vision.
  • This embodiment mainly introduces the specific structure of the compound eye imaging device of the embodiment of the present invention.
  • The optical element in each ommatidium may be a microlens, here a convex lens whose focal plane is on the side opposite the subject; in the description below, the ommatidia are described with microlenses as their optical elements. In other embodiments, the optical element of at least one ommatidium may also be a composite imaging objective including multiple optical components (such as one or more lenses, filters, and/or apertures). In a bionic sense, the function of the optical element in an ommatidium is similar to that of the lens in a single eye of a compound-eye animal.
  • The photosensitive units corresponding to the same optical element in an ommatidium can be regarded as pixels arranged on the focal plane of that optical element, and an array of at least two photosensitive units corresponding to the same optical element can be referred to as a pixel array. In a bionic sense, the multiple photosensitive units perform functions similar to the photoreceptor cells under each single eye in an animal compound eye.
  • each photosensitive unit may include a photodiode and a plurality of MOS transistors used as a driving circuit.
  • the photodiodes can convert incident light signals into electrical signals.
  • Each ommatidium may further include a peripheral circuit electrically connected to each photosensitive unit, so as to transmit the image signals generated by the photosensitive units for the incident beams to the processor.
  • the photosensitive unit is arranged near the focal plane of the optical element, so that the photosensitive surface of each photosensitive unit (for example, the PN junction layer in the photodiode) is located on the focal plane of the optical element to detect the light information on the corresponding line of sight.
  • The structure of the pixel array formed by the photosensitive units can be implemented in accordance with published technology, as long as the above functions can be realized.
  • The photosensitive units at different positions in each ommatidium detect the intensity of incident beams from different directions. The photosensitive units in each ommatidium can also obtain the color information of the received incident beam, which helps capture the information of the subject more comprehensively.
  • the shape of the ommatidium can be set similarly to the pixels in a normal camera or display.
  • For example, the incident surface of each ommatidium can be quadrilateral, with the ommatidia arranged in rows and columns, or hexagonal, with the ommatidia arranged like a honeycomb.
  • the optical elements of each ommatidium can be arranged in a honeycomb curved surface or in a two-dimensional plane.
  • FIG. 8 is a schematic diagram of the ommatidium arranged in a hexagonal shape in a compound-eye imaging device according to an embodiment of the present invention.
  • In Fig. 8, the planar shape of each ommatidium 20 is a triangle, and six ommatidia are arranged to form a hexagonal structure. For such an arrangement, the light-transmitting surface of the optical element can be set to a hexagon; for a tetragonal arrangement of ommatidia, it can be set to a quadrilateral.
  • The embodiments shown in Figs. 2 to 7 are described by taking ommatidia whose optical elements are circular microlenses as an example, but the present invention is not limited to this.
  • The structures of the optical elements of the ommatidia in an ommatidium column may be the same or different; optical elements of one structure, or of two or more different structures, can be selected to form the ommatidium columns and the ommatidium array.
  • For example, the ommatidium array may have a curved structure in which most regions use ommatidia of a first structure, all including microlenses of the same shape and size together with their photosensitive units, while some regions use ommatidia of a second structure, which include microlenses of a shape and size different from the first structure, or other kinds of optical elements. The second-structure ommatidia can be interspersed within the ommatidium array according to a certain rule.
  • The structures of the photosensitive elements provided for the optical elements of the ommatidia may likewise be the same or different. For example, some ommatidia in the same column use a first type of photosensitive element, all with the same number and arrangement (including photosensitive-surface size, spacing, and so on) of photosensitive units, while other ommatidia use a second type of photosensitive element whose number and/or arrangement of photosensitive units differs from the first type.
  • the diameters of the individual microlenses may be the same or not all the same, and their focal lengths may also be the same or not all the same.
  • The structure of each ommatidium can be set as required, and multiple arrangements of the ommatidia are possible. Preferably, in the ommatidium column, and especially within each sub-column, the incident surfaces of the ommatidia are close to one another: the microlenses of two adjacent ommatidia are arranged adjacently, and the microlenses of the ommatidia are closely packed in the plane of the incident surface.
  • The cross-section of each ommatidium's microlens parallel to the incident surface may be circular, elliptical, or polygonal (such as quadrilateral, pentagonal, hexagonal, heptagonal, or octagonal).
  • The number of photosensitive units corresponding to each ommatidium in the ommatidium column may be the same or different.
  • The photosensitive units in the ommatidia of a column may correspond to one another, that is, each ommatidium is provided with a photosensitive unit at the same position detecting a line of sight in the same direction. The lines of sight of the units corresponding to the same sight direction do not cross, so the information obtained by each such set of units can be used to generate a single compound-eye image. The information acquired by the different sets is reflected in different single compound-eye images; because these images all contain the information of the subject located at the intersections of the lines of sight, the single compound-eye images belonging to the same view plane in which at least some of the units' lines of sight cross one another can be matched to obtain a group of matched compound-eye images. Each intersection corresponds to at least two single compound-eye images in the group, and a group of matched compound-eye images for the same ommatidium column relates the single compound-eye images formed by the photosensitive units of that column.
  • This embodiment mainly introduces the field of view and the ranging accuracy of the compound eye camera device related to the embodiment of the present invention.
  • For an ommatidium column, the field of view and the angle of view are determined by the positions of the ommatidia at the two ends of the column and of their photosensitive units.
  • For ommatidium columns with crossing lines of sight, as shown in Fig. 2(a) and Figs. 4 to 7, more than one single compound-eye image can be obtained, and a group of matched compound-eye images can be obtained accordingly. From the position and direction of the outermost sight lines of the outermost ommatidia of the column that can still intersect the sight lines of other ommatidia, the stereoscopic visible range of the compound-eye camera device can be obtained.
  • FIG. 9 is a schematic diagram of the visible range of the compound eye camera device in an embodiment of the present invention.
  • In Fig. 9, each ommatidium 20 has two photosensitive units 21 located in the focal plane. The two sight directions, determined by the lines connecting the centers of the two photosensitive units 21 with the optical center C of the ommatidium 20, are the two incident directions of the beams that the ommatidium column 200 can acquire; beams from these two directions can be acquired by all the photosensitive units 21 in the ommatidia 20. The intersections of the sight lines of all the ommatidia 20 in the column 200 are the spatial points whose distance can be measured from the beams in these two directions, and the spacing of these measurable points in the normal direction of the column 200 is the ranging accuracy D1 of the ommatidium column in the spatial depth direction. The ranging accuracy here reflects the distance, in the normal direction, between two adjacent spatial points that can be distinguished by the same photosensitive unit 21. With this arrangement, D1 remains basically unchanged along the depth direction.
  • The field angle (FOV) of the ommatidium column 200 can also be obtained. Since the line connecting the center of a photosensitive unit 21 in the column 200 with the optical center C of the corresponding ommatidium determines a measurable sight direction, the field angle of the column 200 is related to the width of the pixel array formed by the photosensitive units in each ommatidium 20. The field angle of an ommatidium column here reflects the angle by which the stereoscopic visible range spreads outward from the normal direction, and is also called the stereoscopic field angle. In the embodiment shown in Fig. 9, the visible range bounded by the two thick lines is the field of view of the compound-eye camera device; since the stereoscopic visible range does not spread outward from the straight ommatidium column, the field angle of the ommatidium column shown in Fig. 9 is less than zero.
  • FIG. 10 is a schematic diagram of the visible range of the compound eye camera device in an embodiment of the present invention.
  • In Fig. 10, each ommatidium 30 has four photosensitive units 31 located in the focal plane. The sight directions determined by the lines connecting the centers of the four photosensitive units 31 with the optical center C of the corresponding ommatidium are the four measurable sight directions of the ommatidium column 300. Here, the stereoscopic field angle, determined by the outermost sight line of an outermost ommatidium that can still intersect the sight lines of other ommatidia, is greater than 0 (the range bounded by the two thick lines in Fig. 10).
  • However, the ranging accuracy D2' in the peripheral area of the field of view in Fig. 10 is significantly lower than the ranging accuracy D2 in the central area. Therefore, if good ranging accuracy must be maintained across a two-dimensional plane, the field of view of an ommatidium column arranged in straight segments is relatively narrow. In other words, when the ommatidium column of the present invention is used to make a compound-eye camera device and the column cannot be made long enough in a certain direction, arranging the ommatidia in arc segments should be considered so that the measurable range is not too narrow. This is further illustrated by the example and the figure below.
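  • The sign of the stereoscopic field angle discussed for Figs. 9 and 10 can be checked with a small brute-force model: take the end ommatidium of a straight column, enumerate its sight lines, and keep the most outward-pointing one that still crosses a sight line of another ommatidium in front of the column. This is our own illustrative reconstruction of the geometry under a pinhole assumption, not code or a definition from the patent.

```python
import math


def stereo_field_half_angle(centres_x, pixel_offsets, f):
    """Outward angle (rad) of the outermost sight line of the left-end
    ommatidium of a straight column that still crosses another ommatidium's
    sight line in front of the column. Negative means the stereoscopic
    visible range converges inward, as in Fig. 9."""
    x_end = min(centres_x)
    # a pixel at offset +u from the axis looks toward -u (angle from normal)
    end_lines = [math.atan2(-u, f) for u in pixel_offsets]
    other_lines = [(cx, math.atan2(-u, f))
                   for cx in centres_x if cx != x_end for u in pixel_offsets]
    best = -math.inf
    for a1 in end_lines:
        for x2, a2 in other_lines:
            if a1 == a2:
                continue  # parallel sight lines never cross
            z = (x2 - x_end) / (math.tan(a1) - math.tan(a2))
            if z > 0:  # the two sight lines cross in front of the column
                best = max(best, -a1)  # outward at the left end = leftward
    return best


centres = [k * 1.0 for k in range(8)]
print(stereo_field_half_angle(centres, [-0.1, 0.1], f=2.0))             # < 0 (Fig. 9)
print(stereo_field_half_angle(centres, [-0.3, -0.1, 0.1, 0.3], f=2.0))  # > 0 (Fig. 10)
```

For an arc arrangement as in Fig. 11 below, the axis tilt of the end ommatidium adds directly to this angle, which is one way to see why the arc-segment column has the larger field angle.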
  • FIG. 11 is a schematic diagram of the visible range of the compound eye camera device in an embodiment of the present invention.
  • In Fig. 11, the ommatidia 40 of the ommatidium column 400 are arranged along an arc segment (the line connecting their optical centers is an arc). Each ommatidium 40 has two photosensitive units located in the focal plane, and the sight directions determined by the lines connecting the two photosensitive units with the optical center of the corresponding ommatidium 40 are the measurable sight directions of the column 400.
  • Fig. 11 also illustrates the stereoscopic visible range of the ommatidium column 400 (the range bounded by the two thick lines in Fig. 11).
  • It can be seen that the field angle of the arc-segment ommatidium column 400 is larger than the field angle of the straight-segment ommatidium column 200 shown in Fig. 9 with the same number of photosensitive units.
  • In the normal direction, the ranging accuracy D3 of the ommatidium column gradually decreases with distance from the column 400.
  • The maximum resolution is mainly related to the spacing of adjacent ommatidia and the positions of the outermost photosensitive units; the ommatidium spacing and the photosensitive-unit positions can therefore be adjusted as needed to improve the resolution of the compound-eye camera device.
  • The arrangement and spacing of the plurality of ommatidia in the ommatidium column of the present invention can be set according to the requirements on field of view and ranging accuracy. The angle of view can be increased by increasing the number of photosensitive units in each ommatidium or by arranging the ommatidia in arc segments; from the perspective of effect and process difficulty, the arc-segment (or spherical) arrangement of ommatidia is preferable. The larger the field of view, the farther the compound-eye camera device can "see", and the farther the subject, the smaller its image. With a straight (planar) arrangement, the spatial resolution is basically independent of distance, but the viewing range is limited by the two-dimensional extent of the ommatidium array and takes the form of a columnar space. Therefore, when the ommatidium column of an embodiment of the present invention is used to make a compound-eye camera device, ommatidia arranged in arc segments (or on spherical surfaces), as in Fig. 11, are preferable: the farther the subject, the smaller the image, and the better the bionic effect.
  • This embodiment mainly introduces the parallax caused by the ommatidium.
  • The processor can obtain single compound-eye images through the single compound-eye image imaging unit, and, from at least two single compound-eye images, the compound-eye matching unit obtains a group of matched compound-eye images. The single compound-eye images within a matched group exhibit parallax relative to one another.
  • FIG. 12 is a schematic diagram of the optical cone of the ommatidium array and the surface sensing light beam of the pixel in an embodiment of the present invention.
  • the left half of FIG. 12 illustrates the distribution of the optical cone, and the right half illustrates the distribution of the pixel surface sensing light beam.
  • In Fig. 12, each ommatidium has two photosensitive units arranged in the photosensitive surface. The lines of sight of the photosensitive units at one corresponding position in each ommatidium do not cross within the field of view of the ommatidium column, and neither do the lines of sight of the units at the other corresponding position, so two single compound-eye images can be obtained. Between the two positions, however, the lines of sight do cross within the field of view of the column, so the two single compound-eye images record the same intersection points as pixels with parallax and form a group of matched compound-eye images.
  • As shown in Fig. 12, the first detection area S1 is the area that can be detected simultaneously by the two photosensitive units (or pixels) inside the same ommatidium A, so its parallax is 0. The second detection area S2 is the area that can be detected simultaneously by the left photosensitive unit of ommatidium A and the right photosensitive unit of ommatidium B, with a parallax of 1 (taking the ommatidium pitch as the unit). The third detection area S3 can be detected simultaneously by the left photosensitive unit of ommatidium A and the right photosensitive unit of ommatidium C, with a corresponding parallax of 2, and the fourth detection area S4 by the left photosensitive unit of ommatidium A and the right photosensitive unit of ommatidium D, with a corresponding parallax of 3. Continuing in this way, the parallax produced after the pixel surface beams of the different detection areas enter the ommatidia is obtained.
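  • Under this two-units-per-ommatidium geometry, the depth of each detection area follows from similar triangles: the left unit of one ommatidium and the right unit of an ommatidium n pitches away cross at depth z = n·l·f/h, where l is the ommatidium pitch, f the focal length, and h the spacing of the two unit centers. The relation and all symbols below are our illustrative reconstruction of Fig. 12, not a formula stated in the patent.

```python
def crossing_depth(n: int, l: float, f: float, h: float) -> float:
    """Depth of the detection area with parallax n (in ommatidium units):
    the left unit of one ommatidium looks right with slope h/(2f), the
    right unit of the ommatidium n*l away looks left with slope -h/(2f),
    and the two sight lines meet where (h/f) * z = n * l."""
    return n * l * f / h


l, f, h = 1.0, 2.0, 0.2  # assumed pitch, focal length, and unit spacing
for n in range(4):       # S1..S4 in Fig. 12 correspond to parallax 0..3
    print(f"parallax {n}: crossing depth {crossing_depth(n, l, f, h):.0f}")
```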
  • Once the parallax has been obtained by computing on the matched compound-eye images as above, it can be further processed to derive the three-dimensional position of the subject, and, from the information at multiple times, the motion information of the subject in three-dimensional space can further be obtained. The three-dimensional spatial information and motion information of the subject can be expressed and output through a three-dimensional coordinate system, and can also be demonstrated through three-dimensional technologies such as VR, AR, and MR.
  • FIG. 13 is a schematic plan view of an ommatidium array in an embodiment of the present invention. Referring to FIG. 13, a plurality of ommatidium rows are arranged neatly, horizontally and vertically, in a two-dimensional plane, and each ommatidium is correspondingly provided with a number of photosensitive units arranged horizontally and vertically. In FIG. 13, the optical-center position of each ommatidium coincides with the center of the pixel array formed by its photosensitive units.
  • The optical element of each ommatidium is, for example, a microlens, and the microlens array is then the ommatidium array, denoted X×Y, where X and Y are integers greater than 1; in the illustrated embodiment, X=8 and Y=7. The pixel array formed by the photosensitive units of each ommatidium is denoted I×J, where I and J are integers greater than 1; here, I=3 and J=3. That is, FIG. 13 shows an 8×7 ommatidium array in which each ommatidium has 3×3 pixels.
  • In this embodiment, the process by which the processor obtains the position (or depth) of a spatial point may include the following steps.
  • First, after the image signals generated by the photosensitive units of all the ommatidia have been obtained, the image signals of those photosensitive units — one from each ommatidium — whose sight lines do not cross one another are processed into one single-compound-eye image, so that at least two single-compound-eye images are obtained.
  • The image signals of the photosensitive units of the ommatidia are preferably signals obtained at the same moment or within the same time period; by tracking the position of the subject over successive moments or over a time period, a moving subject can be captured, giving the device a dynamic imaging capability.
  • For the X×Y ommatidium array above, I×J single-compound-eye images of size X×Y can be obtained. That is, in the example shown in FIG. 13, nine single-compound-eye images can be obtained, each containing 8×7 pixels, so the size of each single-compound-eye image can be regarded as 8×7; here a pixel refers to the image signal generated by a photosensitive unit as embodied in the corresponding single-compound-eye image (an indexing sketch follows below).
  • It should be noted that the specific number of ommatidia shown in FIG. 13 is for illustration only: in other embodiments of the present invention, the number of ommatidia and the number of photosensitive units per ommatidium used to construct the compound-eye camera device may differ from those shown in FIG. 13; for example, the number of ommatidia may be much larger than the number of photosensitive units in the pixel array corresponding to each ommatidium.
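  • The grouping of photosensitive units into single-compound-eye images can be sketched as a pure indexing operation. The layout raw[x, y, i, j] below is our assumed readout order (ommatidium position first, pixel position second), not a format defined by the patent.
```python
import numpy as np

X, Y, I, J = 8, 7, 3, 3                    # array sizes of FIG. 13
raw = np.random.rand(X, Y, I, J)           # stand-in for one readout

# Photosensitive units sharing the same pixel position (i, j) have
# mutually non-crossing sight lines, so gathering them across all
# ommatidia yields one X-by-Y single-compound-eye image per (i, j):
single_images = raw.transpose(2, 3, 0, 1)  # shape (I, J, X, Y)
assert single_images.shape == (I, J, X, Y)
print(single_images[0, 0].shape)           # (8, 7): one of the 9 images
```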
  • Referring to FIG. 13, let the pitch between two adjacent microlenses — that is, between two adjacent ommatidia — be l, the pitch between two adjacent photosensitive units within an ommatidium be h, and the distance from the optical center of each microlens to the photosensitive surface be f (FIG. 14). The coordinate system of the plane of the ommatidium array is the ommatidium-array coordinate system ∑_XY, whose origin O is set at the center of the lower-left ommatidium; the position of each ommatidium is determined by its coordinates in ∑_XY, whose unit is l. In addition, the coordinate system of the photosensitive units within each ommatidium is the pixel-array coordinate system ∑_IJ, whose origin O' is set at the center photosensitive unit of each ommatidium; the position of each photosensitive unit within an ommatidium is determined by its coordinates in ∑_IJ, whose unit is h.
  • The photosensitive unit E shown in FIG. 13, for example, is the origin of the pixel-array coordinate system ∑_IJ of the microlens whose coordinates in the ommatidium-array coordinate system ∑_XY are (5, 2).
  • FIG. 14 is a schematic diagram of distance measurement by a compound-eye camera device according to an embodiment of the present invention. FIG. 14 can be regarded as the ommatidium-array coordinate system ∑_XY of FIG. 13 with a depth coordinate axis Z added, so that a subject located in three-dimensional space (represented here by the spatial point R) can be indicated. The coordinate system of FIG. 13 so extended is referred to below as the compound-eye coordinate system ∑_XYZ, whose origin is set at the optical center of the lower-left ommatidium as in FIG. 12.
  • As an example, the array surface of the ommatidium array is a two-dimensional plane and the structure of every ommatidium is the same, so the position of an ommatidium can be represented directly by its place in the array; the plane of the ommatidium array can be set parallel to the plane of the pixel arrays, with the line connecting the optical center of each ommatidium to the center of its pixel array perpendicular to the plane of the pixel array. These settings are only for convenience of description and do not limit the compound-eye camera device of the present invention: in another embodiment, the pixel array of each ommatidium may have a certain curvature, and the line connecting the optical center of each ommatidium to the center of the corresponding pixel array need not be perpendicular to the plane of the ommatidium array.
  • Referring to FIG. 14, two ommatidia of the ommatidium array have both detected the image signal of the spatial point R, whose coordinates in the compound-eye coordinate system are (x, y, z).
  • Specifically, the optical center A of the first ommatidium (called ommatidium A for convenience of description) has coordinates (x_A, y_A, f) in ∑_XYZ. For ommatidium A, the photosensitive unit c that generates the image signal of the spatial point R has coordinates (i, j) in the pixel-array coordinate system ∑_IJ, and the beam intensity (or brightness) sensed by c is expressed as u_{i,j}(x, y). Arranging, in order of ommatidium position, the information of the photosensitive units of each ommatidium whose sight lines do not cross that of c at the same moment t composes one single-compound-eye image, called the first compound-eye image U_{i,j} at moment t.
  • Similarly, for the other ommatidium that detected the image signal of the spatial point R, its optical center B (the ommatidium is called ommatidium B for convenience of description) has coordinates (x_B, y_B, f) in ∑_XYZ, and for ommatidium B the photosensitive unit d that generates the image signal of the spatial point R has coordinates (m, n) in ∑_IJ. The sight lines of photosensitive units c and d cross within the field of view, and both detect the image information of the same spatial point R. Arranging, in order of ommatidium position, the information of the photosensitive units of each ommatidium whose sight lines do not cross that of d at the same moment t likewise composes a single-compound-eye image, called the second compound-eye image U_{m,n} at moment t.
  • When the four points c, A, B, and d lie in the same ommatidium-row view plane, the sight line cA and the sight line dB can intersect in space at the spatial point R.
  • The spatial point R generates an image signal through the photosensitive unit c of ommatidium A and is embodied as a pixel of the first compound-eye image U_{i,j}, denoted pixel c; it also generates an image signal through the photosensitive unit d of ommatidium B and is embodied as a pixel of the second compound-eye image U_{m,n}, denoted pixel d. Let the coordinates of pixel c in U_{i,j} be (x_c, y_c) and those of pixel d in U_{m,n} be (x_d, y_d).
  • Pixels c and d both reflect the information of the spatial point R, so they are a pair of corresponding pixels: they represent the mappings of the same spatial point R of real space into the two single-compound-eye images, and these two images can therefore compose a group of matched compound-eye images.
  • Pixels that correspond as mappings of the same spatial point (such as pixels c and d above) can be obtained from the single-compound-eye images by stereo matching algorithms disclosed in the art, for example the SIFT algorithm, the SURF algorithm, global stereo matching algorithms, or local stereo matching algorithms.
  • Taking a local stereo matching algorithm as an example — also called a window-based or support-region-based method — the algorithm computes, for each pixel of the reference image, a window of suitable size, shape, and weighting, and then takes a weighted average of the disparity values within this window. An ideal support window completely covers a weakly textured region with the depth continuous inside the window. As with global stereo matching algorithms, the optimal disparity is computed by optimizing a cost function. For the specific procedure, reference can be made to technology disclosed in this field, which is not repeated here; a minimal block-matching sketch is given below.
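  • As one concrete, deliberately simplified instance of the window-based local matching described above, the sketch below computes a disparity map by sum-of-absolute-differences block matching along the X axis. It omits the adaptive window size, shape, and weighting the text mentions, and all names are our own; weighted, adaptively shaped windows would replace the uniform patch here.
```python
import numpy as np

def block_match_disparity(ref, tgt, window=3, max_disp=8):
    """Disparity by SAD block matching along X: for each pixel of the
    reference single-compound-eye image, slide a window over the
    target image and keep the horizontal offset with the smallest
    sum of absolute differences."""
    h, w = ref.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=int)
    pad_ref = np.pad(ref, r, mode="edge")
    pad_tgt = np.pad(tgt, r, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad_ref[y:y + window, x:x + window]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):  # shift left only
                cand = pad_tgt[y:y + window, x - d:x - d + window]
                sad = float(np.abs(patch - cand).sum())
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```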
  • Referring to FIG. 14, the parallax produced at the pixel (x, y) of the first compound-eye image U_{i,j} (composed of the pixels at position (i, j) within each lens) relative to the second compound-eye image U_{m,n} (composed of the pixels at position (m, n) within each lens) is expressed as e_{(i,j)(m,n)}(x, y).
  • Specifically, taking the first compound-eye image U_{i,j} as the reference image: the distance between pixel d and pixel c in the X-axis direction is the parallax of pixel d in the X direction, expressed as e_{(i,j)(m,n)}(x); the distance between pixel d and pixel c in the Y-axis direction is the parallax of pixel d in the Y direction, expressed as e_{(i,j)(m,n)}(y); and the straight-line distance between pixel d and pixel c — that is, the parallax of pixel d — can be expressed as e_{(i,j)(m,n)}(x, y).
  • When the focal length of the microlenses is small enough, AB ≈ cd can be assumed, so e_{(i,j)(m,n)}(x, y) can also represent the distance between the optical centers A and B of the two ommatidia, that is, the length of AB. From the positions of the two ommatidia, the relationship between the X-direction parallax e_{(i,j)(m,n)}(x) of pixel d and the X-direction distance of the corresponding two ommatidia can be obtained, as can the relationship between the Y-direction parallax e_{(i,j)(m,n)}(y) and their Y-direction distance.
  • The above relationship between the parallax and the distance between ommatidium A and ommatidium B is only an example of the present invention; in another embodiment, the relationship between the parallax of the corresponding photosensitive units of the two ommatidia and the ommatidium distance can be obtained by a more precise calculation (the formula in the description below, with cd the distance between photosensitive units c and d, is one such result). In any case, by processing the first compound-eye image U_{i,j} and the second compound-eye image U_{m,n}, the parallax with which the two ommatidia detect the same spatial point R is obtained.
  • In this embodiment, the parallax of pixel d with respect to the reference image satisfies relation (1), the angle α between the line AR and the plane of the pixel array of ommatidium A satisfies relation (2), and the angle β between the line BR and the plane of the pixel array of ommatidium B satisfies relation (3), as set out with the formulas in the description below. From these, the spatial depth of R — that is, its coordinate z in the compound-eye coordinate system ∑_XYZ — is obtained as relation (4): z = e_{(i,j)(m,n)}(x, y)·tanα·tanβ / (tanα + tanβ). A small numeric sketch follows.
  • The coordinates (x, y, z) of the spatial point R are therefore related to the structure of the ommatidia and to the parallax: they can be obtained from the coordinates of the two ommatidia in the ommatidium-array coordinate system, the coordinates in the pixel-array coordinate system of the two photosensitive units that actually detected the image information of the spatial point, and the parallax of the corresponding pixels in the two images. If the subject is a combination of multiple spatial points, the stereo information of the subject can be obtained by obtaining the depth of each spatial point within the field of view.
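  • A small numeric sketch of relation (4) is given below. It follows one plausible reading of relations (2) and (3) — tan α = f/(h·√(i²+j²)) and tan β = f/(h·√(m²+n²)), which is an assumption on our part — so it is a sketch rather than a verified implementation, and it requires that neither matched pixel sit exactly at the center of its pixel array.
```python
import math

def depth_from_parallax(e_xy, i, j, m, n, f, h):
    """Spatial depth z per relation (4): e_xy is the parallax
    e_(i,j)(m,n)(x, y) of the matched pixels, (i, j) and (m, n) are
    their pixel-array coordinates in ommatidia A and B, f is the
    optical-center-to-photosensitive-surface distance and h the
    pixel pitch.  Raises ZeroDivisionError if a pixel sits at its
    array center (vertical sight line)."""
    tan_a = f / (h * math.hypot(i, j))
    tan_b = f / (h * math.hypot(m, n))
    return e_xy * tan_a * tan_b / (tan_a + tan_b)

print(depth_from_parallax(e_xy=3.0, i=1, j=0, m=2, n=0, f=0.2, h=0.1))
```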
  • For spatial crossing points determined by the non-crossing sight lines of sub-rows of ommatidia separated by a set spacing, as shown in FIG. 6 and FIG. 7, the set spacing between the two sub-rows must also be taken into account when calculating the parallax, and the coordinates of the spatial point are then also related to that set spacing.
  • The above embodiment performs the calculation with the ommatidium array and the pixel arrays taken as two-dimensional planes. It will be understood, however, that even when the spatial distribution of the ommatidia of the array is not planar (for example, the spherical surface shown in FIG. 11), as long as the relative position of each ommatidium is fixed — that is, the relative position data of each ommatidium are known or measurable — single-compound-eye images can still be obtained by processing the image signals acquired by the ommatidia, a matched compound-eye image can be obtained from two or more single-compound-eye images, and visual processing of the matched compound-eye image yields the parallax information of the corresponding pixels within the group of matched compound-eye images; the three-dimensional spatial information of the subject is then obtained from the structure information of the ommatidium array, the structure information of the ommatidium pixel arrays, the sight-line information, and the parallax information, so that stereo vision is realized.
  • If the spatial distribution of the ommatidia is not planar but has some arbitrary fixed shape and is fixed on a certain device (such as a head), the coordinates of each ommatidium in the coordinate system of that device are fixed.
  • This embodiment mainly introduces the process of obtaining the three-dimensional spatial information of a spatial point through two ommatidia in arbitrary poses.
  • FIG. 15 is a schematic diagram of distance measurement by a compound-eye camera device in an embodiment of the present invention. Referring to FIG. 15, two ommatidia P and Q in arbitrary poses can detect the light beam emitted by the spatial point R and thereby obtain the corresponding image signals.
  • Here, the optical-center coordinates of ommatidium P are set as (x_p, y_p, z_p), and the direction of the central optical axis of ommatidium P (the straight line passing through the optical center of the microlens and perpendicular to the incident surface) is expressed by the Euler angles (α_p, β_p, γ_p), so that (x_p, y_p, z_p, α_p, β_p, γ_p) can be used to represent the pose parameters of ommatidium P. Similarly, the optical-center coordinates of ommatidium Q can be set as (x_q, y_q, z_q) and the direction of the central optical axis of ommatidium Q expressed by the Euler angles (α_q, β_q, γ_q), so that (x_q, y_q, z_q, α_q, β_q, γ_q) can be used to represent the pose parameters of ommatidium Q.
  • From the detection signals of the two groups of photosensitive units of ommatidia P and Q whose sight lines do not cross, two single-compound-eye images can be generated, in which the pixel u_{i,j}(x_p, y_p) of the first compound-eye image U_{i,j} corresponds to the imaging on ommatidium P of the spatial point R of real space, and the pixel u_{m,n}(x_q, y_q) of the second compound-eye image U_{m,n} corresponds to the imaging of the spatial point R on ommatidium Q, so that the first compound-eye image U_{i,j} and the second compound-eye image U_{m,n} are a group of matched compound-eye images.
  • Accordingly, the coordinates (x, y, z) of R can be calculated from (x_p, y_p, z_p, α_p, β_p, γ_p), (x_q, y_q, z_q, α_q, β_q, γ_q), (i, j), and (m, n). The specific description is as follows.
  • In three-dimensional space, let the vector p represent the coordinates of the optical center of ommatidium P; p is related to the coordinates of ommatidium P and can be expressed as p = OP = (x_p, y_p, z_p), O being the origin of the coordinate system. The photosensitive unit of ommatidium P corresponding to the spatial point R has coordinates (i, j) in the pixel array (whose origin is denoted o) and coordinates (x_i, y_i, z_i) in three-dimensional space, so the ray direction obtained from the corresponding photosensitive unit (i, j) of ommatidium P is d_1 = OP − oi = (x_p − x_i, y_p − y_i, z_p − z_i), and the parametric equation of the ray PR can be expressed as p + k_1·d_1, where k_1 is a coefficient.
  • Similarly, the coordinates of the optical center of ommatidium Q are represented by the vector q, which can be expressed as q = OQ = (x_q, y_q, z_q); the photosensitive unit of ommatidium Q that receives the beam emitted by the spatial point has coordinates (m, n) in the pixel array and coordinates (x_m, y_m, z_m) in three-dimensional space, giving the ray direction d_2 = (x_q − x_m, y_q − y_m, z_q − z_m) from the corresponding photosensitive unit (m, n) of ommatidium Q. The parametric equation of the ray QR can be expressed as q + k_2·d_2, where k_2 is a coefficient; a sketch of solving these two ray equations is given below.
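  • The closest-point solution that the description derives next — solving (q + k·d_2 − p) × d_1 = 0 for k and replacing the formal vector division with a dot-product form to tolerate measurement error — can be sketched as follows; `triangulate` is our own name for it.
```python
import numpy as np

def triangulate(p, d1, q, d2):
    """Intersection of the rays p + k1*d1 and q + k2*d2 at the spatial
    point R, using k = ((p - q) x d1).(d2 x d1) / |d2 x d1|**2 and
    R = q + k*d2.  For rays made slightly skew by measurement error,
    this yields the point of the Q-ray that best satisfies the
    intersection condition."""
    p, d1, q, d2 = map(np.asarray, (p, d1, q, d2))
    c = np.cross(d2, d1)
    k = np.dot(np.cross(p - q, d1), c) / np.dot(c, c)
    return q + k * d2

# Rays from P=(0,0,0) and Q=(2,0,0) toward R=(1,1,1):
print(triangulate([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1]))  # [1. 1. 1.]
```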
  • In combination with the description of the compound-eye camera device in the preceding embodiments: through the signal processing unit, the image processing unit, and the position analysis unit, the processor can, after obtaining all the image signals generated by the ommatidia, form at least one single-compound-eye image, which provides two-dimensional vision of high clarity; and any two single-compound-eye images whose sight lines cross can serve as a matched compound-eye image from which the parallax information of the corresponding pixels is obtained, the three-dimensional spatial information of the subject then being calculated from structural information of the ommatidium array, such as the distance information of the ommatidia, the information about the sight lines of the photosensitive units, and the parallax information.
  • It can thus be seen that, through the ommatidium array and the processor of the above bionic structural design, the compound-eye camera device has a three-dimensional image acquisition function and can be used to detect the position of the subject and to obtain a three-dimensional image of the subject, which is conducive to obtaining accurate three-dimensional spatial information and achieving better stereo vision.
  • This embodiment mainly introduces a compound-eye system. The compound-eye system includes a plurality of the compound-eye camera devices arranged at intervals. From a bionic point of view, the function of this compound-eye system is similar to that of a group of compound eyes of a compound-eye animal (such as the two compound eyes of a dragonfly).
  • The plurality of compound-eye camera devices of the above compound-eye system can be set at different positions of a functional main body according to design requirements, so as to perform three-dimensional measurement from different directions; alternatively, the compound-eye camera devices can be distributed symmetrically with respect to a center point or a center line. From a bionic point of view, many compound-eye animals work through two compound eyes that are symmetrical about a center line (human eyes, although not compound eyes, are likewise symmetrical about a center line), so the compound-eye system of this embodiment can be applied to the vision-system design of intelligent robots; it is not, however, limited to this, and in other embodiments the compound-eye system can also be applied to radar systems, missile guidance devices, micro air vehicles, ship search-and-tracking systems, night-vision equipment, miniature compound-eye cameras, and so on.
  • Depending on the specific three-dimensional detection requirements, the two or more compound-eye camera devices used at the same time — in particular their ommatidium arrays — can be symmetrical left-right or up-down about a center line, symmetrical about a central part (or center point), or distributed asymmetrically.
  • In addition to the compound-eye camera devices, the above compound-eye system may also include a control device for controlling the pose of the ommatidium rows in the compound-eye camera devices; the pose of an ommatidium row here mainly refers to its shooting direction. The controller can be realized by the CPU of a computer.
  • The control device is connected to the processor of each compound-eye camera device. Besides the pose of the ommatidium rows, the control device can also control the process by which each processor obtains the two-dimensional and/or three-dimensional spatial information of the subject — for example, activating the processors of the corresponding part of the ommatidium array — and it can perform unified analysis and processing of the three-dimensional spatial information of the subject output by each processor, for example eliminating the measurement error between two compound-eye camera devices, so as finally to obtain more accurate three-dimensional spatial information of the subject.
  • From a bionic point of view, the function of the control device is similar to the brain of a compound-eye animal, acting as the overall commander of the operation of the compound-eye system.
  • Since each compound-eye camera device can acquire the two-dimensional and/or three-dimensional spatial information of subjects within its own field of view, the compound-eye system so constructed has a larger overall field of view, which is conducive to achieving a better stereo-vision function.

Abstract

The present invention relates to a compound-eye camera device and a compound-eye system. The compound-eye camera device comprises a plurality of ommatidia arranged in a row or column, each ommatidium comprising an optical element and corresponding photosensitive units. Each ommatidium row corresponds to at least one ommatidium-row view plane, which passes through the optical centers of the ommatidia of the row and near the center of at least one photosensitive unit of each ommatidium; each photosensitive unit intersects at least one ommatidium-row view plane, and the sight line of each photosensitive unit passes through the center of the photosensitive unit and the optical center of the ommatidium in which it is located. A processor is configured to generate images on the basis of the information received by the photosensitive units and to process the images so as to obtain information about the subject. The compound-eye system may comprise the above compound-eye camera device. With the above compound-eye camera device, two-dimensional planar or stereoscopic detection can be performed from different directions, which is conducive to acquiring accurate two-dimensional and three-dimensional spatial information.

Description

复眼摄像装置及复眼系统
相关申请的交叉引用
本申请主张2019年11月26日提交的申请号为201911173889.2的中国发明专利申请的优先权,其内容通过引用的方式并入本申请中。
技术领域
本发明涉及光学成像领域,尤其涉及一种复眼摄像装置和一种复眼系统。
背景技术
动物界中,复眼指的是一种由大量的单眼组成的视觉器官,主要在昆虫及甲壳类等节肢动物身上出现,以复眼类昆虫为例,其复眼中每个单眼均具有晶状体(类似于微透镜)以及对应于晶状体设置的感光细胞,这些感光细胞会将感光信息传递到昆虫的脑神经系统,从而有效地计算自身与所观察物体的方位和距离,即实现立体视觉,这有利于复眼类昆虫进行快速判断和反应。
人工复眼技术是受动物的独特复眼结构启发而发展起来的技术。同传统光学系统相比,人工复眼具有体积小、重量轻、视场大以及灵敏度高等优点,因而具有广泛的应用前景。关于人工复眼技术的研究涉及雷达系统、导弹的导引装置、微型飞行器、舰艇搜索与跟踪系统、夜视设备、微型复眼相机、机器人等领域。例如,人工复眼技术可以用在智能机器人的视觉系统中,系统通过对人工复眼探测器收集到的外界信息进行处理,可以实现对目标的识别、跟踪、测速等等。
但是,目前公开的人工复眼通常采用复数台相机或者相机阵列,利用不同相机获得对称图像,再通过算法进行立体匹配,以获得立体视觉。这与动物的复眼的原理不同,并不能算是真正意义上的复眼,不仅体积大,需要的计算成本也很大。
发明内容
本发明提供一种复眼摄像装置,采用了仿生结构设计,目的是获取准确的三维空间信息,实现更佳的立体视觉功能。本发明另外提供了一种包括所述复眼摄像装置的复眼系统。
一方面,本发明提供的一种复眼摄像装置包括小眼列和处理器,所述小眼列,所述小眼 列包括复数个光学上互不干涉且排成一列的小眼,每个所述小眼均包括光学元件和设置在所述光学元件的焦平面附近的至少一个感光单元,所述光学元件用来朝向被摄对象并接收视野内入射的光束,其中,每个所述小眼列至少对应于一个小眼列视平面,所述小眼列视平面通过所述小眼列中各小眼的光心以及每个小眼的至少一个感光单元的中心附近,每个所述感光单元至少与一个所述小眼列视平面相交,每个所述感光单元的视线通过所述感光单元的中心与所在小眼的光心;所述处理器被配置为基于所述小眼中的感光单元接收的信息生成图像,并对所述图像进行处理来获得有关被摄对象的信息。
可选的,所述小眼列中,每个小眼包括一个所述感光单元,且各个所述感光单元的视线在所述小眼列的视野内互不交叉。
可选的,所述小眼列中,每个小眼包括一个或两个以上的所述感光单元,且至少一个所述感光单元的视线在所述小眼列的视野内与其它所述感光单元的视线交叉。
可选的,所述复眼摄像装置包括两个以上的所述小眼列,不同所述小眼列的小眼列视平面之间形成一夹角,每个所述小眼属于一个或两个以上的所述小眼列。
可选的,多个所述小眼列中依次邻接构成小眼阵列,所述小眼阵列中的各个所述小眼的光学元件在蜂窝形曲面内排布或者在二维平面内排布。
可选的,所述处理器包括单复眼像成像单元,所述单复眼像成像单元被配置为在获得各个所述小眼中的所述感光单元的信息后,将视线互不交叉的全部或部分所述感光单元的信息进行处理并组成图像,以得到至少一幅单复眼像。
可选的,所述处理器还包括匹配复眼像单元、视差计算单元以及位置分析单元,所述匹配复眼像单元被配置为将属于同一所述小眼列视平面、且其中至少部分所述感光单元的视线相互交叉的两幅以上的所述单复眼像进行匹配,以得到一组匹配复眼像,所述匹配复眼像中的各幅所述单复眼像均包括根据视线交叉处的被摄对象的信息而形成于单复眼图像中的像素点,所述视差计算单元被配置为获取所述匹配复眼像中根据视线交叉处的被摄对象的信息而生成的像素点之间的视差信息,所述位置分析单元被配置为基于所述小眼列的信息以及所述视差信息获取位于视线交叉处的被摄对象的信息。
可选的,所述匹配复眼像中的单复眼像均根据同一时刻或同一时间段获得的各个所述小眼中的所述感光单元的信息得到。
可选的,所述位置分析单元还被配置为通过多个时刻或同一时间段的所述被摄对象的信息,获取所述被摄对象在视野内的运动信息。
可选的,所述复眼摄像装置还包括存储单元和显示单元,所述存储单元被配置为存储所 述单复眼像、所述匹配复眼像以及所述被摄对象的信息,所述显示单元被配置为输出所述单复眼像并显示,或者,所述显示单元被配置为基于所述位置分析单元获取的被摄对象的信息,将所述被摄对象的纹理颜色、三维位姿及形状中的至少一种输出并显示。
可选的,同一所述小眼列包括至少一个子小眼列,每个所述子小眼列包括复数个依次邻接的小眼,相邻两个所述子小眼列之间具有设定间距。
可选的,相邻两个所述子小眼列之间设置有与所述子小眼列不属于同一所述小眼列的小眼。
可选的,每个所述子小眼列中的小眼的光心的连线为直线段或者弧线段。
可选的,每个所述感光单元的视线具有与所述感光单元的感光面积关联的扩散角,所述扩散角小于或等于同一所述小眼列视平面中相邻的两个所述感光单元的视线的夹角。
可选的,当所述子小眼列的各个所述小眼的光心的连线为弧线段时,所述扩散角还小于或等于位于所述弧线段中的相邻两个所述小眼的轴线之间的夹角。
可选的,所述小眼列中,各个所述小眼中的所述光学元件均为微透镜,各个所述微透镜的直径相同或不全部相同,焦距相同或不全部相同。
可选的,所述小眼列中,各个所述小眼的微透镜垂直于所述小眼列视平面的截面为圆形、椭圆形或者多边形。
可选的,同一所述小眼列中,各个所述小眼中的所述感光单元的数量相同。
可选的,所述感光单元接收的信息包括对应视线上的入射光束的强度信息和色彩信息。
可选的,所述小眼列中的各个所述小眼集成在同一半导体衬底上,各个所述小眼之间通过介质隔离。
一方面,本发明还提供一种复眼系统,所述复眼系统包括以设定间距排布的复数个上述的复眼摄像装置。
可选的,复数个所述复眼摄像装置相对于一中心线对称。
可选的,所述复眼系统还包括控制装置,用以控制所述小眼列的位姿,所述控制装置与每个所述复眼摄像装置的处理器连接。
本发明提供的复眼摄像装置,包括至少一个小眼列以及处理器,所述小眼列包括复数个光学上互不干涉且排成一列的小眼,每个所述小眼均包括光学元件和设置在所述光学元件的焦平面附近的至少两个感光单元。所述复眼摄像装置与复眼类动物的复眼功能相似,具体而言,小眼列中的每个小眼可具有如动物复眼中的单眼的作用,每个所述小眼列至少对应于一个小眼列视平面,所述小眼列视平面通过所述小眼列中各小眼的光心以及每个小眼的至少一 个感光单元的中心附近,每个所述感光单元至少与一个所述小眼列视平面相交,每个所述感光单元的视线通过所述感光单元的中心与所在小眼的光心,在同一小眼列视平面的不同感光单元的视线可以交叉或不交叉,对应的,通过感光单元可以获得视野中的图像,清晰度较高,本发明提供的复眼摄像装置具有与动物复眼非常接近的仿生设计,有利于获取准确的三维空间信息,实现更佳的立体视觉。
本发明提供的复眼系统,包括以设定间距排布的复数个上述的复眼摄像装置,利用上述复眼摄像装置可以从不同方向进行二维平面或者立体探测,有利于获取准确的二维和三维空间信息,实现更佳的立体视觉,可以用在机器人视觉、飞行器等领域,具有广泛的应用前景。
附图说明
图1是本发明一实施例的复眼摄像装置的结构示意图。
图2(a)是本发明一实施例中小眼列的成像示意图。
图2(b)是图2(a)中的一个小眼的成像示意图。
图2(c)是图2(a)中的一个感光单元所对应的光心锥和像素面感光束的示意图。
图3是本发明一实施例中的小眼列的示意图。
图4是本发明一实施例中的小眼列的示意图。
图5是本发明一实施例中的小眼列的示意图。
图6是本发明一实施例中的小眼列的示意图。
图7是本发明一实施例中的小眼列的示意图。
图8是本发明一实施例的复眼摄像装置中小眼排列为六角形的示意图。
图9是本发明一实施例中复眼摄像装置的可视范围的示意图。
图10为本发明一实施例中复眼摄像装置的可视范围的示意图。
图11为本发明一实施例中复眼摄像装置的可视范围的示意图。
图12为本发明一实施例中小眼列的光心锥和像素面感光束的示意图。
图13是本发明一实施例中小眼阵列的平面示意图。
图14为本发明一实施例的复眼摄像装置的测距示意图。
图15是本发明一实施例中复眼摄像装置的测距示意图。
附图标记说明:
100、200、300、400   小眼阵列
10、20、30、40       小眼
21、31   感光单元
10a      焦平面
具体实施方式
目前关于人工复眼技术虽然已经有很多研究和成果,但是,像复眼类昆虫那样,直接通过复眼来获取立体视觉信息的人工复眼技术仍未出现,主要原因在于缺乏更接近动物复眼的高仿生复眼结构以及基于高仿生复眼结构实现立体视觉功能的计算机技术开发。显然,通过高仿生复眼结构来实现立体视觉功能,对人工复眼的发展极具开创意义,采用相关技术的复眼摄像装置有利于获取准确的三维空间信息,相较于通过普通复数台相机分别拍摄图像再进行处理的方法可以实现更佳的立体视觉功能。
基于上述目的,本发明提出了一种复眼摄像装置以及一种复眼系统。其中,本发明提出的复眼摄像装置包括小眼列和处理器,所述小眼阵列包括复数个光学上互不干涉且排成一列的小眼,每个所述小眼均包括光学元件和设置在所述光学元件的焦平面附近的至少一个感光单元,所述光学元件用来朝向被摄对象并接收视野内入射的光束,其中,每个所述小眼列至少对应于一个小眼列视平面,所述小眼列视平面通过所述小眼列中各小眼的光心以及每个小眼的至少一个感光单元的中心附近,每个所述感光单元至少与一个所述小眼列视平面相交,每个所述感光单元的视线通过所述感光单元的中心与所在小眼的光心;所述处理器被配置为基于所述小眼中的感光单元接收的信息生成图像,并对所述图像进行处理来获得有关被摄对象的信息。本发明提出的复眼系统包括所述复眼摄像装置。
本发明提出的复眼摄像装置从仿生意义上来看,类似于复眼类动物的复眼,而复眼系统类似于复眼类动物的视觉系统。在所述复眼摄像装置中,小眼列中的小眼可以根据需要设置足够的数量,此处“列”也可以用“行”代替。小眼列中每个小眼的光学元件用于接收光线,因而在小眼列的同一侧设置,用于在摄像时朝向被摄对象。沿感光单元的视线方向,入射光线通过光学元件并到达对应的感光单元而被感应,转换为图像信息。对于复眼摄像装置“看到”的被摄对象,其可以是空间中的任何物体或生物,被摄对象可以看作具有一定纹理颜色、三维位姿及形状的空间点的组合。在拍摄时,光入射到小眼列中的小眼,通过小眼中的感光单元,可以根据测到的光强、颜色等生成有关视线上的被摄对象的图像。复眼摄像装置具有与动物复眼非常接近的仿生设计,有利于获取准确的二维或三维空间信息,有助于实现较佳的立体视觉。
以下结合附图和多个具体的实施例对本发明的复眼摄像装置和复眼系统作进一步详细说明。根据下面的说明,本发明的优点和特征将更清楚。应当理解,各个实施例仅是制造和应用实施例的示例性的具体实施方式,并不构成在制造和应用本发明时的范围限制。并且,对多个实施例分别进行描述仅是为了更清晰地阐释本发明的内涵,但每个实施例中的技术特征并不属于该实施例所独有的特征,各个实施例的全部特征也可以作为一个总的实施例的特征。在某些实施方式下,下述多个实施例中的技术特征也可以相互关联、启发,以构成新的实施例。
实施例一
本实施例介绍本发明的复眼摄像装置的主要结构和功能。
图1是本发明一实施例的复眼摄像装置的结构示意图。参照图1,一实施例中,复眼摄像装置包括上述的小眼列和处理器。
具体的,所述复眼摄像装置可以包括一个或两个以上的所述小眼列,多个小眼列可以构成小眼阵列,不同所述小眼列的小眼列视平面之间形成一夹角,每个所述小眼属于一个或两个以上的所述小眼列。对于同一所述小眼列,每个小眼可以包括一个或两个以上(包括两个)的感光单元,各个感光单元的视线均在一个平面,因而可以形成为交叉或者不交叉的两条线。此处,是否交叉指的是在小眼列的正面即光学元件和感光单元的接收入射光的一侧视线是否相交于一个点,即在小眼列的视野内交叉。
所述处理器被配置为基于接收的所述小眼中感光单元的信息生成图像,并对所述图像进行处理来获得有关被摄对象的信息。进一步的,为了对感光单元的信息进行处理并获得有关被摄对象的信息,所述处理器可选地包括如下组件或模块:单复眼像成像单元、匹配复眼像单元、视差计算单元以及位置分析单元。具体功能说明如下。
所述单复眼像成像单元被配置为在获得所述小眼列中来自各个所述小眼的所述感光单元的信息后,将视线互不交叉的全部或部分所述感光单元的信息进行处理并组成图像,以得到至少一幅单复眼像。所述单复眼像由于仅根据分离的感光单元获取不同视线方向的被摄对象信息,虽然不具有立体视觉效果,但由于没有对焦问题,相对于普通的2D相机,不需要再设置专门的镜头对焦,因而可以实现清晰度较佳的平面视觉。
本实施例的复眼摄像装置中,小眼列是由多个小眼组成的结构,所以在各个感光单元的 视线发生交叉(此处指的是在小眼列的视野内交叉,不包括同一小眼的感光单元的视线在光心相交的情况)时,由于交叉处的被摄对象与感光单元的距离不同,对于视线交叉的两个感光单元,其所获得的交叉点的图像会发生视差,该视差可以通过对视线交叉的两个感光单元所在的两幅单复眼像进行处理获得,可选的计算视差的方法如块匹配(block matching)法、深度神经网络学习法以及特征匹配法等,此外可以采用关于双目视觉传感器的视差的公开计算方法。
本实施例中,利用视线交叉的感光单元的信息以获得交叉点的位置,从而使复眼摄像装置能够产生立体视觉。
具体的,所述处理器除了单复眼像成像单元外,还包括匹配复眼像单元、视差计算单元以及位置分析单元。所述匹配复眼像单元被配置为将属于同一所述小眼列视平面、且其中至少部分感光单元的视线相互交叉的所述单复眼像进行匹配,以得到一组匹配复眼像,所述匹配复眼像中的各幅所述单复眼像均包括对应于视线交叉处的被摄对象的信息而形成于图像中的像素点。所述视差计算单元被配置为获取所述匹配复眼像中对应于视线交叉处的被摄对象的信息而生成的像素点之间的视差信息。所述位置分析单元被配置为基于所述小眼列的信息以及所述视差信息获取位于视线交叉处的被摄对象的信息。
此外,参照图1,所述复眼摄像装置还可包括存储单元以及显示单元,所述存储单元被配置为存储所述单复眼像、所述匹配复眼像以及所述被摄对象的信息。所述存储单元可以通过随机存取存储器(RAM)、随机只读存储器(ROM)、硬盘、磁碟、光盘、中央处理单元(CPU)中的寄存器等介质进行存储。
所述显示单元被配置为输出所述单复眼像并显示,或者,基于所述位置分析单元获取的被摄对象的信息,将所述被摄对象的纹理颜色、三维位姿及形状中的至少一种输出并显示。所述显示单元可以包括一显示器,所述显示器可以是平面图像显示器或者三维图像显示器。
上述的处理器的各个组件(单复眼像成像单元、匹配复眼像单元、视差计算单元以及位置分析单元等)可以合并在一个模块中实现,或者其中的任意一个装置可以被拆分成多个模块,或者,这些装置中的一个或多个装置的至少部分功能可以与其它装置的至少部分功能相结合,并在一个模块中实现。根据本发明的实施例,单复眼像成像单元、匹配复眼像单元、视差计算单元以及位置分析单元中的至少一个可以至少被部分地实现为硬件电路,例如现场可编程门阵列(FPGA)、可编程逻辑阵列(PLA)、片上系统、基板上的系统、封装上的系统、 专用集成电路(ASIC),或可以以对电路进行集成或封装的任何其它的合理方式等硬件或固件来实现,或以软件、硬件以及固件三种实现方式的适当组合来实现。或者,信号处理单元、图像处理单元、位置分析单元、存储单元和输出单元中的至少一个可以至少被部分地实现为计算机程序模块,当该程序被计算机运行时,可以执行相应模块的功能。所述复眼摄像装置可以利用集成电路制作工艺制作,一实施例中,所述复眼摄像装置为芯片级装置。所述复眼摄像装置的各个小眼列中的小眼可以均集成在同一半导体衬底上,所述半导体衬底例如硅(Si)衬底、锗(Ge)衬底等半导体衬底,也可以是氧化铝等材料的陶瓷基底、石英或玻璃基底等。为避免相邻小眼之间的内部光线发生干扰,各个小眼之间通过介质隔离,所述介质优选为阻光材料。
根据上述说明可知,利用本发明实施例的复眼摄像装置,可以实现平面视觉(或2D视觉)及立体视觉(或3D视觉)。
实施例二
本实施例主要介绍本发明复眼摄像装置的小眼列。
图2(a)是本发明一实施例中小眼列的成像示意图。图2(b)是图2(a)中的一个小眼的成像示意图。图2(c)是图2(a)中的一个感光单元所对应的光心锥和像素面感光束的示意图。
参照图2(a)和图2(b),入射光束L1、L2及L3入射到小眼列100,并沿各小眼中感光单元的视线方向投射到每个小眼10中。平行光束经过小眼10的光心C(更具体的为小眼中光学元件的光心)入射后,在焦平面10a聚焦,即任意一个焦点对应于该点与光心C连线方向上的平行光束在焦平面10a的汇聚点。根据小眼10中光学元件的具体结构的选择不同,其焦平面10a可以是平面,也可以是具有一定曲率。如图2(a)中示意出了每个小眼10的焦平面10a上的三个点,这三个点分别与光心C连线而确定的直线为对应光入射方向上的光束的光轴。如图2(b)所示,图2(a)中单个小眼10的焦平面10a上的三个点对应于不同方向的入射光束。参照图2(c),可以通过在小眼10的光学元件的焦平面10a附近设置感光元件来获取入射光束的强度(或亮度、色彩)的信息,通过将感光元件中像素的感光面11a设置在对应光学元件的焦平面10a上,所述感光面11a与对应光学元件的光心C形成的椎体通过光心C的反方向延伸的椎体称为光心锥(图1(c)中以点横线表示),光心锥表示的是对 应感光面范围内的所有点通过光心的光轴形成的椎体。而感光面11a范围的感光元件接受的入射光束是对应的光心锥加粗到光学元件透光面积范围的台型光柱,该台型光柱可以称为与感光面对应的像素面感光束(图1(c)中以长短横线表示)。
本发明的复眼摄像装置利用了上述光学原理,一实施例中,复眼摄像装置包括复数个小眼,每个小眼至少与其它两个以上小眼排成一列(或行)并称之为小眼列。每个小眼均包括光学元件和设置在所述光学元件的焦平面附近的至少一个感光单元,所述光学元件用来朝向被摄对象并接收入射的光束。各个小眼在光学上互不干涉,因而进入各个小眼的光线仅可以在该小眼的焦平面上感光。
入射光束经小眼列中小眼10的光学元件入射后汇聚在焦平面上,小眼阵列100的各个小眼10包括光学元件以及设置在光学元件的焦平面一侧的感光元件,所述感光元件可以包括一个或两个以上的感光单元(或称为感光像素)。参照图2(a),可以在各个小眼的焦平面10a上的三个汇聚点分别设置三个感光单元,由感光单元的中心与光心C而确定的光轴为该感光单元的视线。在同一小眼中设置多个感光单元的情况下,不同方向的入射光束经所述光学元件汇聚后即可以被设置在焦平面附近的不同感光单元感应,因而,入射光束的方向与感光单元的位置相关。将同一小眼中的不同感光单元的位置对应于不同的光束入射方向设置,通过不同感光单元来检测从不同方向入射所述光学元件的光束,则可以根据感光单元检测的信号获得其视线方向上的光线信息。在某些情况下(如图2(a)),一感光单元的视线在小眼列100的视野内会与其它小眼中的部分感光单元的视线交叉,因而通过视线交叉的感光单元生成的图像具有视差,该视差可以通过计算获得。
每个所述小眼列至少对应于一个小眼列视平面(如图2(a)中平行于纸面的平面),所述小眼列视平面通过所述小眼列中各小眼的光心以及每个小眼的至少一个感光单元的中心附近,每个所述感光单元至少与一个所述小眼列视平面相交,该感光单元的视线在对应的所述小眼列视平面内。对于同一小眼内的不同感光单元,其可以属于同一小眼列视平面,也可以属于不同的小眼列视平面。属于同一小眼列视平面内的各个小眼的感光单元,其视线均在该小眼列视平面内。
所述复眼摄像装置可以包括一个或大于一个的上述小眼列,两个以上的小眼列可以按照一定排列构成小眼阵列。对于不同的小眼列,它们不具有共同的小眼列视平面,即各自的小眼列视平面之间会形成一夹角(此处指大于0的角度)。但对于小眼列中的小眼,根据通过其 光心和感光单元的小眼列视平面的分布,每个所述小眼可以属于一个或两个以上的所述小眼列。
对于小眼列中的各个小眼,通常可以设置为如图2所示的紧凑排列的形式,即其中复数个小眼依次邻接排列,但不限于此,对于满足上述条件的小眼列,其中各个小眼之间也可以设置有间隔。并且,在间隔位置,可以设置另一小眼列的小眼。此外,对于同一小眼列,也可以包括至少一个子小眼列,每个所述子小眼列包括复数个依次邻接的小眼,相邻所述子小眼列之间保持设定间距。对于同一子小眼列,各个小眼的光心之间的连线可以是直线段,也可以是弧线段,优选方案中,为了扩大视角,各个小眼的光心之间的连线为弧线段。
图3是本发明一实施例中的小眼列的示意图。参照图3,一实施例中,小眼列中的每个小眼均包括一个所述感光单元,且各个所述感光单元的视线在所述小眼列的视野内互不交叉。在获得各个小眼列的感光单元的信息后,可以根据各个感光单元的信息得到一幅单复眼像。该单复眼像为小眼列的视野内被摄对象的二维图像,即该结构的小眼列可以被用于二维视觉,例如可用于2D像机,虽然没有立体视觉效果,但是可以达到具有镜头的普通摄像机的效果,相对于普通2D像机,该结构因为有了小透镜,不需要专门的镜头,更为便利。而且,该种类的相机没有对焦问题,也就是说多远的物体都能够看清,只是随着距离增加,远处的物体分辨率会降低。另一实施例中,多个如图3所示的小眼列排列为球面,利用各个感光单元在同一时刻或同一时间段拍摄的图像信息,可以获得一张相应的单复眼像,因而也可以作为2D相机。
图4是本发明一实施例中的小眼列的示意图。图5是本发明一实施例中的小眼列的示意图。参照图4和图5,在一些实施例中,小眼列中的每个小眼均包括一个所述感光单元,但通过对小眼之间的相互位置以及感光单元的位置设计,相对于图3,图4和图5中各个所述感光单元的视线在所述小眼列的视野内是存在交叉的。因而,可以根据两个以上的单复眼像组成一组匹配复眼像,可以实现立体视觉效果。图4和图5中,小眼列中各个小眼的光心连线为直线段。随着离小眼列的距离的增加,小眼列在视野深度上的可测距精度基本保持不变。
图6是本发明一实施例中的小眼列的示意图。参照图6,一实施例中,小眼列包括两个子小眼列,每个所述子小眼列包括复数个依次邻接的小眼,并且每个子小眼列中,每个小眼仅包括一个感光单元,感光单元的视线彼此不交叉。相邻的两个所述子小眼列之间保持一定的间距。该实施例中,小眼列中的感光单元的视线至少部分是存在交叉的,因而由这两个子 小眼列或者更多的子小眼列构成的小眼列也可以用来实现立体视觉。
图7是本发明一实施例中的小眼列的示意图。参照图7,一实施例中,小眼列包括两个子小眼列,每个所述子小眼列包括复数个依次邻接的小眼,且每个小眼中均设置有不止一个感光单元,对于该小眼列,每个子小眼列中感光单元的视线在视野内存在交叉,并且,不同子小眼列中的感光单元的视线在视野内也存在交叉,相对于图6所示的小眼列,图7中由两个子小眼列的视线相交部分产生的立体视觉的精度更高,由每个子小眼列算出的立体视觉再通过两个子小眼列的组合重新测量,不仅具有单个子小眼列的低错误率,而且精度会大幅提高。
需要说明的是,上述图3至图7仅示出了小眼列在一个小眼列视平面上的排列,但各个实施例中在每个小眼列的位置也可以设置有不止一个小眼列,即各个小眼的光心可以排列为平面或者平面,而仍然可以实现上述的二维视觉及三维视觉。每个小眼列可以作为复眼摄像装置的一个复眼,或者,每个子小眼列可以作为复眼摄像装置的一个复眼,以实现二维和/或三维视觉。
此外,考虑到每个感光单元的尺寸,其可以检测的是由与感光单元的感光面积有关对应的像素面感光束的图像信号,实施例中,每个所述感光单元检测的入射光束具有与所述感光单元的感光面积关联的扩散角。为了使每个感光单元检测的入射光束与相邻感光单元检测的入射光束能够清晰分辨,提高复眼摄像装置的分辨率,所述扩散角优选小于或等于同一所述小眼列视平面中彼此相邻的两个所述感光单元的视线的夹角。在包括弧线段形状的子小眼列时,所述扩散角还小于或等于位于所述弧线段中的相邻两个所述小眼的轴线之间的夹角。这里小眼的轴线指的是垂直于小眼的光学元件的入射面且穿过光心的直线。
根据上述说明可知,本发明实施例的小眼列中小眼的排列以及感光单元的数量均可以根据需要设置,并且根据感光单元视线是否交叉的情况,使包括所述小眼列的复眼摄像装置具备二维平面及立体视觉。
实施例三
本实施例主要介绍本发明实施例的复眼摄像装置的具体结构。
本发明复眼摄像装置中的小眼列,每个小眼中的光学元件可以是微透镜,此处微透镜为凸透镜,其焦平面与被摄对象位于相反侧,以下描述的小眼即以微透镜为光学元件来描述。 但本发明不限于此,在另一实施例中,至少一个小眼的光学元件还可以是包括多个光学组件(诸如一个或多个透镜、滤光片和/或孔径)的复合成像物镜。从仿生意义上理解,小眼中光学元件的功能与复眼类动物的单眼中的晶状体的功能类似。上述小眼中与同一个光学元件对应的感光单元可以看作设置在光学元件的焦平面上的像素,与同一光学元件对应的至少两个感光单元构成的阵列可称为像素阵列。从仿生意义上理解,多个感光单元用于执行类似于动物复眼中每个单眼下的感光细胞的功能。
所述像素阵列中,每个感光单元可包括一个光电二极管和多个用作驱动电路的MOS晶体管,所述光电二极管可以将入射的光信号转换为电信号,为了获得感光单元的电信号,每个小眼还可包括电连接每个感光单元的外围电路,以便于将感光单元针对入射光束生成的图像信号传输给处理器。此处将感光单元设置在光学元件的焦平面附近,目的是使每个感光单元的感光面(例如光电二极管中的PN结层)位于光学元件的焦平面上以检测对应视线上的光线信息。感光单元形成的像素阵列的结构可以按照公开技术实施,只要能实现上述功能即可。结合实施例一可知,通过设置小眼列,可以通过每个小眼中的不同位置的感光单元检测不同方向的入射光束的强度信息。更佳的,通过对小眼中感光单元或者由感光单元组成的像素阵列的功能和结构进行优化,例如在光学元件和感光单元之间增加滤光层,通过每个小眼中的感光单元还可以对获取接收的入射光束的色彩信息,有利于更全面地获取被摄对象的信息。
可以采用类似普通相机或者显示器中的像素那样设置小眼的形状。例如,可将每个小眼的入射面形状设置为四边形并横竖排列,也可以像蜂窝那样将每个小眼的入射面形状设置为六角(六边)形等等,可选的,对于多个小眼列形成的小眼阵列,各个所述小眼的光学元件可以在蜂窝形曲面内排布或者在二维平面内排布。工业上四角形小眼的横竖排列比较容易生产和运算,而从扩大视野的角度或者更贴近仿生应用的角度,利用六角形的小眼进行蜂窝型排列更具优势。图8是本发明一实施例的复眼摄像装置中小眼排列为六角形的示意图,参照图8,一实施例中,各个小眼20的平面形状为三角形,且六个小眼排列为六角形结构。出于紧凑排列的目的,对于蜂窝形结构的小眼阵列,可以设置光学元件的透光面为六边形,对于四角形排列的小眼阵列,可以设置光学元件的透光面为四边形。图2至图7所示的实施例以小眼为圆柱形为例进行说明,其中示意的每个小眼的光学元件为圆形的微透镜。但本发明不限于此,在另一实施例中,小眼阵列中的各个小眼的光学组件的结构可以相同,也可以不同。 可实施的小眼列中光学组件可以不完全相同,具体可以根据复眼摄像装置的整体结构要求,选择同种结构或者两种以上的不同结构的光学元件进行排布形成小眼列以及小眼阵列,例如,一实施例中,小眼阵列为曲面结构,其中大部分区域采用的是第一结构的小眼,第一结构的小眼均包括同种形状和尺寸的微透镜的感光单元,而在一些特殊位置,采用的是第二结构的小眼,第二结构的小眼包括不同于第一结构的小眼的形状和尺寸的微透镜或其它种类的光学元件,并且,第二结构的小眼可以按照一定规律穿插分布在小眼阵列的范围内。同一小眼列中,对应于小眼的光学元件设置的感光元件的结构也可以相同或不同。例如,一实施例中,同一小眼列中的部分小眼采用的是第一类型的感光元件,第一类型的感光元件包括相同数量和排布方式(包括感光面大小、间距等)的感光单元,而另外一部分小眼采用的是第二类型的感光元件,第二类型感光元件的感光单元的数量和/或排布方式与第一类型的感光元件不同。此外,在以微透镜作为小眼的光学元件的示例中,各个微透镜的直径可以均相同,也可以不全部相同,而它们的焦距也可以均相同或不全部相同。各个小眼的结构可以根据需要设置。总的来说,微透镜的直径越大,则感光能力越强,但在感光单元的数量不变的情况下,微透镜的直径越小,感光元件对光入射方向的变化更加敏感,即视觉的分辨率越高。
小眼列中,各个小眼的排列方式有复数种,为了在不影响感光能力的条件下缩小复眼摄像装置的体积,提高视觉的分辨率,优选小眼列尤其是每个子小眼列中的各个小眼入射面彼此靠近。作为示例,每个子小眼列中,相邻两个小眼的微透镜邻接排布,在入射面所在面内,各个小眼的微透镜紧密排布。各个所述小眼的微透镜平行于入射面的截面可以为圆形、椭圆形或者多边形(如四边形、五边形、六边形、七边形、八边形)等。
对于实施例的复眼摄像装置,小眼阵列中对应每个小眼设置的感光单元的总数可以相同也可以不同,具体的,小眼阵列中各个小眼中的感光单元可以是对应的,即全部所述小眼中均设置有位置对应以检测同一方向视线的感光单元,对应于同一视线方向的感光单元,其视线未发生交叉,各自获取的信息可以用来生成一幅单复眼像。而对于在小眼视野内视线相交的感光单元,各自获取的信息分别体现在不同的单复眼像中,并且,由于均包含位于视线交叉处的被摄对象的信息,因此可以将属于同一所述小眼列视平面、且其中至少部分感光单元的视线相互交叉的所述单复眼像进行匹配,以得到一组匹配复眼像,所述匹配复眼像中的各幅所述单复眼像均包括一个对应于视线交叉处的被摄对象的信息而生成的像素点。因此,本实施例中,同一小眼里的感光单元的视线若在小眼的视场内与其它小眼的视线交叉而产生多 个交叉点,则每个交叉点均对应于至少两幅单复眼像组成的一组匹配复眼像,与同一小眼相关的一组匹配复眼像,涉及该小眼中的各个感光单元所形成的单复眼像。
实施例四
本实施例主要介绍有关本发明实施例的复眼摄像装置的视场角以及测距精度。
对于图3所示的无视线交叉的小眼列,其视野及视场角由两端小眼和感光单元的位置决定。而对于存在视线交叉的小眼列(如图2(a)、图4、图5、图6、图7),可以获得的单复眼像不止一幅,对应可以得到一组匹配复眼像,为了确定视线交叉处被摄对象的位置,需要根据匹配复眼像计算视差来获得,此处对于存在视线交叉的小眼列,通过获得小眼列最外侧小眼的可与其它小眼的视线相交的最外侧视线的位置和方向,即可以获得复眼摄像装置的立体可视范围。
图9是本发明一实施例中复眼摄像装置的可视范围的示意图。参照图9,作为本发明的一个示例,从小眼列200的小眼列视平面来看,每个小眼20均具有位于焦平面的两个感光单元21,这两个感光单元21的中心与对应小眼20的光心F的连线确定的两个视线方向为所属的小眼列200可获取的光束的两个入射方向,也即,对应于这两个视线方向所确定的光束才能被所有小眼20中的感光单元21获取,而小眼列200中全部小眼20的光轴的交叉点为通过这两个方向的光束可测量距离的空间点,可测量距离的空间点在小眼列200的法线方向的间距为该小眼列在空间深度方向上的测距精度D1。此处测距精度体现了同一感光单元21可分辨的相邻两个空间点之间在上述法线方向上的距离精度。图4、图5及图9所示的实施例中测距精度D1随着空间深度方向的变化基本保持不变。
根据小眼列200的可视范围,可以获得小眼列200的视场角(FOV)。由于小眼列200中感光单元21的中心与对应小眼的光心C的连线确定了可测量的视线方向,因而小眼列200的视场角与每个小眼20中的感光单元组成的像素阵列的宽度有关。本文中小眼列的视场角体现了立体可视范围相对于法线方向向外侧偏离的角度,也称为立体视场角。图9所示的实施例中,两条粗线限定的可视范围为复眼摄像装置的视场。由于立体可视范围相对于直线段的小眼列没有向外偏离,因而图9所示的小眼列的视场角小于0。
当小眼中感光单元的数量增加时,视场角会发生变化。图10为本发明一实施例中复眼摄像装置的可视范围的示意图。参照图10,作为示例,该小眼列300中,每个小眼30均具有 位于焦平面的四个感光单元31,这四个感光单元31的中心与对应小眼的光心F的连线确定的视线方向为该小眼列300可测量的四个视线方向,该实施例中,通过最外侧小眼的可与其它小眼的视线相交的最外侧视线确定的立体视场角大于0(如图10中两条粗线限定的范围),较图9所示的实施例中小眼列200的视场角增大。但是,图10中视场外围区域的测距精度D2'较视场靠中心区域的测距精度D2明显变小。因此,若需要在二维平面内均保持较佳的测距精度,直线段排布的小眼列的视野相对较窄,换言之,利用本发明实施例的小眼列制作复眼摄像装置时,如果在某个方向小眼列的长度不是足够长,为了避免可测距范围太窄,优选考虑设置将小眼设置为弧线段排布。进一步通过实施例说明如下。
图11为本发明一实施例中复眼摄像装置的可视范围的示意图。参照图11,作为示例,小眼列400的各个小眼40排布为弧线段(光心的连线为弧线)。每个小眼40均具有位于焦平面的两个感光单元,这两个感光单元与对应小眼40的光心的连线确定的光轴方向为小眼列400可测量的视线方向。图10另外示意了根据最外侧小眼的可与其它小眼视线相交的最外侧视线确定的小眼列400的立体可视范围(如图11中两条粗线限定的范围)。显然,弧线段结构的小眼列400的视场角较具有相同感光单元数量的图8所示的直线段结构的小眼列200的视场角增大。此外,如图11所示,对于弧线段结构的小眼列400,在法线方向上距离小眼列400由近及远的方向,小眼列的测距精度D3逐渐降低。
本实施例中,小眼列的测距精度越高,复眼摄像装置的分辨率则越高。并且,最大分辨率主要与相邻小眼之间的间隔和最外侧的感光单元的位置有关,因此可以根据需要调整小眼间隔和感光单元的位置来提高复眼摄像装置的分辨率。
本发明的小眼列中的复数个小眼的排布方式、间距可以根据针对视场角、测距精度等要求进行设置。优选方案中,可以通过增加小眼中感光单元的数量或者使各个小眼排列为弧线段来增大视场角,并且,从效果和工艺难度来看,相对于增加感光单元的数量,更佳地是采用弧线段(或球面)的小眼排列。视场角越大,复眼摄像装置可以“看”的越远,并且越远的被摄对象,图像越小。对于直线段结构的小眼列及二维平面的小眼阵列,其空间分辨率基本不会随着距离变化而变化,但是视角范围受限于小眼列的二维面积,体现为一个柱状空间。因此,在利用本发明实施例的小眼列制作复眼摄像装置时,优选采用如图11所示的弧线段(或球面)排列的小眼,一方面视野更广,另一方面越远的物体,图像就越小,具有更佳的仿生效果。
实施例五
本实施例主要介绍小眼列产生的视差。
根据前述实施例对复眼摄像装置的说明可知,利用本发明复眼摄像装置的小眼列,处理器可以分别通过单复眼像成像单元和匹配复眼像单元分别获得单复眼像和包括至少两幅单复眼像的一组匹配复眼像。所述匹配复眼像中的各幅单复眼像存在视差。
图12为本发明一实施例中小眼列的光心锥和像素面感光束的示意图。图12中的左半部分示意了光心锥的分布情况,右半部分示意了像素面感光束的分布情况。如图12所示的小眼列中,每个小眼具有在感光面内设置的两个感光单元,并且,每个小眼中处于一个对应位置的感光单元的视线在小眼列的视野内不交叉,每个小眼中处于另一个对应位置的感光单元的视线在小眼列的视野内也不交叉,因而根据各个小眼中对应的这两个感光单元的信号,可以获得两幅单复眼像。对于不是同一对应位置的感光单元,它们的视线在小眼列的视野内交叉,因而这两幅单复眼像中体现同一交叉点的像素点具有视差,构成一组匹配复眼像。
作为示例,以小眼A的图像作为基准图像,第一检测区域S1是小眼A内部的两个感光单元(或像素)可同时侦测到的区域,因而视差为0,第二检测区域S2是小眼A的左侧感光单元和小眼B的右侧感光单元可同时侦测到的区域,视差为1(此处示例地以小眼为单位,在另外的实施例中,也可以按照感光单元为单位),依据同样的方法,可以得到第三检测区域S3是小眼A的左侧感光单元和小眼C的右侧感光单元可同时侦测到的区域,对应的视差为2,第四检测区域S4是小眼A的左侧感光单元和小眼D的右侧感光单元可同时侦测到的区域,对应的视差为3,依次类推,可以得到不同检测区域的像素面感光束在入射各个小眼后产生的视差。可见,在获得的小眼列的结构数据如小眼的排布方式、小眼的间距、感光单元的大小以及间距等后,在对上述匹配复眼像进行计算后得到视差,可以进一步计算推导获得被摄对象或空间点在三维空间中的具体位置。
在获得连续多个时刻的被摄对象的三维空间信息后,进一步可以得到被摄对象在三维空间的运动信息。被摄对象的三维空间信息和运动信息可以通过立体坐标系表示并输出,另外还可以通过VR、AR、MR等三维立体技术演示。
实施例六
图13是本发明一实施例中小眼阵列的平面示意图。参照图13,一实施例中,多个小眼列按照横竖整齐排列在二维平面,每个小眼对应设置有横纵排列的若干感光单元。图13中,每个小眼的光心位置与对应的感光单元组成的像素阵列的中心位置相同。
每个小眼的光学元件例如是微透镜,微透镜阵列即小眼阵列,以X×Y表示,X和Y均为大于1的整数,如图13所示,一实施例中,X=8,Y=7;另外,每个小眼中的感光单元构成的像素阵列以I×J表示,I和J均为大于1的整数,一实施例中,I=3,J=3。即,图13所示的是一个8×7的小眼阵列,每个小眼包括3×3个像素。本实施例中,处理器获取空间点的位置(或深度)可包括如下过程。
首先,在获得全部所述小眼的感光单元生成的图像信号后,将来自各个小眼且视线互不交叉的感光单元的图像信号处理为一幅单复眼像,以获得至少两幅单复眼像。所述小眼的感光单元的图像信号优选是同一时刻或同一时间段获得的信号,通过对先后多个时刻或同一时间段的被摄对象的位置进行跟踪,可以捕捉动态的被摄对象,从而可以实现动态摄像能力。
对于上述X×Y的小眼阵列,可以获取I×J张X×Y大小的单复眼像。即图13的示例中可获得九幅单复眼像,每幅单复眼像中,共有8×7个像素点,可以认为每张单复眼像的大小为8×7,此处像素点指的是由感光单元生成的图像信号体现在对应的单复眼像中的图像。需要说明的是,图13所示的小眼的具体数量仅为了说明,本发明其它实施例中用于构造复眼摄像装置的小眼的数量和小眼中感光单元的数量可以是不同于图13所示的其它数,例如,在一实施例中,小眼的数量可以远大于每个小眼对应的像素阵列中的感光单元的数量。
参照图13，设定相邻两个微透镜的间距即相邻两个小眼之间的间距为l，每个小眼中相邻两个感光单元之间的间距为h，每个微透镜的光心到感光单元的感光面之间的距离为f（图14）。图13所示的小眼阵列中，小眼阵列所在平面的坐标系为小眼阵列坐标系∑_XY，其原点O设置在左下角的小眼的中央，根据在小眼阵列坐标系∑_XY的坐标可以确定每个小眼的位置，小眼阵列坐标系∑_XY中的单位为l；另外，每个小眼中的感光单元所在的坐标系为像素阵列坐标系∑_IJ，其原点O'设置在每个小眼的中心感光单元的中央，从而根据在像素阵列坐标系∑_IJ的坐标可以确定小眼中每个感光单元的位置，像素阵列坐标系∑_IJ中的单位为h。如图13所示的感光单元E对应的是小眼阵列坐标系∑_XY中坐标为(5,2)的微透镜在像素阵列坐标系∑_IJ的原点。
图14为本发明一实施例的复眼摄像装置的测距示意图。图14可以看作在图13中小眼阵列坐标系∑_XY的基础上增加了深度坐标轴Z，因而可以示意出位于立体空间的被摄对象（此处以空间点R代表）。以下将图13的坐标系称为复眼坐标系∑_XYZ，复眼坐标系∑_XYZ中的原点设置在如图12中左下角小眼的光心上。作为示例，小眼阵列的阵列面为二维平面，每个小眼的结构相同，可直接以每个小眼所在的阵列来表示对应小眼的位置，可以设定小眼阵列的平面和像素阵列的平面平行，且每个小眼的光心和小眼像素阵列的中心连线垂直于像素阵列的平面。需要说明的是，图14中的上述设定仅是为了方便说明，而不构成对本发明复眼摄像装置的限制。在本发明另一实施例中，每个小眼的像素阵列也可以具有一定曲率，每个小眼的光心和对应像素阵列中心的连线也可以不与小眼阵列的平面垂直。
参照图14，小眼阵列中的两个小眼均检测到了空间点R的图像信号。空间点R在复眼坐标系中的坐标为(x,y,z)。具体的，第一个小眼的光心A（为了描述方便，该小眼称为小眼A）在复眼坐标系∑_XYZ中的坐标为(x_A, y_A, f)，对于小眼A，生成空间点R的图像信号的感光单元c在像素阵列坐标系∑_IJ中的坐标为(i,j)，感光单元c的光束强度（或亮度）表示为u_{i,j}(x,y)，将同一时刻t每个小眼内与感光单元c的视线不交叉的感光单元的信息按照小眼的位置顺序排列组成一幅单复眼像，称之为时刻t的第一复眼图像U_{i,j}。同理，对于检测到空间点R图像信号的另一只小眼，其光心B（为了描述方便，该小眼称为小眼B）在复眼坐标系∑_XYZ中的坐标为(x_B, y_B, f)，对于小眼B，生成空间点R的图像信号的感光单元d在像素阵列坐标系∑_IJ中的坐标为(m,n)，感光单元c和感光单元d的视线在视野内交叉，它们均检测到了同一空间点R的图像信息。将同一时刻t每个小眼内与感光单元d的视线不交叉的感光单元的信息按照小眼的位置顺序排列也可以组成一幅单复眼像，称之为时刻t的第二复眼图像U_{m,n}。
参照图14，当c、A、B、d这四个点在同一个小眼列视平面上时，视线cA与视线dB在空间上可以交于空间点R。空间点R通过小眼A中的感光单元c生成图像信号，并体现为第一复眼图像U_{i,j}中的一个像素点，记为像素点c；空间点R还通过小眼B中的感光单元d生成图像信号，并体现为第二复眼图像U_{m,n}中的一个像素点，记为像素点d。设定像素点c在第一复眼图像U_{i,j}中的坐标为(x_c, y_c)，像素点d在第二复眼图像U_{m,n}中的坐标为(x_d, y_d)，像素点c和d均体现了空间点R的信息，因而它们为一组对应的像素点，像素点c与d代表的是真实空间上的同一空间点R在两幅单复眼图像中的映射，这两幅单复眼像可以组成一组匹配复眼像。
对于从单复眼像中获得属于同一空间点的映射而具有对应关系的像素点(如上述像素点 c和d)的方法,可以采用本领域公开的立体匹配算法例如sift算法、surf算法、全局立体匹配算法或局部立体匹配算法获得,以局部立体匹配算法为例,其又称为基于窗口的方法或基于支持区域的方法,局部立体匹配算法对参考图像中的每个像素计算一个合适大小、形状和权重的窗口,然后对这个窗口内的视差值进行加权平均。理想的支持窗口可以完全覆盖弱纹理区域,并在窗口内深度连续。与全局立体匹配算法相似,通过优化一个代价函数的方法计算最佳视差。具体过程可以参考本领域的公开技术,此处不再赘述。
以下对获得空间点R的空间深度信息(即Z方向的坐标)的过程进行说明。
参照图14，由各透镜内像素位置(i,j)组成的第一复眼图像U_{i,j}与各透镜内像素位置(m,n)组成的第二复眼图像U_{m,n}在第一复眼图像U_{i,j}中的像素点(x,y)产生的视差表示为e_{(i,j)(m,n)}(x,y)。具体的，以第一复眼图像U_{i,j}为基准图像，像素点d在X轴方向与像素点c的距离为像素点d在X方向的视差，在此表示为e_{(i,j)(m,n)}(x)；像素点d在Y轴方向与像素点c的距离为像素点d在Y方向的视差，在此表示为e_{(i,j)(m,n)}(y)；像素点d与像素点c的直线距离即像素点d的视差可表示为e_{(i,j)(m,n)}(x,y)。
在微透镜的焦距足够小时，可以认为AB≈cd，因此，e_{(i,j)(m,n)}(x,y)也可以代表两个小眼的光心A和B两点的距离，即AB的长度。根据两个小眼的位置，可以得到像素点d在X方向的视差e_{(i,j)(m,n)}(x)与对应的两个小眼在X方向的距离之间的关系，以及像素点d在Y方向的视差e_{(i,j)(m,n)}(y)与对应的两个小眼在Y方向的距离之间的关系。但上述视差及小眼A和小眼B之间的距离之间的关系仅是本发明的一个示例，在另一实施例中，也可以通过更加精确的运算过程获得两个小眼中对应感光单元的视差与小眼距离之间的关系，如另一实施例中，经过计算得到
$$\overline{AB}=\frac{z}{z+f}\,\overline{cd}$$
（cd为感光单元c和感光单元d之间的距离）。总之，可以通过对第一复眼图像U_{i,j}和第二复眼图像U_{m,n}的处理，得到两个小眼检测同一空间点R时的视差。
本实施例中,像素点d基于基准图像的视差满足如下关系式(1):
$$e_{(i,j)(m,n)}(x,y)=\sqrt{e_{(i,j)(m,n)}(x)^{2}+e_{(i,j)(m,n)}(y)^{2}}\qquad(1)$$
直线AR与小眼A的像素阵列所在平面之间的夹角α满足如下关系式(2):
$$\tan\alpha=\frac{f}{h\sqrt{i^{2}+j^{2}}}\qquad(2)$$
直线BR与小眼B的像素阵列所在平面的夹角β满足如下关系式(3):
$$\tan\beta=\frac{f}{h\sqrt{m^{2}+n^{2}}}\qquad(3)$$
基于上述关系，可得到R的空间深度即R在复眼坐标系∑_XYZ中的坐标z为：
$$z=e_{(i,j)}\tan\alpha=\left(e_{(i,j)(m,n)}(x,y)-e_{(i,j)}\right)\tan\beta=e_{(i,j)(m,n)}(x,y)\tan\beta-e_{(i,j)}\tan\beta$$
进而，得到
$$e_{(i,j)}=\frac{e_{(i,j)(m,n)}(x,y)\tan\beta}{\tan\alpha+\tan\beta}$$
所以z满足下述关系式(4)：
$$z=\frac{e_{(i,j)(m,n)}(x,y)\,\tan\alpha\,\tan\beta}{\tan\alpha+\tan\beta}\qquad(4)$$
因此x和y分别满足下述关系式(5)和(6)：
$$x=x_{A}+\frac{e_{(i,j)(m,n)}(x)\tan\beta}{\tan\alpha+\tan\beta}\qquad(5)$$
$$y=y_{A}+\frac{e_{(i,j)(m,n)}(y)\tan\beta}{\tan\alpha+\tan\beta}\qquad(6)$$
可见,空间点R的坐标(x,y,z)与小眼的结构和视差有关,可以通过两个小眼在小眼阵列坐标系中的坐标信息、对应的具体检测到空间点图像信息的两个感光单元在像素阵列坐标系中的坐标信息以及两幅图像上相应像素点的视差获得。如果被摄对象为多个空间点的组合,则可以通过获得每个空间点的在视野中的深度而得到被摄对象的立体信息。
对于如图6和图7所示的由相距设定距离的子小眼列中的非交叉视线确定的空间交叉点,在计算视差时需要将两个子小眼列之间的设定距离也考虑进去,空间点的坐标还与该设定距离有关。
上述实施例以小眼阵列及像素阵列为二维平面进行计算。但可以理解,若小眼阵列中各小眼在空间上的分布不是平面(例如是图11所示的球面),只要各小眼的相对位置固定,即各小眼的相对位置数据已知或可测,也可以通过对小眼获得的图像信号进行处理得到单复眼像,并可以通过两幅以上的单复眼像得到匹配复眼像,并通过对匹配复眼像进行视觉处理, 得到一组匹配复眼像中的对应像素点的视差信息,进一步通过小眼阵列的结构信息、小眼中像素阵列的结构信息、视线信息以及视差信息获取被摄对象的三维空间信息,从而实现立体视觉。
实施例七
如果小眼在空间上的分布不是平面,但是是任意固定形状并固定在某个装置(如头部)上时,每个小眼在该装置坐标系中的坐标是固定的。本实施例主要介绍通过任意位姿的两个小眼获得空间点的三维空间信息的过程。
图15是本发明一实施例中复眼摄像装置的测距示意图。参照图15，任意位姿的两个小眼P、Q可以检测空间点R发出的光束，从而获得相应的图像信号。这里设定小眼P的光心坐标为(x_p, y_p, z_p)，小眼P的中心光轴（穿过微透镜光心且垂直于入射面的直线）的方向以欧拉角(α_p, β_p, γ_p)表示，可用(x_p, y_p, z_p, α_p, β_p, γ_p)来表示小眼P的位姿参数。类似的，可以设定小眼Q的光心坐标为(x_q, y_q, z_q)，小眼Q的中心光轴的方向以欧拉角(α_q, β_q, γ_q)表示，可用(x_q, y_q, z_q, α_q, β_q, γ_q)来表示小眼Q的位姿参数。根据小眼P和小眼Q中视线不交叉的两组感光单元的检测信号可以生成两幅单复眼像，其中第一复眼图像U_{i,j}的u_{i,j}(x_p, y_p)像素点对应真实空间的空间点R在小眼P上的成像，第二复眼图像U_{m,n}的u_{m,n}(x_q, y_q)像素点对应空间点R在小眼Q上的成像，因而第一复眼图像U_{i,j}和第二复眼图像U_{m,n}为一组匹配复眼像。因而，R的坐标(x,y,z)可以根据(x_p, y_p, z_p, α_p, β_p, γ_p)、(x_q, y_q, z_q, α_q, β_q, γ_q)以及(i,j)、(m,n)计算得到。具体说明如下。
三维空间中，以向量p表示小眼P的光心的坐标，向量p与小眼P的坐标有关，可表示为p=OP=(x_p, y_p, z_p)，O为小眼坐标系的原点。小眼P中对应于空间点R的感光单元在像素阵列（原点记为o）中的坐标为(i,j)，对应于三维空间中的坐标为(x_i, y_i, z_i)，由小眼P中的对应感光单元(i,j)得出的光线方向d_1=OP−oi=(x_p−x_i, y_p−y_i, z_p−z_i)，则光线PR的参数方程可以表示为p+k_1·d_1，其中k_1为系数。
同理，小眼Q的光心的坐标可以用向量q表示，向量q与小眼Q的坐标有关，可表示为q=OQ=(x_q, y_q, z_q)，小眼Q中对应于空间点发出光束的感光单元在像素阵列（原点记为o）中的坐标为(m,n)，对应于三维空间中的坐标为(x_m, y_m, z_m)，由小眼Q中的对应感光单元(m,n)得出的光线方向d_2=(x_q−x_m, y_q−y_m, z_q−z_m)。光线QR的参数方程可以表示为q+k_2·d_2，其中k_2为系数。
由于光线PR和光线QR相交于空间点R，可以设定k_2=k时光线QR取到空间点R，则空间点R的矢量R=q+k·d_2。此时
$$(R-p)\times d_1=0$$
即(q+k·d_2−p)×d_1=0，因此，可以得到
$$k\,(d_2\times d_1)=(p-q)\times d_1$$
$$k=\frac{(p-q)\times d_1}{d_2\times d_1}$$
空间点R的坐标为
$$R=q+\frac{(p-q)\times d_1}{d_2\times d_1}\,d_2$$
这为理想情况，考虑到实际应用中可能存在误差，向量相除不便计算，可用
$$k=\frac{\left((p-q)\times d_1\right)\cdot\left(d_2\times d_1\right)}{\left|d_2\times d_1\right|^{2}}$$
代替。因此，空间点R的坐标为
$$R=q+\frac{\left((p-q)\times d_1\right)\cdot\left(d_2\times d_1\right)}{\left|d_2\times d_1\right|^{2}}\,d_2$$
结合前述实施例中对复眼摄像装置描述可知,具体而言,处理器通过信号处理单元、图像处理单元以及位置分析单元,可以在获得全部所述小眼生成的图像信号后,通过形成至少一幅单复眼像,可以视线清晰度较高的二维视觉,并且,对于存在视线交叉的任意两幅单复眼像,可以作为匹配复眼像,获取其中对应像素点的视差信息,并通过小眼阵列的结构信息如小眼的距离信息、有关感光单元的视线的信息以及所述视差信息等计算得到被摄对象的三维空间信息。可见,所述复眼摄像装置通过上述仿生结构设计的小眼阵列和处理器,具备了三维立体图像采集功能,可以用来对被摄对象的位置进行探测以及获得被摄对象的三维图像,有利于获取准确的三维空间信息,实现更佳的立体视觉。
实施例八
本实施例主要介绍一种复眼系统。所述复眼系统包括间隔排布的复数个所述复眼摄像装置。从仿生角度看,这里的复眼系统的功能与复眼类动物的一组复眼(如蜻蜓的两只复眼)功能类似。
上述复眼系统中的复数个复眼摄像装置可以根据设计需要分别设置在功能主体的不同位置,以从不同方向进行三维立体测量,或者,复数个复眼摄像装置还可以相对于一中心点或者中心线对称分布。从仿生角度看,许多复眼类动物通过两只沿中心线对称的复眼工作(虽然人的眼睛不是复眼,但也是沿中心线对称的),因而本实施例的复眼系统可以应用于智能机器人的视觉系统设计,但不限于此,在另外的实施例中,上述复眼系统还可以应用于雷达系统、导弹的导引装置、微型飞行器、舰艇搜索与跟踪系统、夜视设备、微型复眼相机等方面,根据具体的三维侦测需要,同时采用的两个以上的复眼摄像装置尤其是小眼阵列可以以一中心线为周线左右或上下对称,也可以是以一中心部位(或中心点)对称,或者也可以是非对 称状分布。
上述复眼系统除了复眼摄像装置外,还可包括控制装置,用于控制所述复眼摄像装置中的小眼列的位姿。此处小眼列的位姿主要指的是小眼列的拍摄方向。控制器可以通过计算机的CPU实现。控制装置与每个复眼摄像装置的处理器相连接,除了小眼列的位姿之外,控制装置还可以对每个处理器获得被摄对象的二维和/或三维空间信息的过程进行控制,例如控制对应部分小眼阵列的处理器进行工作等,控制装置还可以对每个处理器输出的被摄对象的三维空间信息进行统一分析和处理,例如消除两个复眼摄像装置之间的测量误差,并最终获得更准确的被摄对象的三维空间信息。从仿生角度来看,控制装置的功能类似于复眼类动物的大脑,对复眼系统的运作起到总指挥的作用。
由于每个复眼摄像装置均可以获取视野范围内的摄像对象的二维和/或三维空间信息,因而构造成的复眼系统的视野范围更大,有利于实现更佳的立体视觉功能。
上述描述仅是对本发明较佳实施例的描述,并非对本发明权利范围的任何限定,任何本领域技术人员在不脱离本发明的精神和范围内,都可以利用上述揭示的方法和技术内容对本发明技术方案做出可能的变动和修改,因此,凡是未脱离本发明技术方案的内容,依据本发明的技术实质对以上实施例所作的任何简单修改、等同变化及修饰,均属于本发明技术方案的保护范围。

Claims (23)

  1. 一种复眼摄像装置,其特征在于,包括:
    小眼列,所述小眼列包括复数个光学上互不干涉且排成一列的小眼,每个所述小眼均包括光学元件和设置在所述光学元件的焦平面附近的至少一个感光单元,所述光学元件用来朝向被摄对象并接收视野内入射的光束,其中,每个所述小眼列至少对应于一个小眼列视平面,所述小眼列视平面通过所述小眼列中各小眼的光心以及每个小眼的至少一个感光单元的中心附近,每个所述感光单元至少与一个所述小眼列视平面相交,每个所述感光单元的视线通过所述感光单元的中心与所在小眼的光心;以及,
    处理器,被配置为基于所述小眼中的感光单元接收的信息生成图像,并对所述图像进行处理来获得有关被摄对象的信息。
  2. 根据权利要求1所述的复眼摄像装置,其特征在于,所述小眼列中,每个小眼包括一个所述感光单元,且各个所述感光单元的视线在所述小眼列的视野内互不交叉。
  3. 根据权利要求1所述的复眼摄像装置,其特征在于,所述小眼列中,每个小眼包括一个或两个以上的所述感光单元,且至少一个所述感光单元的视线在所述小眼列的视野内与其它所述感光单元的视线交叉。
  4. 根据权利要求1所述的复眼摄像装置,其特征在于,所述复眼摄像装置包括两个以上的所述小眼列,不同所述小眼列的小眼列视平面之间形成一夹角,每个所述小眼属于一个或两个以上的所述小眼列。
  5. 根据权利要求4所述的复眼摄像装置,其特征在于,多个所述小眼列中依次邻接构成小眼阵列,所述小眼阵列中的各个所述小眼的光学元件在蜂窝形曲面内排布或者在二维平面内排布。
  6. 根据权利要求1所述的复眼摄像装置,其特征在于,所述处理器包括:
    单复眼像成像单元,被配置为在获得各个所述小眼中的所述感光单元的信息后,将视线互不交叉的全部或部分所述感光单元的信息进行处理并组成图像,以得到至少一幅单复眼像。
  7. 根据权利要求6所述的复眼摄像装置,其特征在于,所述处理器还包括:
    匹配复眼像单元,被配置为将属于同一所述小眼列视平面、且其中至少部分所述感光单元的视线相互交叉的两幅以上的所述单复眼像进行匹配,以得到一组匹配复眼像,所述匹配复眼像中的各幅所述单复眼像均包括根据视线交叉处的被摄对象的信息而形成于单复眼图像中的像素点;
    视差计算单元,被配置为获取所述匹配复眼像中根据视线交叉处的被摄对象的信息而生成的像素点之间的视差信息;以及,
    位置分析单元,被配置为基于所述小眼列的信息以及所述视差信息获取位于视线交叉处的被摄对象的信息。
  8. 根据权利要求7所述的复眼摄像装置,其特征在于,所述匹配复眼像中的单复眼像均根据同一时刻或同一时间段获得的各个所述小眼中的所述感光单元的信息得到。
  9. 根据权利要求8所述的复眼摄像装置,其特征在于,所述位置分析单元还被配置为通过先后多个时刻或同一时间段的所述被摄对象的信息,获取所述被摄对象在视野内的运动信息。
  10. 根据权利要求7所述的复眼摄像装置,其特征在于,所述复眼摄像装置还包括:
    存储单元,被配置为存储所述单复眼像、所述匹配复眼像以及所述被摄对象的信息;以及,
    显示单元,被配置为输出所述单复眼像并显示,或者,基于所述位置分析单元获取的被摄对象的信息,将所述被摄对象的纹理颜色、三维位姿及形状中的至少一种输出并显示。
  11. 根据权利要求1所述的复眼摄像装置,其特征在于,同一所述小眼列包括至少一个子小眼列,每个所述子小眼列包括复数个依次邻接的小眼,相邻两个所述子小眼列之间具有设定间距。
  12. 根据权利要求11所述的复眼摄像装置,其特征在于,相邻两个所述子小眼列之间设置有与所述子小眼列不属于同一所述小眼列的小眼。
  13. 根据权利要求11所述的复眼摄像装置,其特征在于,每个所述子小眼列中的小眼的光心的连线为直线段或者弧线段。
  14. 根据权利要求13所述的复眼摄像装置,其特征在于,每个所述感光单元的视线具有与所述感光单元的感光面积关联的扩散角,所述扩散角小于或等于同一所述小眼列视平面中相邻的两个所述感光单元的视线的夹角。
  15. 根据权利要求14所述的复眼摄像装置,其特征在于,当所述子小眼列的各个所述小眼的光心的连线为弧线段时,所述扩散角还小于或等于位于所述弧线段中的相邻两个所述小眼的轴线之间的夹角。
  16. 根据权利要求1所述的复眼摄像装置,其特征在于,所述小眼列中,各个所述小眼中的所述光学元件均为微透镜,各个所述微透镜的直径相同或不全部相同,焦距相同或不全部相同。
  17. 根据权利要求16所述的复眼摄像装置,其特征在于,所述小眼列中,各个所述小眼的微透镜垂直于所述小眼列视平面的截面为圆形、椭圆形或者多边形。
  18. 根据权利要求1所述的复眼摄像装置,其特征在于,同一所述小眼列中,各个所述小眼中的所述感光单元的数量相同。
  19. 根据权利要求1所述的复眼摄像装置,其特征在于,所述感光单元接收的信息包括对应视线上的入射光束的强度信息和色彩信息。
  20. 根据权利要求1所述的复眼摄像装置,其特征在于,所述小眼列中的各个所述小眼集成在同一半导体衬底上,各个所述小眼之间通过介质隔离。
  21. 一种复眼系统,其特征在于,包括以设定间距排布的复数个根据权利要求1所述的复眼摄像装置。
  22. 根据权利要求21所述的复眼系统,其特征在于,复数个所述复眼摄像装置相对于一中心线对称。
  23. 根据权利要求21所述的复眼系统,其特征在于,所述复眼系统还包括控制装置,用以控制所述复眼摄像装置中的小眼列的位姿,所述控制装置与每个所述复眼摄像装置的处理器连接。
PCT/CN2020/071734 2019-11-26 2020-01-13 复眼摄像装置及复眼系统 WO2021103297A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20893641.9A EP4068747A4 (en) 2019-11-26 2020-01-13 COMPOUND EYE CAMERA DEVICE AND COMPOUND EYE SYSTEM
JP2022528046A JP7393542B2 (ja) 2019-11-26 2020-01-13 複眼カメラ装置および複眼システム
US17/777,403 US20220407994A1 (en) 2019-11-26 2020-01-13 Compound eye camera device and compound eye system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911173889.2 2019-11-26
CN201911173889.2A CN112866512B (zh) 2019-11-26 2019-11-26 复眼摄像装置及复眼系统

Publications (1)

Publication Number Publication Date
WO2021103297A1 true WO2021103297A1 (zh) 2021-06-03

Family

ID=75985715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071734 WO2021103297A1 (zh) 2019-11-26 2020-01-13 复眼摄像装置及复眼系统

Country Status (5)

Country Link
US (1) US20220407994A1 (zh)
EP (1) EP4068747A4 (zh)
JP (1) JP7393542B2 (zh)
CN (1) CN112866512B (zh)
WO (1) WO2021103297A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113917579A (zh) * 2021-09-28 2022-01-11 屏丽科技成都有限责任公司 一种复眼透镜和液晶显示器背光模组
CN116698189B (zh) * 2023-06-06 2024-03-29 北京理工大学长三角研究院(嘉兴) 一种感算一体仿生复眼传感器及构建方法
CN116990963B (zh) * 2023-09-28 2023-12-26 安徽大学 一种复眼事件相机的设计方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086013A1 (en) * 2001-11-02 2003-05-08 Michiharu Aratani Compound eye image-taking system and apparatus with the same
CN1655013A (zh) * 2005-02-28 2005-08-17 北京理工大学 复眼立体视觉装置
CN101548154A (zh) * 2007-07-23 2009-09-30 松下电器产业株式会社 具有测距功能的复眼式摄像装置
CN101753849A (zh) * 2008-12-01 2010-06-23 厦门市罗普特科技有限公司 复眼全景摄像机
CN107613177A (zh) * 2017-10-16 2018-01-19 上海斐讯数据通信技术有限公司 基于复眼透镜的摄像头、成像方法及移动终端
CN107809610A (zh) * 2016-09-08 2018-03-16 松下知识产权经营株式会社 摄像头参数集算出装置、摄像头参数集算出方法以及程序
CN109934854A (zh) * 2019-03-28 2019-06-25 南京邮电大学 一种利用多目摄像头检测运动目标的装置及方法

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517019A (en) * 1995-03-07 1996-05-14 Lopez; Luis R. Optical compound eye sensor with ommatidium sensor and related methods
US20090314929A1 (en) * 2006-01-19 2009-12-24 The Regents Of The University Of California Biomimetic Microfabricated Compound Eyes
ATE538406T1 (de) * 2009-09-30 2012-01-15 Fraunhofer Ges Forschung Verfahren zur herstellung eines künstliches facettenauges
JP2011109630A (ja) 2009-11-20 2011-06-02 Advas Co Ltd カメラ装置用雲台
US8576489B2 (en) * 2010-08-02 2013-11-05 Spectral Imaging Laboratory Multihybrid artificial compound eye with varied ommatidia
JPWO2014156712A1 (ja) * 2013-03-26 2017-02-16 コニカミノルタ株式会社 複眼光学系及び撮像装置
WO2015133226A1 (ja) * 2014-03-05 2015-09-11 コニカミノルタ株式会社 複眼撮像光学系、レンズユニット、撮像装置及び携帯端末
WO2015182488A1 (ja) * 2014-05-26 2015-12-03 コニカミノルタ株式会社 複眼撮像光学系及び複眼撮像装置
US10911738B2 (en) * 2014-07-16 2021-02-02 Sony Corporation Compound-eye imaging device
CN104597599A (zh) * 2015-02-16 2015-05-06 杭州清渠科技有限公司 一种基于可调控微镜阵列的成像装置
KR101738883B1 (ko) * 2016-01-06 2017-05-23 한국과학기술원 초박형 디지털 카메라 및 그 제조 방법
JP2018045464A (ja) 2016-09-14 2018-03-22 株式会社東芝 画像処理装置、画像処理方法、およびプログラム
CN109360238B (zh) * 2018-09-25 2020-05-12 广东国地规划科技股份有限公司 一种基于双目测距的外业数据采集方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086013A1 (en) * 2001-11-02 2003-05-08 Michiharu Aratani Compound eye image-taking system and apparatus with the same
CN1655013A (zh) * 2005-02-28 2005-08-17 北京理工大学 复眼立体视觉装置
CN101548154A (zh) * 2007-07-23 2009-09-30 松下电器产业株式会社 具有测距功能的复眼式摄像装置
CN101753849A (zh) * 2008-12-01 2010-06-23 厦门市罗普特科技有限公司 复眼全景摄像机
CN107809610A (zh) * 2016-09-08 2018-03-16 松下知识产权经营株式会社 摄像头参数集算出装置、摄像头参数集算出方法以及程序
CN107613177A (zh) * 2017-10-16 2018-01-19 上海斐讯数据通信技术有限公司 基于复眼透镜的摄像头、成像方法及移动终端
CN109934854A (zh) * 2019-03-28 2019-06-25 南京邮电大学 一种利用多目摄像头检测运动目标的装置及方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4068747A4 *

Also Published As

Publication number Publication date
US20220407994A1 (en) 2022-12-22
CN112866512A (zh) 2021-05-28
CN112866512B (zh) 2022-03-22
EP4068747A1 (en) 2022-10-05
JP2023502942A (ja) 2023-01-26
JP7393542B2 (ja) 2023-12-06
EP4068747A4 (en) 2023-12-27

Similar Documents

Publication Publication Date Title
WO2021103297A1 (zh) 复眼摄像装置及复眼系统
US11948089B2 (en) Sparse image sensing and processing
KR20190013888A (ko) 이미지 픽셀, 이미지 획득 장치, 지문 획득 장치 및 디스플레이 장치
CN110044300A (zh) 基于激光器的两栖三维视觉探测装置及探测方法
CN108463767A (zh) 虚拟/增强现实系统中的光束角度传感器
Stürzl et al. Mimicking honeybee eyes with a 280 field of view catadioptric imaging system
CN102438111A (zh) 一种基于双阵列图像传感器的三维测量芯片及系统
KR102633636B1 (ko) 모바일 장치를 위한 플렌옵틱 카메라
CN109496316B (zh) 图像识别系统
CN108292431A (zh) 光场数据表示
US20160165214A1 (en) Image processing apparatus and mobile camera including the same
CN202406199U (zh) 一种基于双阵列图像传感器的三维测量芯片及系统
CN103033166B (zh) 一种基于合成孔径聚焦图像的目标测距方法
US20160123801A1 (en) Vector light sensor and array thereof
CN106066207A (zh) 一种平行光路组合式多源信息采集处理装置及方法
Neumann et al. Eyes from eyes: analysis of camera design using plenoptic video geometry
US11422264B2 (en) Optical remote sensing
CN111398898B (zh) 用于大视场三维运动探测的神经拟态仿生曲面复眼系统
CN112859484A (zh) 结构光成像装置的红外激光元件及结构光成像装置
CN117751302A (zh) 一种接收光学系统、激光雷达系统及终端设备
CN117751306A (zh) 一种激光雷达及终端设备
EP3145168A1 (en) An apparatus and a method for generating data representing a pixel beam
RU2573245C2 (ru) Способ бесконтактного управления с помощью поляризационного маркера и комплекс его реализующий
CN207516656U (zh) 一种用于不同视角成像的成像装置
KR102402432B1 (ko) 픽셀 빔을 표현하는 데이터를 생성하기 위한 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893641

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022528046

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020893641

Country of ref document: EP

Effective date: 20220627