WO2020024079A1 - Image recognition system - Google Patents

Image recognition system

Info

Publication number
WO2020024079A1
WO2020024079A1 (PCT/CN2018/097687)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
different
microlenses
image recognition
image information
Prior art date
Application number
PCT/CN2018/097687
Other languages
English (en)
Chinese (zh)
Inventor
王星泽
Original Assignee
合刃科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合刃科技(深圳)有限公司 filed Critical 合刃科技(深圳)有限公司
Priority to CN201880002314.1A priority Critical patent/CN109496316B/zh
Priority to PCT/CN2018/097687 priority patent/WO2020024079A1/fr
Publication of WO2020024079A1 publication Critical patent/WO2020024079A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • the present application relates to the technical field of optical image processing, and in particular, to an image recognition system.
  • the object recognition process can be divided into four stages: image acquisition, feature extraction, classification by a classifier, and output of the classification result.
  • image acquisition typically uses an ordinary imaging system to project three-dimensional scene information into a two-dimensional color picture, so the third dimension of the actual object is lost; that is, there is no depth or longitudinal information, and the computer ultimately obtains only the differences between features on the planar image.
  • the probe method uses a probe to directly locate points on the surface of an object. This method is inefficient and damages the object itself;
  • the binocular vision method uses the triangulation principle to calculate the distance to objects. It requires two cameras, is expensive, and is not suitable for objects with smooth, textureless surfaces;
  • the structured light method projects a specific light signal onto the surface of the object and calculates the object's position and depth from the deformation of that signal caused by the object. It is not suitable for long-distance use and cannot be used under strong ambient light, because the projected coded light would be overwhelmed;
  • the time-of-flight method emits light pulses from an emitter toward an object and determines the distance of the measured object by calculating the flight time of the pulses. This method has low depth accuracy, its recognition distance is limited by the intensity of the light source, and it consumes a lot of energy;
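For contrast with the module described below, the binocular and time-of-flight principles above each reduce to a one-line formula. The following sketch uses illustrative numbers that are not from the patent:

```python
# Hedged sketch: the depth formulas behind two of the methods above.
# All numeric values are illustrative, not from the patent.

C = 299_792_458.0  # speed of light, m/s

def binocular_depth(focal_px, baseline_m, disparity_px):
    """Triangulation (pinhole stereo model): depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def tof_depth(round_trip_s):
    """Time of flight: the pulse travels to the object and back, so halve it."""
    return C * round_trip_s / 2.0

print(binocular_depth(focal_px=800, baseline_m=0.1, disparity_px=16))  # 5.0 m
print(tof_depth(round_trip_s=20e-9))  # ~3.0 m
```

Both formulas make the listed drawbacks concrete: stereo depth needs a measurable disparity (impossible on textureless surfaces), and time-of-flight accuracy hinges on nanosecond-scale timing of a strong light pulse.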
  • the main purpose of this application is to provide an image recognition system that has a multi-view imaging module capable of collecting three-dimensional image data and achieves a high recognition rate.
  • the present invention provides an image recognition system including a multi-view imaging module, a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • the multi-view imaging module includes:
  • a microlens array disposed between the lens and the photosensitive element and located on a focal plane of an imaging side of the lens, the microlens array including a plurality of microlenses arranged in an array;
  • the light of the imaged object is projected through the lens onto the plurality of microlenses of the microlens array from different directions, refracted by the plurality of microlenses, and then incident on the photosensitive element.
  • multiple pieces of image information of the object to be recognized at different angles, obtained by imaging the object to be recognized through the multi-view imaging module, are fed into the target model to perform image recognition on the object to be recognized.
  • the structure of the plurality of microlenses in the microlens array is one selected from the following structures:
  • the lens shapes and sizes of the plurality of microlenses are the same, and the focal lengths are the same and fixed;
  • Lens shapes and sizes of the plurality of microlenses are different, and lens focal lengths of the plurality of microlenses are different and fixed;
  • the lens shapes and sizes of the plurality of microlenses are the same, and the focal length is adjustable;
  • the lens shapes and sizes of the plurality of micro lenses are different, and the focal length is adjustable.
  • the plurality of microlenses in the microlens array are distributed on a transparent structure in a uniform or non-uniform manner; the transparent structure is a convex transparent structure, a concave transparent structure, or a planar transparent structure.
  • microlens array is a one-time imaging microlens array or a multi-time imaging microlens array.
  • the multi-imaging microlens array includes at least two microlens arrays arranged in parallel.
  • the photosensitive element is a complementary metal oxide semiconductor image sensor or a charge-coupled device image sensor.
  • the photosensitive element is provided with a multi-pixel photosensitive array
  • the multi-pixel photosensitive array includes a plurality of photosensitive regions provided in one-to-one correspondence with a plurality of micro lenses on the micro lens array.
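The one-to-one mapping between microlenses and photosensitive regions is what makes multi-view readout possible. A minimal sketch of how such a raw sensor image could be unpacked into per-angle views, assuming (hypothetically) that each microlens covers an s x s block of pixels:

```python
import numpy as np

# Hedged sketch (names and dimensions are illustrative, not from the patent):
# a raw plenoptic sensor image where each microlens covers an s x s block of
# pixels. Picking the same (u, v) offset under every microlens yields one
# "sub-aperture" view, i.e. the scene seen from one slightly different angle.

def subaperture_views(raw, s):
    """raw: (H*s, W*s) sensor image, H x W microlenses, s x s pixels each.
    Returns views of shape (s, s, H, W): one H x W image per (u, v) angle."""
    hs, ws = raw.shape
    H, W = hs // s, ws // s
    blocks = raw.reshape(H, s, W, s)      # split rows/cols into microlens blocks
    return blocks.transpose(1, 3, 0, 2)   # reorder to (u, v, H, W)

raw = np.arange(6 * 6, dtype=float).reshape(6, 6)  # 3x3 microlenses, s = 2
views = subaperture_views(raw, s=2)
print(views.shape)  # (2, 2, 3, 3)
```

Here `views[u, v]` collects pixel (u, v) from under every microlens, which is exactly the "each photosensitive region sees light from a different source angle" property described above.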
  • the method further includes:
  • the step of obtaining the target model by training a convolutional neural network model, using each collected set of image information of the identified object at multiple different angles as sample training data, includes:
  • using the set of multiple different-angle images and depth information of the identified object collected each time as sample training data, and performing training based on a convolutional neural network model to obtain the target model.
  • the method further includes repeatedly collecting image information of the identified object at multiple different angles, obtained by the multi-view imaging module imaging the identified object from multiple different perspectives.
  • the step of repeatedly collecting image information of the identified object at multiple different angles, obtained by the multi-view imaging module imaging a single perspective of the identified object, includes:
  • repeatedly collecting multiple pieces of image information of the identified object at different angles, obtained by the multi-view imaging module imaging the identified object from multiple different perspectives.
  • a multi-view imaging module introduces a microlens array into a conventional imaging system and can obtain images and depth information of multiple weakly different perspectives of an object in a single exposure; it is simple in structure and widely applicable.
  • FIG. 1 is a schematic structural diagram of a multi-view imaging module according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a photosensitive element of the multi-view imaging module in FIG. 1;
  • FIG. 3 is a schematic structural diagram of an image recognition system according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an image recognition result according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another image recognition result in an embodiment of the present invention.
  • FIG. 6 is a flowchart of an image recognition method according to an embodiment of the present invention.
  • unless clearly defined otherwise, "fixed" may mean a fixed connection, a detachable connection, or integration into one piece; a connection may be mechanical or electrical, direct or indirect through an intermediate medium, and may be an internal connection between two elements or an interaction relationship between two elements.
  • FIG. 1 is a schematic structural diagram of a multi-view imaging module 100 in a first embodiment of the present invention.
  • the multi-view imaging module 100 includes a lens 10, a microlens array 20, and a photosensitive element 30.
  • the microlens array 20 is disposed between the lens 10 and the photosensitive element 30 and is located on a focal plane of the imaging side of the lens 10.
  • the micro lens array 20 includes a plurality of micro lenses 21 arranged in an array.
  • the light of the imaged object 101 is incident on the inside of the multi-view imaging module 100 through the lens 10, that is, the light of the imaged object 101 is projected through the lens 10 to the micro lens array 20 from different directions.
  • the light is refracted by the plurality of microlenses 21 and is incident on different photosensitive areas 31 of the photosensitive element 30, forming multiple pieces of image information of the imaged object at different angles. Since the source angle of the light collected by each photosensitive area 31 differs, images and depth information of multiple weakly different perspectives of the imaged object 101 can be recorded.
  • the multi-view imaging module 100 introduces a microlens array 20 into a conventional imaging system and can obtain images and depth information of multiple weakly different perspectives of an object in a single exposure, with a simple structure and a wide application range.
  • the photosensitive element 30 is provided with a multi-pixel photosensitive array 33.
  • the multi-pixel photosensitive array 33 includes a plurality of photosensitive areas 31 arranged in one-to-one correspondence with the plurality of microlenses on the microlens array 20.
  • the photosensitive element 30 may be a complementary metal oxide semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor.
  • the microlens array 20 is set on the focal plane of the imaging side of the lens 10, and a photosensitive element 30 (a CCD or CMOS sensor) is placed behind the microlens array 20. The light refracted by each microlens 21 covers a corresponding plurality of pixels on the photosensitive element 30, forming a large pixel unit; on the multi-pixel photosensitive array 33, each large pixel unit is provided with a photosensitive area 31 corresponding to one of the microlenses on the microlens array 20.
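The placement described above follows standard plenoptic-camera geometry; the thin-lens relation below is textbook optics, not a formula stated in the patent:

```latex
% Standard thin-lens relation (textbook optics, not from the patent):
% an object at distance z_o from a lens of focal length f images at z_i.
\[ \frac{1}{f} = \frac{1}{z_o} + \frac{1}{z_i} \]
% With the microlens array on the imaging-side focal plane of the main lens,
% rays entering through different parts of the main lens arrive at a given
% microlens from different directions, and each microlens fans them out over
% its pixel block. Pixel position under a microlens therefore encodes ray
% direction, which is the parallax that yields the "weakly different
% perspectives" described in the text.
```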
  • the light emitted by the imaged object 101 enters from the lens 10 of the multi-view imaging module 100, and then is projected from different directions to the microlens array 20.
  • each microlens 21 of the microlens array 20 refracts the light and projects it onto the photosensitive element behind it.
  • a complete image can be collected by the photosensitive area 31 corresponding to each large pixel unit; and because the angle of the light collected by each such photosensitive area 31 differs, images and depth information of multiple weakly different perspectives of the imaged object 101 can be recorded.
  • the structures of the plurality of microlenses 21 in the microlens array 20 may be microlenses with the same lens shape and size and the same, fixed focal length.
  • each of the microlenses 21 may be a biconvex lens (convex on both sides) or a plano-convex lens (convex on one side);
  • the outline of the microlenses 21 may be circular, square, hexagonal, octagonal, etc.
  • the structures of the plurality of microlenses 21 in the microlens array 20 may be different in lens shape and size, and the lens focal lengths of the plurality of microlenses are different and fixed.
  • the microlens array may be composed of a plurality of biconvex lenses of different sizes and focal lengths, or the microlenses 21 may be a plurality of plano-convex lenses of different sizes and focal lengths with their convex surfaces on different sides.
  • the structures of the plurality of microlenses 21 in the microlens array 20 may be microlenses with the same lens shape and size and adjustable focal lengths.
  • the microlens array may be composed of multiple microlenses 21 with the same shape and size but adjustable focal lengths; the focal-length adjustment may be implemented electrically, optically, thermally, etc. A technician can configure it as needed, and details are not repeated here.
  • the structures of the plurality of microlenses 21 in the microlens array 20 may be microlenses with different lens shapes and sizes and adjustable focal lengths.
  • the microlens array may be composed of multiple microlenses 21 with different shapes and sizes and adjustable focal lengths; the focal-length adjustment may likewise be implemented electrically, optically, thermally, etc.
  • the plurality of microlenses 21 in the microlens array 20 may be distributed on a transparent structure in a uniform or non-uniform manner; the transparent structure may be a convex transparent structure, a concave transparent structure, or a planar transparent structure.
  • the micro lens array 20 may be a plurality of convex lenses 211 uniformly distributed on a flat transparent structure, and the plurality of convex lenses 211 are connected to form a whole micro lens array 20 through a flat transparent structure;
  • the micro lens array 20 may be a plurality of convex lenses 211 unevenly distributed on a planar transparent structure, and the plurality of convex lenses 211 are connected to form a whole micro lens array 20 through a planar transparent structure;
  • the micro lens array 20 may also be a plurality of convex lenses 211 uniformly distributed on a convex transparent structure.
  • the plurality of convex lenses 211 are connected to form a whole micro lens array 20 through the convex transparent structure.
  • the microlens array 20 may be a one-time imaging microlens array or a multi-time imaging microlens array.
  • the multi-imaging microlens array 20 may include multiple layers of microlenses, for example, at least two microlens arrays arranged in parallel. The light reaching the microlens array 20 is refracted by the multiple layers of microlenses and then emitted onto the photosensitive element 30 for imaging.
  • the present invention further provides an image recognition system 200 including the multi-view imaging module 100, a memory 201, a processor 202, and a computer program stored in the memory 201 and executable on the processor 202.
  • the image recognition system 200 uses the multi-view imaging module 100 to repeatedly acquire image information of the identified object at multiple different angles, obtained by repeatedly imaging a single perspective of the identified object; the set of image information at different angles collected each time is then used as sample training data, and training is performed based on a convolutional neural network model to obtain a target model; finally, image information of the object to be recognized at different angles, acquired by imaging it through the multi-view imaging module, is fed into the target model to perform image recognition on the object to be recognized.
  • multiple pieces of image information of the identified object at different angles, obtained by imaging the identified object from multiple different perspectives with the multi-view imaging module 100, may also be repeatedly collected; it can be understood that, after image information obtained by imaging at multiple different perspectives has been collected as training samples, training can be performed so that each angle of the object to be recognized can be accurately identified.
  • a microlens array 20 is introduced into a conventional imaging system, and images and depth information of multiple weakly different perspectives of an object can be obtained in a single exposure; combined with image recognition technology based on a neural network algorithm model, recognition of the identified object at a later stage uses multiple weakly different images of the same object, greatly improving the accuracy of object recognition.
  • a plurality of different angle images and depth information of the identified object can be obtained through a light reconstruction algorithm;
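The patent does not spell out its "light reconstruction algorithm"; one common light-field technique that recovers differently focused images (and hence a depth cue) from the sub-aperture views is shift-and-sum refocusing. The sketch below is entirely an assumption of that technique, not the patent's method:

```python
import numpy as np

# Shift-and-sum refocusing (a standard light-field technique, assumed here):
# shift each sub-aperture view in proportion to its angular offset, then
# average. The shift amount at which a region appears sharpest is a classic
# depth cue. Integer-pixel shifts via np.roll keep the sketch minimal.

def refocus(views, alpha):
    """views: (U, V, H, W) sub-aperture images; alpha: shift per unit angle, px."""
    U, V, H, W = views.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            acc += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return acc / (U * V)

views = np.random.rand(3, 3, 8, 8)   # illustrative 3x3 angular grid
img = refocus(views, alpha=1.0)
print(img.shape)  # (8, 8)
```

With `alpha=0` this degenerates to a plain average of all views; sweeping `alpha` produces a focal stack from which per-region depth can be estimated.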
  • the set of multiple different angle images and depth information of the identified object is used as sample training data, and training is performed based on the convolutional neural network model to obtain the target model.
  • with the image recognition system 200 in this embodiment, images and depth information of the identified object at multiple different angles can be obtained; by contrast, a traditional imaging system can obtain only two-dimensional image information of the object, losing the third-dimensional information.
  • the image recognition system 200 in this embodiment greatly improves the accuracy of object recognition.
  • when the focal length of the microlenses is adjustable, the focal lengths of multiple microlenses on the microlens array can also be varied; at different focal lengths, image information of the identified object at multiple different angles, obtained by the multi-view imaging module imaging different perspectives of the identified object, is repeatedly acquired. In this way, multi-point refocusing can be achieved over a large depth range, so that each object within that range can be accurately identified.
  • the present invention further provides an image recognition method 300 including steps:
  • step S10: repeatedly collecting image information of the identified object at multiple different angles, obtained by the multi-view imaging module imaging a single perspective of the identified object;
  • step S20: using the set of image information of the identified object at different angles collected each time as sample training data, and performing training based on a convolutional neural network model to obtain a target model;
  • step S30: feeding image information of the object to be recognized at multiple different angles, obtained by imaging it through the multi-view imaging module, into the target model to perform image recognition on the object to be recognized.
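Steps S10-S30 can be sketched as a pipeline. Every function below is a hypothetical stand-in: the patent specifies a convolutional neural network for S20, which is replaced here by a toy per-class statistic purely to show the data flow between the steps:

```python
import numpy as np

def capture_views(obj_id, n_views=9):
    """S10 stand-in: one multi-view exposure -> n_views images (random here)."""
    return np.random.rand(n_views, 8, 8)

def train(samples):
    """S20 stand-in: a real system would train a CNN on (views, label) pairs;
    here we just store the mean intensity per class as a toy 'model'."""
    means = {}
    for views, label in samples:
        means.setdefault(label, []).append(views.mean())
    return {k: float(np.mean(v)) for k, v in means.items()}

def recognize(model, views):
    """S30 stand-in: pick the class whose toy statistic is nearest."""
    m = views.mean()
    return min(model, key=lambda k: abs(model[k] - m))

# Repeated collection (S10), training (S20), recognition (S30):
samples = [(capture_views(i), i % 2) for i in range(10)]
model = train(samples)
print(recognize(model, capture_views(0)) in (0, 1))  # True
```

The point of the sketch is the shape of the data flow: many repeated multi-view captures per object feed training, and a single multi-view capture suffices at recognition time.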
  • images and depth information of multiple weakly different perspectives of an object can be obtained in a single exposure; combined with image recognition technology based on a neural network algorithm model, recognition of the recognized object at a later stage uses multiple weakly different images of the same object, greatly improving the accuracy of object recognition.
  • the method further includes: obtaining, through a light reconstruction algorithm, multiple different-angle images and depth information of the identified object, based on the image information of the identified object at multiple different angles obtained through the multiple rounds of imaging;
  • step S20 may specifically include: using the collection of multiple different-angle images and depth information of the identified object obtained each time as sample training data, and performing training based on a convolutional neural network model to obtain the target model.
  • the image recognition method 300 may further include: repeatedly collecting image information of multiple identified objects at different angles obtained by imaging the multi-view imaging module for multiple different perspectives of the identified object.
  • correspondingly, during the training of step S20 based on a convolutional neural network model, the input is the feature data of each angle of the identified object.
  • the step S10 may include:
  • multiple pieces of image information of the identified object at different angles obtained by the multi-view imaging module imaging at multiple different perspectives of the identified object are repeatedly collected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Vascular Medicine (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image recognition system comprising a multi-view imaging module having a microlens array. Light from an imaged object is refracted by a plurality of microlenses respectively and is then incident on different photosensitive regions of the photosensitive element. Imaging can be performed once to obtain images and depth information of a plurality of weakly different viewing angles of the object; image information of a recognized object at a plurality of different angles, obtained by single-viewing-angle imaging of the recognized object by the multi-view imaging module, is acquired repeatedly; the set of image information of the recognized object at the plurality of different angles acquired each time is used as sample training data, and training is performed on the basis of a convolutional neural network model to obtain a target model; an object to be recognized is thereby subjected to image recognition. Images of the same object at a plurality of weakly different viewing angles are used for recognition, which considerably improves the accuracy of object recognition.
PCT/CN2018/097687 2018-07-28 2018-07-28 Image recognition system WO2020024079A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880002314.1A CN109496316B (zh) 2018-07-28 2018-07-28 Image recognition system
PCT/CN2018/097687 WO2020024079A1 (fr) 2018-07-28 2018-07-28 Image recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097687 WO2020024079A1 (fr) 2018-07-28 2018-07-28 Image recognition system

Publications (1)

Publication Number Publication Date
WO2020024079A1 true WO2020024079A1 (fr) 2020-02-06

Family

ID=65713867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097687 WO2020024079A1 (fr) 2018-07-28 2018-07-28 Image recognition system

Country Status (2)

Country Link
CN (1) CN109496316B (fr)
WO (1) WO2020024079A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600994A (zh) * 2020-12-02 2021-04-02 达闼机器人有限公司 Object detection apparatus, method, storage medium, and electronic device
CN114200498A (zh) * 2022-02-16 2022-03-18 湖南天巡北斗产业安全技术研究院有限公司 Satellite navigation/optical combined target detection method and system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260299B (zh) * 2020-02-18 2023-07-18 中国联合网络通信集团有限公司 Goods inventory and management method, apparatus, electronic device, and storage medium
US11543654B2 (en) * 2020-09-16 2023-01-03 Aac Optics Solutions Pte. Ltd. Lens module and system for producing image having lens module
CN112329567A (zh) * 2020-10-27 2021-02-05 武汉光庭信息技术股份有限公司 Method, system, server, and medium for target detection in autonomous driving scenarios

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610353A (zh) * 2008-01-23 2009-12-23 奥多比公司 Method and apparatus for full-resolution light field capture and rendering
CN106846463A (zh) * 2017-01-13 2017-06-13 清华大学 Microscopic image three-dimensional reconstruction method and system based on a deep learning neural network
CN107993260A (zh) * 2017-12-14 2018-05-04 浙江工商大学 Light field image depth estimation method based on a hybrid convolutional neural network
CN108154066A (zh) * 2016-12-02 2018-06-12 中国科学院沈阳自动化研究所 Three-dimensional target recognition method based on a curvature-feature recurrent neural network
CN108175535A (zh) * 2017-12-21 2018-06-19 北京理工大学 Dental three-dimensional scanner based on a microlens array

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118044B (zh) * 2015-06-16 2017-11-07 华南理工大学 Automatic defect detection method for wheel-shaped cast products
US20170085759A1 (en) * 2015-09-17 2017-03-23 Dan Valdhorn Method and apparatus for privacy preserving optical monitoring
US10115032B2 (en) * 2015-11-04 2018-10-30 Nec Corporation Universal correspondence network
CN106840398B (zh) * 2017-01-12 2018-02-02 南京大学 Multispectral light field imaging method
CN107302695A (zh) * 2017-05-31 2017-10-27 天津大学 Electronic compound-eye system based on a bionic vision mechanism

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610353A (zh) * 2008-01-23 2009-12-23 奥多比公司 Method and apparatus for full-resolution light field capture and rendering
CN108154066A (zh) * 2016-12-02 2018-06-12 中国科学院沈阳自动化研究所 Three-dimensional target recognition method based on a curvature-feature recurrent neural network
CN106846463A (zh) * 2017-01-13 2017-06-13 清华大学 Microscopic image three-dimensional reconstruction method and system based on a deep learning neural network
CN107993260A (zh) * 2017-12-14 2018-05-04 浙江工商大学 Light field image depth estimation method based on a hybrid convolutional neural network
CN108175535A (zh) * 2017-12-21 2018-06-19 北京理工大学 Dental three-dimensional scanner based on a microlens array

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600994A (zh) * 2020-12-02 2021-04-02 达闼机器人有限公司 Object detection apparatus, method, storage medium, and electronic device
CN114200498A (zh) * 2022-02-16 2022-03-18 湖南天巡北斗产业安全技术研究院有限公司 Satellite navigation/optical combined target detection method and system

Also Published As

Publication number Publication date
CN109496316A (zh) 2019-03-19
CN109496316B (zh) 2022-04-01

Similar Documents

Publication Publication Date Title
WO2020024079A1 (fr) Image recognition system
CN110036410B (zh) Device and method for obtaining distance information from views
CN110462686B (zh) Device and method for obtaining depth information from a scene
WO2018049949A1 (fr) Distance estimation method based on a handheld light field camera
US10715711B2 (en) Adaptive three-dimensional imaging system and methods and uses thereof
US10021340B2 (en) Method and an apparatus for generating data representative of a light field
CN103793911A (zh) Scene depth acquisition method based on integral imaging technology
CN109883391B (zh) Monocular ranging method based on microlens-array digital imaging
CN105282443A (zh) Panoramic image imaging method with full depth of field
US11632535B2 (en) Light field imaging system by projecting near-infrared spot in remote sensing based on multifocal microlens array
CN112866512B (zh) Compound-eye imaging apparatus and compound-eye system
Martel et al. Real-time depth from focus on a programmable focal plane processor
US10872442B2 (en) Apparatus and a method for encoding an image captured by an optical acquisition system
WO2021121037A1 (fr) Light field reconstruction method and system applying depth sampling
EP3350982B1 (fr) Apparatus and method for generating data representing a pixel beam
CN112950727B (zh) Simultaneous multi-target ranging method with large field of view based on a bionic curved compound eye
US11092820B2 (en) Apparatus and a method for generating data representative of a pixel beam
CN107610170B (zh) Depth acquisition method and system for multi-view image refocusing
CN111164970B (zh) Light field acquisition method and acquisition device
EP3145195A1 (fr) Apparatus and method for encoding an image captured by an optical acquisition system
KR20160120534A (ko) Light field camera using an electronically controlled blocking mask array
US9442296B2 (en) Device and method for determining object distances
WO2019014846A1 (fr) Spatial positioning identification method used in light field restoration
TW201606420A (zh) Imaging device and light field imaging lens

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928958

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.06.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18928958

Country of ref document: EP

Kind code of ref document: A1