WO2012020381A1 - Method and apparatus for recognizing an interesting object - Google Patents

Method and apparatus for recognizing an interesting object

Info

Publication number
WO2012020381A1
WO2012020381A1 (application PCT/IB2011/053564)
Authority
WO
WIPO (PCT)
Prior art keywords
images
sub
light
different
light beams
Prior art date
Application number
PCT/IB2011/053564
Other languages
English (en)
Inventor
Tommaso Gritti
Jelte Peter Vink
Ruud Vlutters
Yili Hu
Gerard De Haan
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2012020381A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/10 - Image acquisition
    • G06V 10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 - Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V 10/145 - Illumination specially adapted for pattern recognition, e.g. using gratings

Definitions

  • the invention relates to an imaging system, in particular a multispectral imaging method and an apparatus therefor.
  • Some of the methods involve expensive components, such as spectrometers, prisms, or Bayer filters.
  • the present invention has been developed in an effort to solve at least some of the above problems, and it is an object of the present invention to provide a multispectral imaging method and apparatus without the need for calibration.
  • a method of recognizing an interesting object comprising the steps of: a) sequentially illuminating a target object with a plurality of light beams, each light beam comprising at least one sub-beam having a predefined spectrum, and different light beams having at least one different sub-beam; b) capturing a plurality of images of the target object, each image corresponding to one light beam; c) comparing at least two images of the target object to extract at least one difference feature between every two images; d) identifying the interesting object from the target object based on the at least one difference feature of step c).
  • a set of computer executable instructions configured to perform the method of recognizing an interesting object.
  • an apparatus for recognizing an interesting object comprising: a light source, configured to generate a plurality of light beams, each light beam comprising at least one sub-beam having a predefined spectrum, and different light beams having at least one different sub-beam; a light sensor, configured to capture a plurality of images of a target object being illuminated by the light source, each image corresponding to one light beam; a processor, configured to compare at least two of the plurality of images captured by the light sensor to extract at least one difference feature between every two captured images; wherein the processor is further configured to identify the interesting object from the target object based on the at least one difference feature.
  • the basic idea of the present invention is to utilize the differing features of images captured under different lighting conditions, without any need for calibration of the images, for the purpose of recognizing the interesting object.
  • One advantage of one of the methods of the present invention is the improved robustness to varying ambient light.
  • Fig. 1 illustrates a block diagram of an apparatus for recognizing an interesting object according to an embodiment of the present invention
  • Fig.2 illustrates a flowchart of the method of recognizing an interesting object according to an embodiment of the present invention
  • Fig.3 illustrates a flowchart of the method of recognizing an interesting object according to an embodiment of the present invention
  • Fig.4 illustrates the LED spectrum of the light source 20 according to an embodiment of the present invention
  • Fig.5a illustrates images of the first sub-set of sample objects according to an embodiment of the present invention
  • Fig.5b illustrates images of the second sub-set of sample objects according to an embodiment of the present invention
  • Fig.5c illustrates pixel identification of the images of the first sub-set of sample objects according to an embodiment of the present invention
  • Fig.6 illustrates the identification results of three hands according to an embodiment of the present invention
  • Fig.7 illustrates the identification results of a hand under different ambient light conditions according to an embodiment of the present invention
  • the main purpose of the multispectral imaging technique of the present invention is to capture images of a target object and to identify an interesting object from the target object.
  • the interesting object in the present invention is preferably a certain material, such as human skin, some kind of food (fruit/vegetable), some kind of floor, etc.
  • Fig. 1 illustrates a block diagram of an apparatus for recognizing an interesting object according to an embodiment of the present invention.
  • the apparatus 10 for recognizing an interesting object includes a light source 20, a light sensor 30 and a processor 40.
  • the light source 20 is configured to generate a plurality of light beams, each light beam comprising at least one sub-beam having a predefined spectrum, and different light beams having at least one different sub-beam.
  • the light sensor 30 is configured to capture a plurality of images of a target object being illuminated by the light source 20, each image corresponding to one light beam.
  • a skilled person should understand that the impact of ambient illumination could be added to the captured images. However, with the methods and apparatus of the present invention, the impact of ambient illumination can be eliminated, or largely suppressed.
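The suppression of ambient illumination follows directly from an additive image model: if each captured frame is the LED contribution plus a constant ambient term, the pairwise difference of two frames is free of that term. A minimal numeric sketch, with all values invented for illustration:

```python
# Additive model (illustrative): captured value = reflectance * beam + ambient.
reflectance = [0.2, 0.5, 0.9, 0.4]   # toy per-pixel reflectances
ambient = 0.3                        # unknown but constant ambient light

frame_a = [r * 0.8 + ambient for r in reflectance]  # frame under LED beam A
frame_b = [r * 0.5 + ambient for r in reflectance]  # frame under LED beam B

# The pairwise difference feature no longer depends on the ambient term.
diff = [a - b for a, b in zip(frame_a, frame_b)]
expected = [r * (0.8 - 0.5) for r in reflectance]
assert all(abs(d - e) < 1e-12 for d, e in zip(diff, expected))
```

The same cancellation holds per color channel for a trichromatic sensor, which is why the difference features below are robust to varying ambient light.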
  • the processor 40 is configured to compare at least two of the plurality of images captured by the light sensor 30 to extract at least one difference feature between every two captured images.
  • Fig. 2 illustrates a flowchart of the method of recognizing an interesting object according to an embodiment of the present invention. As illustrated in the Figure, the method includes four steps a, b, c and d.
  • Step a is for sequentially illuminating a target object with a plurality of light beams, wherein each light beam comprises at least one sub-beam having a predefined spectrum, and different light beams have at least one different sub-beam.
  • Step b is for capturing a plurality of images of the target object, wherein each image corresponds to one light beam.
  • Step c is for comparing at least two images of the target object to extract at least one difference feature between every two images.
  • Step d is for identifying the interesting object from the target object, based on the at least one difference feature of step c.
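The four steps above can be sketched as one small pipeline. Every name below (`light_source`, `camera`, `classify_pixel`) is an illustrative placeholder, not an interface defined by the patent:

```python
# Sketch of recognition steps a-d; all callables are hypothetical stand-ins.
def recognize(light_source, camera, classify_pixel):
    # steps a + b: sequentially illuminate and capture one image per beam
    frames = [camera(beam) for beam in light_source]

    # step c: one difference feature per unordered pair of images
    diffs = [
        [a - b for a, b in zip(fi, fj)]
        for k, fi in enumerate(frames)
        for fj in frames[k + 1:]
    ]

    # step d: classify each pixel from its stack of difference values
    n_pixels = len(frames[0])
    return [classify_pixel([d[p] for d in diffs]) for p in range(n_pixels)]

# Toy run: 3 beams, a 2-pixel "camera", and a threshold classifier.
beams = [1.0, 2.0, 4.0]
camera = lambda beam: [0.9 * beam, 0.1 * beam]   # pixel 0 reflects strongly
result = recognize(beams, camera, lambda feats: feats[0] < -0.5)
assert result == [True, False]
```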
  • Fig. 3 illustrates a flowchart of the method of recognizing an interesting object according to an embodiment of the present invention. As illustrated in the Figure, the method includes three steps e, f and g.
  • Step e is for sequentially illuminating a set of sample objects with a plurality of testing light beams, wherein the set of sample objects comprises a first sub-set of objects made of the same material as the interesting object and a second sub-set of objects made of material different from the interesting object, each testing light beam comprises at least one testing sub-beam having a predefined spectrum, and different testing light beams have at least one different testing sub-beam.
  • Step f is for capturing a plurality of images of the set of sample objects, each image corresponding to one testing light beam.
  • Step g is for comparing at least two images of the set of sample objects to extract at least one parameter representing the characteristics of the material of the interesting object.
  • Step a and step e are executed by use of the light source 20.
  • Step b and step f are executed by use of the light sensor 30.
  • Steps c, d and g are executed by the processor 40.
  • steps e, f and g are executed in a learning phase, so that at least one parameter representing the characteristics of the material of the interesting object is determined.
  • steps a, b, c and d are executed in a recognition phase, in order to identify the interesting object from the images of a target object.
  • the light source 20 includes a plurality of LEDs, and each LED is configured to generate a sub-beam having a predefined spectrum, and at least two of the predefined spectra are narrower than the spectrum of white light.
  • a set of computer executable instructions configured to perform any of steps a, b, c, d, e, f and g.
  • the set of computer executable instructions can be stored in a processor or other memory, in the form of software, firmware, hardware, or any combination thereof.
  • any one of steps a to g can be interactively performed between two or more separate units/components/devices, instructed by the computer executable instructions.
  • Fig.4 illustrates the LED spectrum of the light source 20 according to an embodiment of the present invention.
  • the x-axis in Fig.4 is the wavelength in nanometers.
  • the light source 20 in this embodiment includes eight LEDs.
  • Fig.4 illustrates the LED spectra of seven of the eight LEDs, which correspond to the colors royal blue, blue, cyan, green, amber, red-orange and red, respectively; the eighth LED has a broader spectrum to generate white light.
  • the setting of the number of lighting elements (LEDs) in the light source 20 and the spectrum of the lighting elements of this embodiment is exemplary rather than restrictive.
  • the light sensor 30 includes a trichromatic camera, and the light source 20 includes eight LEDs of the spectrum as illustrated in Fig.4, and the interesting object is human skin.
  • the processor 40 controls the eight LEDs to generate eight testing light beams to sequentially illuminate a set of sample objects.
  • Each testing light beam includes only one unique testing sub-beam having a predefined spectrum, wherein each sub-beam corresponds to one of the eight LEDs.
  • a person skilled in the art should also understand that the basic idea here is to generate different testing light beams for illuminating the sample objects. Besides using one light generated by one LED as one testing light beam, it is also practical to turn ON two or more LEDs to mix two or more different lights to obtain one testing light beam.
  • the number of eight testing light beams is not mandatory either; other numbers of testing light beams are also practical, depending on the specific application and its requirements.
  • the minimum requirement here is at least two testing light beams having different spectra.
  • the set of sample objects comprises a first sub-set of objects made of the same material as the interesting object and a second sub-set of objects made of material different from the interesting object.
  • the first sub-set of objects includes three different human hands.
  • the second sub-set of objects includes a wood board and a stuffed dummy.
  • In step f of the learning phase, the trichromatic camera 30 is used to capture a plurality of images of the set of sample objects, wherein each image corresponds to one testing light beam; that is, one image is captured while the sample objects are illuminated by the corresponding testing light beam.
  • In step g of the learning phase, the processor 40 compares at least two images of the set of sample objects to extract at least one parameter representing the characteristics of the material of the interesting object, i.e. human skin.
  • the processor 40 compares any two images of the same sample object, such as the first human hand, to extract one difference feature of the two images. For each sample object, at most 28 difference features could be extracted, since there are 8 images of each sample object, and there are 28 combinations of any two images out of the 8 images.
  • Each image includes a plurality of pixels, and the difference feature of two compared images includes a plurality of sub-different features, wherein each sub-different feature represents the difference between two pixels having the same location within the two compared images. Since each image is trichromatic, each sub-different feature can be expressed as a ternary datum (one difference value per color channel).
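The pairwise comparison can be enumerated with `itertools.combinations`: with 8 frames there are C(8,2) = 28 unordered pairs, and each pixel of each pair yields one ternary sub-difference. A toy sketch with invented single-pixel frames:

```python
from itertools import combinations
from math import comb

# Toy stand-in for step c: 8 single-pixel trichromatic frames, one per LED.
# Pixel values are invented for illustration.
frames = [(0.1 * k, 0.2 * k, 0.3 * k) for k in range(8)]

pairs = list(combinations(range(8), 2))   # every unordered pair of frames
assert len(pairs) == comb(8, 2) == 28     # hence at most 28 difference features

# Each pair yields one ternary sub-difference per pixel location.
sub_diffs = {
    (i, j): tuple(a - b for a, b in zip(frames[i], frames[j]))
    for i, j in pairs
}
assert len(sub_diffs) == 28
assert all(len(v) == 3 for v in sub_diffs.values())   # one value per channel
```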
  • In step g the processor 40 performs the following two operations.
  • the processor 40 identifies a plurality of positive pixels and/or a plurality of negative pixels from each compared image of the set of sample objects, each positive pixel representing the same material as the interesting object, each negative pixel representing material different from the interesting object.
  • the pixels corresponding to the sample object are positive pixels, such as the white ones illustrated in Fig.5c, whereas the other pixels are negative pixels, such as the black ones illustrated in Fig.5c.
  • all the pixels are negative pixels.
  • the processor 40 utilizes a machine learning algorithm to extract at least one parameter representing the characteristics of the material of the interesting object, i.e. human skin, from the sub-different features of the plurality of positive pixels and the plurality of negative pixels of the plurality of images of the set of sample objects.
  • examples of such a machine learning algorithm are AdaBoost and Support Vector Machines.
  • Each difference feature, as well as each sub-different feature, is related to the comparison of two different testing light beams.
  • the most discriminative and robust sub-different features which may effectively distinguish positive pixels from negative pixels, are extracted as the parameters representing the characteristics of the material of the interesting object, by use of the machine learning algorithm.
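The selection of a discriminative feature can be pictured, in miniature, as the search for a decision stump: one threshold on one sub-difference feature that best separates positive from negative pixels. This is the weak learner AdaBoost typically boosts; the sketch below uses invented feature values and is not the patent's actual training procedure:

```python
# Minimal decision-stump search over one sub-difference feature.
# Feature values are invented toy data, chosen to be separable.
positives = [0.8, 0.9, 0.7, 0.85]    # feature values at skin pixels
negatives = [0.1, 0.2, 0.15, 0.3]    # feature values at non-skin pixels

def stump_error(threshold):
    # count misclassified pixels for the rule "skin iff value >= threshold"
    errors = sum(1 for v in positives if v < threshold)
    errors += sum(1 for v in negatives if v >= threshold)
    return errors

# try every observed value as a candidate threshold, keep the best
candidates = sorted(positives + negatives)
best = min(candidates, key=stump_error)
assert stump_error(best) == 0         # this toy data is perfectly separable
```

A full AdaBoost run would reweight the pixels after each round and pick a new stump, so the most discriminative and robust sub-difference features accumulate into the stored parameters.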
  • the extracted parameters could be stored in a database.
  • the three hands of the first sub-set of objects are of different colors, so that the extracted parameters representing the characteristics of human skin are robust to different skin colors.
  • In step a of the recognition phase, the processor 40 controls the eight LEDs of the light source 20 to generate eight light beams to sequentially illuminate a target object.
  • Each light beam includes only one unique sub-beam having a predefined spectrum, wherein each sub-beam corresponds to one of the eight LEDs.
  • a person skilled in the art should also understand that the basic idea here is to generate different light beams for illuminating the target objects. Besides using one light generated by one LED as one light beam, it is also practical to turn ON two or more LEDs to mix two or more different lights to obtain one light beam.
  • the number of eight light beams is likewise not mandatory; other numbers of light beams are also practical, depending on the specific application and its requirements.
  • the basic idea is that the generated plurality of light beams should be the entirety, or a subset of the generated plurality of testing light beams generated in the learning phase, so that the knowledge learned in the learning phase can be used in the recognition phase.
  • In step b of the recognition phase, the trichromatic camera 30 is used to capture a plurality of images of the target object, wherein each image corresponds to one light beam.
  • In step c of the recognition phase, the processor 40 compares at least two images of the target object to extract at least one difference feature between every two images. For the target object, at most 28 difference features could be extracted, since there are 8 images of the target object and 28 combinations of any two images out of the 8.
  • In step d of the recognition phase, the processor 40 identifies the interesting object from the target object, based on the at least one difference feature of step c.
  • each image of the target object comprises a plurality of pixels.
  • the processor 40 compares the plurality of pixels of the two compared images to extract the difference feature of the two compared images, wherein the difference feature comprises a plurality of sub-different features, each sub-different feature representing the difference between two pixels having the same location within the two compared images.
  • Each difference feature, as well as each sub-different feature is related to the comparison of two different light beams.
  • In step d the processor 40 matches the difference feature of the target object with a known parameter representing the characteristics of the material of the interesting object in order to identify the interesting object.
  • the known parameter, which should have been extracted in the learning phase, may be obtained from a database.
  • Fig.6 illustrates the identification results by taking one of the three hands in the first sub-set of sample objects as the target object.
  • the white pixels, denoted by sign 1 in the images, represent those pixels correctly detected as human skin.
  • the dark grey pixels, denoted by sign 2 in the images, represent those pixels correctly detected as non-human-skin.
  • the light-grey pixels, denoted by sign 3 in the images, represent those pixels falsely detected as human skin.
  • the black pixels, denoted by sign 4 in the images, represent the pixels that missed detection.
  • each kind of pixel in the identification result can be remarkably well distinguished from the others by use of its unique parameters.
  • step e, step f and the first operation of pixel identification of step g are repeated under different ambient light conditions, so as to extract parameters robust to different ambient light conditions.
  • the identification result is immune to different ambient light conditions.
  • Fig.7 illustrates the identification results of a hand under different ambient light conditions.
  • Each light beam generated in the recognition phase should be the same as one of the testing light beams generated in the learning phase.
  • the extracted parameters representing the characteristics of the material of the interesting object relate to only part of the testing light beams generated in the learning phase, and the light beams generated in the recognition phase belong to a subset of testing light beams, which relate to the extracted parameters.
  • the light sensor 30 comprises multiple photodiodes with color filters, and the images captured by the light sensor 30 are multicolored.
  • the light sensor 30 comprises a grayscale camera, and each sub-different feature of image pixels can accordingly be expressed as a monadic (single-valued) datum.
  • the light source 20 comprises 6 LEDs of different color, and any testing light beam could combine multiple sub-beams of different LEDs.
  • the light source 20 comprises common LEDs and the light sensor 30 comprises a common trichromatic camera, thus the hardware cost of the apparatus 10 is low.
  • the illumination sequence of the plurality of light beams in step a is the same as the illumination sequence of the plurality of testing light beams in step e.
  • the comparison sequence of the plurality of images of the target object in step c is the same as the comparison sequence of the plurality of images of the sample objects in step g.
  • the processor 40 has no knowledge of the illumination sequence.
  • the processor 40 compares every two consecutive images of the sample objects to extract at least one parameter representing the characteristics of the material of the interesting object.
  • the processor 40 compares every two consecutive images of the target object to extract a plurality of difference features, each difference feature representing the difference between two consecutive images.
  • the processor 40 identifies the interesting object, based on the plurality of difference features.
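When the illumination sequence is unknown but repeats in a fixed order, comparing each frame with its predecessor yields n-1 difference features in a stable order regardless of which beam came first. A minimal sketch with invented two-pixel frames:

```python
# Toy frames: 4 captures of a 2-pixel sensor under an unknown but fixed
# illumination sequence (values invented for illustration).
frames = [[1, 2], [4, 6], [5, 5], [2, 9]]

# Compare every two consecutive images: one difference feature per step.
consecutive_diffs = [
    [a - b for a, b in zip(cur, prev)]
    for prev, cur in zip(frames, frames[1:])
]
assert consecutive_diffs == [[3, 4], [1, -1], [-3, 4]]
assert len(consecutive_diffs) == len(frames) - 1
```

Because the same consecutive-comparison order is used in the learning phase (step g) and the recognition phase (step c), the features remain comparable even though the processor never learns which LED produced which frame.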
  • the apparatus 10 further comprises a light detector.
  • the light detector is configured to directly detect the illumination sequence of the plurality of light beams in step a and/or the illumination sequence of the plurality of testing light beams in step e, so that the processor 40 can obtain the knowledge of the illumination sequence.
  • it is also possible for the processor to control the light source to generate the plurality of (testing) light beams in the learning phase and the recognition phase, so that the processor already has knowledge of the illumination sequence of the plurality of light beams; in that case, there is no need for the above light detector.
  • the processor 40 further controls the illumination sequence in step a and/or step e, and controls the acquisition times in step b and/or step f.
  • the images are always captured at the same phase of the mains power cycle, so that the ambient light contribution, such as the illumination from a mains-powered ambient light source, is constant at the moment of capture.
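The effect of phase-locked capture can be illustrated with a toy flicker model: mains-driven lighting varies within each cycle, so sampling once per cycle, always at the same phase, sees a constant ambient contribution. The flicker model and the numbers below are invented for illustration:

```python
import math

mains_hz = 50.0
period = 1.0 / mains_hz                      # 20 ms per mains cycle

def ambient(t):
    # Illustrative flicker model: intensity varies over the mains cycle.
    return 0.5 + 0.5 * math.sin(2 * math.pi * mains_hz * t) ** 2

# One frame per cycle, always at the same phase of the mains waveform.
samples = [ambient(k * period) for k in range(8)]
assert max(samples) - min(samples) < 1e-9    # ambient contribution is constant
```

Sampling at arbitrary times would instead spread the samples across the full flicker range, contaminating the difference features.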
  • the light source 20, the light sensor 30 and the processor 40 are assembled in the same casing. In some other embodiments, the light source 20, the light sensor 30 and the processor 40 are mounted apart, co-operating with each other by use of any feasible communication techniques.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention relates to a multispectral imaging method and apparatus that require no calibration. According to one embodiment, an apparatus (10) for recognizing an interesting object comprises: a light source (20) configured to generate a plurality of light beams, each light beam comprising at least one sub-beam having a predefined spectrum, and different light beams comprising at least one different sub-beam; a light sensor (30) configured to capture a plurality of images of a target object illuminated by the light source, each image corresponding to one light beam; and a processor (40) configured to compare at least two of the plurality of images captured by the light sensor in order to extract at least one difference feature between the two captured images, the processor being further configured to identify the interesting object from the target object on the basis of the at least one difference feature.
PCT/IB2011/053564 2010-08-11 2011-08-10 Method and apparatus for recognizing an interesting object WO2012020381A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2010/075872 2010-08-11
CN2010075872 2010-08-11

Publications (1)

Publication Number Publication Date
WO2012020381A1 (fr) 2012-02-16

Family

ID=44651885

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2011/053564 WO2012020381A1 (fr) 2010-08-11 2011-08-10 Method and apparatus for recognizing an interesting object

Country Status (1)

Country Link
WO (1) WO2012020381A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008566A1 (fr) * 2003-07-16 2005-01-27 Omniperception Limited System for determining whether a face is real
US20060034537A1 (en) * 2004-08-03 2006-02-16 Funai Electric Co., Ltd. Human body detecting device and human body detecting method

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
AKIRA KIMACHI: "Real-Time Detection of Natural Objects Using AM-Coded Spectral Matching Imager", PROC. SPIE, vol. 5667, no. 76, 15 January 2005 (2005-01-15), San Jose, CA, USA, pages 76 - 83, XP055012087, Retrieved from the Internet <URL:http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=PSISDG005667000001000076000001&idtype=cvips&doi=10.1117/12.586787&prog=normal> [retrieved on 20111114] *
D LANDGREBE: "Signal Theory Methods in Multispectral Remote Sensing", 2003, JOHN WILEY & SONS, Hoboken, US, pages: 273 - 280, XP002663894 *
DIMITRIOS KAPSOKALYVAS ET AL: "Multispectral dermoscope", IN CLINICAL AND BIOMEDICAL SPECTROSCOPY, VOL. 7368 OF PROCEEDINGS OF SPIE-OSA BIOMEDICAL OPTICS, 1 January 2009 (2009-01-01), pages 73680D-1 - 73680D-6, XP055012098, DOI: 10.1117/12.831564 *
EBISAWA Y: "Head pose detection with one camera based on pupil and nostril detection technique", VIRTUAL ENVIRONMENTS, HUMAN-COMPUTER INTERFACES AND MEASUREMENT SYSTEMS, 2008. VECIMS 2008. IEEE CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 14 July 2008 (2008-07-14), pages 172 - 177, XP031300748, ISBN: 978-1-4244-1927-2 *
HANS F GRAHN, PAUL GELADI: "Hyperspectral Image Analysis", 2007, JOHN WILEY & SONS, Chichester, GB, ISBN: 978-0-470-01086-0, article JACCO C NOORDAM, WILLIE H A M VAN DEN BROEK: "Clustering and Classification in Multispectral Imaging for Quality Inspection of Postharvest Products", pages: 43 - 67, XP002663547 *
HERWIG GUGGI ET AL: "Exploiting Feature-Based Fusion in LED-based Multi-Spectral Imaging", PROCEEDINGS OF THE 33RD ANNUAL WORKSHOP OF THE AUSTRIAN ASSOCIATION FOR PATTERN RECOGNITION, 1 May 2009 (2009-05-01), Vienna, pages 1 - 11, XP055012092, Retrieved from the Internet <URL:http://oagm2009.icg.tugraz.at/papers/p54.pdf> [retrieved on 20111114] *
YASUHIRO SUZUKI ET AL: "Skin Detection by Near Infrared Multi-band for Driver Support System", COMPUTER VISION - ACCV 2006 LECTURE NOTES IN COMPUTER SCIENCE;;LNCS, SPRINGER, BERLIN, DE, vol. 3852, 1 January 2005 (2005-01-01), pages 722 - 731, XP019027499, ISBN: 978-3-540-31244-4 *
YILI HU ET AL: "Pixel-Based Skin Detection using Multispectral Imaging", MASTER GRADUATION PAPER, ELECTRICAL ENGINEERING, 30 August 2010 (2010-08-30), Department of Electrical Engineering, pages 1 - 13, XP055012080, Retrieved from the Internet <URL:http://alexandria.tue.nl/repository/books/709012.pdf> [retrieved on 20111114] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013098708A2 (fr) 2011-12-30 2013-07-04 Koninklijke Philips Electronics N.V. Multispectral data acquisition
US9060113B2 (en) 2012-05-21 2015-06-16 Digimarc Corporation Sensor-synchronized spectrally-structured-light imaging
US10498941B2 (en) 2012-05-21 2019-12-03 Digimarc Corporation Sensor-synchronized spectrally-structured-light imaging
US9979853B2 (en) 2013-06-07 2018-05-22 Digimarc Corporation Information coding and decoding in spectral differences
US10447888B2 (en) 2013-06-07 2019-10-15 Digimarc Corporation Information coding and decoding in spectral differences

Similar Documents

Publication Publication Date Title
US10498941B2 (en) Sensor-synchronized spectrally-structured-light imaging
Cheng et al. Effective learning-based illuminant estimation using simple features
US10621472B1 (en) Rapid onboarding system for visual item classification
  • WO2014052041A4 (fr) Methods for automatically generating a library of card decks and master images for a card game, and associated card processing apparatus
  • TWI551150B (zh) Apparatus and method for automated self-training of automatic white balance of an electronic camera
  • WO2013098708A2 (fr) Multispectral data acquisition
  • TWI620131B (zh) Article identification system and method
  • RU2006125736A (ru) Method of identification by the iris of the eye and device for its implementation
  • WO2012020381A1 (fr) Method and apparatus for recognizing an interesting object
  • RU2447471C2 (ru) Color sequential flash for digital image capture
US20100002956A1 (en) Method and system for converting at least one first-spectrum image into a second-spectrum image
US8570433B1 (en) Coloration artifact reduction
  • CN107038819A (zh) Controlled and multicolored scanner illumination
  • CN106163373A (zh) Signal processing device and endoscope system
  • KR102589555B1 (ko) Method for selecting spectral bands of a hyperspectral image and spectral band setting apparatus using the same
US11683551B2 (en) Systems and methods for detecting light signatures and performing actions in response thereto
AU2010363107A1 (en) Iris identification method of a person (alternatives)
Anami et al. Influence of light, distance and size on recognition and classification of food grains' images
US20220307981A1 (en) Method and device for detecting a fluid by a computer vision application
  • KR101049409B1 (ko) Apparatus and method for color correction in an image processing system
  • KR101836768B1 (ko) System and method for automatic image color balance based on object recognition, and recording medium therefor
  • JP6508730B2 (ja) Light-emitting marker device, marker detection device, transmission system, marker light emission method, marker detection method, and program
Vink et al. Robust skin detection using multi-spectral illumination
  • CN111144421B (zh) Object color recognition method and apparatus, and throwing device
  • JP6278819B2 (ja) Optical character recognition device and optical character recognition method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11757680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11757680

Country of ref document: EP

Kind code of ref document: A1