WO2024127274A1 - Improvements in spectral sensing - Google Patents

Improvements in spectral sensing

Info

Publication number
WO2024127274A1
Authority
WO
WIPO (PCT)
Prior art keywords
optical
optical radiation
intensity
target object
illumination sources
Prior art date
Application number
PCT/IB2023/062622
Other languages
English (en)
Inventor
William Lamb
Alexander PILKINGTON
Bernard PAYNE
Original Assignee
Dyson Technology Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dyson Technology Limited
Publication of WO2024127274A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02 Details
    • G01J3/10 Arrangements of light sources specially adapted for spectrometry or colorimetry
    • G01J2003/102 Plural sources
    • G01J2003/104 Monochromatic plural sources
    • G01J3/28 Investigating the spectrum
    • G01J3/2823 Imaging spectrometer
    • G01J2003/2826 Multispectral imaging, e.g. filter imaging
    • G01J3/42 Absorption spectrometry; Double beam spectrometry; Flicker spectrometry; Reflection spectrometry
    • G01J2003/425 Reflectance
    • G01J3/433 Modulation spectrometry; Derivative spectrometry
    • G01J2003/4334 Modulation spectrometry; Derivative spectrometry by modulation of source, e.g. current modulation
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N2021/1757 Time modulation of light being essential to the method of light modification, e.g. using single detector

Definitions

  • the present invention relates to spectral sensing methods and apparatuses, and particularly, although not exclusively, to imaging methods and apparatuses employing multiple spectrally distinct illumination sources to illuminate an object that is also illuminated by ambient illumination.
  • Fig. 1 schematically shows data generated by a hyper-spectral imaging system.
  • This data includes data describing a plurality of 2D images 2 of the same target subject.
  • the data was captured using an imaging apparatus able to create a plurality of 2D image data subsets each captured using a corresponding one of a plurality (usually a high number) of closely-spaced discrete wavelength bands collectively spanning a range of the electromagnetic spectrum from which diagnostic information on the target subject can be obtained.
  • each wavelength band has a full width at half maximum (FWHM) of 1 to 5 nm and such an imaging system can capture several hundred wavelength bands collectively spanning from the ultraviolet to near-infrared parts of the electromagnetic spectrum.
  • the output of the hyperspectral imaging system is a 3D data cube 1.
  • the data cube 1 contains data describing a spectrum 4 of light captured at each respective pixel of the hyperspectral imaging system when imaging the 2D scene.
  • the resulting spectrum 4 associated with any given pixel 3 of the 2D images thereby has a spectral resolution which is determined by how many of the aforesaid closely-spaced discrete wavelength bands there are in the imaging apparatus, and by the width of the spectral range they collectively span.
  • hyperspectral imaging enables the fusion of spectroscopy-based analysis with 2D object detection and computer vision algorithms to provide improved analysis capabilities.
  • Multiband imaging has many applications, particularly where both chemical composition and structural features provide important diagnostic insight.
  • Reference sample materials can be white (perfectly reflecting at all wavelengths) or grey with a given absorption/reflection coefficient for each wavelength band of interest in the spectral range spanned by the imaging apparatus.
  • an active calibration step is required, and this typically consists of placing a reference sample material at the approximate location of the target subject to be imaged.
  • the imaging system then images the reference sample material and performs a calibration based on the known reflectance/absorption properties of the reference sample material.
  • This approach has two main challenges. The first challenge is that the reference samples must be calibrated over all wavelengths of interest. Consistent reflectance and absorption levels can be difficult to achieve across a wide band of the electromagnetic spectrum in the spectral range spanned by the imaging apparatus. Thus, calibration requires very specific, potentially expensive materials or compensation curves specific to the reflectance sample used. Accumulated dirt, grease or damage to the reference sample will influence the calibration.
  • the second challenge is that calibration is typically performed using a reference material presenting a flat surface. This means that the calibration is only valid for a given target subject if that target subject shares the same orientation (i.e., image plane) as that of the reference material.
  • the illumination orientation/profile across the surface of the target subject can be very different to that of a flat surface of the reference material, thereby reducing the effectiveness of the calibration. Compensation schemes have been suggested in the prior art; however, these approaches can add unwanted artefacts.
  • this method of calibration requires a trained operator to ensure accuracy and is wholly unsuitable for use in a diagnostic device intended for use in a domestic setting because the accuracy of the diagnostics derived from the calibrated spectral data will be entirely dependent upon the competence of the user in correctly performing a calibration step every time the diagnostic device is used.
  • a new data set 7 (i.e., for an image) can be obtained which is notionally (naively) equivalent to an image taken with no ambient light present. While this approach is conceptually simple, it suffers from drawbacks. The assumption is that the ambient light is static between the two images. If this is not the case, then the subtraction is imperfect and artefacts can be introduced. If the illumination source is not significantly brighter than the background light, very little signal will be extracted, resulting in a potentially noisy estimate. For spectral imaging and subsequent diagnostic analysis, the signal-to-noise ratio is critical, as subtle differences in the absorption or reflection coefficient indicate the presence and abundance of certain chromophores. Noise can therefore significantly affect the estimated abundance of different constituents of an imaged sample.
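  The naive flash/no-flash subtraction described above can be sketched as follows; the pixel values, the function name and the clamping of negative results to zero are illustrative assumptions rather than details from the patent:

```python
# Hypothetical sketch of "flash / no-flash" ambient subtraction.
# Pixel values are illustrative; in a real system these would be
# sensor readings captured with the source on and then off.

def subtract_ambient(frame_with_source, frame_ambient_only):
    """Naive ambient subtraction: assumes the ambient light is static
    between the two captures; clamps negatives caused by noise."""
    return [max(p_on - p_off, 0)
            for p_on, p_off in zip(frame_with_source, frame_ambient_only)]

frame_on = [120, 95, 210, 48]   # source + ambient
frame_off = [100, 90, 100, 50]  # ambient only (last pixel shows noise)

signal = subtract_ambient(frame_on, frame_off)
print(signal)  # -> [20, 5, 110, 0]
```

  Note how the last pixel illustrates the noise problem: when the source adds little signal over the ambient level, measurement noise can dominate the subtracted estimate.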
  • Existing diagnostic devices employ a variety of methods.
  • One method is implemented using an array of optical sensors (e.g., pixels in an imaging pixel array) in which each one of the optical sensors comprises a respective one of a plurality of different narrowband colour filters.
  • the different colour filters possess different respective colour bands such that the associated image sensor is responsive only to light falling within the respective colour band.
  • These systems normally have only a limited number of different colour filters corresponding to a limited number of colour bands (e.g., 8, 16, or 25 colour bands).
  • the colour filters are disposed on top of the optical sensors (e.g., pixels) on a standard imaging sensor. The effect is to reduce the spatial resolution of the image. This type of arrangement is often referred to as a multi- spectral imaging system.
  • a hyperspectral imaging system may possess hundreds of different colour bands. Such systems typically use different techniques, as compared to multi-spectral imaging systems, to achieve this.
  • a hyperspectral imaging system may employ a diffractive optical arrangement to disperse incoming light from a target scene into an optical spectrum (i.e., as in a rainbow). The optical spectrum is focused onto an image sensor array such that different pixels receive different respective sections of the spectrum. Each such spectral section defines a unique colour band. In this way no colour filters are needed, but a very precise optical alignment is required as between the diffractive optical arrangement and the image sensor array.
  • this methodology can only image a small portion of the scene at any one time.
  • To capture spectral data for a full target scene, the image sensor array must generate spectra for many spots/locations in the imaged scene, thereby sampling the scene fully at a desired spatial sampling resolution. This is often done by raster-scanning or sweep-scanning the scene. This takes time, during which the imaged scene may change, leading to ambiguity and inaccuracies.
  • the present invention has been devised in light of the above considerations.
  • the invention may provide a system (e.g., an apparatus and method) for obtaining an optical reflection spectrum for a target object.
  • This may provide a means for imaging a target object for accurately tracking the relative reflection strength (e.g., reflectance, or relative intensity) for each wavelength of interest at each image pixel.
  • the invention may provide the benefit of an automated way of providing a relative colour balance for all reflected light at wavelengths of interest in the spectral range spanned by the apparatus.
  • the invention in another aspect, may provide an apparatus and method to generate multi-band spectral images on standard low-cost optical sensing/imaging hardware, enabling the cost of a spectral imaging system to be greatly reduced.
  • the invention may encode information into illuminating optical radiation which identifies a spectral position or band associated with the optical radiation sensors dedicated to sensing the illuminating optical radiation, such that a receiving optical sensor can “know” which spectral band of light it is receiving.
  • a multi-band imaging method and apparatus may be provided using conventional “off the shelf” optical sensing/imaging sensors for some or all of the wavelengths of interest in the spectral range spanned by the apparatus.
  • the temporal encoding may enable a plurality of individual lighting sources to be individually tracked across a sequence of optical sensor sampling times (e.g., a sequence of images).
  • a multiband spectral imaging system can be provided where each spectral band of interest is illuminated by a specific illumination source conveying a specific temporal encoding not conveyed by any of the other illumination sources. Extracting this unique temporal encoding (e.g., amplitude modulation encoding) signature from the optical sensor signal (e.g., image) sequence provides one or both of the following advantages:
  • the first advantage may thereby be to decouple the number of spectral channels in the optical sensing/imaging hardware from the number of spectral bands the system is able to capture and analyse.
  • low-cost standard imaging hardware can be substituted for the expensive multi-band/hyperspectral imaging components typically used.
  • Typical imaging hardware possesses higher pixel density than a spectral imaging system of comparable size and cost.
  • the reflection strength/coefficient may be determined for each of a plurality of illumination sources to generate data describing a reflection spectrum. If the optical sensor comprises a digital imaging chip (e.g., a camera) then data describing spectra may be generated for the scene across the whole camera capture view of the target subject. This may generate a multi-band spectral image data cube whose spectral content is determined principally by the number of individually activated illumination sources, each having a unique spectral band and a unique encoding.
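  The assembly of such a data cube from per-source reflection images might look like the following sketch; the `build_cube` helper, the band wavelengths and the pixel values are all hypothetical illustrations:

```python
# Illustrative assembly of a multi-band "data cube" from per-source
# reflection strengths recovered at each pixel. The cube's spectral
# depth equals the number of spectrally distinct illumination sources.

def build_cube(per_source_images):
    """per_source_images: dict mapping source band (nm) -> 2D image
    (list of rows) of recovered reflection strengths."""
    bands = sorted(per_source_images)
    rows = len(per_source_images[bands[0]])
    cols = len(per_source_images[bands[0]][0])
    # cube[y][x] is the reflection spectrum at pixel (x, y)
    return bands, [[[per_source_images[b][y][x] for b in bands]
                    for x in range(cols)] for y in range(rows)]

images = {450: [[0.2, 0.3]], 550: [[0.6, 0.5]], 650: [[0.9, 0.1]]}
bands, cube = build_cube(images)
print(bands)       # [450, 550, 650]
print(cube[0][0])  # spectrum at pixel (0, 0): [0.2, 0.6, 0.9]
```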
  • systems can be easily configured to cover a range of applications, including skin diagnosis, oral care and food evaluation and preparation, and other applications.
  • the invention may provide a method for obtaining an optical reflection spectrum for a target object, the method comprising: illuminating the target object with optical radiation generated by each one of a plurality of optical illumination sources wherein each one of the plurality of optical illumination sources is spectrally distinct from each of the other optical illumination sources of the plurality of optical illumination sources; wherein optical radiation generated by each one of a plurality of optical illumination sources encodes information identifying the respective optical illumination source distinguishably from amongst the plurality of optical illumination sources, according to a time-varying modulation of the intensity of optical radiation generated thereby; by an optical sensor, receiving optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources; decoding the encoded information within the reflected component of the illumination optical radiation received by the optical sensor and assigning an optical spectral band to the respective illumination optical radiation according to the decoded information; generating data describing an optical reflection spectrum according to the reflected components of the illuminating optical radiation and the assigned optical spectral bands.
  • the optical sensor may be a monochrome optical sensor.
  • the monochrome optical sensor may be operable to generate one single monochrome channel as an output signal in response to receiving polychromatic optical radiation from the illuminated target object.
  • the optical sensor may be a colour optical sensor.
  • the colour optical sensor may be operable to generate a plurality of defined colour channels as output signals in response to receiving polychromatic optical radiation from the illuminated target object.
  • the colour optical sensor may comprise a plurality of colour optical sensor elements (e.g., pixels) each of which is arranged to be responsive only to light within a respective one of a plurality of spectrally distinct colour bands within the polychromatic optical radiation.
  • An example of three distinct colour bands is the set of Red, Green and Blue (RGB) colour bands.
  • the step of generating data describing an optical reflection spectrum may be performed using a selected one (e.g., only one) of the plurality of defined colour channels that corresponds to a colour band of optical sensor elements of the colour optical sensor.
  • For example, where an optical illumination source generates light exclusively within a narrow spectral band (e.g., in the 400 nm to 420 nm range), the corresponding blue (B) colour channel alone may be selected. This may be advantageous for improving the signal-to-noise ratio (SNR), for example.
  • the step of generating data describing an optical reflection spectrum may be performed by combining the output signals of a plurality of (e.g., all of) the colour channels to convert the colour image data to monochrome image data.
  • the reflected component of the illuminating optical radiation received by the optical sensor may be represented as an electrical response signal generated by the optical sensor in response to the received optical radiation.
  • the electrical response signal may be represented as data (e.g., converted into digital data) describing the intensity of the reflected component of the illuminating optical radiation received by the optical sensor.
  • the step of decoding may be performed using such digital data.
  • a reflection spectrum may be obtained using either a colour-distinguishing optical sensor (e.g., configured to generate a plurality of defined colour channels as output signals) or a monochrome optical sensor (e.g., configured to generate one single monochrome channel as output) in conjunction with a plurality of optical illumination sources, each configured to communicate to the optical sensor the spectrally distinct nature of the light generated by it when received by the optical sensor.
  • This encoded information is conveyed concurrently with reflectance information regarding the target object because the encoded optical radiation arrives at the optical sensor via reflection from the target object.
  • the act of reflection of encoded light from the target object ‘imprints’ reflectance information into the encoded optical radiation, for receipt by the optical sensor.
  • either a colour-distinguishing optical sensor may be used or a monochrome optical sensor may be used. If a colour-distinguishing optical sensor is used, then a value of the average (e.g., arithmetic average value, or Root Mean Square value) of all of the multiple concurrent colour channel output signal values (e.g., RGB channels or other colour space) may be used to represent a measurement of the received optical radiation at any given point in time (e.g., at any given sensor exposure or frame).
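  The collapsing of concurrent colour channel values to a single per-frame measurement could be sketched as follows; both helper names are hypothetical, showing the arithmetic-average and RMS options mentioned above:

```python
import math

# Hedged sketch: two ways a colour-distinguishing sensor's concurrent
# channel values (e.g., RGB) might be collapsed to a single measurement
# per frame, per the averaging options described in the text.

def channel_mean(channels):
    """Arithmetic average of the concurrent channel values."""
    return sum(channels) / len(channels)

def channel_rms(channels):
    """Root Mean Square of the concurrent channel values."""
    return math.sqrt(sum(c * c for c in channels) / len(channels))

rgb = (30, 60, 90)                # one frame's R, G, B values
print(channel_mean(rgb))          # -> 60.0
```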
  • references herein to “optical” include a reference to electromagnetic radiation preferably in the wavelength range of about 0.3 µm to about 3.0 µm, or more preferably in the wavelength range of about 0.4 µm to about 4.0 µm, or yet more preferably in the wavelength range of about 0.4 µm to about 2.0 µm, or even more preferably in the wavelength range of about 0.4 µm to about 1.0 µm.
  • references herein to “spectrally distinct” as between a plurality of optical illumination sources include a reference to optical illumination sources configured to emit electromagnetic radiation spanning a respective one of a plurality of spectral bands of optical wavelengths in which each spectral band includes a range of optical wavelengths not included among a range of optical wavelengths spanned by the spectral band of any of the other spectral bands of the plurality of spectral bands.
  • each one of the plurality of optical illumination sources is configured to emit electromagnetic radiation having an optical wavelength that does not fall outside its respective spectral band.
  • the respective spectral bands of the plurality of optical illumination sources do not share any common optical wavelengths (i.e., do not overlap).
  • the spectral width of an optical illumination source may be such that the full width at half maximum (FWHM; bandwidth) of the spectral output thereof is any value in the range from 20 nm to 70 nm.
  • An optical illumination source may comprise a quasi-monochromatic light-emitting diode (LED).
  • the number of optical illumination sources may be a number selected from the range: three (3) to twenty (20).
  • references herein to “encode” include a reference to generating a signal according to an algorithm or code, and include a reference to expressing information via a code, or to expressing information in a different form or system of language.
  • references herein to “decode” include a reference to recovering information that has been encoded via a code, or to recovering information that has been encoded in a different form or system of language.
  • the optical radiation generated by each one of a plurality of optical illumination sources may encode information identifying the respective optical illumination source thereby to identify the spectrally distinct spectral band of optical wavelengths generated by that optical illumination source, or to allow the spectrally distinct spectral band to be identified e.g., by reference to pre-stored data mapping the encoding to a particular spectral band.
  • the encoded information thereby permits an optical source to be distinguished from other optical sources amongst the plurality of optical illumination sources.
  • the step of decoding may comprise comparing the reflected component of the illumination optical radiation received by an optical sensor to a pre-stored reference encoding or signal corresponding to an encoding applicable to the encoded optical radiation, and determining that the received encoding corresponds to the pre-stored encoding when the comparison indicates a sufficient degree of similarity therebetween.
  • the comparison may be performed by correlating or convolving the received encoding and the pre-stored encoding.
  • the sufficient degree of similarity may comprise a convolution or correlation value of sufficient magnitude.
  • the received encoding and the prestored encoding may each comprise a time varying signal, and the comparison may be made in respect of a time interval within a total duration of the received encoding.
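  A minimal sketch of this comparison, decoding by correlating the received intensity sequence against each pre-stored code; the source names, code words and intensity values are invented for illustration, and the mean-removal step (which makes the score insensitive to a constant ambient offset) is one possible design choice rather than the patent's prescribed method:

```python
# Decode which illumination source produced a received intensity
# sequence by correlating it against each pre-stored on/off code.

def correlate(received, code):
    """Mean-removed correlation: insensitive to constant offsets."""
    m_r = sum(received) / len(received)
    m_c = sum(code) / len(code)
    return sum((r - m_r) * (c - m_c) for r, c in zip(received, code))

codes = {"sourceA": [1, 0, 1, 1, 0, 0],   # hypothetical pre-stored codes
         "sourceB": [0, 1, 0, 0, 1, 1]}
received = [9.1, 4.0, 8.8, 9.3, 4.2, 3.9]  # looks like sourceA + ambient

scores = {name: correlate(received, c) for name, c in codes.items()}
best = max(scores, key=scores.get)
print(best)  # -> sourceA
```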
  • references herein to “monochrome” include a reference to a monochrome sensor configured to capture incoming optical radiation regardless of optical wavelength (colour).
  • a colour-distinguishing sensor may be configured to generate a plurality of defined colour channels as output signals.
  • a monochrome optical sensor may be configured to generate one single monochrome channel as output.
  • a typical RGB colour sensor/camera chip with dimensions of 1000 pixels by 1000 pixels typically has a Bayer colour filter applied over the pixel array. This is typically arranged as a 2x2 square checkerboard array pattern of two green (G) pixels/filters to every one red (R) and one blue (B) pixel/filter.
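  As a quick illustration of the Bayer layout described above, the per-colour pixel counts for a sensor of the quoted size can be computed; the helper name is hypothetical:

```python
# Per-colour pixel counts for a Bayer-filtered sensor: each 2x2 tile
# holds one R, two G and one B filter, so green sites outnumber red
# or blue sites two to one.

def bayer_counts(width, height):
    tiles = (width // 2) * (height // 2)
    return {"R": tiles, "G": 2 * tiles, "B": tiles}

print(bayer_counts(1000, 1000))  # {'R': 250000, 'G': 500000, 'B': 250000}
```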
  • the monochrome sensor, as compared to a colour sensor, therefore receives more energy per pixel, since light is not required to pass through a band-pass (R, G or B) filter blocking some light at each sensor element.
  • the step of receiving optical radiation may comprise, by the optical sensor, generating an electrical signal corresponding to (i.e., in response to) the illumination optical radiation received thereby.
  • the step of decoding the encoded information may comprise decoding the encoded information within the electrical signal.
  • the electrical signal may comprise an electrical detection signal generated by the optical sensor.
  • the amplitude (e.g., signal intensity) of the electrical detection signal is preferably proportional to the amplitude (e.g., optical intensity) of the optical radiation received by an optical sensor. Consequently, most desirably, the time-varying modulation of the intensity of optical radiation generated by each one of a plurality of optical illumination sources encodes information that is preserved as a corresponding time-varying modulation of the amplitude of the corresponding electrical detection signal.
  • the electrical detection signal may comprise a time-series comprising a plurality of discrete successive electrical detection signals whereby relative amplitudes of the discrete electrical detection signals encode the information received from a given one of the plurality of optical illumination sources.
  • the time-series may form at least a part of a plurality of discrete successive electrical detection signals forming an image sequence (e.g., a video image sequence) comprising a time-series of images each one of which corresponds to a frame (e.g., a video frame) of the image sequence.
  • the electrical signal generated by the optical sensor may correspond to a single imaging pixel sensor, or a group of imaging pixel sensors (e.g., binned pixels), of an imaging sensor array (e.g., an imaging pixel sensor array of a camera).
  • the frame rate of the image sequence is preferably sufficiently high as to effectively mitigate the effects of motion of the target object, such that light received by the optical sensor (e.g., any one given imaging pixel sensor or any one binned group thereof) from the target object during the majority of, or substantially all of, the frames of the image sequence comprises light reflected from the same given location on the target object. This means that the reflection spectrum generated from the received light more faithfully represents the reflection characteristics of that given location.
  • the frame rate may be in the range from 1 frame per second to 100 frames per second, or more preferably from 5 frames per second to 100 frames per second, such as 50 frames per second for example.
  • references herein to a “reflection spectrum” include a reference to the spectrum obtained when incident electromagnetic radiation is selectively altered by a reflecting substance, in which the wavelength of the electromagnetic radiation is the independent variable.
  • the reflection spectrum may comprise a reflectance spectrum, or may comprise a reflectivity spectrum.
  • the terms “reflection spectrum” and “reflectance spectrum” may be used interchangeably unless the context requires otherwise.
  • the method may comprise detecting a composition of the object using the obtained reflection spectrum.
  • the detecting of a composition may comprise determining any one or more of the following items within the obtained reflection spectrum:
  • the detecting of a composition may comprise comparing one or more of the items (a) to (f) to corresponding reference items associated with a pre-stored reference composition and determining a composition of the object according to the comparison.
  • the method may comprise determining that the composition of the target object corresponds to a given pre-stored reference composition if, according to the comparison, the one or more of the items (a) to (f) are sufficiently similar to corresponding reference items associated with the given pre-stored reference composition.
  • Spectral un-mixing may be used to identify the abundance (and/or distribution) of chromophores within the target subject based on the optical reflection spectrum obtained, according to the invention.
  • One may identify an abundance A_i(x, y) for each chromophore i at specified location/coordinates (x, y) in the image of the target subject, where A_i(x, y) = f(x, y, λ) is a function both of the pixel location in the image of the target subject and of the wavelengths employed in the spectrum, and λ is a vector with vector elements corresponding to wavelengths representing the plurality of distinct spectral bands employed (e.g., representing the band-centre wavelengths).
  • the spectral un-mixing may determine the inverse mapping from the spectrum to the abundance of each chromophore.
  • a machine learning (ML) model may be implemented whereby after training the model on a sufficient number of spectrum samples, the model may classify the composition/abundance of individual chromophores based on the wavelengths representing the plurality of distinct spectral bands employed.
  • a deterministic approach may be used whereby a physics-based model of the light-matter interaction is employed. Inverting the physics-based model may provide a chromophore abundance map.
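  A linear un-mixing step of this deterministic kind can be sketched as a small least-squares inversion; the two chromophore spectra, the example measurement and the `unmix_two` helper are illustrative assumptions, not the patent's physics-based model:

```python
# Linear spectral un-mixing sketch: model the measured spectrum as a
# weighted sum of known chromophore spectra and recover the abundances
# by least squares (solved here via explicit 2x2 normal equations).

def unmix_two(spectrum, s1, s2):
    """Least-squares abundances (a1, a2) for spectrum ~= a1*s1 + a2*s2."""
    a11 = sum(x * x for x in s1)
    a12 = sum(x * y for x, y in zip(s1, s2))
    a22 = sum(y * y for y in s2)
    b1 = sum(x * m for x, m in zip(s1, spectrum))
    b2 = sum(y * m for y, m in zip(s2, spectrum))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

chromo_a = [1.0, 0.5, 0.1]       # hypothetical: falls off with wavelength
chromo_b = [0.1, 0.5, 1.0]       # hypothetical: rises with wavelength
measured = [0.75, 0.60, 0.57]    # constructed as 0.7*a + 0.5*b

print(unmix_two(measured, chromo_a, chromo_b))  # ~ (0.7, 0.5)
```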
  • the step of illuminating the target object with optical radiation generated by each one of a plurality of optical illumination sources may be performed during a plurality of separate respective time periods.
  • any one of (or each one of) the optical illumination sources may be operated to generate optical radiation only when none of the other optical illumination sources is concurrently operated to generate optical radiation.
  • the optical illumination sources may be operated to generate optical radiation so that sequential illumination of the target object by a sequence of spectrally distinct optical radiation is implemented.
  • the method may include, by the optical sensor during each respective time period, receiving optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources.
  • a coordination or synchronisation may be provided between the optical sensor and the sequential output of spectrally distinct optical radiation by the optical illumination sources. For example, this may ensure that the reflected component of the illuminating optical radiation received by the optical sensor from each respective one of the plurality of optical illumination sources, via reflection from the illuminated target object, comes uniquely from that respective one of the plurality of optical illumination sources in sequence.
  • the response of the optical sensor at any point in the sequence is thereby a response to a spectrally distinct illumination, as opposed to a superposition of multiple concurrent spectrally distinct illuminations, which may reduce the spectral resolution of the reflection spectrum.
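A minimal sketch of the time-multiplexed scheduling described in the bullets above; the source labels are purely illustrative placeholders:

```python
def sequential_schedule(sources, frames_per_source=1):
    """Build a time-slot schedule in which exactly one spectrally distinct
    source is active per slot, so each sensor exposure responds to a single
    band rather than a superposition of bands."""
    schedule = []
    for src in sources:
        schedule.extend([src] * frames_per_source)
    return schedule

# Three spectrally distinct sources, two exposures each, strictly in sequence.
slots = sequential_schedule(["450nm", "550nm", "650nm"], frames_per_source=2)
```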
  • the step of illuminating of the target object may comprise controlling each of the plurality of optical illumination sources to encode the information according to a respective pseudo-random code.
  • references herein to “pseudo-random code” include a reference to a pseudo-random binary sequence (PRBS), pseudo-random binary code or a pseudo-random bitstream.
  • the pseudo-random code may be a Maximum Length Sequence (MLS) code.
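An MLS of this kind can be generated with a linear-feedback shift register (LFSR). A sketch, assuming the primitive polynomial x^4 + x^3 + 1 (the taps are the polynomial exponents), which yields a binary sequence of period 2^4 − 1 = 15:

```python
def mls(n_bits, taps, seed=1):
    """One period (2**n_bits - 1 chips) of a maximum-length sequence from a
    Fibonacci LFSR. `taps` are the exponents of a primitive polynomial,
    e.g. x^4 + x^3 + 1 -> taps=(4, 3)."""
    state, out = seed, []
    for _ in range(2 ** n_bits - 1):
        out.append(state & 1)                      # emit the low bit as a chip
        fb = 0
        for t in taps:
            fb ^= (state >> (n_bits - t)) & 1      # XOR the tapped bits
        state = (state >> 1) | (fb << (n_bits - 1))
    return out

code = mls(n_bits=4, taps=(4, 3))   # 15-chip MLS containing 2**3 = 8 ones
```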
  • the encoding of the information may use any one of many different types of coding signal.
  • the encoding may be other than a binary sequence, such as, but not limited to, any one of: a chirp signal, a multi-level prime number sequence, or random noise (e.g., continuous) with amplitude levels anywhere between 0 and 1 (with 1 representing the maximum illumination strength).
  • a binary sequence is advantageous as it allows one to approximate the correlation operation as simple additions and subtractions, as discussed in more detail herein. This is explicitly because a binary sequence may employ a value zero (0) when illumination is not applied/activated, or one (1) when it is.
  • any image frame where the binary sequence is zero (0) can be discounted from correlation score calculations and one only need add image frames corresponding to the binary state of one (1) within the sequence. This greatly reduces the number of operations required in the calculation.
  • Other suitable pseudo-random codes include, but are not limited to: Gold codes, which may be useful if simultaneous light transmissions by spectrally distinct illumination sources are implemented; and Kasami sequences. In general, many types of encoding that allow unique codes to be applied may be used, whether binary or “continuous” in nature. However, as noted herein, there are advantages in using a binary signal (and in particular MLS codes) in terms of enhanced noise rejection and the ability to differentiate between codes.
  • the step of illuminating the target object with optical radiation generated by each one of a plurality of optical illumination sources may be performed simultaneously during a common time period, wherein the respective pseudo-random codes are mutually substantially orthogonal.
  • Two orthogonal pseudo-random codes may be such that the overlap integral of the two pseudo-random codes is negligibly small (e.g., practically zero). For example, given a first pseudo-random code C1(t) defining a first modulation varying as a function of time, t, and a second pseudo-random code C2(t) defining a second modulation varying as a function of time, t, then these two pseudo-random codes are formally orthogonal if they satisfy the condition that: ∫ C1(t)·C2(t) dt = 0, where the integral is taken over the common duration of the two codes.
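This orthogonality condition can be checked numerically. The sketch below uses ±1 Walsh (Hadamard) codes purely as a stand-in for mutually orthogonal codes, since their pairwise overlap sums are exactly zero:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order 2**n; its rows are
    mutually orthogonal +/-1 codes (Walsh codes)."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(3)                  # eight orthogonal codes, each eight chips long
c1, c2 = H[1], H[2]
overlap = int(np.dot(c1, c2))    # discrete analogue of the overlap integral
```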
  • the step of decoding may comprise correlating or convolving the reflected component of the illumination optical radiation received by an optical sensor, to a pre-stored reference encoding or signal corresponding to an encoding (e.g., pseudo-random code) applicable to the encoded optical radiation, and determining that the received encoding corresponds to the pre-stored encoding when the correlation or convolution indicates a sufficient degree of similarity therebetween.
  • the sufficient degree of similarity may comprise a convolution or correlation value of sufficient magnitude.
  • the orthogonality of pseudo-random codes means that when the reflected component of the illumination optical radiation received by an optical sensor contains encodings according to two or more orthogonal pseudo-random codes, the correlation/convolution of the received optical radiation (i.e., the sensor signal generated in response to that receipt) and any selected one of the same two or more orthogonal pseudorandom codes each representing a pre-stored reference encoding or signal, will produce non-negligible convolution or correlation values only in respect of those parts of the received optical radiation containing the selected reference signal.
  • the convolution, (C1 * C2)(t), may take the form: (C1 * C2)(t) = ∫ C1(τ)·C2(t − τ) dτ, where the integral is taken over the duration of the codes.
  • each one of the two or more orthogonal pseudo-random codes may be extracted (decoded) from within the received optical radiation independently of the presence of any of the other orthogonal pseudo-random codes within the received optical radiation.
  • such conditions of orthogonality represent ‘ideal’ cases and, in practice, the invention may employ these codes on condition that they are substantially orthogonal, or approximately orthogonal to a degree sufficient for practical implementation.
  • the step of decoding may comprise calculating a correlation value quantifying a correlation between the reflected component of the illuminating optical radiation received by the optical sensor (e.g., the electrical response signal of the optical sensor) and the pre-stored reference encoding corresponding to a respective one of the encodings within the optical radiation generated by each respective one of the plurality of optical illumination sources.
  • the correlation value may be a cross-correlation value.
  • the encoding may be a binary code (i.e., having values normalised as representing “1” (one) or “0” (zero)).
  • the calculating of a correlation value may comprise the steps of: (a) calculating an average value (e.g., arithmetic mean) of the electrical response signal of the optical sensor corresponding to receipt of the encoded code in full; (b) calculating a sum of only those parts of the electrical response signal corresponding to receipt of an encoded code value representing binary value “1” (one); and (c) subtracting the value of the average from the value of the sum thereby to provide the correlation value.
  • This approach for calculating a correlation value is an approximation to a Pearson correlation; it is applicable when using a binary encoding sequence and is computationally efficient.
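Steps (a) to (c) can be sketched as below. The baseline term follows the Pearson-numerator reading of the approximation (the mean response scaled by the number of code-1 frames), and the signal model a*code + ambient is purely synthetic:

```python
import numpy as np

def binary_correlation(responses, code):
    """Approximate Pearson correlation for a binary {0, 1} code: sum the
    frames where the code bit is 1, then subtract a mean-based baseline
    that absorbs ambient light and the code-0 frames."""
    responses = np.asarray(responses, dtype=float)
    code = np.asarray(code)
    ones_sum = responses[code == 1].sum()             # step (b)
    baseline = responses.mean() * (code == 1).sum()   # step (a), scaled per frame
    return ones_sum - baseline                        # step (c)

# Synthetic check: a reflected signal of strength a on top of ambient light.
rng = np.random.default_rng(0)
code = rng.integers(0, 2, size=64)
a, ambient = 2.5, 10.0
responses = a * code + ambient
score = binary_correlation(responses, code)
```

An ambient-only signal (no encoded component) scores zero, while the score grows linearly with the reflected intensity a.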
  • the optical sensor comprises a pixel array comprising a plurality of optical sensor pixels.
  • the method desirably includes forming upon the pixel array an image of the target object using the reflected components of the illumination optical radiation, and generating a plurality of the optical reflection spectra according to the optical radiation within the image as received by the respective optical sensor pixel.
  • each optical reflection spectrum of the plurality of optical reflection spectra corresponds to reflected components of the illumination optical radiation received by a respective one of the plurality of optical sensor pixels.
  • the invention may provide a method of multi-band imaging or hyper-spectral imaging comprising obtaining the image of the target object and the plurality of optical reflection spectra according to the invention in its first aspect, and associating each image pixel of the obtained image with a given optical reflection spectrum from amongst the obtained optical reflection spectra.
  • the image pixel corresponds to an optical sensor pixel associated with the given optical reflection spectrum, thereby to provide a multi-band image or a hyper-spectral image.
  • the time-varying modulation of the intensity of optical radiation may be configured to result in an intensity modulation that is substantially imperceptible to direct human sight.
  • the invention may provide an apparatus for obtaining an optical reflection spectrum for a target object, the apparatus comprising: an illuminator comprising a plurality of optical illumination sources configured for illuminating the target object with optical radiation, wherein each one of the plurality of optical illumination sources is spectrally distinct from each of the other optical illumination sources of the plurality of optical illumination sources, and wherein the illuminator is configured to encode information identifying a respective optical illumination source distinguishably from amongst the plurality of optical illumination sources by applying a time-varying modulation of the intensity of optical radiation generated thereby; an optical sensor configured to receive optical radiation from said illuminated target object comprising a reflected component of said illuminating optical radiation generated by each respective one of the plurality of optical illumination sources; a decoder for decoding the encoded information within the reflected component of the illumination optical radiation received by the optical sensor and for assigning an optical spectral band to the respective illumination optical radiation according to the decoded information; a spectrum generator for generating data describing an optical reflection spectrum for the target object according to the intensity of the decoded reflected components and their assigned optical spectral bands.
  • the optical sensor may be a monochrome optical sensor.
  • the optical sensor may be configured to generate an electrical signal in response to the received optical radiation therewith to represent the reflected component of the illuminating optical radiation received by the optical sensor as an electrical response signal.
  • the optical sensor, or the decoder may be configured to represent the electrical response signal as data (e.g., converted into digital data) describing the intensity of the reflected component of the illuminating optical radiation received by the optical sensor.
  • the decoder may be configured to perform the step of decoding using such digital data. Accordingly, a reflection spectrum may be obtained using the optical sensor in conjunction with the plurality of optical illumination sources. Each one of the optical illumination sources is configured to communicate to the optical sensor the spectrally distinct nature of the light generated by it when received by the optical sensor.
  • because the encoded optical radiation arrives at the optical sensor via reflection from the target object, the encoded information is conveyed concurrently with reflectance information regarding the target object.
  • the act of reflection of encoded light from the target object ‘imprints’ reflectance information into the encoded optical radiation, for receipt by the optical sensor.
  • the illuminator is configured such that each one of the plurality of optical illumination sources may be controlled to emit electromagnetic radiation having an optical wavelength that does not fall outside its respective spectral band.
  • the respective spectral bands of the plurality of optical illumination sources do not share any common optical wavelengths (i.e., do not overlap).
  • the illuminator is configured such that each one of a plurality of optical illumination sources may be controlled to generate optical radiation that encodes information identifying the respective optical illumination source thereby to identify the spectrally distinct spectral band of optical wavelengths generated by that optical illumination source, or to allow the spectrally distinct spectral band to be identified by the decoder e.g., by reference to pre-stored data mapping the encoding to a particular spectral band.
  • the encoded information thereby permits the decoder to distinguish an optical source from other optical sources amongst the plurality of optical illumination sources.
  • the decoder may be configured to decode by comparing the reflected component of the illumination optical radiation received by an optical sensor, to a pre-stored reference encoding or signal corresponding to an encoding applicable to the encoded optical radiation, and to determine that the received encoding corresponds to the pre-stored encoding when the comparison indicates a sufficient degree of similarity therebetween.
  • the decoder may be configured to perform the comparison by correlating or convolving the received encoding and the prestored encoding.
  • the sufficient degree of similarity may comprise a convolution or correlation value of sufficient magnitude.
  • the received encoding and the pre-stored encoding may each comprise a time varying signal, and the comparison may be made in respect of a time interval within a total duration of the received encoding.
  • the decoder may be configured to calculate a correlation value quantifying a correlation between the reflected component of the illuminating optical radiation received by the optical sensor (e.g., the electrical response signal of the optical sensor) and the pre-stored reference encoding corresponding to a respective one of the encodings within the optical radiation generated by each respective one of the plurality of optical illumination sources.
  • the correlation value may be a cross-correlation value.
  • the encoding may be a binary code (i.e., having values normalised as representing “1” (one) or “0” (zero)).
  • the decoder may be configured to calculate a correlation value by a process comprising the steps of: (a) calculating an average value (e.g., arithmetic mean) of the electrical response signal of the optical sensor corresponding to receipt of the encoded code in full; (b) calculating a sum of only those parts of the electrical response signal corresponding to receipt of an encoded code value representing binary value “1” (one); and (c) subtracting the value of the average from the value of the sum thereby to provide the correlation value.
  • This approach for calculating a correlation value is an approximation to a Pearson correlation; it is applicable when using a binary encoding sequence and is computationally efficient.
  • the optical sensor may be configured for receiving optical radiation by generating an electrical signal corresponding to (i.e., in response to) the illumination optical radiation received thereby.
  • the decoder may be configured for decoding the encoded information by decoding the encoded information within the electrical signal.
  • the electrical signal may comprise an electrical detection signal generated by the optical sensor.
  • the amplitude (e.g., signal intensity) of the electrical detection signal is preferably proportional to the amplitude (e.g., optical intensity) of the optical radiation received by an optical sensor.
  • the optical sensor may be configured to generate an electrical detection signal comprising a time-series comprising a plurality of discrete successive electrical detection signals whereby relative amplitudes of the discrete electrical detection signals encode the information received from a given one of the plurality of optical illumination sources.
  • the time-series may form at least a part of a plurality of discrete successive electrical detection signals forming an image sequence (e.g., a video image sequence) comprising a time-series of images each one of which corresponds to a frame (e.g., a video frame) of the image sequence.
  • the optical sensor may be configured such that the electrical signal generated thereby may correspond to a single imaging pixel sensor, or a group of imaging pixel sensors (e.g., binned pixels), of an imaging sensor array (e.g., an imaging pixel sensor array of a camera).
  • the apparatus may comprise an analyser configured for detecting a composition of the target object using the obtained reflection spectrum.
  • the analyser may be configured for detecting a composition by determining any one or more of the following items within the obtained reflection spectrum:
  • the analyser may be configured for detecting a composition by comparing one or more of the items (a) to (f) to corresponding reference items associated with a pre-stored reference composition and determining a composition of the object according to the comparison.
  • the analyser may be configured for detecting a composition by determining that the composition of the target object corresponds to a given pre-stored reference composition if, according to the comparison, the one or more of the items (a) to (f) are sufficiently similar to corresponding reference items associated with the given pre-stored reference composition.
  • the illuminator may be configured to illuminate the target object with optical radiation generated by each one of the plurality of optical illumination sources during a plurality of separate respective time periods.
  • the illuminator may be configured such that any one of (or each one of) the optical illumination sources is operated to generate optical radiation only when none of the other optical illumination sources is concurrently operated to generate optical radiation.
  • the illuminator may be configured such that the optical illumination sources are operated to generate optical radiation so that sequential illumination of the target object by a sequence of spectrally distinct optical radiation is implemented.
  • the optical sensor may be configured to receive, during each respective time period, optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources.
  • the apparatus may be configured to implement a coordination or synchronisation between the optical sensor and the sequential output of spectrally distinct optical radiation by the optical illumination sources.
  • the illuminator may be configured to illuminate the target object by controlling each of the plurality of optical illumination sources to encode said information according to a respective pseudo-random code.
  • the pseudo-random code may be a Maximum Length Sequence (MLS) code.
  • the illuminator may be configured to illuminate the target object with optical radiation generated by each one of the plurality of optical illumination sources simultaneously during a common time period, wherein the respective pseudo-random codes are mutually orthogonal.
  • the decoder may be configured to decode by: correlating or convolving the reflected component of the illumination optical radiation received by an optical sensor, to a pre-stored reference encoding or signal corresponding to an encoding (e.g., pseudo-random code) applicable to the encoded optical radiation; and determining that the received encoding corresponds to the pre-stored encoding when the correlation or convolution indicates a sufficient degree of similarity therebetween.
  • the sufficient degree of similarity may comprise a convolution or correlation value of sufficient magnitude.
  • the apparatus may comprise an optical image forming assembly wherein the optical sensor comprises a pixel array comprising a plurality of optical sensor pixels.
  • the image forming assembly may be configured to form upon the pixel array an image of the target object using the reflected components of the illumination optical radiation, wherein the spectrum generator is configured to generate data describing a plurality of the optical reflection spectra according to the optical radiation within the image as received by the respective optical sensor pixel.
  • each optical reflection spectrum of the plurality of optical reflection spectra corresponds to reflected components of the illumination optical radiation received by a respective one of the plurality of optical sensor pixels.
  • the invention may provide a multi-band imager or a hyper-spectral imager comprising an apparatus according to the invention in its second aspect configured to obtain the image of the target object and the data describing the plurality of optical reflection spectra, wherein the multi-band imager or hyper-spectral imager is configured to associate each image pixel of the obtained image with a given optical reflection spectrum from amongst the obtained plurality of data describing optical reflection spectra.
  • the image pixel corresponds to an optical sensor pixel associated with the given optical reflection spectrum, thereby to provide a multi-band image or a hyper-spectral image.
  • the illuminator may be configured such that the time-varying modulation of the intensity of optical radiation results in an intensity modulation that is substantially imperceptible to direct human sight.
  • the invention may provide a smartphone, or a smartphone attachment/accessory, or a handheld diagnostics/scanning device, or a wearable device such as a watch (e.g., a smartwatch or Fitbit or the like), comprising the apparatus according to any aspect described above.
  • the spectrum of the illumination source must be known in advance or calibrated before/during use. This is because the spectrum of light sensed/imaged by the multiband/hyperspectral sensing/imaging system depends not only on the absorption and reflection of the target subject, but also on the intensity of the illumination source and the ambient lighting at each of the wavelengths of interest.
  • the invention proposes a method of distinguishing the light received from the illumination source from the light received via background ambient light.
  • the proposed method includes modulating the amplitude of the light output by the illumination source with a specific temporal encoding and using that encoding to calculate the relative strength (e.g., or relative intensity) of the reflection of the light from the illumination source as received at an optical sensor (e.g., at each pixel within an image). Calibration may be performed using the known spectrum of the illumination source used.
  • the invention may provide a method for obtaining an optical reflection spectrum for a target object, the method comprising: illuminating the target object with optical radiation generated by each one of a plurality of optical illumination sources wherein each one of the plurality of optical illumination sources is spectrally distinct from each of the other optical illumination sources of the plurality of optical illumination sources; wherein optical radiation generated by each one of a plurality of optical illumination sources encodes a pseudo-random code according to a time-varying modulation of the intensity of optical radiation generated thereby; by an optical sensor, receiving optical radiation from said illuminated target object comprising a reflected component of said illuminating optical radiation generated by each respective one of the plurality of optical illumination sources; comparing the reflected component of the illuminating optical radiation received by the optical sensor to said pseudo-random code encoded within the optical radiation generated by each respective one of the plurality of optical illumination sources, and according to the comparison calculating a relative intensity of the reflected component of the illuminating optical radiation relative to the intensity of the received optical radiation from said illuminated target object.
  • the optical sensor may be a monochrome optical sensor.
  • the reflected component of the illuminating optical radiation received by the optical sensor may be represented as an electrical response signal generated by the optical sensor in response to the received optical radiation.
  • the electrical response signal may be represented as data (e.g., converted into digital data) describing the intensity of the reflected component of the illuminating optical radiation received by the optical sensor.
  • the step of comparing may be performed using such digital data.
  • a reflection spectrum may be obtained using either a colour-distinguishing optical sensor (e.g., configured to generate a plurality of defined colour channels as output signals) or a monochrome optical sensor (e.g., configured to generate one single monochrome channel as output) in conjunction with a plurality of optical illumination sources each configured to provide the optical sensor with spectrally distinct light for detection by reflection from the target object.
  • This light is encoded with information permitting the calculation of a relative intensity of the reflected component of the illuminating optical radiation relative to the overall intensity of the received optical radiation from the illuminated target object. That overall intensity includes within it a component of reflected ambient light that does not originate directly from any of the plurality of optical illumination sources.
  • a colour-distinguishing optical sensor may be used or a monochrome optical sensor may be used. If a colour-distinguishing optical sensor is used, then a value of the average (e.g., arithmetic average value, or Root Mean Square value) of all of the multiple concurrent colour channel output signal values (e.g., RGB channels or other colour space) may be used to represent a measurement of the received optical radiation at any given point in time (e.g., at any given sensor exposure or frame).
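The channel-collapsing option above can be sketched as follows; the pixel values are arbitrary placeholders:

```python
import numpy as np

def frame_value(channels, method="mean"):
    """Collapse the concurrent colour-channel values of one exposure into a
    single scalar measurement: arithmetic mean or root mean square (RMS)."""
    c = np.asarray(channels, dtype=float)
    if method == "mean":
        return c.mean(axis=-1)
    return np.sqrt((c ** 2).mean(axis=-1))

pixel = [0.2, 0.4, 0.6]               # hypothetical R, G, B channel outputs
m = frame_value(pixel)                # arithmetic mean -> 0.4
r = frame_value(pixel, method="rms")  # RMS of the three channels
```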
  • references herein to “optical” include a reference to electromagnetic radiation preferably in the wavelength range of about 0.3 µm to about 3.0 µm, or more preferably in the wavelength range of about 0.4 µm to about 4.0 µm, or yet more preferably in the wavelength range of about 0.4 µm to about 2.0 µm, or even more preferably in the wavelength range of about 0.4 µm to about 1.0 µm.
  • references to “spectrally distinct” as between a plurality of optical illumination sources include a reference to optical illumination sources configured to emit electromagnetic radiation spanning a respective one of a plurality of spectral bands of optical wavelengths, in which each spectral band includes a range of optical wavelengths not included in the range of optical wavelengths spanned by any of the other spectral bands of the plurality of spectral bands.
  • each one of the plurality of optical illumination sources is configured to emit electromagnetic radiation having an optical wavelength that does not fall outside its respective spectral band.
  • the respective spectral bands of the plurality of optical illumination sources do not share any common optical wavelengths (i.e., do not overlap).
  • references herein to “encode” include a reference to generating a signal according to an algorithm or code, and include a reference to expressing information via a code, or to expressing information in a different form or system of language.
  • references herein to “pseudo-random code” include a reference to a pseudo-random binary sequence (PRBS), pseudo-random binary code or a pseudo-random bitstream.
  • references herein to “monochrome” include a reference to a monochrome sensor configured to capture incoming optical radiation regardless of optical wavelength (colour). Such a sensor may be configured to generate one single monochrome channel as output.
  • references herein to “reflection spectrum” include a reference to the spectrum obtained when incident electromagnetic radiation is selectively altered by a reflecting substance, in which the wavelength of the electromagnetic radiation is the independent variable.
  • the reflection spectrum may comprise a reflectance spectrum, or may comprise a reflectivity spectrum.
  • the method may comprise detecting a composition of the subject using the obtained data describing an optical reflection spectrum.
  • the detecting of a composition may comprise determining any one or more of the following items within the obtained reflection spectrum:
  • the detecting of a composition may comprise comparing one or more of the items (a) to (f) to corresponding reference items associated with a pre-stored reference composition and determining a composition of the object according to the comparison.
  • the method may comprise determining that the composition of the target object corresponds to a given pre-stored reference composition if, according to the comparison, the one or more of the items (a) to (f) are sufficiently similar to corresponding reference items associated with the given pre-stored reference composition.
  • the step of comparing may comprise calculating a correlation value quantifying a correlation between the reflected component of the illuminating optical radiation received by the optical sensor (e.g., the electrical response signal of the optical sensor) and the pseudo-random code encoded within the optical radiation generated by each respective one of the plurality of optical illumination sources.
  • the correlation value may be a cross-correlation value.
  • the pseudo-random code may be a binary code (i.e., having values normalised as representing “1” (one) or “0” (zero)).
  • the calculating of a correlation value may comprise the steps of: (a) calculating an average value (e.g., arithmetic mean) of the electrical response signal of the optical sensor corresponding to receipt of the encoded pseudo-random code in full; (b) calculating a sum of only those parts of the electrical response signal corresponding to receipt of an encoded pseudo-random code value representing binary value “1” (one); and (c) subtracting the value of the average from the value of the sum thereby to provide the correlation value.
  • This approach for calculating a correlation value is an approximation to a Pearson correlation; it is applicable when using a binary encoding sequence and is computationally efficient.
  • the step of comparing may comprise determining a correlation peak value (e.g., r_peak, as noted below).
  • the correlation peak value may be a peak value of a Pearson correlation.
  • the correlation peak value may be linearly related to an intensity, a, of received optical radiation from the illuminated target object (i.e., the reflected signal).
  • the correlation peak value may be directly linearly proportional to an intensity, a, of received optical radiation (e.g., r_peak ∝ a).
  • the correlation peak value may be proportional to the product of an intensity, a, of received optical radiation and a scale factor determined according to a length of the encoded pseudo-random code (e.g., an MLS sequence length, L).
  • the correlation peak value may be divided by the value of the scale factor to provide a value representing an intensity of received optical radiation from the illuminated target object (i.e., the reflected signal).
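A sketch of recovering the intensity a by dividing the correlation peak by a code-dependent scale factor. Here the factor n1(N − n1)/N follows from the Pearson-numerator reading of the binary-code approximation described earlier, and the signal model is purely synthetic:

```python
import numpy as np

def estimate_intensity(responses, code):
    """Divide the correlation peak by a scale factor set by the code length
    and weight, recovering the reflected-signal intensity a."""
    responses = np.asarray(responses, dtype=float)
    code = np.asarray(code)
    n1, N = int(code.sum()), code.size
    peak = responses[code == 1].sum() - responses.mean() * n1
    scale = n1 * (N - n1) / N       # depends only on the code, not on the scene
    return peak / scale

rng = np.random.default_rng(1)
code = rng.integers(0, 2, size=127)          # 127-chip binary code
responses = 3.2 * code + 7.0                 # reflected intensity a = 3.2, plus ambient
a_hat = estimate_intensity(responses, code)  # recovers approximately 3.2
```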
  • from the correlation peak value (e.g., r_peak, as noted below) within the correlation data, one is able to extract a value related to an intensity, a, of received optical radiation from the illuminated target object, which is proportional to the reflection coefficient of the target.
  • the correlation peak value may take the form of an impulse response function of the target subject.
  • the step of calculating a relative intensity may comprise determining a correlation peak value as described herein, or may comprise determining an intensity according to a correlation peak value, as described herein.
  • the step of comparing may comprise calculating a deconvolution configured to retrieve from within the illuminating optical radiation received by the optical sensor (e.g., the electrical response signal of the optical sensor), the relative intensity of the reflected component of the illuminating optical radiation originating from the plurality of optical illumination sources.
  • the invention may provide a relative intensity value, reflectance value, spectrum, or level. This may comprise relative values that are relative with respect to:
  • the step of comparing may comprise, according to the comparison, calculating a relative intensity of ambient optical radiation relative to the intensity of the received optical radiation from the illuminated target object, and therewith calculating the relative intensity of the reflected component of the illuminating optical radiation relative to the intensity of the received optical radiation from said illuminated target object.
  • the step of decoding/comparing may comprise aligning in time the time variations of the (pseudorandom) code with corresponding time-variations of modulation of the intensity of optical radiation received by the optical sensor.
  • the optical sensor may be configured to receive optical radiation as a plurality of successive discrete exposures (e.g., frames) occurring at a pre-set frequency or rate (e.g., frame rate) thereby to provide a temporal sequence of exposures.
  • the number of exposures (e.g., frames) within the temporal sequence is sufficient that the duration of the temporal sequence fully contains the encoded pseudo-random code present within the received optical radiation.
  • the number of exposures within the temporal sequence exceeds the number of samples within the (pseudorandom) code, such that the number of exposures via which the full encoding is captured exceeds the full code length (i.e., total number/length of code samples) of the code.
  • the number of exposures defining the temporal sequence exceeds the full code length by a factor of at least two (2), or at least five (5), or between five (5) and ten (10).
  • the duration of time between successive samples of the (pseudo-random) code exceeds the exposure time of each exposure of the optical sensor by a factor of at least two (2), or at least five (5), or between five (5) and ten (10).
  • discrete exposures may be provided by a shutter placed before the optical sensor(s) and by controlling the shutter operation appropriately.
  • the operation of the shutter may be synchronised in time with each bit in the pseudo-random code bit stream. In this case one need not oversample, as the synchronisation (time alignment) may ensure that the apparatus captures the information needed.
  • time alignment may be performed after capturing the information (i.e., in post processing).
  • the number of exposures captured by the optical sensor(s) e.g., video capture exposures
  • the step of illuminating the target object with optical radiation generated by each one of a plurality of optical illumination sources may be performed during a plurality of separate respective time periods.
  • any one of (or each one of) the optical illumination sources may be operated to generate optical radiation only when none of the other optical illumination sources is concurrently operated to generate optical radiation.
  • the optical illumination sources may be operated to generate optical radiation so that sequential illumination of the target object by a sequence of spectrally distinct optical radiation is implemented.
  • the method may include, by the optical sensor during each respective time period, receiving optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources.
  • a coordination or synchronisation may be provided between the optical sensor and the sequential output of spectrally distinct optical radiation by the optical illumination sources. For example, this may ensure that the reflected component of the illuminating optical radiation received by the optical sensor from each respective one of the plurality of optical illumination sources, via reflection from the illuminated target object, comes uniquely from that respective one of the plurality of optical illumination sources in sequence.
  • the response of the optical sensor at any point in the sequence is thereby a response to a spectrally distinct illumination, as opposed to a superposition of multiple concurrent spectrally distinct illuminations, which may reduce the spectral resolution of the reflection spectrum.
  • the step of illuminating of the target object may comprise controlling each of the plurality of optical illumination sources to encode the information according to a respective pseudo-random code.
  • the pseudo-random code may be a Maximum Length Sequence (MLS) code.
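An MLS code may be generated by a linear-feedback shift register; the following is a non-limiting illustrative sketch (Python), in which the register width and tap set (corresponding to the primitive polynomial x^4 + x^3 + 1) are illustrative assumptions:

```python
def lfsr_mls(n, taps):
    """Maximum Length Sequence via a Fibonacci linear-feedback shift
    register. taps index the register positions (1-based); the tap set
    must correspond to a primitive polynomial for the sequence to be
    maximal (length 2**n - 1)."""
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])          # output the last register bit
        fb = 0
        for t in taps:                 # feedback = XOR of tapped bits
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift the feedback bit in
    return seq

seq = lfsr_mls(4, (4, 3))
# Maximal sequence: 2^4 - 1 = 15 samples, nearly balanced in "1"s/"0"s,
# and no shorter sub-period (all 15 circular shifts are distinct).
```

The balance and shift-distinctness checked below are the properties that give an MLS its sharp autocorrelation peak, which the decoding steps described herein exploit.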
  • the step of illuminating the target object with optical radiation generated by each one of a plurality of optical illumination sources may be performed simultaneously during a common time period, wherein the respective pseudo-random codes are mutually substantially orthogonal.
  • Two orthogonal pseudo-random codes may be such that the overlap integral of the two pseudo-random codes is negligibly small (e.g., practically zero).
  • a first pseudo-random code C1(t) defining a first modulation varying as a function of time, t
  • a second pseudo-random code C2(t) defining a second modulation varying as a function of time, t
  • the step of illuminating the target object may be such that the pseudo-random code encoded in the optical radiation generated by any one of the plurality of optical illumination sources is different (i.e., convey a different modulation or encoding) from the pseudo-random code encoded in the optical radiation generated by each of the other optical illumination sources of the plurality of optical illumination sources.
  • the step of illuminating the target object may be such that the pseudo-random code encoded in the optical radiation generated by each respective one of a plurality of optical illumination sources encodes information identifying the respective optical illumination source distinguishably from amongst the plurality of optical illumination sources.
  • the method may include decoding the encoded information within the reflected component of the illumination optical radiation received by an optical sensor and assigning an optical spectral band to the respective illumination optical radiation according to the decoded information.
  • references to "decode" include recovering information that has been encoded via a code, or recovering information that has been encoded in a different form or system of language.
  • the method may comprise generating data describing an optical reflection spectrum according to the reflected components of the illumination optical radiation received by the optical sensor from the plurality of optical illumination sources, and according to the respective assigned spectral bands.
  • a reflection spectrum may be obtained using an optical sensor in conjunction with a plurality of optical illumination sources each configured to communicate to the optical sensor the spectrally distinct nature of the light generated by it when received by the optical sensor.
  • This encoded information is conveyed concurrently with reflectance information regarding the target object because the encoded optical radiation arrives at the optical sensor via reflection from the target object.
  • the act of reflection of encoded light from the target object ‘imprints’ reflectance information into the encoded optical radiation, for receipt by the optical sensor.
  • the optical radiation generated by each one of a plurality of optical illumination sources may encode information identifying the respective optical illumination source thereby to identify the spectrally distinct spectral band of optical wavelengths generated by that optical illumination source, or to allow the spectrally distinct spectral band to be identified e.g., by reference to pre-stored data mapping the encoding to a particular spectral band.
  • the encoded information thereby permits an optical source to be distinguished from other optical sources amongst the plurality of optical illumination sources.
  • the step of decoding may comprise comparing the reflected component of the illumination optical radiation received by an optical sensor, to a pre-stored reference encoding or signal corresponding to an encoding applicable to the encoded optical radiation, and determining that the received encoding corresponds to the pre-stored encoding when the comparison indicates a sufficient degree of similarity therebetween.
  • the comparison may be performed by correlating or convolving the received encoding and the pre-stored encoding.
  • the sufficient degree of similarity may comprise a convolution or correlation value of sufficient magnitude.
  • the received encoding and the pre-stored encoding may each comprise a time varying signal, and the comparison may be made in respect of a time interval within a total duration of the received encoding.
  • the step of decoding may comprise correlating or convolving the reflected component of the illumination optical radiation received by an optical sensor, to a pre-stored reference encoding or signal corresponding to an encoding (e.g., pseudo-random code) applicable to the encoded optical radiation, and determining that the received encoding corresponds to the pre-stored encoding when the correlation or convolution indicates a sufficient degree of similarity therebetween.
  • the sufficient degree of similarity may comprise a convolution or correlation value of sufficient magnitude.
  • the orthogonality of pseudo-random codes means that when the reflected component of the illumination optical radiation received by an optical sensor contains encodings according to two or more orthogonal pseudo-random codes, the correlation/convolution of the received optical radiation (i.e., the sensor signal generated in response to that receipt) and any selected one of the same two or more orthogonal pseudorandom codes each representing a pre-stored reference encoding or signal, will produce non-negligible convolution or correlation values only in respect of those parts of the received optical radiation containing the selected reference signal.
  • the convolution, (C1 * C2)(t), may take the form: (C1 * C2)(t) = ∫ C1(τ) C2(t − τ) dτ ≈ 0 for all times t, when the codes C1 and C2 are mutually orthogonal.
  • each one of the two or more orthogonal pseudo-random codes may be extracted (decoded) from within the received optical radiation independently of the presence of any of the other orthogonal pseudo-random codes within the received optical radiation.
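This independent extraction may be sketched as follows (Python). For illustration only, the two "orthogonal" codes are taken as two circular shifts of one MLS, which behave as mutually orthogonal under a bipolar reference when the receiver is time-aligned; the intensities a1 and a2 are illustrative assumptions and ambient light is omitted:

```python
def lfsr_mls(n, taps):
    # Unipolar Maximum Length Sequence of length 2**n - 1 (Fibonacci LFSR)
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

base = lfsr_mls(4, (4, 3))
c1 = base                                 # code for a first source
c2 = base[5:] + base[:5]                  # code for a second source (shift)

a1, a2 = 2.0, 5.0                         # per-source reflected intensities
received = [a1 * x + a2 * y for x, y in zip(c1, c2)]   # superposed at sensor

def decode(received, code):
    # Correlate against a bipolar (+1/-1) reference; for distinct shifts
    # of an MLS the cross term cancels exactly, so each intensity is
    # recovered independently of the other encoding.
    ref = [2 * c - 1 for c in code]
    return sum(r, s) if False else sum(r * s for r, s in zip(received, ref)) / sum(code)

a1_est = decode(received, c1)   # -> 2.0
a2_est = decode(received, c2)   # -> 5.0
```

Each decoded value depends only on the matching code, illustrating that the correlation produces non-negligible values only for those parts of the received radiation containing the selected reference.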
  • the optical sensor may comprise a pixel array comprising a plurality of optical sensor pixels, the method including forming upon the pixel array an image of the target object using said reflected components of the illumination optical radiation, and generating data describing a plurality of the optical reflection spectra according to the optical radiation within the image as received by the respective optical sensor pixel, wherein each optical reflection spectrum of the plurality of optical reflection spectra corresponds to reflected components of the illumination optical radiation received by a respective one of the plurality of optical sensor pixels.
  • the invention may provide a method of hyper spectral imaging comprising, obtaining said image of the target object and said data describing a plurality of optical reflection spectra according to the method described above, and associating each image pixel of the obtained image with a given optical reflection spectrum from amongst the obtained optical reflection spectra, wherein the image pixel corresponds to an optical sensor pixel associated with the given optical reflection spectrum, thereby to provide a hyper-spectral image.
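The per-pixel construction of reflection spectra may be sketched as follows (Python); the band labels, per-pixel reflectances, and use of MLS shifts as the two codes are illustrative assumptions, with each pixel's frame sequence assumed time-aligned with the codes:

```python
def lfsr_mls(n, taps):
    # Unipolar Maximum Length Sequence of length 2**n - 1 (Fibonacci LFSR)
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

base = lfsr_mls(4, (4, 3))
codes = {"band_A": base, "band_B": base[7:] + base[:7]}   # hypothetical bands

# Hypothetical per-pixel reflectances for a two-pixel "image"
truth = [{"band_A": 0.25, "band_B": 0.75},
         {"band_A": 1.00, "band_B": 0.50}]

# Each pixel's frame sequence: superposition of its band reflections
frames = [[sum(r[b] * codes[b][t] for b in codes) for t in range(15)]
          for r in truth]

def spectrum(pixel_frames):
    # Decode one reflectance value per spectral band for one pixel
    out = {}
    for band, code in codes.items():
        ref = [2 * c - 1 for c in code]
        out[band] = sum(f * s for f, s in zip(pixel_frames, ref)) / sum(code)
    return out

cube = [spectrum(p) for p in frames]   # one reflection spectrum per pixel
```

The list `cube` plays the role of the hyper-spectral data: one reflection spectrum per optical sensor pixel, associated with the corresponding image pixel.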
  • the time-varying modulation of the intensity of optical radiation may be configured such that it results in an intensity modulation that is substantially imperceptible to direct human sight.
  • the invention may provide an apparatus for obtaining an optical reflection spectrum for a target object, the apparatus comprising: an illuminator comprising a plurality of optical illumination sources configured for illuminating the target object with optical radiation, wherein each one of the plurality of optical illumination sources is spectrally distinct from each of the other optical illumination sources of the plurality of optical illumination sources, and wherein the illuminator is configured to encode a pseudo-random code according to a time-varying modulation of the intensity of optical radiation generated thereby; an optical sensor configured to receive optical radiation from said illuminated target object comprising a reflected component of said illuminating optical radiation generated by each respective one of the plurality of optical illumination sources; a comparison unit configured to: compare the reflected component of the illuminating optical radiation received by the optical sensor to said pseudo-random code encoded within the optical radiation generated by each respective one of the plurality of optical illumination sources, and according to the comparison, to calculate a relative intensity of said reflected component of said illuminating optical radiation relative to the intensity of the received optical radiation from said illuminated target object; and a spectrum generator configured to generate data describing an optical reflection spectrum according to the calculated relative intensities.
  • the optical sensor may be a monochrome optical sensor.
  • the optical sensor may be configured to generate an electrical signal in response to the received optical radiation therewith to represent the reflected component of the illuminating optical radiation received by the optical sensor as an electrical response signal.
  • the optical sensor, or the comparison unit may be configured to represent the electrical response signal as data (e.g., converted into digital data) describing the intensity of the reflected component of the illuminating optical radiation received by the optical sensor.
  • the comparison unit may be configured to perform the step of comparing using such digital data.
  • the apparatus may comprise an analyser configured for detecting a composition of the subject using the obtained data describing an optical reflection spectrum.
  • the comparison unit may be configured for calculating a correlation value quantifying a correlation between the reflected component of the illuminating optical radiation received by the optical sensor to the pseudo-random code encoded within the optical radiation generated by each respective one of the plurality of optical illumination sources.
  • the comparison unit may be configured for calculating a deconvolution configured to retrieve from within the illuminating optical radiation received by the optical sensor, the relative intensity of the reflected component of the illuminating optical radiation originating from the plurality of optical illumination sources.
  • the comparison unit may be configured for, according to the comparison, calculating a relative intensity of ambient optical radiation relative to the intensity of the received optical radiation from the illuminated target object, and therewith calculating the relative intensity of the reflected component of the illuminating optical radiation relative to the intensity of the received optical radiation from the illuminated target object.
  • the illuminator may be configured to illuminate the target object with optical radiation generated by each one of the plurality of optical illumination sources simultaneously during a common time period, wherein the respective pseudo-random codes are mutually orthogonal.
  • the pseudo-random code may be a Maximum Length Sequence (MLS) code.
  • the comparison unit may be configured such that the step of comparing may comprise determining a correlation peak value (e.g., r_peak, as noted below).
  • the correlation peak value may be a peak value of a Pearson correlation.
  • the correlation peak value may be linearly related to an intensity, a, of received optical radiation from the illuminated target object (i.e., the reflected signal).
  • the correlation peak value may be directly linearly proportional to an intensity, a, of received optical radiation (e.g., r_peak ∝ a).
  • the correlation peak value may be linearly proportional to the product of an intensity, a, of received optical radiation and proportional to a scale factor determined according to a length of the encoded pseudo-random code (e.g., an MLS sequence length, L).
  • the correlation peak value may be divided by the value of the scale factor to provide a value representing an intensity of received optical radiation from the illuminated target object (i.e., the reflected signal).
  • the comparison unit may be configured to extract a value related to an intensity, a, of received optical radiation from the illuminated target object which is proportional to the reflection coefficient of the target.
  • the correlation peak value may take the form of an impulse response function of the target subject.
  • the comparison unit may be configured to perform the step of calculating a relative intensity by determining a correlation peak value as described herein, or by determining an intensity according to a correlation peak value, as described herein.
  • the apparatus may comprise an optical image forming assembly, wherein the optical sensor comprises a pixel array comprising a plurality of optical sensor pixels, and the image forming assembly is configured to form upon the pixel array an image of the target object using said reflected components of the illumination optical radiation, wherein the spectrum generator is configured to generate data describing a plurality of the optical reflection spectra according to the optical radiation within the image as received by the respective optical sensor pixel, wherein each optical reflection spectrum of the plurality of optical reflection spectra corresponds to reflected components of the illumination optical radiation received by a respective one of the plurality of optical sensor pixels.
  • the invention may provide a hyper-spectral imager comprising, an apparatus according to the fourth aspect of the invention, configured to obtain the image of the target object and the data describing the plurality of optical reflection spectra, wherein the hyper-spectral imager is configured to associate each image pixel of the obtained image with a given optical reflection spectrum from amongst the obtained plurality of data describing optical reflection spectra, wherein the image pixel corresponds to an optical sensor pixel associated with the given optical reflection spectrum, thereby to provide a hyper-spectral image.
  • the illuminator may be configured such that the time-varying modulation of the intensity of optical radiation results in an intensity modulation that is substantially imperceptible to direct human sight.
  • the invention may provide a smartphone, or an accessory for a smartphone, or a hand-held diagnostic device, or a wearable device (e.g., a smartwatch) comprising an apparatus as disclosed herein.
  • the invention includes the combination of the aspects and preferred features described except where such a combination is clearly impermissible or expressly avoided.
  • Figure 1 illustrates a hyper-spectral data cube and a reflection spectrum therefrom.
  • Figure 2 illustrates a schematic representation of a prior art methodology for removing the contribution of ambient light to an image.
  • Figure 3a schematically illustrates an apparatus configured for obtaining an optical reflection spectrum for a target object, in use.
  • Figure 3b schematically illustrates an example of data generated by the example shown in Fig. 3a, in use.
  • Figure 3c schematically illustrates an example of the cross-correlation of a pseudo-random sequence with data generated in the example shown in Fig. 3b.
  • Figure 4a shows an example of a Maximum Length Sequence.
  • Figure 4b shows an example of a noisy reflection signal corresponding to reflected light containing the Maximum Length Sequence of Fig. 4a.
  • Figure 4c shows an example of the result of a cross-correlation between the noisy reflection signal of Fig. 4b and the Maximum Length Sequence within it as shown in Fig. 4a.
  • Figure 5a schematically illustrates an apparatus configured for obtaining an optical reflection spectrum for a target object.
  • Figure 5b schematically illustrates a method for obtaining an optical reflection spectrum for a target object.
  • Figure 5c schematically illustrates an apparatus configured for obtaining an optical reflection spectrum for a target object.
  • Figure 5d schematically illustrates a method for obtaining an optical reflection spectrum for a target object.
  • Figure 5e schematically illustrates an apparatus configured for obtaining an optical reflection spectrum for a target object.
  • Figure 5f schematically illustrates an optical emission spectrum comprising a plurality of spectrally distinct emission spectral parts each separately temporally modulated in intensity according to a unique MLS code.
  • Figure 5g schematically illustrates several temporal modulations, of differing modulation depth, in the intensity of the optical output of an illumination source according to the same MLS code.
  • Figure 6a, 6b and 6c show images of: (a) a target subject when illuminated with ambient light; (b) the target subject when illuminated with illumination light of wavelengths confined to a first selected spectral band (centred on red light) and modulated in intensity according to an MLS code; (c) the cross-correlation of the image of Figure 6(b) and the MLS code.
  • Figure 7a, 7b and 7c show images of: (a) a target subject when illuminated with ambient light; (b) the target subject when illuminated with illumination light of wavelengths confined to a second selected spectral band (centred on blue light) and modulated in intensity according to an MLS code; (c) the cross-correlation of the image of Figure 7(b) and the MLS code.
  • Figure 8a and 8b show images of: (a) a target subject with a first (bad choice) region of interest identified; (b) the target subject with a second (good choice) region of interest identified differing from the first region of interest.
  • Figure 9a and 9b show graphs of: (a) a sequence of optical image sensor signals corresponding to a sequence of image frames generated by an optical sensor in respect of intensity-modulated (MLS- encoded) optical radiation reflected from the second (good choice) region of interest; (b) a sequence of optical image sensor signals corresponding to a sequence of image frames generated by an optical sensor in respect of intensity-modulated (MLS- encoded) optical radiation reflected from the first (bad choice) region of interest.
  • Figure 9c shows a single threshold applied to the sequence of optical image sensor signals from the sequence of Figure 9a for use in distinguishing between image frames of an image sequence which capture an image of the target subject when illuminated by higher-intensity portions of the intensity- modulated (MLS- encoded) optical radiation and image frames of the image sequence which capture an image of the target subject when illuminated by lower-intensity portions of the intensity-modulated (MLS- encoded) optical radiation.
  • Figure 9d shows two concurrent thresholds applied to the sequence of optical image sensor signals from the sequence of Figure 9a for use in distinguishing between image frames of an image sequence which capture an image of the target subject when illuminated by higher-intensity portions of the intensity- modulated (MLS- encoded) optical radiation and image frames of the image sequence which capture an image of the target subject when illuminated by lower-intensity portions of the intensity-modulated (MLS- encoded) optical radiation.
  • Figure 10 shows a graph of the intensity of the sequence of optical image sensor signals from the sequence of Figure 9a in which the signal intensities are re-arranged into a sequence of progressively increasing intensity (from left to right).
  • Figure 11a and 11b show a sequence of optical image sensor signals corresponding to a sequence of image frames generated by an optical sensor according to an imaging sensor frame rate that produces a first number of frames over the duration of an optical illumination of an imaged target subject in which the modulation comprises a second number of illumination modulation samples, which is different to the first number.
  • Figure 12a and 12b show a sequence of optical image sensor signals taken from Figure 11a.
  • Figure 13 shows a flow chart according to a process for generating a reflection spectrum for a target subject.
  • FIG. 3a schematically illustrates an apparatus configured for obtaining an optical reflection spectrum for a target object, in use.
  • the apparatus comprises a mirror unit 50 for personal use (e.g., domestic or industrial/medical settings) by a person 12.
  • the mirror unit comprises a reflector part 10 presenting outwardly a reflective surface via which a user 12 may regard themselves, and a frame part 8 upon which the reflector part is mounted.
  • An illuminator part 9 is housed in/on the frame part 8 adjacent to the reflector part such that both the reflector part and the illuminator part may be visible simultaneously to a user, whereby the user may be illuminated by illumination radiation 11 generated by the illuminator while the user regards themselves via the reflector part.
  • the illuminator 9 preferably comprises a plurality of spectrally distinct light sources (e.g., LEDs) each of which is configured to emit light with an intensity temporally modulated (i.e., encoded) with a pseudo-random sequence such as an MLS sequence.
  • An optical sensor 58 such as a camera, is positioned upon the frame part adjacent to the reflector part such that the user may be illuminated by illumination radiation generated by the illuminator while a user regards themselves via the reflector part and simultaneously the optical sensor 58 may receive a reflected component of the illumination light from the user’s face.
  • each of the reflector part, the illuminator part and the camera may be visible simultaneously to a user.
  • a control unit (not shown) and a spectrum generator unit (not shown) are housed within the frame part 8 for controlling the encoding of the illumination radiation 11 generated by the illuminator and for generating data representing an optical reflection spectrum using data generated by the optical sensor 58 in response to receiving a reflected component of encoded illumination radiation 11 that has been reflected from the face of the user 12.
  • Figure 3b schematically illustrates an example of data generated by the apparatus 50 shown in Fig. 3a, in use, in which the optical sensor 58 is a camera with a 2D imaging pixel sensor array and an image forming assembly (e.g., focussing lenses) configured to receive light from the target subject 12 and to form an image of the target subject at an image plane coinciding with the surface plane of the 2D imaging pixel sensor array.
  • the optical sensor is configured to generate a temporal sequence of successive image frames 13 of the target subject 12, at a given image frame rate.
  • each individual imaging pixel sensor within the 2D imaging pixel sensor array produces a corresponding temporal sequence of successive sensor output values 14.
  • This corresponding temporal sequence corresponds to the intensity of illumination light received by that individual imaging pixel sensor from a location on the surface of the target subject 12 that shares the same location as that of the individual imaging pixel sensor within the image of the subject formed on the 2D imaging pixel sensor array.
  • each individual imaging pixel sensor concurrently receives ambient illumination.
  • the component of received light corresponding to a reflected component of illumination radiation 11 is temporally modulated in intensity, and this results in a corresponding temporal modulation 14a in the output signal from each individual imaging pixel sensor which is superimposed on a background output signal level 14b corresponding to ambient illumination levels, which typically vary in time in an uncontrollable way (e.g., reducing smoothly in time, in this example).
  • Figure 3c schematically illustrates an example of the cross-correlation of the pseudo-random sequence 11 with data 14 generated in the example shown in Fig. 3b.
  • the cross-correlation result 15 consists of a distinct correlation peak 16 surrounded on either side by low correlation levels 17.
  • the correlation peak 16 confirms the presence of the Maximum Length Sequence within the detected signal 14.
  • the Maximum Length Sequence may be uniquely associated with a particular spectral band of illumination light (e.g., specific colour) and the identification of that MLS sequence within the detected signal 14 allows the apparatus to identify that the illumination light has the particular spectral band in question. In so doing, colour filters are not required at the optical sensor in order to “know” what the particular spectral band is that is being sensed by the optical sensor. Instead, the MLS sequence allows the apparatus to “know” this information.
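The identification of a spectral band from the MLS alone, without colour filters, may be sketched as follows (Python). The band labels, tap sets (corresponding to two distinct primitive polynomials), intensity, and timing offset are illustrative assumptions:

```python
def lfsr_mls(n, taps):
    # Unipolar Maximum Length Sequence of length 2**n - 1 (Fibonacci LFSR)
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

L = 31
codes = {                                  # hypothetical band -> MLS code
    "red":  lfsr_mls(5, (5, 2)),           # from x^5 + x^2 + 1
    "blue": lfsr_mls(5, (5, 4, 3, 2)),     # from x^5 + x^4 + x^3 + x^2 + 1
}

a, lag = 4.0, 11                           # illustrative intensity and offset
signal = [a * codes["red"][(i - lag) % L] for i in range(L)]

def peak(signal, code):
    # Best correlation over all lags against a bipolar (+1/-1) reference
    ref = [2 * c - 1 for c in code]
    return max(sum(signal[(i + k) % L] * ref[i] for i in range(L))
               for k in range(L))

# The stored code whose correlation peak is largest identifies the band
band = max(codes, key=lambda b: peak(signal, codes[b]))
```

Because two distinct m-sequences are never circular shifts of one another, the matching code always produces the larger peak, so the apparatus can "know" which spectral band was sensed without any filter at the sensor.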
  • Figure 4b shows an example of a noisy reflection signal 19 corresponding to reflected light containing the Maximum Length Sequence of Fig. 4a.
  • Figure 4c shows an example of the result of a cross-correlation between the noisy reflection signal 19 of Fig. 4b and the Maximum Length Sequence 18 within it as shown in Fig. 4a.
  • the cross-correlation result shows a very strong and distinct correlation peak 20 which confirms the presence of the Maximum Length Sequence within the noisy signal 19.
  • This correlation peak is followed by a much lower sequence of values 21 corresponding to the cross-correlation with ambient illumination represented within the noisy signal.
  • the height of the distinct peak 20 is proportional to the impulse response function of the target subject in respect of the illumination radiation 11 generated by the illuminator part 9.
  • the impulse response function of the target subject is proportional to the reflection coefficient of the surface part of the target subject responsible for the reflected encoded light contained in the noisy signal 19.
  • a unipolar MLS has (2^n − 2)/2 values of “one” and (2^n − 2)/2 + 1 values of “zero”.
  • when a cross-correlation is performed, one signal is kept the same while the other is circularly shifted one sample at a time. For each shift the correlation coefficient is calculated. At a lag of zero, the two signals are the same, so all the values of “one” and all the values of “zero” within the MLS line up and one effectively obtains a sum equal to (2^n − 2)/2. This sum can be normalised by the length of the signal, if desired.
  • the correlation impulse response peak value exceeds 2^n/4 because the intensity of the image pixel is not “1”, but some other 8-bit or 12-bit integer value. Therefore, the correlation impulse response peak would be approximately equal to a(2^n/4), where the constant a is the strength (intensity, or relative intensity) of the reflected signal. This allows the strength of the reflected signal to be estimated.
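The quoted quantities may be checked numerically as follows (Python), under the convention above that the unipolar MLS has (2^n − 2)/2 ones; the tap set and the intensity a are illustrative assumptions, and a mean-subtracted reference is assumed for the correlation:

```python
def lfsr_mls(n, taps):
    # Unipolar Maximum Length Sequence of length 2**n - 1 (Fibonacci LFSR)
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

n = 4
seq = lfsr_mls(n, (4, 3))
# Match the convention of the text: (2^n - 2)/2 ones and one more zero.
# An LFSR may emit the complementary balance, so complement if needed.
if sum(seq) != (2 ** n - 2) // 2:
    seq = [1 - b for b in seq]

ones, zeros = sum(seq), len(seq) - sum(seq)      # 7 and 8 for n = 4

a = 3.0                                          # pixel intensity scale
signal = [a * b for b in seq]
mean_ref = sum(seq) / len(seq)
# Zero-lag correlation against a mean-subtracted reference; the peak is
# approximately a * 2^n / 4, allowing a to be estimated from the peak.
peak = sum(s * (b - mean_ref) for s, b in zip(signal, seq))
```

For n = 4 and a = 3 the computed peak is 11.2, close to the quoted a(2^n/4) = 12; the small discrepancy reflects the odd sequence length 2^n − 1.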
  • Figure 5a illustrates a schematic view of an apparatus 50 configured for obtaining an optical reflection spectrum for a target object 54.
  • the apparatus comprises an illuminator unit 52 comprising a plurality of optical illumination sources (52a, 52b, 52c, ... , 52i) each individually configured for illuminating the target object 54 with illuminating optical radiation 53.
  • Each one of the plurality of optical illumination sources (52a, 52b, 52c, ... , 52i) is spectrally distinct from each of the other optical illumination sources of the plurality of optical illumination sources.
  • Figure 5f schematically shows a spectrum of the combined optical output of all of the plurality of optical illumination sources (52a, 52b, 52c, ... , 52i).
  • λi: a known and pre-set optical wavelength
  • Δλ: a known and pre-set spectral bandwidth
  • Each optical illumination source may comprise a light-emitting diode (LED) configured to intrinsically generate optical radiation having a wavelength confined to a pre-set spectral band.
  • the pre-set spectral band may be of sufficiently narrow spectral width for use by the apparatus in unfiltered form.
  • the pre-set spectral band may be of insufficiently narrow width for use by the apparatus in unfiltered form, in which case the apparatus may include an optical filter arranged to filter the optical output of a given optical illumination source such that the resulting filtered output has a sufficiently narrow width for use by the apparatus.
  • the illuminator 52 is configured to encode information identifying each respective one of the optical illumination sources distinguishably from amongst the plurality of optical illumination sources (52a, 52b, 52c, ... , 52i) by applying a time-varying modulation of the intensity of optical radiation generated by the respective optical illumination source.
  • Figure 5f schematically shows the time-modulated optical output of the spectral bands of the plurality of optical illumination sources (52a, 52b, 52c, ... , 52i).
  • the illuminator is configured to illuminate the target object by controlling each one of the plurality of optical illumination sources to encode the information according to a respective pseudo-random code.
  • MLS: Maximum Length Sequence
  • the respective pseudo-random codes may be mutually orthogonal, and the illuminator 52 may be configured to illuminate the target object with optical radiation generated by each one of the plurality of optical illumination sources simultaneously during a common time period.
  • alternatively, the illuminator is configured to illuminate the target object with optical radiation generated by each one of the plurality of optical illumination sources during a plurality of separate respective time periods.
  • the optical sensor is configured to receive, during each respective said time period, optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources.
  • Figure 5g schematically illustrates three different implementations of such a temporal modulation in the output intensity of an optical illumination source according to a given Maximum Length Sequence.
  • the output intensity may be modulated between a value of 100% output (i.e., maximum possible intensity) and 0% output (i.e., effectively no output), as shown in the modulation curve 73 of Figure 5g.
  • the output intensity may be modulated between a first value of less than 100% output (e.g., 55% of maximum possible output) and a second value which is less than the first value but greater than zero (e.g., 50% of maximum possible output), as shown in the modulation curves 74 and 75 of Figure 5g.
  • both the first and the second values may be greater than 50%, such as an intensity modulated between an upper output of 83% of maximum possible intensity and a lower output of 80% of maximum possible intensity, as shown by modulation curve 74.
  • This has the benefit of reducing, or removing, the perception of ‘flicker’ from the optical illumination source in question.
  • the illuminator is preferably configured such that the time-varying modulation of the intensity of optical radiation results in an intensity modulation that is substantially imperceptible to direct human sight.
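A minimal sketch of such a shallow, flicker-imperceptible modulation. The 83%/80% levels follow the example above; the function name is an illustrative assumption, not part of the patent:

```python
def drive_levels(code, high=0.83, low=0.80):
    """Map MLS bits onto two closely spaced drive levels. The shallow
    modulation depth (here 3% of full scale) is intended to be
    imperceptible to direct human sight, while correlation-based
    decoding still recovers the code at an amplitude scaled by
    (high - low)."""
    return [high if bit else low for bit in code]

levels = drive_levels([1, 0, 1, 1, 0])
```

Because correlation is linear, shrinking the depth only scales the decoded peak by (high − low); it does not change where the peak appears.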
  • the apparatus includes a control unit 51 comprising an encoder unit 51a configured to control the power supplied to each respective optical illumination source by applying the time-varying modulation (e.g., an MLS sequence) of the electrical power supplied by the apparatus to the respective optical illumination source. This modulates the output response of the optical illumination source resulting in a modulation of the intensity of optical radiation generated by the optical illumination source in question.
  • the apparatus includes an optical sensor 58 comprising an array of spatially separate pixel sensors (e.g., monochrome pixel sensors), each of which is configured individually to receive optical radiation from the illuminated target object.
  • the received optical radiation includes within it a reflected component 55a of the illuminating optical radiation 53 generated by each respective one of the plurality of optical illumination sources.
  • the received optical radiation includes within it an ambient component 55b that does not originate from the optical illumination output 53.
  • An optical image forming assembly 56 is optionally included in the apparatus for converging received light 57 to form an image of the target subject at an image plane within which the optical sensor 58 resides (e.g., form the image upon an array of separate optical sensors such as a digital imaging chip).
  • the optical image forming assembly 56 may comprise an assembly of one or more optical lenses and/or mirrors.
  • the optical sensor may comprise a pixel array 58 comprising a plurality of optical sensor pixels, and the image forming assembly may be configured to form upon the pixel array an image of the target object using the reflected components of the illumination optical radiation.
  • the spectrum generator may be configured to generate data describing a plurality of optical reflection spectra according to the optical radiation within the image as received by the respective optical sensor pixels. Each optical reflection spectrum of the plurality of optical reflection spectra corresponds to reflected components of the illumination optical radiation received by a respective one of the plurality of optical sensor pixels.
  • the apparatus 50 may provide a multi-band imager configured to obtain an image of the target object and data describing a plurality of optical reflection spectra, the multi-band imager being configured to associate each image pixel of the obtained image with a given optical reflection spectrum from amongst the obtained plurality of data describing optical reflection spectra.
  • an image pixel corresponds to an optical sensor pixel associated with the given optical reflection spectrum. This enables the apparatus to provide a multi-band image.
  • the apparatus may be arranged to provide synchronisation, as discussed above. In this way the apparatus would know when each given illumination source is activated (i.e., switched on) and when it is deactivated (i.e., switched off).
  • the apparatus may be arranged to synchronise the operation of the optical sensor(s) (e.g., a camera) to capture illumination (e.g., capture an image frame, if a camera chip) and to apply a label to the respective captured data (e.g., image frames). Applying a label may include applying metadata to the captured data file, or may include storing the captured data in an appropriately labelled/named data storage file.
  • the decoder unit may simply use the MLS sequence corresponding to a particular colour of light that one is interested in without having to cycle through each different MLS sequence.
  • the apparatus may be arranged to operate without such synchronisation between the optical sensor(s) (e.g., camera) and the illumination sources (e.g., LED lights). In this case one may check the captured data sequence (e.g., video sequence) for presence of the cross-correlation peak.
  • the decoder unit may be arranged to perform cross-correlation, to detect the cross-correlation peak, and to only process a given “colour” if a cross-correlation peak is found to be present.
  • MLSi: the pre-stored digital set of all of the MLS sequences that were applied to the different optical sources
  • a spectrum generator unit 59 is configured to receive from the decoder unit 51b all of the assigned optical spectral bands and, in association with them, the corresponding detection signals generated by the optical sensor in response to the receipt of light corresponding to the spectral band in question.
  • the spectrum generator unit is arranged to generate data describing an optical reflection spectrum according to the reflected components of the illumination optical radiation received by the optical sensor from the plurality of optical illumination sources, and according to the respective assigned spectral bands.
  • a spectrum generator unit 59 may include an analyser unit configured for detecting a composition of the object using the obtained reflection spectrum. Any one of multiple approaches may be used for this purpose including, but not limited to, the following examples: comparing the relative strengths of different frequency bands known to absorb or reflect light; use of a deterministic physics model based on the light/matter interaction to infer the concentration of different chromophores; use of a machine-learnt model trained on labelled data to identify the presence and quantity of key chromophores in a representative target sample set.
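The first listed approach (comparing relative band strengths) can be as simple as a band ratio. The sketch below is purely illustrative; the spectrum layout, band labels, and reflectance values are assumptions:

```python
def band_ratio(spectrum, band_a, band_b):
    """Ratio of reflectance in two spectral bands; a chromophore that
    absorbs strongly in band_b but not in band_a pushes this ratio up."""
    return spectrum[band_a] / spectrum[band_b]

# hypothetical per-band reflection coefficients from the spectrum generator
spectrum = {"650nm": 0.8, "550nm": 0.6, "450nm": 0.4}
r = band_ratio(spectrum, "650nm", "450nm")
```

In practice such ratios would be calibrated against reference samples before being used to infer composition.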
  • Figure 5b illustrates steps in a method for obtaining an optical reflection spectrum for a target object, the method comprising the following steps.
  • first step 60 illuminate the target object with optical radiation generated by each one of a plurality of spectrally distinct optical illumination sources such that the optical radiation generated by each one of the plurality of sources encodes information identifying the respective source distinguishably from amongst the plurality of optical illumination sources.
  • the encoding is via a time-varying modulation of the intensity of optical radiation generated by the sources.
  • Each one of the plurality of optical illumination sources is spectrally distinct from each of the other sources.
  • step 61 by an optical sensor, receive optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources.
  • step 62 decode (i.e., recover) the encoded information within the reflected component of the illumination optical radiation received by the optical sensor and assign an optical spectral band to the respective illumination optical radiation according to the decoded information.
  • step 63 generate data describing an optical reflection spectrum according to the reflected components of the illumination optical radiation received by the optical sensor from the plurality of sources, and according to the respective assigned spectral bands.
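The steps above can be sketched end to end. This is a toy illustration only: the short code, the amplitudes, and the band labels are all assumed values, and each source is lit in its own time period with a synchronised lag-zero decode:

```python
def lag0_correlation(received, code):
    """Synchronised decode: one correlation at zero lag suffices."""
    return sum(r * c for r, c in zip(received, code))

code = [1, 1, 1, 0, 0, 1, 0]      # 7-sample MLS (n = 3), shared by all sources
ones = sum(code)

# steps 60/61: each source illuminates in its own period; the sensor records
# a reflected copy of the code scaled by that band's reflection amplitude
periods = {"450nm": [12 * c for c in code],
           "650nm": [30 * c for c in code]}

# steps 62/63: decode each period, assign its spectral band, build spectrum
spectrum = {band: lag0_correlation(rx, code) / ones
            for band, rx in periods.items()}
```

Each decoded amplitude lands in the spectrum under the band assigned to the period in which it was received, yielding reflection amplitude versus wavelength.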
  • Figures 5c and 5d illustrate another example of the invention.
  • Figure 5c schematically illustrates an illuminator comprising all of the features described above with reference to Figure 5a. That is to say, a plurality of optical illumination sources (52a, 52b, 52c, ... , 52i) are configured to illuminate the target object 54 with optical radiation, and each one of the illumination sources is spectrally distinct from each of the other illumination sources.
  • the optical sensor 58 is configured to receive optical radiation from the illuminated target object 54 comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources (52a, 52b, 52c, ... , 52i).
  • λi, Δλi: an optical spectral band (e.g., centre wavelength and bandwidth)
  • the decoder unit need not perform cross-correlation. Instead, the decoder unit need only perform one correlation operation with the known MLS sequence. This will provide the value of the correlation peak and, correspondingly, a value (e.g., a(2^n/4), see above) proportional to the reflected signal. If synchronisation is not applied between the illuminators (e.g., LED lights) and the optical sensor (e.g., camera), then the decoder unit may be arranged to search the full sequence of captured data (e.g., video sequence), and to subsequently perform a cross-correlation to determine the correlation peak.
  • the decoder unit may be arranged to perform a cross correlation with the sequence of captured data and with each of the MLS code sequences until it is determined which respective one of the MLS code sequences matches a given correlation peak.
  • Each correlation peak would correspond to a respective one of the individual illumination sources that was triggered during that associated sequence of captured data.
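That unsynchronised matching step can be sketched as follows. The tap sets below are illustrative choices of distinct primitive feedback polynomials; because the two codes are genuinely different sequences, only the matching code produces the full-height circular-correlation peak at any lag:

```python
def mls(n, taps):
    """Unipolar MLS from a Fibonacci LFSR (taps are 1-indexed)."""
    state, seq = [1] * n, []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def peak(x, ref):
    """Highest circular cross-correlation value over all lags."""
    N = len(ref)
    return max(sum(x[(i + lag) % N] * ref[i] for i in range(N))
               for lag in range(N))

# one distinct 31-sample code per illumination source (hypothetical labels)
codes = {"red": mls(5, (5, 2)), "blue": mls(5, (5, 3))}

def identify(received):
    """Pick whichever stored MLS yields the strongest correlation peak."""
    return max(codes, key=lambda name: peak(received, codes[name]))
```

Because the circular-correlation maximum is invariant to cyclic shifts, the identification works even when the capture start point is unknown.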
  • the action of decoding by the decoder unit 51 b renders the decoding unit equivalent to a comparison unit configured to compare the reflected component of the illuminating optical radiation received by the optical sensor to the pseudo-random code encoded within the optical radiation generated by each respective one of the plurality of optical illumination sources.
  • the decoder unit 51b forms a part of a comparison unit (51b, 51c).
  • the comparison unit is further configured to calculate a relative intensity of the reflected component of the illuminating optical radiation relative to the intensity of the received optical radiation from said illuminated target object.
  • the comparison unit includes a calculation unit 51c configured to perform this calculation.
  • a spectrum generator 59 is configured to receive the relative intensity values generated by the calculation unit 51c and to receive from the decoder unit 51b all of the assigned optical spectral bands and, in association with them, the corresponding detection signals generated by the optical sensor in response to the receipt of light corresponding to the spectral band in question.
  • the spectrum generator unit is arranged to generate data describing an optical reflection spectrum according to the reflected components of the illumination optical radiation received by the optical sensor from the plurality of optical illumination sources, and according to the respective assigned spectral bands and the relative intensity values.
  • a spectrum generator unit 59 may include an analyser unit configured for detecting a composition of the object using the obtained reflection spectrum.
  • Figure 5d illustrates steps in a method for obtaining an optical reflection spectrum for a target object, the method comprising the following steps.
  • First step 64 illuminate the target object with optical radiation generated by each one of a plurality of spectrally distinct optical illumination sources so that optical radiation generated by each one of a plurality of optical illumination sources encodes a pseudo-random code according to a time-varying modulation of the intensity of optical radiation generated by it.
  • step 65 by an optical sensor, receive optical radiation from the illuminated target object comprising a reflected component of the illuminating optical radiation generated by each respective one of the plurality of optical illumination sources.
  • step 66 compare the reflected component of the illuminating optical radiation received by the optical sensor to the pseudo-random code encoded within the optical radiation generated by each respective one of the plurality of optical illumination sources. This may include decoding (i.e., recovering) the encoded information within the reflected component of the illumination optical radiation received by the optical sensor and assigning an optical spectral band to the respective illumination optical radiation according to the decoded information.
  • step 67 calculate a relative intensity of the reflected component of the illuminating optical radiation relative to the intensity of the received optical radiation from the illuminated target object.
  • step 68 generate data describing an optical reflection spectrum according to the calculated relative intensity of the reflected components of the illumination optical radiation received by the optical sensor from said plurality of optical illumination sources.
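Step 67's relative-intensity calculation can be sketched with synthetic numbers (the ambient level and reflected amplitude below are assumptions): averaging the sensor readings over code-'high' samples and code-'low' samples isolates the reflected component from the ambient component, and dividing by the overall mean gives the relative intensity:

```python
code = [1, 1, 1, 0, 0, 1, 0]             # illustrative 7-sample MLS
ambient, a = 50.0, 20.0                  # assumed ambient level, reflected amplitude
received = [ambient + a * bit for bit in code]

high = [v for v, bit in zip(received, code) if bit]
low = [v for v, bit in zip(received, code) if not bit]
reflected = sum(high) / len(high) - sum(low) / len(low)   # recovers a
relative = reflected / (sum(received) / len(received))    # reflected vs total
```

The subtraction cancels any illumination component that is constant over the code period, which is exactly what makes the scheme robust to ambient light.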
  • Figure 5e illustrates an example of the apparatus described above with reference to Figures 5a, 5b, 5c and 5d. Another implementation of the apparatus of Figure 5e is illustrated in Figure 3a. Like reference symbols are used in Figures 3a and 5e to refer to like items.
  • the apparatus comprises a mirror unit 50 for personal use (e.g., domestic or industrial/medical settings) by a person 12.
  • the mirror unit comprises a reflector part 10 presenting outwardly a reflective surface via which a user 12 may regard themselves, and a frame part 8 upon which the reflector part is mounted.
  • An illuminator (52, 9) is housed in/on the frame part 8 adjacent to the reflector part such that both the reflector part and the illuminator part may be visible simultaneously to a user, whereby the user may be illuminated by illumination radiation (11, 53) generated by the illuminator while the user regards themselves via the reflector part.
  • the illuminator (52, 9) preferably comprises a plurality of spectrally distinct LEDs configured to emit light encoded with a pseudo-random code as discussed above.
  • a camera (56, 58) is housed within, or mounted upon, the frame part adjacent to the reflector part such that each of the reflector part, the illuminator part and the camera may be visible simultaneously to a user.
  • the user may be illuminated by illumination radiation generated by the illuminator while the user regards themselves via the reflector part, and simultaneously the camera (56, 58) may receive a reflected component of the illumination light from the user’s face.
  • the control unit 51 and spectrum generator unit 59 (not shown) are housed within the frame part 8.
  • Figure 6a shows an image of a target subject 77 when illuminated by ambient white light alone.
  • a colour reference chart 76 comprises 24 square surface sections each presenting a different uniform colour. The relative brightness of each of these 24 square surface sections illustrates the broad spectral range of colours of light present within the ambient illumination.
  • Figure 6b shows an image of a target subject 77 when illuminated by an illumination light source configured to produce illumination light of wavelengths confined to a first selected spectral band (centred on red light) and modulated in intensity according to an MLS code.
  • the relative brightness of each of the 24 square surface sections of the colour reference chart 76 illustrates the narrow spectral range of colours of light (red) present within the selected spectral band of illumination.
  • Figure 6c shows an image of the cross-correlation of the image of Figure 6b with the MLS code used to modulate the illumination light source (red light).
  • This cross-correlation of the image is an image of the reflection coefficient of the target subject in respect of the first selected spectral band (centred on red light).
  • Figure 7a shows an image of a target subject 77 when illuminated by ambient white light alone.
  • a colour reference chart 76 comprises 24 square surface sections each presenting a different uniform colour. The relative brightness of each of these 24 square surface sections illustrates the broad spectral range of colours of light present within the ambient illumination.
  • Figure 7b shows an image of a target subject 77 when illuminated by an illumination light source configured to produce illumination light of wavelengths confined to a first selected spectral band (centred on blue light) and modulated in intensity according to an MLS code.
  • the relative brightness of each of the 24 square surface sections of the colour reference chart 76 illustrates the narrow spectral range of colours of light (blue) present within the selected spectral band of illumination.
  • Figure 7c shows an image of the cross-correlation of the image of Figure 7b with the MLS code used to modulate the illumination light source (blue light).
  • This cross-correlation of the image is an image of the reflection coefficient of the target subject in respect of the first selected spectral band (centred on blue light). Note the distinct difference between the reflection coefficients of Figure 6c and the reflection coefficients of Figure 7c.
  • the invention is able to generate data describing a reflection spectrum (i.e., a spectrum of reflection coefficient vs. illumination wavelength) for each pixel of the image of the target subject. This is analogous to the “data cube” of Figure 1.
  • Figures 8a and 8b show images of a target subject 77.
  • Figure 8a shows the target subject with a first (bad choice) region of interest 78 identified.
  • Figure 8b shows the target subject with a second (good choice) region of interest 79 identified differing from the first region of interest.
  • the distinction between a “good choice” and a “bad choice” is made as follows with reference to Figures 9a and 9b.
  • Figure 9a shows a sequence of optical image sensor signals corresponding to a sequence of image frames generated by an optical sensor in respect of intensity-modulated (MLS-encoded) optical radiation reflected from the second (good choice) region of interest 79.
  • the sequence of signals shows a clear modulation corresponding to the MLS modulation applied to the illumination light source.
  • Figure 9b shows a sequence of optical image sensor signals corresponding to a sequence of image frames generated by an optical sensor in respect of intensity-modulated (MLS-encoded) optical radiation reflected from the first (bad choice) region of interest 78.
  • the sequence of signals shows no clear modulation corresponding to the MLS modulation applied to the illumination light source. This sequence may be more difficult to compare or cross-correlate with the original MLS sequence.
  • the apparatus may be arranged to perform a subtraction operation to determine, in any given respective frames, “where” in the pixel space the biggest change in illumination arises.
  • a “good” region of interest may be one where it is determined that sufficiently large changes in intensity arise that are consistent with the transitions in the MLS sequence.
  • a “poor” region of interest may be one where it is determined that, conversely, insufficiently large changes in intensity arise. This would result in lower correlation values.
  • Figure 9c shows a single threshold applied to the sequence of optical image sensor signals from the sequence of Figure 9a for use in distinguishing between image frames of an image sequence which capture an image of the target subject when illuminated by higher-intensity portions of the intensity-modulated (MLS-encoded) optical radiation and image frames of the image sequence which capture an image of the target subject when illuminated by lower-intensity portions of the intensity-modulated (MLS-encoded) optical radiation.
  • a processed sequence of optical image sensor signals is generated that is modulated to possess only either one of the two possible values (i.e., ‘high’ or ‘low’).
  • This processed sequence may be cross-correlated with the MLS sequence used to modulate the intensity of the illumination light source which, similarly, was modulated to possess only either one of two possible values.
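A sketch of that single-threshold binarisation followed by correlation with the known code. The signal levels and the threshold value are illustrative assumptions:

```python
def binarise(frame_sums, threshold):
    """Map each per-frame ROI intensity sum to 'high' (1) or 'low' (0)."""
    return [1 if v > threshold else 0 for v in frame_sums]

def circular_xcorr(x, ref):
    """Circular cross-correlation of x against ref over all lags."""
    N = len(ref)
    return [sum(x[(i + lag) % N] * ref[i] for i in range(N))
            for lag in range(N)]

code = [1, 1, 1, 0, 0, 1, 0]                       # transmitted MLS
frame_sums = [50 + 30 * bit for bit in code]       # noiseless sensor sums
binary = binarise(frame_sums, threshold=65)        # recovers the code
corr = circular_xcorr(binary, code)                # peak at lag 0
```

Reducing the sensor sequence to two levels before correlating makes it directly comparable with the two-level MLS used to drive the source.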
  • Figure 9d shows an alternative algorithm in which two concurrent thresholds are applied to the sequence of optical image sensor signals from the sequence of Figure 9a for use in distinguishing between image frames of an image sequence which capture an image of the target subject when illuminated by higher-intensity portions of the intensity-modulated (MLS-encoded) optical radiation and image frames of the image sequence which capture an image of the target subject when illuminated by lower-intensity portions of the intensity-modulated (MLS-encoded) optical radiation. If the value of a given optical image sensor signal within the sequence is above a higher threshold 81, then it is deemed to be associated with a time when the MLS-encoded optical modulation was at a ‘high’ value.
  • a processed sequence of optical image sensor signals is generated that is modulated to possess only either one of the two possible values (i.e., ‘high’ or ‘low’).
  • This processed sequence may be cross-correlated with the MLS sequence used to modulate the intensity of the illumination light source which, similarly, was modulated to possess only either one of two possible values.
  • Figure 10 shows a graph of the intensity of the sequence of optical image sensor signals from the sequence of Figure 9a in which the signal intensities are re-arranged into a sequence of progressively increasing intensity (from left to right). It can be seen that the signal intensities broadly fit into one of three categories. Those categories are:
  • when a single threshold 80 is employed, as discussed above with reference to Figure 9c, the threshold may be set at a pixel intensity value 94 corresponding to the midpoint of the ordered sequence of progressively increasing pixel intensity, this being the point that divides the band A as shown in Figure 10.
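One reading of that midpoint rule, sketched below; placing the threshold between the two values either side of the midpoint of the ordered sequence is my choice of implementation, and the intensity values are invented:

```python
def midpoint_threshold(frame_sums):
    """Sort the per-frame intensities into increasing order and place
    the threshold between the two values straddling the midpoint."""
    ordered = sorted(frame_sums)
    mid = len(ordered) // 2
    return (ordered[mid - 1] + ordered[mid]) / 2

# e.g. a sequence whose 'unlit' frames sit near 50 and 'lit' frames near 80
t = midpoint_threshold([50, 80, 52, 79, 51, 81, 49, 78])
```

This sits the threshold in the gap between the 'unlit' and 'lit' clusters whenever the two populations are roughly balanced.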
  • Figures 11a and 11b show a sequence of optical image sensor signals corresponding to a sequence of image frames generated by an optical sensor according to an imaging sensor frame rate that produces a first number of frames over the duration of an optical illumination of an imaged target subject in which the modulation comprises a second number of illumination modulation samples, which is different to the first number.
  • the sequence of frames can be down-sampled to match the length of values in the transmitted MLS sequence.
  • the first task of the apparatus is to determine the start point of the sequence; the second is resampling of the signal.
  • Figures 11a and 11b illustrate the differences between projected signals 94 (i.e., the MLS signal known to have been contained within the projected illumination LEDs) and observed signals 93 captured by the image sensors.
  • the signal starts on a ‘high’ value, which means that the start of the observed signal can be determined from the first major increase in value (or first ‘lit’ frame).
  • the start point for the observed signal can be determined by using a step size (representing the number of frames in the received signal corresponding to one value in the true projected signal) and stepping back along the frame sequence from the first ‘lit’ frame. Since the first ‘lit’ frame could be partially ‘lit’ or on the cusp, a small offset was found to improve the match between true and projected signals after down-sampling.
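A sketch of that start-point rule (the threshold and offset values are illustrative assumptions): find the first frame whose intensity rises above threshold, then back off by a small offset in case that frame was only partially lit:

```python
def find_start(frame_sums, threshold, offset=1):
    """Index of the first 'lit' frame, minus a small safety offset."""
    for i, v in enumerate(frame_sums):
        if v > threshold:
            return max(i - offset, 0)
    return None   # no lit frame found in the sequence

start = find_start([10, 12, 11, 55, 90, 88], threshold=40, offset=1)
```

Returning `None` when no frame crosses the threshold gives the caller a way to reject captures that never saw the coded illumination.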
  • Figures 12a and 12b show a sequence of optical image sensor signals taken from Figure 11a.
  • the step is calculated using the image capture frame rate and MLS signal frequency.
  • This method can well handle non-integer step sizes and yielded good results in synchronising and matching the true signal 94 and the observed 93 (captured) signal in length.
  • the signal was down-sampled.
  • the true signal 94 begins with six consecutive high values, whereas in the observed signal 93 there are fourteen consecutive ‘lit’ frames.
  • the appropriate options are a batch size of two or three, which results in seven down-sampled ‘lit’ frames (as in Figure 12a) or five down-sampled ‘lit’ frames.
  • a step size of 2.5 allows fourteen lit frames to be down-sampled to 5.
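The down-sampling can be sketched as one output sample per `step` input frames, truncating to a whole number of samples; with fourteen lit frames and a step of 2.5 this yields the five frames mentioned above (the implementation details are my assumptions):

```python
def downsample(frames, step):
    """Take one frame per `step` input frames (step may be non-integer)."""
    n_out = int(len(frames) / step)
    return [frames[int(k * step)] for k in range(n_out)]

lit = list(range(14))              # fourteen consecutive 'lit' frames
five = downsample(lit, 2.5)        # non-integer step: five samples
seven = downsample(lit, 2)         # integer step: seven samples
```

Truncating `k * step` to an index keeps the sampling grid aligned to the start of the lit run, which matters once the start point has been located as above.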
  • Figure 13 shows a flow chart according to a process for generating a reflection spectrum for a target subject, when using a video camera to capture data.
  • the process comprises:
  • Step 1 The capture 100 of video data and the provision of an MLS code sequence (optionally in hexadecimal format) to be correlated with the captured video data.
  • Step 2 The extraction 101 of the frames 101a of the captured video.
  • Step 3 A step 102 of aligning/scaling/rotating/positioning the extracted frames relative to each other such that image features within each extracted frame are co-registered. In this way, pixels in each frame 102a that are associated with the same given feature of the imaged subject can be identified.
  • Step 4 Converting 103 the frames 102a generated by Step 3 into greyscale frame format 103a.
  • Step 5 Selection 104 of one frame 104a corresponding to conditions of ambient illumination only.
  • Step 6 Selection 106 of a region of interest (ROI) within the sequence of aligned frames 106a.
  • Step 7 Calculation 107 of the sum of the pixel intensity values within the ROI across the sequence of aligned frames, thereby generating an intensity sequence 107a for that ROI.
  • Step 8 Determination 108 of appropriate intensity threshold values 108a (or value, if only one threshold is used) for the intensity sequence 107a for that ROI.
  • Step 9 Application 109 of the intensity threshold(s) determined at Step 8 to the intensity sequence 107a for that ROI in each of the greyscale image frames 103a generated at Step 4, to determine 109a whether the ROI within each frame is ‘lit’ or ‘unlit’.
  • This generates a binary sequence 109b in which the value “1” denotes that the ROI for a given frame is considered to be ‘lit’ and a value of “0” denotes that the ROI for a given frame is considered to be ‘unlit’.
  • Step 10 Selection 110, based on the binary sequence of Step 9, of those frames 110a deemed to match the signal length of the MLS contained within the projected illumination upon the subject.
  • Step 11 Converting 111 the illumination level in all frames 111a selected at Step 10 into a one-dimensional (1D) array or sequence and calculating the correlation 111b of that sequence with the known MLS bit sequence.
  • the binary projected signals preferably comprise Maximum-Length-Sequences.
  • the inventors have found that they have certain mathematical properties which are advantageous when used in this invention.
  • the principal advantage is that the auto-correlation function is a of the form of a delta function. This means that the amplitude of the autocorrelation peak is directly and linearly related to the energy of the reflected signal (e.g., note the term a(2’ 1 /4), see above).
  • This operation provides a low-cost means to infer the reflected optical signal intensity (energy), a, over all pixels within the image. It is possible to achieve similar measures of the reflection amplitude, a, using code sequences other than MLS. However, arbitrary signals do not possess an ideal autocorrelation function. As such, signal whitening or weighting processing functions are required to recover a correlation result. These additional operations add considerably to the computational complexity. Additionally, the MLS sequence rejects noise across the entire bandwidth from DC to the Nyquist sampling frequency. While signal whitening can achieve the desired autocorrelation function for an arbitrary signal, it does so at the expense of signal-to-noise ratio, especially when the signal has not been selected with a “white” spectrum to begin with. Thus, it is particularly advantageous to employ a maximum length sequence in this application, as the autocorrelation response and its determination can infer the reflection coefficient very efficiently.
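The delta-like autocorrelation property described above can be demonstrated with a short sketch. This is illustrative only: the register length (n = 4), the tap positions, and the NumPy usage are assumptions for the example, not details taken from the disclosure.

```python
import numpy as np

def mls(n, taps):
    """Maximum-length sequence of period 2**n - 1 from a Fibonacci LFSR.
    `taps` lists the 0-indexed register stages XORed to form the feedback bit."""
    state = [1] * n                      # any non-zero seed traverses the same cycle
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])            # output the last register stage
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift, inserting the feedback bit
    return np.array(out)

seq = mls(4, taps=(2, 3))                # primitive polynomial x^4 + x^3 + 1, period 15
bipolar = 2 * seq - 1                    # map {0, 1} -> {-1, +1}

# Circular autocorrelation: a single peak of N = 15 at zero lag and exactly -1
# at every other lag, i.e. the (near) delta function exploited above.
acorr = np.array([np.dot(bipolar, np.roll(bipolar, k)) for k in range(seq.size)])
```

Because the off-peak values are a constant -1 rather than arbitrary sidelobes, no whitening or weighting stage is needed before correlating against a received signal.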
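Steps 7 to 11 can likewise be simulated end to end on synthetic data. The numbers below (a 15-bit sequence, ambient level, reflected amplitude and Gaussian sensor noise) are invented for illustration and are not taken from the patent; the point is that the correlation output is linear in the reflected amplitude, so the reflection intensity can be recovered by a single dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known projected MLS bit sequence (the Step 11 reference), length 15 (n = 4).
mls_bits = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0])

# Simulated per-frame ROI intensity sums (Step 7): an ambient level plus a
# reflected contribution `a` whenever the projected bit is 1, plus sensor noise.
ambient, a = 100.0, 40.0
intensity_seq = ambient + a * mls_bits + rng.normal(0.0, 2.0, mls_bits.size)

# Step 8: a single threshold midway between the darkest and brightest frames.
threshold = 0.5 * (intensity_seq.min() + intensity_seq.max())

# Step 9: classify each frame's ROI as lit (1) or unlit (0).
binary_seq = (intensity_seq > threshold).astype(int)

# Step 11: correlate the mean-removed 1-D intensity sequence with the bipolar
# (+/-1) form of the known MLS; the result scales linearly with amplitude `a`.
bipolar = 2 * mls_bits - 1
scale = np.dot(mls_bits - mls_bits.mean(), bipolar)   # correlation per unit amplitude
a_est = np.dot(intensity_seq - intensity_seq.mean(), bipolar) / scale
```

With these example parameters the thresholded binary sequence reproduces the projected bit pattern and `a_est` lands close to the true reflected amplitude, illustrating the low-cost intensity inference the passage describes.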

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

A method and apparatus are disclosed for obtaining an optical reflection spectrum for a target object (54). The target object is illuminated with optical radiation (53) generated by a plurality of optical illumination sources (52) which are spectrally distinct from one another. The optical radiation (53) encodes a pseudo-random code according to a temporal modulation of optical intensity identifying the respective optical illumination source. An optical sensor (58) receives optical radiation from the illuminated target object comprising a reflected component (55a) of the illuminating optical radiation (53) generated by the optical illumination sources. The information encoded within the reflected component of the illuminating optical radiation is decoded. By comparing the reflected component of the illuminating optical radiation received by the optical sensor to the pseudo-random code encoded within the optical radiation, a relative intensity is calculated for the reflected component of the illuminating optical radiation with respect to the intensity of the optical radiation received from the illuminated target object. Data are generated describing an optical reflection spectrum according to the reflected components of the illuminating optical radiation and their respective assigned spectral bands.
PCT/IB2023/062622 2022-12-14 2023-12-13 Améliorations dans la détection spectrale WO2024127274A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2218844.5A GB2625331A (en) 2022-12-14 2022-12-14 Improvements in and relating to spectral sensing
GB2218844.5 2022-12-14

Publications (1)

Publication Number Publication Date
WO2024127274A1 true WO2024127274A1 (fr) 2024-06-20

Family

ID=84974603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/062622 WO2024127274A1 (fr) 2022-12-14 2023-12-13 Améliorations dans la détection spectrale

Country Status (2)

Country Link
GB (1) GB2625331A (fr)
WO (1) WO2024127274A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014002514A1 * 2014-02-21 2015-08-27 Universität Stuttgart Device and method for multi- or hyperspectral imaging and/or for distance and/or 2-D or 3-D profile measurement of an object by means of spectrometry
US10041833B1 * 2016-04-05 2018-08-07 The United States of America as Represented by the Administrator of NASA System and method for active multispectral imaging and optical communications
CN114441035A * 2021-12-31 2022-05-06 Fudan University Multispectral imaging method and device based on a high-speed tunable multicolour LED light source

Also Published As

Publication number Publication date
GB202218844D0 (en) 2023-01-25
GB2625331A (en) 2024-06-19

Similar Documents

Publication Publication Date Title
AU2018247216B2 (en) Systems and methods for liveness analysis
JP5733032B2 (ja) 画像処理装置および方法、画像処理システム、プログラム、および、記録媒体
US9593982B2 (en) Sensor-synchronized spectrally-structured-light imaging
Steiner et al. Design of an active multispectral SWIR camera system for skin detection and face verification
US7925056B2 (en) Optical speckle pattern investigation
Ratner et al. Illumination multiplexing within fundamental limits
WO2013098708A2 (fr) Acquisition de données multispectrales
CN107636733B (zh) 验证印刷物品的真实性的方法和数据处理终端
CN108692815A (zh) 使用纵向色差的多光谱成像
US10460145B2 (en) Device for capturing imprints
WO2016023582A1 (fr) Procédé de détection d'une présentation falsifiée à un système de reconnaissance vasculaire
US20200250847A1 (en) Surface reconstruction of an illuminated object by means of photometric stereo analysis
WO2024127274A1 (fr) Améliorations dans la détection spectrale
GB2625332A (en) Improvements in and relating to spectral sensing
CN108267426A (zh) 基于多光谱成像的绘画颜料识别系统及方法
JP2022535884A (ja) 自然光及び/又は人工光の下での物体認識のためのシステム及び方法
Jia et al. Single-shot Fourier transform multispectroscopy
WO2008111076A2 (fr) Système et procédé pour des mesures multiplexées
EP3430472B1 (fr) Procédé de production d'images vidéo indépendantes de l'éclairage d'arrière-plan
Tominaga et al. Estimation of surface properties for art paintings using a six-band scanner
KR101346982B1 (ko) 텍스쳐 영상과 깊이 영상을 추출하는 장치 및 방법
Vink et al. Robust skin detection using multi-spectral illumination
Pelagotti et al. Multispectral and multi-modal imaging data processing for the identification of painting materials
Sharma Enabling Wide Adoption of Hyperspectral Imaging
WO2023084484A1 (fr) Caméra couleur à capteur de vision dynamique