WO2018046488A1 - Mapping and auditing luminaires across geographic areas - Google Patents

Mapping and auditing luminaires across geographic areas

Info

Publication number
WO2018046488A1
Authority
WO
WIPO (PCT)
Prior art keywords
luminaires
image data
geographic area
overhead image
data
Application number
PCT/EP2017/072222
Other languages
French (fr)
Inventor
Dong Han
Yuting Zhang
Talmai Brandão DE OLIVEIRA
Marc Aoun
Original Assignee
Philips Lighting Holding B.V.
Application filed by Philips Lighting Holding B.V.
Publication of WO2018046488A1

Classifications

    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/20 Responsive to malfunctions or to light source life; for protection
    • H05B47/21 Responsive to malfunctions or to light source life; for protection of two or more light sources connected in parallel
    • H05B47/22 Responsive to malfunctions or to light source life; for protection of two or more light sources connected in parallel with communication between the lamps and a central unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/155 Coordinated control of two or more light sources

Abstract

The present disclosure is related to methods, systems, apparatus, and computer-readable media (transitory and non-transitory) for mapping and auditing a plurality of luminaires (102, 202) installed in a geographic area using overhead sensor data (including image data) obtained from a relatively high elevation. In various embodiments, overhead image data capturing an outdoor geographic area may be obtained (402) from a first elevation. The overhead image data may be analyzed (404) to detect a plurality of luminaires within the geographic area based on light emitted by each of the plurality of luminaires. Each luminaire of the plurality of luminaires may be classified (410) based on: one or more attributes of light emitted by the luminaire that are captured in the overhead image data; and field data gathered at one or more elevations below the first elevation within a different geographic area.

Description

Mapping and auditing luminaires across geographic areas
TECHNICAL FIELD
The present disclosure is directed generally to lighting maintenance and control in a geographic area. More particularly, but not exclusively, various methods and apparatus disclosed herein relate to systems and methods for mapping and auditing luminaires across geographic areas.
BACKGROUND
Digital lighting technologies, i.e., illumination based on semiconductor light sources, such as light-emitting diodes ("LEDs"), offer a viable alternative to traditional fluorescent, HID, and incandescent lamps. Functional advantages and benefits of LEDs include high energy conversion and optical efficiency, durability, lower operating costs, and many others. Recent advances in LED technology have provided efficient and robust full-spectrum lighting sources that enable a variety of lighting effects in many applications. Some of the fixtures embodying these sources feature a lighting module, including one or more LEDs capable of producing different colors, e.g., red, green, and blue, as well as a processor for independently controlling the output of the LEDs in order to generate a variety of colors and color-changing lighting effects, for example, as discussed in detail in U.S. Patent Nos. 6,016,038 and 6,211,626, incorporated herein by reference.
Manually mapping and/or performing audits of a plurality of luminaires installed in a geographic area such as a metropolitan area can be expensive, both in terms of economics and labor. In some instances, personnel may be deployed to manually map each installed luminaire, such as street lamps and lamps illuminating pedestrian walkways. The personnel may also gather field data that includes observations about the luminaires, such as lighting attributes associated with each luminaire, light source types used, luminaire make/models, and so forth. Lighting attributes may include, for instance, intensity, beam width, area illuminated, color, color temperature, and so forth. These lighting attributes may be used, for instance, to determine whether each luminaire is functioning properly, is functioning consistently with other similar luminaires (e.g., along the same street), and so forth. A large metropolitan area may include tens or even hundreds of thousands of luminaires to be mapped and audited. In addition to the potentially immense amount of resources that may be required to perform such mapping and auditing, it is also likely that luminaires may malfunction or otherwise cease to operate properly at a rate that exceeds the audit capabilities of the governing entity. Accordingly, there is a need for a quicker and less resource-intensive way to map and/or audit large numbers of luminaires installed in a geographic region.
SUMMARY
The present disclosure is related to methods, systems, apparatus, and computer-readable media (transitory and non-transitory) for mapping and auditing a plurality of luminaires installed in a geographic area using overhead sensor data (including image data) obtained from a relatively high elevation. In some embodiments, the overhead image data may include satellite image data that is captured, for instance, during the night when a high percentage of luminaires in the geographic area likely will be illuminated and easily visible. In some embodiments, the satellite image data may be captured during a time period that is selected to reduce noise created by other potential light sources, such as vehicles.
In various embodiments, the overhead image data obtained from a relatively high elevation may be analyzed to detect and localize (e.g., map) luminaires installed in the geographic region. In addition, one or more lighting attributes of the detected luminaires, such as their intensity, color, color temperature, beam width, area illuminated, etc., may be determined based on analysis of the image data and used to classify detected luminaires. In some embodiments, one or more machine learning techniques may be used to perform this analysis. Such a machine learning algorithm may be trained using, for instance, field data that includes local observations gathered at relatively low elevations (e.g., at ground level). In various embodiments, such field data may be gathered by one or multiple light sensors that may be mounted, for instance, on a vehicle or aerial drone travelling through the geographic area.
Generally, in one aspect, a computer-implemented method includes the following operations: obtaining, by one or more processors, overhead image data that captures an outdoor geographic area from a first elevation; analyzing, by one or more of the processors, the overhead image data to detect a plurality of luminaires within the geographic area based on light emitted by each of the plurality of luminaires; and classifying, by one or more of the processors, each luminaire of the plurality of luminaires. The classifying may be based on: one or more attributes of light emitted by the luminaire that are captured in the overhead image data; and field data gathered at one or more elevations below the first elevation within the same geographic area or a different geographic area, the field data including local observations of one or more attributes of light emitted by one or more luminaires.
In various embodiments, the method may further include localizing, by one or more of the processors, based on additional geographic data associated with the geographic area, the detected plurality of luminaires contained within the geographic area. In various embodiments, the additional geographic data associated with the geographic area may include predetermined map data of the geographic area, and the method may further include fitting the overhead image data to the predetermined map data to localize the detected plurality of luminaires.
In various embodiments, the method may further include excluding, from the analyzing, localizing, and classifying, one or more portions of the overhead image data based on the predetermined map data. In various embodiments, the overhead image data may include image data captured by a camera mounted on an airplane or helicopter. In various embodiments, the overhead image data may include satellite image data. In various embodiments, the overhead image data may be captured while the geographic area is not illuminated by daylight.
In various embodiments, the field data may be gathered within the same geographic area and include local observations of one or more attributes of light emitted by at least a subset of the plurality of luminaires. In various embodiments, the method may further include comparing, by one or more of the processors, the field data to classifications of the subset of the plurality of luminaires. In various embodiments, the comparing may include verifying the classifications of the plurality of luminaires against the field data. In various embodiments, the comparing may include training a machine learning model based on the field data. In various embodiments, the classifying may be performed using the machine learning model. In various embodiments, the field data may be gathered within the different geographic area and include local observations of one or more attributes of light emitted by one or more luminaires in the different geographic area, and the classifying may be performed using a machine learning model trained using the field data.
As used herein for purposes of the present disclosure, the term "LED" should be understood to include any electroluminescent diode or other type of carrier
injection/junction-based system that is capable of generating radiation in response to an electric signal. Thus, the term LED includes, but is not limited to, various semiconductor-based structures that emit light in response to current, light emitting polymers, organic light emitting diodes (OLEDs), electroluminescent strips, and the like. In particular, the term LED refers to light emitting diodes of all types (including semi-conductor and organic light emitting diodes) that may be configured to generate radiation in one or more of the infrared spectrum, ultraviolet spectrum, and various portions of the visible spectrum (generally including radiation wavelengths from approximately 400 nanometers to approximately 700 nanometers). Some examples of LEDs include, but are not limited to, various types of infrared LEDs, ultraviolet LEDs, red LEDs, blue LEDs, green LEDs, yellow LEDs, amber LEDs, orange LEDs, and white LEDs (discussed further below). It also should be appreciated that LEDs may be configured and/or controlled to generate radiation having various bandwidths (e.g., full widths at half maximum, or FWHM) for a given spectrum (e.g., narrow bandwidth, broad bandwidth), and a variety of dominant wavelengths within a given general color categorization.
The term "light source" should be understood to refer to any one or more of a variety of radiation sources, including, but not limited to, LED-based sources (including one or more LEDs as defined above), incandescent sources (e.g., filament lamps, halogen lamps), fluorescent sources, phosphorescent sources, high-intensity discharge sources (e.g., sodium vapor, mercury vapor, and metal halide lamps), lasers, other types of electroluminescent sources, pyro-luminescent sources (e.g., flames), candle-luminescent sources (e.g., gas mantles, carbon arc radiation sources), photo-luminescent sources (e.g., gaseous discharge sources), cathode luminescent sources using electronic satiation, galvano-luminescent sources, crystallo- luminescent sources, kine- luminescent sources, thermo-luminescent sources, tribo luminescent sources, sonoluminescent sources, radio luminescent sources, and luminescent polymers.
A given light source may be configured to generate electromagnetic radiation within the visible spectrum, outside the visible spectrum, or a combination of both. Hence, the terms "light" and "radiation" are used interchangeably herein. Additionally, a light source may include as an integral component one or more filters (e.g., color filters), lenses, or other optical components. Also, it should be understood that light sources may be configured for a variety of applications, including, but not limited to, indication, display, and/or illumination. An "illumination source" is a light source that is particularly configured to generate radiation having a sufficient intensity to effectively illuminate an interior or exterior space. In this context, "sufficient intensity" refers to sufficient radiant power in the visible spectrum generated in the space or environment (the unit "lumens" often is employed to represent the total light output from a light source in all directions, in terms of radiant power or "luminous flux") to provide ambient illumination (i.e., light that may be perceived indirectly and that may be, for example, reflected off of one or more of a variety of intervening surfaces before being perceived in whole or in part).
For purposes of this disclosure, the term "color" generally is used to refer primarily to a property of radiation that is perceivable by an observer (although this usage is not intended to limit the scope of this term). Accordingly, the terms "different colors" implicitly refer to multiple spectra having different wavelength components and/or bandwidths. It also should be appreciated that the term "color" may be used in connection with both white and non-white light.
The term "color temperature" generally is used herein in connection with white light, although this usage is not intended to limit the scope of this term. Color temperature essentially refers to a particular color content or shade (e.g., yellowish, bluish) of white light. The color temperature of a given radiation sample conventionally is characterized according to the temperature in degrees Kelvin (K) of a black body radiator that radiates essentially the same spectrum as the radiation sample in question. Black body radiator color temperatures generally fall within a range of approximately 700 degrees K (typically considered the first visible to the human eye) to over 10,000 degrees K; white light generally is perceived at color temperatures above 1800-2000 degrees K.
Lower color temperatures generally indicate white light having a more significant red component or a "warmer feel," while higher color temperatures generally indicate white light having a more significant blue component or a "cooler feel." By way of example, fire has a color temperature of approximately 1,800 degrees K, a conventional incandescent bulb has a color temperature of approximately 2848 degrees K, early morning daylight has a color temperature of approximately 3,000 degrees K, and overcast midday skies have a color temperature of approximately 10,000 degrees K. A color image viewed under white light having a color temperature of approximately 3,000 degree K has a relatively reddish tone, whereas the same color image viewed under white light having a color temperature of approximately 10,000 degrees K has a relatively bluish tone.
The term "controller" is used herein generally to describe various apparatus relating to the operation of one or more light sources. A controller can be implemented in numerous ways (e.g., such as with dedicated hardware) to perform various functions discussed herein. A "processor" is one example of a controller which employs one or more microprocessors that may be programmed using software (e.g., microcode) to perform various functions discussed herein. A controller may be implemented with or without employing a processor, and also may be implemented as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions. Examples of controller components that may be employed in various embodiments of the present disclosure include, but are not limited to, conventional microprocessors, application specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs).
In various implementations, a processor or controller may be associated with one or more storage media (generically referred to herein as "memory," e.g., volatile and non-volatile computer memory such as RAM, PROM, EPROM, and EEPROM, floppy disks, compact disks, optical disks, magnetic tape, etc.). In some implementations, the storage media may be encoded with one or more programs that, when executed on one or more processors and/or controllers, perform at least some of the functions discussed herein.
Various storage media may be fixed within a processor or controller or may be transportable, such that the one or more programs stored thereon can be loaded into a processor or controller so as to implement various aspects of the present disclosure discussed herein. The terms "program" or "computer program" are used herein in a generic sense to refer to any type of computer code (e.g., software or microcode) that can be employed to program one or more processors or controllers.
The term "user interface" as used herein refers to an interface between a human user or operator and one or more devices that enables communication between the user and the device(s). Examples of user interfaces that may be employed in various implementations of the present disclosure include, but are not limited to, switches, potentiometers, buttons, dials, sliders, a mouse, keyboard, keypad, various types of game controllers (e.g., joysticks), track balls, display screens, various types of graphical user interfaces (GUIs), touch screens, microphones and other types of sensors that may receive some form of human-generated stimulus and generate a signal in response thereto.
As used herein, a "light footprint" refers to emitted light reflected from a surface. Accordingly, a light footprint associated with a particular luminaire such as a streetlight may include light that originates from the streetlight and is reflected from the street, e.g., underneath the streetlight.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, like reference characters generally refer to the same parts throughout the different views. In addition, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the disclosure.
Fig. 1 schematically illustrates various components that may be used to implement techniques described herein, in accordance with various embodiments.
Fig. 2 schematically depicts various types of views of a geographic area that may be combined and/or used in conjunction in various ways to map and audit a plurality of luminaires in the geographic area.
Fig. 3 depicts an example of how satellite overhead image data or field data may be analyzed to annotate another view of a geographic area, in accordance with various embodiments.
Fig. 4 depicts an example method of mapping and auditing a plurality of luminaires in a geographic area, in accordance with various embodiments.
Fig. 5 schematically depicts an example computing system architecture.
DETAILED DESCRIPTION
Manually mapping and/or performing audits of a plurality of luminaires installed in a geographic area such as a metropolitan area can be expensive, in terms of economics, time, and labor. A large metropolitan area may include tens or even hundreds of thousands of luminaires to be mapped and audited. It is also likely that luminaires may malfunction or otherwise cease to operate properly at a rate that exceeds the audit capabilities of the governing entity. Accordingly, there is a need for a quicker and less resource-intensive way to map and/or audit large numbers of luminaires installed in a geographic region. More generally, the Applicants have recognized and appreciated that it would be beneficial to streamline processes for auditing pluralities of luminaires contained in geographic areas to reduce labor and other costs.

Referring to Fig. 1, in one embodiment, two luminaires taking the form of a first streetlight 102A and a second streetlight 102B (streetlights generically will be referred to herein with the number "102") are depicted alongside a road 104. While two streetlights 102 are depicted on one side of road 104 only, this is not meant to be limiting. More
streetlights 102 may be installed alongside road 104, and may be installed on one side of road 104 and/or on both sides of road 104. Each streetlight 102 may emit light 106A/106B that casts a light footprint 108A/108B on and/or near road 104.
In various embodiments, streetlights 102 may be mapped and/or audited by bringing one or more light sensors (not depicted) into proximity of (e.g., within) emitted light 106A/106B. The light sensors may measure various attributes of emitted light 106A/106B, such as intensity, color, color temperature, saturation, hue, and/or light footprint 108A/108B size/shape, to name a few. To accomplish this in an expedient manner, in some cases, individuals may walk (or drive) past each luminaire and manually record its light attributes. In other embodiments, one or more light sensors may be mounted on a vehicle 110 and/or unmanned aerial vehicle ("UAV") 112 that is operated to pass through emitted light
106A/106B. Light sensors may be mounted at various locations of vehicle 110 and/or UAV 112, such as on top, bottom, and/or on one or both sides. Top-mounted light sensors may directly measure various attributes of emitted light 106A/106B. Bottom-mounted light sensors may measure various attributes of light footprints 108A/108B. Side-mounted light sensors may measure various attributes of emitted light 106A/106B and/or light footprints 108A/108B.
Operating vehicle 110 and/or UAV 112 past streetlights 102 may be an effective way to gather field data that includes accurate and local measurements of attributes of emitted light 106 and/or of attributes of luminaires 102 themselves. However, as noted above, in large metropolitan areas with roadways, pedestrian walkways ("pedways"), and other areas that may be illuminated by streetlights or other similar luminaires, there may be prohibitively large numbers of luminaires to map and/or audit.
Accordingly, in various embodiments, overhead image data may be obtained from a relatively high elevation and/or altitude (e.g., higher than elevations at which vehicle 110 and/or UAV 112 obtain local measurements). For example, in Fig. 1, one or more satellites 114 may capture overhead image data from a relatively high altitude. Light emitted by streetlights 102, including light footprints 108A/108B, may be visible in the overhead image data, especially when the overhead image data is captured while the geographic area is not illuminated by the sun (e.g., during nighttime). Using techniques described herein, the overhead image data may be analyzed to map and/or audit streetlights 102 without requiring exhaustive gathering of field data locally at every single streetlight 102.
In some embodiments, overhead image data capturing an outdoor geographic area from a first elevation may be obtained, e.g., from satellite 114 and/or from another vehicle operating at a relatively high elevation/altitude, such as an airplane, helicopter, or even a high-altitude UAV. The overhead image data may then be analyzed to detect and/or localize a plurality of luminaires (e.g., streetlights 102) within the geographic area based on light (e.g., 106A/106B, 108A, 108B) emitted by each of the plurality of luminaires that is captured in the overhead image data. Each luminaire of the plurality of luminaires may then be classified based on the overhead image data.
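By way of illustration only, the following Python sketch shows one way the detection step described above might be implemented: bright pixels in a nighttime overhead image are thresholded and grouped into connected blobs, each treated as a candidate luminaire or light footprint. The threshold, minimum blob area, and all function and field names are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np
from scipy import ndimage

def detect_luminaires(night_image: np.ndarray,
                      brightness_threshold: float = 0.6,
                      min_area_px: int = 5):
    """Detect bright blobs (candidate luminaires / light footprints) in a
    nighttime overhead image supplied as a 2-D array normalized to [0, 1]."""
    # Keep only pixels noticeably brighter than the dark background.
    bright = night_image > brightness_threshold
    # Group adjacent bright pixels into connected components, one per footprint.
    labels, num_blobs = ndimage.label(bright)
    detections = []
    for blob_id in range(1, num_blobs + 1):
        mask = labels == blob_id
        area = int(mask.sum())
        if area < min_area_px:  # discard sensor noise and tiny reflections
            continue
        row, col = ndimage.center_of_mass(mask)
        detections.append({
            "pixel_xy": (float(col), float(row)),
            "area_px": area,
            "peak_intensity": float(night_image[mask].max()),
        })
    return detections
```

In practice the blob statistics collected here (position, area, peak intensity) would feed the localization and classification steps discussed below.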
A luminaire may be classified based on overhead image data in various ways. In some embodiments, a luminaire classification may include a class of light source used (e.g., LED, halogen, CFL, incandescent, etc.). Additionally or alternatively, a luminaire classification may include a luminaire type, such as streetlight, pedestrian walkway lamp, parking lot lamp, sidewall illumination luminaire, bridge luminaire, and so forth. Luminaire classifications may include other information about luminaires as well, such as one or more lighting attributes (e.g., intensity, light footprint size/shape, color, color temperature, hue, saturation, etc.), a location, a make/model, and so forth.
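The classification attributes enumerated above could be carried in a simple record structure. A minimal sketch follows; the field names are hypothetical and merely mirror the attributes listed in the preceding paragraph.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class LuminaireRecord:
    """One mapped/audited luminaire and its (possibly partial) classification."""
    location: Tuple[float, float]                   # (latitude, longitude)
    source_class: Optional[str] = None              # e.g. "LED", "halogen", "CFL", "incandescent"
    luminaire_type: Optional[str] = None            # e.g. "streetlight", "pedway lamp", "parking lot lamp"
    make_model: Optional[str] = None
    attributes: dict = field(default_factory=dict)  # intensity, footprint size/shape, color, CCT, ...
    confidence: float = 0.0                         # confidence measure for the classification
```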
In some embodiments, classification of a given luminaire may be based on one or more attributes of light emitted by the given luminaire that are captured in the overhead image data. Additionally or alternatively, classification of the given luminaire may be based on field data gathered at one or more elevations below an elevation at which the overhead image data was captured. The field data may include local observations (obtained by light sensor(s) mounted on vehicle 110 and/or UAV 112) of one or more attributes of light emitted by one or more luminaires. In various embodiments, the field data may represent lighting attributes of luminaires within the same geographic area as is represented in the overhead image data, or within a different geographic area.
In some embodiments, the classification based on the overhead image data may be performed using one or more machine learning models. Various types of machine learning models may be employed to analyze overhead image data in accordance with the present disclosure, including but not limited to trained classifiers, regression models, artificial neural networks, and so forth. A machine learning model may be supervised or unsupervised.
Supervised machine learning models may be trained on various data. In some embodiments, a machine learning model may be trained using field data gathered using light sensors mounted on vehicle 110 and/or UAV 112.
For example, field data may be organized into feature vectors, with each feature vector including locally observed attributes of a corresponding luminaire. These feature vectors may then be labeled with appropriate classifications (e.g., class of light source used, luminaire type, etc.), and used as labeled training examples for one or more machine learning classifiers that will ultimately be used to analyze overhead image data.
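A minimal supervised-training sketch along these lines, using scikit-learn as one possible library, might look as follows; the feature ordering, label names, and numeric values are invented for illustration and are not prescribed by the disclosure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each field-data record becomes one feature vector of locally observed light
# attributes; its label is the known classification of that luminaire.
# Feature order (intensity, footprint area m^2, footprint aspect ratio,
# color temperature K) is an assumption made for this sketch.
X_train = np.array([
    [0.92, 38.0, 1.4, 4000.0],   # e.g. an LED streetlight
    [0.55, 12.0, 1.1, 2100.0],   # e.g. a high-pressure-sodium walkway lamp
    [0.88, 35.0, 1.3, 4100.0],   # ... one row per luminaire observed in the field
    [0.50, 11.0, 1.0, 2000.0],
])
y_train = np.array(["led_streetlight", "hps_pedway", "led_streetlight", "hps_pedway"])

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X_train, y_train)

# Feature vectors later extracted from overhead image data can then be classified:
print(classifier.predict(np.array([[0.90, 36.0, 1.3, 3950.0]])))
```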
In some embodiments, overhead image data may be analyzed using one or more machine learning models to group similar detected luminaires into clusters. Detected luminaires may be grouped into clusters, for instance, based on one or more attributes of light detected in the overhead image data, such as intensity (or brightness), footprint shape/size, luminaire shape/size, color, hue, saturation, color temperature, and so forth. Then, feature vectors representing these clusters (and/or individual luminaires within the clusters) may be compared to feature vectors representing locally observed attributes of corresponding luminaires. Measures of correlation (e.g., similarity, Euclidean distance between jointly embedded feature vectors, etc.) between the various feature vectors may then be used to train the one or more machine learning models.
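One way to realize this clustering-and-comparison idea is sketched below with k-means and plain Euclidean distances; the number of clusters and the assumption that overhead and field feature vectors share the same dimensionality are illustrative choices only.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_match(overhead_features: np.ndarray,
                      field_features: np.ndarray,
                      n_clusters: int = 3):
    """Group luminaires detected in overhead imagery into clusters and measure
    how close each cluster center lies to each locally observed feature vector.

    overhead_features: (num_detected, num_features) array from overhead image data
    field_features:    (num_field_obs, num_features) array of local observations
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(overhead_features)
    # Euclidean distance between every cluster center and every field vector;
    # small distances suggest correspondence and can serve as a training signal.
    distances = np.linalg.norm(
        km.cluster_centers_[:, None, :] - field_features[None, :, :], axis=-1)
    return km.labels_, distances
```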
Fig. 2 schematically depicts various types of views of the same geographic area that may be combined and/or used in conjunction to audit a plurality of luminaires in the geographic area. Predetermined map data of the geographic area is depicted in a first view 220. First view 220 identifies streets (Main and First) and a pedestrian walkway ("PEDWAY"), as well as other areas, but does not include data captured by an imaging device such as a camera. First view 220 may be the type of data that is often used in navigation applications, for instance. Accordingly, features such as streets are labeled with their names. Additionally, real features such as luminaires, trees, buildings, etc., are not necessarily depicted (although in some instances one or more of these features may be rendered).
Second view 222 is an overhead image captured of the same geographic area represented by first view 220. Second view 222 may have been captured during daylight. Consequently, various features of the geographic area are visible. For example, two buildings, 224A and 224B, are visible at top left and top right, respectively. A number of luminaires 202₁-202₈ in the form of streetlights are visible along Main Street (which runs from top to bottom and is referenced in first view 220). Additionally, a number of additional luminaires 202₉-202₁₂ are visible along the "PEDWAY" at bottom left.

Third view 226 depicts satellite-based overhead image data captured of the same geographic area as first view 220 and second view 222. A plurality of light footprints 208₁-208₁₄ are visible (depicted as dashed lines). Light footprints 208₁-208₈ correspond to light emitted by luminaires 202₁-202₈, which are visible as silhouettes against light footprints 208₁-208₈. Light footprints 208₉-208₁₂ correspond to light emitted by luminaires 202₉-202₁₂, which again are visible as silhouettes against light footprints 208₉-208₁₂. Two additional light footprints, 208₁₃ and 208₁₄, are also visible. Light footprint 208₁₃ is created by a light on top of building 224A. Light footprint 208₁₄ is created by headlights of a vehicle travelling from right to left on First Street.
While the satellite image depicted in third view 226 appears to be from directly overhead, this is not meant to be limiting. In various embodiments, overhead image data (whether captured by satellite or aircraft) may be captured from various angles relative to the ground. For example, in some scenarios, overhead image data may be captured from multiple angled overhead perspectives, e.g., to capture luminaires that may be blocked (e.g., by a tall building) from one or more perspectives. In some embodiments, overhead image data such as the satellite image depicted in third view 226 may include three-dimensional features such as buildings and vegetation, whereas roads and streets will typically be two-dimensional (unless, of course, a bridge or overpass is considered). In some instances, overhead image data may be accordingly annotated to indicate such three-dimensional features. Such annotations may be used, for instance, to detect sections of the geographic area that are blocked by a three-dimensional feature (e.g., a tall building) in a first overhead image. A second overhead image from another perspective with a clear view of the blocked sections may then be obtained and used to audit luminaires blocked in the first overhead image.
While only light footprints are visible in third view 226, this is for illustrative purposes. Real life satellite image data, even when captured at nighttime, likely would include other visible features, such as lighted building windows, lighted advertisements (e.g., billboards), street markings within or near lighting footprints, physical features such as building walls and/or vegetation that happens to be illuminated by one or more luminaires, and so forth.
Fourth view 228 graphically depicts local observations obtained while gathering field data, e.g., using light sensors mounted on vehicle 110 and/or UAV 112. A plurality of graphical elements 230₁-230₁₂ (drawn using dash-dot-dash to distinguish from the light footprints 208₁-208₁₄ of third view 226) each represent one or more attributes of light emitted by luminaires 202₁-202₁₂. For example, light footprints 208₁-208₈ cast by luminaires 202₁-202₈ are larger and shaped slightly differently than light footprints 208₉-208₁₂ cast by luminaires 202₉-202₁₂. Consequently, graphical elements 230₁-230₈ are larger and have slightly different shapes than graphical elements 230₉-230₁₂. While not apparent in Fig. 2, it should be understood that graphical elements 230 may represent locally observed light attributes other than footprint shape and size, including but not limited to color, intensity, color temperature, hue, saturation, etc.
In various embodiments, data associated with one or more of views 220, 222, 226, and/or 228 may be used in conjunction with each other as part of the mapping and/or auditing process. For example, predetermined map data associated with first view 220 may be fitted to (e.g., overlaid over) data associated with other views, such as second view 222, third view 226, and/or fourth view 228, in order to localize luminaires. For example, geographic location and/or scale data embedded in or otherwise associated with (e.g., as metadata) predetermined map data may be used to calculate geographic locations of luminaires detected in overhead image data.
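As a concrete, simplified example of using scale and origin metadata to localize detections, a flat-earth pixel-to-coordinate conversion could look like the sketch below; the metadata fields and the assumption that image rows run north to south are hypothetical.

```python
import math

def pixel_to_latlon(pixel_xy, origin_latlon, meters_per_pixel):
    """Convert a detection's pixel position into an approximate latitude and
    longitude, given the geographic position of pixel (0, 0) and the image
    scale. A flat-earth approximation is adequate at city-block distances."""
    col, row = pixel_xy
    lat0, lon0 = origin_latlon
    meters_per_deg_lat = 111_320.0                                # approximate
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))
    lat = lat0 - (row * meters_per_pixel) / meters_per_deg_lat    # rows increase southwards
    lon = lon0 + (col * meters_per_pixel) / meters_per_deg_lon
    return lat, lon
```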
Additionally or alternatively, predetermined map data associated with first view 220 may be fitted to data associated with other views (222, 226, 228) in order to isolate streets, pedestrian walkways, bridges, playgrounds, and other areas in which luminaires to be mapped/audited may be located, such as areas immediately proximate to streets and walkways. Other areas that do not include luminaires to be mapped/audited, such as the top left city block that contains building 224A, the top right city block that contains building 224B, or the bottom right city block that includes a tree (see second view 222), may be excluded from consideration. Thus, when analyzing satellite overhead image data
corresponding to third view 226, these regions may be excluded from consideration, which may result in light footprint 208₁₃ being ignored. The satellite overhead image data represented by second view 222 may be used in a similar fashion, e.g., in addition to or instead of the predetermined map data associated with first view 220.
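The exclusion of such regions can be expressed as a simple raster mask derived from the predetermined map data; a sketch follows, assuming the mask and image are already co-registered arrays of the same shape.

```python
import numpy as np

def mask_excluded_regions(night_image: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Zero out portions of the overhead image that fall outside regions of
    interest (streets, pedways, bridges and their immediate surroundings)
    derived from predetermined map data, so lights in excluded city blocks,
    such as rooftop lights, never reach the detection and classification steps."""
    assert night_image.shape == roi_mask.shape, "mask must be co-registered with the image"
    return np.where(roi_mask.astype(bool), night_image, 0.0)
```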
In some embodiments, light footprints having certain known shapes and/or sizes may also be ignored and/or excluded from consideration. For example, light footprint 208₁₄ in third view 226 is created by headlights of a passing vehicle (which is not visible in third view 226 because third view 226 may be captured during darkness). Consequently, light footprint 208₁₄ has a shape and size that is different from those associated with known luminaires. In some embodiments, such a light footprint may be detected, e.g., using image processing techniques such as edge detection, and may be discarded and/or ignored as noise.
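A rule-based filter over footprint descriptors is one plausible realization of this noise rejection; the descriptor keys and numeric ranges below are placeholders standing in for whatever shape/size statistics a real implementation would extract.

```python
def discard_noise_footprints(footprints,
                             known_area_range=(8.0, 60.0),    # m^2, illustrative
                             known_aspect_range=(0.8, 2.0)):  # width/height, illustrative
    """Drop detected light footprints whose size or shape matches no known
    luminaire class -- for example, the elongated footprint cast by the
    headlights of a passing vehicle."""
    kept = []
    for fp in footprints:  # fp: dict with hypothetical "area_m2" and "aspect_ratio" keys
        area_ok = known_area_range[0] <= fp["area_m2"] <= known_area_range[1]
        aspect_ok = known_aspect_range[0] <= fp["aspect_ratio"] <= known_aspect_range[1]
        if area_ok and aspect_ok:
            kept.append(fp)
    return kept
```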
Fig. 3 demonstrates another example of how one or more views of Fig. 2 may be used for mapping/auditing purposes. In Fig. 3, second view 222 has been annotated with graphical elements 340₁-340₁₂, which correspond to luminaires 202₁-202₁₂ in Fig. 2. In some embodiments, graphical elements 340 may represent local observations gathered as field data, e.g., by vehicle 110 and/or UAV 112, and hence may correspond to graphical elements 230₁-230₁₂ in fourth view 228 of Fig. 2. In other embodiments, however, graphical elements 340₁-340₁₂ may correspond to luminaires classified based on satellite-based overhead image data depicted in third view 226.

An annotated view such as that depicted in Fig. 3 may be used for various purposes. In some embodiments, it may be displayed on a computing device as part of a graphical user interface to provide a user with an overview of luminaires contained within a geographic area. For example, different types of luminaires, such as luminaires 202₁-202₈ versus 202₉-202₁₂, can be visually annotated differently to indicate their different classifications, lighting attributes, etc. For example, one type of luminaire such as a streetlight may be circled or otherwise annotated using one color, and another type of luminaire such as a pedestrian walkway lamp may be circled or otherwise annotated using another color.
Additionally or alternatively, other types of annotations may be used as well, such as highlighting (e.g., highlighting each physical luminaire body), textually, with call outs (e.g., pop-up windows or dialog), and so forth.
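Purely as an illustration of such annotation, the sketch below draws a differently colored circle around each classified luminaire on a copy of the daytime view. OpenCV is used here only as one convenient drawing library, and the color scheme and record keys are assumptions.

```python
import cv2  # OpenCV, used here only to draw annotations

CLASS_COLORS = {                     # BGR colors, one per luminaire class
    "streetlight": (0, 0, 255),      # red circles
    "pedway_lamp": (255, 0, 0),      # blue circles
}

def annotate_view(daytime_image, records):
    """Draw a colored circle around each classified luminaire on a copy of the
    daytime overhead view so different classes are visually distinguishable."""
    annotated = daytime_image.copy()
    for rec in records:              # rec: dict with "pixel_xy" and "luminaire_type"
        center = tuple(int(v) for v in rec["pixel_xy"])
        color = CLASS_COLORS.get(rec["luminaire_type"], (0, 255, 255))  # yellow = unknown
        cv2.circle(annotated, center, 12, color, 2)
    return annotated
```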
In some embodiments, overhead image data (e.g., satellite-based) capturing a geographic area (potentially much larger than that depicted in Figs. 2 and 3, such as an entire city or a large portion thereof) may be analyzed to classify detected luminaires. Field data may then be gathered (e.g., using vehicle-mounted or UAV-mounted light sensors) from a subset of the detected and classified luminaires (e.g., a sample). If local observations contained in the sampled field data corroborate the classifications made based on the overhead image data, then confidence measures associated with the classifications made using the overhead image data may be increased. If local observations contained in the sampled field data refute or are otherwise inconsistent with the classifications made based on the overhead image data, the confidence measures may be decreased. Additionally or alternatively, one or more machine learning models used to classify luminaires based on overhead image data may be further trained based on the sampled field data. Gathering a sampling of field data, rather than exhaustive field data containing local observations of every luminaire, may conserve considerable time and resources.

Many municipal luminaires such as streetlights tend to be evenly spaced along a road. The same may apply to luminaires deployed in other public and/or controlled areas, including but not limited to pedestrian walkways, bridges, parking lots, stadiums, venues, airports, train stations, and so forth. Accordingly, once distances are determined between multiple streetlights along a road, it may be possible to predict locations of additional luminaires along the road. In some embodiments, geolocations forming part of gathered field data may be used to calibrate image-based distances in overhead image data, e.g., to provide scale (e.g., X meters apart on the ground equals Y millimeters/pixels apart in the overhead image data). Additionally or alternatively, location data already contained in overhead image data may be verified by gathering field data and comparing the gathered field data to the location data contained in the overhead image data. This verification may improve the localization techniques described herein.
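Two small helpers hint at how the sampled field data might be put to use: one nudges per-luminaire confidence measures up or down depending on whether the local observation agrees with the image-based classification, the other derives a meters-per-pixel scale from a known ground spacing. All names, keys, and adjustment amounts are illustrative.

```python
def update_confidence(classified, field_samples, boost=0.1, penalty=0.2):
    """classified / field_samples: lists of dicts already matched by location.
    Agreement raises the confidence measure; disagreement lowers it."""
    for rec, obs in zip(classified, field_samples):
        current = rec.get("confidence", 0.5)
        if obs["observed_class"] == rec["predicted_class"]:
            rec["confidence"] = min(1.0, current + boost)
        else:
            rec["confidence"] = max(0.0, current - penalty)
    return classified

def calibrate_scale(ground_distance_m, image_distance_px):
    """Derive meters-per-pixel from two luminaires whose true spacing is known
    from field-gathered geolocations."""
    return ground_distance_m / image_distance_px
```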
Fig. 4 depicts an example method 400 for mapping and/or auditing a plurality of luminaires in a geographic area, in accordance with various embodiments. While operations of method 400 are depicted in a particular order, this is not meant to be limiting. One or more operations may be re-ordered, added, or omitted.
At block 402, overhead image data capturing an outdoor geographic area from a first elevation may be obtained, e.g., directly from a satellite and/or through an organization that sells or licenses preexisting satellite images. Additionally or alternatively, overhead image data may be obtained from a camera mounted on aircraft such as an airplane or helicopter, or even a high-altitude UAV. In some embodiments, overhead image data may be captured at nighttime (e.g., so that emitted light is clearly visible in overhead image data) and/or during times in which vehicular traffic is likely to be minimal (e.g., to reduce noise in overhead image data).
At block 404, the overhead image data obtained at block 402 may be analyzed to detect a plurality of luminaires within the geographic area based on light emitted by each of the plurality of luminaires that is captured in the overhead image data. As noted above, attributes associated with luminaires not meant for auditing, such as lighted windows in buildings, vehicle headlights, and so forth, may be identified and ignored or discarded.
At block 406, the plurality of luminaires detected at block 404 may be localized (e.g., mapped) based on additional geographic data associated with the geographic area. The additional geographic data may include, for instance, predetermined map data, a reference daytime satellite image of the same geographic area, geolocation data embedded in the overhead image data, and so forth. At block 408 (which may or may not occur earlier in method 400), one or more portions of the overhead image data may be excluded from various other steps of method 400. For example, the portions of the overhead image data may be excluded from consideration during the analyzing (block 404), localizing (block 406), and/or classifying (block 410). This may improve processing time and reduce consumption of computing resources on unnecessary calculations.
At block 410, each luminaire of the plurality of luminaires may be classified, e.g., with a light source type (e.g., LED, incandescent, CFL, halogen, etc.), a type of luminaire (e.g., streetlight, building exterior light, bridge light, pedestrian light, etc.), one or more attributes of the luminaire and/or light it emits, a make/model, and so forth. In various embodiments, the classifying may be based at least in part on one or more attributes of light emitted by the luminaire that are captured in the overhead image data, such as color, size/shape of light footprint, and so forth. In some embodiments in which a silhouette of the luminaire is visible against a light footprint, the luminaire shape and/or size may also be considered. Additionally, in some embodiments, classification may be directly or indirectly based on field data gathered at one or more elevations below the first elevation from which the overhead image was captured. Moreover, the field data may be gathered within the same geographic area or a different geographic area. Thus, for instance, one or more machine learning classifiers used to analyze overhead image data to map/audit luminaires may first be trained using field data in one city, and then applied towards overhead image data captured of a completely different city.
At optional block 412, additional field data (e.g., a sample gathered within the same geographic area as that captured in the overhead image data obtained at block 402) may be gathered and compared to the classifications determined at block 410 to verify (e.g., corroborate) and/or refute those classifications. If the field data corroborates the classifications, confidence measures associated with the classifications may be increased. If the field data refutes or is inconsistent with the classifications, machine learning models employed to determine the classifications may be further trained using the field data.
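Tying the blocks of method 400 together, a highly simplified pipeline could be organized as below. It reuses the hypothetical helper sketches introduced earlier in this description (detect_luminaires, mask_excluded_regions, pixel_to_latlon) along with a trained classifier, and the per-detection feature vector it builds is a placeholder.

```python
def map_and_audit(overhead_image, roi_mask, map_metadata, classifier):
    """Sketch of method 400: the obtaining step (402) is assumed done; this
    applies exclusion (408), detection (404), localization (406), and
    classification (410) to the supplied overhead image."""
    masked = mask_excluded_regions(overhead_image, roi_mask)
    results = []
    for det in detect_luminaires(masked):
        lat, lon = pixel_to_latlon(det["pixel_xy"],
                                   map_metadata["origin_latlon"],
                                   map_metadata["meters_per_pixel"])
        # Placeholder feature vector; a real system would derive all entries
        # from the overhead image data (footprint area, shape, color, ...).
        features = [[det["peak_intensity"], det["area_px"], 1.3, 4000.0]]
        results.append({
            "location": (lat, lon),
            "predicted_class": classifier.predict(features)[0],
            "confidence": 0.5,  # adjusted later against sampled field data
        })
    return results
```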
In addition to auditing pluralities of luminaires installed in geographic areas, techniques described herein may be utilized for other purposes. For example, in some embodiments, a plurality of luminaires installed in a geographic area may be centrally controllable, e.g., by a government or commercial entity. The plurality of luminaires may be centrally controlled to emit light having known properties (e.g., intensity, color, color temperature, etc.). Then, these lighting properties may be verified, e.g., using the vehicle-based or UAV-based approach, or by using the overhead-image-based approach described herein. To the extent the measured lighting properties do not match the specified lighting properties, adjustments can be made to ensure that each luminaire emits light having the desired properties. Additionally or alternatively, one or more machine learning models employed to detect luminaires in overhead image data may be updated to account for differences between specified and observed lighting attributes. In some embodiments, additional lighting properties such as unforeseen reflection effects (e.g., due to a luminaire casting light on a reflective pool of water or on the side of a reflective building) may be determined and adjusted for, as desired.
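A trivial consistency check of this kind might be expressed as follows; the dictionaries keyed by luminaire identifier and the 10% tolerance are assumptions made for the sketch.

```python
def verify_commanded_output(commanded, measured, tolerance=0.10):
    """Compare the centrally commanded intensity of each luminaire with the
    intensity actually observed (from field or overhead data) and return the
    identifiers of luminaires that need adjustment or inspection."""
    flagged = []
    for lum_id, target in commanded.items():
        actual = measured.get(lum_id)
        if actual is None or abs(actual - target) > tolerance * target:
            flagged.append(lum_id)
    return flagged
```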
Suppose there is a specific geographic area in which a plurality of luminaires are centrally controlled. As a first step, the luminaires may be operated to emit light at, for instance, 100%, and a satellite image of the area may be captured. Then, the intensity may be set to a lower level, e.g., 50%, and another satellite image may be captured. This may be repeated for as many intensity levels as desired. These satellite images may be used along with actual illumination values observed on the ground of the same luminaires at the same settings to train a machine learning algorithm. Such training may improve the algorithm's accuracy to classify the luminaires and/or aspects of light they emit. Similar techniques may be used to train machine learning algorithms based on other attributes of the luminaires besides individual intensity levels, such as the type of lamps, number of lamps on a given street, color, color temperature, and so forth.
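The pairing of satellite captures at several commanded dim levels with ground-truth observations at the same settings can be organized into training examples roughly as follows; the nested-dictionary layout and the choice to append the dim level as an extra feature are illustrative assumptions.

```python
def build_intensity_training_set(satellite_captures, ground_truth):
    """satellite_captures: {dim_level: {luminaire_id: overhead_feature_list}}
    ground_truth:          {dim_level: {luminaire_id: observed_label_or_value}}
    Returns (X, y) suitable for fitting a supervised machine learning model."""
    X, y = [], []
    for level, capture in satellite_captures.items():    # e.g. 1.0, 0.5, ...
        for lum_id, overhead_features in capture.items():
            X.append(list(overhead_features) + [level])   # include dim level as a feature
            y.append(ground_truth[level][lum_id])
    return X, y
```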
Fig. 5 is a block diagram of an example computer system 510. Computer system 510 typically includes at least one processor 514 that communicates with a number of peripheral devices via bus subsystem 512. These peripheral devices may include a storage subsystem 526, including, for example, a memory subsystem 525 and a file storage subsystem 526, user interface output devices 520, user interface input devices 522, and a network interface subsystem 516. The input and output devices allow user interaction with computer system 510. Network interface subsystem 516 provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems.
User interface input devices 522 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen
incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system 510 or onto a communication network. User interface output devices 520 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system 510 to the user or to another machine or computer system.
Storage subsystem 526 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 526 may include the logic to perform selected aspects of method 400.
These software modules are generally executed by processor 514 alone or in combination with other processors. Memory 525 used in the storage subsystem 526 can include a number of memories including a main random access memory (RAM) 530 for storage of instructions and data during program execution and a read only memory (ROM) 532 in which fixed instructions are stored. A file storage subsystem 526 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 526 in the storage subsystem 526, or in other machines accessible by the processor(s) 514.
Bus subsystem 512 provides a mechanism for letting the various components and subsystems of computer system 510 communicate with each other as intended. Although bus subsystem 512 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
Computer system 510 can be of varying types including a workstation, server, computing cluster, blade server, or any other data processing system or computing device. In some instances, computer system 510 may be used in conjunction with other computer systems to perform various techniques described herein. In some embodiments, multiple computer systems may together form what may be referred to as a "cloud computing environment." In some embodiments, various techniques described herein may be performed by such a cloud computing environment, and various data gathered, processed, and/or stored in association with performance of such techniques may likewise be stored on the cloud. Due to the ever-changing nature of computers and networks, the description of computer system 510 depicted in Fig. 5 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system 510 are possible having more or fewer components than the computer system depicted in Fig. 5.
While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e. "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty ("PCT") do not limit the scope.

Claims

CLAIMS:
1. A computer-implemented method comprising:
obtaining (402), by one or more processors, first overhead image data that captures an outdoor geographic area from a first elevation and at least one second overhead image data that captures an outdoor geographic area from a second elevation;
analyzing (404), by one or more of the processors, the overhead image data to detect a plurality of luminaires (102, 202) within the geographic area based on light emitted by each of the plurality of luminaires; and
classifying (410), by one or more of the processors, each luminaire of the plurality of luminaires, wherein the classifying is based on the first and second overhead image data, which include one or more attributes of light emitted by the plurality of luminaires, and on field data gathered at one or more elevations below the first and second elevations within the same geographic area or a different geographic area, the field data including local observations of one or more attributes of light emitted by one or more luminaires, in order to localize the plurality of luminaires using the first and second overhead image data and the field data.
2. The computer-implemented method of claim 1, wherein the first overhead image data comprises image data based on predetermined navigation or map data of the geographic area.
3. The computer-implemented method of claim 2, wherein the method further includes fitting the second overhead image data to the predetermined navigation or map data of the first overhead image data, to localize the detected plurality of luminaires.
4. The computer-implemented method of claim 3, further comprising excluding, from the analyzing, localizing, and classifying, one or more portions of the first or second overhead image data based on the predetermined map data.
5. The computer-implemented method of claim 1, wherein the second overhead image data comprises image data captured by a camera mounted on an airplane or a helicopter, or satellite image data.
6. The computer-implemented method of claim 1, wherein the second overhead image data is captured while the geographic area is not illuminated by daylight.
7. The computer-implemented method of claim 1, wherein the field data is gathered within the same geographic area and includes local observations of one or more attributes of light emitted by at least a subset of the plurality of luminaires.
8. The computer-implemented method of claim 7, further including comparing (412), by one or more of the processors, the field data to classifications of the subset of the plurality of luminaires.
9. The computer-implemented method of claim 8, wherein the comparing includes verifying the classifications of the plurality of luminaires against the field data.
10. The computer-implemented method of claim 1, wherein the field data is gathered within the different geographic area and includes local observations of one or more attributes of light emitted by one or more luminaires in the different geographic area, and the classifying is performed using a machine learning model trained using the field data.
11. A system comprising one or more processors and memory operably coupled with the one or more processors, wherein the memory stores instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to:
obtain (402) first overhead image data capturing an outdoor geographic area from a first elevation and at least one second overhead image data that captures an outdoor geographic area from a second elevation;
receive field data gathered at one or more elevations below the first elevation within a different geographic area, the field data including local observations of one or more attributes of light emitted by one or more luminaires;
analyze (404) the overhead image data to detect a plurality of luminaires (102, 202) within the geographic area based on light emitted by each of the plurality of luminaires; and
classify (410) each luminaire of the plurality of luminaires, wherein the classifying is based on output of a machine learning model that is trained on the first and second overhead image data, which include one or more attributes of light emitted by the plurality of luminaires, and on the field data gathered at one or more elevations below the first and second elevations within the different geographic area, the field data including local observations of one or more attributes of light emitted by one or more luminaires, in order to localize the plurality of luminaires using the first and second overhead image data and the field data.
12. The system of claim 11, wherein the first overhead image data includes predetermined navigation or map data of the geographic area and the second overhead image data includes a satellite image of the geographic area captured during daylight, and the system further includes instructions to fit the first and second overhead image data in order to localize the detected plurality of luminaires.
13. At least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the following operations:
obtaining (402) first overhead image data capturing an outdoor geographic area from a first elevation and at least one second overhead image data that captures an outdoor geographic area from a second elevation;
analyzing (404) the overhead image data to detect a plurality of luminaires (102, 202) within the geographic area based on light emitted by each of the plurality of luminaires; and
classifying (410) each luminaire of the plurality of luminaires, wherein the classifying is based on the first and second overhead image data, which include one or more attributes of light emitted by the plurality of luminaires, and on field data gathered at one or more elevations below the first elevation within a different geographic area, the field data including local observations of one or more attributes of light emitted by one or more luminaires contained within the different geographic area, in order to localize the plurality of luminaires using the first and second overhead image data and the field data.
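
By way of illustration only, the following Python sketch shows one way the pipeline recited in claims 1, 10 and 11 could be realized: bright spots are detected in a nighttime overhead image by thresholding and connected-component labelling, and each detected luminaire is classified by a model trained on field observations gathered at a lower elevation. The intensity threshold, the mean-RGB features, the RandomForest classifier and all sample values are assumptions made for this example; they are not taken from, and do not limit, the claimed method, which additionally fits the detections to navigation or map data to localize each luminaire (claims 3 and 12).

# Illustrative sketch only, not the patented implementation.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def detect_luminaires(night_image, intensity_threshold=0.8):
    """Return centroids and mean RGB of bright blobs in an HxWx3 image with values in [0, 1]."""
    luminance = night_image.mean(axis=2)              # crude luminance estimate
    bright = luminance > intensity_threshold          # pixels dominated by emitted light
    labels, n_blobs = ndimage.label(bright)           # connected bright regions
    centroids = ndimage.center_of_mass(bright, labels, range(1, n_blobs + 1))
    features = []
    for blob_id in range(1, n_blobs + 1):
        mask = labels == blob_id
        features.append(night_image[mask].mean(axis=0))  # spectral attributes of the light
    return np.array(centroids), np.array(features)

# Field data gathered at ground level in another area: per-luminaire light attributes
# (here, mean RGB) with known lamp types; values are invented for this example.
field_features = np.array([[0.9, 0.8, 0.5], [0.7, 0.8, 0.9], [0.95, 0.6, 0.2]])
field_labels = ["high_pressure_sodium", "led", "low_pressure_sodium"]

classifier = RandomForestClassifier(n_estimators=50, random_state=0)
classifier.fit(field_features, field_labels)          # train on field observations

night_image = np.random.rand(512, 512, 3)             # placeholder overhead image
centroids, features = detect_luminaires(night_image)
if len(features):
    lamp_types = classifier.predict(features)         # classify each detected luminaire
    for (row, col), lamp in zip(centroids, lamp_types):
        print(f"luminaire at pixel ({row:.0f}, {col:.0f}): {lamp}")

In a deployment the pixel centroids would further be mapped to geographic coordinates by registering the image against the predetermined navigation or map data, which this sketch omits.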
PCT/EP2017/072222 2016-09-07 2017-09-05 Mapping and auditing luminaires across geographic areas WO2018046488A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662384649P 2016-09-07 2016-09-07
US62/384,649 2016-09-07
EP16190883.5 2016-09-27
EP16190883 2016-09-27

Publications (1)

Publication Number Publication Date
WO2018046488A1 true WO2018046488A1 (en) 2018-03-15

Family

ID=57137822

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/072222 WO2018046488A1 (en) 2016-09-07 2017-09-05 Mapping and auditing luminaires across geographic areas

Country Status (1)

Country Link
WO (1) WO2018046488A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016038A (en) 1997-08-26 2000-01-18 Color Kinetics, Inc. Multicolored LED lighting method and apparatus
US6211626B1 (en) 1997-08-26 2001-04-03 Color Kinetics, Incorporated Illumination components
US20090316147A1 (en) * 2008-06-24 2009-12-24 International Business Machines Corporation Method and apparatus for failure detection in lighting systems
US20140147052A1 (en) * 2012-11-27 2014-05-29 International Business Machines Corporation Detecting Broken Lamps In a Public Lighting System Via Analyzation of Satellite Images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAREN MCMENEMY, JAMES NIBLOCK: "Classification of Luminaire Colour using CCDs", SENSORS, CAMERAS, AND SYSTEMS FOR SCIENTIFIC/INDUSTRIAL APPLICATIONS VII. EDITED BY BLOUKE, MORLEY M. PROCEEDINGS OF THE SPIE, 1 February 2006 (2006-02-01), XP040217860 *
SHYAMA PROSAD CHOWDHURY ET AL: "Performance Evaluation of Airport Lighting Using Mobile Camera Techniques", 2 September 2009, COMPUTER ANALYSIS OF IMAGES AND PATTERNS, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 1171 - 1178, ISBN: 978-3-642-03766-5, XP019137408 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3126590A1 (en) * 2021-09-02 2023-03-03 Centre D'etudes Et D'expertise Sur Les Risques L'environnement La Mobilite Et L'amenagement System for managing an adaptive lighting installation and corresponding method
EP4145960A1 (en) * 2021-09-02 2023-03-08 Centre d'Etudes et d'Expertise sur les Risques l'Environnement la mobilite et l'Amenagement System for managing an adaptive lighting system and corresponding method

Similar Documents

Publication Title
CN106134289B (en) Detection based on light reflectivity
JP6032757B2 (en) Policy-based OLN lighting management system
US11304276B2 (en) Glare-reactive lighting apparatus
CN106797692B (en) Illuminate preference ruling
US10531539B2 (en) Method for characterizing illumination of a target surface
US8605154B2 (en) Vehicle headlight management
CN109892011B (en) Lighting system and lighting system control method
KR20120007545A (en) Systems and apparatus for image-based lighting control and security control
JP2012529736A (en) System and apparatus for automatically retrieving and correcting personal preferences applicable to multiple controllable lighting networks
KR102127080B1 (en) Smart street lamp control system using lora communication
CN106716876A (en) High-dynamic-range coded light detection
CN111587436A (en) System and method for object recognition using neural networks
KR20210065219A (en) Traffic safety system for intelligent crosswalk
CN110521286B (en) Image analysis technique
CN108781494B (en) Method for characterizing illumination of a target surface
CN109076677B (en) Method for determining the contribution and orientation of a light source at a predetermined measurement point
US20210183026A1 (en) Image and object detection enhancement based on lighting profiles
WO2018046488A1 (en) Mapping and auditing luminaires across geographic areas
KR100961675B1 (en) Traffic light lamp using light emitting diode
WO2018091315A1 (en) System and method for managing lighting based on population mobility patterns
EP3542338A1 (en) System and method for managing lighting based on population mobility patterns
US20200128649A1 (en) Method and system for asset localization, performance assessment, and fault detection
KR101943195B1 (en) Apparatus for method for controlling intelligent light
KR20210002663U (en) colour temperature control register for street lamps
WO2018153791A1 (en) Street light uniformity measurement using data collected by a camera-equipped vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17768711

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17768711

Country of ref document: EP

Kind code of ref document: A1