WO2018153791A1 - Street light uniformity measurement using data collected by a camera-equipped vehicle

Street light uniformity measurement using data collected by a camera-equipped vehicle

Info

Publication number
WO2018153791A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
street
images
luminance
vehicle
Prior art date
Application number
PCT/EP2018/053930
Other languages
French (fr)
Inventor
Dong Han
Anqing Liu
Yuting Zhang
Ting Xu
Talmai BRANDÃO DE OLIVEIRA
Sirisha RANGAVAJHALA
Hassan MOHANNA
Original Assignee
Philips Lighting Holding B.V.
Priority date
Filing date
Publication date
Application filed by Philips Lighting Holding B.V. filed Critical Philips Lighting Holding B.V.
Publication of WO2018153791A1 publication Critical patent/WO2018153791A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The described embodiments relate to systems, methods, and apparatuses for estimating luminance uniformity of streetlights (208, 302). Luminance uniformity is estimated using a vehicle (220, 312) that travels along a street (224, 310, 406) and collects RGB images (402, 404) of the street while the street lights are operating. The images can be filtered according to usability for estimating luminance uniformity. Furthermore, when an image includes an obstruction that prevents the visibility of a surface of the street, the image can be segmented to identify and/or remove the obstruction. In order to complete the image of the surface of the street, another image can be re-arranged to create a composite information/image with portions of both the other image and the segmented image. Color data from the composite information/image can be mapped to tristimulus values for determining relative luminance from the color data, and ultimately be used to generate a luminance uniformity estimate for the street lights.

Description

STREET LIGHT UNIFORMITY MEASUREMENT USING DATA COLLECTED BY A CAMERA-EQUIPPED VEHICLE
TECHNICAL FIELD
The embodiments set forth herein relate to determining luminance uniformity of street lights. More particularly, the embodiments include systems, methods, and apparatuses for using vehicle mounted cameras to collect street images that can be analyzed for generating luminance uniformity estimations of street lights.
BACKGROUND
Light uniformity of street lights is crucial from a safety standpoint as drivers rely on street lights for direction and hazard avoidance. When the appearance of a street is obscured by non-uniform street lighting, a driver may view the street in a way that is not accurate, thereby increasing the risk of an accident. Methods for analyzing street light output have been performed manually by groups of people that must work at discrete locations on closed streets. Such methods can be time-consuming and costly processes because of the planning involved in temporarily halting traffic, which can potentially lead to safety hazards.
SUMMARY
The described embodiments relate to systems, methods, and apparatuses for determining street light uniformity using a vehicle that collects images while traveling over a street. In some embodiments, a method is set forth for estimating luminance uniformity of street lights using images captured by vehicle mounted cameras. The method can include steps of: receiving image data corresponding to a first image and a second image that each includes areas of a street that is illuminated by the street lights, forming or compiling composite information/data (or a composite image) of the street using portions of the first image and the second image, converting RGB data from the first image and the second image into luminance data, and providing a luminance uniformity estimate for the street lights based on the luminance data. The method can further include a step of identifying an obstruction in the first image or the second image, wherein the obstruction corresponds to an object that limits an amount of light that is incident upon the street from the street lights. The method can also include a step of identifying or removing the obstruction from the first image or the second image, wherein forming the composite information/image includes combining portions of the first image and the second image where the obstruction is identified or absent. In some embodiments, the method can include generating an average luminance value based on the luminance data, and determining a deviation of at least a portion of the luminance data from the average luminance value, wherein the luminance uniformity estimate is based on the deviation. The first image can be captured by a first camera mounted at the front of a vehicle and the second image can be captured by a second camera mounted at the rear of the vehicle. The method can further include steps of identifying a reflective street marking in the first image or the second image, and removing a portion of the first image or the second image that includes the reflective street marking. The image data can be wirelessly received from a computing device of a vehicle that includes the vehicle mounted cameras. The method can also include steps of determining a distance between the street lights, and generating multiple luminance uniformity estimates, wherein a number of luminance uniformity estimates generated is based on the distance between the street lights. Any steps of the method can be embodied as instructions, stored on a non-transitory computer-readable medium, that when executed by one or more processors of a computing device, cause the computing device to perform one or more steps of the method.
In other embodiments, a system is set forth for analyzing luminance uniformity of street lights that illuminate a street. The system can include a vehicle configured to travel over the street, a first camera and a second camera attached to the vehicle and configured to capture images of the street while the vehicle is traveling over the street, and a computing device. The computing device can be configured to receive the captured images, convert RGB values from the captured images into luminance data, and estimate luminance uniformity of the street lights using the luminance data. The computing device can be further configured to compile a composite information/image from the images captured by the first camera and the second camera, and the images were captured at different times. In some embodiments the computing device can be configured to convert the RGB values using a matrix having at least two dimensions, and/or segment at least one of the captured images to isolate an area of the street that is depicted in the at least one captured image. The luminance data from a segmented captured image can be used to estimate luminance uniformity. In some embodiments, converting the RGB values can include mapping the RGB values to tristimulus values, and extracting the luminance data from the tristimulus values. As used herein for purposes of the present disclosure, the term "LED" should be understood to include any electroluminescent diode or other type of carrier
injection/junction-based system that is capable of generating radiation in response to an electric signal. Thus, the term LED includes, but is not limited to, various semiconductor-based structures that emit light in response to current, light emitting polymers, organic light emitting diodes (OLEDs), electroluminescent strips, and the like. In particular, the term LED refers to light emitting diodes of all types (including semi-conductor and organic light emitting diodes) that may be configured to generate radiation in one or more of the infrared spectrum, ultraviolet spectrum, and various portions of the visible spectrum (generally including radiation wavelengths from approximately 400 nanometers to approximately 700 nanometers). Some examples of LEDs include, but are not limited to, various types of infrared LEDs, ultraviolet LEDs, red LEDs, blue LEDs, green LEDs, yellow LEDs, amber LEDs, orange LEDs, and white LEDs (discussed further below). It also should be appreciated that LEDs may be configured and/or controlled to generate radiation having various bandwidths (e.g., full widths at half maximum, or FWHM) for a given spectrum (e.g., narrow bandwidth, broad bandwidth), and a variety of dominant wavelengths within a given general color categorization.
For example, one implementation of an LED configured to generate essentially white light (e.g., a white LED) may include a number of dies which respectively emit different spectra of electroluminescence that, in combination, mix to form essentially white light. In another implementation, a white light LED may be associated with a phosphor material that converts electroluminescence having a first spectrum to a different second spectrum. In one example of this implementation, electroluminescence having a relatively short wavelength and narrow bandwidth spectrum "pumps" the phosphor material, which in turn radiates longer wavelength radiation having a somewhat broader spectrum.
It should also be understood that the term LED does not limit the physical and/or electrical package type of an LED. For example, as discussed above, an LED may refer to a single light emitting device having multiple dies that are configured to respectively emit different spectra of radiation (e.g., that may or may not be individually controllable). Also, an LED may be associated with a phosphor that is considered as an integral part of the LED (e.g., some types of white LEDs). In general, the term LED may refer to packaged LEDs, non-packaged LEDs, surface mount LEDs, chip-on-board LEDs, T-package mount LEDs, radial package LEDs, power package LEDs, LEDs including some type of encasement and/or optical element (e.g., a diffusing lens), etc. The term "light source" should be understood to refer to any one or more of a variety of radiation sources, including, but not limited to, LED-based sources (including one or more LEDs as defined above), incandescent sources (e.g., filament lamps, halogen lamps), fluorescent sources, phosphorescent sources, high-intensity discharge sources (e.g., sodium vapor, mercury vapor, and metal halide lamps), lasers, other types of electroluminescent sources, pyro-luminescent sources (e.g., flames), candle-luminescent sources (e.g., gas mantles, carbon arc radiation sources), photo-luminescent sources (e.g., gaseous discharge sources), cathode luminescent sources using electronic satiation, galvano-luminescent sources, crystallo-luminescent sources, kine-luminescent sources, thermo-luminescent sources, triboluminescent sources, sonoluminescent sources, radioluminescent sources, and luminescent polymers.
A given light source may be configured to generate electromagnetic radiation within the visible spectrum, outside the visible spectrum, or a combination of both. Hence, the terms "light" and "radiation" are used interchangeably herein. Additionally, a light source may include as an integral component one or more filters (e.g., color filters), lenses, or other optical components. Also, it should be understood that light sources may be configured for a variety of applications, including, but not limited to, indication, display, and/or illumination. An "illumination source" is a light source that is particularly configured to generate radiation having a sufficient intensity to effectively illuminate an interior or exterior space. In this context, "sufficient intensity" refers to sufficient radiant power in the visible spectrum generated in the space or environment (the unit "lumens" often is employed to represent the total light output from a light source in all directions, in terms of radiant power or "luminous flux") to provide ambient illumination (i.e., light that may be perceived indirectly and that may be, for example, reflected off of one or more of a variety of intervening surfaces before being perceived in whole or in part).
The term "spectrum" should be understood to refer to any one or more frequencies (or wavelengths) of radiation produced by one or more light sources.
Accordingly, the term "spectrum" refers to frequencies (or wavelengths) not only in the visible range, but also frequencies (or wavelengths) in the infrared, ultraviolet, and other areas of the overall electromagnetic spectrum. Also, a given spectrum may have a relatively narrow bandwidth (e.g., a FWHM having essentially few frequency or wavelength components) or a relatively wide bandwidth (several frequency or wavelength components having various relative strengths). It should also be appreciated that a given spectrum may be the result of a mixing of two or more other spectra (e.g., mixing radiation respectively emitted from multiple light sources).
For purposes of this disclosure, the term "color" is used interchangeably with the term "spectrum." However, the term "color" generally is used to refer primarily to a property of radiation that is perceivable by an observer (although this usage is not intended to limit the scope of this term). Accordingly, the terms "different colors" implicitly refer to multiple spectra having different wavelength components and/or bandwidths. It also should be appreciated that the term "color" may be used in connection with both white and non-white light.
The term "color temperature" generally is used herein in connection with white light, although this usage is not intended to limit the scope of this term. Color temperature essentially refers to a particular color content or shade (e.g., reddish, bluish) of white light. The color temperature of a given radiation sample conventionally is characterized according to the temperature in degrees Kelvin (K) of a black body radiator that radiates essentially the same spectrum as the radiation sample in question. Black body radiator color temperatures generally fall within a range of approximately 700 degrees K (typically considered the first visible to the human eye) to over 10,000 degrees K; white light generally is perceived at color temperatures above 1500-2000 degrees K.
Lower color temperatures generally indicate white light having a more significant red component or a "warmer feel," while higher color temperatures generally indicate white light having a more significant blue component or a "cooler feel." By way of example, fire has a color temperature of approximately 1,800 degrees K, a conventional incandescent bulb has a color temperature of approximately 2,848 degrees K, early morning daylight has a color temperature of approximately 3,000 degrees K, and overcast midday skies have a color temperature of approximately 10,000 degrees K. A color image viewed under white light having a color temperature of approximately 3,000 degrees K has a relatively reddish tone, whereas the same color image viewed under white light having a color temperature of approximately 10,000 degrees K has a relatively bluish tone.
The term "lighting fixture" is used herein to refer to an implementation or arrangement of one or more lighting units in a particular form factor, assembly, or package. The term "lighting unit" is used herein to refer to an apparatus including one or more light sources of same or different types. A given lighting unit may have any one of a variety of mounting arrangements for the light source(s), enclosure/housing arrangements and shapes, and/or electrical and mechanical connection configurations. Additionally, a given lighting unit optionally may be associated with (e.g., include, be coupled to and/or packaged together with) various other components (e.g., control circuitry) relating to the operation of the light source(s). An "LED-based lighting unit" refers to a lighting unit that includes one or more LED-based light sources as discussed above, alone or in combination with other non LED- based light sources. A "multi-channel" lighting unit refers to an LED-based or non LED- based lighting unit that includes at least two light sources configured to respectively generate different spectrums of radiation, wherein each different source spectrum may be referred to as a "channel" of the multi-channel lighting unit.
The term "controller" is used herein generally to describe various apparatus relating to the operation of one or more light sources. A controller can be implemented in numerous ways (e.g., such as with dedicated hardware) to perform various functions discussed herein. A "processor" is one example of a controller which employs one or more microprocessors that may be programmed using software (e.g., microcode) to perform various functions discussed herein. A controller may be implemented with or without employing a processor, and also may be implemented as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions. Examples of controller components that may be employed in various embodiments of the present disclosure include, but are not limited to, conventional microprocessors, application specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs).
In various implementations, a processor or controller may be associated with one or more storage media (generically referred to herein as "memory," e.g., volatile and non-volatile computer memory such as RAM, PROM, EPROM, and EEPROM, floppy disks, compact disks, optical disks, magnetic tape, etc.). In some implementations, the storage media may be encoded with one or more programs that, when executed on one or more processors and/or controllers, perform at least some of the functions discussed herein.
Various storage media may be fixed within a processor or controller or may be transportable, such that the one or more programs stored thereon can be loaded into a processor or controller so as to implement various aspects of the present invention discussed herein. The terms "program" or "computer program" are used herein in a generic sense to refer to any type of computer code (e.g., software or microcode) that can be employed to program one or more processors or controllers.
The term "addressable" is used herein to refer to a device (e.g., a light source in general, a lighting unit or fixture, a controller or processor associated with one or more light sources or lighting units, other non-lighting related devices, etc.) that is configured to receive information (e.g., data) intended for multiple devices, including itself, and to selectively respond to particular information intended for it. The term "addressable" often is used in connection with a networked environment (or a "network," discussed further below), in which multiple devices are coupled together via some communications medium or media.
In one network implementation, one or more devices coupled to a network may serve as a controller for one or more other devices coupled to the network (e.g., in a master/slave relationship). In another implementation, a networked environment may include one or more dedicated controllers that are configured to control one or more of the devices coupled to the network. Generally, multiple devices coupled to the network each may have access to data that is present on the communications medium or media; however, a given device may be "addressable" in that it is configured to selectively exchange data with (i.e., receive data from and/or transmit data to) the network, based, for example, on one or more particular identifiers (e.g., "addresses") assigned to it.
The term "network" as used herein refers to any interconnection of two or more devices (including controllers or processors) that facilitates the transport of information (e.g., for device control, data storage, data exchange, etc.) between any two or more devices and/or among multiple devices coupled to the network. As should be readily appreciated, various implementations of networks suitable for interconnecting multiple devices may include any of a variety of network topologies and employ any of a variety of communication protocols. Additionally, in various networks according to the present disclosure, any one connection between two devices may represent a dedicated connection between the two systems, or alternatively a non-dedicated connection. In addition to carrying information intended for the two devices, such a non-dedicated connection may carry information not necessarily intended for either of the two devices (e.g., an open network connection).
Furthermore, it should be readily appreciated that various networks of devices as discussed herein may employ one or more wireless, wire/cable, and/or fiber optic links to facilitate information transport throughout the network.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention.
Fig. 1 provides a diagram that illustrates how luminance uniformity of street lights can be estimated manually by a person.
Fig. 2 is a diagram that illustrates an embodiment of a luminance uniformity measurement system (LUMS) that includes a front facing camera and a back facing camera attached to a vehicle.
Figs. 3A and 3B illustrate how a LUMS can be used to prevent obstruction of images.
Fig. 4 provides a diagram that illustrates steps for segmenting and compiling images in order to generate a luminance uniformity estimation for an area of a street.
Fig. 5 illustrates a method for providing a luminance uniformity estimate of street lights of a street using images collected by a vehicle traveling along the street.
Fig. 6 illustrates a method for estimating luminance uniformity based on a composite information/image created from multiple images of a street.
DETAILED DESCRIPTION
The described embodiments relate to systems, methods, and apparatuses for determining street light uniformity using a vehicle that collects images while traveling over a street. Manual methods of determining street light uniformity can prove to be inefficient because of the time and cost of shutting down portions of streets to discretely analyze an output of a section of street lights. Furthermore, weather and other environmental conditions can delay such processes, making them difficult to exercise expeditiously. Despite the inefficiency of such methods, they are typically undertaken to verify compliance of street lighting with legal standards, identify any degradation that has occurred at the street lights, and track environmental conditions that have an effect on lighting. For example, some legal standards require that a street be sectioned off at particular dimensions, and that a certain number of measurements be taken within each section of the street. In order to reduce the time and costs associated with determining streetlight uniformity, a luminance uniformity measurement system (LUMS) can be employed. The LUMS can include one or more cameras attached to a vehicle such as an automobile, airplane, drone, and/or any other vehicle suitable for moving a camera over a street. The LUMS can employ one or more vehicles that can be autonomous or controlled by a person during operation. In some embodiments, the vehicle can include a camera mounted at a front side of the vehicle, and another camera mounted on a rear side of the vehicle.
A camera of the LUMS can include a charge-coupled device (CCD) image sensor for capturing color data at each pixel of the CCD. The color data can correspond to images periodically captured by the vehicle while the vehicle is traveling along the street.
Because the LUMS vehicle can operate on busy streets, some images captured by the LUMS can include light reflected from oncoming vehicles. The LUMS can handle such images using an algorithm that detects distortion and other light obstructions, and discards those images that include a threshold amount of distortion or obstruction. Furthermore, when an image of the street is captured by the front facing camera and is obstructed, an image from the rear facing camera can be used in place of the image in order to avoid processing images that include obstructions. This also applies to images captured by the rear facing camera, which can also be obstructed. Therefore, images of the street captured by the rear facing camera that are obstructed can be discarded, and images from the front facing camera can be used in place of the obstructed images. In some embodiments, images taken from the front facing camera and the rear facing camera can be merged into a composite information/image where any obstructions or distortions are filtered or segmented out.
Images that include other features beyond street pavement, or other surfaces that indicate luminance uniformity, can be removed using a segmentation algorithm for segmenting non-pavement pixels from the images. Street markings, such as broken white lines, can be more reflective than street pavement; therefore, it can be beneficial to segment such street markings out of the images. Image segmentation can be performed using various algorithms that can include a Watershed algorithm. The Watershed algorithm can be performed by a computing device that converts the image into a gray scale image that can be analyzed as a topographical image for segmenting according to the peaks and valleys in the topographical image. In some embodiments, a pixel classifier such as a Support Vector Machine and/or a deep convolutional neural network can be used. A Support Vector Machine is a supervised learning algorithm for analyzing data such as image data. A deep convolutional neural network is a feed-forward artificial neural network having convolution layers for preserving spatial relationships between pixels. Once the images have been segmented to isolate the street images, the street images can be combined to form a composite information/image, such as a panoramic image, having greater dimensions than the originally captured images. The street images can be combined by stretching and/or stitching the street images together from multiple different segments. The resulting street image can then be sampled to estimate luminance uniformity at different portions of the street.
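As an illustration of the marking-removal idea described above, the following Python sketch (assuming OpenCV and NumPy, which the disclosure does not require, and using an arbitrary brightness threshold) masks out bright, highly reflective pixels such as broken white lines before any luminance analysis; the function name and parameters are illustrative, not part of the disclosed system.

# Illustrative sketch, not the disclosed implementation: mask reflective
# street markings out of a night-time pavement image before luminance analysis.
import cv2
import numpy as np

def pavement_mask(bgr_image, marking_threshold=200):
    """Return a boolean mask that is True for likely pavement pixels.

    Bright, nearly saturated pixels are treated as reflective street markings
    (e.g., broken white lines) and excluded, since they reflect far more light
    than bare asphalt and would bias a uniformity estimate.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    markings = gray >= marking_threshold          # candidate lane markings
    markings = cv2.dilate(markings.astype(np.uint8), np.ones((5, 5), np.uint8)) > 0
    return ~markings                              # keep everything else

# Example: keep only pavement pixels for later RGB-to-luminance conversion.
image = cv2.imread("street_frame.png")            # hypothetical file name
mask = pavement_mask(image)
pavement_pixels = image[mask]                     # N x 3 array (BGR order as loaded by OpenCV)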
In some embodiments, color data from the street images can be captured as RGB values, which correspond to the red, green, and blue channels of the CCD array of the camera. The RGB values can then be mapped to a different color space using different matrix operations, which can be performed at a computing device attached to the vehicle, or a remote computing device that can receive the images from the LUMS for processing. The color spaces can include an XYZ color space, LMS color space, and/or any other color space suitable for providing information related to luminance. The matrix operations used for converting the RGB values to XYZ values can be selected according to the conditions and/or features of the camera capturing the images. Once the XYZ values have been derived, the Y values, which indicate a relative luminance level of the pixels, can be used to compare relative luminance values among the various pixels captured for each image. Based on this comparison, overall luminance uniformity among all of the street images can be
characterized. For example, street dimensions and/or distances between street lights can be calculated by LUMS and used in combination with the luminance levels to estimate luminance uniformity of the streetlights. Furthermore, while some lighting standards can require uniformity measurements to occur at a given repeated distance, LUMS can provide a higher precision measurement than what many standards require. Using an overall luminance uniformity estimation, which can be an average of luminance values, a deviation from the overall luminance uniformity can be calculated for different samples of the street images in order to identify locations where luminance of the streetlights is not uniform.
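A minimal numeric sketch of the conversion and comparison described above, under assumptions the disclosure leaves open: NumPy, linear RGB values in [0, 1] (gamma already removed), and the conventional sRGB/D65 conversion matrix. The function names are illustrative only.

# Sketch: convert per-pixel RGB to CIE XYZ with a 3x3 matrix, take the Y
# channel as relative luminance, and report deviation from the average as a
# simple uniformity indicator.
import numpy as np

M_D65 = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])

def relative_luminance(rgb_pixels, m=M_D65):
    """rgb_pixels: N x 3 array of linear RGB values in [0, 1]."""
    xyz = rgb_pixels @ m.T        # per-pixel [X, Y, Z]
    return xyz[:, 1]              # Y = relative luminance

def uniformity_estimate(rgb_pixels):
    y = relative_luminance(rgb_pixels)
    mean_y = y.mean()
    deviation = np.abs(y - mean_y) / mean_y   # fractional deviation per pixel
    return mean_y, deviation.mean()           # lower mean deviation = more uniform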
Fig. 1 provides a diagram 100 that illustrates how luminance uniformity of streetlights 108 can be estimated manually by a person 104. Typically, when luminance uniformity measurements are conducted by a person 104, a street section 106 is blocked off so that traffic cannot enter the street section 106, as illustrated in the view 102 of the street section 106. Although blocking off the street can be necessary for performing the measurements by a person 104, such practice can be time consuming and risky because the measurements are typically taken at night. Furthermore, legal standards often govern the dimensions in which each luminance uniformity measurement must be taken, making manual measurements even more inefficient. For example, some standards can require the person 104 to use a photometer to measure luminance uniformity at a finite number of locations 118 between two points 112, representing the streetlights 108. The locations 118 can be within a field having a length 116 and a width 114 that are also defined by some standard. Because of the limitations on measurement locations 118 per pair of streetlights 108, the scalability of the manual measurement process can be impractical.
The embodiments described herein provide different methods for conducting luminance uniformity measurements using a vehicle that can include one or more cameras, and an image processing device for filtering and analyzing images. The vehicle can be any mobile device that can traverse streets while carrying one or more cameras for capturing images of the streets. For example, the vehicle can be an automobile that can drive along the street section 106 and capture images of the locations 118. In this way, compliance with standards can be verified without blocking off the street section 106 or requiring the person 104 to manually take measurements at each location 118 of each street section 106.
This allows the luminance uniformity measurements to be performed at night during a variety of weather conditions. Furthermore, image filtering and analysis can be performed for identifying degradation of the lighting and/or the streets in order that maintenance can be scheduled on a more regular basis to cure the degradation.
Fig. 2 is a diagram 200 that illustrates an embodiment of a luminance uniformity measurement system (LUMS) that includes a front facing camera and a back facing camera attached to a vehicle 220. Each of the front facing camera and/or the rear facing camera can be a high definition camera and/or a low illumination camera, which is able to capture clear images in low light intensity environments. The vehicle 220 can include a computing device that is connected to each of the front facing camera and the rear facing camera. The computing device can include a memory for collecting the images provided by the front facing camera and the rear facing camera. In this way, while the vehicle 220 is moving along a first direction 206, the vehicle 220 is able to collect images of a first area 222 of a street 224 using the front facing camera and simultaneously collect images of a second area 226 using the rear facing camera. By collecting images while in motion and in opposite directions, the LUMS can quickly obtain images that can be used to determine luminance uniformity without the need to block off the street 224 from traffic.
In some embodiments, the LUMS can perform luminance uniformity calculations at the vehicle 220 using software executing at the computing device of the vehicle. In other embodiments, the computing device of the vehicle 220 can be connected to a transmitter (not depicted) for sending the collected images over a network 202 to a network device 204. The network 202 can be a private or public network, such as the Internet, and the network device 204 can be a server device that is loaded with software for performing image analysis and luminance uniformity calculations. In some embodiments, a stream of images can be provided from the vehicle 220 to the network device 204 for processing. In other embodiments, the computing device in the vehicle 220 can filter the collected images according to usability and transmit the usable images to the network device 204 for further processing. In yet other embodiments, the luminance uniformity calculations can be performed at the computing device of the vehicle 220, and the results of the luminance uniformity calculations can be transmitted to the network device 204.
Usability of images can be determined according to one or more image quality metrics that indicate how well the details of the street 224 have been captured in each image. Furthermore, the computing device of the vehicle 220 can cause each of the front facing camera and the rear facing camera to capture images of the same area of the street 224 as the vehicle 220 is traveling along the street. The captured images of the same area can then be compared to identify the image that captures the area with the best quality. For example, the computing device can determine the image of the area that has the least amount of distortion, is the sharpest, includes the most accurate color, and/or satisfies any other metric for determining superior quality of one image over another. Once the computing device has identified the image that captures the area with the best quality and/or detail, any other image of the area can be discarded.
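One possible realization of such a usability filter, offered only as a sketch: score frames of the same street area by sharpness (variance of the Laplacian) and keep the best one. The metric, threshold, and function names are assumptions; the disclosure requires only that some image quality metric be applied.

# Sketch: pick the sharpest usable frame among candidates covering the same area.
import cv2

def sharpness_score(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frame(candidate_images, min_score=50.0):
    """Return the sharpest usable frame of an area, or None if all are unusable."""
    scored = [(sharpness_score(img), img) for img in candidate_images]
    score, image = max(scored, key=lambda pair: pair[0])
    return image if score >= min_score else None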
In some embodiments, the vehicle 220 can be an automobile that is driven by a person in order to gather images for determining luminance uniformity. In other
embodiments, the vehicle 220 is an autonomous automobile that is capable of traveling over the street 224 without direct or real-time control from a person. In this way, safety risks associated with manual luminance uniformity measurements of streets can be eliminated by removing the need to have a person on the street. In yet other embodiments, the vehicle 220 can be an aerial drone capable of traveling autonomously in the air and over the street 224. Furthermore, the aerial drone can include one or more cameras for capturing images of the street 224 for determining luminance uniformity of streetlights 208 that illuminate the street 224. The aerial drone can move slower or faster than the traffic that is traveling along the street 224, and suspend itself over locations along the street 224 to take luminance uniformity measurements at those locations. In some embodiments, the computing device of the vehicle 220 can be connected to a global positioning system (GPS) or other position coordinate mechanisms (e.g., cellular triangulation, etc.), and each image captured by the front facing camera and the rear facing camera can be associated with position coordinate data. In this way, the computing device can filter the position coordinate data to determine whether there are any images that do not depict a particular section of the street.
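A simplified sketch of position tagging and coverage checking, assuming positions can be reduced to a one-dimensional distance along the street; the data structure and section length are illustrative assumptions rather than the disclosed mechanism.

# Sketch: tag frames with position data and find street sections with no coverage.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    image_path: str
    position_m: float      # distance along the street, e.g., derived from GPS

def uncovered_sections(frames: List[Frame], section_length_m: float,
                       street_length_m: float) -> List[int]:
    """Return indices of street sections that no captured frame covers."""
    covered = {int(f.position_m // section_length_m) for f in frames}
    total = int(street_length_m // section_length_m) + 1
    return [i for i in range(total) if i not in covered]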
Figs. 3A and 3B illustrate how a luminance uniformity measurement system (LUMS) can be used to prevent obstruction of images. Specifically, FIG. 3A illustrates a perspective view 300 of a LUMS vehicle 312 moving along a street 310 in order to capture images that can provide information related to luminance uniformity of the street lights 302. The LUMS vehicle 312 can include a front facing camera and a rear facing camera (not depicted) that can capture, respectively, an image of a first area 304 of the street 310 and an image of a second area 306 of the street 310. Because the LUMS vehicle 312 can operate on the street 310 while the traffic is moving through the street 310, an automobile 308 also moving on the street 310 can obstruct images captured by a camera of the LUMS vehicle 312. For example, the automobile 308 can enter the first area 304 while the front facing camera is capturing an image of a portion of the street 310 included in the first area 304, causing an obstruction to the image. A computing device of the LUMS can determine that the image was obstructed and direct the rear facing camera to re-capture the image of the street previously obstructed. For example, the computing device can determine that the automobile 308 has passed the LUMS vehicle 312 to an extent that the automobile 308 would not be obstructing an image of the street 310 captured by the rear facing camera, as illustrated in perspective view 314 of FIG. 3B. In response to this determination, the rear facing camera can capture the image of the street corresponding to the area that was previously obstructed by the automobile 308.
In some embodiments, each of the front facing camera and the rear facing camera simultaneously collect images, and images that correspond to the same area on the street can be segmented and combined. For example, an image can be segmented when a portion of the image is obstructed, but another portion of the image is usable for purposes of determining luminance uniformity. The usable portion of the image can be maintained and combined with a usable portion of a different image in order to create a composite information/image of the street 310 that is suitable for determining luminance uniformity at the street 310. In some embodiments, street markings 316 can be identified within images and can be filtered out in order to provide a more accurate estimation of luminance uniformity. Because such street markings 316 can reflect light at a much greater rate than bare asphalt, street markings 316 can negatively influence luminance uniformity estimations and render them inaccurate.
Fig. 4 provides a diagram 400 that illustrates steps for segmenting and compiling images in order to generate a luminance uniformity estimation for an area of a street. Specifically, diagram 400 illustrates how a first image 402 captured by a front facing camera 413 of a LUMS and a second image 404 captured by a rear facing camera 415 of the LUMS can be used to estimate luminance uniformity for an area of a street. Each of the first image 402 and the second image 404 can be captured during night time when street lights alongside a street 406 are illuminated. In this way, the first image 402 and the second image 404 will include light from the street lights that is reflecting from the street 406.
A computing device 422 of the LUMS that receives the first image 402 and the second image 404 from the front facing camera 413 and the rear facing camera 415, respectively, can identify that an obstruction (e.g., a truck 408) is present in the first image 402. In response, the computing device 422 can use the second image 404 to create a composite information/image 418 of the street 406. For example, the computing device can use an image segmentation algorithm to identify a first image region 406 where a surface of the street 406 is present, and crop the first image region 406. Furthermore, the computing device 422 can determine that the second image 404 includes a portion of the surface of the street that was obstructed by the truck 408. Because the second image 404 was captured by the rear facing camera 415 attached to a LUMS vehicle traveling on the street behind the truck 408, the second image 404 will not include the truck 408. Therefore, a second image region 414, corresponding to the region obstructed by the truck 408 in the first image 402, can be cropped and combined with the first image region 406 in an arrangement 410 that creates a composite information/image 418.
The composite information/image 418 can be an image that is a combination of multiple images or portions of images captured by the rear facing camera 415 and/or the front facing camera 413. The segmentation of the images to remove obstructions and other artifacts can be performed using any algorithm suitable for isolating features of an image. For example, a Watershed algorithm can be used to isolate the street 406 in the first image 402 and the second image 404. A Watershed algorithm can operate by converting the images into grayscale images and filling different colors into areas of the grayscale images that have different shades of gray. The shades of gray can be analogous to valleys of different depth, and valleys having similar shades of gray can be filled in with the same color. As the image is filled in with different colors at the different depths, the colors can begin to merge, at which point, barriers can be drawn to prevent the colors from merging. The process can continue until all valleys in the image are filled to their peaks. The resulting barriers that have been drawn on the image can provide the segmentation. To further improve segmentation, a Support Vector Machine and/or deep convolutional neural network can be used. A Support Vector Machine is a machine learning algorithm that uses supervised learning for classification and/or image segmentation. A deep convolutional neural network uses convolution of three dimensional data to perform image recognition and segmentation.
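As a rough illustration of marker-based Watershed segmentation, the following conventional OpenCV recipe could be used; the thresholds and marker heuristics are assumptions for illustration and are not parameters prescribed by this disclosure.

# Sketch: marker-based watershed segmentation as one way to isolate regions
# such as the road surface.
import cv2
import numpy as np

def watershed_segments(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Sure background by dilation, sure foreground by distance transform.
    kernel = np.ones((3, 3), np.uint8)
    sure_bg = cv2.dilate(binary, kernel, iterations=3)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)

    # Label the "valleys" and let watershed draw barriers where they merge.
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(bgr_image, markers)   # barrier pixels are labeled -1
    return markers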
Once the first image 402 and the second image 404 have been segmented to isolate the regions that include the street 406, and combined to form the composite information/image 418, the composite information/image 418 can be processed to estimate luminance uniformity. The composite information/image 418 can be represented by a number of pixels 420 that are sampled by the computing device 422 of the LUMS. It should be noted that the computing device 422 can be part of the LUMS or otherwise in communication with the LUMS. The pixels 420 can include RGB (i.e., red, green, blue) data organized in a frame of the image. The RGB values for each pixel 420 can be converted into CIE XYZ color space, or any other color space suitable for determining a luminance corresponding to each pixel or group of pixels. An RGB converter 424 can be embodied as software on the computing device 422 for converting the RGB values to the CIE XYZ color space. The RGB values can be converted to CIE XYZ color space using a matrix operation, such as the operation described by Equation (1) below.
[X]       [R]
[Y] = M * [G]                                                        (1)
[Z]       [B]

In Equation (1), RGB values are mapped to XYZ values, corresponding to tristimulus values in the CIE XYZ color space, using the matrix M. The matrix M can change according to the source for the CIE model. For example, a standard E model reference source for the CIE XYZ color space can use the following matrix from Equation (2).

    [0.4887180  0.3106803  0.2006017]
M = [0.1762044  0.8129847  0.0108109]                                (2)
    [0.0000000  0.0102048  0.9897952]

Alternatively, when an RGB model is used with a standard D65 source as a reference source, the following matrix can be used from Equation (3).

    [0.4124564  0.3575761  0.1804375]
M = [0.2126729  0.7151522  0.0721750]                                (3)
    [0.0193339  0.1191920  0.9503041]
The selection of the matrix, M, can also depend on the conditions of the camera capturing the first image 402 and the second image 404. In some embodiments, the computing device 422 can store different matrices, M, that are associated with different conditions for the cameras capturing the images. In this way, the LUMS can adapt to the changing conditions of the cameras as the LUMS vehicle is traveling along the street 406 collecting images. Once the RGB values have been converted into XYZ values in the CIE XYZ color space, Y values can be calculated. The Y values can indicate relative luminance values for each pixel 420 or group of pixels of the composite information/image 418. A luminance analyzer 426, which can be embodied as software operating on the computing device 422, can process the Y values in order to make luminance uniformity estimates for the street 406 captured in the composite information/image 418. A luminance uniformity estimate can be generated by first averaging Y values for a composite image 418 of a street 406. In some embodiments, the luminance uniformity estimate can be generated using an average of all Y values for the composite information/image 418, and in other embodiments, the luminance uniformity estimate can be generated using an average of less than all the Y values from the composite information/image 418. The average of the Y values can be an overall relative luminance that is used as a reference for calculating a standard deviation of Y values of pixels at different locations in the composite information/image 418. The standard deviations can provide an estimate of luminance uniformity for various locations within the composite information/image 418 and/or any other image captured by the front facing camera and/or the rear facing camera. In some embodiments, the cameras of the LUMS can be calibrated to provide absolute luminance values so that estimates of luminance uniformity can be based on the absolute luminance values, rather than the relative luminance values from the CIE XYZ color space.
In some embodiments, the number of luminance uniformity estimates for a section of street 406 can depend on the area of the street 406 and the number of street lights that are illuminating the area. For example, the computing device 422 can determine how many street lights are located around an area of a street and the size of the area using a computer vision algorithm. Based on the number of street lights and/or the size of the area, the computing device 422 can determine how many luminance uniformity estimates will be generated for different reference locations within the area of the street.
In other embodiments, the LUMS vehicle can operate at night and include headlights and tail lights, in order to comply with traffic laws governing night time driving. Because the headlights and tail lights can interfere with measurements of street light luminance, the luminance uniformity estimate generated by the LUMS can be compensated to provide a more accurate luminance uniformity estimate. For example, the front facing camera of the LUMS vehicle can capture one or more images of a street or surface that is illuminated by the headlights but not illuminated by street lights. Additionally, the rear facing camera of the LUMS vehicle can capture one or more images of the street or surface that is illuminated by tail lights but not illuminated by street lights. Each headlight and tail light image can then be analyzed to determine how much luminance the headlights and tail lights contribute to the images captured by the front facing camera and the rear facing camera. Furthermore, each headlight and tail light image can be analyzed to identify how light from the headlight and tail light distributes over a street surface or other surface. The information gleaned from these analyses can be used to compensate images captured while the LUMS vehicle is traveling over a street illuminated by street lights. For example, the luminance values measured from the images can be reduced by the amount of luminance that was contributed to the images by the headlights and/or tail lights. Furthermore, the overall average luminance for an image can be calculated using luminance values that have been reduced or otherwise compensated to account for the light emitted by the headlights and/or tail lights.
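A hedged sketch of the compensation idea: estimate the luminance contributed by the vehicle's own lamps from calibration frames taken where no street lighting is present, then subtract that baseline from frames captured under street lights. The simple pixelwise subtraction model is an assumption; the disclosure states only that luminance values can be reduced or otherwise compensated.

# Sketch: remove the headlight/tail-light contribution from luminance maps.
import numpy as np

def lamp_baseline(calibration_luminance_maps):
    """Average relative-luminance maps of a dark street lit only by the lamps."""
    return np.mean(np.stack(calibration_luminance_maps), axis=0)

def compensate(street_luminance_map, baseline):
    """Remove the lamp contribution before computing averages and deviations."""
    return np.clip(street_luminance_map - baseline, 0.0, None)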
Fig. 5 illustrates a method 500 for providing a luminance uniformity estimate of street lights of a street using images collected by a vehicle traveling along the street. The method 500 can be performed by a computing device, network device, and/or any other device suitable for processing images. The method 500 can include a block 502 of receiving an image of a street captured by a front facing camera and/or a rear facing camera attached to a vehicle that is traveling along the street while the street is illuminated by street lights. The image can be a single frame captured discretely at a particular time, or the image can be part of a video that is being transmitted for purposes of determining luminance uniformity of street lights. The method 500 can also include a block 504 of determining that the image includes enough area of the street for making a luminance uniformity estimate. Part of the purpose for this determination is to ensure that the image includes at least some amount of light that is from the street lights and reflecting off the street. Furthermore, this determination can be based on a computer vision algorithm and/or a machine learning algorithm for distinguishing between images that include portions of a street and images that do not include portions of a street. For example, a deep convolutional neural network can be used for recognizing images that include portions of the street.
At block 506 of method 500, the image of the street can be segmented to isolate a portion of the image that includes the street. Segmentation can also be performed using a deep convolutional neural network and/or a Support Vector Machine, which can classify similar pixels in order to filter segments of an image that are unrelated to an area of interest. The method 500 can further include a block 508 of converting one or more pixels from the image into CIE XYZ color space, which seeks to link the physical color produced by electromagnetic frequencies with tristimulus values. The tristimulus values correspond to an amount of sensation necessary to produce the physical sensation of color in the cells of an eye. For example, the tristimulus value "Y" can correspond to relative luminance, the value "Z" can be associated with an amount of blue stimulation, and the value "X" can correspond to non-negative response curves. Once the pixels have been converted from RGB data provided by a camera to CIE XYZ color space, the "Y" tristimulus value can be used to estimate luminance uniformity.
At block 510, luminance uniformity can be estimated for the area of the street using the one or more converted pixel values. The estimate of luminance uniformity can be calculated by averaging the Y values corresponding to multiple different areas of a street and determining deviations of the Y values from the average. The amount of deviation within an area of the street can provide an estimate of luminance uniformity for the area of the street. For example, if there is no deviation of the Y values from the average for an area of a street, the computing device can indicate that the street lights for that area are providing uniform luminance. However, if 50% of the Y values sampled over an area of a street image are less than the average Y value by 90%, the computing device can indicate that the street lights for that area are not providing uniform luminance.
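The non-uniformity rule in the example above could be expressed as follows; treating the 50% and 90% figures as fixed thresholds is an assumption made only for illustration.

# Sketch: flag an area as non-uniform when a large share of its sampled Y
# values fall far below the average.
import numpy as np

def is_non_uniform(y_values, share=0.5, drop=0.9):
    y = np.asarray(y_values, dtype=float)
    mean_y = y.mean()
    low = y < mean_y * (1.0 - drop)        # Y values at least 90% below the average
    return low.mean() >= share             # e.g., at least half of the samples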
Fig. 6 illustrates a method 600 for estimating luminance uniformity based on a composite information/image created from multiple images of a street. The method 600 can be performed by a computing device, network device, and/or any other device suitable for processing images. The method 600 can include a block 602 of receiving a first image from a front facing camera of a vehicle and a second image from a rear facing camera of the vehicle. The first image and the second image can be taken concurrently, or at different times. For example, depending on the direction and speed that the vehicle is traveling, the rear facing camera can capture an area of a street on which the vehicle is traveling immediately after the front facing camera captured the same area. In other words, the velocity of the vehicle can be used to time the capture of images from each camera. However, in some embodiments, each of the front facing camera and/or the rear facing camera can be video cameras that simultaneously stream images to a computing device that is attached to or otherwise in communication with the vehicle.
At block 604, a determination can be made that a portion of the first image includes an object that is obstructing a view of an area of a street. The obstruction can be an automobile on the street or any other object capable of obstructing the view of a street surface. The determination can be made using a computer vision algorithm for performing object recognition. At block 606, a determination can be made that the second image includes the area of the street without the object obstructing the view. The determination at block 606 can also be performed using a computer vision algorithm. Furthermore, the determination at block 606 can be performed by comparing the areas in the first image surrounding the object to the areas in the second image to identify similarities. The identified similarities allow conclusions to be drawn about whether the portions of the street captured in each image are the same portions. When the images do contain the same portions of the street, the images can be used to cure obstructions found in either of the images.
At block 608, the first image is segmented to remove the portion of the first image that includes the obstruction. The segmenting of the first image can be performed using a Support Vector Machine, deep convolutional neural network, a Watershed algorithm, and/or any other process suitable for segmenting an image. At block 610, portions of the first image and the second image are combined to create a composite information/image of the street without the obstruction. Orientation and dimensions of the first image and the second image can be adjusted in order to create the composite information/image. At block 612, luminance uniformity of the street lights illuminating the street can be estimated. In some embodiments, the composite information/image can be used to determine whether the street is experiencing degradation and needs maintenance. For example, when a pothole forms in a street, typically the surfaces that make up the pothole will reflect light differently than the surfaces surrounding the pothole. These differences can be identified by the computing device performing method 600 and used to provide notifications to persons or devices that are responsible for fixing any degradation occurring at the street.
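A minimal sketch of the combination at block 610, assuming the second image has already been registered to the first image's frame (an assumption that simplifies the orientation and dimension adjustments described above): pixels under the obstruction mask are taken from the second image.

# Sketch: compose an unobstructed street view from two aligned frames.
import numpy as np

def composite(first_image, second_image, obstruction_mask):
    """obstruction_mask: boolean H x W array, True where the view is blocked."""
    out = first_image.copy()
    out[obstruction_mask] = second_image[obstruction_mask]
    return out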
While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e. "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty ("PCT") do not limit the scope.

Claims

CLAIMS:
1. A method for estimating luminance uniformity of street lights (208, 302) using images (402, 404) captured by one or more vehicle mounted cameras configured to capture RGB images, the method comprising:
receiving image data corresponding to a first image and a second image of an area of a street (224, 310, 406) that is illuminated by the street lights, captured by the one or more cameras at different times;
identifying an obstruction (408) in the first image or the second image, wherein the obstruction corresponds to an object that limits an amount of light that is incident upon the street from the street lights;
forming composite information (418) of the area using portions of the first image and the second image;
converting RGB data from the composite information (418) into luminance data; and
providing a luminance uniformity estimate for the street lights based on the luminance data.
2. The method of claim 1, wherein forming the composite information includes combining portions (412, 414) of the first image and the second image such that the obstruction is absent from the composite information.
3. The method of claim 1, further comprising:
generating an average luminance value based on the luminance data; and determining a deviation of at least a portion of the luminance data from the average luminance value, wherein the luminance uniformity estimate is based on the deviation.
4. The method of claim 1, wherein the first image is captured by a first camera (413) mounted at the front of a vehicle (220, 312) and the second image is captured by a second camera (415) mounted at the rear of the vehicle.
5. The method of claim 1, further comprising:
identifying a reflective street marking in the first image or the second image; and
removing a portion of the first image or the second image that includes the reflective street marking.
6. The method of claim 1, wherein the image data is wirelessly received from a computing device (422) of a vehicle that includes the vehicle mounted cameras.
7. The method of claim 1, further comprising:
determining a distance between the street lights; and
generating multiple luminance uniformity estimates, wherein a number of luminance uniformity estimates generated is based on the distance between the street lights.
8. A system for analyzing luminance uniformity of street lights (208, 302) that illuminate a street (224, 310, 406), the system comprising:
a vehicle (220, 312) configured to travel over the street;
a first camera (413) and a second camera (415) attached to the vehicle and configured to capture images (402, 404) of an area of the street illuminated by the street lights (208, 302) while the vehicle is traveling over the street; and
a computing device (422) that is configured to receive the captured images, identify an obstruction (408) in the first image or the second image, wherein the obstruction corresponds to an object that limits an amount of light that is incident upon the street from the street lights, generate a composite information/image (418) of the area using portions of the first image and the second image where the obstruction is absent to remove the obstruction from the first image or the second image, convert RGB values from the composite information/image into luminance data, and estimate luminance uniformity of the street lights using the luminance data.
9. The system of claim 8, wherein the computing device is further configured to convert the RGB values using a matrix having at least two dimensions.
10. The system of claim 8, wherein the computing device is further configured to segment at least one of the captured images to isolate an area of the street that is depicted in the at least one captured image.
11. The system of claim 8, wherein the luminance data from a segmented captured image is used to estimate luminance uniformity.
12. The system of claim 8, wherein converting the RGB values includes mapping the RGB values to tristimulus values, and extracting the luminance data from the tristimulus values.
13. A non-transitory computer readable medium configured to store instructions that, when executed by one or more processors of a computing device, cause the computing device to perform steps corresponding to any of claims 1-6.
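For illustration only, the conversion from RGB data to tristimulus values and the deviation-based uniformity estimate recited in claims 1, 3 and 12 could be sketched as below. The sRGB-to-XYZ matrix, the assumption of linearized RGB values in [0, 1], and the particular deviation and minimum-over-average metrics are example choices that the claims do not prescribe.

```python
import numpy as np

# Standard sRGB -> CIE XYZ matrix (D65 white point); the middle row gives Y,
# the relative luminance of each pixel.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])


def luminance_uniformity(rgb_image):
    """rgb_image: H x W x 3 array of linear RGB values in [0, 1]."""
    # Map every pixel to its Y tristimulus value (relative luminance).
    luminance = rgb_image @ RGB_TO_XYZ[1]
    average = luminance.mean()
    # Worst-case deviation from the average luminance (cf. claim 3).
    deviation = np.abs(luminance - average).max()
    # A common overall uniformity figure: minimum over average luminance.
    overall_uniformity = luminance.min() / average
    return average, deviation, overall_uniformity
```

In this sketch the Y tristimulus value serves as the relative luminance from which the average, the deviation, and the overall uniformity figure are derived.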
PCT/EP2018/053930 2017-02-22 2018-02-16 Street light uniformity measurement using data collected by a camera-equipped vehicle WO2018153791A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762462159P 2017-02-22 2017-02-22
US62/462,159 2017-02-22
EP17160257.6 2017-03-10
EP17160257 2017-03-10

Publications (1)

Publication Number Publication Date
WO2018153791A1 true WO2018153791A1 (en) 2018-08-30

Family

ID=58448287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/053930 WO2018153791A1 (en) 2017-02-22 2018-02-16 Street light uniformity measurement using data collected by a camera-equipped vehicle

Country Status (1)

Country Link
WO (1) WO2018153791A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007012839A2 (en) * 2005-07-23 2007-02-01 The Queen's University Of Belfast Light measurement method and apparatus
US20100121561A1 (en) * 2007-01-29 2010-05-13 Naoaki Kodaira Car navigation system
US20150022659A1 (en) * 2012-03-06 2015-01-22 Iwasaki Electric Co., Ltd. Luminance measuring apparatus
US20140313504A1 (en) * 2012-11-14 2014-10-23 Fundación Cidaut Procedure and device for the measurement of dynamic luminance and retroreflection of road markings and signals and the obtention of the shape, position and dimensions of the same

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GLENN J ET AL: "CALIBRATION AND USE OF CAMERA-BASED SYSTEMS FOR ROAD LIGHTING ASSESSMENT", INTERNATIONAL JOURNAL OF LIGHTING RESEARCH AND TECHNO, DIVISION, LONDON, vol. 32, no. 1, 1 January 2000 (2000-01-01), pages 33 - 40, XP009073321, ISSN: 1365-7828, DOI: 10.1177/096032710003200105 *
GLENN J ET AL: "PRACTICAL LIMITATIONS AND MEASUREMENTS FOR CAMERA BASED ROAD LUMINANCE/LIGHTING STANDARDS ASSESSMENT", JOURNAL OF THE ILLUMINATING ENGINEERING SOCI, ILLUMINATING ENGINEERING SOCIETY. NEW YORK, US, vol. 28, no. 1, 1 January 1999 (1999-01-01), pages 64 - 70, XP009073351, ISSN: 0099-4480 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905952A (en) * 2019-04-03 2019-06-18 北京百度网讯科技有限公司 Roam lamp control device and method

Similar Documents

Publication Publication Date Title
US8754960B2 (en) Systems and apparatus for image-based lighting control and security control
WO2019084173A1 (en) Integrated automotive adaptive driving beam headlamp and calibration method
US10531539B2 (en) Method for characterizing illumination of a target surface
KR101950850B1 (en) The apparatus and method of indoor positioning with indoor posioning moudule
US8605154B2 (en) Vehicle headlight management
US9900954B2 (en) Lighting control based on one or more lengths of flexible substrate
CN108781494B (en) Method for characterizing illumination of a target surface
JP6884219B2 (en) Image analysis technique
CN109076677B (en) Method for determining the contribution and orientation of a light source at a predetermined measurement point
US20190182930A1 (en) Method for calibration of lighting system sensors
WO2018153791A1 (en) Street light uniformity measurement using data collected by a camera-equipped vehicle
US11153546B2 (en) Low-light imaging system
US20190268997A1 (en) System and method for managing lighting based on population mobility patterns
WO2018091315A1 (en) System and method for managing lighting based on population mobility patterns
Herrnsdorf et al. LED-based photometric stereo-imaging employing frequency-division multiple access
Le Francois et al. Top-down illumination photometric stereo imaging using light-emitting diodes and a mobile device
KR101943195B1 (en) Apparatus for method for controlling intelligent light
EP3073807A1 (en) Apparatus and method for controlling a lighting system
JP2012008924A (en) Number reader
KR102014722B1 (en) Anti-glare lightning apparatus
WO2020121626A1 (en) Image processing device, computer program, and image processing system
EP4057779A1 (en) System comprising a luminaire and a camera module
JP2020184170A (en) Image analyzer, electronic controller, and image analysis method
JP2021044667A (en) Exposure control apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18704569

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18704569

Country of ref document: EP

Kind code of ref document: A1