WO2022088901A1 - Intelligent light switching method, system and related equipment - Google Patents

Intelligent light switching method, system and related equipment

Info

Publication number
WO2022088901A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
image
light source
line
vehicle
Prior art date
Application number
PCT/CN2021/114823
Other languages
English (en)
French (fr)
Inventor
苏文尧
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to JP2023526210A (publication JP2023548691A)
Priority to EP21884633.5A (publication EP4224362A4)
Publication of WO2022088901A1
Priority to US18/307,615 (publication US20230256896A1)

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
    • B60Q1/14Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights having dimming means
    • B60Q1/1415Dimming circuits
    • B60Q1/1423Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic
    • B60Q1/143Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic combined with another condition, e.g. using vehicle recognition from camera images or activation of wipers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
    • B60Q1/14Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights having dimming means
    • B60Q1/1415Dimming circuits
    • B60Q1/1423Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/105Controlling the light source in response to determined parameters
    • H05B47/11Controlling the light source in response to determined parameters by determining the brightness or colour temperature of ambient light
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q2300/00Indexing codes for automatically adjustable headlamps or automatically dimmable headlamps
    • B60Q2300/30Indexing codes relating to the vehicle environment
    • B60Q2300/31Atmospheric conditions
    • B60Q2300/314Ambient light
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q2300/00Indexing codes for automatically adjustable headlamps or automatically dimmable headlamps
    • B60Q2300/30Indexing codes relating to the vehicle environment
    • B60Q2300/33Driving situation
    • B60Q2300/332Driving situation on city roads
    • B60Q2300/3321Detection of streetlights
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q2300/00Indexing codes for automatically adjustable headlamps or automatically dimmable headlamps
    • B60Q2300/40Indexing codes relating to other road users or special conditions
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • the present application relates to the field of intelligent vehicles, and in particular, to an intelligent lighting switching method, system and related equipment.
  • The high beam plays a significant role in improving the driver's sight line and expanding the field of observation: the visible range with the high beam on is much larger than the visible range with only the low beam on.
  • However, high beams are not suitable for all roads at night. For example, when a vehicle is passing in front of you, turning on the high beams may instantly blind the oncoming driver and may also impair that driver's judgment of speed, distance, and width. Therefore, the correct use of high beams is essential for safe driving at night. In order to eliminate drivers' erroneous use of high beams at the source, intelligent light switching technology came into being.
  • In existing solutions, light sources can be distinguished by the different image characteristics they present under different exposure times, the brightness of the ambient light ahead can be calculated directly, or the vanishing point can be obtained from the lane lines, so as to assist the driver in switching between high and low beams.
  • However, none of these methods works well in practice. The first method places high requirements on the camera module and is therefore costly to implement; the second method ignores the brightness contributed by distant headlights, so the calculated ambient light brightness has a large error; and the third method depends on accurately acquiring lane lines, which is extremely unlikely at night.
  • The present application provides an intelligent light switching method, system and related equipment, which can accurately detect ambient light brightness and accurately classify light sources. After the two kinds of information are fused, high beam or low beam switching can be performed accurately, which effectively solves the problems of distant light sources going undetected and of interference from other light sources.
  • In a first aspect, the present application provides an intelligent light switching method, the method comprising: acquiring an image, the image being captured by a camera set at a fixed position of a vehicle, with light source information recorded in the image; calculating the ambient light brightness value corresponding to the image; classifying the light sources included in the image according to the light source information to obtain a classification result; and performing high beam switching or low beam switching (light switching) according to the ambient light brightness value corresponding to the image and the classification result.
  • This light switching method combines two kinds of information, ambient light information and light source information, so that more relevant information about the night road is referenced when deciding whether to switch the lights, which improves the accuracy of light switching.
  • Calculating the ambient light brightness value corresponding to the image includes: selecting at least one area in the image, calculating the brightness value of the at least one area, and obtaining the ambient light brightness value corresponding to the image from the brightness value of the at least one area.
  • When calculating the ambient light brightness, the method provided by the present application selects at least one area, calculates its brightness value, and then combines these brightness values into the ambient light brightness value, fully considering the light source information that may appear in different areas of the image. Compared with calculating the ambient light brightness over the whole image, this method obtains the ambient light brightness more accurately.
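  • The claims do not fix how the areas are selected or how their brightness values are combined; a minimal sketch, assuming an OpenCV BGR image, hand-picked rectangular regions, and a simple weighted average (all illustrative choices, not the patent's parameters):

```python
# Region-based ambient light estimation -- a sketch, not the patented method.
import cv2
import numpy as np

def ambient_brightness(image_bgr, rois, weights=None):
    """rois: list of (x, y, w, h) rectangles; returns a weighted mean luma."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    values = [float(np.mean(gray[y:y + h, x:x + w])) for (x, y, w, h) in rois]
    if weights is None:
        weights = [1.0 / len(values)] * len(values)  # equal weighting by default
    return sum(v * w for v, w in zip(values, weights))
```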
  • Classifying the light sources included in the image to obtain a classification result includes: inputting the image into a light source detection model and obtaining the light source categories in the image from the light source detection model.
  • the classification of light sources is realized by the light source detection model.
  • The light source detection model can quickly and accurately classify light sources according to characteristics such as their size, position, and color.
  • Before inputting the image into the light source detection model, the method further includes: training the light source detection model using a plurality of sample images.
  • the sample image includes the light source and label information of the light source.
  • Training refers to the process in which the initial model identifies and learns the light source information in the sample images. After training is completed, the light source detection model can accurately identify the different characteristics of different light sources, improving the accuracy of light source classification.
  • Obtaining the light source categories in the image according to the light source detection model includes: selecting, from the image, bright spots whose brightness value is greater than a preset threshold and setting a light frame for each bright spot; pairing the light frames to obtain a plurality of light frame pairs; determining the vanishing point horizontal line (VP Line) according to the plurality of light frame pairs, where the VP Line is used to distinguish a first area and a second area of the image; and classifying the bright spots according to their positional relationship with the VP Line and their color features, to obtain different light source categories.
  • Before pairing the light frames, the method further includes: deduplicating the light frames so that no overlap or tangency occurs between them.
  • Determining the vanishing point horizontal line VP Line according to the plurality of light frame pairs includes: mapping the light frame centers of the light frame pairs to the rows of the image to obtain a row number distribution diagram of the light frame pairs; selecting the VP Line according to the row number distribution diagram; correcting the VP Line with a preset correction value to obtain a corrected VP Line; and adjusting the corrected VP Line with a reference value to obtain the VP Line used for classifying the bright spots, where the reference value describes the change of the camera pitch angle.
  • The VP Line is initially determined from the row number distribution diagram of the light frame pairs and then corrected and adjusted with a reference value, so that the final VP Line is more accurate, which in turn improves subsequent light source classification accuracy.
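  • The claims specify the pipeline (row histogram of pair centers, a preset correction, a pitch-angle reference) but not the selection rule; a sketch that assumes the VP Line is taken as the most populated row:

```python
# VP Line selection from the row distribution of light frame pair centers.
# Taking the argmax of the histogram is an assumption for illustration.
import numpy as np

def vp_line(pair_center_rows, image_height, correction=0, pitch_reference=0):
    hist = np.bincount(np.asarray(pair_center_rows, dtype=int),
                       minlength=image_height)   # row number distribution
    line = int(np.argmax(hist))                  # most populated row
    line += correction                           # preset correction value
    line += pitch_reference                      # camera pitch-angle reference
    return line
```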
  • In a second aspect, the present application provides an intelligent light switching system, the system including: an acquisition unit for acquiring an image, the image being captured by a camera set at a fixed position of the vehicle, with light source information recorded in the image; an ambient light detection unit for calculating the ambient light brightness value corresponding to the image; a light source classification unit for classifying the light sources included in the image according to the light source information to obtain a classification result; and a switching unit for performing high beam switching or low beam switching (light switching) according to the ambient light brightness value corresponding to the image and the classification result.
  • The ambient light detection unit is specifically configured to: select at least one area in the image, calculate the brightness value of the at least one area, and obtain the ambient light brightness value corresponding to the image from the brightness value of the at least one area.
  • When classifying the light sources included in the image to obtain a classification result, the light source classification unit is specifically configured to: input the image into a light source detection model and obtain the light source categories in the image from the light source detection model.
  • Before inputting the image into the light source detection model, the light source classification unit is further configured to: train the light source detection model using a plurality of sample images, where each sample image includes a light source and label information of the light source.
  • When obtaining the light source categories in the image according to the light source detection model, the light source classification unit is specifically configured to: select, from the image, bright spots whose brightness value is greater than a preset threshold and set a light frame for each bright spot; pair the light frames to obtain a plurality of light frame pairs; determine the vanishing point horizontal line VP Line according to the plurality of light frame pairs, the VP Line being used to distinguish a first area and a second area of the image; and classify the bright spots according to their positional relationship with the VP Line and their color features, to obtain different light source categories.
  • Before pairing the light frames, the light source classification unit is further configured to: deduplicate the light frames so that there is no overlap or tangency between them.
  • When determining the vanishing point horizontal line VP Line according to the plurality of light frame pairs, the light source classification unit is specifically configured to: map the light frame centers of the light frame pairs to the rows of the image to obtain a row number distribution diagram of the light frame pairs; select the VP Line according to the row number distribution diagram; correct the VP Line with a preset correction value to obtain a corrected VP Line; and adjust the corrected VP Line with a reference value to obtain the VP Line used for classifying the bright spots, where the reference value describes the change of the camera pitch angle.
  • In a third aspect, a computing device is provided, including a processor and a memory, where the memory stores program code and the processor executes the program code in the memory to perform the intelligent light switching method provided by the first aspect or any implementation of the first aspect.
  • In a fourth aspect, a computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the processor performs the intelligent light switching method provided by the first aspect or any implementation of the first aspect.
  • In a fifth aspect, a computer program product includes instructions that, when the computer program product is executed by a computer, enable the computer to perform the intelligent light switching procedure provided by the first aspect or any implementation of the first aspect.
  • FIG. 1 is a schematic diagram of an application scenario of a smart light switching method provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an intelligent vehicle according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an intelligent lighting switching system provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an initial light source detection model provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a method for extracting light source features in an image provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of setting a light frame for a first bright spot according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of deduplicating a light frame according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a light frame pairing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a drawing line number distribution diagram provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of determining the value of the first row provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of eliminating road reflection interference provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a smart light switching method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a computing device according to an embodiment of the present application.
  • the RGB color space is based on the three basic colors of red, green and blue, which are superimposed to different degrees to produce rich and extensive colors, so it is commonly known as the three-primary color mode, also known as the natural color mode.
  • Red, green and blue represent the three basic colors (primary colors) in the visible spectrum, and each color is divided into 256 levels according to its brightness. When the three primary colors of color light overlap, various intermediate colors can be produced due to different color mixing ratios.
  • the display system generally uses the RGB color space.
  • The color cathode ray tube and the color raster graphics display use the R, G, B values to drive the R, G, B electron guns, which excite the red, green, and blue phosphors on the screen to emit light of different brightnesses; additive mixing of these produces various colors.
  • the YUV color space contains a luminance signal and two chrominance signals.
  • the luminance signal is often referred to as Y.
  • The chrominance signal is composed of two independent signals (one representing red and one representing blue); the two chrominance signals are often called UV, PbPr, or CbCr, so the YUV color space is also called the YPbPr or YCbCr color space.
  • the luminance information is separated from the chrominance information, and different sampling rates are used for the luminance and chrominance of the same frame image.
  • The luminance information Y and the chrominance information U and V are independent of each other.
  • the Y signal component is a black and white grayscale image.
  • the U and V signal components are monochromatic colormaps. Since human vision is more sensitive to luminance than to chrominance, the YUV color space is widely used for color television systems.
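  • To make the luminance/chrominance separation concrete, one common convention (BT.601, an illustrative choice rather than anything the patent specifies) derives Y, U, V from R, G, B as follows:

```python
# BT.601 RGB -> YUV conversion: Y carries brightness, U and V carry color.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue-difference chrominance
    v = 0.877 * (r - y)   # red-difference chrominance
    return y, u, v
```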
  • the HSI color space uses three parameters H, S, and I to describe the color characteristics, where H defines the frequency of the color, called hue; S represents the depth of the color, called saturation; I represents the intensity or brightness.
  • ROI Region of interest
  • machine vision software such as Halcon, OpenCV, and MATLAB to obtain the ROI of the region of interest, and perform the next image processing.
  • Intersection over Union (IoU) is the ratio of the intersection to the union of the areas of two rectangular boxes. IoU is a measure of how accurately a corresponding object is detected in a specific dataset; it is a simple metric, and any task that produces bounding boxes in its output can be evaluated with it.
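  • The definition above translates directly into code; a minimal version for two axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) rectangles."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```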
  • ADAS Advanced Driver Assistance System
  • ADAS senses the environment and processes the data so that the driver can detect possible dangers as early as possible; it is an active safety technology intended to attract the driver's attention and improve safety.
  • The sensors used in ADAS mainly include cameras, radar, laser, and ultrasonic sensors, which can detect light, heat, pressure, or other variables used to monitor the state of the car; they are usually located in the front and rear bumpers, the side mirrors, inside the steering column, or on the windshield.
  • Early ADAS technology was mainly based on passive alarms. When the vehicle detects potential danger, it will issue an alarm to remind the driver to pay attention to abnormal vehicle or road conditions. Active interventions are also common for the latest ADAS technologies.
  • The adaptive cruise control (ACC) system is developed on the basis of the cruise control system. In addition to driving at the speed set by the driver as a cruise control system does, it can also maintain a preset following distance and automatically accelerate and decelerate as that distance changes. Compared with the cruise control system, the ACC system better helps the driver coordinate braking and the accelerator.
  • ACC Adaptive Cruise Control
  • A road side unit (RSU) is a device in an electronic toll collection (ETC) system that is installed on the roadside and communicates with the on-board unit (OBU) via dedicated short range communications (DSRC) to realize vehicle identification and electronic fee deduction.
  • the RSU can be composed of a high-gain directional beam-steered read-write antenna and a radio frequency controller.
  • The high-gain directional beam-steered read-write antenna is a microwave transceiver module responsible for signal and data transmission/reception, modulation/demodulation, encoding/decoding, and encryption/decryption; the radio frequency controller controls data transmission and reception and communication with the host computer.
  • FIG. 1 is a schematic diagram of an application scenario of an intelligent light switching system.
  • the scenario shown in Figure 1 is a scenario in the Internet of Vehicles.
  • the scenario includes multiple smart vehicles, radio transmission towers, and RSUs.
  • the intelligent light switching system can be applied to the in-vehicle system of the intelligent vehicle, for example, in ADAS and ACC systems, which can realize the switching of intelligent high and low beams in various scenarios such as assisted driving and automatic driving.
  • The intelligent light switching system may also be installed in the smart vehicle as a stand-alone system, distinguished from the other assisted driving systems in the smart vehicle.
  • The embodiment of the present application provides an intelligent vehicle 200 applied in the application scenario of the above intelligent light switching system. Please refer to FIG. 2, a schematic structural diagram of the intelligent vehicle 200 according to an embodiment of the present application.
  • The intelligent vehicle 200 may be set to a fully intelligent driving mode or to a partially intelligent driving mode. It can be understood that when the intelligent vehicle 200 is set to the fully intelligent driving mode, it can perform corresponding operations without human interaction, including but not limited to acceleration, deceleration, and following; when set to the partially intelligent driving mode, the intelligent vehicle 200 can perform corresponding operations automatically, and the driver can also perform them, for example, determining the state of the vehicle and its surrounding environment, determining the possible behavior of at least one other vehicle in the surrounding environment, determining a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior, and then controlling the intelligent vehicle 200 based on the determined information.
  • Intelligent vehicle 200 may include various subsystems, such as travel system 210 , sensor system 220 , control system 230 , one or more peripherals 240 , as well as computer system 250 , power supply 260 , and user interface 270 .
  • intelligent vehicle 200 may include more or fewer subsystems, each of which may include multiple elements.
  • each of the subsystems and elements of the intelligent vehicle 200 may be interconnected in a variety of ways, eg, by wire or wirelessly.
  • the travel system 210 may include components that power the intelligent vehicle 200 .
  • travel system 210 may include engine 2110 , energy source 2120 , transmission 2130 , and wheels/tires 2140 .
  • The engine 2110 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid of a gasoline engine and an electric motor, or of an internal combustion engine and an air compression engine.
  • Engine 2110 converts energy source 2120 into mechanical energy.
  • Examples of energy sources 2120 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. Energy source 2120 may also provide energy to other systems of intelligent vehicle 200 .
  • Transmission 2130 may transmit mechanical power from engine 2110 to wheels 2140 .
  • Transmission 2130 may include a gearbox, a differential, and a driveshaft.
  • transmission 2130 may also include other devices, such as clutches.
  • the drive shafts may include one or more axles that may be coupled to one or more wheels 2140 .
  • the sensor system 220 may include several sensors that sense information about the surrounding environment of the smart vehicle 200 and obtain information about its own vehicle.
  • sensor system 220 may include positioning system 2210 , inertial measurement unit (IMU) 2220 , radar 2230 , and vision sensors 2240 .
  • the positioning system 2210 may include a GPS system, a Beidou system or other positioning systems.
  • the sensor system 220 may also include sensors that monitor the internal systems of the smart vehicle 200, eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, and the like. The data acquired by these sensors can be used to detect objects and their corresponding properties including, but not limited to, position, shape, orientation, velocity. This detection and identification is of great significance for the intelligent vehicle 200 to safely perform subsequent operations.
  • the positioning system 2210 may be used to determine the geographic location of the intelligent vehicle 200 .
  • the IMU 2220 may sense position and orientation changes of the intelligent vehicle 200 based on inertial acceleration.
  • the IMU 2220 may be a combination of an accelerometer and a gyroscope, in which case the IMU 2220 may be used to measure the curvature of the smart vehicle 200 .
  • the radar 2230 may utilize wireless signals to sense the surrounding environment of the intelligent vehicle 200 , the surrounding environment including but not limited to surrounding vehicles, infrastructure, and pedestrians. It can be understood that the radar 2230 may include, but is not limited to, millimeter-wave radar and lidar. In some embodiments, in addition to sensing the surrounding environment, the radar 2230 may also be used to sense object motion in the environment.
  • Vision sensor 2240 may be used to capture multiple images of the environment surrounding intelligent vehicle 200 .
  • Vision sensors 2240 may include, but are not limited to, still cameras, video cameras.
  • control system 230 may be used to control the operation of the intelligent vehicle 200 and its components.
  • Control system 230 may include a number of elements; in one embodiment, control system 230 includes steering system 2310, actuator 2320, braking unit 2330, computer vision system 2340, route control system 2350, obstacle avoidance system 2360, and high and low beam switching system 2370.
  • the steering system 2310 may be operated to adjust the forward direction of the intelligent vehicle 200 .
  • steering system 2310 may include a steering wheel system.
  • the actuator 2320 may be used to control the engine 2110 and thus the speed of the intelligent vehicle 200 .
  • actuator 2320 may include a throttle.
  • the braking unit 2330 may be used to control the smart vehicle 200 to decelerate.
  • the braking unit 2330 may use friction to reduce the rotational speed of the wheels 2140 .
  • the braking unit 2330 may convert the kinetic energy of the wheels 2140 into electrical current.
  • the braking unit 2330 may also take other methods to reduce the rotational speed of the wheels 2140 to control the speed of the smart vehicle 200 .
  • the actuator 2320 and the braking unit 2330 can be combined into one unit module, and the combined unit module can be used to control the speed of the intelligent vehicle 200.
  • the combined unit module can include the accelerator system and braking system.
  • Computer vision system 2340 may be used to process and analyze images captured by vision sensor 2240 for subsequent operations. Through the computer vision system 2340, the surrounding environment of the intelligent vehicle 200, the characteristics of objects in the surrounding environment, and their motion states can also be recognized.
  • the surrounding environment may include traffic signals, road boundaries and obstacles, the characteristics of objects in the surrounding environment include but are not limited to their surface optical properties, and the motion states include but are not limited to stationary, acceleration, and deceleration.
  • Computer vision system 2340 may use color space conversion techniques, object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the computer vision system 2340 includes an image detection system, a neural network based processing system, and the like, and can be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 2350 is used to determine the travel route of the intelligent vehicle 200 .
  • the route control system 2350 may combine data from one or more predetermined maps of the positioning system 2210 to determine a driving route for the intelligent vehicle 200 .
  • the obstacle avoidance system 2360 is used to identify, evaluate, avoid or bypass obstacles in the surrounding environment.
  • The obstacle avoidance system 2360 obtains information about the surrounding environment by means of the radar 2230 and the vision sensor 2240; the computer vision system 2340 then analyzes the surroundings to identify potential obstacles, which the obstacle avoidance system 2360 evaluates and avoids.
  • the high and low beam switching system 2370 is used to intelligently switch the high and low beams.
  • the high and low beam switching system 2370 can be automatically activated according to the ambient light, and the high and low beam switching can be performed automatically, or it can be manually activated.
  • Components may be added to the control system 230, and the components described above may also be replaced and/or removed.
  • the intelligent vehicle 200 interacts with external sensors, other vehicles, other computer systems, or users through peripheral devices 240 .
  • Peripherals 240 may include, but are not limited to, wireless communication system 2410 , vehicle computer 2420 , microphone 2430 and/or speaker 2440 .
  • the peripheral device 240 can interact with the user of the smart vehicle 200.
  • The on-board computer 2420 can provide information to the user of the smart vehicle 200, and the user can also upload data to the on-board computer 2420; it can be understood that the user can operate the smart vehicle 200 through the touch screen of the on-board computer 2420.
  • peripherals 240 may provide a means for intelligent vehicle 200 to communicate with other devices in the vehicle, eg, microphone 2430 may receive audio from a user of intelligent vehicle 200, which may include voice commands and other audio inputs. Similarly, speakers 2440 may output audio to a user of smart vehicle 200 .
  • Wireless communication system 2410 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • wireless communication system 2410 may use 3G cellular communications, such as CDMA, EVDO, GSM/GPRS, or 4G cellular communications, such as LTE, or 5G cellular communications.
  • the wireless communication system 2410 can communicate with a wireless local area network (WLAN) using WiFi.
  • WLAN wireless local area network
  • Wireless communication system 2410 may utilize an infrared link, Bluetooth, or ZigBee to communicate directly with devices, which may include, but are not limited to, vehicles and/or roadside stations.
  • the wireless communication system 2410 may further include one or more DSRC devices and one or more LTE-V2X devices.
  • Power supply 260 may provide power to various components of intelligent vehicle 200 .
  • the power source 260 may include one or more battery packs, the batteries of which may be rechargeable lithium-ion or lead-acid batteries. It will be appreciated that in some embodiments, power source 260 and energy source 2120 may be implemented together.
  • Computer system 250 may include one or more processors 2520 that execute instructions 25110 stored in a non-transitory computer-readable medium such as memory 2510.
  • Computer system 250 may also be multiple computing devices that control individual components or subsystems of intelligent vehicle 200 in a distributed fashion.
  • Processor 2520 may be any conventional processor, such as a commercially available CPU.
  • the processor may be a dedicated device such as an Application Specific Integrated Circuit (ASIC) or other hardware-based processor.
  • Although FIG. 2 functionally illustrates a processor, memory, and computer, those of ordinary skill in the art will understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be housed within the same physical enclosure.
  • the memory may be a hard drive or other storage medium located within an enclosure other than a computer.
  • reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • Some components, such as the steering and deceleration components, may each have their own processor that only performs computations related to component-specific functions.
  • a processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • memory 2510 may contain instructions 25110 (eg, program logic) that may be executed by processor 2520 to implement various functions of intelligent vehicle 200, including those described above.
  • Memory 2510 may also contain additional instructions, including instructions to send data to, receive data, interact with, and/or control one or more of travel system 210 , sensor system 220 , control system 230 , and peripherals 240 .
  • Memory 2510 may also store data such as road maps, route information, and vehicle data such as the position, direction, and speed of the vehicle, along with other relevant information. It can be understood that, in one embodiment, when the intelligent vehicle 200 is in an autonomous, semi-autonomous and/or manual mode, its computer system 250 can use this data to perform related operations, for example, adjusting the current speed of the intelligent vehicle according to the road information of the target road segment and the received target vehicle speed range, so that the intelligent vehicle can follow another vehicle at a constant speed.
  • User interface 270 for providing information to or receiving information from a user of intelligent vehicle 200 .
  • the user interface 270 may include an interface required by one or more input/output devices in the peripheral device 240, such as a USB interface, an AUX interface, an OBD interface.
  • Computer system 250 may control the functions of intelligent vehicle 200 based on data from various subsystems (eg, travel system 210 , sensor system 220 , and control system 230 ) and data received from user interface 270 .
  • computer system 250 may control steering system 2310 to avoid obstacles detected by sensor system 220 and obstacle avoidance system 2360 .
  • the above-mentioned components may not only be assembled inside the intelligent vehicle 200 as a subsystem, but one or more of the components may also be installed separately from the intelligent vehicle 200 .
  • memory 2510 may exist partially or completely separate from intelligent vehicle 200 .
  • the above components may be coupled in wired and/or wireless manner.
  • An intelligent driving car traveling on the road can recognize its surrounding environment and adjust the current speed.
  • the surrounding environment may include, but is not limited to, other vehicles, pedestrians, traffic control equipment, other infrastructure, and other types of objects.
  • the intelligent driving car may consider each identified object independently and determine the speed at which the self-driving car is to adjust based on characteristics of the object, such as speed, acceleration, relative distance to the vehicle, and the like.
  • The intelligent vehicle 200, or a computing device associated with the intelligent vehicle 200 (eg, computer system 250, computer vision system 2340, or memory 2510 of FIG. 2), may predict the behavior of the identified objects.
  • the intelligent vehicle 200 can adjust the ego vehicle speed based on the predicted behavior of the identified object.
  • the intelligent vehicle 200 can determine how the vehicle needs to adjust (eg, accelerate, decelerate, or stop) and to what steady state based on the predicted behavior of the object, and may also consider other factors in the process , such as the lateral position of the intelligent vehicle 200 in the road where it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
  • The computing device may also provide instructions to modify the steering angle of intelligent vehicle 200 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in its vicinity (such as cars in adjacent lanes).
  • the above-mentioned smart vehicle 200 may be a car, a truck, a motorcycle, a bus, a boat, an amusement vehicle, an amusement park vehicle, a construction equipment, a tram, a train, etc., which is not limited in this embodiment of the present application.
  • FIG. 2 the schematic structural diagram of the smart vehicle shown in FIG. 2 is only an exemplary implementation in the embodiments of the present application, and the smart vehicles in the embodiments of the present application include but are not limited to the above structures.
  • the present application provides an intelligent lighting switching system, which is used for intelligent lighting switching during driving at night.
  • The intelligent light switching system first acquires an image of the scene ahead, then obtains ambient light information and classifies the light sources in the image, and finally combines the ambient light information and the light source classification information to decide whether to switch between high and low beams.
  • the units inside the intelligent light switching system may be divided in various manners, which are not limited in this application.
  • FIG. 3 shows an exemplary division. As shown in FIG. 3, the functions of each functional unit are briefly described below.
  • the illustrated smart light switching system 300 includes an acquisition unit 310 , an ambient light detection unit 320 , a light source classification unit 330 and a switching unit 340 .
  • the acquisition unit 310 is used to acquire an image, the image is captured by a camera set at a fixed position of the vehicle, and the light source information is recorded in the image;
  • The ambient light detection unit 320 is used to calculate the ambient light brightness value corresponding to the image;
  • the light source classification unit 330 is configured to classify the light sources included in the image according to the light source information, and obtain a classification result;
  • The switching unit 340 is configured to perform high beam switching or low beam switching (light switching) according to the ambient light brightness value corresponding to the image and the classification result.
  • The light source classification unit 330 includes a light source detection model, and the light source detection model detects the acquired images and classifies the light sources.
  • the light source detection model can be an AI model, and the initial AI model needs to be trained before the AI model is used for detection.
  • This application uses sample images that are captured by the camera and contain light source information to train the initial AI model, so that the trained AI model has the ability to classify light sources and can classify the light sources in images obtained by the camera.
  • The light source detection model in this application can also determine the position of the light source (the detection frame information/position of the target in the image) and the actual distance between the detected light source and the ego vehicle.
  • Sample images captured by cameras carry annotation information and are used for training. The sample images record targets, and the annotation information includes the detection frame information of the targets in the sample image; the category information of a target indicates its category.
  • the target refers to a light source, such as headlights, tail lights, street lights, traffic lights, etc.
  • the category information of the target refers to the category information of the light source.
  • Here, the detection frame refers to the light frame, and the detection frame information may include, but is not limited to, the target's category information and pixel coordinate information.
  • A rectangular detection frame is used for marking, and the detection frame information includes the category information and pixel coordinate information of the marked target; the category can be conveyed by features such as the shape and color of the detection frame.
  • the pixel coordinate information consists of four pixel coordinates, that is, the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the detection frame.
  • The detection frame information can indicate the category of the target directly with displayed text, or through the shape and color of the detection frame, and the label information can be saved in files such as extensible markup language (XML) or JavaScript object notation (JSON).
  • XML Extensible markup language
  • JSON JavaScript object notation
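  • As an illustration only (the field names below are hypothetical, not the patent's schema), a JSON annotation following the four-corner convention described above might look like:

```json
{
  "image": "frame_000123.png",
  "targets": [
    {"category": "tail_light",   "x1": 412, "y1": 285, "x2": 438, "y2": 301},
    {"category": "street_light", "x1": 120, "y1": 40,  "x2": 139, "y2": 58}
  ]
}
```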
  • the light source detection model is described in detail below.
  • the training data (sample images) required for the training of the light source detection model in this application is acquired by a camera at a fixed position in the vehicle.
  • the sample image can be detected by using a target detection algorithm, so as to obtain the category information and detection frame information of the light source recorded in the sample image, or can be obtained by manual annotation. Understandably, the sample images may include images acquired at different times and images at different exposure times.
  • The initial light source detection model in this application can be an AI model; specifically, a deep neural network model can be selected. Besides detecting the category and position of a light source in the image, the model can also calculate the actual distance between the light source and the vehicle, which means the initial light source detection model in this application is structurally improved.
  • the structure of the initial light source detection model 400 of the present application mainly includes three parts, namely, a backbone network 410 , a detection network 420 and a loss function calculation unit 430 .
  • The backbone network 410 is used to perform feature extraction on the input sample image; it contains several convolutional layers and can be chosen from a visual geometry group network (VGG), a residual network (ResNet), a dense convolutional network (DenseNet), and the like.
  • The detection network 420 is used to detect and identify the features extracted by the backbone network 410 and output light source category information and light source position information (ie, detection frame information); it performs further convolution calculations on the output of the backbone network 410.
  • Compared with conventional detection models, the light source detection model of the present application differs in structure. There is a proportional relationship among four quantities: the distance between the lights of the same vehicle in the image, the real width of the vehicle, the focal length of the camera used to acquire the image, and the distance from that vehicle to the ego vehicle; when three of them are known, the fourth can be obtained from this proportional relationship.
  • The backbone network 410 can use the same network as a conventional model, but in the detection network 420 the present application adds multiple channels to each convolutional layer responsible for regressing the detection frame; preferably, two channels are added to indicate the horizontal and vertical coordinates of the light source in the sample image. From the horizontal and vertical coordinates of the two lights of the same vehicle, the distance between the vehicle's lights in the image can be obtained, and from that the distance from the vehicle to the ego vehicle.
  • more channels can also be added, and each channel is assigned a corresponding physical meaning, which is not limited in this application.
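  • The proportional relationship is the standard pinhole-camera similar-triangles relation: light spacing in pixels / focal length in pixels = real lamp spacing / distance. A minimal sketch (the example numbers are illustrative only, not from the patent):

```python
def distance_to_vehicle(focal_px, lamp_spacing_m, lamp_spacing_px):
    """Distance to a vehicle from the pixel spacing of its two lights."""
    return focal_px * lamp_spacing_m / lamp_spacing_px

# e.g. focal length 1200 px, lamps 1.6 m apart, 24 px apart in the image:
# distance_to_vehicle(1200, 1.6, 24) -> 80.0 (meters)
```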
  • the parameters of the initial light source detection model 400 are initialized, and then the sample image is input to the initial light source detection model 400 .
  • The backbone network 410 performs feature extraction on the targets recorded in the sample image to obtain abstract features, and inputs the abstract features to the detection network 420. The detection network performs further detection and identification, predicts each target's category, position, and distance to the ego vehicle, and outputs the prediction through the corresponding channels to the loss function calculation unit 430. The annotation information corresponding to the sample image is also input into the loss function calculation unit 430, which compares the prediction result of the detection network 420 with the annotation information and calculates the loss function; the loss function is used as the objective function, and the parameters in the model are updated and adjusted using the back propagation algorithm.
  • The sample images carrying annotation information are input in turn, and the above training process is performed iteratively until the loss function value converges, that is, until the loss function value calculated in each iteration fluctuates around a certain value; training is then stopped.
  • At this point the light source detection model has been trained; that is, it has the ability to detect the category and position of light sources in an image and the distance to the ego vehicle, and can be used for light source classification.
  • the construction of the loss function needs to be redesigned.
  • The target localization model of the present application is an improvement on classic target detection models (such as YOLO, Faster R-CNN, etc.).
  • The loss function of the classical target detection model is denoted Loss1.
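  • The excerpt breaks off before giving the redesigned objective; a plausible form, assuming the added regression channels simply contribute a weighted extra term (an assumption, not the patent's stated formula):

```latex
\mathrm{Loss} = \mathrm{Loss}_1 + \lambda \, \mathrm{Loss}_2, \qquad
\mathrm{Loss}_2 = \sum_i \left\lVert p_i^{\mathrm{pred}} - p_i^{\mathrm{gt}} \right\rVert^2
```

  • Here Loss2 would penalize the light source coordinates (and hence the derived distance) predicted by the added channels against the annotations, and λ would balance the two terms.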
  • the feature extraction of the light source in the image is mainly completed by the backbone network 410 and the detection network 420.
  • the following describes in detail how to extract features to realize classification with reference to FIG. 5 .
  • the method includes but is not limited to the following steps:
  • S501 Select bright spots whose brightness value is greater than a preset threshold from the image, and set a light frame for them.
  • A bright spot whose brightness is greater than or equal to the preset threshold is selected as a first bright spot, and a light frame is set for it.
  • the size of the light frame can be adaptively changed according to the size of the first bright spot to be calibrated.
  • FIG. 6 is a schematic diagram of setting a light frame for the first bright spot.
  • In FIG. 6, an ordinary bright spot is marked with a red square, a first bright spot with a certain distribution characteristic in the upper part of the image is marked with a blue square, and a first bright spot with distinct color features in the upper part of the image is marked with a purple square.
  • the said having a certain distribution feature means that the connecting lines of the first bright spots on the left and right sides of the upper area of the image will intersect at a certain point in the center of the image within a certain error range.
It should be noted that the above way of setting the light frame is only an exemplary method in this application; specific parameters such as the size, shape and color of the light frame can be set by the R&D personnel according to actual needs and experimental data. For example, light frames of different colors or shapes can be used to calibrate bright spots with different features, which is not limited in this application.
S502: Screen the light frames to remove duplicates.

After the light frames are set, they may intersect or be tangent to one another, which may affect subsequent operations; the light frames therefore need to be screened and deduplicated. There are many ways to do this. In one embodiment of the present application, the light frames are deduplicated using the IOU algorithm. As shown in FIG. 7, bright spot A is calibrated by light frame C and bright spot B is calibrated by light frame D; when an overlapping area E between light frame C and light frame D is detected, a second light frame is generated. The second light frame is larger than the original light frames, so that both bright spots can be included; light frame F in FIG. 7 is the second light frame. It can be understood that the size of the second light frame can be determined according to the sizes of the original light frames; in other cases, specific parameters such as the size, shape and color of the second light frame may also be preset.
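A compact sketch of this deduplication step, assuming axis-aligned light frames given as (x1, y1, x2, y2) tuples; the merge-into-an-enclosing-frame rule mirrors the second-light-frame behaviour described above, while the exact IOU trigger and frame sizes are illustrative:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def merge_overlapping(frames):
    """Replace any overlapping pair of light frames with one enclosing 'second light frame'.

    Tangent frames (IOU exactly 0) could be caught by padding the boxes slightly.
    """
    frames = list(frames)
    merged = True
    while merged:
        merged = False
        for i in range(len(frames)):
            for j in range(i + 1, len(frames)):
                if iou(frames[i], frames[j]) > 0:
                    a, b = frames[i], frames[j]
                    # enclosing box containing both bright spots
                    frames[j] = (min(a[0], b[0]), min(a[1], b[1]),
                                 max(a[2], b[2]), max(a[3], b[3]))
                    del frames[i]
                    merged = True
                    break
            if merged:
                break
    return frames
```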
S503: Pair the deduplicated light frames to obtain first light frame pairs.

In one embodiment of the present application, any two light frames in the image must meet at least three conditions to be paired successfully. The first condition is that the absolute value of the difference between the heights of the two light frames is less than a first threshold. The second condition is that the distance between the two light frames has a linear relationship with the height of the light frames, that is, the ratio of the distance between the two light frames to the height of the light frames lies within a proportional interval. The third condition is that the angle between the line connecting the center points of the two light frames and the horizontal line is less than a second threshold. It can be understood that, in the second condition, the distance between the two light frames is the distance between their center points (that is, the length of the line connecting the two center points), and the height of the light frames is the average of the heights of the two light frames.
As shown in FIG. 8, a schematic diagram of light frame pairing, light frame A and light frame B are two light frames arbitrarily selected in the image; the height of light frame A is a, the height of light frame B is b, and the distance between the center points of the light frames is c. When |a - b| is less than the first threshold, the ratio c / ((a + b) / 2) is within the proportional interval, and the angle θ between the line connecting the center points and the horizontal line is less than the second threshold, light frame A and light frame B are paired successfully, that is, the bright spot in light frame A and the bright spot in light frame B can be regarded as one lamp pair. It can be understood that the first threshold, the second threshold and the proportional interval are set by the R&D personnel according to actual requirements and experimental data, and are not limited in this application.
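The three pairing conditions translate directly into a small predicate; the (cx, cy, h) frame representation and the use of degrees for the second threshold are assumptions made for illustration:

```python
import math

def can_pair(frame_a, frame_b, t1, ratio_range, t2_deg):
    """Check the three pairing conditions for two light frames.

    frame_*     : (cx, cy, h) -- center point and height of a light frame
    t1          : first threshold on the height difference
    ratio_range : (lo, hi) proportional interval for c / mean height
    t2_deg      : second threshold on the angle to the horizontal, in degrees
    """
    (ax, ay, a), (bx, by, b) = frame_a, frame_b
    if abs(a - b) >= t1:                              # condition 1: similar heights
        return False
    c = math.hypot(bx - ax, by - ay)                  # distance between center points
    ratio = c / ((a + b) / 2)                         # condition 2: distance/height ratio
    if not (ratio_range[0] <= ratio <= ratio_range[1]):
        return False
    theta = math.degrees(math.atan2(abs(by - ay), abs(bx - ax)))
    return theta < t2_deg                             # condition 3: near-horizontal line
```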
S504: Determine the distance between a distant vehicle's lights and the own vehicle.

As described above, the central area is the area where the lights of distant vehicles may appear; a first light frame pair in the central area is selected. The ratio of the distance between the center points of the first bright spots in the first light frame pair to the real vehicle width is a first ratio, and the ratio of the focal length of the camera used to acquire the image to the distance from the vehicle to which the first light frame pair belongs to the own vehicle is a second ratio. It can be understood that the first ratio and the second ratio are equal within a certain error range; in one embodiment of the present application the two can be considered approximately equal. Therefore, knowing the distance between the center points of the first bright spots in the first light frame pair, the real vehicle width and the focal length of the camera, the distance from the vehicle to which the first light frame pair belongs to the own vehicle can be obtained. Alternatively, a numerical value can be introduced as an error value: with the distance between the center points of the first bright spots and the real vehicle width known, the first ratio can be calculated, the error value can be added to or subtracted from the first ratio to obtain the second ratio, and, the focal length of the camera being known, the distance from the vehicle to which the first light frame pair belongs to the own vehicle can be obtained. In other embodiments of the present application, to obtain a more precise light source classification result, the distances between all vehicles in the acquired image and the own vehicle may be calculated according to the above method; in this way, not only the light source information but also the distance information of the light sources is obtained, and, combined with the irradiation distances of the high beam and the low beam, switching can be performed more accurately.
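Under the proportion described above, the distance follows from gap_px / real_width ≈ focal_px / distance. A sketch, where the assumed real headlight spacing of 1.6 m and the optional error value are placeholders:

```python
def distance_to_vehicle(pixel_lamp_gap, focal_px, real_width_m=1.6, err=0.0):
    """Estimate the distance to a vehicle from its lamp spacing in the image.

    pixel_lamp_gap : distance between the first bright spots' centers, in pixels
    focal_px       : camera focal length expressed in pixels
    real_width_m   : assumed real spacing between the vehicle's two lights (placeholder)
    err            : optional error value added to the first ratio
    """
    first_ratio = pixel_lamp_gap / real_width_m
    second_ratio = first_ratio + err   # the two ratios are taken as (approximately) equal
    return focal_px / second_ratio     # distance in metres
```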
S505: Determine the vanishing point horizontal line (VP Line).

In one embodiment of the present application, the acquired image is divided into N rows (N is a positive integer), the rows where the center points of the two first bright spots in each first light frame pair are located are determined, and the number of first light frame pairs in each row of the acquired image is counted. A row number distribution diagram is then drawn with the row number as the abscissa and the number of first light frame pairs as the ordinate. The row number distribution diagram can take the form of a smooth scatter plot, a line scatter plot, a histogram and so on. When it is a smooth scatter plot, the trough with the largest abscissa among all troughs of the diagram is selected and the corresponding row value is taken as the first row value; alternatively, the center of the lowest interval on the left half of the peak with the largest abscissa is selected and the corresponding row value is taken as the second row value.
As shown in FIG. 9, a schematic diagram of drawing the row number distribution diagram, the acquired image is divided into 7 rows; if the image height is 280px (pixels), the height of each row is 40px. The row where the center points of the first bright spots of each first light frame pair are located is determined, and the number of first light frame pairs in each row of the acquired image is counted. In FIG. 9, the number of lamp pairs is 1 in the first, second and fourth rows, 5 in the third row, 3 in the fifth row, and 0 in the sixth and seventh rows. The row number distribution diagram is then drawn with the row number as the abscissa and the number of first light frame pairs as the ordinate.
In this example the row number distribution diagram takes the form of a histogram. As shown in FIG. 10, a schematic diagram of determining the first row value, the black dot marks the trough with the largest abscissa in the row number distribution diagram, that is, the row number corresponding to the black dot mark is the first row value. It can be understood that the second row value is determined in the same way as the first row value described above, and details are not repeated here.
It should be noted that, if the center points of the two first bright spots in a first light frame pair are in different rows, then, when the number of first light frame pairs in each row of the image is subsequently counted, each of the rows where the two center points are located is counted as 0.5. It can be understood that other statistical methods are possible, for example counting 1 for each of the rows where the two center points are located, or arbitrarily selecting one of those rows and counting 1 for it, which is not limited in this application.
In other embodiments of the present application, the row number distribution diagram can be drawn differently: the row where the center point of each first bright spot calibrated by a light frame is located can be determined first, the number of first bright spots in each row of the image counted, and the distribution diagram drawn with the row number as the abscissa and the number of first bright spots as the ordinate.
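A sketch of the row statistics and trough selection for the first row value, assuming integer row indices and the 0.5-per-row counting rule described above; the strict-trough test is one simple reading of "trough":

```python
import numpy as np

def first_row_value(pair_rows, n_rows):
    """Count light-frame pairs per row and return the trough with the largest abscissa.

    pair_rows : iterable of (row_a, row_b) -- rows of the two bright-spot centers of
                each pair; a pair whose centers fall in different rows contributes
                0.5 to each of the two rows.
    """
    counts = np.zeros(n_rows)
    for ra, rb in pair_rows:
        if ra == rb:
            counts[ra] += 1.0
        else:
            counts[ra] += 0.5
            counts[rb] += 0.5
    # a trough is strictly lower than both neighbours; scan from the right so the
    # first hit is the trough with the largest abscissa
    for r in range(n_rows - 2, 0, -1):
        if counts[r] < counts[r - 1] and counts[r] < counts[r + 1]:
            return r
    return None  # no trough: fall back to the default vanishing point horizontal line
```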
When the row number distribution diagram has no peak and/or no trough, the default vanishing point horizontal line is selected as the first vanishing point horizontal line (VP Line I). Likewise, when the absolute value of the difference between the first row value and the default vanishing point horizontal line is greater than a third threshold, or when the absolute value of the difference between the second row value and the default vanishing point horizontal line is greater than the third threshold, the default vanishing point horizontal line is selected as VP Line I. Since the headlights are generally in the middle of the vehicle body, VP Line I is corrected according to a preset correction value: the correction value is added to the vanishing point horizontal line to obtain the second vanishing point horizontal line (VP Line II). In addition, braking, acceleration and driving up or down slopes change the pitch angle of the camera and thus cause large changes in the positions of the light sources in the image; a reference value is therefore introduced, the absolute value of the difference between VP Line I and the reference value is multiplied by a damping coefficient α, and the reference value is added to obtain the third vanishing point horizontal line (VP Line III). The VP Line III thus obtained is the VP Line that can be used to classify the bright spots.
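The sequence from the row statistics to VP Line III can be sketched as below. The fallback logic follows the text; using the computed row value as VP Line I when it lies close enough to the default is implied rather than stated, the damping step is applied to VP Line I as written, and all parameter values are assumptions:

```python
def vp_line(row_value, default_vp, correction, reference, alpha, t3):
    """Derive VP Line III from the row statistics (a sketch of the described sequence).

    row_value may be None when the distribution diagram has no usable trough/peak.
    """
    if row_value is None or abs(row_value - default_vp) > t3:
        vp1 = default_vp                      # VP Line I: fall back to the default line
    else:
        vp1 = row_value                       # implied: accept the computed row value
    vp2 = vp1 + correction                    # VP Line II: headlights sit mid-body
    vp3 = reference + alpha * abs(vp1 - reference)  # VP Line III: damped against pitch
    return vp2, vp3
```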
It can be understood that the correction value is a small positive integer whose specific value is set by the R&D personnel according to experimental data and actual requirements, and the third threshold is likewise set by the R&D personnel according to actual needs and experimental data; neither is limited in this application. It should be noted that, in other embodiments of the present application, the reference value and the magnitude of the damping coefficient α may be adaptively adjusted according to specific driving conditions such as braking, acceleration, or driving uphill or downhill. Moreover, the above way of introducing the reference value and the damping coefficient α is only an exemplary method in the present application; in a specific implementation it can be adjusted, or replaced by another method, to obtain better beneficial effects, which is not limited in this application.
S506: Use VP Line III to divide the image into regions.

After VP Line III is obtained, the row in the acquired image corresponding to the row value of VP Line III is used as the dividing line, and the acquired image is divided into a first area and a second area, the first area being the image area above the dividing line and the second area being the image area below the dividing line.
S507: Exclude the interference of light reflected from the road surface.

Because the distance between the light sources in the first area and the road surface is large, the reflected light they produce is dim and is very unlikely to satisfy the condition that the brightness is greater than or equal to the preset threshold, whereas the light sources in the second area are close to the road surface and their reflections are very likely to satisfy that condition; road reflections are therefore excluded mainly for the first bright spots in the second area. For each first bright spot in the second area, a position analysis is performed: it is detected whether other first bright spots exist in the area vertically below that bright spot with a left-right offset less than or equal to a preset distance. If other first bright spots exist in that area, they are screened out, yielding the second bright spots and the second light frame pairs. It can be understood that the first light frame pairs remaining after the screening operation are the second light frame pairs, and that the preset distance is set by the R&D personnel according to actual needs and experimental data, which is not limited in this application.
As shown in FIG. 11, a schematic diagram of eliminating the interference of light reflected from the road surface, a position analysis is performed on bright spot A, where the preset left-right offset is the distance from point M to point O (or from point N to point O); it can be understood that the distance from point M to point O equals the distance from point N to point O. Two schemes can be used to exclude road reflections. The first scheme detects whether other first bright spots exist in area X; the second scheme detects whether other first bright spots exist in the sector formed by area X and area Y, the sector being drawn with point P as the center and the distance from point P to point M as the radius, its angle being (α + β) with α = β. Under the first scheme, bright spot D is in area X, so bright spot D is screened out; under the second scheme, bright spot E is in the sector, so bright spot E is screened out.
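A sketch of the first (rectangular-region) scheme, assuming bright spots given as (x, y) centers with image y growing downward; the sector-based second scheme would replace the rectangle test with a radius-and-angle test:

```python
def remove_road_reflections(spots, max_offset):
    """Drop bright spots that sit vertically below another spot within max_offset horizontally.

    spots : list of (x, y) bright-spot centers in the second (lower) region;
    a spot with another spot above it in the same narrow column is treated as the
    road reflection of that upper spot and is screened out.
    """
    keep = []
    for x, y in spots:
        reflected = any(abs(x - ox) <= max_offset and oy < y  # some spot above it
                        for ox, oy in spots if (ox, oy) != (x, y))
        if not reflected:
            keep.append((x, y))
    return keep
```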
S508: Determine the headlight pairs and taillight pairs.

Brightness detection is performed on the second bright spots of the second light frame pairs in the second area, and color analysis is performed on the halos of those second bright spots. When the brightness of the second bright spots is within a first interval and the color of the halo is within a second interval, the second light frame pair is a headlight pair; when the brightness of the second bright spots is within a third interval and the color of the halo is within a fourth interval, the second light frame pair is a taillight pair.

It should be noted that, if the acquired image in the RGB color space is converted into an image in the YUV color space, the color analysis mainly considers the V component (red component) or the U component (blue component). Therefore, when the brightness of a second bright spot is detected, the value of the Y component (brightness value) of each pixel of the second bright spot is obtained, giving the average brightness value of the second bright spot; when color analysis of the second bright spot is performed, the value of the V component of each pixel of the second bright spot is obtained, giving the average V component value of the second bright spot. When the average brightness value is within the third interval and the average V component value is within the fourth interval, the second light frame pair is a taillight pair. Of course, the average value of the U component can also be calculated to distinguish colors, which is not limited in this application; for images in other color spaces the brightness detection and color analysis methods may differ, and the above is only an exemplary method of this application.

It should be noted that a second light frame pair can be determined to be a headlight pair or a taillight pair only when both bright spots in the pair meet the above conditions. In one embodiment of the present application, an error value is introduced: when the brightness information and color information of the two bright spots in the second light frame pair meet the above conditions within the error range, the second light frame pair can also be considered a headlight pair or a taillight pair. It can be understood that the specific error value can be set by the R&D personnel according to actual needs and experimental data, and that the first, second, third and fourth intervals are likewise set by the R&D personnel according to actual requirements and experimental data, none of which is limited in this application.
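A sketch of the brightness/color test on a YUV image, assuming boolean pixel masks for the two bright spots; the interval bounds are illustrative placeholders, not values taken from the application:

```python
import numpy as np

def classify_pair(y_plane, v_plane, mask_a, mask_b,
                  head=(200, 255), head_v=(0, 140),
                  tail=(120, 255), tail_v=(150, 255)):
    """Label a second light frame pair from mean Y (brightness) and mean V (redness).

    y_plane / v_plane : 2-D uint8 YUV planes of the image
    mask_a / mask_b   : boolean masks selecting the pixels of the two bright spots
    head, head_v, ... : placeholder brightness/color intervals (first to fourth)
    """
    def in_interval(x, lo_hi):
        return lo_hi[0] <= x <= lo_hi[1]

    def label(mask):
        mean_y = float(y_plane[mask].mean())
        mean_v = float(v_plane[mask].mean())
        if in_interval(mean_y, head) and in_interval(mean_v, head_v):
            return "headlight"
        if in_interval(mean_y, tail) and in_interval(mean_v, tail_v):
            return "taillight"
        return None

    la, lb = label(mask_a), label(mask_b)
    # both spots of the pair must meet the same conditions
    return la if la == lb and la is not None else "unclassified"
```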
S509: Determine the street lamps.

Brightness detection is performed on the second bright spots in the first area, and color analysis is performed on their halos; when the brightness of a second bright spot is within a fifth interval and the color of its halo is within a sixth interval, the second bright spot is a street lamp. The method for determining a street lamp is otherwise the same as the method for determining the headlight pairs and taillight pairs and is not described in detail here. The fifth interval and the sixth interval are set by the R&D personnel according to actual needs and experimental data, which is not limited in this application.
S510: Determine the traffic signal lights.

The bright spots in the first area whose brightness is greater than or equal to a fourth threshold are selected as third bright spots; brightness detection is performed on the third bright spots in the first area, and color analysis is performed on their halos. When the brightness of a third bright spot is within a seventh interval and the color of its halo is within an eighth interval, the third bright spot is a red light of a traffic signal; when the brightness is within a ninth interval and the halo color is within a tenth interval, the third bright spot is a green light of a traffic signal; when the brightness is within an eleventh interval and the halo color is within a twelfth interval, the third bright spot is a yellow light of a traffic signal. The method for determining a traffic signal light is otherwise the same as the method for determining the headlight pairs and taillight pairs and is not described in detail here. It can be understood that the seventh to twelfth intervals, as well as the fourth threshold, are set by the R&D personnel according to actual requirements and experimental data, which is not limited in this application.
S511: Correct misclassified cases.

Misclassification may occur during light source classification. For example, large vehicles such as buses and trucks have two lights on both the upper and lower sides of the front and rear of the vehicle, so two lights may appear in the first area and two in the second area; this needs to be corrected. A vertical analysis is performed on the second light frame pairs in the second area to find a light frame pair located vertically above a second light frame pair at a distance within a thirteenth interval. When the brightness of that light frame pair is within a fourteenth interval and the color of its halo is within a fifteenth interval, the light frame pair is the headlights of a large vehicle; when its brightness is within a sixteenth interval and the color of its halo is within a seventeenth interval, the light frame pair is the taillights of a large vehicle. The brightness detection and color analysis here are performed in the same way as when determining the headlight pairs and taillight pairs. It can be understood that the thirteenth to seventeenth intervals are set by the R&D personnel according to actual needs and experimental data, which is not limited in this application.
It should be noted that S504 (determining the distance between distant vehicle lights and the own vehicle) can be performed after S507 (excluding the interference of road reflections), or after S508 (determining the headlight and taillight pairs), S509 (determining the street lamps), S510 (determining the traffic signal lights) or S511 (correcting misclassified cases).
In addition, in one embodiment of the present application, the following operations are performed on the acquired original image: the central area of the image is selected as the ROI area, and the image is compressed. The central area is the area where the headlights of distant vehicles may appear, and specific parameters such as its shape and size may be preset by the R&D personnel according to experimental data. Compressing the image before performing the subsequent operations (calculating the weighted brightness value and so on) reduces the amount of computation, but some detail in the image is lost. Light source classification is therefore carried out by combining the compressed image with the central area cropped from the original image: the compressed image is compared with the selected central area image, the light sources inside the central area are classified according to the central area image, and the light sources in the other areas are classified according to the compressed image.
The above content has introduced the light source classification model and its training process. The following describes the flow of the intelligent light switching method provided by this application. As shown in FIG. 12, the method includes but is not limited to the following steps:
S1210: Acquire an image.

Specifically, the intelligent light switching system acquires the image in front of the own vehicle in real time through the camera module. The image includes, but is not limited to, light source information such as the lights of vehicles ahead, street lamps on both sides of the road and traffic signal lights, as well as ambient light information. It can be understood that the camera module can capture images in real time at a preset interval, for example every 1/20 s.
In one embodiment of the present application, the front image acquired in real time by the intelligent light switching system through the camera module comprises 3 frames: a short-exposure image, a medium-exposure image and a long-exposure image, where the exposure time of the short-exposure image is the shortest, that of the long-exposure image the longest, and that of the medium-exposure image in between. The exposure times can be set in various ways, for example 10 ms for short exposure, 20 ms for medium exposure and 30 ms for long exposure, or 5 ms, 10 ms and 20 ms respectively; short, medium and long exposure are therefore only relative terms, and the specific exposure times are set by the R&D personnel according to actual needs and experimental data, which is not limited here. It should be noted that a light source presents different image features under different exposure times, so analyzing 3 frames helps obtain more light source information; in other embodiments of the present application, more or fewer images may be acquired for subsequent analysis.
In one embodiment of the present application, the front image acquired by the intelligent light switching system is an image in the RGB color space and needs to be converted into an image in the YUV color space. It should be noted that the conversion from RGB to YUV is only one exemplary approach of this application; in practice other conversions, for example into the HSI color space, are also possible.
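One common conversion (BT.601, full range) is sketched below; the application does not prescribe a specific conversion matrix, so the coefficients here are an assumption:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601 full-range RGB -> YUV conversion.

    rgb : (H, W, 3) float array with values in [0, 255].
    """
    m = np.array([[ 0.299,  0.587,  0.114],   # Y  (brightness)
                  [-0.169, -0.331,  0.500],   # U  (blue chroma, Cb)
                  [ 0.500, -0.419, -0.081]])  # V  (red chroma, Cr)
    yuv = rgb @ m.T
    yuv[..., 1:] += 128.0                     # center the chroma planes
    return yuv
```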
S1220: Calculate the ambient light brightness value corresponding to the image.

Specifically, the intelligent light switching system selects at least one ROI area in the acquired image, obtains the brightness value of each pixel in the selected area, calculates the average brightness value, and then calculates a weighted brightness value according to preset weights; this weighted brightness value is the ambient light brightness value. It can be understood that specific parameters such as the size and shape of the selected ROI areas are preset by the R&D personnel according to the image, actual needs and experimental data, and that the weights are set by the R&D personnel according to experimental data; neither is limited in this application. In one embodiment of the present application, the central area and the entire image can be selected as the ROI areas, the central area being the area where the headlights of distant vehicles may appear; as stated above, specific parameters such as the shape and size of the central area can be preset by the R&D personnel according to experimental data. In another embodiment, the central area, the upper area and the entire image can be selected as the ROI areas, the central area being the area where the lights of distant vehicles may appear and the upper area being the area where street lamps may appear; as stated above, specific parameters such as the shape and size of the central area and the upper area can be preset by the R&D personnel according to experimental data.
It should be noted that, for an image in the YUV color space, the Y component represents the brightness information, so obtaining the value of the Y component of a pixel can be regarded as obtaining the brightness value of that pixel. As an example, suppose the central area and the entire image are selected as ROI areas, the calculated average brightness of the central area is 200 and that of the entire image is 120; if the weights of the central area and the entire image are set to 3:7, the weighted brightness value is 200 × 0.3 + 120 × 0.7 = 144.
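A sketch of the weighted ROI computation on the Y plane, reproducing the 200/120 example above; the ROI rectangles and weights are inputs chosen by the R&D personnel:

```python
import numpy as np

def ambient_brightness(y_plane, rois, weights):
    """Weighted ambient light brightness from the Y plane.

    rois    : list of (x1, y1, x2, y2) ROI rectangles (e.g. central area + whole image)
    weights : matching weights that sum to 1
    """
    means = [float(y_plane[y1:y2, x1:x2].mean()) for (x1, y1, x2, y2) in rois]
    return float(np.dot(means, weights))

# Example from the description: central mean 200, whole-image mean 120, weights 3:7
# -> 200 * 0.3 + 120 * 0.7 = 144
```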
S1230: Classify the light sources contained in the image.

Specifically, the image is input into the light source detection model, and the light source categories in the image are obtained according to the light source detection model. It should be noted that there is no fixed order between calculating the ambient light brightness and classifying the light sources: in practice the ambient light brightness can be calculated first, the light sources can be classified first, or both can be carried out at the same time.
S1240: Determine whether to switch between the high beam and the low beam according to the ambient light brightness value corresponding to the image and the classification result.

Specifically, if the ambient light brightness value corresponding to the image is greater than or equal to a sixth threshold and the light source classification result shows that there are indeed vehicle lights within the irradiation distance of the high beam, the brightness is not caused by an interfering light source, and a switch to the low beam is decided. If the ambient light brightness value is greater than or equal to the sixth threshold but the classification result shows no vehicle lights within the irradiation distance of the high beam, the ambient light brightness value can be considered to be affected by an interfering light source, and a switch to the high beam is decided. If the ambient light brightness value is less than the sixth threshold and the classification result shows no vehicle lights within the irradiation distance of the high beam, a switch to the high beam is decided; if the ambient light brightness value is less than the sixth threshold and the classification result shows vehicle lights within the irradiation distance of the high beam, a switch to the low beam is decided.
It can be understood that the irradiation distance of the high beam is set by the R&D personnel according to actual requirements and the parameters of the vehicle lamps, which is not limited in this application. In addition, since the camera acquires images in real time and the intelligent light switching system also performs the ambient light brightness calculation and the light source classification in real time, the light source information and its distance information can be obtained in real time. In one embodiment of the present application, a delayed switching mode can be set: when the light source classification result shows that the difference between the distance from a light source to the own vehicle and the irradiation distance of the own vehicle's high beam is within a preset interval, a delayed switch to the high beam is decided. It can be understood that the preset interval and the delay time can be set by the R&D personnel according to actual needs, which is not limited in this application.
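The four cases of S1240 reduce to a small decision function; note that in all four cases the outcome tracks the classification result, with the ambient value serving to flag interfering light sources. A sketch (the delayed-switching mode is omitted):

```python
def decide_beam(ambient, sixth_threshold, lights_in_range):
    """Four-case decision of S1240, fusing ambient brightness and classification."""
    if ambient >= sixth_threshold and lights_in_range:
        return "low"    # genuinely bright because of real vehicle lights ahead
    if ambient >= sixth_threshold and not lights_in_range:
        return "high"   # brightness came from an interfering light source
    if ambient < sixth_threshold and not lights_in_range:
        return "high"   # dark and clear within the high-beam irradiation distance
    return "low"        # dark, but a vehicle sits within the high-beam range
```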
It should be noted that the intelligent light switching system can be activated manually or automatically. The manual activation mode works with the switching method in S1240 above: the driver triggers the intelligent light switching system upon judging that the lights need to be turned on. If the automatic activation mode is adopted, the ambient light brightness can be calculated first; when it is greater than or equal to a seventh threshold, the environment of the own vehicle can be considered not to require the lights to be turned on and the light sources need not be classified, otherwise the light sources are classified and it is judged whether to switch to the high beam or the low beam, which reduces energy consumption. Alternatively, a separate module for calculating the ambient light brightness can be provided: when the ambient light brightness obtained by this module is greater than or equal to the seventh threshold, the intelligent light switching system can be considered not to need activation; otherwise the intelligent light switching system is triggered and then switches between the high beam and the low beam according to the methods described in S1210-S1240. It can be understood that the seventh threshold is set by the R&D personnel according to actual requirements and experimental data, which is not limited in this application.
The above has described the method of the embodiments of the present application in detail; to facilitate its implementation, related devices are provided below. FIG. 3 is a schematic structural diagram of an intelligent light switching system provided by the present application, which is used to execute the intelligent light switching method described in FIG. 12. This application does not limit the division of the functional units of the intelligent light switching system, and the units in the system can be added, reduced or combined as required. The operations and/or functions of the units in the intelligent light switching system implement the corresponding flows of the method described in FIG. 12 and are not repeated here for brevity. FIG. 3 exemplarily provides one division into functional units: the intelligent light switching system 300 includes an acquisition unit 310, an ambient light detection unit 320, a light source classification unit 330 and a switching unit 340.
Specifically, the acquisition unit 310 is configured to execute the foregoing step S1210; the ambient light detection unit 320 is configured to execute the foregoing step S1220; the light source classification unit 330 is configured to execute the foregoing steps S501-S511 and S1230; and the switching unit 340 is configured to execute the foregoing step S1240, each optionally executing the optional methods in the foregoing steps. Data can be transferred among the four units through communication paths. It should be understood that each unit included in the intelligent light switching system 300 may be a software unit, a hardware unit, or partly a software unit and partly a hardware unit.
FIG. 13 is a schematic structural diagram of a computing device provided by an embodiment of the present application. As shown in FIG. 13, the computing device 1300 includes a processor 1310, a communication interface 1320 and a memory 1330, which are connected to one another through an internal bus 1340. The computing device 1300 may be the intelligent light switching system 300 in FIG. 3, and the functions performed by the intelligent light switching system 300 in FIG. 3 are actually performed by its processor 1310.
The processor 1310 may be composed of one or more general-purpose processors, such as a central processing unit (CPU), or a combination of a CPU and a hardware chip. The above hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof, and the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL) or any combination thereof.
The communication interface 1320 is used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a core network or a wireless local area network (WLAN).
The bus 1340 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus or the like. The bus 1340 can be divided into an address bus, a data bus, a control bus and so on; for ease of presentation only one thick line is used in FIG. 13, but this does not mean that there is only one bus or one type of bus.
The memory 1330 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); and it may also include a combination of the above types. The memory 1330 is used to store the program code for executing the above embodiments of the intelligent light switching method. In one implementation, the memory 1330 may also cache other data, with execution controlled by the processor 1310, to implement the functional units of the intelligent light switching system 300, or to implement the method steps in the method embodiment shown in FIG. 12 with the intelligent light switching system 300 as the execution subject.
Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, it can implement some or all of the steps recorded in any of the above method embodiments, and realize the function of any one of the functional units described in FIG. 3.
Embodiments of the present application also provide a computer program product which, when run on a computer or a processor, causes the computer or processor to execute one or more of the method steps of any of the above methods with the intelligent light switching system 300 as the execution subject. If the component modules of the above device are implemented in the form of software functional units and sold or used as independent products, they can be stored in the computer-readable storage medium.
It should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative: the division into units is only a division by logical function, and in actual implementation there may be other ways of dividing, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms. The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The steps in the methods of the embodiments of the present application may be reordered, combined and deleted according to actual needs, and the modules in the apparatus of the embodiments of the present application may likewise be combined, divided and deleted according to actual needs.

Abstract

The present application provides an intelligent light switching method, system and related device. The method includes: an intelligent light switching system acquires an image, the image being captured by a camera arranged at a fixed position of a vehicle and recording light source information; calculates an ambient light brightness value corresponding to the image; classifies, according to the light source information, the light sources contained in the image to obtain a classification result; and performs high beam switching or low beam switching according to the ambient light brightness value corresponding to the image and the classification result. The above method fuses two kinds of information, ambient light information and light source information, so that more information about the night road is available when judging whether to switch to the high beam or the low beam, improving the accuracy of light switching.

Description

一种智能灯光切换方法、系统及相关设备
本申请要求于2020年10月31日提交中国专利局、申请号为202011197967.5、申请名称为“一种智能灯光切换方法、系统及相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及智能车辆领域,尤其涉及一种智能灯光切换方法、系统及相关设备。
背景技术
众所周知,夜间行车需要借助远光灯和近光灯来照明,便于驾驶员获取道路信息。相比于近光灯,远光灯在提高视线、扩大观察视野等方面的作用更加显著,尤其在没有路灯的漆黑路面上,开启远光灯后的可视范围要远远大于只开启近光灯时的可视范围。然而,远光灯并非适用于所有夜间道路,比如,正面会车时,若开启远光灯,可能导致对方驾驶员视觉上瞬间致盲,也可能导致对方驾驶员对速度和距离的感知力以及对宽度的判断力下降。因此,正确使用远光灯对夜间安全行驶至关重要。为了从源头上解决驾驶员错误使用远光灯的行为,智能灯光切换技术应运而生。
现有的智能灯光切换方法中,可以通过灯源在不同曝光时间下呈现的图像特征不同,来对灯源进行区分,或者直接计算前方环境光亮度,再或者根据车道线获取消失点,从而辅助驾驶员切换远近光灯。但是,上述方法的实施效果并不理想,实现上述第一种方法对摄像头模组的要求较高,因此该方法的实施需要较高成本;上述第二种方法忽略了远处大灯的亮度影响,使得环境光亮度的计算存在较大误差;上述第三种方法的实现依赖于车道线的准确获取,可是在夜间准确获取车道线的概率极低。
因此,如何准确实现远光灯切换或近光灯切换是目前亟待解决的问题。
发明内容
本申请提供了一种智能灯光切换方法、系统及相关设备,能够准确检测环境光亮度,并准确分类灯源信息,将两种信息融合后,可以准确地执行远光灯或近光灯切换功能,有效解决了远处光源未被发现、干扰光源影响的问题。
第一方面,本申请提供一种智能灯光切换方法,所述方法包括:获取图像,所述图像由设置于车辆的固定位置的摄像机拍摄得到,所述图像中记录了灯源信息;计算所述图像对应的环境光亮度值;根据所述灯源信息,对所述图像中所包含的灯源进行分类,获得分类结果;根据所述图像对应的环境光亮度值和所述分类结果,进行远光灯切换或者近光灯切换(灯光切换)。
在本申请提供的方案中,在获取图像之后,进行环境光亮度计算,并且进行灯源分类,然后将其结果结合起来再判断是否进行灯光切换,这种灯光切换方法融合了环境光信息和灯源信息两种信息,使得在判断灯光切换时有更多的夜间道路的相关信息可以参考,提高 了灯光切换的准确率。
结合第一方面,在第一方面的一种可能的实现方式中,所述计算所述图像对应的环境光亮度值,包括:在所述图像中选取至少一个区域,并计算所述至少一个区域的亮度值;根据所述至少一个区域的亮度值,计算得到所述图像对应的环境光亮度值。
在本申请提供的方案中,计算环境光亮度值时,选取至少一个区域计算其亮度值,然后将其亮度值结合起来得到环境光亮度值,充分考虑了图像不同区域可能存在的灯源信息,相较于仅通过整幅图像计算环境光亮度,本申请提供的方法可以更准确地获得环境光亮度。
结合第一方面,在第一方面的一种可能的实现方式中,所述对所述图像中所包含的灯源进行分类,获得分类结果,包括:将所述图像输入灯源检测模型,根据所述灯源检测模型获得所述图像中的灯源类别。
在本申请提供的方案中,灯源分类是通过灯源检测模型实现的,经过训练后,灯源检测模型可以根据灯源的大小、位置、颜色等特征快速准确地实现灯源分类。
结合第一方面,在第一方面的一种可能的实现方式中,所述将所述图像输入灯源检测模型之前,所述方法还包括:利用多个样本图像对所述灯源检测模型进行训练,所述样本图像中包括所述灯源及所述灯源的标注信息。
在本申请提供的方案中,正式应用灯源检测模型之前,需要对其进行训练,所述训练是指初始模型识别并学习样本图像中的灯源信息的过程,训练完成后,灯源检测模型可以准确识别不同灯源的不同特征,提高了灯源分类的准确率。
结合第一方面,在第一方面的一种可能的实现方式中,所述根据所述灯源检测模型获得所述图像中的灯源类别,包括:从所述图像中选取亮度值大于预设阈值的亮斑,并对所述亮斑设置灯框;对所述灯框进行配对,得到多个灯框对;根据所述多个灯框对,确定VP Line,所述VP Line用于区分所述图像的第一区域和第二区域;根据所述亮斑与所述VP Line的位置关系及所述亮斑的颜色特征,对所述亮斑进行分类,得到不同类型的灯源类别。
在本申请提供的方案中,进行灯源分类,首先需要获取图像中亮度较高的亮斑并设置灯框,这是对光源的初步筛选,再进行灯源配对,根据车灯对之间的关系进行配对,相较于单个识别,这种配对的方式提高了分类的效率和准确率,然后根据灯框对在图像中的分布确定消失点水平线VP Line,最后根据亮斑与VP Line的位置关系及其亮度、颜色等特征,对亮斑进行分类,可理解,确定VP Line之后就能大概划分出车灯区域、路灯及信号灯的区域,后续分类过程会更加准确且有效。
结合第一方面,在第一方面的一种可能的实现方式中,对所述灯框进行配对之前,所述方法还包括:对所述灯框进行去重,以使得所述灯框之间不发生重叠或相切。
在本申请提供的方案中,对灯框进行配对之前需要对其进行去重,让灯框间不会重叠或者相切,从而使得后续分类过程中灯框间不会相互影响,提高了灯源分类的准确性。
结合第一方面,在第一方面的一种可能的实现方式中,根据所述多个灯框对,确定消失点水平线VP Line,包括:对所述灯框对的灯框中心在图像中的行进行映射,得到灯框对的行数分布图;根据所述行数分布图,选取VP Line;根据预设的修正值对所述VP Line进行修正,得到修正后的VP Line;利用参考值对所述修正后的VP Line进行调整,获得用于对所述亮斑进行分类的VP Line,其中,所述参考值用于描述所述摄像机俯仰角的变化。
在本申请提供的方案中,VP Line最初是由灯框对的行数分布图确定的,后续需要对其进行修正并引入参考值,使得最后获得的VP Line更加准确,从而提高了后续灯源分类的准确性。
第二方面,本申请提供一种智能灯光切换系统,所述系统,包括:获取单元,用于获取图像,所述图像由设置于车辆的固定位置的摄像机拍摄得到,所述图像中记录了灯源信息;环境光探测单元,用于计算所述图像对应的环境光亮度值;灯源分类单元,用于根据所述灯源信息,对所述图像中所包含的灯源进行分类,获得分类结果;切换单元,根据所述图像对应的环境光亮度值和所述分类结果,进行远光灯切换或者近光灯切换(灯光切换)。
结合第二方面,在第二方面的一种可能的实现方式中,所述环境光探测单元,具体用于:在所述图像中选取至少一个区域,并计算所述至少一个区域的亮度值;根据所述至少一个区域的亮度值,计算得到所述图像对应的环境光亮度值。
结合第二方面,在第二方面的一种可能的实现方式中,所述灯源分类单元,用于对所述图像中所包含的灯源进行分类,获得分类结果时,具体用于:将所述图像输入灯源检测模型,根据所述灯源检测模型获得所述图像中的灯源类别。
结合第二方面,在第二方面的一种可能的实现方式中,所述灯源分类单元,用于将所述图像输入灯源检测模型之前,还用于:利用多个样本图像对所述灯源检测模型进行训练,所述样本图像中包括所述灯源及所述灯源的标签。
结合第二方面,在第二方面的一种可能的实现方式中,所述灯源分类单元,用于根据所述灯源检测模型获得所述图像中的灯源类别时,具体用于:从所述图像中选取亮度值大于预设阈值的亮斑,并对所述亮斑设置灯框;对所述灯框进行配对,得到多个灯框对;根据所述多个灯框对,确定消失点水平线VP Line,所述VP Line用于区分所述图像的第一区域和第二区域;根据所述亮斑与所述VP Line的位置关系及所述亮斑的颜色特征,对所述亮斑进行分类,得到不同类型的灯源类别。
结合第二方面,在第二方面的一种可能的实现方式中,所述灯源分类单元,用于对所述灯框进行配对之前,还用于:对所述灯框进行去重,以使得所述灯框之间不发生重叠或相切。
结合第二方面,在第二方面的一种可能的实现方式中,所述灯源分类单元,用于根据所述多个灯框对,确定消失点水平线VP Line时,具体用于:对所述灯框对的灯框中心在图像中的行进行映射,得到灯框对的行数分布图;根据所述行数分布图,选取VP Line;根据预设的修正值对所述VP Line进行修正,得到修正后的VP Line;利用参考值对所述修正后的VP Line进行调整,获得用于对所述亮斑进行分类的VP Line,其中,所述参考值用于描述所述摄像机俯仰角的变化。
第三方面,提供了一种计算设备,所述计算设备包括处理器和存储器,所述存储器用于存储程序代码,所述处理器用于所述存储器中的程序代码执行上述第一方面以及结合上述第一方面中的任意一种实现方式所提供的智能灯光切换方法。
第四方面,提供了计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,当该计算机程序被处理器执行时,所述处理器执行上述第一方面以及结合上述第一方面中的任意一种实现方式所提供的智能灯光切换方法。
第五方面,提供了一种计算机程序产品,该计算机程序产品包括指令,当该计算机程序产品被计算机执行时,使得计算机可以执行上述第一方面以及结合上述第一方面中的任意一种实现方式所提供的智能灯光切换的流程。
附图说明
图1为本申请实施例提供的一种智能灯光切换方法的应用场景的示意图;
图2为本申请实施例提供的一种智能车辆的结构示意图;
图3为本申请实施例提供的一种智能灯光切换系统的结构示意图;
图4为本申请实施例提供的一种初始灯源检测模型的结构示意图;
图5为本申请实施例提供的一种提取图像中灯源特征方法的示意图;
图6为本申请实施例提供的一种给第一亮斑设置灯框的示意图;
图7为本申请实施例提供的一种对灯框进行去重的示意图;
图8为本申请实施例提供的一种灯框配对方法的示意图;
图9为本申请实施例提供的一种绘制行数分布图的示意图;
图10为本申请实施例提供的一种确定第一行数值的示意图;
图11为本申请实施例提供的一种排除路面反射干扰的示意图;
图12为本申请实施例提供的一种智能灯光切换方法的示意图;
图13为本申请实施例提供的一种计算设备的结构示意图。
具体实施方式
下面结合附图对本申请实施例中的技术方案进行清楚、完整的描述,显然,所描述的实施例仅仅是本申请的一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
首先,结合附图对本申请中所涉及的部分用语和相关技术进行解释说明,以便于本领域技术人员理解。
RGB颜色空间以红、绿、蓝三种基本色为基础,进行不同程度的叠加,产生丰富而广泛的颜色,所以俗称三基色模式,也被称为自然色彩模式。红绿蓝代表可见光谱中的三种基本颜色(三原色),每一种颜色按其亮度的不同分为256个等级。当色光三原色重叠时,由于不同的混色比例能产生各种中间色。显示器系统一般使用的就是RGB颜色空间,彩色阴极射线管,彩色光栅图形的显示器都使用R、G、B数值来驱动R、G、B电子枪发射电子,并分别激发荧光屏上的R、G、B三种颜色的荧光粉发出不同亮度的光线,并通过相加混合产生各种颜色。
YUV颜色空间包含一个亮度信号以及两个色度信号,亮度信号经常被称作Y,色度信号是由两个互相独立的信号(表示红色的色度信号和表示蓝色的色度信号)组成,两种色度信号经常被称作UV或PbPr或CbCr,所以YUV颜色空间也被称为YPbPr颜色空间,或者YCbCr颜色空间。将亮度信息从色度信息中分离了出来,并且对同一帧图像的亮度和色度采用了不同的采样率。在YUV颜色空间中,亮度信息Y与色度信息U\V相互独立。Y信号分量为黑白灰度图。U、V信号分量为单色彩色图。由于人类视觉对亮度的敏感度比对 色度的敏感度高,所以YUV颜色空间为彩色电视系统广泛使用。
HSI颜色空间用H、S、I三个参数来描述颜色特性,其中H定义颜色的频率,称为色调;S表示颜色的深浅程度,称为饱和度;I表示强度或亮度。
感兴趣区域(region of interest,ROI):机器视觉、图像处理中,从被处理的图像中以方框、圆、椭圆、不规则多边形等方式划分出的需要处理的区域,称为感兴趣区域ROI。在Halcon、OpenCV、MATLAB等机器视觉软件上常用到各种算子(Operator)和函数来求得感兴趣区域ROI,并进行图像的下一步处理。
交并比(Intersection over Union,IOU)是指两个矩形框面积的交集和并集的比值。IOU是一种测量在特定数据集中检测相应物体准确度的一个标准。IOU是一个简单的测量标准,只要是在输出中得出一个预测范围(bounding boxes)的任务都可以用IOU来进行测量。
先进驾驶辅助系统(Advanced Driver Assistance System,ADAS)是利用安装于车上的各式各样的传感器,在第一时间收集车内外的环境数据,进行静、动态物体的辨识、侦测与追踪等技术上的处理,从而能够让驾驶者在最快的时间察觉可能发生的危险,以引起注意和提高安全性的主动安全技术。ADAS采用的传感器主要有摄像头、雷达、激光和超声波等,可以探测光、热、压力或其它用于监测汽车状态的变量,通常位于车辆的前后保险杠、侧视镜、驾驶杆内部或者挡风玻璃上。早期的ADAS技术主要以被动式报警为主,当车辆检测到潜在危险时,会发出警报提醒驾车者注意异常的车辆或道路情况。对于最新的ADAS技术来说,主动式干预也很常见。
自适应巡航控制(Adaptive Cruise Control,ACC)系统,是在定速巡航系统的基础上发展而来的,除了具有定速巡航系统的功能,即可依照驾驶者所设定的速度行驶的功能外,还可以实现保持预设的跟车距离以及随着车距变化自动加速与减速的功能。相较定速巡航系统而言,ACC系统能够更好地帮助驾驶员协调刹车和油门。
路侧单元(Road Side Unit,RSU)是自动道路缴费(Electronic Toll Collection,ETC)系统中,安装在路侧,采用专用短距离通讯(Dedicated Short Range Communications,DSRC),与车载单元(On Board Unit,OBU)进行通讯,实现车辆身份识别,电子扣分的装置。RSU可以由高增益定向束控读写天线和射频控制器组成。高增益定向束控读写天线是一个微波收发模块,负责信号和数据的发送/接收、调制/解调、编码/解码、加密/解密;射频控制器是控制发射和接收数据以及处理向上位机收发信息的模块。
本申请提供的智能灯光切换方法由智能灯光切换系统执行,为了便于理解本申请实施例,首先对本申请实施例基于的一种智能灯光切换系统的应用场景进行描述。如图1所示,图1是一种智能灯光切换系统的应用场景的示意图,图1所示的场景为车联网中的一个场景,该场景包括多辆智能车辆、无线电发射塔、RSU,而智能灯光切换系统可以应用在所述智能车辆的车载系统上,比如应用在ADAS、ACC系统中,可以实现在辅助驾驶、自动驾驶等多种场景中的智能远近光灯的切换,另外,智能灯光切换系统也可以作为独立系统安装在所述智能车辆中,与所述智能车辆中的其他辅助驾驶系统区分开来。
基于上述智能灯光切换系统的应用场景,本申请实施例提供了一种应用于上述智能灯 光切换系统的应用场景中的智能车辆200,请参见图2,图2是本申请实施例提供的一种智能车辆200的结构示意图。
需要说明的是,智能车辆200可以设置为完全智能驾驶模式,也可以设置为部分智能驾驶模式。可理解,当智能车辆200设置为完全智能驾驶模式时,智能车辆200可以在不和人交互的情况下进行相应操作,所述操作包括但不限于加速、减速、跟车;当智能车辆设置为部分智能驾驶模式时,智能车辆200不仅可以自动执行相应操作,同时还可以由驾驶员来执行相应操作,例如,确定车辆及其周边环境,确定周边环境中的至少一个其他车辆的可能行为,确定所述其他车辆执行可能行为的可能性相对应的置信水平,然后,基于所确定的信息来控制智能车辆200。
智能车辆200可包括各种子系统,例如行进系统210、传感器系统220、控制系统230、一个或多个外围设备240以及计算机系统250、电源260和用户接口270。可选地,智能车辆200可包括更多或更少的子系统,每个子系统可包括多个元件。另外,智能车辆200的每个子系统和元件可以通过多种方式互连,例如,可通过有线或无线方式互连。
行进系统210可包括为智能车辆200提供动力的组件。在一个实施例中,行进系统210可包括引擎2110、能量源2120、传动装置2130和车轮/轮胎2140。引擎2110可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎2110将能量源2120转换成机械能量。
能量源2120的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源2120也可以为智能车辆200的其他系统提供能量。
传动装置2130可以将来自引擎2110的机械动力传送到车轮2140。传动装置2130可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置2130还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮2140的一个或多个轴。
传感器系统220可包括感测智能车辆200周边环境信息以及获取其自车信息的若干个传感器。例如,传感器系统220可包括定位系统2210、惯性测量单元(inertial measurement unit,IMU)2220、雷达2230以及视觉传感器2240。其中,定位系统2210可包括GPS系统、北斗系统或者其他定位系统。传感器系统220还可包括被监视智能车辆200的内部系统的传感器,例如,车内空气质量监测器、燃油量表、机油温度表等。由这些传感器所获取的数据可用于检测对象及其相应特性,所述特性包括但不限于位置、形状、方向、速度。这种检测和识别对于智能车辆200安全执行后续操作意义重大。
定位系统2210可用于确定智能车辆200的地理位置。
IMU 2220可基于惯性加速度来感测智能车辆200的位置和朝向变化。在一个实施例中,IMU 2220可以是加速度计和陀螺仪的组合,此时,IMU 2220可用于测量智能车辆200的曲率。
雷达2230可利用无线信号来感测智能车辆200的周边环境,该周边环境包括但不限于周围车辆、基础设施、行人。可理解,雷达2230可以包括但不限于毫米波雷达、激光雷达。在一些实施例中,除了感测周边环境以外,雷达2230还可用于感测环境中的物体运动状态。
视觉传感器2240可用于捕捉智能车辆200周边环境的多个图像。视觉传感器2240可以包括但不限于静态相机、视频相机。
控制系统230可用于控制智能车辆200及其组件的操作。控制系统230可包括多个元件,在一个实施例中,控制系统230包括转向系统2310、执行器2320、制动单元2330、计算机视觉系统2340、路线控制系统2350、障碍物避免系统2360以及远近光灯切换系统2370。
转向系统2310可通过操作来调整智能车辆200的前进方向。例如,在一个实施例中,转向系统2310可以包括方向盘系统。
执行器2320可用于控制引擎2110进而控制智能车辆200的速度。例如,在一个实施例中,执行器2320可以包括油门。
制动单元2330可用于控制智能车辆200进行减速。制动单元2330可使用摩擦力来降低车轮2140的转速。在其他实施例中,制动单元2330可将车轮2140的动能转换为电流。制动单元2330也可采取其他方法来降低车轮2140的转速从而控制智能车辆200的速度。
可理解,执行器2320和制动单元2330可合并成一个单元模块,所述合并后的单元模块可用于控制智能车辆200的速度,在一个实施例中,所述合并后的单元模块可以包括油门系统和刹车系统。
计算机视觉系统2340可用于处理和分析由视觉传感器2240捕捉的图像,以便进行后续操作。通过计算机视觉系统2340,还能识别智能车辆200周边环境、周边环境中的物体的特征及其运动状态。所述周边环境可包括交通信号、道路边界和障碍物,所述周边环境中的物体的特征包括但不限于其表面光学特性,所述运动状态包括但不限于静止、加速、减速。计算机视觉系统2340可使用颜色空间转换技术、物体识别算法、运动中恢复结构(Structure from Motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统2340包括图像检测系统、基于神经网络的处理系统等,可以用于为环境绘制地图、跟踪物体、估计物体的速度等。
路线控制系统2350用于确定智能车辆200的行驶路线。在一些实施例中,路线控制系统2350可结合来自定位系统2210的一个或多个预定地图的数据来为智能车辆200确定行驶路线。
障碍物避免系统2360用于识别、评估、避免或绕过周围环境中的障碍物。在一个实施例中,障碍物避免系统2360需要借助雷达2230、视觉传感器2240获取周围环境的信息,然后通过计算机视觉系统2340分析周围环境,识别出潜在障碍物,再由障碍物避免系统2360进行评估和规避。
远近光灯切换系统2370用于智能切换远近光灯。在一个实施例中,远近光灯切换系统2370可以根据环境光自行启动,自动进行远近光切换,也可以人为启动。
需要说明的是,控制系统230可以增加其他组件,也可以替换和/或减少上文描述的那些组件。
智能车辆200通过外围设备240与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备240可包括但不限于无线通信系统2410、车载电脑2420、麦克风2430和/或扬声器2440。
需要说明的是,在一些实施例中,外围设备240可以与智能车辆200的用户进行交互,例如,车载电脑2420可以给智能车辆200的用户提供信息,同时,智能车辆200的用户也可以将数据上传至车载电脑2420,可理解,智能车辆200的用户可以通过车载电脑2420的触摸屏来进行操作。另外,外围设备240可提供智能车辆200与车内其他设备通信的手段,例如,麦克风2430可从智能车辆200的用户接收音频,所述音频可以包括语音命令及其他音频输入。类似地,扬声器2440可向智能车辆200的用户输出音频。
无线通信系统2410可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统2410可使用3G蜂窝通信,例如CDMA、EVDO、GSM/GPRS,或者4G蜂窝通信,例如LTE,或者5G蜂窝通信。无线通信系统2410可利用WiFi与无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统2410可利用红外链路、蓝牙或ZigBee与设备直接通信,所述设备可以包括但不限于车辆和/或路边台站之间的公共设施。
另外,在本申请的一个实施例中,多辆智能车辆之间能通过V2X进行通信,因此,无线通信系统2410中还可以包括一个或多个DSRC设备、一个或多个LTE-V2X设备。
电源260可向智能车辆200的各种组件提供电力。在一个实施例中,电源260可以包括一个或多个电池组,所述电池组中的电池可为可再充电锂离子电池或铅酸电池。可理解,在一些实施例中,电源260和能量源2120可一起实现。
智能车辆200的部分或所有功能受计算机系统250控制。计算机系统250可包括一个或多个处理器2520,由处理器2520执行指令25110,所述指令25110存储在例如存储器2510这样的非暂态计算机可读介质中。计算机系统250还可以是采用分布式方式控制智能车辆200的个体组件或子系统的多个计算设备。
处理器2520可以是任何常规的处理器,诸如商业可获得的CPU。可选的,该处理器可以是诸如专用集成电路(Application Specific Integrated Circuit,ASIC)或其它基于硬件的处理器的专用设备。尽管图2功能性地图示了处理器、存储器和计算机等器件,但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机、或存储器。例如,存储器可以是硬盘驱动器或位于不同于计算机的外壳内的其它存储介质。因此,对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,存储器2510可包含指令25110(例如,程序逻辑),指令25110可被处理器2520执行从而实现智能车辆200的包括上述功能在内的各种功能。存储器2510也可包含额外的指令,包括向行进系统210、传感器系统220、控制系统230和外围设备240中的一个或多个发送数据、接收数据、与其交互和/或对其进行控制的指令。
除了存储指令25110以外,存储器2510还可存储数据,例如道路地图、路线信息,车 辆的位置、方向、速度等车辆数据,以及其他相关信息。可理解,在一种实施例中,智能车辆200处于自主、半自主和/或手动模式时,其计算机系统250能利用所述数据进行相关操作,例如,可以根据目标路段的道路信息和接收的目标车辆速度范围,对智能车辆的当前速度进行调整,从而使智能车辆能以恒定的速度跟车行驶。
用户接口270,用于向智能车辆200的用户提供信息或从其接收信息。可选地,用户接口270可包括外围设备240中的一个或多个输入/输出设备所需要的接口,例如USB接口、AUX接口、OBD接口。
计算机系统250可基于各种子系统(例如,行进系统210、传感器系统220和控制系统230)的数据以及从用户接口270接收的数据来控制智能车辆200的功能。例如,计算机系统250可控制转向系统2310来规避由传感器系统220和障碍物避免系统2360检测到的障碍物。
可选地,上述组件不仅可以作为子系统组装在智能车辆200内部,组件中的一个或多个还可与智能车辆200分开安装。例如,存储器2510可以部分或完全地与智能车辆200分开存在。上述组件可以采取有线和/或无线方式进行耦合。
需要说明的是,上述各个模块及其中的组件有可能根据实际需要增添、替换或者删除,本申请对此不作限制。
在道路上行进的智能驾驶汽车,如图2中的智能车辆200,可以识别其周围环境调整当前速度。所述周围环境可以包括但不限于其它车辆、行人、交通控制设备、其它基础设施以及其它类型的物体。在一些实施例中,所述智能驾驶汽车可以独立地考虑每个识别的物体,并且基于物体的特性,例如速度、加速度、与本车的相对距离等,来确定自动驾驶汽车所要调整的速度。
可选地,智能车辆200或者与智能车辆200相关联的计算设备(如图2的计算机系统250、计算机视觉系统2340、数据存储装置2510)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰、等等)来预测所述识别的物体的行为。可理解,每一个所识别的物体都相关联,因此还可以通过分析周围环境中所有物体的状态来预测单个物体的行为。智能车辆200能够基于预测的所述识别的物体的行为来调整自车速度。换句话说,智能车辆200能够基于所预测的物体的行为来确定车辆需要如何调整(例如,加速、减速、或者停止)以及调整到什么稳定状态,在这个过程中,也可以考虑其它因素的影响,例如智能车辆200在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等。
除了提供调整智能车辆200的速度的指令之外,计算设备还可以提供修改智能车辆200的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持与自动驾驶汽车附近的物体(例如相邻车道的轿车)的安全横向和纵向距离。
上述智能车辆200可以为轿车、卡车、摩托车、公共汽车、船、娱乐车、游乐场车辆、施工设备、电车、火车等,本申请实施例对此不作限定。
可以理解的是,图2所示的智能车辆的结构示意图只是本申请实施例中的一种示例性的实施方式,本申请实施例中的智能车辆包括但不仅限于以上结构。
本申请提供了一种智能灯光切换系统,用于夜间行车过程中的智能灯光切换,智能灯光切换系统首先会获取前方图像,然后获取环境光信息,并对该图像中的灯源进行分类,然后结合环境光信息和灯源分类信息来判断是否切换远近光灯。智能灯光切换系统内部的单元可以有多种划分方式,本申请对此不作限制。图3为一种示例性的划分方式,如图3所示,下面将分别简各个功能单元的功能。
所示智能灯光切换系统300包括获取单元310、环境光探测单元320、灯源分类单元330和切换单元340。其中,获取单元310,用于获取图像,所述图像由设置于车辆的固定位置的摄像机拍摄得到,所述图像中记录了灯源信息;环境光探测单元320,用于计算所述图像对应的环境光亮度值;灯源分类单元330,用于根据所述灯源信息,对所述图像中所包含的灯源进行分类,获得分类结果;切换单元340,根据所述图像对应的环境光亮度值和所述分类结果,进行远光灯切换或者近光灯切换(灯光切换)。
由上述内容可知,灯源分类是本申请方案中至关重要的一步,是由灯源分类单元330实现的,具体地,灯源分类单元330中包含灯源检测模型,灯源检测模型会对所获取的图像进行检测并进行灯源分类。
可理解,灯源检测模型可以是一种AI模型,而在利用AI模型进行检测之前需要对初始AI模型进行训练,本申请利用摄像机获取到的包含灯源信息的样本图像对初始AI模型进行训练,从而使得训练后AI模型具备灯源分类的能力,可以对摄像机所获取的图像进行灯源分类。
另外,本申请中的灯源检测模型还可以确定灯源位置(检测框信息/目标在图像中的位置)以及检测灯源与自车间实际距离。
在训练过程中,需要使用特别的训练数据进行训练,从模型能力需求出发进行分析,需要使用携带标注信息的摄像机拍摄的样本图像进行训练,样本图像中记录了目标,标注信息包括目标在样本图像中的类别信息、位置信息(检测框信息/目标在图像中的位置)和距离信息(目标与自车间的距离等)。
目标的类别信息用于表示目标的类别,在本申请实施例中,所述目标指灯源,例如:车大灯、车尾灯、路灯、交通信号灯等,目标的类别信息指灯源的类别信息。
另外,进行分类时,需要借助检测框,所述检测框用于在样本图像中将目标标注出来,在本申请实施例中,所述检测框指灯框,检测框信息可以包括但不限于目标类别信息、像素坐标信息。示例性的,在本申请的一个实施例中,利用矩形检测框进行标注,检测框信息包括所标注的目标的类别信息、像素坐标信息,该类别信息包括检测框的形状、颜色等特征,该像素坐标信息由四个像素坐标组成,即检测框的左上角横坐标、左上角纵坐标、右下角横坐标、右下角纵坐标。
需要说明的是,检测框信息在本申请的一个实施例中,可以直接显示文字表示该目标的类别信息,也可以利用检测框的形状、颜色等特征来表明该目标的类别,标注信息可以以可扩展标记语言(extensible markup language,XML)或JavaScript对象简谱(JavaScript object notation,JSON)等文件进行保存。
下面对灯源检测模型进行详细介绍。
本申请中灯源检测模型的训练所需的训练数据(样本图像)是由车辆中固定位置的摄像机所获取。可以利用目标检测算法对该样本图像进行检测,从而得到样本图像中记录的灯源的类别信息和检测框信息,也可以通过人工标注的方式得到。可理解,样本图像可以包括不同时刻获取的图像以及不同曝光时间的图像。
获取多个样本图像并得到其标注信息后,多个带有标注信息的样本图像构成训练集,利用训练集中的训练样本进行训练。首先确定初始灯源检测模型,由上文可知,本申请中的初始灯源检测模型可以是一种AI模型,具体可以选用一种深度神经网络模型,该模型不仅可以对灯源的类别及其在图像中的位置进行检测,同时还可以计算灯源与自车的实际距离,这意味着本申请中的初始灯源检测模型在结构上有所改进。
如图4所示,本申请的初始灯源检测模型400的结构主要包括三部分,即骨干网络410、检测网络420和损失函数计算单元430。骨干网络410用于对输入的样本图像进行特征提取,其内部包含若干卷积层,可以选用视觉几何组网络(visual geometry group network,VGG)、残差网络(residual network)、密集卷积网络(dense convolutional network)等。检测网络420用于对骨干网络410提取的特征进行检测和识别,输出灯源类别信息、灯源位置信息(即检测框信息),其内部本质上也是由若干卷积层组成,会对骨干网络410的输出结果进行进一步的卷积计算。
如上文所述,与一般的检测模型(例如yolo、faster RCNN等)相比,本申请的灯源检测模型在结构上有所不同。因为图像中同一辆车的车灯间的距离、该车辆的真实车宽、用于获取图像的摄像机的焦距、该车辆到自车的距离,这四者存在比例关系,已知其中三个,就能根据该比例关系求得另一个的值。因此,骨干网络410可以选用同一种网络,但在检测网络420中,本申请在每个负责回归检测框的卷积层上增加了多个通道,优先增加两个通道,用于表示灯源在样本图像中的横坐标和纵坐标,通过同一车辆的灯源的横、纵坐标,可以得到图像上该车辆车灯间的距离,从而得到该车辆到自车的距离。当然,也可以增加更多个通道,每个通道赋予相应的物理含义,本申请对此不作限定。
首先,将初始灯源检测模型400的参数初始化,之后将样本图像输入至初始灯源检测模型400。骨干网络410对样本图像中记录的目标进行特征提取,得到抽象的特征,然后将抽象的特征输入至检测网络420,检测网络进行进一步的检测和识别,预测出该目标的类别、位置以及到自车的距离,并通过相应的通道进行输出至损失函数计算单元430;然后将该样本图像对应的标注信息也输入损失函数计算单元430,损失函数计算单元430将检测网络420预测得到的预测结果与该样本图像对应的标注信息进行比对,并计算出损失函数,以损失函数为目标函数使用反向传播算法更新调整模型中的参数。依次输入携带标注信息的样本图像,不断迭代执行上述训练过程,直到损失函数值收敛时,即每次计算得到的损失函数值在某一个值附近上下波动,则停止训练,此时,灯源分类模型已经训练完成,即灯源分类模型已经具备检测图像中目标的类别、位置、以及到自车的距离,可以用于灯源分类。
值得说明的是,由于本申请在每个负责回归检测框的卷积层上增加了两个通道,因此 在损失函数的构造上需要重新设计。假设本申请的目标定位模型是在经典的目标检测模型(例如yolo、faster RCNN等)上进行的改进,经典的目标检测模型的损失函数为Loss1,那么构造的本申请的目标定位模型的损失函数Loss可以表示为:Loss=Loss1+Loss2,Loss2为新增的两个通道所对应的损失函数。
如上文所述,对图像中的灯源的特征的提取主要由骨干网络410和检测网络420完成,下面结合图5具体描述如何提取特征来实现分类。如图5所示,该方法包括但不限于以下步骤:
S501:从图像中选取亮度值大于预设阈值的亮斑,并给其设置灯框。
具体地,选取亮斑中亮度大于或等于预设阈值的亮斑为第一亮斑,给其设置灯框,所述灯框的尺寸可以根据需要标定的第一亮斑的大小适应性变化。
示例性的,如图6所示,图6为给第一亮斑设置灯框的示意图,通过识别第一亮斑在图像上的位置、亮度、尺寸和颜色等要素,将位于图像下方的第一亮斑用红色方框标识,将位于图像上方、具有一定分布特征的第一亮斑用蓝色方框标识,将位于图像上方具有鲜明的颜色特征的第一亮斑用紫色方框标识,由此可以较为清楚地显示具有不同特征的亮斑在图像上的分布情况。
所述具有一定分布特征是指图像上方区域左右两侧的第一亮斑的连线在一定误差范围内会相交于图像中央的某一点。
需要说明的是,上述对灯框的设置方式只是本申请中的一种示例性方式,灯框的尺寸、形状、颜色等具体参数,均可由研发人员根据实际需要和实验数据进行设置,比如,设置灯框时,使用不同颜色或不同形状的灯框对不同特征的亮斑进行标定,本申请中对此不作限制。
S502:对所述灯框进行筛选去重。
在设置灯框后,灯框间可能出现相交、相切的情形,从而可能影响后续操作,因此,需要对灯框进行筛选去重处理。需要说明的是,对所述灯框进行筛选去重的方法有很多种,比如,在本申请的一个实施例中,对所述灯框采用IOU算法进行去重,如图7所示,亮斑A被灯框C标定,亮斑B被灯框D标定,当检测出灯框C与灯框D有重叠区域E时,生成第二灯框,该第二灯框比原始灯框更大一些,可以将两个亮斑都包含在内,图7中的灯框F即为第二灯框,可理解,所述第二灯框的尺寸可以根据原始灯框的大小来确定,在其他情况下,第二灯框的尺寸、形状、颜色等具体参数也可以预先设置。
S503:对去重后的灯框进行配对,得到第一灯框对。
在本申请的一个实施例中,图像中任意两个灯框至少需要满足三个条件,才能配对成功,第一个条件为:所述两个灯框的高度的差值的绝对值小于第一阈值;第二个条件为:所述两个灯框间的距离与所述灯框的高度存在线性关系,即所述两个灯框间的距离与所述灯框的高度的比值在比例区间内;第三个条件为:所述两个灯框中心点的连线与水平线的角度小于第二阈值。可理解,第二个条件中,所述两个灯框间的距离为灯框中心点间的距离(即所述两个灯框中心点连线的长度),灯框的高度为所述两个灯框的高度的平均值。
如图8所示,图8为灯框配对的示意图,图中灯框A和灯框B为图像中任意选取的两 个灯框,灯框A的高度为a,灯框B的高度为b,灯框中心点之间的距离为c,当|a-b|小于第一阈值,
Figure PCTCN2021114823-appb-000001
在比例区间内,并且θ小于第二阈值时,灯框A和灯框B配对成功,即可以认为灯框A中的亮斑和灯框B中的亮斑为一组灯对。
可理解,第一阈值、第二阈值以及比例区间由研发人员根据实际需求和实验数据进行设置,在本申请中不作限制。
S504:确定远处车灯与自车的距离。
由上文可知,中央区域为远处车辆的车灯可能出现的区域,选取中央区域中的第一灯框对,所述第一灯框对中第一亮斑的中心点间的距离与真实车宽的比值为第一比值,用于获取图像的摄像机的焦距与所述第一灯框对所属车辆到自车的距离的比值为第二比值,可理解,第一比值与第二比值在一定误差范围内相等,在本申请的一个实施例中,可认为二者近似相等,因此,已知所述第一灯框对中第一亮斑中心点间的距离、真实车宽以及摄像机的焦距,可得所述第一灯框对所属车辆到自车的距离,或者,引入一个数值作为误差值,在已知第一灯框对中第一亮斑中心点间的距离、真实车宽的情况下,可计算出第一比值,将第一比值加上或减去所述误差值,得到第二比值,而已知摄像机的焦距,可得所述第一灯框对所属车辆到自车的距离。
需要说明的是,上述方法仅为本申请中的一种示例性的方法,还可以有不同的方法来确定远处车灯与自车的距离,本申请对此不作限制。
可理解,所述中央区域的形状、尺寸等具体参数可由研发人员根据实验数据预先设置,本申请中对此不作限制。
另外,在本申请的其他实施例中,为了获取更精准的灯源分类结果,可以根据上述方法计算所获取图像中所有车辆与自车的距离,由此,不仅可以得到灯源信息,还可以得到灯源的距离信息,结合远光灯和近光灯的照射距离,能够更加准确地进行切换。
S505:确定消失点水平线(VP Line)。
在本申请的一个实施例中,将所获取的图像分为N行(N为正整数),确定第一灯框对中两个第一亮斑的中心点所在的行,统计所获取图像中每行的第一灯框对的数量,绘制以行数为横坐标,第一灯框对的数量为纵坐标的行数分布图,所述行数分布图可以是平滑散点图、折线散点图、直方图等形式,所述行数分布图为平滑散点图时,选取所述行数分布图的所有波谷中横坐标最大的波谷,将该处对应的行数值作为第一行数值,或者,选取所述行数分布图的所有波峰中横坐标最大的波峰左半侧最低区间中央位置,将该处对应的行数值作为第二行数值。
如图9所示,图9为绘制行数分布图的示意图,将所获取的图像分为7行,若图像宽度为280px(像素),则每行的高度为40px,确定各个第一灯框对中的第一亮斑的中心点所在行,统计所获取图像中每行的第一灯框对的数量,图9中,第一行、第二行和第四行灯对数为1,第三行灯对数为5,第五行灯对数为3,第六行和第七行灯对数为0,绘制以行数为横坐标,第一灯框对的数量为纵坐标的行数分布图,该行数分布图是直方图形式。
如图10所示,图10为确定第一行数值的示意图,其中,黑点标记处为行数分布图中横坐标最大的波谷,即该黑点标记处对应的行数为第一行数值。可理解,第二行数值的确 定方式与上述第一行数值的确定方式相同,在此不再赘述。
需要说明的是,若第一灯框对中两个第一亮斑的中心点所在的行不同,在后续统计图像每行的第一灯框对的数量时,所述两个第一亮斑的中心点所在的行各记0.5。可理解,上述过程还有不同的统计方法,比如,给所述两个第一亮斑的中心点所在的行各记1,或者任意选择所述两个第一亮斑的中心点所在的行中的一行记1,本申请中对此不作限制。
在本申请的其他实施例中,还有不同的绘制行数分布图的方法,可以首先确定图像中被灯框所标定的第一亮斑的中心点所在的行,统计图像中每行的第一亮斑的数量,绘制以行数为横坐标,第一亮斑的数量为纵坐标的行数分布图。
当所述行数分布图中无波峰和/或无波谷时,选取默认消失点水平线作为第一消失点水平线(VP Line I)。当所述第一行数值与所述默认消失点水平线的差值的绝对值大于第三阈值时,或者,当所述第二行数值与所述默认消失点水平线的差值的绝对值大于第三阈值时,选取默认消失点水平线作为第一消失点水平线(VP Line I)。由于车灯一般在车体的中间位置,所以根据预设的修正值对VP Line I进行修正,将所述消失点水平线加上修正值,得到第二消失点水平线(VP Line II)。另外,考虑自车在刹车、加速以及上下坡等过程中会导致摄像机俯仰角的变化,从而造成图像中灯源位置发生较大变化,因此引入参考值,将VP Line I与所述参考值的差值的绝对值乘以阻尼系数α,再加上参考值,得到第三消失点水平线(VP Line III),此时所得的VP Line III即为可用于对所述亮斑进行分类的VP Line。
可理解,修正值为较小的正整数,其具体数值由研发人员根据实验数据和实际需求进行设置,在本申请中不作限制。
可理解,第三阈值由研发人员根据实际需求和实验数据进行设置,在本申请中不作限制。
需要说明的是,在本申请的其他实施例中,可以根据刹车、加速、上坡、下坡等具体行车环境,来自适应调整参考值和阻尼系数α的大小。
另外,上述引入参考值和阻尼系数α的方法仅为本申请中的一种示例性方法,在具体实现过程中,可根据实际需求对该方法做出调整,或者变换方法,以获取更好的有益效果,本申请对此不作限制。
S506:利用VP Line III分割图像区域。
在得到VP Line III后,以VP Line III这个行数值所对应的所获取的图像中的行作为分界线,将所获取的图像分为第一区域和第二区域,所述第一区域为所述分界线以上的图像区域,所述第二区域为所述分界线以下的图像区域。
S507:排除路面反射光的干扰。
因为第一区域的灯源与路面之间的距离较大,所以产生的反射光较暗,极大可能不满足“亮度大于或等于预设阈值”这一条件,而第二区域的灯源与路面之间的距离较小,极大可能满足“亮度大于或等于预设阈值”这一条件,因此,主要针对第二区域的第一亮斑进行路面反射光的排除。
选取第二区域内的第一亮斑,对该第一亮斑分别进行位置分析,检测该第一亮斑垂直下方、左右偏移小于或等于预设距离的区域内是否存在其他第一亮斑,若所述区域内存在其他第一亮斑,则筛除所述其他第一亮斑,得到第二亮斑和第二灯框对。可理解,进行筛 除操作后剩余的第一灯框对为第二灯框对,另外,预设距离由研发人员根据实际需求和实验数据进行设置,本申请对此不作限制。
如图11所示,图11为排除路面反射光干扰的示意图,图中对亮斑A进行位置分析,其中,左右偏移的预设距离为M点到O点的距离(或者N点到O点的距离),可理解,M点到O点的距离与N点到O点的距离相等。有两种方案来排除路面反射光的干扰,第一种方案是检测区域X内是否存在其他第一亮斑,第二种方案是检测区域X和区域Y组成的扇形区域内是否存在其他第一亮斑,该扇形区域是以P点为圆心,以P点到M点的距离为半径所画的区域,该扇形区域的角度为(α+β),可理解,α=β,若采取第一种方案,亮斑D在区域X内,所以筛除亮斑D,若采取第二种方案,亮斑E在该扇形区域内,则筛除亮斑E。
S508:确定大灯对、尾灯对。
对第二区域内第二灯框对中的第二亮斑进行亮度检测,并对第二区域内第二灯框对中的第二亮斑的光晕进行色彩分析;当所述第二亮斑的亮度在第一区间内且所述光晕的颜色在第二区间内时,所述第二灯框对为大灯对;当所述第二亮斑的亮度在第三区间内且所述光晕的颜色在第四区间内时,所述第二灯框对为尾灯对。
需要说明的是,若将获取的RGB颜色空间下的图像转化成YUV颜色空间下的图像,那么进行色彩分析时,主要考虑V分量(红色分量)或者U分量(蓝色分量)。因此,对所述第二亮斑进行亮度检测时,获取所述第二亮斑每个像素的Y分量的值(亮度值),可得到所述第二亮斑的平均亮度值;对所述第二亮斑的进行色彩分析时,获取所述第二亮斑每个像素的V分量的值,可得到所述第二亮斑的V分量的平均值,当所述平均亮度值在第三区间且所述V分量的平均值在第四区间时,所述第二灯框对为尾灯对。当然,也可计算U分量的平均值来进行色彩区分,本申请对此不作限制。
可理解,对不同颜色空间下的图像而言,进行亮度检测、色彩分析采取方法可能不同,上述内容仅为本申请中一种示例性方法,不视为对本申请的限制。
需要说明的是,当所述第二灯框对中的两个亮斑均满足上述条件时,才能确定所述第二灯框对为大灯对或者尾灯对。在本申请的一个实施例中,引入误差值,当所述第二灯框对中的两个亮斑的亮度信息和颜色信息在误差范围内满足上述条件,也可认为所述第二灯框对为大灯对或者尾灯对,可理解,误差值的具体值可由研发人员根据实际需求和实验数据进行设置,本申请中对此不作限制。
可理解,第一区间、第二区间、第三区间和第四区间由研发人员根据实际需求和实验数据进行设置,本申请对此不作限制。
对所述第一区域内的第二亮斑进行亮度检测,并对所述第一区域内的第二亮斑的光晕进行色彩分析,当所述第二亮斑的亮度在第五区间内且所述光晕的颜色在第六区间内时,所述第二亮斑为路灯。
S509:确定路灯。
确定路灯的方法与确定大灯对和尾灯对的方法相同,在此不再详细说明,参考上述确定大灯对和尾灯对的相关内容即可。
需要说明的是,第五区间、第六区间由研发人员根据实际需求和实验数据进行设置, 本申请对此不作限制。
S510:确定交通信号灯。
选取所述第一区域内亮度大于或等于第四阈值的亮斑为第三亮斑,对所述第一区域内的第三亮斑进行亮度检测,并对所述第一区域内的第三亮斑的光晕进行色彩分析。当所述第三亮斑的亮度在第七区间内且所述光晕的颜色在第八区间内时,所述第三亮斑为交通信号灯中的红灯;当所述第三亮斑的亮度在第九区间内且所述光晕的颜色在第十区间内时,所述第三亮斑为交通信号灯中的绿灯;当所述第三亮斑的亮度在第十一区间内且所述光晕的颜色在第十二区间内时,所述第三亮斑为交通信号灯中的黄灯。
确定交通信号灯的方法与确定大灯对和尾灯对的方法相同,在此不再详细说明,参考上述确定大灯对和尾灯对的相关内容即可。
可理解,第七区间、第八区间、第九区间、第十区间、第十一区间和第十二区间由研发人员根据实际需求和实验数据进行设置,本申请对此不作限制。
可理解,第四阈值由研发人员根据实际需求和实验数据进行设置,在本申请中不作限制。
S511:对误分类的情形进行纠正。
在灯源分类过程中,有可能出现误分类的情况,比如,公交车、卡车等大型车辆前后方上下两侧各有两个灯,因此,可能出现两个灯在第一区域、两个灯在第二区域的情况,需要对此进行纠正。
对第二区域内的第二灯框对的进行垂直分析,找到位于所述第二灯框对垂直上方,且与所述第二灯框对的距离在第十三区间内的灯框对;当所述灯框对的亮度在第十四区间内且所述光晕的颜色在第十五区间内时,所述灯框对为大型车辆的大灯;当所述灯框对的亮度在第十六区间内且所述光晕的颜色在第十七区间内时,所述灯框对为大型车辆的尾灯。
可理解,第十三区间、第十四区间、第十五区间、第十六区间和第十七区间由研发人员根据实际需求和实验数据进行设置,本申请对此不作限制。
上述进行亮度检测和色彩分析过程与确定大灯对和尾灯对时采取的方法相同,在此不再详细说明,参考上述确定大灯对和尾灯对的相关内容即可。
需要说明的是,S504(确定远处车灯与自车的距离)可以放在S507(排除路面反射光的干扰)之后进行,也可以放在S508(确定大灯对和尾灯对)或S509(确定路灯)或S510(确定交通信号灯)或S511(对误分类的情形进行纠正)之后进行。
另外,在本申请的一个实施例中,对于所获取的原始图像进行如下操作:选取该图像的中央区域作为ROI区域;对该图像进行压缩。其中,所述中央区域为远处的车辆的大灯可能出现的区域,所述中央区域的形状、尺寸等具体参数可以由研发人员根据实验数据预先设置。对该图像进行压缩后再进行后续操作(计算亮度加权值等)可以降低计算量,但是该图像中部分细节信息会丢失。因此,采取压缩后的图像和所选取的原始图像的中央区域图像相结合的方式来进行灯源分类,具体地,将压缩后的图像和所选取的原始图像的中央区域图像进行比对,所述中央区域图像内的灯源根据该中央区域图像进行分类,其他区 域的灯源根据压缩后的图像进行分类。
上述内容介绍了灯源分类模型及其训练过程,下面将具体描述本申请提供的智能灯光切换方法的过程,如图12所述,该方法包括但不限于以下步骤:
S1210:获取图像。
具体地,智能灯光切换系统通过摄像机模组实时获取自车的前方图像。所述图像中包括但不限于前方车辆的灯光信息、道路两侧的路灯信息、交通信号灯信息等灯源信息,以及环境光信息。
可理解,所述摄像机模组可以根据预先设定的时间实时截取图像,比如每1/20s截取一次图像。
在本申请的一个实施例中,智能灯光切换系统通过摄像机模组实时获取的前方图像包括3帧图像,这3帧图像分别是短曝光图像、中曝光图像、长曝光图像,其中短曝光图像的曝光时间最短,长曝光图像的曝光时间最长,中曝光图像的曝光时间介于二者之间。
可理解,曝光时间可以有多种设定方式,比如设定短曝光的曝光时间为10ms,中曝光的曝光时间为20ms,长曝光的曝光时间为30ms,或者,设定短曝光的曝光时间为5ms,中曝光的曝光时间为10ms,长曝光的曝光时间为20ms。因此,上述短曝光、中曝光和长曝光只是相对而言,具体的曝光时间由研发人员根据实际需要和实验数据进行设置,在此不作限制。
需要说明的是,不同曝光时间下灯源呈现的图像特征不同,采取3帧图像来进行分析可以帮助获取到更多的灯源信息,在本申请的其他实施例中,还可以获取更多或更少的图像来进行后续分析。
在本申请的一个实施例中,由所述智能灯光切换系统所获取的前方图像为RGB颜色空间下的图像,需要将该图像转换成YUV颜色空间下的图像。
需要说明的是,上述由RGB颜色空间到YUV颜色空间的转换只是本申请的一种示例性方式,在实际情况下,还可以有其他的转换方式,比如转换成HSI颜色空间。
S1220:计算所述图像对应的环境光亮度值。
具体地,智能灯光切换系统对所获取的图像选取至少一个ROI区域,获取所选取的区域每个像素的亮度值,计算平均亮度值,然后根据设置的权重,计算亮度加权值,该亮度加权值即为环境光亮度值。可理解,所选取的ROI区域的尺寸、形状等具体参数由研发人员根据图像、实际需要以及实验数据预先设置,在本申请中不作限制。可理解,权重由研发人员根据实验数据进行设置,在本申请中不作限制。
在本申请的一个实施例中,可以选取中央区域和整幅图像作为ROI区域,所述中央区域为远处的车辆的大灯可能出现的区域,如上文所述,所述中央区域的形状、尺寸等具体参数可以由研发人员根据实验数据预先设置。
在本申请的一个实施例中,可以选取中央区域、上方区域和整幅图像作为ROI区域,所述中央区域为远处车辆的车灯可能出现的区域,所述上方区域为路灯可能出现的区域,如上文所述,所述中央区域和所述上方区域的形状、尺寸等具体参数可以由研发人员根据实验数据预先设置。
需要说明的是,对YUV颜色空间下的图像来说,Y分量表示亮度信息,获取该图像中像素的Y分量的值,即可认为获取了该像素的亮度值。
示例性的,选取中央区域和整幅图像作为ROI区域,现已计算出中央区域的平均亮度值为200,整幅图像的平均亮度值为120,若设置的中央区域和整幅图像的权重为3:7,计算亮度加权值:200*0.3+120*0.7=144,所以计算所得的亮度加权值即为144。
S1230:对所述图像中所包含的灯源进行分类。
具体地,将所述图像输入灯源检测模型,根据所述灯源检测模型获得所述图像中的灯源类别。
需要说明的是,计算环境光亮度和灯源分类并无确定的先后关系,在实际应用中,可以先计算环境光亮度,也可以先进行灯源分类,还可以二者同时进行。
S1240:根据所述图像对应的环境光亮度值和所述分类结果判断是否进行远近光灯切换。
具体地,若所述图像对应的环境光亮度值大于或等于第六阈值,灯源分类结果显示远光灯照射距离内确实有车灯,并不是受到了干扰光源的影响,此时判断切换至近光灯;若所述图像对应的环境光亮度值大于或等于第六阈值,灯源分类结果显示远光灯照射距离内无车灯,可认为环境光亮度值受到了干扰光源的影响,此时判断切换至远光灯;若所述图像对应的环境光亮度值小于第六阈值,灯源分类结果显示远光灯照射距离内无车灯,此时判断切换至远光灯;若所述图像对应的环境光亮度值小于第六阈值,灯源分类结果显示远光灯照射距离内有车灯,此时判断切换至近光灯。
可理解,远光灯照射距离由研发人员根据实际需求和车灯的参数进行设置,本申请中对此不作限制。另外,由于摄像机会实时获取图像,智能灯光切换系统也会实时进行环境光亮度计算和灯源分类,因此可以实时获取到灯源信息及其距离信息,在本申请的一个实施例中,可以设置延时切换模式,当灯源分类结果表明灯源与自车的距离与自车远光灯照射距离的差值在预设区间内时,判断延时切换至远光灯,可理解,所述预设区间和延长的时间可由研发人员根据实际需求进行设置,本申请中对此不作限制。
需要说明的是,智能灯光切换系统可以人为启动,也可以自行启动。其中,人为启动方式适用于上述S1240中的切换方法,当驾驶员判断需要开灯时,触发该智能灯光切换系统;若采取自行启动的方式,可以先计算环境光亮度,当其大于或等于第七阈值时,可认为自车此时所处环境无需开灯,那么之后无需再进行灯源分类,否则进行灯源分类并判断是否进行远光灯切换或近光灯切换,这样可以减少能耗,或者,可以另外设置一个计算环境光亮度的模块,当该模块得到的环境光亮度大于或等于第七阈值时,可认为无需启动智能灯光切换系统,否则触发智能灯光切换系统,此时智能灯光切换系统可按照S1210-S1240所述方法进行远近光灯切换。可理解,第七阈值由研发人员根据实际需求和实验数据进行设置,本申请中对此不作限制。
The foregoing describes the methods of the embodiments of this application in detail. To facilitate better implementation of the foregoing solutions, related devices for cooperative implementation are provided below.
As shown in FIG. 3, FIG. 3 is a schematic structural diagram of an intelligent light switching system provided in this application; the system is configured to perform the intelligent light switching method described in FIG. 12. This application does not limit the division of functional units in the intelligent light switching system; units may be added, removed, or merged as needed. The operations and/or functions of the units in the intelligent light switching system implement the corresponding procedures of the method described in FIG. 12; for brevity, the details are not repeated here. FIG. 3 provides an exemplary division of functional units:
The intelligent light switching system 300 includes an acquisition unit 310, an ambient light detection unit 320, a light source classification unit 330, and a switching unit 340.
Specifically, the acquisition unit 310 is configured to perform the foregoing step S1210 and, optionally, the optional methods in the foregoing steps.
The ambient light detection unit 320 is configured to perform the foregoing step S1220 and, optionally, the optional methods in the foregoing steps.
The light source classification unit 330 is configured to perform the foregoing steps S501-S511 and S1230 and, optionally, the optional methods in the foregoing steps.
The switching unit 340 is configured to perform the foregoing step S1240 and, optionally, the optional methods in the foregoing steps.
The four units can transmit data to one another through communication channels. It should be understood that the units included in the intelligent light switching system 300 may be software units, hardware units, or partly software and partly hardware.
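A minimal skeleton of the FIG. 3 division, purely illustrative: the injected `camera` and `model` dependencies are assumptions, and the unimplemented bodies defer to the step sketches above.

```python
class IntelligentLightSwitchingSystem:
    """Sketch of system 300 with the four units of FIG. 3 as methods."""
    def __init__(self, camera, model):
        self.camera, self.model = camera, model  # assumed injected dependencies

    def acquire(self):                  # acquisition unit 310 -> S1210
        return self.camera.capture()

    def ambient(self, image):           # ambient light detection unit 320 -> S1220
        raise NotImplementedError       # see the S1220 sketch above

    def classify(self, image):          # light source classification unit 330 -> S1230
        return self.model(image)

    def switch(self, ambient, result):  # switching unit 340 -> S1240
        raise NotImplementedError       # see the S1240 sketch above
```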
Referring to FIG. 13, FIG. 13 is a schematic structural diagram of a computing device provided in an embodiment of this application. As shown in FIG. 13, the computing device 1300 includes a processor 1310, a communication interface 1320, and a memory 1330, which are interconnected through an internal bus 1340.
The computing device 1300 may be the intelligent light switching system 300 in FIG. 3; the functions performed by the intelligent light switching system 300 in FIG. 3 are actually performed by its processor 1310.
The processor 1310 may consist of one or more general-purpose processors, for example a central processing unit (CPU), or a combination of a CPU and hardware chips. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The communication interface 1320 is configured to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a core network, or a wireless local area network (WLAN).
The bus 1340 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 1340 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bold line is shown in FIG. 13, but this does not mean there is only one bus or one type of bus.
The memory 1330 may include volatile memory such as random access memory (RAM); it may also include non-volatile memory such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); it may also include a combination of the above. The memory 1330 is configured to store the program code for executing the foregoing intelligent light switching method embodiments. In an implementation, the memory 1330 may also cache other data, with execution controlled by the processor 1310, so as to implement the functional units of the intelligent light switching system 300, or to implement the method steps performed by the intelligent light switching system 300 in the method embodiment shown in FIG. 12.
An embodiment of this application further provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, some or all of the steps recorded in any of the foregoing method embodiments can be implemented, as well as the functions of any functional unit described in FIG. 3.
An embodiment of this application further provides a computer program product. When the computer program product runs on a computer or processor, the computer or processor is caused to perform one or more of the method steps performed by the intelligent light switching system 300 in any of the foregoing methods. If the component modules of the above devices are implemented in the form of software functional units and sold or used as independent products, they may be stored in the computer-readable storage medium.
In the foregoing embodiments, the descriptions of the embodiments have respective focuses; for parts not detailed in one embodiment, refer to the related descriptions of other embodiments.
It should be understood that "first", "second", "third", "fourth", and the various numeric labels herein are merely distinctions made for convenience of description and are not intended to limit the scope of this application.
It should be understood that the term "and/or" herein describes only an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should also be understood that, in the various embodiments of this application, the sequence numbers of the foregoing processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The steps in the methods of the embodiments of this application may be reordered, combined, or deleted according to actual needs.
The modules in the apparatuses of the embodiments of this application may be combined, divided, or deleted according to actual needs.
The foregoing embodiments are merely intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of this application.

Claims (17)

  1. An intelligent light switching method, wherein the method comprises:
    acquiring an image, wherein the image is captured by a camera disposed at a fixed position on a vehicle, and light source information is recorded in the image;
    calculating an ambient light brightness value corresponding to the image;
    classifying, based on the light source information, light sources contained in the image to obtain a classification result; and
    performing light switching based on the ambient light brightness value corresponding to the image and the classification result.
  2. The method according to claim 1, wherein the calculating an ambient light brightness value corresponding to the image comprises:
    selecting at least one region in the image, and calculating a brightness value of the at least one region; and
    calculating the ambient light brightness value corresponding to the image based on the brightness value of the at least one region.
  3. The method according to claim 1 or 2, wherein the classifying light sources contained in the image to obtain a classification result comprises:
    inputting the image into a light source detection model, and obtaining light source categories in the image based on the light source detection model.
  4. The method according to claim 3, wherein before the inputting the image into a light source detection model, the method further comprises:
    training the light source detection model with a plurality of sample images, wherein the sample images comprise the light sources and annotation information of the light sources.
  5. The method according to claim 3 or 4, wherein the obtaining light source categories in the image based on the light source detection model comprises:
    selecting, from the image, bright spots whose brightness values are greater than a preset threshold, and setting light boxes for the bright spots;
    pairing the light boxes to obtain a plurality of light-box pairs;
    determining a vanishing point horizontal line (VP Line) based on the plurality of light-box pairs, wherein the VP Line is used to distinguish a first region and a second region of the image; and
    classifying the bright spots based on positional relationships between the bright spots and the VP Line and color features of the bright spots, to obtain different types of light source categories.
  6. The method according to claim 5, wherein before the pairing the light boxes, the method further comprises:
    deduplicating the light boxes so that the light boxes do not overlap or become tangent to one another.
  7. The method according to claim 5 or 6, wherein the determining a vanishing point horizontal line (VP Line) based on the plurality of light-box pairs comprises:
    mapping the rows, in the image, of the light-box centers of the light-box pairs to obtain a row distribution histogram of the light-box pairs;
    selecting the VP Line based on the row distribution histogram;
    correcting the VP Line based on a preset correction value to obtain a corrected VP Line; and
    adjusting the corrected VP Line with a reference value to obtain a VP Line used for classifying the bright spots, wherein the reference value describes a change in the pitch angle of the camera.
  8. An intelligent light switching system, comprising:
    an acquisition unit, configured to acquire an image, wherein the image is captured by a camera disposed at a fixed position on a vehicle, and light source information is recorded in the image;
    an ambient light detection unit, configured to calculate an ambient light brightness value corresponding to the image;
    a light source classification unit, configured to classify, based on the light source information, light sources contained in the image to obtain a classification result; and
    a switching unit, configured to perform light switching based on the ambient light brightness value corresponding to the image and the classification result.
  9. The system according to claim 8, wherein the ambient light detection unit is specifically configured to:
    select at least one region in the image, and calculate a brightness value of the at least one region; and
    calculate the ambient light brightness value corresponding to the image based on the brightness value of the at least one region.
  10. The system according to claim 8 or 9, wherein, when classifying the light sources contained in the image to obtain a classification result, the light source classification unit is specifically configured to:
    input the image into a light source detection model, and obtain light source categories in the image based on the light source detection model.
  11. The system according to claim 10, wherein, before inputting the image into the light source detection model, the light source classification unit is further configured to:
    train the light source detection model with a plurality of sample images, wherein the sample images comprise the light sources and labels of the light sources.
  12. The system according to claim 10 or 11, wherein, when obtaining the light source categories in the image based on the light source detection model, the light source classification unit is specifically configured to:
    select, from the image, bright spots whose brightness values are greater than a preset threshold, and set light boxes for the bright spots;
    pair the light boxes to obtain a plurality of light-box pairs;
    determine a vanishing point horizontal line (VP Line) based on the plurality of light-box pairs, wherein the VP Line is used to distinguish a first region and a second region of the image; and
    classify the bright spots based on positional relationships between the bright spots and the VP Line and color features of the bright spots, to obtain different types of light source categories.
  13. The system according to claim 12, wherein, before pairing the light boxes, the light source classification unit is further configured to:
    deduplicate the light boxes so that the light boxes do not overlap or become tangent to one another.
  14. The system according to claim 12 or 13, wherein, when determining the vanishing point horizontal line (VP Line) based on the plurality of light-box pairs, the light source classification unit is specifically configured to:
    map the rows, in the image, of the light-box centers of the light-box pairs to obtain a row distribution histogram of the light-box pairs;
    select the VP Line based on the row distribution histogram;
    correct the VP Line based on a preset correction value to obtain a corrected VP Line; and
    adjust the corrected VP Line with a reference value to obtain a VP Line used for classifying the bright spots, wherein the reference value describes a change in the pitch angle of the camera.
  15. A computing device, comprising a memory and a processor, wherein the processor executes computer instructions stored in the memory, causing the computing device to perform the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is implemented.
  17. A vehicle, comprising the intelligent light switching system according to any one of claims 8 to 14.
PCT/CN2021/114823 2020-10-31 2021-08-26 Intelligent light switching method and system, and related device WO2022088901A1 (zh)
