CN111310738A - High beam vehicle snapshot method based on deep learning - Google Patents


Info

Publication number
CN111310738A
CN111310738A (Application CN202010240833.0A)
Authority
CN
China
Prior art keywords
high beam
car
vehicle
detection
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010240833.0A
Other languages
Chinese (zh)
Inventor
刘建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Xunji Technology Co ltd
Original Assignee
Qingdao Xunji Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Xunji Technology Co ltd filed Critical Qingdao Xunji Technology Co ltd
Priority to CN202010240833.0A priority Critical patent/CN111310738A/en
Publication of CN111310738A publication Critical patent/CN111310738A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)

Abstract

The invention discloses a high beam vehicle snapshot method based on deep learning. The technical scheme requires no hardware detection device, which saves cost to a certain extent, and its operation is simple and convenient: workers can complete high beam detection and snapshot just by processing the video stream of a traffic camera. Detecting high beams with a deep-learning-based target detection technique improves the detection rate and gives good robustness, so the method adapts to various night lighting conditions and road environments, meets the operating requirements of different settings, and widens the range of application. Judgment of car-following and car-meeting scenes is introduced, which supplements and perfects the shortcomings of existing high beam snapshot techniques and greatly improves the sufficiency and reasonableness of law enforcement evidence.

Description

High beam vehicle snapshot method based on deep learning
Technical Field
The invention relates to the technical field of road traffic safety, in particular to a high beam vehicle snapshot method based on deep learning.
Background
With the rapid development of road traffic construction in China, road traffic safety problems have become increasingly prominent. In particular, motor vehicle drivers failing to use high beams as regulated when driving at night has become a leading cause of nighttime traffic accidents. Traditional high beam detection technology generally relies on a hardware device; although such a device detects quickly and captures at a high rate, it still has limitations, reflected in the difficulty of installing the hardware detection device, its poor environmental adaptability, and so on. In recent years, with the continued development of deep learning, many deep-learning-based target detection algorithms have been proposed. Applying them to the high beam identification task can effectively solve the difficult installation and high false alarm rate of traditional high beam detection methods, and can further judge whether a target vehicle illegally uses its high beam in meeting and following scenes.
At present, no research at home or abroad addresses principles and methods for distinguishing the meeting and following scenes of a target vehicle (a vehicle with its high beam on). Existing research on high beam snapshot principles and methods mainly comprises the following:
Methods based on traditional nighttime vehicle lamp detection simply use threshold segmentation to obtain the light regions in a video image; they have large errors and poor applicability.
the method has the advantages that different illumination intensities of the high beam and the low beam are used as judgment bases, but the method has high requirements on the illumination intensities of the automobile lamps and the environment, firstly, the luminous environment of the street lamp at night is complex, and the influence of noises such as reflected light of the street lamp, the ground, rain and snow and the like can cause interference on the intensity of the automobile lamps; secondly, the intensity of the light emitted by different vehicle lamp types is different;
the method has the advantages that different direct angles of the high-beam and the low-beam lamps are used as judgment bases, but the method has higher requirements on the installation angle of the detection device and the irradiation angle of the vehicle lamps, and firstly, the direct angles of the high-beam and the low-beam lamps of different vehicles are different; secondly, the detection device needs to be installed to an ideal position which can only receive the incident light of the high beam, and the difficulty is high.
Because their detection mode is not visual, existing hardware-based high beam detection methods are difficult to install and debug; in particular, snapshot devices that use the different direct-projection angles of high and low beams as the judgment basis are easily affected by road flatness and adapt poorly.
Zhejiang's harmony provides a high beam identification method that uses a high-exposure camera to acquire the light regions in a video image and then classifies them according to the morphological characteristics of the high beams of specific vehicle types.
Patent CN106934378A provides a video-deep-learning-based automobile high beam identification system and method, but its deep learning module is based on a CNN + LSE method implemented with a simple LeNet-5 network structure, and its results on complex problems are not ideal; moreover, the method only classifies and outputs video key frames and does not detect or mark the target vehicle.
In addition, existing high beam snapshot methods only judge whether a vehicle has its high beam switched on; they do not further judge whether the target vehicle (the vehicle with its high beam on) has it on in a following or meeting scene.
Disclosure of Invention
The invention aims to provide a high beam vehicle snapshot method based on deep learning to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme:
the high beam vehicle snapshot method based on deep learning comprises a camera, a video processing module, an image processing module, a high beam identification module and a target tracking module, realizes high beam identification snapshot of a vehicle, and integrally comprises the following six steps:
the method comprises the following steps: the camera shoots a vehicle, and the video processing module acquires a video image from the camera;
step two: the image processing module intercepts a high beam detection area in a video image;
step three: the high beam identification module detects vehicles with high beams in the high beam detection area;
step four: the camera shoots the vehicle to record vehicle information and records a target vehicle with problems, and the target tracking module tracks the target vehicle;
step five: in the tracking process of the target tracking module, an image processing module intercepts images of car following and car meeting detection areas of a target vehicle;
step six: the image processing module intercepts corresponding car following and car meeting detection areas according to the current car lamp area position and transmits the car following and car meeting detection areas to the car following and car meeting detection modules, and the car following and car meeting detection modules detect headlamps and tail lamps of the car at night.
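The six steps above can be read as a simple processing pipeline. The sketch below is purely illustrative: every function name (`grab_frame`, `crop_detection_region`, `detect_high_beams`) and the stub detector are hypothetical placeholders standing in for the modules named in the text, not part of the patent's disclosure.

```python
def grab_frame(camera):
    """Step 1: the video processing module pulls a frame from the camera.
    Here 'camera' is any callable that yields a frame."""
    return camera()

def crop_detection_region(frame, region):
    """Step 2: cut the fixed high beam detection region out of the frame.
    The frame is modeled as a list of pixel rows, region = (x, y, w, h)."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]

def detect_high_beams(roi):
    """Step 3: placeholder for the deep-learning high beam detector.
    A real implementation would run the improved-YOLOv3 model here."""
    return [{"box": (0, 0, 4, 2), "conf": 0.9}]

def run_pipeline(camera, region):
    frame = grab_frame(camera)
    roi = crop_detection_region(frame, region)
    detections = detect_high_beams(roi)
    # Steps 4-6 (tracking, follow/meet region cropping and lamp
    # detection) would consume 'detections' in the same fashion.
    return detections

fake_frame = [[0] * 8 for _ in range(6)]          # tiny stand-in "image"
result = run_pipeline(lambda: fake_frame, (0, 0, 8, 3))
```

This only fixes the data flow between modules; the actual detection, tracking and scene-judgment logic is elaborated in the embodiments below.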
Further, the image processing module receives video frames in real time, crops a fixed-size rectangular region from the picture, and passes it to the high beam identification module.
Further, the high beam identification module adopts a high beam detection model based on deep learning.
Further, the target tracking module takes the target high beam region as the initial tracking region, calculates in real time during tracking whether the position of the current lamp region and the lamp's brightness and morphological characteristics change, and finally passes the lamp region to the image processing module.
Further, when working, the high beam identification module and the car-following and car-meeting detection module call a trained multi-target detection model based on improved YOLOv3 to detect the lamps in the detection region.
Further, the improved YOLOv3 network is composed of 5 residual modules, each of which is formed by stacking a plurality of residual components.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
(1) No hardware detection device is needed, which saves cost to a certain extent; operation is simple and convenient, and workers can complete high beam detection and snapshot just by processing the video stream of a traffic camera;
(2) detecting high beams with deep-learning-based target detection improves the detection rate and gives good robustness, so the method adapts to various night lighting conditions and road environments, meets the operating requirements of different settings, and widens the range of application;
(3) judgment of car-following and car-meeting scenes is introduced, which supplements and perfects the shortcomings of existing high beam snapshot techniques, greatly improves the sufficiency and reasonableness of law enforcement evidence, and makes it more convenient for the traffic department to regulate vehicles.
Drawings
FIG. 1 is a schematic diagram of the detailed working process of the present invention;
FIG. 2 is a schematic diagram of the high beam detection region and the car-following and car-meeting regions in the high beam identification module and the car-following and car-meeting detection module of the present invention;
FIG. 3 is a detailed working flow chart of the car-following and car-meeting detection module of the present invention;
FIG. 4 is a schematic diagram of the network structure of the high beam identification module and the car-following and car-meeting detection module of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner", "outer", "front", "rear", "both ends", "one end", "the other end", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, terms such as "mounted," "disposed," and "connected" are to be construed broadly: "connected," for example, may mean fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected, or indirectly connected through an intermediate medium, or communicating between the interiors of two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
Example 1
Referring to fig. 1-1, the present invention provides an embodiment: the high beam vehicle snapshot method based on deep learning comprises a camera, a video processing module, an image processing module, a high beam identification module and a target tracking module, realizes high beam identification and snapshot of a vehicle, and comprises the following six steps:
Step one: the camera shoots a vehicle, and the video processing module acquires a video image from the camera; the camera's video stream is obtained in real time by calling the camera SDK;
Step two: the image processing module crops the high beam detection region from the video image;
Step three: the high beam identification module detects vehicles with their high beams on in the high beam detection region. When working, it calls a trained multi-target (high beam, headlamp, tail lamp) detection model based on improved YOLOv3 to detect the region, obtaining bounding boxes and confidences for vehicle high beams; non-high-beam regions can be further filtered out by setting a confidence threshold, and the high beam bounding boxes are finally passed to the target tracking module;
Step four: the camera shoots the vehicle to record vehicle information, the offending target vehicle is logged, and the target tracking module tracks the target vehicle's lamp region using a repeat-tracking algorithm (such algorithms can be found in professional books or papers in the field);
Step five: while the target tracking module is tracking, the image processing module crops images of the target vehicle's car-following and car-meeting detection regions. To judge whether the target vehicle is following another vehicle, it is only necessary to judge whether a vehicle is running in front of it; to judge whether the target vehicle is meeting oncoming traffic, it is only necessary to judge whether a vehicle is running in the opposite lane. Introducing the judgment of following and meeting scenes puts the law enforcement process on a sound footing;
Step six: the image processing module crops the corresponding car-following and car-meeting detection regions according to the current lamp region position and passes them to the car-following and car-meeting detection module, which detects vehicle headlamps and tail lamps at night.
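Step three's confidence-threshold filtering can be illustrated with a small sketch. The detection record format and the 0.5 threshold are assumptions for illustration, not values taken from the patent:

```python
def filter_high_beams(detections, conf_threshold=0.5):
    """Keep only candidate boxes whose confidence clears the threshold,
    discarding likely non-high-beam light sources such as street lamps
    and reflections. The 0.5 default is illustrative only."""
    return [d for d in detections if d["conf"] >= conf_threshold]

candidates = [
    {"box": (120, 40, 30, 18), "conf": 0.92},  # bright high beam
    {"box": (300, 60, 12, 10), "conf": 0.31},  # e.g. street lamp glare
]
kept = filter_high_beams(candidates)           # only the 0.92 box survives
```

In practice the threshold would be tuned on labeled night footage to balance missed high beams against false alarms.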
Further, the image processing module receives video frames in real time and crops a fixed-size rectangular region from the picture, which it passes to the high beam identification module. Because a vehicle high beam is quite glaring when it illuminates a distant region, the visual distinction is obvious, so cropping out part of the frame for high beam identification helps raise the high beam detection rate; shrinking the detection region is also favorable to the detection rate. At initial installation, debugging personnel must manually delineate the fixed high beam detection region, which is generally located in the upper half of the monitored lane.
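The fixed-rectangle cropping just described can be sketched as follows; the frame is modeled as a plain list of pixel rows, and the region coordinates are illustrative assumptions:

```python
def crop_fixed_region(frame, region):
    """Crop a fixed rectangle region = (x, y, width, height) from a frame
    stored as a list of rows. Restricting detection to a strip in the
    upper half of the monitored lane both sharpens the visual contrast
    of glaring high beams and shrinks the area the detector must scan."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]

# 6x8 synthetic "image": pixel value = column + 10 * row
frame = [[c + 10 * r for c in range(8)] for r in range(6)]
roi = crop_fixed_region(frame, (2, 0, 4, 3))   # upper-half strip
```

With a real video stream the same crop would be applied per frame (e.g. via array slicing on the decoded image) before the ROI is handed to the detector.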
Furthermore, the high beam identification module adopts a deep-learning-based high beam detection model, abandoning the traditional hardware-device detection mode. Because the generalization ability of convolutional-neural-network feature extraction is far higher than that of traditional hand-crafted feature extraction, the module's detection accuracy is greatly improved and it can adapt to various road environments.
Further, the target tracking module takes the target high beam region as the initial tracking region; during tracking it calculates in real time the position of the current lamp region and whether the lamp's brightness and morphological characteristics change (a continuous-on judgment), and finally passes the lamp region to the image processing module.
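The continuous-on judgment (stable lamp position plus sustained brightness across tracked frames) might look like the following sketch; the track format and both thresholds are assumptions for illustration, not parameters from the patent:

```python
def continuously_on(track, max_position_jump=20.0, min_brightness=200.0):
    """Judge whether a tracked lamp stays continuously on: between
    consecutive frames the lamp centre must not jump implausibly far,
    and its brightness must stay above a floor. track is a list of
    ((x, y), brightness) samples; thresholds are illustrative."""
    for (pxy, pb), (cxy, cb) in zip(track, track[1:]):
        jump = ((cxy[0] - pxy[0]) ** 2 + (cxy[1] - pxy[1]) ** 2) ** 0.5
        if jump > max_position_jump or cb < min_brightness:
            return False
    return track[0][1] >= min_brightness

steady = [((100, 50), 240), ((104, 51), 235), ((109, 52), 238)]   # stays on
flicker = [((100, 50), 240), ((104, 51), 90), ((109, 52), 238)]   # dips off
```

A production version would also check the morphological features the text mentions (lamp shape and size), not just centre position and brightness.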
Further, when working, the high beam identification module and the car-following and car-meeting detection module call a trained multi-target (high beam, headlamp, tail lamp) detection model based on improved YOLOv3 to detect the lamps in the detection region. If a headlamp is detected in the car-following region with confidence above the threshold, following driving behavior of the target vehicle can be judged; if a tail lamp is detected in the car-meeting region with confidence above the threshold, meeting driving behavior between the target vehicle and a vehicle in the opposite lane can be judged.
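The decision rule just described (headlamp in the following region, tail lamp in the meeting region, each gated by a confidence threshold) can be sketched as below; the label strings, detection format and 0.5 threshold are illustrative assumptions:

```python
def classify_scene(follow_region_dets, meet_region_dets, threshold=0.5):
    """Scene judgment as described in the text: a sufficiently confident
    headlamp in the following region implies a vehicle ahead of the
    target (following scene); a tail lamp in the meeting region implies
    oncoming-lane traffic (meeting scene). Returns the matching scene
    labels, or ["none"] if neither condition holds."""
    scenes = []
    if any(d["label"] == "headlamp" and d["conf"] > threshold
           for d in follow_region_dets):
        scenes.append("following")
    if any(d["label"] == "tail_lamp" and d["conf"] > threshold
           for d in meet_region_dets):
        scenes.append("meeting")
    return scenes or ["none"]

scenes = classify_scene(
    [{"label": "headlamp", "conf": 0.8}],   # confident headlamp ahead
    [{"label": "tail_lamp", "conf": 0.3}],  # too weak to count
)
```

Both conditions can hold at once, in which case the sketch reports both scenes, which matches the goal of documenting every circumstance under which the high beam was misused.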
Example 2
(1) The network structure used in step three (the high beam identification module detects vehicles with their high beams on in the high beam detection region) and step six (the image processing module crops the corresponding car-following and car-meeting detection regions according to the current lamp region position and passes them to the car-following and car-meeting detection module, which detects vehicle headlamps and tail lamps at night) is shown in fig. 1-2.
(2) The improved YOLOv3 feature extraction network adopted by the invention is composed of 5 residual modules, each formed by stacking several residual components. To achieve fast vehicle high beam detection, the invention first adjusts the dimensions of each convolution layer of the YOLOv3 network without affecting the accuracy of the network model, reducing the width of the original network to 1/2, which greatly cuts the parameters and computational cost of the improved YOLOv3 network. Second, the detection rate for distant vehicle high beams in the video is improved by enlarging the input image size. Then the residual components of the slimmed YOLOv3 network are improved: the bottleneck-structured residual components of the original network are replaced with standard residual components to strengthen the network's feature extraction ability.
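The claim that halving the network width greatly cuts parameters can be checked with simple arithmetic: a k x k convolution layer holds k * k * c_in * c_out weights, so halving both channel counts shrinks the layer to roughly a quarter. The 256-to-512-channel layer below is an arbitrary YOLOv3-like example, not a figure from the patent:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (bias ignored)."""
    return k * k * c_in * c_out

# Halving the width halves both the input and output channel counts of
# each internal convolution, so its weight count drops to ~1/4.
full = conv_params(3, 256, 512)   # a full-width YOLOv3-like layer
slim = conv_params(3, 128, 256)   # the same layer at half width
ratio = full / slim               # 4.0
```

Applied across every layer, this is why the text can claim the slimmed network's parameters and computational cost are "greatly reduced" while the architecture is otherwise unchanged.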
(3) During detection, the model first filters the 512x512 input image, down-sampling it with a 32-channel 3x3 convolution of stride 2; the resulting feature map (256x256) is then fed into the feature extraction network, yielding feature maps at three scales: 64x64, 32x32 and 16x16. Finally, the feature pyramid is up-sampled by a factor of 2 and fused with the deep residual network to form the final deeply fused multi-target (high beam, headlamp, tail lamp) detection model.
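The spatial sizes quoted above follow directly from the downsampling strides. A quick sketch, assuming square inputs and the standard YOLOv3 total strides of 8, 16 and 32 (the strides are an assumption inferred from the stated sizes, not stated in the patent):

```python
def feature_map_sizes(input_size=512, strides=(8, 16, 32)):
    """Spatial sizes of the three detection scales for a square input.
    A 512x512 input with total strides 8, 16 and 32 yields exactly the
    64x64, 32x32 and 16x16 maps named in the text."""
    return [input_size // s for s in strides]

sizes = feature_map_sizes(512)   # the three detection scales
after_stem = 512 // 2            # the stride-2 stem conv: 256x256
```

The same arithmetic shows why enlarging the input image helps detect distant high beams: a larger input enlarges every feature map, so a small distant lamp covers more cells at each scale.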
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (6)

1. A high beam vehicle snapshot method based on deep learning, characterized in that: it comprises a camera, a video processing module, an image processing module, a high beam identification module and a target tracking module, realizes high beam identification and snapshot of a vehicle, and comprises the following six steps:
Step one: the camera shoots a vehicle, and the video processing module acquires a video image from the camera;
Step two: the image processing module crops the high beam detection region from the video image;
Step three: the high beam identification module detects vehicles with their high beams on in the high beam detection region;
Step four: the camera shoots the vehicle to record vehicle information, any offending target vehicle is logged, and the target tracking module tracks the target vehicle;
Step five: while the target tracking module is tracking, the image processing module crops images of the target vehicle's car-following and car-meeting detection regions;
Step six: the image processing module crops the corresponding car-following and car-meeting detection regions according to the current lamp region position and passes them to the car-following and car-meeting detection module, which detects vehicle headlamps and tail lamps at night.
2. The deep learning-based high beam vehicle snapshot method of claim 1, wherein: the image processing module receives the video frame in real time, intercepts a rectangular area with a fixed size in the picture and transmits the rectangular area to the high beam identification module.
3. The deep learning-based high beam vehicle snapshot method of claim 1, wherein: the high beam identification module adopts a high beam detection model based on deep learning.
4. The deep learning-based high beam vehicle snapshot method of claim 1, wherein: the target tracking module takes a target high beam area as an initialization tracking area, calculates the position of the current car light area and whether the brightness and morphological characteristics of the car light change in real time in the tracking process, and finally transmits the car light area to the image processing module.
5. The deep learning-based high beam vehicle snapshot method of claim 1, wherein: and the high beam identification module and the car following and meeting detection module call a trained multi-target detection model based on improved YOLOv3 to detect the car lights in the detection area when working.
6. The deep learning-based high beam vehicle snapshot method of claim 5, wherein: the improved YOLOv3 network consists of 5 residual modules, each of which is stacked from multiple residual components.
CN202010240833.0A 2020-03-31 2020-03-31 High beam vehicle snapshot method based on deep learning Pending CN111310738A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010240833.0A CN111310738A (en) 2020-03-31 2020-03-31 High beam vehicle snapshot method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010240833.0A CN111310738A (en) 2020-03-31 2020-03-31 High beam vehicle snapshot method based on deep learning

Publications (1)

Publication Number Publication Date
CN111310738A (en) 2020-06-19

Family

ID=71148252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010240833.0A Pending CN111310738A (en) 2020-03-31 2020-03-31 High beam vehicle snapshot method based on deep learning

Country Status (1)

Country Link
CN (1) CN111310738A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206164739U (en) * 2016-11-30 2017-05-10 中山大学 Automation video recording that vehicle far -reaching headlamp used is in violation of rules and regulations collected evidence and enforcement system
CN110135503A (en) * 2019-05-19 2019-08-16 重庆理工大学 One kind put together machines people's part depth study recognition methods


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863694A (en) * 2022-05-27 2022-08-05 郑州高识智能科技有限公司 Vehicle driving scene identification and distinguishing method for high beam detection
CN114863694B (en) * 2022-05-27 2023-12-12 郑州高识智能科技有限公司 Vehicle driving scene identification and distinguishing method for high beam detection
CN115762178A (en) * 2023-01-09 2023-03-07 长讯通信服务有限公司 Intelligent electronic police violation detection system and method

Similar Documents

Narote et al. A review of recent advances in lane detection and departure warning system
US7899213B2 (en) Image processing system and vehicle control system
US7566851B2 (en) Headlight, taillight and streetlight detection
Alcantarilla et al. Night time vehicle detection for driving assistance lightbeam controller
TWI302879B (en) Real-time nighttime vehicle detection and recognition system based on computer vision
JP4253275B2 (en) Vehicle control system
US8903603B2 (en) Environment recognizing device for a vehicle and vehicle control system using the same
CN102963294B (en) Method for judging opening and closing states of high beam of vehicle driving at night
CN108496176B (en) Method for identifying objects in the surrounding area of a motor vehicle, driver assistance system and motor vehicle
JP4722101B2 (en) Control devices for automobiles such as light distribution
CN113553998B (en) Anti-dazzling snapshot method for license plate at night on expressway based on deep learning algorithm
AU2019100914A4 (en) Method for identifying an intersection violation video based on camera cooperative relay
CN111310738A (en) High beam vehicle snapshot method based on deep learning
KR101134857B1 (en) Apparatus and method for detecting a navigation vehicle in day and night according to luminous state
CN110450706A (en) A kind of adaptive distance light lamp control system and image processing algorithm
CN111027494A (en) Matrix vehicle lamp identification method based on computer vision
KR101859201B1 (en) Developed automobile high-beam automatic control system based on camera sensor and its method
JPH08193831A (en) Apparatus for detecting approaching vehicle
CN111046741A (en) Method and device for identifying lane line
JP4007578B2 (en) Headlamp irradiation range control method and headlamp apparatus
JP2013025568A (en) Approaching obstacle detecting device and program
CN113034923A (en) Method for detecting identification and continuous opening of high beam of vehicle
KR101171368B1 (en) Method and system for controlling of adaptive front lighting
CN113743226B (en) Daytime front car light language recognition and early warning method and system
CN115565363A (en) Signal recognition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination