CN108021856B - Vehicle tail lamp identification method and device and vehicle


Publication number
CN108021856B
CN108021856B (application CN201610978015.4A)
Authority
CN
China
Prior art keywords: vehicle, target, image, area, tail lamp
Legal status: Active (assumption, not a legal conclusion)
Application number: CN201610978015.4A
Other languages: Chinese (zh)
Other versions: CN108021856A (en)
Inventors: 丁赞, 徐波, 杨青
Current Assignee: BYD Co Ltd
Original Assignee
BYD Co Ltd
Application filed by BYD Co Ltd
Priority: CN201610978015.4A
Published as CN108021856A; granted as CN108021856B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/13Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Abstract

The invention discloses a vehicle tail lamp identification method and device and a vehicle. The method comprises: identifying the tail lamps of a first target vehicle in collected historical vehicle front images; determining, according to the identification result, a historical position area of the tail lamps of the first target vehicle in the historical vehicle front images; predicting, with a preset model and according to the historical position area, the position of the tail lamps of the first target vehicle in the collected current vehicle front image, so as to obtain a target detection area; and identifying the tail lamps of the first target vehicle within the target detection area of the current vehicle front image, so as to obtain current information of the tail lamps. Because the tail lamp position area in the current vehicle front image is predicted from the historical position area and the tail lamps are recognized only within the resulting target detection area, the method reduces the amount of calculation, improves detection efficiency and improves real-time performance.

Description

Vehicle tail lamp identification method and device and vehicle
Technical Field
The disclosure relates to the field of intelligent traffic systems, in particular to a vehicle tail lamp identification method and device and a vehicle.
Background
With the rapid development of science and technology, detecting vehicles ahead has become an important direction in the field of intelligent traffic systems: it improves recognition of the road scene, particularly when a vehicle travels through a complex road traffic environment at night, and thereby gives the driver more safety warnings and assistance and improves the active safety of the vehicle.
At present, a vehicle ahead is generally identified in a night driving environment by identifying its tail lights. In the related art, tail light identification typically formulates extraction and matching rules based on the characteristics of vehicle tail lights and then identifies the tail lights in each frame of the acquired vehicle front images according to those rules.
In the related-art tail light identification method, every region of each acquired frame of the vehicle front image must be scanned to find the tail lights. The method therefore suffers from a large detection calculation amount, low detection efficiency and poor real-time performance.
Disclosure of Invention
The purpose of the present disclosure is to provide a vehicle tail light identification method and apparatus, and a vehicle, which solve the related-art problems of large detection computation, low detection efficiency and poor real-time performance caused by having to scan every region of each acquired frame of the vehicle front image, thereby improving detection efficiency and real-time performance.
In order to achieve the above object, the present disclosure provides a vehicle tail light identification method, the method including:
identifying tail lamps of a first target vehicle according to collected historical vehicle front images;
determining a historical position area of the tail lamps of the first target vehicle in the historical vehicle front images according to the identification result;
predicting the position of the tail lamps of the first target vehicle in the collected current vehicle front image by using a preset model according to the historical position area, so as to obtain a target detection area;
and identifying the tail lamps of the first target vehicle in the target detection area of the current vehicle front image, so as to obtain current information of the tail lamps of the first target vehicle.
Optionally, the preset model is an AR model;
the step of predicting the position of the tail lamp of the first target vehicle in the collected current image in front of the vehicle by using a preset model according to the historical position area to obtain a target detection area comprises the following steps:
acquiring a first random time sequence according to the minimum circumscribed rectangle frame boundary position information of the historical position area and the acquisition time of the image in front of the vehicle in which the historical position area is located;
and predicting the position of the tail lamp of the first target vehicle in the acquired current image in front of the vehicle by utilizing the AR model according to the first random time sequence.
Optionally, the method further comprises:
carrying out tail lamp identification in the region outside the target detection region of the collected current image in front of the vehicle at preset time intervals;
if the tail lamp of the second target vehicle is identified in the area outside the target detection area of the acquired current image in front of the vehicle, determining the position area of the tail lamp of the second target vehicle in the image in front of the vehicle according to the identification result;
and adding a second random time sequence acquired according to the minimum circumscribed rectangular frame boundary position of the position area of the tail lamp of the second target vehicle in the image in front of the vehicle and the acquisition time of the image in front of the vehicle into the first random time sequence.
Optionally, the step of identifying the tail light of the first target vehicle in the target detection area further includes:
and if the tail lamp of the first target vehicle is not identified in the target detection area in the collected current front images with the preset frame number, deleting the first random time sequence.
Optionally, the step of identifying the tail light of the first target vehicle in the target detection area includes:
expanding the target detection area according to a preset rule;
and identifying tail lamps of the first target vehicle in the expanded target detection area.
Optionally, the step of identifying the tail lights of the first target vehicle according to the collected historical vehicle front images includes:
screening out an area with color characteristics meeting first preset conditions in a preset area of interest of the historical vehicle front image, and determining a target area;
carrying out self-adaptive threshold segmentation on the image of the target region by using a maximum inter-class variance method to obtain a binary image;
extracting candidate bright blocks according to first characteristic information of the bright blocks in the binary image;
and extracting a target bright block pair as a tail lamp of the first target vehicle according to second characteristic information of the bright block pair obtained by pairwise combination of the candidate bright blocks.
Optionally, the first feature information includes: the area, the height of the minimum circumscribed rectangular frame and the width of the minimum circumscribed rectangular frame;
the step of extracting candidate bright blocks according to the first feature information of the bright blocks in the binarized image comprises the following steps:
performing connected-component labeling on the binarized image to obtain labeled bright blocks;
and extracting, from the labeled bright blocks, bright blocks whose area is smaller than a preset area threshold and whose minimum circumscribed rectangular frame height and width fall within a first preset range, as the candidate bright blocks.
Optionally, the second feature information includes: the area, the center point and the symmetry of the bright blocks in the bright block pair and the minimum circumscribed rectangular frame of the bright block pair;
the step of extracting a target bright block pair as a tail lamp of the first target vehicle according to second feature information of a bright block pair obtained by pairwise combination of the candidate bright blocks comprises the following steps of:
extracting bright block pairs with the areas and the central points meeting a second preset range from the bright block pairs as candidate bright block pairs;
and extracting the target bright block pair of which the minimum circumscribed rectangle frame meets a third preset range and the symmetry meets a second preset condition from the candidate bright block pair to be used as the tail lamp of the first target vehicle.
The present disclosure also provides a vehicle tail light recognition device, the device including:
the first identification module is used for identifying tail lamps of a first target vehicle according to the collected historical vehicle front images;
a first determination module, configured to determine, according to a result of the recognition, a historical position area of a tail lamp of the first target vehicle in the historical vehicle front image;
the prediction module is used for predicting the position of the tail lamp of the first target vehicle in the collected current vehicle front image by using a preset model according to the historical position area to obtain a target detection area;
and the second identification module is used for identifying the tail lamp of the first target vehicle in the target detection area of the current front image so as to obtain the current information of the tail lamp of the first target vehicle.
Optionally, the preset model is an AR model;
the prediction module comprises:
the acquisition submodule is used for acquiring a first random time sequence according to the minimum external rectangular frame boundary position information of the historical position area and the acquisition time of the vehicle front image in which the historical position area is located;
and the prediction submodule is used for predicting the position of the tail lamp of the first target vehicle in the acquired current front image by utilizing the AR model according to the first random time sequence.
Optionally, the apparatus further comprises:
the third identification module is used for carrying out tail lamp identification in the region outside the target detection region of the collected current vehicle front image at preset time intervals;
the second determining module is used for determining the position area of the tail lamp of the second target vehicle in the image in front of the vehicle according to the recognition result if the tail lamp of the second target vehicle is recognized in the area outside the target detection area of the acquired image in front of the vehicle;
and the adding module is used for adding a second random time sequence acquired according to the minimum circumscribed rectangular frame boundary position of the position area of the tail lamp of the second target vehicle in the image in front of the vehicle and the acquisition time of the image in front of the vehicle into the first random time sequence.
Optionally, the second identification module comprises:
and the deleting submodule is used for deleting the first random time sequence if the tail lamp of the first target vehicle is not identified in the target detection area in the collected current front images with the preset number of frames.
Optionally, the second identification module further comprises:
the expansion submodule is used for expanding the target detection area according to a preset rule;
and the identification submodule is used for identifying the tail lamp of the first target vehicle in the expanded target detection area.
Optionally, the first identification module includes:
the screening submodule is used for screening out an area with color characteristics meeting first preset conditions in a preset area of interest of the historical vehicle front image and determining a target area;
the segmentation submodule is used for carrying out self-adaptive threshold segmentation on the image of the target region by utilizing a maximum inter-class variance method to obtain a binary image;
the first extraction submodule is used for extracting candidate bright blocks according to first characteristic information of the bright blocks in the binary image;
and the second extraction submodule is used for extracting a target bright block pair as a tail lamp of the first target vehicle according to second characteristic information of the bright block pair obtained by pairwise combination of the candidate bright blocks.
Optionally, the first feature information includes: the area, the height of the minimum circumscribed rectangular frame and the width of the minimum circumscribed rectangular frame;
the first extraction submodule is configured to:
performing connected-component labeling on the binarized image to obtain labeled bright blocks;
and extracting, from the labeled bright blocks, bright blocks whose area is smaller than a preset area threshold and whose minimum circumscribed rectangular frame height and width fall within a first preset range, as the candidate bright blocks.
Optionally, the second feature information includes: the area, the center point and the symmetry of the bright blocks in the bright block pair and the minimum circumscribed rectangular frame of the bright block pair;
the second extraction submodule is configured to:
extracting bright block pairs with the areas and the central points meeting a second preset range from the bright block pairs as candidate bright block pairs;
and extracting the target bright block pair of which the minimum circumscribed rectangle frame meets a third preset range and the symmetry meets a second preset condition from the candidate bright block pair to be used as the tail lamp of the first target vehicle.
The present disclosure also provides a vehicle including the above vehicle tail light recognition device.
Through the above technical solution, the tail lamps of a first target vehicle are identified from the collected historical vehicle front images; the historical position area of the tail lamps in those images is determined from the identification result; the position of the tail lamps in the collected current vehicle front image is predicted with a preset model according to the historical position area, so as to obtain a target detection area; and the tail lamps are identified within that target detection area to obtain their current information. This avoids the related-art need to scan every region of each acquired frame, and thus reduces the amount of calculation, improves detection efficiency and improves real-time performance.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart of a vehicle tail light identification method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a vehicle tail light identification device according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method for identifying tail lights of a vehicle from a historical pre-vehicle image according to an embodiment of the disclosure;
fig. 4 is a scene diagram of an implementation of a method for identifying a tail light of a vehicle according to an embodiment of the disclosure;
FIG. 5 is a flowchart of a candidate bright block extraction method according to an embodiment of the disclosure;
fig. 6 is a flowchart of a vehicle tail light extraction method according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for acquiring a target detection area of an image in front of a vehicle according to an embodiment of the present disclosure;
fig. 8 is a flowchart of a method for identifying tail lights of a vehicle in a target detection area according to an embodiment of the disclosure;
fig. 9 is a flowchart of a target detection area acquisition method according to another embodiment of the present disclosure;
fig. 10 is a flowchart of a vehicle tail light identification method according to another embodiment of the present disclosure;
FIG. 11 is a flow chart of a vehicle tail light identification method of an embodiment of the present disclosure;
FIG. 12 is a flow chart of a vehicle tail light identification method according to an embodiment of the present disclosure;
fig. 13 is a block diagram of a vehicle tail light identification device according to an embodiment of the present disclosure;
fig. 14 is a block diagram of an apparatus for a vehicle tail light identification method according to an embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
It should be noted that the terms "first," "second," "third," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Fig. 1 is a flowchart of a vehicle tail light identification method according to an embodiment of the present disclosure. Referring to fig. 1, the method may include the steps of:
in step S11, the tail lights of the first target vehicle are identified based on the collected historical vehicle front images.
Referring to fig. 2, in an embodiment of the present disclosure, an image of the area in front of a vehicle may be captured in real time by an image capture device 110 (e.g., a front-view camera) installed at a suitable position on the vehicle (e.g., behind the front windshield). A processor 120 (e.g., a DSP chip) receives the vehicle front image through an input interface connected to the image capture device 110 and stores it in a memory 130 for subsequent analysis and processing.
In an embodiment of the present disclosure, the image capture device 110 continuously captures images of the area in front of the vehicle in real time. A historical vehicle front image is one acquired before the current time; for example, the 5 frames acquired immediately before the current time may be taken as the historical vehicle front images, and the tail lights of the first target vehicle are then identified in those 5 frames.
In a night driving environment, the characteristics of vehicle tail lights, such as their brightness, color and shape and the symmetry of the left and right tail lights, are relatively stable, so the tail lights can be identified in the collected historical vehicle front images by using these characteristics.
In step S12, a history position area of the tail lamps of the first target vehicle in the history front-of-vehicle image is determined based on the result of the recognition.
For the historical front images in which the vehicle tail lamps are identified, the historical position areas of the vehicle tail lamps in the historical front images can be respectively obtained. In the embodiment of the present disclosure, the historical position area of the vehicle tail light may be determined by two points, i.e., the top left corner TL and the bottom right corner BR, of the minimum bounding rectangle of the vehicle tail light.
In step S13, the position of the tail lamp of the first target vehicle in the captured current image of the vehicle ahead is predicted by using a preset model based on the historical position region, so as to obtain a target detection region.
In step S14, in the target detection area of the current vehicle front image, the tail lights of the first target vehicle are identified to obtain current information of the tail lights of the first target vehicle.
After the historical position areas of the tail lamps are determined, the position area of the tail lamps in the current vehicle front image can be predicted from the historical position areas by using a preset model. This predicted area serves as the target detection area of the current vehicle front image, and the tail lamps are then identified only within it to obtain the tail lamp information in that image.
The vehicle tail light identification method of the present disclosure exploits the correlation of a target vehicle's tail lights across consecutive frames: the tail light positions in the historical vehicle front images are used to predict the area where the lights will appear in the next frame, enabling rapid detection. Because the tail light position area of the current vehicle front image is predicted from the historical position area to obtain a target detection area, and the tail lights are recognized only within that area, the amount of calculation is reduced while detection efficiency and real-time performance are improved.
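The prediction step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the AR model order (here 1, fitted per coordinate by least squares), the independent treatment of each rectangle corner coordinate, and the padding margin are all assumptions; the disclosure only specifies an AR model over a random time sequence of minimum bounding rectangle boundary positions, with the predicted region later expanded by a preset rule.

```python
def ar1_predict(series):
    """Predict the next value of one coordinate series with an AR(1) model.

    series: recent values of a single boundary coordinate (e.g. the x of
    the top-left corner of the tail light's minimum bounding rectangle),
    ordered by capture time. The AR(1) coefficient is the least-squares
    fit of x[t] against x[t-1].
    """
    num = sum(b * a for a, b in zip(series, series[1:]))
    den = sum(a * a for a in series[:-1])
    coeff = num / den if den else 0.0
    return coeff * series[-1]


def predict_detection_box(history, margin=10):
    """Predict the target detection area from per-frame tail light boxes.

    history: list of (x1, y1, x2, y2) minimum-bounding-rectangle corners
    taken from the historical vehicle front images. Each coordinate is
    predicted independently and padded by an assumed margin, standing in
    for the preset expansion rule mentioned in the text.
    """
    coords = list(zip(*history))
    x1, y1, x2, y2 = (ar1_predict(list(c)) for c in coords)
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)
```

For a stationary vehicle ahead, the history boxes barely move and the predicted detection area is simply the last box plus the margin.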
Referring to fig. 3, in an embodiment of the present disclosure, step S11 may include:
in step S31, a region whose color feature satisfies a first preset condition is screened out from a preset region of interest of the historical vehicle front image, and a target region is determined.
Detecting over the whole image is computationally expensive and time-consuming, which degrades real-time performance, and statistics over a large number of vehicle front images show that tail lights appear within a stable region of the image. A region of interest can therefore be preset for the vehicle front image to be detected, and the tail lights detected only within it, which reduces the detection workload, shortens the detection time and improves real-time performance.
In an embodiment of the present disclosure, a rectangular coordinate system is established with the upper boundary of the vehicle front image 400 to be detected as the x-axis and the vertical direction as the y-axis, as shown in fig. 4. The line one third of the way down the image 400 may be set as the upper boundary of the detection region and the maximum value in the y-axis direction as its lower boundary; the region between the two boundaries is the preset region of interest 410. This improves the real-time performance of detection and effectively excludes interfering light in the vehicle front image, such as light reflected from street lamps and billboards.
It should be noted that, in the embodiment of the present disclosure, the upper and lower boundary lines of the detection region may be adjusted appropriately according to the acquired images, changing the size of the region of interest to suit the actual situation.
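The region-of-interest bounds described above can be sketched in a few lines. The one-third upper boundary follows the text; exposing the fraction as a parameter is an assumption reflecting the note that the boundaries may be adjusted.

```python
def region_of_interest(img_height: int, top_frac: float = 1 / 3):
    """Return (y_top, y_bottom) row bounds of the preset region of interest.

    y_top sits one third of the way down the image, per the description;
    y_bottom is the image bottom (maximum y). top_frac is an assumed
    tunable, since the text says the boundaries may be adjusted.
    """
    y_top = int(img_height * top_frac)
    return y_top, img_height
```

Tail light detection would then run only on rows y_top through y_bottom of each frame.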
Road images collected in front of a vehicle in a night traffic environment generally contain many highlight areas, such as the tail lights of vehicles ahead, street lamps, and light reflected from billboards. From prior knowledge, in most cases vehicle tail lights differ significantly in color from other light sources: they appear as reddish areas in the vehicle front image, i.e. among the R, G and B components of the image the R component takes a relatively high value. In the embodiment of the present disclosure, the first preset condition may therefore be that the R component value is greater than a preset R-component threshold. The areas of the preset region of interest whose color features satisfy the first preset condition are screened out and retained as the target area.
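A sketch of this color screening, assuming RGB pixel tuples and an illustrative R-component threshold of 150 (the description only requires the R component to exceed a preset threshold):

```python
def screen_red_regions(pixels, r_threshold=150):
    """Mark pixels whose R component exceeds the preset threshold.

    pixels: rows of (R, G, B) tuples. Returns a same-shaped 0/1 mask;
    the 1-pixels form the target area. The threshold value 150 is an
    illustrative assumption.
    """
    return [[1 if r > r_threshold else 0 for (r, g, b) in row]
            for row in pixels]
```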
In addition, in the embodiment of the disclosure, to improve image quality the processor may preprocess the vehicle front image before performing detection in the preset region of interest, for example by denoising and smoothing it with a Gaussian filter.
In step S32, the image of the target region is adaptively threshold-segmented by the maximum inter-class variance method to obtain a binarized image.
After the target area whose color features satisfy the first preset condition is extracted, its image is segmented to obtain a binarized image. The vehicle front image can be adaptively segmented by the maximum inter-class variance (Otsu) method: first, an initial segmentation threshold is obtained by statistics over a large amount of tail light brightness information; then the segmentation threshold T that maximizes the between-class variance is found by traversing the accumulated brightness histogram of several frames; finally, the image is segmented with T. Pixels above T are treated as suspected tail light pixels, i.e. target pixels, and marked white (pixel value 255); the rest are treated as background and marked black (pixel value 0), yielding a binarized image containing only the values 0 and 255.
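The maximum inter-class variance (Otsu) criterion at the heart of this step can be sketched as follows. This single-image version is a simplification: it omits the statistically derived initial threshold and the multi-frame accumulated histogram described above.

```python
def otsu_threshold(gray):
    """Threshold maximizing the between-class variance of an 8-bit image.

    gray: flat iterable of intensities in 0..255.
    """
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0
    for t in range(256):
        w0 += hist[t]          # pixel count of the "background" class
        if w0 == 0:
            continue
        w1 = total - w0        # pixel count of the "target" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


def binarize(gray, t):
    """Pixels above t become 255 (suspected tail light), others 0."""
    return [255 if v > t else 0 for v in gray]
```

On a bimodal brightness distribution the returned threshold falls between the dark background and the bright lamp pixels, as the segmentation step requires.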
In step S33, a candidate bright block is extracted from the first feature information of the bright block in the binarized image.
In an embodiment of the present disclosure, the first feature information may include the area and the height and width of the minimum circumscribed rectangular frame. Because the shape of a vehicle tail light has certain characteristics and satisfies certain ratios, interfering bright blocks that do not conform to those shape characteristics can be filtered out according to the first feature information; the remaining bright blocks are taken as candidate tail light bright blocks.
Referring to fig. 5, in an embodiment of the present disclosure, the step S33 includes:
in step S331, the binarized image is connected and labeled to obtain a bright block containing a label.
The binarized image obtained by adaptive threshold segmentation contains scattered pixels that are not associated with one another, so a connectivity procedure is needed to join the pixels belonging to the same bright block into a connected, labeled bright block, distinguishing pixels that belong to different objects.
In an embodiment of the present disclosure, 8-neighborhood connectivity may be adopted to label the connected regions. That is, the binarized image is scanned in a preset scanning order (for example, from top to bottom and from left to right); when the first target pixel point (a pixel point with a pixel value of 255) is found, it is given the label "1", and, taking this point as a starting point, the bright block is determined by contour tracking and the same label "1" is assigned to all pixel points of the same bright block. Specifically, the next target pixel point is searched among the eight surrounding neighbors in a preset direction (for example, counterclockwise); if a neighboring pixel point has a pixel value of 255, it is given the label "1"; otherwise the pixel point in the next direction of the neighborhood is examined, until all neighbors have been searched. After one connected region is labeled, a new connected region is searched in the same way with the label number incremented by 1; the label of the pixel points in a bright block is the label of that bright block. In this way the number of suspected tail lamp bright blocks and the label of each bright block are obtained. Since these bright blocks differ in features such as size and shape, the binarized image needs to be further processed.
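The 8-neighborhood labeling described above can be sketched with a simple flood fill; a breadth-first search is used here in place of the contour-tracking variant in the text, and all names are illustrative.

```python
from collections import deque
import numpy as np

def label_8_connected(binary: np.ndarray) -> np.ndarray:
    """Label 8-connected bright regions (pixel value 255); 0 stays background."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 255 and labels[y, x] == 0:
                next_label += 1          # label number incremented per region
                labels[y, x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    # visit all eight surrounding neighbors
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] == 255
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                q.append((ny, nx))
    return labels
```

All pixel points of the same bright block receive the same label, and the number of bright blocks is `labels.max()`.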
In step S332, bright blocks whose area is not smaller than a preset area threshold and whose minimum bounding rectangle height and width satisfy a first preset range are extracted from the bright blocks as candidate bright blocks.
Because various kinds of interference illumination exist in a night traffic scene, the communicated and marked bright blocks may include interference bright blocks such as vehicle lights, street lamps, road surface reflected light and billboard reflected light, and therefore, candidate bright blocks which meet the characteristics of tail lamps of vehicles need to be extracted according to the area and shape of the bright blocks and the height and width characteristics of the minimum external rectangular frame.
Firstly, the area of a bright block is required to be greater than or equal to a preset area threshold so as to screen out small-area interference bright blocks, that is, A(C_i) ≥ TH_A, where A(C_i) denotes the area of the bright block C_i and TH_A is the preset area threshold. In an embodiment of the present disclosure, the preset area threshold may be set to 3, that is, bright blocks with an area smaller than 3 pixels are screened out.
Secondly, the height and width of the minimum bounding rectangle of the bright block must also satisfy the first preset range. In an embodiment of the disclosure, the first preset range is as shown in formula (1):

W(C_i) ≥ TH_W, TH_WHR1 ≤ W(C_i)/H(C_i) ≤ TH_WHR2    (1)

where W(C_i) and H(C_i) are respectively the width and height of the minimum bounding rectangle of the bright block C_i, TH_W is a preset width threshold of the minimum bounding rectangle, and TH_WHR1 and TH_WHR2 are respectively the lower and upper limits of the preset aspect ratio of the minimum bounding rectangle.
It should be noted that, in the embodiment of the present disclosure, the preset area threshold value limiting the area of the bright block, the preset width threshold value limiting the height and width of the minimum circumscribed rectangular frame of the bright block, and the preset width-to-height ratio lower limit and upper limit in the tail light extraction rule are set empirically and may be adjusted according to the collected vehicle front image.
Bright blocks whose area is not smaller than the preset area threshold and whose minimum bounding rectangle height and width satisfy the first preset range are extracted as candidate bright blocks, on which subsequent processing identifies the tail lamps of the vehicle.
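The area and shape rules of step S332 can be sketched as follows. The threshold values are assumed for illustration (only the area threshold of 3 is stated in the text), and the function name is not from the patent.

```python
import numpy as np

# Illustrative thresholds; the patent sets these empirically.
TH_A = 3                       # preset area threshold (pixels, from the text)
TH_W = 2                       # minimum-bounding-rect width threshold (assumed)
TH_WHR1, TH_WHR2 = 0.8, 6.0    # aspect-ratio lower/upper limits (assumed)

def candidate_blocks(labels: np.ndarray):
    """Return {label: (x0, y0, x1, y1)} for bright blocks passing formula (1)
    and the area rule A(C_i) >= TH_A."""
    out = {}
    for lbl in range(1, labels.max() + 1):
        ys, xs = np.nonzero(labels == lbl)
        area = len(xs)
        w = xs.max() - xs.min() + 1    # minimum bounding rectangle width
        h = ys.max() - ys.min() + 1    # minimum bounding rectangle height
        if area >= TH_A and w >= TH_W and TH_WHR1 <= w / h <= TH_WHR2:
            out[lbl] = (xs.min(), ys.min(), xs.max(), ys.max())
    return out
```

Blocks that fail either rule (tiny reflections, elongated streaks) are discarded; the survivors are the candidate bright blocks.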
In step S34, a target bright block pair is extracted as a tail light of the first target vehicle according to the second feature information of the bright block pair obtained by combining the candidate bright blocks two by two.
When the vehicle runs at night, the left tail lamp and the right tail lamp of the vehicle generally have similar characteristics such as area, height, symmetry and the like, and the left tail lamp and the right tail lamp appear in pairs, so that the characteristics of the vehicle tail lamp can be utilized to further analyze and process the bright block pairs obtained by pairwise combination of the extracted candidate bright blocks to obtain the tail lamp of the vehicle, and the identification of the vehicle tail lamp is realized.
Referring to fig. 6, in an embodiment of the present disclosure, the step S34 includes:
in step S341, a pair of bright blocks whose area and center point satisfy the second preset range is extracted from the pair of bright blocks as a pair of candidate bright blocks.
Since automobile tail lamps generally appear in pairs and the two tail lamps of the same vehicle are similar in shape and position, the difference between the areas of the two lit lamps and the difference in the vertical coordinates of their center points should fall within a fixed range. The bright block pairs formed by pairwise combination of the candidate bright blocks, however, do not necessarily belong to the same vehicle, so the candidate bright block pairs whose areas and center points satisfy the second preset range need to be extracted, in order to exclude bright block pairs that do not belong to the same vehicle.
In an embodiment of the disclosure, the second preset range is as shown in formula (2):

|A(C_i) − A(C_j)| ≤ TH_dA, |C_Y(C_i) − C_Y(C_j)| ≤ TH_dY    (2)

where A(·) is the area of a candidate bright block, C_Y(·) is the vertical coordinate of the center point of a candidate bright block, and TH_dA and TH_dY are respectively the threshold on the area difference of the bright blocks in a pair and the threshold on the difference of the vertical coordinates of the tail lamp center points.
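Formula (2) can be sketched as a simple predicate; the threshold values TH_dA and TH_dY are assumed for illustration, as the patent does not state them.

```python
TH_dA = 50    # area-difference threshold (assumed value)
TH_dY = 5     # vertical center-point difference threshold (assumed value)

def is_candidate_pair(a: dict, b: dict) -> bool:
    """a, b: bright blocks with 'area' and 'cy' (center-point y coordinate).
    Returns True when the pair satisfies both conditions of formula (2)."""
    return (abs(a['area'] - b['area']) <= TH_dA
            and abs(a['cy'] - b['cy']) <= TH_dY)
```

Pairs that fail either condition are excluded as not belonging to the same vehicle.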
In step S342, a bright block pair, in which the minimum circumscribed rectangle frame satisfies the third preset range and the symmetry of the bright block satisfies the second preset condition, is extracted from the candidate bright block pairs as the tail light of the first target vehicle.
According to the conversion relation from world coordinates to image coordinates, the tail lamps of a vehicle have certain characteristics in the image in front of the vehicle; namely, the width and the aspect ratio of the minimum bounding rectangle enclosing the two tail lamps satisfy certain conditions. In an embodiment of the present disclosure, the third preset range is as shown in formula (3):

TH_w1 ≤ W(C_i, C_j) ≤ TH_w2, TH_Ratio1 ≤ W(C_i, C_j)/H(C_i, C_j) ≤ TH_Ratio2    (3)

where the pair's bounding rectangle has width W(C_i, C_j) = max(r(C_i), r(C_j)) − min(l(C_i), l(C_j)) and height H(C_i, C_j) = max(d(C_i), d(C_j)) − min(u(C_i), u(C_j)); u(·), d(·), l(·) and r(·) are respectively the upper, lower, left and right positions of the minimum bounding rectangles of the bright blocks C_i and C_j in the pair; TH_w1 and TH_w2 are the lower and upper limits of the width threshold of the minimum bounding rectangle of the bright block pair; and TH_Ratio1 and TH_Ratio2 are the lower and upper limits of the aspect ratio of the minimum bounding rectangle of the bright block pair.
In addition, the two tail lamps of the same vehicle generally have high symmetry, and the symmetry of two candidate bright blocks can be measured with a cross-correlation function: the cross-correlation value of the two candidate bright blocks is computed, and if it satisfies a second preset condition, the candidate bright block pair can be regarded as the tail lamps of a vehicle. In an embodiment of the disclosure, the second preset condition may be that the cross-correlation value of the two bright blocks in the candidate bright block pair is greater than a preset threshold.
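The symmetry check can be sketched as a normalized cross-correlation between the left bright block and the horizontally mirrored right bright block. The exact cross-correlation function used in the embodiment is not specified, so this is one plausible formulation; it assumes the caller has cropped both patches to the same shape.

```python
import numpy as np

def symmetry_score(left: np.ndarray, right: np.ndarray) -> float:
    """Normalized cross-correlation of the left patch with the horizontally
    mirrored right patch; returns a value in [-1, 1], where values near 1
    indicate high left/right symmetry."""
    a = left.astype(np.float64).ravel()
    b = np.fliplr(right).astype(np.float64).ravel()  # mirror the right lamp
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0
```

A pair would be accepted when `symmetry_score(...)` exceeds the preset threshold of the second preset condition.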
Referring to fig. 7, in an embodiment of the present disclosure, step S13 includes:
in step S71, a first random time series is obtained according to the minimum bounding rectangle boundary position information of the historical position area and the acquisition time of the image in front of the vehicle where the historical position area is located.
The historical position area of the tail lamps of the first target vehicle can be determined by the boundary positions of its minimum bounding rectangle, namely the two corner points TL (top left) and BR (bottom right). The image in front of the vehicle in which each frame's historical position area is located corresponds to one acquisition time, so the coordinates of TL and BR can each be regarded as a time series in the X and Y directions; that is, the first time series is L = [TL(x), BR(x), TL(y), BR(y)], and the position of the target tail lamp pair is determined by these four coordinates.
In step S72, the positions of the tail lamps of the first target vehicle in the captured current vehicle image are predicted using the AR model according to the first random time series.
For time t, the first random time series is denoted L_t = [TL(x_t), BR(x_t), TL(y_t), BR(y_t)]; for a time series t_1, t_2, ..., t_k, the first random time series is denoted [L_t1, L_t2, ..., L_tk]. Since the motion tendencies of the four coordinate components that determine the historical position areas of the tail lamps of the first target vehicle are not necessarily the same, the first random time series [L_t1, L_t2, ..., L_tk] is decomposed into four one-dimensional time series, and an AR model is adopted to predict the coordinate of each of the four components at the acquisition time of the current image in front of the vehicle, thereby obtaining the position of the tail lamps of the first target vehicle in the acquired current image in front of the vehicle.
Since too high an order of the AR model would increase the computation of the algorithm and degrade the real-time performance of vehicle detection, an order-3 AR model may be selected to predict the position of the tail lamps of the first target vehicle.
In an embodiment of the present disclosure, a position of a tail light of a first target vehicle in a collected 5-frame historical image in front of a vehicle may be set as an initial first random time sequence, and a position area of the tail light of the first target vehicle in the collected current image in front of the vehicle may be predicted by using an AR model according to the first random time sequence, and the position area may be used as a target detection area of the current image in front of the vehicle. And if the tail lamp of the first target vehicle is identified in the target detection area of the image in front of the current frame, updating the initial first random time sequence according to the position information of the minimum circumscribed rectangle frame boundary of the position area of the tail lamp of the first target vehicle in the image in front of the current frame and the acquisition time of the image in front of the current frame, and predicting the position of the tail lamp of the first target vehicle in the image in front of the next frame.
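The per-component order-3 AR prediction described above can be sketched as follows. The least-squares estimation of the AR coefficients is an assumed implementation detail, since the text does not specify how the coefficients are obtained; the function names are illustrative.

```python
import numpy as np

def ar3_predict(series) -> float:
    """Fit an order-3 AR model to the history by least squares and
    predict the next value of the one-dimensional series."""
    p = 3
    x = np.asarray(series, dtype=np.float64)
    # Each row holds the p values preceding time t, newest first.
    rows = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
    coef, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
    return float(np.dot(coef, x[-p:][::-1]))

def predict_box(history):
    """history: one [tl_x, br_x, tl_y, br_y] row per frame. Each coordinate
    component is predicted independently, matching the decomposition into
    four one-dimensional time series."""
    h = np.asarray(history, dtype=np.float64)
    return [ar3_predict(h[:, k]) for k in range(4)]
```

With the 5 historical frames used as the initial sequence, `predict_box` yields the predicted target detection area for the current frame; when the tail lamps are found there, the newest box is appended to the history and the oldest may be dropped.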
Referring to fig. 8, in an embodiment of the present disclosure, step S14 includes:
in step S73, if no tail lamp of the first target vehicle is identified in the target detection area over a preset number of current images in front of the vehicle, the first random time series is deleted.
During night driving, some interference factors on the road ahead may cause the tail lamps of the first target vehicle detected in a previous frame to be temporarily lost in the current image in front of the vehicle. Therefore, in the present disclosure, if the tail lamps of the first target vehicle cannot be identified in any of a preset number of images in front of the vehicle, the tail lamps are considered to have moved out of the image in front of the vehicle; at this time, the first random time series corresponding to the tail lamps of the first target vehicle may be deleted, and the tail lamps of the first target vehicle are no longer tracked and identified.
Referring to fig. 9, in another embodiment of the present disclosure, step S14 further includes:
in step S91, the target detection region is expanded according to a preset rule.
In step S92, the tail lights of the first target vehicle are recognized in the expanded target detection area.
In an embodiment of the present disclosure, the preset rule may be to expand the predicted target detection area by 5 pixels and identify the tail lamps of the first target vehicle in the expanded target detection area, so as to compensate for minor prediction errors, improve the accuracy and reliability of detection, and reduce the missed-detection and false-detection rates.
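The expansion rule can be sketched as follows, using the 5-pixel margin from the embodiment; the clamping to the image bounds is an assumed detail not stated in the text.

```python
def expand_region(box, img_w: int, img_h: int, margin: int = 5):
    """Expand a predicted (x0, y0, x1, y1) target detection area by
    `margin` pixels on every side, clamped to the image bounds."""
    x0, y0, x1, y1 = box
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(img_w - 1, x1 + margin), min(img_h - 1, y1 + margin))
```

Tail lamp identification is then run inside the expanded box instead of the raw prediction.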
Fig. 10 is a flowchart illustrating a vehicle tail light identification method according to another exemplary embodiment. Referring to fig. 10, the method may include:
in step S101, a tail lamp of the first target vehicle is identified based on the collected historical vehicle front image.
In step S102, a history position area of the tail lamps of the first target vehicle in the history preceding vehicle image is determined according to the result of the recognition.
In step S103, the position of the tail lamp of the first target vehicle in the collected current image of the front of the vehicle is predicted by using a preset model according to the historical position area, so as to obtain a target detection area.
In step S104, in the target detection area of the current vehicle front image, the tail lights of the first target vehicle are identified to obtain current information of the tail lights of the first target vehicle.
In step S105, at preset intervals, tail light recognition is performed in a region other than the target detection region of the captured current image of the vehicle front.
Since another vehicle may enter the road ahead while the vehicle is traveling, tail lamp identification needs to be performed on the area outside the target detection area of the image in front of the vehicle at preset time intervals to determine whether another vehicle has entered. In an embodiment of the present disclosure, the preset time may be the time taken to acquire 20 frames of images in front of the vehicle; that is, tail lamp identification is performed on the area outside the target detection area of the currently acquired image in front of the vehicle once every 20 frames.
In step S106, if the tail lamp of the second target vehicle is recognized in the region other than the target detection region of the captured current image in front of the vehicle, the position region of the tail lamp of the second target vehicle in the image in front of the vehicle is determined based on the recognition result.
In step S107, a second random time series, obtained according to the minimum bounding rectangle boundary positions of the position area of the tail lamps of the second target vehicle in the image in front of the vehicle and the acquisition time of that image, is added to the first random time series.
If the tail lamps of a second target vehicle are identified in the area outside the target detection area of the current image in front of the vehicle, the position area of the tail lamps of the second target vehicle in the image in front of the vehicle (namely the coordinates of the two corner points TL, top left, and BR, bottom right, of the minimum bounding rectangle of the tail lamps of the second target vehicle) and the acquisition time of the current image in front of the vehicle are obtained to form a second random time series, which is added to the first random time series. The position area of the tail lamps of the second target vehicle in the next frame of the image in front of the vehicle can then be predicted, and the tail lamps of the second target vehicle can be tracked and identified.
Referring to fig. 11, in an embodiment of the present disclosure, global detection is first performed on the acquired image in front of the vehicle (that is, tail lamp identification is performed on a preset region of interest of the acquired image). When the tail lamps of a target vehicle (the first target vehicle) appear in the image, the acquired images in front of the vehicle containing the first target vehicle are used as historical images, and the tail lamps of the first target vehicle are identified in each of them to obtain the historical position areas of the tail lamps. A first random time series is obtained from the historical position areas, and an AR model is used, according to this series, to predict the travel area of the tail lamps of the first target vehicle, yielding a target detection area in which the tail lamps of the first target vehicle are detected and identified. When a new target vehicle (for example, a second target vehicle) appears in the acquired image in front of the vehicle, its tail lamps are identified, a second random time series is obtained from the tail lamp area of that vehicle, and the second random time series is added to the first random time series, so that the tail lamps of the first and second target vehicles are tracked and identified simultaneously. If no vehicle tail lamps are detected in the acquired image in front of the vehicle (that is, all target vehicles have disappeared), global detection is performed again.
Referring to fig. 12, in the embodiment of the present disclosure, for the acquired image in front of the vehicle, the AR model is used to predict the positions of the tail lamps of each preceding target vehicle in real time, and the tail lamps of each target vehicle are rapidly identified within the predicted target detection areas, so that tail lamp information can be obtained rapidly. In some embodiments, the identified tail lamp information may be further processed to obtain information about the condition of the target vehicle, such as the distance to the target vehicle and the state of its tail lamps (brake lights, turn signals, etc.), so that driving operation reminders, safety warning information and the like can be provided to the driver.
According to the technical scheme, after the tail lamps of the first target vehicle are identified in the historical images in front of the vehicle, the position of the tail lamps is predicted in real time by a preset model according to the historical position areas of the tail lamps in those images, and the predicted position area is used as the target detection area in which to identify the tail lamps. This avoids the problems of the related art, in which every area of every acquired frame must be searched to identify vehicle tail lamps, namely heavy computation, low detection efficiency and poor real-time performance, thereby reducing the amount of computation and improving detection efficiency and real-time performance.
Secondly, tail lamp identification is performed in the area outside the target detection area of the acquired current image in front of the vehicle at preset time intervals, and the position area of the second target vehicle in the image of the road ahead is added to the random time series of the preset model to predict the position of the second target vehicle in subsequent images, which reduces the detection time of vehicle tail lamps and lowers the missed-detection rate.
In addition, in the process of identifying the tail lamp of the vehicle, the image is subjected to self-adaptive threshold segmentation by adopting the maximum inter-class variance method to obtain a binary image, so that the self-adaptive capacity of the threshold segmentation can be improved, and the real-time performance and the effectiveness of vehicle detection are improved.
Fig. 13 is a block diagram of a vehicle tail light recognition device according to an embodiment of the present disclosure. Referring to fig. 13, the apparatus 100 includes: a first identification module 131, a first determination module 132, a prediction module 133, and a second identification module 134;
the first identification module 131 is configured to identify a tail lamp of a first target vehicle according to the collected historical vehicle front image;
a first determining module 132, configured to determine, according to a result of the recognition, a historical position area of a tail lamp of the first target vehicle in the historical vehicle front image;
the prediction module 133 is configured to predict, according to the historical position region, a position of a tail lamp of the first target vehicle in the collected current vehicle front image by using a preset model, so as to obtain a target detection region;
the second identifying module 134 is configured to identify the tail lamp of the first target vehicle in the target detection area of the current vehicle front image, so as to obtain current information of the tail lamp of the first target vehicle.
In an embodiment of the present disclosure, the prediction module 133 includes:
the obtaining submodule 1331 is configured to obtain a first random time sequence according to the minimum bounding rectangle position information of the historical position region and the acquisition time of the vehicle front image in which the historical position region is located;
a prediction submodule 1332, configured to predict, according to the first random time sequence, a position of a tail lamp of the first target vehicle in the acquired current image of the vehicle front by using the AR model.
In an embodiment of the present disclosure, the apparatus 100 further includes:
the third identification module 135 is configured to perform tail lamp identification in an area outside the target detection area of the acquired current vehicle front image at preset intervals;
a second determining module 136, configured to determine, according to an identification result, a position area of a tail lamp of a second target vehicle in the image before the vehicle, if the tail lamp of the second target vehicle is identified in an area outside the target detection area of the acquired image before the vehicle;
and an adding module 137, configured to add a second random time sequence, which is obtained according to the position of the minimum circumscribed rectangular frame boundary of the position region of the tail light of the second target vehicle in the image in front of the vehicle and the acquisition time of the image in front of the vehicle, to the first random time sequence.
In an embodiment of the present disclosure, the second identification module 134 includes:
a deleting submodule 1341, configured to delete the first random time sequence if no tail lamp of the first target vehicle is identified in the target detection region in the collected current vehicle front images with the preset number of frames.
In another embodiment of the present disclosure, the second identification module 134 further includes:
an expansion submodule 1342, configured to expand the target detection area according to a preset rule;
an identification submodule 1343, configured to identify a tail light of the first target vehicle in the expanded target detection area.
In an embodiment of the present disclosure, the first identifying module 131 includes:
the screening submodule 1311 is configured to screen out, in a preset region of interest of the historical vehicle front image, a region where color features meet a first preset condition, and determine a target region;
a segmentation submodule 1312, configured to perform adaptive threshold segmentation on the image of the target region by using a maximum inter-class variance method to obtain a binarized image;
a first extraction sub-module 1313, configured to extract a candidate bright block according to first feature information of the bright block in the binarized image;
and a second extraction submodule 1314, configured to extract, according to second feature information of a bright block pair obtained by pairwise combination of the multiple candidate bright blocks, a target bright block pair as a tail light of the first target vehicle.
In an embodiment of the present disclosure, the first feature information includes: the area, the height of the minimum circumscribed rectangular frame and the width of the minimum circumscribed rectangular frame;
a first extraction sub-module 1313 for:
communicating and labeling the binary image to obtain a bright block containing a label;
and extracting, from the bright blocks, the bright blocks whose areas are not smaller than a preset area threshold and whose minimum bounding rectangle heights and widths meet a first preset range, to serve as the candidate bright blocks.
In an embodiment of the present disclosure, the second feature information includes: the area, the center point and the symmetry of a bright block in the bright blocks and the minimum circumscribed rectangular frame of the bright block pair;
a second extraction submodule 1314, configured to extract, from the pair of bright blocks, a pair of bright blocks whose areas and center points satisfy a second preset range as a candidate pair of bright blocks;
and extracting the target bright block pair of which the minimum circumscribed rectangle frame meets a third preset range and the symmetry meets a second preset condition from the candidate bright block pair to be used as the tail lamp of the first target vehicle.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 14 is a block diagram of an apparatus 1400 for a vehicle tail light identification method provided by the present disclosure. For example, the apparatus 1400 may be provided as a vehicle. Referring to fig. 14, the apparatus 1400 includes: the system comprises an electronic control unit 1401, an image acquisition device 1402, a processor 1403, a memory 1404, an automobile brake system 1405, a steering wheel angle sensor 1406, a wheel speed sensor 1407, an engine system 1408, a CAN bus 1409 and the like.
The image capturing device 1402 may be configured to perform the above-described real-time capturing of the image in front of the vehicle. The processor 1403 may be used to perform the vehicle tail lamp identification method described above. The memory 1404 may be used to store the images in front of the vehicle captured by the image capturing device 1402 for subsequent analysis.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (13)

1. A vehicle tail light identification method, characterized in that the method comprises: identifying tail lamps of a first target vehicle according to the collected historical vehicle front images;
determining a history position area of tail lamps of the first target vehicle in the history front-of-vehicle image according to the identification result;
acquiring a first random time sequence according to the minimum circumscribed rectangle frame boundary position information of the historical position area and the acquisition time of the image in front of the vehicle in which the historical position area is located;
decomposing the first random time sequence into four one-dimensional time sequences, and respectively predicting coordinates of four coordinate components of the historical position area at the current acquisition time of the image in front of the vehicle by adopting an AR model to obtain a target detection area;
identifying a tail lamp of the first target vehicle in a target detection area of the current front image to obtain current information of the tail lamp of the first target vehicle;
the step of identifying the tail lamp of the first target vehicle according to the collected historical vehicle front image comprises the following steps:
screening out an area with color characteristics meeting first preset conditions in a preset area of interest of the historical vehicle front image, and determining a target area;
carrying out self-adaptive threshold segmentation on the image of the target region by using a maximum inter-class variance method to obtain a binary image;
screening out interference bright blocks which do not accord with the shape characteristics of the tail lamp of the vehicle according to the first characteristic information of the bright blocks in the binary image to extract candidate bright blocks;
and extracting a target bright block pair as a tail lamp of the first target vehicle according to second characteristic information of the bright block pair obtained by pairwise combination of the candidate bright blocks.
2. The method of claim 1, further comprising:
carrying out tail lamp identification in the region outside the target detection region of the collected current image in front of the vehicle at preset time intervals;
if the tail lamp of the second target vehicle is identified in the area outside the target detection area of the acquired current image in front of the vehicle, determining the position area of the tail lamp of the second target vehicle in the image in front of the vehicle according to the identification result;
and adding a second random time sequence acquired according to the minimum circumscribed rectangular frame boundary position of the position area of the tail lamp of the second target vehicle in the image in front of the vehicle and the acquisition time of the image in front of the vehicle into the first random time sequence.
3. The method of claim 1, wherein the step of identifying tail lights of the first target vehicle within the target detection area further comprises:
and if the tail lamp of the first target vehicle is not identified in the target detection area in the collected current front images with the preset frame number, deleting the first random time sequence.
4. The method of claim 1, wherein the step of identifying the tail lamp of the first target vehicle within the target detection region comprises:
expanding the target detection region according to a preset rule;
and identifying the tail lamp of the first target vehicle in the expanded target detection region.
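Claim 4 leaves the "preset rule" for expanding the detection region unspecified. One plausible rule, shown purely as an assumed example, is to grow the predicted bounding box by a fixed margin on every side, clipped to the image boundaries:

```python
def expand_box(box, margin, img_w, img_h):
    """Expand a (left, top, right, bottom) detection box by a fixed
    per-side margin, clipped to the image extent.  The fixed margin
    stands in for the patent's unspecified 'preset rule'."""
    left, top, right, bottom = box
    return (max(0, left - margin), max(0, top - margin),
            min(img_w, right + margin), min(img_h, bottom + margin))
```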
5. The method of claim 1, wherein the first characteristic information comprises: the area of a bright block, and the height and width of its minimum bounding rectangle;
the extracting of the candidate bright blocks according to the first characteristic information of the bright blocks in the binarized image comprises:
performing connected-component labeling on the binarized image to obtain labeled bright blocks;
and extracting, from the labeled bright blocks, bright blocks whose area is smaller than a preset area threshold and whose minimum-bounding-rectangle height and width fall within a first preset range, as the candidate bright blocks.
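The two steps of claim 5 can be sketched with a self-contained 4-connected flood-fill labeler followed by an area/bounding-box filter. The function names and the particular thresholds are illustrative assumptions; a production system would more likely use a library labeler such as `scipy.ndimage.label`:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling of a boolean mask via BFS.
    Returns (label image, number of components)."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue
        count += 1
        labels[y, x] = count
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count

def filter_blobs(labels, n, max_area, size_range):
    """Keep labels whose area is below max_area and whose bounding-box
    height and width both fall within size_range = (lo, hi)."""
    lo, hi = size_range
    keep = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        if len(ys) < max_area and lo <= h <= hi and lo <= w <= hi:
            keep.append(lab)
    return keep
```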
6. The method of claim 1, wherein the second characteristic information comprises: the areas, center points, and symmetry of the bright blocks in a bright-block pair, and the minimum bounding rectangle of the bright-block pair;
the step of extracting a target bright-block pair as the tail lamps of the first target vehicle according to the second characteristic information of the bright-block pairs obtained by pairwise combination of the candidate bright blocks comprises:
extracting, from the bright-block pairs, pairs whose areas and center points meet a second preset range, as candidate bright-block pairs;
and extracting, from the candidate bright-block pairs, the target bright-block pair whose minimum bounding rectangle meets a third preset range and whose symmetry meets a second preset condition, as the tail lamps of the first target vehicle.
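The pairwise combination step of claim 6 can be sketched as an exhaustive pairing of candidate blobs with an area-similarity and vertical-alignment gate. The specific thresholds are assumed stand-ins for the patent's "second preset range", and the full symmetry and bounding-box checks of the claim are omitted for brevity:

```python
def pair_taillights(blobs, area_ratio_max=2.0, dy_max=5):
    """blobs: list of (area, cx, cy) tuples for candidate bright blocks
    (area > 0).  Return index pairs that plausibly form a left/right
    tail-lamp pair: similar area and nearly equal vertical center."""
    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            a1, _, y1 = blobs[i]
            a2, _, y2 = blobs[j]
            ratio = max(a1, a2) / min(a1, a2)
            if ratio <= area_ratio_max and abs(y1 - y2) <= dy_max:
                pairs.append((i, j))
    return pairs
```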
7. A vehicle tail lamp identification device, characterized in that the device comprises:
a first identification module, configured to identify a tail lamp of a first target vehicle according to a captured historical vehicle-front image;
a first determination module, configured to determine, according to the identification result, a historical position region of the tail lamp of the first target vehicle in the historical vehicle-front image;
a prediction module, configured to: acquire a first random time sequence according to the minimum-bounding-rectangle boundary position information of the historical position region and the capture time of the vehicle-front image in which the historical position region is located; decompose the first random time sequence into four one-dimensional time sequences; and predict, with an AR model for each of the four coordinate components, the coordinates of the historical position region at the capture time of the current vehicle-front image, so as to obtain a target detection region;
a second identification module, configured to identify the tail lamp of the first target vehicle in the target detection region of the current vehicle-front image, so as to obtain current information of the tail lamp of the first target vehicle;
wherein the first identification module comprises:
a screening submodule, configured to screen, within a preset region of interest of the historical vehicle-front image, areas whose color characteristics meet a first preset condition, so as to determine a target region;
a segmentation submodule, configured to perform adaptive threshold segmentation on the image of the target region by using the maximum between-class variance (Otsu) method to obtain a binarized image;
a first extraction submodule, configured to filter out, according to first characteristic information of the bright blocks in the binarized image, interfering bright blocks that do not conform to the shape characteristics of vehicle tail lamps, so as to extract candidate bright blocks;
and a second extraction submodule, configured to extract a target bright-block pair as the tail lamps of the first target vehicle according to second characteristic information of the bright-block pairs obtained by pairwise combination of the candidate bright blocks.
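The prediction module of claim 7 decomposes the bounding-box time sequence into four one-dimensional series and forecasts each with an AR model. A minimal sketch follows; the AR order and the least-squares fitting method are assumptions, as the patent does not specify how the AR coefficients are estimated:

```python
import numpy as np

def ar_predict(series, order=3):
    """Fit an AR(order) model by least squares and return the
    one-step-ahead prediction for the next value."""
    s = np.asarray(series, dtype=float)
    # Each row of X holds `order` consecutive past values; y is the
    # value that followed them.
    X = np.column_stack([s[i:len(s) - order + i] for i in range(order)])
    y = s[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(s[-order:] @ coef)

def predict_box(history, order=3):
    """history: list of (left, top, right, bottom) boxes over time.
    Predict the next box by running an independent AR model on each
    of the four coordinate series, as in the patent's prediction module."""
    cols = np.array(history, dtype=float).T   # four 1-D time sequences
    return tuple(ar_predict(c, order) for c in cols)
```

The four predicted coordinates together delimit the target detection region in which the second identification module then searches.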
8. The apparatus of claim 7, further comprising:
a third identification module, configured to perform tail lamp identification, at preset time intervals, in the region outside the target detection region of the captured current vehicle-front image;
a second determination module, configured to determine, according to the identification result, the position region of a tail lamp of a second target vehicle in the vehicle-front image, if the tail lamp of the second target vehicle is identified in the region outside the target detection region of the captured current vehicle-front image;
and an adding module, configured to add, to the first random time sequence, a second random time sequence acquired according to the minimum-bounding-rectangle boundary position of the position region of the tail lamp of the second target vehicle in the vehicle-front image and the capture time of that vehicle-front image.
9. The apparatus of claim 7, wherein the second identification module comprises:
a deletion submodule, configured to delete the first random time sequence if the tail lamp of the first target vehicle is not identified in the target detection region in a preset number of consecutive captured current vehicle-front images.
10. The apparatus of claim 7, wherein the second identification module further comprises:
an expansion submodule, configured to expand the target detection region according to a preset rule;
and an identification submodule, configured to identify the tail lamp of the first target vehicle in the expanded target detection region.
11. The apparatus of claim 7, wherein the first characteristic information comprises: the area of a bright block, and the height and width of its minimum bounding rectangle;
the first extraction submodule is configured to:
perform connected-component labeling on the binarized image to obtain labeled bright blocks;
and extract, from the labeled bright blocks, bright blocks whose area is smaller than a preset area threshold and whose minimum-bounding-rectangle height and width fall within a first preset range, as the candidate bright blocks.
12. The apparatus of claim 7, wherein the second characteristic information comprises: the areas, center points, and symmetry of the bright blocks in a bright-block pair, and the minimum bounding rectangle of the bright-block pair;
the second extraction submodule is configured to:
extract, from the bright-block pairs, pairs whose areas and center points meet a second preset range, as candidate bright-block pairs;
and extract, from the candidate bright-block pairs, the target bright-block pair whose minimum bounding rectangle meets a third preset range and whose symmetry meets a second preset condition, as the tail lamps of the first target vehicle.
13. A vehicle, characterized by comprising the vehicle tail lamp identification device according to any one of claims 7 to 12.
CN201610978015.4A 2016-10-31 2016-10-31 Vehicle tail lamp identification method and device and vehicle Active CN108021856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610978015.4A CN108021856B (en) 2016-10-31 2016-10-31 Vehicle tail lamp identification method and device and vehicle

Publications (2)

Publication Number Publication Date
CN108021856A CN108021856A (en) 2018-05-11
CN108021856B true CN108021856B (en) 2020-09-15

Family

ID=62083703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610978015.4A Active CN108021856B (en) 2016-10-31 2016-10-31 Vehicle tail lamp identification method and device and vehicle

Country Status (1)

Country Link
CN (1) CN108021856B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764102A (en) * 2018-05-22 2018-11-06 哈尔滨理工大学 A kind of characteristic recognition method of the front vehicles image based on color extraction algorithm
CN111062231B (en) * 2018-10-17 2023-03-24 财团法人车辆研究测试中心 Vehicle detection method, night vehicle detection method based on light intensity dynamic and system thereof
CN111191482B (en) * 2018-11-14 2023-04-28 杭州海康威视数字技术股份有限公司 Brake lamp identification method and device and electronic equipment
CN109785385B (en) * 2019-01-22 2021-01-29 中国科学院自动化研究所 Visual target tracking method and system
CN110276742B (en) * 2019-05-07 2023-10-10 平安科技(深圳)有限公司 Train tail lamp monitoring method, device, terminal and storage medium
CN110188645B (en) * 2019-05-22 2022-10-21 阿波罗智联(北京)科技有限公司 Face detection method and device for vehicle-mounted scene, vehicle and storage medium
CN110696807B (en) * 2019-09-29 2021-04-09 一汽解放青岛汽车有限公司 Engine shutdown control method under traffic jam condition, vehicle and storage medium
CN110758394B (en) * 2019-09-29 2020-10-16 一汽解放青岛汽车有限公司 Engine starting control method and system under traffic jam working condition, vehicle and storage medium
CN110673609A (en) * 2019-10-10 2020-01-10 北京小马慧行科技有限公司 Vehicle running control method, device and system
CN111652143B (en) * 2020-06-03 2023-09-29 浙江大华技术股份有限公司 Vehicle detection method and device and computer storage medium
CN112164090A (en) * 2020-09-04 2021-01-01 杭州海康威视系统技术有限公司 Data processing method and device, electronic equipment and machine-readable storage medium
CN116994514B (en) * 2023-09-26 2024-01-02 永林电子股份有限公司 Image recognition-based vehicle display brightness control method, device and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727748A (en) * 2009-11-30 2010-06-09 北京中星微电子有限公司 Method, system and equipment for monitoring vehicles based on vehicle taillight detection
CN102867416A (en) * 2012-09-13 2013-01-09 中国科学院自动化研究所 Vehicle part feature-based vehicle detection and tracking method
CN103150898A (en) * 2013-01-25 2013-06-12 大唐移动通信设备有限公司 Method and device for detection of vehicle at night and method and device for tracking of vehicle at night
CN103778786A (en) * 2013-12-17 2014-05-07 东莞中国科学院云计算产业技术创新与育成中心 Traffic violation detection method based on significant vehicle part model
CN103824308A (en) * 2014-02-11 2014-05-28 广州中国科学院先进技术研究所 Image processing method in wireless multimedia sensor network
CN103984917A (en) * 2014-04-10 2014-08-13 杭州电子科技大学 Multi-feature nighttime vehicle detection method based on machine vision
CN103984950A (en) * 2014-04-22 2014-08-13 北京联合大学 Moving vehicle stop lamp state recognition method adaptable to day detection
CN104392629A (en) * 2014-11-07 2015-03-04 深圳市中天安驰有限责任公司 Method and device for detecting car distance
CN104866838A (en) * 2015-06-02 2015-08-26 南京航空航天大学 Vision-based automatic detection method for front vehicle
CN105718893A (en) * 2016-01-22 2016-06-29 江苏大学 Car tail light pair detecting method for night environment

Similar Documents

Publication Publication Date Title
CN108021856B (en) Vehicle tail lamp identification method and device and vehicle
US10984509B2 (en) Image processing apparatus, imaging device, moving body device control system, image information processing method, and program product
Wu et al. Lane-mark extraction for automobiles under complex conditions
Kuo et al. Vision-based vehicle detection for a driver assistance system
US10380434B2 (en) Vehicle detection system and method
US9836657B2 (en) System and method for periodic lane marker identification and tracking
Chen et al. Nighttime vehicle detection for driver assistance and autonomous vehicles
CN107891808B (en) Driving reminding method and device and vehicle
US20200250440A1 (en) System and Method of Determining a Curve
US20130286205A1 (en) Approaching object detection device and method for detecting approaching objects
US20050100192A1 (en) Moving object detection using low illumination depth capable computer vision
CN107886034B (en) Driving reminding method and device and vehicle
US20100110193A1 (en) Lane recognition device, vehicle, lane recognition method, and lane recognition program
Kyo et al. A robust vehicle detecting and tracking system for wet weather conditions using the IMAP-VISION image processing board
Wu et al. A real-time embedded blind spot safety assistance system
Niksaz Automatic traffic estimation using image processing
Chen et al. Front vehicle detection and distance estimation using single-lens video camera
Kuo et al. Vision-based vehicle detection in the nighttime
Vajak et al. A rethinking of real-time computer vision-based lane detection
Fan et al. Robust lane detection and tracking based on machine vision
Moizumi et al. Traffic light detection considering color saturation using in-vehicle stereo camera
Chen et al. Vision-based nighttime vehicle detection and range estimation for driver assistance
CN112669615A (en) Parking space detection method and system based on camera
Munajat et al. Vehicle detection and tracking based on corner and lines adjacent detection features
Dai et al. A driving assistance system with vision based vehicle detection techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant