CN108734105B - Lane line detection method, lane line detection device, storage medium, and electronic apparatus - Google Patents


Info

Publication number
CN108734105B
CN108734105B (application CN201810361951.XA)
Authority
CN
China
Prior art keywords
image
beacon
algorithm
edge
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810361951.XA
Other languages
Chinese (zh)
Other versions
CN108734105A (en)
Inventor
邹博
唐闯
周磊
李丹丹
郭溪溪
孔令美
赵洋洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201810361951.XA priority Critical patent/CN108734105B/en
Publication of CN108734105A publication Critical patent/CN108734105A/en
Application granted granted Critical
Publication of CN108734105B publication Critical patent/CN108734105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The present disclosure relates to a lane line detection method, apparatus, storage medium, and electronic device in the field of image recognition. The method comprises: identifying a first image collected by a camera device using a first image recognition algorithm to determine whether a vehicle is present in the first image, wherein the first image is any frame of image, collected by the camera device, that contains image information of a target road; when no vehicle is present in the first image, identifying the first image using a second image recognition algorithm to determine the position of an indicator in the first image; determining a target area corresponding to the indicator in the first image according to the position of the indicator; and identifying the target area using a third image recognition algorithm to obtain a first position, on the first image, of a lane line in the target area. The method can effectively identify the position of a lane line from a road-monitoring viewing angle.

Description

Lane line detection method, lane line detection device, storage medium, and electronic apparatus
Technical Field
The present disclosure relates to the field of image recognition, and in particular, to a lane line detection method, apparatus, storage medium, and electronic device.
Background
At present, with the growing number of automobiles on the road, the pressure on road traffic is becoming heavier and heavier. Lane lines are the most common road markings (including stop lines and lane dividing lines); they effectively channel traffic flow, improve road capacity, and reduce traffic accidents. The identification of lane markings is therefore an important component of intelligent detection of vehicle violations. However, conventional lane line identification is performed from the viewpoint of vehicles traveling on the road, and cannot effectively assist a traffic monitoring system in determining vehicle violations.
Disclosure of Invention
The present disclosure aims to provide a lane line detection method, a lane line detection device, a storage medium, and an electronic device, which solve the problem that lane lines cannot be identified from a road-monitoring viewing angle.
In order to achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided a lane line detection method, the method including:
identifying a first image acquired by a camera device using a first image recognition algorithm to determine whether a vehicle is present in the first image, wherein the first image is any frame of image, acquired by the camera device, that contains image information of a target road;
when no vehicle is present in the first image, identifying the first image using a second image recognition algorithm to determine the position of an indicator in the first image;
determining a target area corresponding to the indicator in the first image according to the position of the indicator;
and identifying the target area using a third image recognition algorithm to acquire a first position, on the first image, of a lane line in the target area.
Optionally, the first image recognition algorithm is the Single Shot MultiBox Detector (SSD) algorithm, and identifying the first image acquired by the camera device using the first image recognition algorithm to determine whether a vehicle is present in the first image comprises:
taking a preset vehicle model and the first image as inputs of the SSD algorithm, and determining whether a vehicle is present in the first image according to the output of the SSD algorithm;
wherein the vehicle model is an SSD model trained on a preset vehicle sample set.
Optionally, the second image recognition algorithm is the SSD algorithm, and identifying the first image using the second image recognition algorithm when no vehicle is present in the first image, so as to determine the position of the indicator in the first image, comprises:
dividing the first image into a plurality of detection boxes according to a preset size, wherein the preset size comprises one or more sizes;
taking a preset indicator model and each detection box as inputs of the SSD algorithm to obtain the recognition result corresponding to each detection box output by the SSD algorithm;
merging the recognition results of the plurality of detection boxes into the detection result of the first image;
and determining the position of the indicator in the first image according to the detection result of the first image;
wherein the indicator model is an SSD model trained on a preset indicator sample set, and the indicator sample set comprises one or more indicator samples.
Optionally, identifying the first image using the second image recognition algorithm when no vehicle is present in the first image, so as to determine the position of the indicator in the first image, further comprises:
determining the number of indicators in the first image according to the detection result of the first image;
when N indicators are identified, dividing the first image into N recognition areas, each containing one of the N indicators, wherein N is a positive integer;
taking a preset arrow model and the N recognition areas as inputs of the SSD algorithm, and determining the number of arrows contained in each recognition area according to the output of the SSD algorithm, wherein the arrow model is an SSD model trained on a preset arrow sample set;
and correcting the detection result of the first image according to the number of arrows contained in each recognition area, so as to determine the types of the N indicators.
Optionally, determining the target area corresponding to the indicator in the first image according to the position of the indicator comprises:
taking a first edge of a first indicator as the lower edge of the target area, and taking the position obtained by translating upward, by a preset distance, the first edge of a pedestrian crossing in the first image or the second edge of the first indicator, as the upper edge of the target area; wherein the first indicator is any indicator on the first image, the first edge of the first indicator is the boundary line at the end of the indicator without the arrow, the first edge of the pedestrian crossing is its edge closest to the first indicator, and the second edge of the first indicator is the boundary line, perpendicular to the indicator, at the end of the indicator with the arrow;
when indicators are present on both the left and right sides of the first indicator, taking the two lines that pass through the arrow center points of those two indicators and are perpendicular to the upper edge as the left and right edges of the target area, respectively;
when a second indicator is present on the left side of the first indicator and no indicator is present on its right side, taking the line that passes through the arrow center point of the second indicator and is perpendicular to the upper edge as the left edge of the target area, and taking the line symmetric to the left edge about the arrow center point of the first indicator as the right edge;
when a third indicator is present on the right side of the first indicator and no indicator is present on its left side, taking the line that passes through the arrow center point of the third indicator and is perpendicular to the upper edge as the right edge of the target area, and taking the line symmetric to the right edge about the arrow center point of the first indicator as the left edge;
and taking the rectangular area enclosed by the upper, lower, left, and right edges as the target area corresponding to the first indicator.
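The left/right edge construction above can be sketched as a small helper. This is an illustrative reading of the claim, not the patent's code: it assumes each indicator is reduced to the x-coordinate of its arrow center point, and that the upper and lower edges have already been fixed.

```python
def target_region_x(arrow_xs, i):
    """Left/right x-bounds of the target area for indicator i,
    following the three cases above: neighbours on both sides,
    a left neighbour only, or a right neighbour only.
    arrow_xs: x-coordinates of the arrow center points, left to right."""
    cx = arrow_xs[i]
    left_nb = arrow_xs[i - 1] if i > 0 else None
    right_nb = arrow_xs[i + 1] if i + 1 < len(arrow_xs) else None
    if left_nb is not None and right_nb is not None:
        return left_nb, right_nb
    if left_nb is not None:   # mirror the left edge about the arrow center
        return left_nb, 2 * cx - left_nb
    if right_nb is not None:  # mirror the right edge about the arrow center
        return 2 * cx - right_nb, right_nb
    raise ValueError("no neighbouring indicator to anchor the edges")
```

For example, for arrow centers at x = 100, 300, 500, the middle indicator's target area spans x ∈ [100, 500], while an indicator with only a right neighbour mirrors that neighbour about its own arrow center.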
Optionally, the third image recognition algorithm is the Supervised Descent Method (SDM), and identifying the target area using the third image recognition algorithm to obtain the first position, on the first image, of the lane line in the target area comprises:
taking the target area, a lane line model, and the first image as inputs of the SDM algorithm, so as to determine the first position of the lane line on the first image according to the output of the SDM algorithm;
wherein the lane line model is a learning matrix trained on a preset lane line sample set.
Optionally, the method further comprises:
identifying the target area using a Hough transform algorithm to obtain a second position of the lane line on the first image;
judging whether the first position and the second position are correct according to the grayscale information of the first image and the position of the indicator;
when at least one of the first position and the second position is incorrect, acquiring a third position of the lane line on the first image using a watershed segmentation method, according to the first position and the second position;
and taking, among the first position, the second position, and the third position, the position with the maximum mean grayscale value as the actual position of the lane line to be output.
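The final selection step — keeping the candidate position with the highest mean grayscale value — can be sketched as follows. This is a hypothetical helper, assuming each candidate position is a set of (row, col) pixel coordinates and that lane paint is brighter than the surrounding road surface:

```python
def pick_lane_position(gray, candidates):
    """Among candidate lane-line positions, return the one whose
    pixels have the highest mean grayscale value in `gray`
    (a 2-D list of intensities).
    candidates: list of lists of (row, col) coordinate pairs."""
    def mean_gray(coords):
        return sum(gray[r][c] for r, c in coords) / len(coords)
    return max(candidates, key=mean_gray)
```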
According to a second aspect of the embodiments of the present disclosure, there is provided a lane line detection apparatus, the apparatus comprising:
a first recognition module, configured to identify a first image acquired by a camera device using a first image recognition algorithm to determine whether a vehicle is present in the first image, wherein the first image is any frame of image, acquired by the camera device, that contains image information of a target road;
a second recognition module, configured to identify the first image using a second image recognition algorithm when no vehicle is present in the first image, so as to determine the position of an indicator in the first image;
an area determining module, configured to determine a target area corresponding to the indicator in the first image according to the position of the indicator;
and a third recognition module, configured to identify the target area using a third image recognition algorithm, so as to acquire a first position, on the first image, of a lane line in the target area.
Optionally, the first image recognition algorithm is the Single Shot MultiBox Detector (SSD) algorithm, and the first recognition module is configured to:
take a preset vehicle model and the first image as inputs of the SSD algorithm, and determine whether a vehicle is present in the first image according to the output of the SSD algorithm;
wherein the vehicle model is an SSD model trained on a preset vehicle sample set.
Optionally, the second image recognition algorithm is the SSD algorithm, and the second recognition module comprises:
a segmentation submodule, configured to divide the first image into a plurality of detection boxes according to a preset size, wherein the preset size comprises one or more sizes;
a first recognition submodule, configured to take a preset indicator model and each detection box as inputs of the SSD algorithm, so as to obtain the recognition result corresponding to each detection box output by the SSD algorithm;
a merging submodule, configured to merge the recognition results of the plurality of detection boxes into the detection result of the first image;
and a determining submodule, configured to determine the position of the indicator in the first image according to the detection result of the first image;
wherein the indicator model is an SSD model trained on a preset indicator sample set, and the indicator sample set comprises one or more indicator samples.
Optionally, the second recognition module further comprises:
the determining submodule, further configured to determine the number of indicators in the first image according to the detection result of the first image;
a dividing submodule, configured to, when N indicators are identified, divide the first image into N recognition areas, each containing one of the N indicators, wherein N is a positive integer;
a second recognition submodule, configured to take a preset arrow model and the N recognition areas as inputs of the SSD algorithm, and determine the number of arrows contained in each recognition area according to the output of the SSD algorithm, wherein the arrow model is an SSD model trained on a preset arrow sample set;
and a correction submodule, configured to correct the detection result of the first image according to the number of arrows contained in each recognition area, so as to determine the types of the N indicators.
Optionally, the area determining module comprises:
a first determining submodule, configured to take a first edge of a first indicator as the lower edge of the target area, and take the position obtained by translating upward, by a preset distance, the first edge of a pedestrian crossing in the first image or the second edge of the first indicator, as the upper edge of the target area; wherein the first indicator is any indicator on the first image, the first edge of the first indicator is the boundary line at the end of the indicator without the arrow, the first edge of the pedestrian crossing is its edge closest to the first indicator, and the second edge of the first indicator is the boundary line, perpendicular to the indicator, at the end of the indicator with the arrow;
a second determining submodule, configured to, when indicators are present on both the left and right sides of the first indicator, take the two lines that pass through the arrow center points of those two indicators and are perpendicular to the upper edge as the left and right edges of the target area, respectively;
the second determining submodule, further configured to, when a second indicator is present on the left side of the first indicator and no indicator is present on its right side, take the line that passes through the arrow center point of the second indicator and is perpendicular to the upper edge as the left edge of the target area, and take the line symmetric to the left edge about the arrow center point of the first indicator as the right edge;
the second determining submodule, further configured to, when a third indicator is present on the right side of the first indicator and no indicator is present on its left side, take the line that passes through the arrow center point of the third indicator and is perpendicular to the upper edge as the right edge of the target area, and take the line symmetric to the right edge about the arrow center point of the first indicator as the left edge;
and a third determining submodule, configured to take the rectangular area enclosed by the upper, lower, left, and right edges as the target area corresponding to the first indicator.
Optionally, the third image recognition algorithm is the Supervised Descent Method (SDM), and the third recognition module is configured to:
take the target area, a lane line model, and the first image as inputs of the SDM algorithm, so as to determine the first position of the lane line on the first image according to the output of the SDM algorithm;
wherein the lane line model is a learning matrix trained on a preset lane line sample set.
Optionally, the apparatus further comprises:
a fourth recognition module, configured to identify the target area using a Hough transform algorithm to obtain a second position of the lane line on the first image;
a judging module, configured to judge whether the first position and the second position are correct according to the grayscale information of the first image and the position of the indicator;
a position determining module, configured to, when at least one of the first position and the second position is incorrect, acquire a third position of the lane line on the first image using a watershed segmentation method, according to the first position and the second position;
the position determining module, further configured to take, among the first position, the second position, and the third position, the position with the maximum mean grayscale value as the actual position of the lane line to be output.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the steps of the lane line detection method provided by the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a computer-readable storage medium provided in a third aspect of the embodiments of the present disclosure; and
one or more processors to execute the program in the computer-readable storage medium.
According to the above technical solution, a first image recognition algorithm is first used to determine whether a vehicle is present in a first image acquired by the camera device. When the first image contains no vehicle that could interfere with lane line recognition, a second image recognition algorithm is used to identify the position of an indicator in the first image. A region in which a lane line may exist, i.e., a target area, is then determined according to the position of the indicator, and a third image recognition algorithm is finally used to identify the first position, on the first image, of the lane line in the target area. The position of a lane line can thus be effectively identified from a road-monitoring viewing angle, so as to assist a traffic monitoring system in determining vehicle violations.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a lane line detection method according to an exemplary embodiment;
FIG. 2a is a flow chart illustrating another lane line detection method according to an exemplary embodiment;
FIG. 2b is a schematic diagram indicating the positions of indicators in the first image;
FIG. 3 is a flow chart illustrating another lane line detection method according to an exemplary embodiment;
FIG. 4a is a flow chart illustrating another lane line detection method according to an exemplary embodiment;
FIG. 4b is a schematic diagram indicating a target area in the first image corresponding to an indicator;
FIG. 4c is a schematic diagram indicating initial positions of feature points in the first image;
FIG. 4d is a schematic diagram indicating the actual positions of feature points in the first image;
FIG. 5a is a flow chart illustrating another lane line detection method according to an exemplary embodiment;
FIG. 5b is a schematic diagram indicating seed line selection in a watershed segmentation method;
FIG. 6 is a block diagram illustrating a lane line detection apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating another lane line detection apparatus according to an exemplary embodiment;
FIG. 8 is a block diagram illustrating another lane line detection apparatus in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating another lane line detection apparatus in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating another lane line detection apparatus according to an exemplary embodiment;
FIG. 11 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before introducing the lane line detection method, apparatus, storage medium, and electronic device provided by the present disclosure, an application scenario common to the embodiments of the present disclosure is first introduced. The application scenario is an intersection equipped with a camera device. The intersection may include a pedestrian crossing, a plurality of lanes, direction indicators that indicate the driving direction of each lane, and so on. The camera device may be a video camera, a high-speed camera, or similar equipment, and is used to collect images containing image information of the intersection.
Fig. 1 is a flowchart illustrating a lane line detection method according to an exemplary embodiment. As shown in Fig. 1, the method includes:
Step 101: identify a first image acquired by a camera device using a first image recognition algorithm to determine whether a vehicle is present in the first image, wherein the first image is any frame of image, acquired by the camera device, that contains image information of a target road.
For example, the marking information contained on the target road, such as lane lines, direction indicators, and crosswalks, is fixed, whereas vehicles traveling on the target road may occlude this marking information and thereby interfere with lane line identification. Therefore, each frame of image acquired by the camera device that contains image information of the target road is first filtered. The first image (any frame of image collected by the camera device) is identified using a first image recognition algorithm, which may be an image recognition algorithm such as the SSD (Single Shot MultiBox Detector) algorithm, the YOLO (You Only Look Once) algorithm, or the Faster R-CNN (Faster Region-based Convolutional Neural Network) algorithm, and is used to determine whether a vehicle is present in the first image.
Step 102: when no vehicle is present in the first image, identify the first image using a second image recognition algorithm to determine the position of the indicator in the first image.
For example, when the first image contains no vehicle that could interfere with lane line identification, the first image is identified using a second image recognition algorithm, which may likewise be an image recognition algorithm such as the SSD, YOLO, or Faster R-CNN algorithm, to identify the indicators (of which there may be zero, one, or more) in the first image, so as to determine the position of each indicator in the first image.
Step 103: determine a target area corresponding to the indicator in the first image according to the position of the indicator.
For example, on actual roads an indicator is used to indicate the driving direction of a lane, including straight-ahead, left-turn, right-turn, U-turn, or other combined types, and the lane lines generally lie within the area near the indicator: the lane dividing lines lie at a preset distance to the left and right of the indicator, and the stop line lies at a preset distance beyond the arrow end of the indicator. Therefore, the target area containing the lane line can be determined according to the position of the indicator on the first image. For example, the target area may be set as a rectangle of preset size centered on the center of the indicator, or as the circumscribed rectangle of the indicator enlarged by a preset multiple. It should be noted that the number of target areas equals the number of indicators recognized in step 102. That is, when the first image contains zero indicators, the first image may be left unprocessed, or may be flagged so that lane marking is performed manually; when the first image contains one or more indicators, one or more target areas are determined in the first image.
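The "circumscribed rectangle enlarged by a preset multiple" variant can be sketched as follows; the scale factor and the clamping to the image bounds are illustrative assumptions, not values from the patent:

```python
def expand_bbox(x, y, w, h, scale, img_w, img_h):
    """Enlarge an indicator's bounding box (x, y, w, h) about its
    center by `scale`, clamped to the image bounds, yielding a
    target area likely to contain the nearby lane lines."""
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    nx = max(0, cx - nw / 2)
    ny = max(0, cy - nh / 2)
    return nx, ny, min(nw, img_w - nx), min(nh, img_h - ny)
```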
Step 104: identify the target area using a third image recognition algorithm to acquire a first position, on the first image, of the lane line in the target area.
For example, after the target area is determined, the target area is identified using a third image recognition algorithm, which may be the SDM (Supervised Descent Method), the Hough transform, watershed segmentation, or the like, to identify the position of the lane line within the target area; the first position of the lane line on the first image is then determined according to the position of the target area on the first image. It should be noted that there may be one or more target areas, and each target area corresponds to the position of one lane line (which can be understood as a set of coordinates on the first image).
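As a from-scratch illustration of the Hough-transform option (a sketch, not the patent's implementation): every edge pixel (x, y) votes for each line ρ = x·cosθ + y·sinθ that passes through it, and the accumulator peak identifies the dominant straight line — well suited to lane lines, which appear as straight segments in the image:

```python
import math
from collections import Counter

def hough_strongest_line(points, n_theta=180):
    """Vote each edge pixel into a (theta, rho) accumulator and
    return the (theta_degrees, rho) cell with the most votes.
    points: iterable of (x, y) edge-pixel coordinates."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    (t, rho), _ = acc.most_common(1)[0]
    return t, rho
```

A vertical lane line at x = 5 yields a peak at θ = 0°, ρ = 5; a horizontal stop line at y = 7 yields θ = 90°, ρ = 7.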
In summary, in the present disclosure, a first image recognition algorithm is first used to determine whether a vehicle is present in a first image acquired by the camera device. When the first image contains no vehicle that could interfere with lane line recognition, a second image recognition algorithm is used to identify the position of an indicator in the first image; a region in which a lane line may exist, i.e., a target area, is then determined according to the position of the indicator; and a third image recognition algorithm is finally used to identify the first position, on the first image, of the lane line in the target area. The position of a lane line can thus be effectively identified from a road-monitoring viewing angle, so as to assist a traffic monitoring system in determining vehicle violations.
Optionally, the first image recognition algorithm is the SSD algorithm, and step 101 comprises:
taking a preset vehicle model and the first image as inputs of the SSD algorithm, and determining whether a vehicle is present in the first image according to the output of the SSD algorithm, wherein the vehicle model is an SSD model trained on a preset vehicle sample set.
For example, the SSD algorithm is based on a forward-propagating CNN (Convolutional Neural Network): it generates a series of fixed-size default boxes (bounding boxes) together with the probability, i.e., confidence, that each box contains an object instance, and the results are then processed with non-maximum suppression to obtain the final prediction. The SSD algorithm first requires an SSD model to be built from a preset sample set of object instances: on top of a base network (a standard image-classification architecture), convolutional layers of progressively decreasing size are added for prediction at multiple scales. A large number of image samples are first used for training, and the correspondence between object instances and default boxes is established through a target loss function, yielding the SSD model. Images are then recognized using this SSD model.
The target loss function can be expressed by the following formula:
L(x, c, l, g) = (1/N) · ( L_conf(x, c) + α · L_loc(x, l, g) )
wherein L denotes the target loss function; x denotes the category judgment for the object instance in the detection box (taking the value 1 or 0); c denotes the confidence that the detection box contains an object instance; l denotes the region of the prediction box; g denotes the region of the calibration (ground-truth) box; L_conf denotes the class confidence loss function; L_loc denotes the position loss function; N denotes the number of prediction boxes matched to default boxes; and α denotes an adjustment coefficient, with a default value of 1.
To detect whether a vehicle is present in the first image, a vehicle model is first obtained by training on a preset vehicle sample set (a large number of images each containing one or more vehicle samples). The vehicle model and the first image are then used as inputs of the SSD algorithm, whose output comprises the position of the vehicle (coordinates on the first image) and the type of the vehicle (which vehicle type in the vehicle sample set it matches). This can be understood as follows: if the output of the SSD algorithm contains a detected vehicle position and type, it is determined that a vehicle is present in the first image; if not, it is determined that no vehicle is present in the first image.
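The presence check can be sketched as a thin wrapper over any such detector; the tuple format, label string, and confidence threshold below are assumptions for illustration, not the patent's interface:

```python
def vehicle_present(detect, image, min_confidence=0.5):
    """Wrap a generic detector: `detect(image)` is assumed to return
    a list of (label, confidence, bbox) tuples, as an SSD-style model
    would after non-maximum suppression. The frame is usable for
    lane-line extraction only when this returns False."""
    return any(label == "vehicle" and conf >= min_confidence
               for label, conf, _bbox in detect(image))
```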
Fig. 2a is a flowchart illustrating another lane line detection method according to an exemplary embodiment. As shown in Fig. 2a, the second image recognition algorithm is the SSD algorithm, and step 102 comprises:
step 1021, the first image is divided into a plurality of detection frames according to a preset size, and the preset size comprises one or more sizes.
Step 1022, the preset beacon model and each detection box are used as input of the SSD algorithm to obtain the identification results corresponding to the plurality of detection boxes output by the SSD algorithm.
In step 1023, the recognition results of the plurality of detection frames are combined to be the detection result of the first image.
And step 1024, determining the position of the pointer in the first image according to the detection result of the first image.
The beacon model is an SSD model obtained by training according to a preset beacon sample set, and the beacon sample set comprises one or more beacon samples.
For example, a beacon model is first obtained by training on a preset beacon sample set (a plurality of images containing one or more beacon samples). Since a beacon is much smaller than the first image, the first image can be divided into a plurality of detection boxes according to preset sizes. For example, the first image can be divided using three different sizes of 1000, 1600, and 2000 (i.e., each detection box contains 1000, 1600, or 2000 pixels), yielding three types of detection boxes, each type comprising a plurality of boxes. The beacon model and each detection box are used as inputs of the SSD algorithm, whose output is the position and type of the beacon in each detection box, that is, the recognition result corresponding to that box. The recognition results of the detection boxes are merged according to the positions of the boxes on the first image: non-maximum suppression is applied to boxes with an inclusion relation, and the result with the maximum confidence is kept as the detection result of the first image, which comprises the positions and types of the beacons in the first image. Taking Fig. 2b as an example, the detection result of the first image is the positions of the three beacons on the first image and their types.
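The merging step relies on non-maximum suppression. A minimal IoU-based sketch is shown below; the box format (x1, y1, x2, y2) and the overlap threshold are assumptions for illustration, not the patent's exact procedure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_thresh=0.5):
    """detections: list of (box, confidence); among heavily overlapping
    boxes, only the one with the highest confidence is kept."""
    kept = []
    for box, conf in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept

dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
kept = nms(dets)  # the 0.8 box overlaps the 0.9 box and is suppressed
```
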
When the recognition results of the plurality of detection boxes are merged, the recognition result corresponding to each detection box may be corrected by a series of post-processing steps, for example: falsely detected beacons are removed according to beacon placement rules (for instance, if a straight-ahead beacon appears to the right of a right-turn beacon, the straight-ahead beacon is removed); if the detection result of the first image contains two rows of beacons, only the row closer to the pedestrian crossing is kept; when different beacon categories are detected at the same position, the category with the higher confidence is selected as the recognition result; and a beacon is removed when its width-to-height ratio deviates from the normal beacon ratio by more than a threshold.
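The last rule above, discarding a beacon whose width-to-height ratio deviates too far from normal, can be sketched as a simple predicate. The normal ratio and the threshold below are illustrative assumptions, not values from the patent:

```python
def aspect_ratio_ok(box, normal_ratio=0.5, threshold=0.3):
    """box: (x1, y1, x2, y2). Keep the beacon only when the deviation
    |w/h - normal_ratio| does not exceed the threshold."""
    w = box[2] - box[0]
    h = box[3] - box[1]
    if h == 0:
        return False
    return abs(w / float(h) - normal_ratio) <= threshold

# The second candidate is far too wide for a beacon and is filtered out.
beacons = [(0, 0, 10, 20), (0, 0, 40, 20)]
kept = [b for b in beacons if aspect_ratio_ok(b)]
```
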
Fig. 3 is a flowchart illustrating another lane line detection method according to an exemplary embodiment, where as shown in fig. 3, step 102 further includes:
and 1025, determining the number of the pointers in the first image according to the detection result of the first image.
Step 1026, when N beacons are identified, the first image is divided into N identification areas, each containing one of the N beacons, where N is a positive integer.
Step 1027, taking the preset arrow model and the N identification areas as input of the SSD algorithm, and determining the number of arrows included in each identification area according to an output result of the SSD algorithm, where the arrow model is an SSD model obtained by training according to a preset arrow sample set.
Step 1028, correcting the detection result of the first image according to the number of arrows included in each recognition area, and determining the types of the N pointers.
For example, because samples of beacons containing multiple arrows (e.g., U-turn beacons and beacons of other special categories) are relatively few, the category recognition of such beacons using the trained beacon model as input to the SSD algorithm is relatively poor; the detection result of the first image can therefore be corrected by recognizing the arrows in the first image. First, the number of beacons in the first image is determined from the detection result of the first image. When N beacons are identified, N−1 dividing lines are drawn according to the key points of adjacent beacons, dividing the first image into N identification areas each containing one beacon. An arrow model is obtained by training on a preset arrow sample set (a large number of images containing arrow samples); the arrow model and the N identification areas are then used as inputs of the SSD algorithm, whose output is the number and type (i.e., direction) of the arrows contained in each identification area. Finally, the detection result of the first image is corrected according to the number of arrows in each identification area, and the types of the N beacons are determined. For example, if the number of arrows in an identification area equals 1, the beacon in that area belongs to a single-arrow category such as straight ahead, reverse straight ahead, left turn, or right turn, and its category is determined from the direction of the arrow.
If the number of arrows in an identification area is greater than 1, the beacon in that area belongs to a combined category such as straight-and-right-turn, straight-and-left-turn, or straight-and-U-turn, and its category is determined from the directions of the arrows.
Fig. 4a is a flowchart illustrating another lane line detection method according to an exemplary embodiment, and as shown in fig. 4a, step 103 includes:
Step 1031, the first edge of the first beacon is taken as the lower edge of the target area, and either the first edge of the pedestrian crossing or the position obtained by translating the second edge of the first beacon upward by a preset distance in the first image is taken as the upper edge of the target area. Here, the first beacon is any beacon on the first image; the first edge of the first beacon is the side line at the arrow-free end of the first beacon; the first edge of the pedestrian crossing is its edge closest to the first beacon; and the second edge of the first beacon is the side line located at the arrow end of the first beacon and perpendicular to the first beacon.
Step 1032, when beacons exist on both the left and right sides of the first beacon, the two lines that pass through the arrow center points of those two beacons and are perpendicular to the upper edge are taken as the left edge and the right edge of the target area, respectively.
Step 1033, when a second beacon exists on the left side of the first beacon and no beacon exists on the right side, the line passing through the arrow center point of the second beacon and perpendicular to the upper edge is taken as the left edge of the target area, and the line symmetrical to the left edge about the arrow center point of the first beacon is taken as the right edge.
Step 1034, when a third beacon exists on the right side of the first beacon and no beacon exists on the left side, the line passing through the arrow center point of the third beacon and perpendicular to the upper edge is taken as the right edge of the target area, and the line symmetrical to the right edge about the arrow center point of the first beacon is taken as the left edge.
Step 1035, the rectangular area enclosed by the upper edge, the lower edge, the left edge, and the right edge is taken as the target area corresponding to the first beacon.
Taking the first image shown in Fig. 4b as an example, the target road contains three beacons. With the middle beacon as the first beacon, the lower edge of the target area is the side line at the arrow-free end of that beacon, the upper edge is the side of the crosswalk closest to it, and the left and right edges are the two lines that pass through the arrow center points of the beacons on its left and right sides and are perpendicular to the upper edge; that is, the area enclosed by the dashed frame in Fig. 4b is the target area.
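The edge selection of steps 1031–1034 can be sketched as follows. This sketch assumes each beacon is summarized by its arrow center x-coordinate ('cx') and, for the first beacon, the y-coordinates of its arrow-free base ('base_y') and arrow tip ('tip_y'); these field names are illustrative, and image coordinates grow downward:

```python
def target_region(first, left=None, right=None, crosswalk_y=None, shift=0.0):
    """Compute the target-area rectangle for the beacon `first`.

    left/right: neighboring beacons (dicts with 'cx'), or None if absent.
    crosswalk_y: y of the crosswalk edge nearest the beacon, if present;
    otherwise the upper edge is the arrow tip translated up by `shift`.
    """
    lower = first['base_y']                       # edge at the arrow-free end
    upper = crosswalk_y if crosswalk_y is not None else first['tip_y'] - shift
    if left and right:                            # beacons on both sides
        l, r = left['cx'], right['cx']
    elif left:                                    # mirror the left edge
        l = left['cx']
        r = 2 * first['cx'] - l
    elif right:                                   # mirror the right edge
        r = right['cx']
        l = 2 * first['cx'] - r
    else:
        raise ValueError('no neighboring beacon')
    return {'upper': upper, 'lower': lower, 'left': l, 'right': r}

mid = {'cx': 100, 'base_y': 400, 'tip_y': 300}
region = target_region(mid, left={'cx': 40}, right={'cx': 160}, crosswalk_y=250)
```
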
Optionally, the third image recognition algorithm is the SDM (Supervised Descent Method) algorithm, and step 104 includes:
the target area, the lane line model and the first image are used as input to an SDM algorithm to determine a first position of the lane line on the first image based on the output of the SDM algorithm.
The lane line model is a learning matrix obtained by training according to a preset lane line sample set.
For example, the SDM algorithm learns a series of gradient descent directions (i.e., a learning matrix) that minimize a nonlinear least-squares function (the mean square error), and can converge to a minimum quickly. The SDM algorithm consists of two phases, training and detection:
In the training phase, a lane line model, i.e., a learning matrix, first needs to be established from a large sample set of images containing lane lines. A training area is selected from each frame of image in the sample set in the same way as the target area in step 103, except that the stop line of the lane lines can be selected as the upper edge of the training area, which further narrows the range and improves detection efficiency. Meanwhile, because the distance between the stop line and the tip of the beacon arrow varies greatly in practice, the upper edge of the training area can be translated up and down by preset distances, and the training area can also be translated left and right once, so that each frame of image in the sample set is trained 15 times; this improves the accuracy of the SDM detection result.
Secondly, according to the shape characteristics of the lane lines, a preset number m (for example, 14) of lane line feature points are marked, the local SIFT (Scale-Invariant Feature Transform) values of the m feature points are taken as feature values to establish position constraints among them, and the actual positions x of the feature points are calculated through stepwise iteration. The mean positions of the m feature points over the image sample set are taken as the initial value x_0. Taking Fig. 4c as an example, the positions on the first image of the initial values x_0 of the 14 feature points are shown in Fig. 4c. Through stepwise iteration, the positions of the 14 feature points gradually approach the actual position x of the lane line on the first image, as shown in Fig. 4d.
The training process of the learning matrix can be expressed by the following formulas:
x_1 = x_0 + Δx_1
Δx_1 = R_0 · Φ_0 + b_0
……
x_k = x_{k-1} + Δx_k
Δx_k = R_{k-1} · Φ_{k-1} + b_{k-1}
where x_0 is the mean position of the feature points over the image sample set, Φ_0 is the SIFT feature value corresponding to x_0, x_i is the result of the i-th iteration, Φ_i is the SIFT feature value corresponding to x_i, and R_i and b_i are the descent direction and bias term learned at step i. The iteration terminates when the difference between the results of the k-th iteration and the (k−1)-th iteration is smaller than a preset threshold. After k iterations, the obtained learning matrix R = {R_0, R_1, …, R_k} gives the gradient descent directions.
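The iteration x_k = x_{k-1} + R_{k-1}·Φ_{k-1} + b_{k-1} can be demonstrated with a one-dimensional toy example. As a simplification of the per-step matrices above, the "feature" Φ here is simply the current estimate itself, and a single (R, b) pair is reused at every step:

```python
def sdm_iterate(x0, R, b, steps=50):
    """Apply x_k = x_{k-1} + R * phi(x_{k-1}) + b with the toy feature
    phi(x) = x, reusing one (R, b) pair at every step."""
    x = x0
    for _ in range(steps):
        x = x + R * x + b
    return x

# With R = -0.5 and b = 2.5 the fixed point is x* = 5:
#   x = x - 0.5*x + 2.5  =>  0.5*x = 2.5  =>  x = 5
x = sdm_iterate(0.0, R=-0.5, b=2.5)
```

Each update halves the distance to the fixed point, mirroring how the learned descent directions shrink the error toward the true feature positions.
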
In the detection process, for a given image sample set and its corresponding feature points, the target region is sampled and aligned with initial feature points drawn from a normal distribution, and R_0 and b_0 can be obtained by solving the following optimization problem:
min over R_0, b_0 of Σ_i ‖Δx_*^i − R_0 · Φ_0^i − b_0‖²

where Δx_*^i = x_*^i − x_0 is the offset from the initial value x_0 to the true feature point positions x_*^i of the i-th sample, and Φ_0^i is the corresponding SIFT feature value.
The above formula is a linear least-squares problem and can be solved directly. After this first step is solved, x_1 can be obtained through the iterative formula x_k = x_{k-1} + Δx_k. Then R_k and b_k can be obtained by a new linear regression:
min over R_k, b_k of Σ_i ‖Δx_*^{k,i} − R_k · Φ_k^i − b_k‖²

where Δx_*^{k,i} = x_*^i − x_k^i is the remaining offset after the k-th iteration.
The target area, the learning matrix R (the lane line model), and the m feature points are used as the input of the SDM algorithm. The error decreases gradually as k increases, and the iteration ends when the difference between the results of the k-th and (k−1)-th iterations is smaller than a preset threshold. The output of the SDM algorithm is the actual positions of the m feature points, that is, the first position of the lane line on the first image.
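Each (R_k, b_k) pair is the solution of an ordinary linear regression. In one dimension the closed-form solution can be sketched as below; the training pairs are synthetic, for illustration only:

```python
def fit_descent_step(phis, dxs):
    """Least-squares fit of dx ≈ R * phi + b over training samples.

    Closed form in 1-D: R = cov(phi, dx) / var(phi),
                        b = mean(dx) - R * mean(phi).
    """
    n = len(phis)
    mp = sum(phis) / n
    md = sum(dxs) / n
    cov = sum((p - mp) * (d - md) for p, d in zip(phis, dxs))
    var = sum((p - mp) ** 2 for p in phis)
    R = cov / var
    b = md - R * mp
    return R, b

# Synthetic data generated by dx = 2*phi + 1 is recovered exactly:
phis = [0.0, 1.0, 2.0, 3.0]
dxs = [2 * p + 1 for p in phis]
R, b = fit_descent_step(phis, dxs)
```
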
Fig. 5a is a flow chart illustrating another lane line detection method according to an exemplary embodiment, as shown in fig. 5a, the method further includes:
and 105, identifying the target area by using a Hough transform algorithm to obtain a second position of the lane line on the first image.
Illustratively, the Hough transform is a voting algorithm for detecting objects of a specific shape and is commonly used for detecting straight lines. It exploits the duality between points and lines in image space and Hough (parameter) space: by computing local maxima of the accumulated votes, a set of parameters conforming to the target shape is obtained in the parameter space as the Hough transform result. A straight line satisfying certain conditions is identified in the target area using the Hough transform and taken as the second position of the lane line on the first image. Each point on a line y = kx + b in the original image corresponds to a line b = −xk + y in Hough space; multiple points on the same line in the original image intersect at one point in parameter space, and whether they form a line is judged by the number of votes at that intersection. First, the Canny edge detection operator is used to obtain object edges in the image, and the edge image is converted into a binary image for Hough detection. Because the illumination conditions of each frame differ, the gray values of the images vary greatly; therefore, the high threshold of the Canny operator can be set according to the gray values of the SDM target area, and during the Hough transform the maxGap parameter of the Hough transform function can be gradually increased or decreased through feedback adjustment according to the detection result.
Because a large number of interference lines exist in the first image (such as the edges of beacon regions, utility poles on both sides of the road, street lamps, and other linear interference), for the Hough detection result the straight lines inside the beacon regions can be removed according to the position information of the beacons, short straight lines can be further removed according to segment length, and finally the second position of the lane line on the first image is obtained.
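The voting described above can be sketched with a minimal slope-intercept Hough voter over a discrete (k, b) grid. Real implementations (such as OpenCV's) use the (ρ, θ) parameterization to handle vertical lines; the candidate grid here is an assumption for illustration:

```python
from collections import Counter

def hough_lines(points, k_candidates, b_round=0):
    """Vote in (k, b) space: each point (x, y) votes for b = y - k*x at
    every candidate slope k; the bin with the most votes wins."""
    votes = Counter()
    for x, y in points:
        for k in k_candidates:
            b = round(y - k * x, b_round)
            votes[(k, b)] += 1
    return votes.most_common(1)[0]  # ((k, b), vote count)

# Four points on y = 2x + 1 plus one outlier:
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 2)]
(best_k, best_b), n_votes = hough_lines(pts, k_candidates=[-1, 0, 1, 2, 3])
```

The four collinear points all map to the same (k, b) bin, so that bin collects four votes while every other bin gets at most one.
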
And 106, judging whether the first position and the second position are correct or not according to the gray information of the first image and the position of the pointer.
For example, each lane corresponds to a group of lane lines, including a stop line and two lane dividing lines. The lane dividing line positions and the stop line position in the first position or the second position are obtained and screened separately to determine whether the first position and the second position are correct. The stop line and the dividing lines are screened differently: all lanes of an intersection share one stop line, and the two dividing lines of each lane are correlated to a certain degree; furthermore, because camera angles differ, the dividing lines of the outermost lane and of a middle lane behave differently. Whether the first position and the second position are correct can therefore be judged using three types of lane lines: the stop line, the dividing line of a middle lane, and the dividing line of the outermost lane:
a. a stop line:
First, a reference stop line is screened: the stop line of each lane in the first position and the second position is extended to the left and right edges of the first image, the gray average value of each extended stop line obtained by the two methods (the SDM algorithm and the Hough transform algorithm) is calculated, and the one with the maximum gray average value is taken as the reference stop line.
The stop-line in the first position, the stop-line in the second position and the reference stop-line are judged according to the following conditions:
1. When the distance between the stop line position and the beacon in the first image is greater than the distance between the crosswalk and the beacon, the stop line position is wrong.
2. When the stop line position is within the arrow region of a beacon, the stop line position is wrong.
3. When the slope of the stop line position is smaller than a preset second angle, the stop line position is wrong.
4. When the gray average value of the stop line position is smaller than the gray average values of preset areas above and below it, the stop line position is wrong.
b. Lane dividing line of the middle lane:
1. When the intersection of the lane dividing line position with the upper edge of the minimum bounding rectangle of the beacon arrow to its left lies to the left of that arrow's center point, or the intersection with the upper edge of the minimum bounding rectangle of the beacon arrow to its right lies to the right of that arrow's center point, the dividing line is not between the two beacons, and the position is wrong.
2. When the slope of the lane dividing line position is smaller than a preset first angle (for example, 30 degrees), the position is wrong.
3. When the gray average value of the lane dividing line position is smaller than the gray average values of preset areas on its left and right sides, the position is wrong.
c. Lane dividing line of the outermost lane:
1. When the lane dividing line position is within the arrow region of a beacon, the position is wrong.
2. When the intersection of the lane dividing line position with the dividing line of the middle lane lies below the first image (i.e., in the direction away from the beacon arrows), the position is wrong.
3. When the difference between the length of the stop line corresponding to the lane dividing line position and the average stop line length of the N lanes is smaller than a preset threshold, the position is wrong.
4. If a fence exists in the first image, the position is wrong when the end of the lane dividing line farther from the stop line lies above the midpoint of the fence area (i.e., toward the direction of the beacon arrows).
5. If no fence exists in the first image, the position is wrong when its gray average value is smaller than the gray average values of preset areas on its left and right sides; if a fence exists, the position is wrong when its gray average value is smaller than the gray average value of the area obtained by shifting the fence area toward the lane line by a preset distance.
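Two of the recurring checks above, the slope test and the gray-mean contrast test, can be sketched as simple predicates. The angle unit (degrees) and the way the neighborhoods are sampled are assumptions for illustration:

```python
import math

def slope_ok(p1, p2, min_angle_deg=30.0):
    """A dividing line is suspect when its inclination from the horizontal
    is smaller than the preset angle (e.g., 30 degrees)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))
    return angle >= min_angle_deg

def brighter_than_sides(line_mean, left_mean, right_mean):
    """A painted lane line should be brighter than the road surface in the
    preset areas on both sides; otherwise its position is judged wrong."""
    return line_mean > left_mean and line_mean > right_mean
```
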
And step 107, when at least one of the first position and the second position is wrong, acquiring a third position of the lane line on the first image by using a watershed segmentation method according to the first position and the second position.
For example, when at least one of the first position and the second position is wrong, the detection result can be corrected by the watershed segmentation method. The watershed segmentation method is a mathematical-morphology segmentation method based on topology theory: the original image is regarded as a topographic surface, regions of similar gray values correspond to flat basins with steep edges, similar elements are marked as one class by reasonably selecting points (seed points) within the basins, and edge information is then obtained. The detection area is determined from the first position and the second position: when the results of the first and second positions differ, the minimum bounding rectangle formed by the intersection of the two corresponding lines, widened on the left and right sides by a preset distance, is used as the detection area; when the second position does not exist, the middle area between the two beacons is used as the detection area. The seed points are selected as follows: with the first position and the second position as centers, lines translated to the left and right sides of the detection area by one to N lane-line widths (i.e., width × n, n = 1, 2, …, N) are taken in turn as pairs of seed lines, as shown in Fig. 5b. Detection is performed N times for each group of seed lines with the same label, and finally the line with the minimum fitting variance is selected as the third position of the lane line on the first image.
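The final selection, keeping the candidate with the smallest fitting variance, can be sketched using the residual variance of a least-squares line fit as the score. This is a simplification of the watershed-based procedure, shown on synthetic candidate point sets:

```python
def fit_variance(points):
    """Variance of residuals after fitting y = k*x + c by least squares."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    k = sxy / sxx if sxx else 0.0
    c = my - k * mx
    return sum((p[1] - (k * p[0] + c)) ** 2 for p in points) / n

def best_candidate(candidates):
    """Pick the candidate point set whose line fit has minimum variance."""
    return min(candidates, key=fit_variance)

straight = [(0, 0), (1, 2), (2, 4), (3, 6)]   # perfectly linear candidate
noisy = [(0, 0), (1, 3), (2, 3), (3, 7)]      # deviates from a line
best = best_candidate([noisy, straight])
```
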
And step 108, taking the position with the maximum gray average value in the first position, the second position and the third position as the actual position of the output lane line.
In an example, the gray level mean values corresponding to the areas where the first position, the second position and the third position are located, which are obtained by the three algorithms, are respectively calculated, and the position with the largest gray level mean value is selected as the actual position of the output lane line.
In summary, in the present disclosure, a first image recognition algorithm is first used to determine whether a vehicle exists in the first image captured by the camera device; when no vehicle that might interfere with lane line recognition exists in the first image, a second image recognition algorithm is used to identify the position of the beacon in the first image; the region where a lane line may exist, i.e., the target region, is then determined according to the position of the beacon; and finally a third image recognition algorithm is used to identify the first position of the lane line of the target region on the first image. The position of the lane line can thus be effectively identified from the road-monitoring viewing angle, thereby assisting the traffic monitoring system in judging vehicle violations.
Fig. 6 is a block diagram illustrating a lane line detecting apparatus according to an exemplary embodiment, and as shown in fig. 6, the apparatus 200 includes:
the first identification module 201 is configured to identify a first image acquired by the camera device by using a first image identification algorithm to determine whether a vehicle exists in the first image, where the first image is any one frame of image acquired by the camera device and contains target road image information.
The second recognition module 202 is configured to recognize the first image by using a second image recognition algorithm when the vehicle is not present in the first image, so as to determine a position of the beacon in the first image.
And the area determining module 203 is configured to determine a target area corresponding to the pointer in the first image according to the position of the pointer.
The third identifying module 204 is configured to identify the target area by using a third image identification algorithm to obtain a first position of the lane line in the target area on the first image.
Optionally, the first image recognition algorithm is the Single Shot MultiBox Detector (SSD) algorithm, and the first recognition module 201 is configured to:
and taking the preset vehicle model and the first image as the input of the SSD algorithm, and determining whether the vehicle exists in the first image according to the output result of the SSD algorithm.
The vehicle model is an SSD model obtained through training according to a preset vehicle sample set.
Fig. 7 is a block diagram illustrating another lane line detecting apparatus according to an exemplary embodiment, where as shown in fig. 7, the second image recognition algorithm is an SSD algorithm, and the second recognition module 202 includes:
the segmentation sub-module 2021 is configured to segment the first image into a plurality of detection frames according to a preset size, where the preset size includes one or more sizes.
The first identification submodule 2022 is configured to use the preset beacon model and each detection box as an input of the SSD algorithm, so as to obtain an identification result corresponding to the plurality of detection boxes output by the SSD algorithm.
The merging sub-module 2023 is configured to merge recognition results of the plurality of detection frames as a detection result of the first image.
The determining sub-module 2024 is configured to determine the position of the pointer in the first image according to the detection result of the first image.
The beacon model is an SSD model obtained by training according to a preset beacon sample set, and the beacon sample set comprises one or more beacon samples.
Fig. 8 is a block diagram illustrating another lane line detecting apparatus according to an exemplary embodiment, and as shown in fig. 8, the second identifying module 202 further includes:
the determining sub-module 2024 is further configured to determine the number of the pointers in the first image according to the detection result of the first image.
The dividing sub-module 2025 is configured to, when N beacons are identified, divide the first image into N identification areas, each containing one of the N beacons, where N is a positive integer.
The second recognition sub-module 2026 is configured to use the preset arrow model and the N recognition areas as inputs of the SSD algorithm, so as to determine the number of arrows included in each recognition area according to an output result of the SSD algorithm, where the arrow model is an SSD model obtained by training according to a preset arrow sample set.
The correction submodule 2027 is configured to correct the detection result of the first image according to the number of arrows included in each recognition area, and determine the types of the N pointers.
Fig. 9 is a block diagram illustrating another lane line detecting apparatus according to an exemplary embodiment, and as shown in fig. 9, the area determining module 203 includes:
The first determining sub-module 2031 is configured to take the first edge of the first beacon as the lower edge of the target area, and to take either the first edge of the pedestrian crossing or the position obtained by translating the second edge of the first beacon upward by a preset distance in the first image as the upper edge of the target area. Here, the first beacon is any beacon on the first image; the first edge of the first beacon is the side line at the arrow-free end of the first beacon; the first edge of the pedestrian crossing is its edge closest to the first beacon; and the second edge of the first beacon is the side line located at the arrow end of the first beacon and perpendicular to the first beacon.
The second determining sub-module 2032 is configured to, when beacons exist on both the left and right sides of the first beacon, take the two lines that pass through the arrow center points of those two beacons and are perpendicular to the upper edge as the left edge and the right edge of the target area, respectively.
The second determining sub-module 2032 is further configured to, when a second beacon exists on the left side of the first beacon and no beacon exists on the right side, take the line passing through the arrow center point of the second beacon and perpendicular to the upper edge as the left edge of the target area, and take the line symmetrical to the left edge about the arrow center point of the first beacon as the right edge.
The second determining sub-module 2032 is further configured to, when a third beacon exists on the right side of the first beacon and no beacon exists on the left side, take the line passing through the arrow center point of the third beacon and perpendicular to the upper edge as the right edge of the target area, and take the line symmetrical to the right edge about the arrow center point of the first beacon as the left edge.
The third determining sub-module 2033 is configured to use a rectangular area surrounded by the upper edge, the lower edge, the left edge, and the right edge as a target area corresponding to the first beacon.
Optionally, the third image recognition algorithm is the Supervised Descent Method (SDM), and the third recognition module 204 is configured to:
the target area, the lane line model and the first image are used as input to an SDM algorithm to determine a first position of the lane line on the first image based on the output of the SDM algorithm.
The lane line model is a learning matrix obtained by training according to a preset lane line sample set.
Fig. 10 is a block diagram illustrating another lane line detecting apparatus according to an exemplary embodiment, and as shown in fig. 10, the apparatus 200 further includes:
the fourth identifying module 205 is configured to identify the target region by using a hough transform algorithm to obtain a second position of the lane line on the first image.
The determining module 206 is configured to determine whether the first position and the second position are correct according to the gray information of the first image and the position of the pointer.
And the position determining module 207 is configured to, when at least one of the first position and the second position is wrong, obtain a third position of the lane line on the first image by using a watershed segmentation method according to the first position and the second position.
The position determining module 207 is further configured to use a position with a maximum gray average value among the first position, the second position, and the third position as an actual position of the output lane line.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, in the present disclosure, a first image recognition algorithm is first used to determine whether a vehicle exists in a first image captured by a camera device; when the first image contains no vehicle that might interfere with lane line recognition, a second image recognition algorithm is used to identify the position of a beacon in the first image; a region where a lane line may exist, namely a target region, is then determined according to the position of the beacon; and finally a third image recognition algorithm is used to identify the first position, on the first image, corresponding to the lane line in the target region. The position of the lane line can thus be effectively identified from a road-monitoring viewing angle, assisting the traffic monitoring system in judging vehicle violations.
Fig. 11 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. As shown in fig. 11, the electronic device 700 may include: a processor 701, a memory 702, multimedia components 703, input/output (I/O) interfaces 704, and communication components 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the lane line detection method described above. The memory 702 is configured to store various types of data to support operation of the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia components 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons; the buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, and the communication component 705 may accordingly include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the lane line detection method described above.
In another exemplary embodiment, there is also provided a computer-readable storage medium comprising program instructions, for example the memory 702 comprising program instructions, which are executable by the processor 701 of the electronic device 700 to perform the lane line detection method described above.
Preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. The present disclosure is not, however, limited to the specific details of the above embodiments; other embodiments may be readily conceived by those skilled in the art, within the technical spirit of the present disclosure, after considering the description and practicing the disclosure, and all such embodiments fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. Likewise, any combination may be made between the various embodiments of the present disclosure, and such combinations should equally be regarded as disclosed herein as long as they do not depart from the idea of the present disclosure. The present disclosure is not limited to the precise structures described above, and its scope is limited only by the appended claims.

Claims (14)

1. A lane line detection method, comprising:
identifying a first image acquired by a camera device by using a first image identification algorithm to determine whether a vehicle exists in the first image, wherein the first image is any frame of image acquired by the camera device and containing target road image information;
when no vehicle exists in the first image, identifying the first image by using a second image identification algorithm to determine the position of a beacon in the first image;
determining a target area corresponding to the beacon in the first image according to the position of the beacon;
identifying the target area by using a third image identification algorithm to acquire a first position of a lane line in the target area on the first image;
the determining a target region corresponding to the beacon in the first image according to the position of the beacon includes:
taking a first edge of a first beacon as a lower edge of the target area, and taking the position obtained by translating a first edge of a pedestrian crossing in the first image, or a second edge of the first beacon, upwards by a preset distance as an upper edge of the target area; wherein the first edge of the first beacon is the border line at the end of the first beacon without an arrow, the first beacon is any beacon on the first image, the first edge of the pedestrian crossing is the edge close to the first beacon, and the second edge of the first beacon is the border line at the arrow end of the first beacon and perpendicular to the first beacon;
when beacons exist on both the left and right sides of the first beacon, taking two lines, each passing through the arrow center point of one of the two beacons on the left and right sides of the first beacon and perpendicular to the upper edge, as the left edge and the right edge of the target area, respectively;
when a second beacon exists on the left side of the first beacon and no beacon exists on the right side, taking the line passing through the arrow center point of the second beacon and perpendicular to the upper edge as the left edge of the target area, and taking the line symmetrical to the left edge about the arrow midpoint of the first beacon as the right edge;
when a third beacon exists on the right side of the first beacon and no beacon exists on the left side, taking the line passing through the arrow tip center point of the third beacon and perpendicular to the upper edge as the right edge of the target area, and taking the line symmetrical to the right edge about the arrow midpoint of the first beacon as the left edge;
and taking a rectangular area surrounded by the upper edge, the lower edge, the left edge and the right edge as a target area corresponding to the first beacon.
2. The method of claim 1, wherein the first image recognition algorithm is a single-shot multi-box detector (SSD) algorithm, and the recognizing a first image captured by a camera device using the first image recognition algorithm to determine whether a vehicle is present in the first image comprises:
taking a preset vehicle model and the first image as input of an SSD algorithm, and determining whether a vehicle exists in the first image according to an output result of the SSD algorithm;
the vehicle model is an SSD model obtained through training according to a preset vehicle sample set.
3. The method of claim 1, wherein the second image recognition algorithm is an SSD algorithm, and wherein recognizing the first image using the second image recognition algorithm to determine the location of the beacon in the first image when the vehicle is not present in the first image comprises:
dividing the first image into a plurality of detection frames according to a preset size, wherein the preset size comprises one or more sizes;
taking a preset beacon model and each detection frame as input of the SSD algorithm to obtain the recognition results, output by the SSD algorithm, corresponding to the plurality of detection frames;
merging the recognition results of the plurality of detection frames as the detection result of the first image;
determining the position of a beacon in the first image according to the detection result of the first image;
the beacon model is an SSD model obtained by training according to a preset beacon sample set, and the beacon sample set comprises one or more beacon samples.
4. The method of claim 3, wherein identifying the first image using a second image recognition algorithm to determine the location of the beacon in the first image when no vehicle is present in the first image further comprises:
determining the number of beacons in the first image according to the detection result of the first image;
when N beacons are identified, dividing the first image into N identification areas each containing one of the N beacons, wherein N is a positive integer;
taking a preset arrow model and the N identification areas as input of the SSD algorithm, and determining the number of arrows contained in each identification area according to the output result of the SSD algorithm, wherein the arrow model is an SSD model obtained by training according to a preset arrow sample set;
and correcting the detection result of the first image according to the number of arrows contained in each identification area, and determining the types of the N beacons.
5. The method of claim 1, wherein the third image recognition algorithm is a supervised descent method (SDM) algorithm, and wherein the identifying the target region using the third image recognition algorithm to obtain the first position of the lane line in the target region on the first image comprises:
taking the target area, a lane line model and the first image as input of the SDM algorithm to determine the first position of the lane line on the first image according to the output result of the SDM algorithm;
the lane line model is a learning matrix obtained by training according to a preset lane line sample set.
6. The method of claim 5, further comprising:
identifying the target area by utilizing a Hough transform algorithm to obtain a second position of the lane line on the first image;
judging whether the first position and the second position are correct according to the gray information of the first image and the position of the beacon;
when at least one of the first position and the second position is wrong, acquiring a third position of the lane line on the first image by using a watershed segmentation method according to the first position and the second position;
and taking the position with the maximum gray average value in the first position, the second position and the third position as the actual position of the output lane line.
7. A lane line detection apparatus, characterized in that the apparatus comprises:
the first identification module is used for identifying a first image acquired by a camera device by utilizing a first image identification algorithm so as to determine whether a vehicle exists in the first image, wherein the first image is any frame of image acquired by the camera device and contains target road image information;
the second identification module is used for identifying the first image by using a second image identification algorithm when no vehicle exists in the first image, so as to determine the position of a beacon in the first image;
the area determining module is used for determining a target area corresponding to the beacon in the first image according to the position of the beacon;
the third identification module is used for identifying the target area by utilizing a third image identification algorithm so as to acquire a first position of a lane line in the target area on the first image;
the region determination module includes:
a first determining submodule, configured to take a first edge of a first beacon as a lower edge of the target area, and take the position obtained by translating the first edge of the pedestrian crossing in the first image, or the second edge of the first beacon, upwards by a preset distance as an upper edge of the target area; wherein the first edge of the first beacon is the border line at the end of the first beacon without an arrow, the first beacon is any beacon on the first image, the first edge of the pedestrian crossing is the edge close to the first beacon, and the second edge of the first beacon is the border line at the arrow end of the first beacon and perpendicular to the first beacon;
a second determining submodule, configured to, when beacons exist on both the left and right sides of the first beacon, take two lines, each passing through the arrow center point of one of the two beacons on the left and right sides of the first beacon and perpendicular to the upper edge, as the left edge and the right edge of the target area, respectively;
the second determining submodule is further configured to, when a second beacon exists on the left side of the first beacon and no beacon exists on the right side, take the line passing through the arrow center point of the second beacon and perpendicular to the upper edge as the left edge of the target area, and take the line symmetrical to the left edge about the arrow midpoint of the first beacon as the right edge;
the second determining submodule is further used for taking a line which passes through the center point of the arrow point of the third beacon and is perpendicular to the upper edge as the right edge of the target area and taking a line which is symmetrical to the right edge about the midpoint of the arrow of the first beacon as the left edge when the third beacon exists on the right side of the first beacon and the beacon does not exist on the left side;
and the third determining submodule is used for taking a rectangular area formed by the upper edge, the lower edge, the left edge and the right edge as a target area corresponding to the first beacon.
8. The apparatus of claim 7, wherein the first image recognition algorithm is a single-shot multi-box detector (SSD) algorithm, and wherein the first recognition module is configured to:
taking a preset vehicle model and the first image as input of an SSD algorithm, and determining whether a vehicle exists in the first image according to an output result of the SSD algorithm;
the vehicle model is an SSD model obtained through training according to a preset vehicle sample set.
9. The apparatus of claim 7, wherein the second image recognition algorithm is an SSD algorithm, and wherein the second recognition module comprises:
the segmentation submodule is used for segmenting the first image into a plurality of detection frames according to preset sizes, and the preset sizes comprise one or more sizes;
the first identification submodule is used for taking a preset beacon model and each detection frame as input of the SSD algorithm, so as to obtain the recognition results, output by the SSD algorithm, corresponding to the plurality of detection frames;
a merging submodule, configured to merge recognition results of the plurality of detection frames to obtain a detection result of the first image;
the determining submodule is used for determining the position of a beacon in the first image according to the detection result of the first image;
the beacon model is an SSD model obtained by training according to a preset beacon sample set, and the beacon sample set comprises one or more beacon samples.
10. The apparatus of claim 9, wherein the second identification module further comprises:
the determining submodule is further configured to determine the number of beacons in the first image according to the detection result of the first image;
the dividing submodule is used for dividing the first image into N identification areas each containing one of the N beacons when N beacons are identified, wherein N is a positive integer;
the second identification submodule is used for taking a preset arrow model and the N identification areas as input of the SSD algorithm, and determining the number of arrows contained in each identification area according to the output result of the SSD algorithm, wherein the arrow model is an SSD model obtained by training according to a preset arrow sample set;
and the correction submodule is used for correcting the detection result of the first image according to the number of arrows contained in each identification area and determining the types of the N beacons.
11. The apparatus of claim 7, wherein the third image recognition algorithm is a supervised descent method (SDM) algorithm, and wherein the third recognition module is configured to:
taking the target area, a lane line model and the first image as input of the SDM algorithm to determine the first position of the lane line on the first image according to the output result of the SDM algorithm;
the lane line model is a learning matrix obtained by training according to a preset lane line sample set.
12. The apparatus of claim 11, further comprising:
the fourth identification module is used for identifying the target area by utilizing a Hough transform algorithm so as to obtain a second position of the lane line on the first image;
the judging module is used for judging whether the first position and the second position are correct according to the gray information of the first image and the position of the beacon;
a position determining module, configured to, when at least one of the first position and the second position is incorrect, obtain, according to the first position and the second position, a third position of the lane line on the first image by using a watershed segmentation method;
the position determining module is further configured to take the position with the largest gray average value among the first position, the second position, and the third position as the actual position of the lane line to be output.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
14. An electronic device, comprising:
the computer-readable storage medium recited in claim 13; and
one or more processors to execute the program in the computer-readable storage medium.
CN201810361951.XA 2018-04-20 2018-04-20 Lane line detection method, lane line detection device, storage medium, and electronic apparatus Active CN108734105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810361951.XA CN108734105B (en) 2018-04-20 2018-04-20 Lane line detection method, lane line detection device, storage medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810361951.XA CN108734105B (en) 2018-04-20 2018-04-20 Lane line detection method, lane line detection device, storage medium, and electronic apparatus

Publications (2)

Publication Number Publication Date
CN108734105A CN108734105A (en) 2018-11-02
CN108734105B true CN108734105B (en) 2020-12-04

Family

ID=63939199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810361951.XA Active CN108734105B (en) 2018-04-20 2018-04-20 Lane line detection method, lane line detection device, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN108734105B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887137B (en) * 2019-02-18 2021-04-09 新华三技术有限公司 Control method and device for lifting rod
CN110188661B (en) * 2019-05-27 2021-07-20 广州极飞科技股份有限公司 Boundary identification method and device
CN111079598B (en) * 2019-12-06 2023-08-08 深圳市艾为智能有限公司 Lane line detection method based on image texture and machine learning
CN111191619B (en) * 2020-01-02 2023-09-05 北京百度网讯科技有限公司 Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN113392682A (en) * 2020-03-13 2021-09-14 富士通株式会社 Lane line recognition device and method and electronic equipment
CN113569596A (en) * 2020-04-28 2021-10-29 千寻位置网络有限公司 Method and device for identifying printed matter on satellite image road
CN111563463A (en) * 2020-05-11 2020-08-21 上海眼控科技股份有限公司 Method and device for identifying road lane lines, electronic equipment and storage medium
CN111540010B (en) * 2020-05-15 2023-09-19 阿波罗智联(北京)科技有限公司 Road monitoring method and device, electronic equipment and storage medium
CN112016514A (en) * 2020-09-09 2020-12-01 平安科技(深圳)有限公司 Traffic sign identification method, device, equipment and storage medium
CN112560717B (en) * 2020-12-21 2023-04-21 青岛科技大学 Lane line detection method based on deep learning
CN114998770B (en) * 2022-07-06 2023-04-07 中国科学院地理科学与资源研究所 Highway identifier extraction method and system
CN115082888B (en) * 2022-08-18 2022-10-25 北京轻舟智航智能技术有限公司 Lane line detection method and device
CN115294545A (en) * 2022-09-06 2022-11-04 中诚华隆计算机技术有限公司 Complex road surface lane identification method and chip based on deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740805A (en) * 2016-01-27 2016-07-06 大连楼兰科技股份有限公司 Lane line detection method based on multi-region joint
CN107886729A (en) * 2016-09-30 2018-04-06 比亚迪股份有限公司 Vehicle identification method, device and vehicle

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360499B (en) * 2011-06-30 2014-01-22 电子科技大学 Multi-lane line tracking method based on Kalman filter bank
CN102819952B (en) * 2012-06-29 2014-04-16 浙江大学 Method for detecting illegal lane change of vehicle based on video detection technique
CN104036253A (en) * 2014-06-20 2014-09-10 智慧城市系统服务(中国)有限公司 Lane line tracking method and lane line tracking system
US10013610B2 (en) * 2015-10-23 2018-07-03 Nokia Technologies Oy Integration of positional data and overhead images for lane identification
JP2018005618A (en) * 2016-07-04 2018-01-11 株式会社Soken Road recognition device
CN106682586A (en) * 2016-12-03 2017-05-17 北京联合大学 Method for real-time lane line detection based on vision under complex lighting conditions
CN106529505A (en) * 2016-12-05 2017-03-22 惠州华阳通用电子有限公司 Image-vision-based lane line detection method
CN107315998B (en) * 2017-05-31 2019-08-06 淮阴工学院 Vehicle class division method and system based on lane line


Also Published As

Publication number Publication date
CN108734105A (en) 2018-11-02

Similar Documents

Publication Publication Date Title
CN108734105B (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN109284674B (en) Method and device for determining lane line
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
US8184859B2 (en) Road marking recognition apparatus and method
US8917934B2 (en) Multi-cue object detection and analysis
US10552706B2 (en) Attachable matter detection apparatus and attachable matter detection method
CN108550258B (en) Vehicle queuing length detection method and device, storage medium and electronic equipment
CN110991215B (en) Lane line detection method and device, storage medium and electronic equipment
CN111898491A (en) Method and device for identifying reverse driving of vehicle and electronic equipment
CN112084822A (en) Lane detection device and method and electronic equipment
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN114419165B (en) Camera external parameter correction method, camera external parameter correction device, electronic equipment and storage medium
CN108509826B (en) Road identification method and system for remote sensing image
CN112699711A (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN112447060A (en) Method and device for recognizing lane and computing equipment
Hernández et al. Lane marking detection using image features and line fitting model
CN112784494B (en) Training method of false positive recognition model, target recognition method and device
CN109255797B (en) Image processing device and method, and electronic device
CN110826364A (en) Stock position identification method and device
CN113610770A (en) License plate recognition method, device and equipment
CN111325811B (en) Lane line data processing method and processing device
JP2014194698A (en) Road end detection system, method and program
CN115035495A (en) Image processing method and device
CN113435350A (en) Traffic marking detection method, device, equipment and medium
JP6354186B2 (en) Information processing apparatus, blur condition calculation method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant