CN109165579A - Method and apparatus for detecting a stop line

Method and apparatus for detecting a stop line

Info

Publication number
CN109165579A
Authority
CN
China
Prior art keywords
image data
group
stop line
detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810895175.1A
Other languages
Chinese (zh)
Inventor
丁坤
杜金枝
王秀田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Original Assignee
SAIC Chery Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAIC Chery Automobile Co Ltd filed Critical SAIC Chery Automobile Co Ltd
Priority to CN201810895175.1A priority Critical patent/CN109165579A/en
Publication of CN109165579A publication Critical patent/CN109165579A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Abstract

The present disclosure provides a method and an apparatus for detecting a stop line, and belongs to the technical field of driverless intelligent vehicles. The method is applied to a driverless intelligent vehicle and comprises: acquiring image data of the area in front of the intelligent vehicle; inputting the image data into a pre-trained stop line detection model to obtain a stop line detection result; and, if the stop line detection result is a first detection label indicating that a stop line is present, determining the distance between the intelligent vehicle and the stop line based on an inverse perspective mapping algorithm. Compared with detecting the stop line by Hough transform, detecting the stop line with this method shortens the detection time and improves the real-time performance of stop line detection.

Description

Method and apparatus for detecting a stop line
Technical field
The present disclosure relates to the technical field of driverless intelligent vehicles, and in particular to a method and an apparatus for detecting a stop line.
Background art
With the rapid development of science and technology, driverless technology is gradually maturing. A driverless intelligent vehicle not only needs to detect obstacles around the vehicle, but also needs to detect and recognize traffic markings and signs on the road in real time, for example, changes in traffic lights and the presence or absence of stop lines.
Since a stop line concerns both traffic order and pedestrian safety, the real-time performance and accuracy of stop line detection are of crucial importance for realizing driverless driving. In the related art, the stop line detection process is roughly as follows: the acquired image data is pre-processed, edge detection is performed on the image, and the stop line is then detected based on the line-detection principle of the Hough transform.
In the course of implementing the present disclosure, the inventors found that the prior art has at least the following problems:
In the above stop line detection, detecting straight lines by Hough transform is relatively time-consuming, which makes the overall processing time long and reduces the real-time performance of stop line detection.
Summary of the invention
The embodiments of the present disclosure provide a method and an apparatus for detecting a stop line, so as to solve the above problem in the related art. The technical solution is as follows:
According to the present embodiment, there is provided a method for detecting a stop line. The method is applied to a driverless intelligent vehicle and comprises:
acquiring image data of the area in front of the intelligent vehicle;
inputting the image data into a pre-trained stop line detection model to obtain a stop line detection result; and
if the stop line detection result is a first detection label indicating that a stop line is present, determining the distance between the intelligent vehicle and the stop line based on an inverse perspective mapping algorithm.
Optionally, the method further comprises:
acquiring a first group of image data containing stop lines and a second group of image data containing no stop lines;
acquiring a pre-stored first detection label corresponding to the first group of image data and a pre-stored second detection label corresponding to the second group of image data; and
training the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
Optionally, the method further comprises:
performing image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data.
Training the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label comprises:
training the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
Optionally, performing image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data comprises:
performing specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
Optionally, the method further comprises:
if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, performing braking when the stop line in the current image data is detected to be at a preset distance threshold ahead of the intelligent vehicle in its direction of travel.
According to the present embodiment, there is further provided an apparatus for detecting a stop line. The apparatus is applied to a driverless intelligent vehicle and comprises:
a first acquisition module configured to acquire image data of the area in front of the intelligent vehicle;
a detection module configured to input the image data into a pre-trained stop line detection model to obtain a stop line detection result; and
a ranging module configured to determine, based on an inverse perspective mapping algorithm, the distance between the intelligent vehicle and the stop line if the stop line detection result is the first detection label indicating that a stop line is present.
Optionally, the apparatus further comprises:
a second acquisition module configured to acquire a first group of image data containing stop lines and a second group of image data containing no stop lines;
a third acquisition module configured to acquire a pre-stored first detection label corresponding to the first group of image data and a pre-stored second detection label corresponding to the second group of image data; and
a training module configured to train the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
Optionally, the apparatus further comprises:
a pre-processing module configured to perform image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data.
The training module is specifically configured to:
train the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
Optionally, the pre-processing module is specifically configured to:
perform specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
Optionally, the apparatus further comprises:
a braking module configured to, if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, perform braking when the stop line in the current image data is detected to be at the preset distance threshold ahead of the intelligent vehicle in its direction of travel.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
In the embodiments of the present disclosure, during stop line detection the driverless intelligent vehicle first acquires image data of the area ahead, and then inputs the acquired image data into a pre-trained stop line detection model to obtain a stop line detection result. If the stop line detection result is the first detection label indicating that a stop line is present, the distance between the intelligent vehicle and the stop line is determined based on an inverse perspective mapping algorithm. Compared with detecting the stop line by Hough transform, detecting the stop line with this method shortens the detection time and improves the real-time performance of stop line detection.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for detecting a stop line according to an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of a method for detecting a stop line according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of an apparatus for detecting a stop line according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an apparatus for detecting a stop line according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an apparatus for detecting a stop line according to an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an apparatus for detecting a stop line according to an embodiment of the present disclosure.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
An embodiment of the present disclosure provides a method for detecting a stop line. The method is applied to a driverless intelligent vehicle, and its execution body may be an on-board terminal of the intelligent vehicle. The on-board terminal may also be referred to as an ECU (Electronic Control Unit), also known as a "driving computer" or "on-board computer", and is, in terms of its purpose, a dedicated microcomputer controller of the intelligent vehicle.
The intelligent vehicle is also equipped with multiple cameras, multiple ranging radars, and the like, for detecting obstacles around the vehicle body. Since the present embodiment is mainly intended to detect a stop line on the road, it mainly uses the camera mounted on the front windshield of the intelligent vehicle, which sends the acquired image data of the area ahead to the on-board terminal. As shown in Fig. 1, the method may be performed according to the following procedure:
In step 101, the on-board terminal acquires image data of the area in front of the intelligent vehicle.
In implementation, the camera mounted on the front windshield of the intelligent vehicle may send the acquired image data to the on-board terminal in real time, so that the on-board terminal obtains the image data of the area in front of the intelligent vehicle.
In step 102, the on-board terminal inputs the image data into a pre-trained stop line detection model to obtain a stop line detection result.
If the image data contains a stop line, the stop line detection result is a first detection label; if the image data contains no stop line, the stop line detection result is a second detection label.
In implementation, after receiving the image data sent by the camera, the on-board terminal may input the image data into the pre-trained stop line detection model to obtain the stop line detection result, and then performs subsequent processing according to the stop line detection result.
In step 103, if the stop line detection result is the first detection label indicating that a stop line is present, the on-board terminal determines the distance between the intelligent vehicle and the stop line based on an inverse perspective mapping algorithm.
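The patent does not prescribe a particular implementation of the inverse perspective mapping; as an illustrative sketch only, the bird's-eye view in which the stop line position is measured before applying the ranging formula below can be produced with a standard perspective warp. The function name, output size, and source points are assumptions; in a real system the four source points would come from the camera calibration.

    import cv2
    import numpy as np

    def to_birds_eye(frame, src_pts, out_size=(400, 600)):
        # src_pts: four road-plane points in the camera image, ordered
        # bottom-left, bottom-right, top-right, top-left; actual values
        # depend on the mounted camera and its calibration.
        w, h = out_size
        dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
        M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
        return cv2.warpPerspective(frame, M, out_size)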
The ranging formula of the inverse perspective mapping algorithm may be:
D = K + β·d
where D is the distance between the stop line and the vehicle front, K is the blind-zone distance of the camera, which can be measured experimentally, β is the proportionality coefficient between the physical plane and the inverse perspective image plane, and d is the pixel distance from the position of the stop line in the inverse perspective image to the bottom edge of the image.
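As a minimal illustration of this formula (the function name and the numeric values are assumptions for illustration, not taken from the patent):

    def stop_line_distance(stop_line_row, image_height, k_blind_zone, beta):
        # d: pixel distance from the stop line's row in the inverse
        # perspective image to the bottom edge of the image
        d = image_height - stop_line_row
        # D = K + beta * d
        return k_blind_zone + beta * d

    # illustrative values only: 1.8 m blind zone, 0.05 m per pixel
    distance_m = stop_line_distance(stop_line_row=380, image_height=480,
                                    k_blind_zone=1.8, beta=0.05)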
In implementation, if the stop line detection result obtained by the on-board terminal is the first detection label, the image data contains a stop line. To judge the distance between the vehicle body and the stop line, the on-board terminal further determines the distance between the intelligent vehicle and the stop line according to the inverse perspective mapping algorithm.
In practical applications, when the light is green, the intelligent vehicle does not need to decide whether to brake based on the distance to the stop line. Accordingly, if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, braking is performed when the stop line in the current image data is detected to be at the preset distance threshold ahead of the intelligent vehicle in its direction of travel. For example, while the intelligent vehicle is driving, the on-board terminal detects not only the stop line but also other information, such as the state of the traffic light and obstacles ahead. If the on-board terminal detects a stop line and also detects that the current traffic light is red or yellow, then, in the usual case, when the on-board terminal detects the stop line for the first time, the distance to the stop line is greater than the preset distance threshold. Thus, while the current traffic light is red or yellow, when the stop line in the current image data is detected to be at the preset distance threshold ahead of the intelligent vehicle in its direction of travel, the on-board terminal controls the braking mechanism of the intelligent vehicle to perform braking.
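A compact sketch of this braking decision follows; the threshold value and the way the light state and distance are obtained are illustrative assumptions rather than part of the patent.

    DISTANCE_THRESHOLD_M = 10.0  # assumed preset distance threshold

    def should_brake(stop_line_detected, light_state, distance_m,
                     threshold_m=DISTANCE_THRESHOLD_M):
        # stop_line_detected: True when the model returns the first detection label
        # light_state: 'red', 'yellow' or 'green', from a separate light detector
        if not stop_line_detected:
            return False
        if light_state not in ("red", "yellow"):
            return False  # green light: no braking decision is needed
        return distance_m <= threshold_m  # brake once the stop line reaches the threshold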
In implementation, if the stop line detection result is the second detection label indicating that no stop line is present, the on-board terminal does not calculate the distance to the stop line, but performs a new round of detection according to the above steps. Alternatively, if the stop line detection result is the first detection label indicating that a stop line is present but the current traffic light is in a green-light phase, the on-board terminal also does not calculate the distance to the stop line, and performs a new round of detection according to the above steps.
It should be noted that, since the required precision of the distance between the intelligent vehicle and the stop line is high, the calibration of the camera mounted on the front windshield of the intelligent vehicle is also very important. After the camera is mounted on the front windshield, it needs to be calibrated. For example, the camera may be calibrated by using Zhang Zhengyou's calibration method, and the calibration result is input into the on-board terminal. For example, the intrinsic parameters and extrinsic parameters of the camera may be input into the on-board terminal, where the extrinsic parameters mainly include the pitch angle, yaw angle, roll angle, and mounting height of the camera, and the intrinsic parameters mainly include the focal length of the camera.
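Zhang Zhengyou's method is what OpenCV's calibrateCamera implements; the sketch below, with an assumed checkerboard pattern, square size, and file layout, shows how the intrinsic matrix could be recovered. It is an illustration under those assumptions, not the patent's own procedure.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # assumed inner-corner count of the checkerboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calibration_images/*.png"):  # assumed directory
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    # Zhang's method: recovers the intrinsic matrix (focal length, principal point),
    # distortion coefficients, and per-view extrinsics (rotation, translation).
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)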
Based on the above, during stop line detection the driverless intelligent vehicle first acquires image data of the area ahead, then inputs the acquired image data into the pre-trained stop line detection model to obtain the stop line detection result. If the stop line detection result is the first detection label indicating that a stop line is present, the distance between the intelligent vehicle and the stop line is determined based on the inverse perspective mapping algorithm. Compared with detecting the stop line by Hough transform, detecting the stop line with this method shortens the detection time and improves the real-time performance of stop line detection.
In addition, this stop line detection method can not only quickly detect whether there is a stop line in front of the intelligent vehicle, but also, when a stop line is detected ahead, accurately calculate the distance between the intelligent vehicle and the stop line.
Optionally, the on-board terminal mainly determines whether a stop line is present by using the pre-trained stop line detection model. The training process of the stop line detection model may be performed according to the procedure shown in Fig. 2:
In step 201, the on-board terminal acquires a first group of image data containing stop lines and a second group of image data containing no stop lines. These data may be used to build a database for training.
The number of images in the first group of image data is set by a technician according to tests and theoretical calculation, and may be, for example, 1500. Likewise, the number of images in the second group of image data is also set by a technician according to tests and theoretical calculation. Generally, the second group of image data is considerably larger than the first group; for example, the number of images in the second group of image data may be two to five times that in the first group, such as 4500.
In implementation, the on-board terminal may acquire the first group of image data containing stop lines and the second group of image data containing no stop lines. Both groups may come from image data collected by the intelligent vehicle. Among the stop lines contained in the first group of image data, some are clear while others are worn to various degrees, and some are complete while others are incomplete.
In step 202, the on-board terminal acquires the pre-stored first detection label corresponding to the first group of image data and the pre-stored second detection label corresponding to the second group of image data.
In implementation, after obtaining the first group of image data and the second group of image data, a technician may determine, according to the content of the image data, the first detection label corresponding to each image in the first group of image data and the second detection label corresponding to each image in the second group of image data, and send each image and its corresponding detection label to the on-board terminal. That is, each image in the first group of image data and its corresponding first detection label are sent to the on-board terminal, and each image in the second group of image data and its corresponding second detection label are sent to the on-board terminal.
In step 203, the on-board terminal trains the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
In implementation, after obtaining each image in the first group of image data and its corresponding first detection label, as well as each image in the second group of image data and its corresponding second detection label, the on-board terminal may take each image in the first group together with its first detection label and each image in the second group together with its second detection label as training samples to train the stop line detection model. The training may be performed by using an AdaBoost algorithm based on Haar features.
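The patent does not name a concrete toolchain; as one possible realization only, OpenCV pairs Haar features with AdaBoost in its cascade classifiers, trained offline with the opencv_traincascade tool on the first group as positive samples and the second group as negative samples. A sketch of running such a cascade follows; the model file name, input frame, and detection parameters are assumptions.

    import cv2

    cascade = cv2.CascadeClassifier("stopline_cascade.xml")  # assumed, trained offline

    frame = cv2.imread("front_camera_frame.png")             # assumed input frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Rectangles around candidate stop-line regions; an empty result corresponds
    # to the second detection label (no stop line present).
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    stop_line_detected = len(hits) > 0  # first detection label when True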
Optionally, before training the stop line detection model, the on-board terminal usually needs to pre-process the image data in the samples. Accordingly, the on-board terminal may perform image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data, and then train the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
In implementation, the image pre-processing may be specification adjustment of the image data. Accordingly, the on-board terminal performs specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
The specification adjustment may include adjusting the size and the pixel resolution of the image data; for example, the images in the first group of image data and the second group of image data may be resized to a 20 × 20 pixel specification.
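A minimal sketch of this specification adjustment (the 20 × 20 size comes from the text; the grayscale conversion is an added assumption commonly used with Haar features):

    import cv2

    def preprocess(image_path, size=(20, 20)):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # assumed grayscale step
        return cv2.resize(gray, size)                 # adjust to the preset 20x20 specification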
In the embodiments of the present disclosure, during stop line detection the driverless intelligent vehicle first acquires image data of the area ahead, then inputs the acquired image data into the pre-trained stop line detection model to obtain the stop line detection result. If the stop line detection result is the first detection label indicating that a stop line is present, the distance between the intelligent vehicle and the stop line is determined based on the inverse perspective mapping algorithm. Compared with detecting the stop line by Hough transform, detecting the stop line with this method shortens the detection time and improves the real-time performance of stop line detection.
The present embodiment further provides an apparatus for detecting a stop line. The apparatus is applied to a driverless intelligent vehicle and, as shown in Fig. 3, comprises:
a first acquisition module 310 configured to acquire image data of the area in front of the intelligent vehicle;
a detection module 320 configured to input the image data into a pre-trained stop line detection model to obtain a stop line detection result; and
a ranging module 330 configured to determine, based on an inverse perspective mapping algorithm, the distance between the intelligent vehicle and the stop line if the stop line detection result is the first detection label indicating that a stop line is present.
Optionally, as shown in Fig. 4, the apparatus further comprises:
a second acquisition module 410 configured to acquire a first group of image data containing stop lines and a second group of image data containing no stop lines;
a third acquisition module 420 configured to acquire a pre-stored first detection label corresponding to the first group of image data and a pre-stored second detection label corresponding to the second group of image data; and
a training module 430 configured to train the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
Optionally, as shown in Fig. 5, the apparatus further comprises:
a pre-processing module 420' configured to perform image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data.
The training module 430 is specifically configured to:
train the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
Optionally, the pre-processing module 420' is specifically configured to:
perform specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
Optionally, as shown in Fig. 6, the apparatus further comprises:
a braking module 340 configured to, if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, perform braking when the stop line in the current image data is detected to be at the preset distance threshold ahead of the intelligent vehicle in its direction of travel.
In the embodiments of the present disclosure, during stop line detection the driverless intelligent vehicle first acquires image data of the area ahead, then inputs the acquired image data into the pre-trained stop line detection model to obtain the stop line detection result. If the stop line detection result is the first detection label indicating that a stop line is present, the distance between the intelligent vehicle and the stop line is determined based on the inverse perspective mapping algorithm. Compared with detecting the stop line by Hough transform, detecting the stop line with this apparatus shortens the detection time and improves the real-time performance of stop line detection.
It should be understood that when the apparatus for detecting a stop line provided by the above embodiment detects a stop line, the division into the above functional modules is used only as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus for detecting a stop line provided by the above embodiment belongs to the same concept as the method embodiment for detecting a stop line; for the specific implementation process, refer to the method embodiment, and details are not described herein again.
The present embodiment further provides a driverless intelligent vehicle, and the method by which the intelligent vehicle detects a stop line is as follows:
acquiring image data of the area in front of the intelligent vehicle;
inputting the image data into a pre-trained stop line detection model to obtain a stop line detection result; and
if the stop line detection result is a first detection label indicating that a stop line is present, determining the distance between the intelligent vehicle and the stop line based on an inverse perspective mapping algorithm.
Optionally, the method further comprises:
acquiring a first group of image data containing stop lines and a second group of image data containing no stop lines;
acquiring a pre-stored first detection label corresponding to the first group of image data and a pre-stored second detection label corresponding to the second group of image data; and
training the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
Optionally, the method further comprises:
performing image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data.
Training the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label comprises:
training the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
Optionally, performing image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data comprises:
performing specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
Optionally, the method further comprises:
if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, performing braking when the stop line in the current image data is detected to be at a preset distance threshold ahead of the intelligent vehicle in its direction of travel.
In the embodiments of the present disclosure, during stop line detection the driverless intelligent vehicle first acquires image data of the area ahead, then inputs the acquired image data into the pre-trained stop line detection model to obtain the stop line detection result. If the stop line detection result is the first detection label indicating that a stop line is present, the distance between the intelligent vehicle and the stop line is determined based on the inverse perspective mapping algorithm. Compared with detecting the stop line by Hough transform, detecting the stop line with this method shortens the detection time and improves the real-time performance of stop line detection.
The foregoing descriptions are merely preferred embodiments of the present disclosure and are not intended to limit the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (10)

1. A method for detecting a stop line, characterized in that the method is applied to a driverless intelligent vehicle and comprises:
acquiring image data of the area in front of the intelligent vehicle;
inputting the image data into a pre-trained stop line detection model to obtain a stop line detection result; and
if the stop line detection result is a first detection label indicating that a stop line is present, determining the distance between the intelligent vehicle and the stop line based on an inverse perspective mapping algorithm.
2. The method according to claim 1, characterized in that the method further comprises:
acquiring a first group of image data containing stop lines and a second group of image data containing no stop lines;
acquiring a pre-stored first detection label corresponding to the first group of image data and a pre-stored second detection label corresponding to the second group of image data; and
training the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
3. The method according to claim 2, characterized in that the method further comprises:
performing image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data;
wherein training the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label comprises:
training the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
4. The method according to claim 3, characterized in that performing image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data comprises:
performing specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, performing braking when the stop line in the current image data is detected to be at a preset distance threshold ahead of the intelligent vehicle in its direction of travel.
6. An apparatus for detecting a stop line, characterized in that the apparatus is applied to a driverless intelligent vehicle and comprises:
a first acquisition module configured to acquire image data of the area in front of the intelligent vehicle;
a detection module configured to input the image data into a pre-trained stop line detection model to obtain a stop line detection result; and
a ranging module configured to determine, based on an inverse perspective mapping algorithm, the distance between the intelligent vehicle and the stop line if the stop line detection result is a first detection label indicating that a stop line is present.
7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a second acquisition module configured to acquire a first group of image data containing stop lines and a second group of image data containing no stop lines;
a third acquisition module configured to acquire a pre-stored first detection label corresponding to the first group of image data and a pre-stored second detection label corresponding to the second group of image data; and
a training module configured to train the stop line detection model by using, as training samples, the first group of image data together with the corresponding first detection label and the second group of image data together with the corresponding second detection label.
8. The apparatus according to claim 7, characterized in that the apparatus further comprises:
a pre-processing module configured to perform image pre-processing on the first group of image data and the second group of image data to obtain the pre-processed first group of image data and the pre-processed second group of image data;
wherein the training module is specifically configured to:
train the stop line detection model by using, as training samples, the pre-processed first group of image data together with the corresponding first detection label and the pre-processed second group of image data together with the corresponding second detection label.
9. The apparatus according to claim 8, characterized in that the pre-processing module is specifically configured to:
perform specification-adjustment image pre-processing on the first group of image data and the second group of image data, so that the specification of the first group of image data and the second group of image data is adjusted to a preset specification.
10. The apparatus according to any one of claims 6 to 9, characterized in that the apparatus further comprises:
a braking module configured to, if the stop line detection result is the first detection label indicating that a stop line is present and the current traffic light is detected to be in a yellow-light or red-light phase, perform braking when the stop line in the current image data is detected to be at the preset distance threshold ahead of the intelligent vehicle in its direction of travel.
CN201810895175.1A 2018-08-08 2018-08-08 The method and apparatus for detecting stop line Pending CN109165579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810895175.1A CN109165579A (en) 2018-08-08 2018-08-08 The method and apparatus for detecting stop line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810895175.1A CN109165579A (en) 2018-08-08 2018-08-08 The method and apparatus for detecting stop line

Publications (1)

Publication Number Publication Date
CN109165579A (en) 2019-01-08

Family

ID=64895137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810895175.1A Pending CN109165579A (en) 2018-08-08 2018-08-08 The method and apparatus for detecting stop line

Country Status (1)

Country Link
CN (1) CN109165579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333414A (en) * 2020-09-29 2022-04-12 丰田自动车株式会社 Parking yield detection device, parking yield detection system, and recording medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682292A (en) * 2012-05-10 2012-09-19 清华大学 Method based on monocular vision for detecting and roughly positioning edge of road
CN103544844A (en) * 2013-10-12 2014-01-29 浙江吉利控股集团有限公司 Driver assistance method and system for avoiding violation of traffic lights
CN105930830A (en) * 2016-05-18 2016-09-07 大连理工大学 Road surface traffic sign recognition method based on convolution neural network
CN106156723A (en) * 2016-05-23 2016-11-23 北京联合大学 A kind of crossing fine positioning method of view-based access control model
CN106710271A (en) * 2016-12-28 2017-05-24 深圳市赛格导航科技股份有限公司 Automobile driving assistance method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190108